WorldWideScience

Sample records for component analysis applied

  1. Generalized structured component analysis: a component-based approach to structural equation modeling

    CERN Document Server

    Hwang, Heungsun

    2014-01-01

    Winner of the 2015 Sugiyama Meiko Award (Publication Award) of the Behaviormetric Society of Japan. Developed by the authors, generalized structured component analysis is an alternative to two longstanding approaches to structural equation modeling: covariance structure analysis and partial least squares path modeling. Generalized structured component analysis allows researchers to evaluate the adequacy of a model as a whole, compare a model to alternative specifications, and conduct complex analyses in a straightforward manner. Generalized Structured Component Analysis: A Component-Based Approach to Structural Equation Modeling provides a detailed account of this novel statistical methodology and its various extensions. The authors present the theoretical underpinnings of generalized structured component analysis and demonstrate how it can be applied to various empirical examples. The book enables quantitative methodologists, applied researchers, and practitioners to grasp the basic concepts behind this new a...

  2. Comparison of common components analysis with principal components analysis and independent components analysis: Application to SPME-GC-MS volatolomic signatures.

    Science.gov (United States)

    Bouhlel, Jihéne; Jouan-Rimbaud Bouveresse, Delphine; Abouelkaram, Said; Baéza, Elisabeth; Jondreville, Catherine; Travel, Angélique; Ratel, Jérémy; Engel, Erwan; Rutledge, Douglas N

    2018-02-01

    The aim of this work is to compare a novel exploratory chemometrics method, Common Components Analysis (CCA), with Principal Components Analysis (PCA) and Independent Components Analysis (ICA). CCA consists of adapting the multi-block statistical method known as Common Components and Specific Weights Analysis (CCSWA or ComDim) by applying it to a single data matrix, with one variable per block. As an application, the three methods were applied to SPME-GC-MS volatolomic signatures of livers in an attempt to reveal volatile organic compound (VOC) markers of chicken exposure to different types of micropollutants. An application of CCA to the initial SPME-GC-MS data revealed a drift in the sample Scores along CC2, as a function of injection order, probably resulting from time-related evolution in the instrument. This drift was eliminated by orthogonalization of the data set with respect to CC2, and the resulting data were used as the orthogonalized data input into each of the three methods. Since the first step in CCA is to norm-scale all the variables, preliminary data scaling has no effect on the results, so CCA was applied only to orthogonalized SPME-GC-MS data, while PCA and ICA were applied to the "orthogonalized", "orthogonalized and Pareto-scaled", and "orthogonalized and autoscaled" data. The comparison showed that PCA results were highly dependent on the scaling of variables, contrary to ICA, where the data scaling did not have a strong influence. Nevertheless, for both PCA and ICA the clearest separations of exposed groups were obtained after autoscaling of variables. The main part of this work was to compare the CCA results using the orthogonalized data with those obtained with PCA and ICA applied to orthogonalized and autoscaled variables. The clearest separations of exposed chicken groups were obtained by CCA. CCA Loadings also clearly identified the variables contributing most to the Common Components giving separations. The PCA Loadings did not
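
    The scaling comparison described above can be sketched in a few lines. The following is a minimal illustration, not the authors' code: the data matrix and all parameter choices are invented stand-ins for the SPME-GC-MS peak table, and CCA itself is not available in scikit-learn, so only the PCA/ICA side of the comparison is shown.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    # Hypothetical stand-in for the SPME-GC-MS peak table: rows = liver samples,
    # columns = VOC variables (all values synthetic).
    rng = np.random.default_rng(0)
    X = np.abs(rng.normal(size=(60, 120)))

    def pareto_scale(X):
        # Center each variable, then divide by the square root of its standard deviation.
        Xc = X - X.mean(axis=0)
        return Xc / np.sqrt(X.std(axis=0, ddof=1))

    def autoscale(X):
        # Center each variable and divide by its standard deviation (unit variance).
        Xc = X - X.mean(axis=0)
        return Xc / X.std(axis=0, ddof=1)

    for name, Xs in [("pareto", pareto_scale(X)), ("auto", autoscale(X))]:
        scores_pca = PCA(n_components=2).fit_transform(Xs)
        scores_ica = FastICA(n_components=2, random_state=0).fit_transform(Xs)
        print(name, scores_pca.shape, scores_ica.shape)
    ```

    Plotting the score matrices for the two scalings is how one would probe the abstract's point that PCA results depend strongly on the scaling choice while ICA is less sensitive to it.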

  3. Creative design-by-analysis solutions applied to high-temperature components

    International Nuclear Information System (INIS)

    Dhalla, A.K.

    1993-01-01

    Elevated temperature design has evolved over the last two decades from design-by-formula philosophy of the ASME Boiler and Pressure Vessel Code, Sections I and VIII (Division 1), to the design-by-analysis philosophy of Section III, Code Case N-47. The benefits of design-by-analysis procedures, which were developed under a US-DOE-sponsored high-temperature structural design (HTSD) program, are illustrated in the paper through five design examples taken from two U.S. liquid metal reactor (LMR) plants. Emphasis in the paper is placed upon the use of a detailed, nonlinear finite element analysis method to understand the structural response and to suggest design optimization so as to comply with Code Case N-47 criteria. A detailed analysis is cost-effective, if selectively used, to qualify an LMR component for service when long-lead-time structural forgings, procured based upon simplified preliminary analysis, do not meet the design criteria, or the operational loads are increased after the components have been fabricated. In the future, the overall costs of a detailed analysis will be reduced even further with the availability of finite element software used on workstations or PCs.

  4. Independent component analysis applied to long bunch beams in the Los Alamos Proton Storage Ring

    Science.gov (United States)

    Kolski, Jeffrey S.; Macek, Robert J.; McCrady, Rodney C.; Pang, Xiaoying

    2012-11-01

    Independent component analysis (ICA) is a powerful blind source separation (BSS) method. Compared to the typical BSS method, principal component analysis, ICA is more robust to noise, coupling, and nonlinearity. The conventional ICA application to turn-by-turn position data from multiple beam position monitors (BPMs) yields information about cross-BPM correlations. With this scheme, multi-BPM ICA has been used to measure the transverse betatron phase and amplitude functions, dispersion function, linear coupling, sextupole strength, and nonlinear beam dynamics. We apply ICA in a new way to slices along the bunch revealing correlations of particle motion within the beam bunch. We digitize beam signals of the long bunch at the Los Alamos Proton Storage Ring with a single device (BPM or fast current monitor) for an entire injection-extraction cycle. ICA of the digitized beam signals results in source signals, which we identify to describe varying betatron motion along the bunch, locations of transverse resonances along the bunch, measurement noise, characteristic frequencies of the digitizing oscilloscopes, and longitudinal beam structure.

  5. Independent component analysis applied to long bunch beams in the Los Alamos Proton Storage Ring

    Directory of Open Access Journals (Sweden)

    Jeffrey S. Kolski

    2012-11-01

    Full Text Available Independent component analysis (ICA) is a powerful blind source separation (BSS) method. Compared to the typical BSS method, principal component analysis, ICA is more robust to noise, coupling, and nonlinearity. The conventional ICA application to turn-by-turn position data from multiple beam position monitors (BPMs) yields information about cross-BPM correlations. With this scheme, multi-BPM ICA has been used to measure the transverse betatron phase and amplitude functions, dispersion function, linear coupling, sextupole strength, and nonlinear beam dynamics. We apply ICA in a new way to slices along the bunch revealing correlations of particle motion within the beam bunch. We digitize beam signals of the long bunch at the Los Alamos Proton Storage Ring with a single device (BPM or fast current monitor) for an entire injection-extraction cycle. ICA of the digitized beam signals results in source signals, which we identify to describe varying betatron motion along the bunch, locations of transverse resonances along the bunch, measurement noise, characteristic frequencies of the digitizing oscilloscopes, and longitudinal beam structure.
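
    The slice-by-slice unmixing described in records 4 and 5 can be mimicked with synthetic signals. The following is a rough sketch under invented assumptions: three made-up sources and a random mixing matrix stand in for the digitized slice signals, so it shows the ICA step only, not the PSR instrumentation.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    # Hypothetical stand-in for digitized beam signals: each row is the signal of one
    # longitudinal slice over an injection-extraction cycle (samples in columns).
    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 2000)
    betatron = np.sin(2 * np.pi * 180 * t)          # betatron-like oscillation
    structure = np.sign(np.sin(2 * np.pi * 3 * t))  # slow longitudinal structure
    noise = rng.normal(size=t.size)
    mixing = rng.normal(size=(16, 3))               # 16 slices mixing 3 underlying sources
    signals = mixing @ np.vstack([betatron, structure, noise])

    # ICA treats each slice as one mixed observation and unmixes shared source signals.
    sources = FastICA(n_components=3, random_state=0).fit_transform(signals.T)
    print(sources.shape)  # (2000, 3): one recovered source signal per column
    ```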

  6. Independent component analysis-based algorithm for automatic identification of Raman spectra applied to artistic pigments and pigment mixtures.

    Science.gov (United States)

    González-Vidal, Juan José; Pérez-Pueyo, Rosanna; Soneira, María José; Ruiz-Moreno, Sergio

    2015-03-01

    A new method has been developed to automatically identify Raman spectra, whether they correspond to single- or multicomponent spectra. The method requires no user input or judgment. There are thus no parameters to be tweaked. Furthermore, it provides a reliability factor on the resulting identification, with the aim of becoming a useful support tool for the analyst in the decision-making process. The method relies on the multivariate techniques of principal component analysis (PCA) and independent component analysis (ICA), and on some metrics. It has been developed for the application of automated spectral analysis, where the analyzed spectrum is provided by a spectrometer that has no previous knowledge of the analyzed sample, meaning that the number of components in the sample is unknown. We describe the details of this method and demonstrate its efficiency by identifying both simulated spectra and real spectra. The method has been applied to artistic pigment identification. The reliable and consistent results that were obtained make the methodology a helpful tool suitable for the identification of pigments in artwork or in paint in general.

  7. Applying independent component analysis to clinical fMRI at 7 T

    Directory of Open Access Journals (Sweden)

    Simon Daniel Robinson

    2013-09-01

    Full Text Available Increased BOLD sensitivity at 7 T offers the possibility to increase the reliability of fMRI, but ultra-high field is also associated with an increase in artifacts related to head motion, Nyquist ghosting and parallel imaging reconstruction errors. In this study, the ability of Independent Component Analysis (ICA) to separate activation from these artifacts was assessed in a 7 T study of neurological patients performing chin and hand motor tasks. ICA was able to isolate primary motor activation with negligible contamination by motion effects. The results of General Linear Model (GLM) analysis of these data were, in contrast, heavily contaminated by motion. Secondary motor areas, basal ganglia and thalamus involvement were apparent in ICA results, but there was low capability to isolate activation in the same brain regions in the GLM analysis, indicating that ICA was more sensitive as well as more specific. A method was developed to simplify the assessment of the large number of independent components. Task-related activation components could be automatically identified via intuitive and effective features. These findings demonstrate that ICA is a practical and sensitive analysis approach in high field fMRI studies, particularly where motion is evoked. Promising applications of ICA in clinical fMRI include presurgical planning and the study of pathologies affecting subcortical brain areas.

  8. Model reduction by weighted Component Cost Analysis

    Science.gov (United States)

    Kim, Jae H.; Skelton, Robert E.

    1990-01-01

    Component Cost Analysis considers any given system driven by a white noise process as an interconnection of different components, and assigns a metric called 'component cost' to each component. These component costs measure the contribution of each component to a predefined quadratic cost function. A reduced-order model of the given system may be obtained by deleting those components that have the smallest component costs. The theory of Component Cost Analysis is extended to include finite-bandwidth colored noises. The results also apply when actuators have dynamics of their own. Closed-form analytical expressions of component costs are also derived for a mechanical system described by its modal data. This is very useful for computing the modal costs of very high order systems. A numerical example for the MINIMAST system is presented.

  9. Component evaluation testing and analysis algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    Hart, Darren M.; Merchant, Bion John

    2011-10-01

    The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.

  10. Concept analysis of culture applied to nursing.

    Science.gov (United States)

    Marzilli, Colleen

    2014-01-01

    Culture is an important concept, especially when applied to nursing. A concept analysis of culture is essential to understanding the meaning of the word. This article applies Rodgers' (2000) concept analysis template and provides a definition of the word culture as it applies to nursing practice. This article supplies examples of the concept of culture to aid the reader in understanding its application to nursing and includes a case study demonstrating components of culture that must be respected and included when providing health care.

  11. Laser-induced breakdown spectroscopy applied to the characterization of rock by support vector machine combined with principal component analysis

    International Nuclear Information System (INIS)

    Yang Hong-Xing; Fu Hong-Bo; Wang Hua-Dong; Jia Jun-Wei; Dong Feng-Zhong; Sigrist, Markus W

    2016-01-01

    Laser-induced breakdown spectroscopy (LIBS) is a versatile tool for both qualitative and quantitative analysis. In this paper, LIBS combined with principal component analysis (PCA) and support vector machine (SVM) is applied to rock analysis. Fourteen emission lines including Fe, Mg, Ca, Al, Si, and Ti are selected as analysis lines. A good accuracy (91.38% for the real rock) is achieved by using SVM to analyze the spectroscopic peak area data processed by PCA. Combining PCA and SVM not only reduces noise and dimensionality, which improves the efficiency of the program, but also solves the problem of linear inseparability. By this method, the ability of LIBS to classify rock is validated. (paper)
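
    A hedged sketch of the PCA-then-SVM pipeline the abstract describes. The fourteen emission-line peak areas and the rock classes are synthetic stand-ins, and the class count, kernel and component number here are illustrative choices, not the paper's settings.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    # Hypothetical stand-in for LIBS data: rows = spectra, columns = 14 emission-line peak areas.
    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 14))
    y = rng.integers(0, 4, size=200)  # four illustrative rock classes

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # PCA reduces noise and dimensionality before the SVM decision stage.
    model = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
    model.fit(X_tr, y_tr)
    print(round(model.score(X_te, y_te), 3))  # accuracy on held-out spectra
    ```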

  12. Principal components analysis in clinical studies.

    Science.gov (United States)

    Zhang, Zhongheng; Castelló, Adela

    2017-09-01

    In multivariate analysis, independent variables are usually correlated to each other, which can introduce multicollinearity in the regression models. One approach to solve this problem is to apply principal components analysis (PCA) over these variables. This method uses orthogonal transformation to represent sets of potentially correlated variables with principal components (PC) that are linearly uncorrelated. PCs are ordered so that the first PC has the largest possible variance, and only some components are selected to represent the correlated variables. As a result, the dimension of the variable space is reduced. This tutorial illustrates how to perform PCA in the R environment, using as an example a simulated dataset in which two PCs are responsible for the majority of the variance in the data. Furthermore, the visualization of PCA results is highlighted.

  13. Analysis Method for Integrating Components of Product

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jun Ho [Inzest Co. Ltd, Seoul (Korea, Republic of)]; Lee, Kun Sang [Kookmin Univ., Seoul (Korea, Republic of)]

    2017-04-15

    This paper presents methods for integrating the parts that constitute a product. A new relation function concept and its structure are introduced to analyze the relationships among component parts. The relation function carries three types of information, which can be used to establish a relation function structure. A relation function structure based on the analysis criteria was established to analyze and present the data, and the priority components determined by the analysis criteria can be integrated. The analysis criteria were divided based on their number and orientation, as well as their direct or indirect characteristics. The paper presents a design algorithm for component integration, which was applied to actual products to integrate their internal components. The proposed algorithm was then used in a study to improve bicycle brake discs; as a result, an improved product reflecting the relation function structure was created.

  14. Analysis Method for Integrating Components of Product

    International Nuclear Information System (INIS)

    Choi, Jun Ho; Lee, Kun Sang

    2017-01-01

    This paper presents methods for integrating the parts that constitute a product. A new relation function concept and its structure are introduced to analyze the relationships among component parts. The relation function carries three types of information, which can be used to establish a relation function structure. A relation function structure based on the analysis criteria was established to analyze and present the data, and the priority components determined by the analysis criteria can be integrated. The analysis criteria were divided based on their number and orientation, as well as their direct or indirect characteristics. The paper presents a design algorithm for component integration, which was applied to actual products to integrate their internal components. The proposed algorithm was then used in a study to improve bicycle brake discs; as a result, an improved product reflecting the relation function structure was created.

  15. Applying independent component analysis to clinical fMRI at 7 T

    OpenAIRE

    Simon Daniel Robinson; Veronika Schöpf; Pedro Cardoso; Alexander Geissler; Florian Ph.S. Fischmeister; Moritz Wurnig; Siegfried Trattnig; Roland Beisteiner

    2013-01-01

    Increased BOLD sensitivity at 7 T offers the possibility to increase the reliability of fMRI, but ultra-high field is also associated with an increase in artifacts related to head motion, Nyquist ghosting and parallel imaging reconstruction errors. In this study, the ability of Independent Component Analysis (ICA) to separate activation from these artifacts was assessed in a 7 T study of neurological patients performing chin and hand motor tasks. ICA was able to isolate primary motor activati...

  16. Applying independent component analysis to clinical fMRI at 7 T

    OpenAIRE

    Robinson, Simon Daniel; Schöpf, Veronika; Cardoso, Pedro; Geissler, Alexander; Fischmeister, Florian P S; Wurnig, Moritz; Trattnig, Siegfried; Beisteiner, Roland

    2013-01-01

    Increased BOLD sensitivity at 7 T offers the possibility to increase the reliability of fMRI, but ultra-high field is also associated with an increase in artifacts related to head motion, Nyquist ghosting, and parallel imaging reconstruction errors. In this study, the ability of independent component analysis (ICA) to separate activation from these artifacts was assessed in a 7 T study of neurological patients performing chin and hand motor tasks. ICA was able to isolate primary motor activat...

  17. Lessons learned in applying function analysis

    International Nuclear Information System (INIS)

    Mitchel, G.R.; Davey, E.; Basso, R.

    2001-01-01

    This paper summarizes the lessons learned in undertaking and applying function analysis, based on the recent experience of utility, AECL and international design and assessment projects. Function analysis is an analytical technique that can be used to characterize and assess the functions of a system. It is widely recognized as an essential component of a 'systematic' approach to design, one that integrates operational and user requirements into the standard design process. (author)

  18. Applying Different Independent Component Analysis Algorithms and Support Vector Regression for IT Chain Store Sales Forecasting

    Science.gov (United States)

    Dai, Wensheng

    2014-01-01

    Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales since an IT chain store has many branches. Integrating a feature extraction method and a prediction tool, such as support vector regression (SVR), is a useful approach for constructing an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique and has been widely applied to deal with various forecasting problems. But, up to now, only the basic ICA method (i.e., the temporal ICA model) was applied to the sales forecasting problem. In this paper, we utilize three different ICA methods including spatial ICA (sICA), temporal ICA (tICA), and spatiotemporal ICA (stICA) to extract features from the sales data and compare their performance in sales forecasting of IT chain store. Experimental results from real sales data show that the sales forecasting scheme integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data and the extracted features can improve the prediction performance of SVR for sales forecasting. PMID:25165740

  19. Applying different independent component analysis algorithms and support vector regression for IT chain store sales forecasting.

    Science.gov (United States)

    Dai, Wensheng; Wu, Jui-Yu; Lu, Chi-Jie

    2014-01-01

    Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales since an IT chain store has many branches. Integrating a feature extraction method and a prediction tool, such as support vector regression (SVR), is a useful approach for constructing an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique and has been widely applied to deal with various forecasting problems. But, up to now, only the basic ICA method (i.e., the temporal ICA model) was applied to the sales forecasting problem. In this paper, we utilize three different ICA methods including spatial ICA (sICA), temporal ICA (tICA), and spatiotemporal ICA (stICA) to extract features from the sales data and compare their performance in sales forecasting of IT chain store. Experimental results from real sales data show that the sales forecasting scheme integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data and the extracted features can improve the prediction performance of SVR for sales forecasting.

  20. Applying Different Independent Component Analysis Algorithms and Support Vector Regression for IT Chain Store Sales Forecasting

    Directory of Open Access Journals (Sweden)

    Wensheng Dai

    2014-01-01

    Full Text Available Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales since an IT chain store has many branches. Integrating a feature extraction method and a prediction tool, such as support vector regression (SVR), is a useful approach for constructing an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique and has been widely applied to deal with various forecasting problems. But, up to now, only the basic ICA method (i.e., the temporal ICA model) was applied to the sales forecasting problem. In this paper, we utilize three different ICA methods including spatial ICA (sICA), temporal ICA (tICA), and spatiotemporal ICA (stICA) to extract features from the sales data and compare their performance in sales forecasting of IT chain store. Experimental results from real sales data show that the sales forecasting scheme integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data and the extracted features can improve the prediction performance of SVR for sales forecasting.
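
    The ICA-plus-SVR wiring described in records 18-20 can be outlined as follows. This is a toy sketch: the sales matrix and target are synthetic, only a temporal-ICA-style factorization is shown (scikit-learn has no spatiotemporal ICA), and the hyperparameters are arbitrary.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVR

    # Hypothetical branch-sales matrix: rows = weeks, columns = branches (synthetic).
    rng = np.random.default_rng(4)
    sales = rng.normal(loc=100, scale=10, size=(156, 12))
    target = sales.sum(axis=1) + rng.normal(scale=5, size=156)  # quantity to forecast

    # Temporal-ICA-style step: unmix the branch series into independent feature series.
    features = FastICA(n_components=4, random_state=0).fit_transform(sales)

    # Keep time order when splitting, then fit the SVR prediction stage.
    X_tr, X_te, y_tr, y_te = train_test_split(features, target, shuffle=False)
    svr = SVR(kernel="rbf", C=10.0).fit(X_tr, y_tr)
    print(round(svr.score(X_te, y_te), 3))  # R^2 on the held-out (most recent) weeks
    ```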

  21. APPLYING PRINCIPAL COMPONENT ANALYSIS, MULTILAYER PERCEPTRON AND SELF-ORGANIZING MAPS FOR OPTICAL CHARACTER RECOGNITION

    Directory of Open Access Journals (Sweden)

    Khuat Thanh Tung

    2016-11-01

    Full Text Available Optical character recognition plays an important role in data storage and data mining as the number of documents stored as images increases. Effective ways of converting images of typewritten or printed text into machine-encoded text are needed to support information handling. In this paper, therefore, techniques for converting images into editable text, such as principal component analysis, multilayer perceptron networks, self-organizing maps, and an improved multilayer neural network using principal component analysis, are evaluated experimentally. The obtained results indicate the effectiveness and feasibility of the proposed methods.
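
    A minimal sketch of the "PCA + multilayer network" variant from the abstract, using the small 8x8 digits set bundled with scikit-learn as a stand-in for an OCR corpus; the component count and network settings are illustrative, not the paper's.

    ```python
    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import train_test_split

    # Small stand-in for an OCR dataset: 8x8 handwritten digits shipped with scikit-learn.
    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Project pixels onto principal components first, then classify with a
    # multilayer perceptron, mirroring the "improved multilayer network" idea.
    model = make_pipeline(PCA(n_components=20), MLPClassifier(max_iter=500, random_state=0))
    model.fit(X_tr, y_tr)
    print(round(model.score(X_te, y_te), 3))
    ```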

  22. Signal-dependent independent component analysis by tunable mother wavelets

    International Nuclear Information System (INIS)

    Seo, Kyung Ho

    2006-02-01

    The objective of this study is to improve standard independent component analysis when applied to real-world signals. Independent component analysis starts from the assumption that signals from different physical sources are statistically independent. But real-world signals such as EEG, ECG, MEG, and fMRI signals are not perfectly statistically independent. By definition, standard independent component analysis algorithms are not able to estimate statistically dependent sources, that is, when the assumption of independence does not hold. Therefore, a preprocessing stage is needed before independent component analysis. This paper starts from the simple intuition that source signals wavelet-transformed with a 'well-tuned' mother wavelet will be sufficiently simplified that source separation will show better results. The tuning process between the source signal and a tunable mother wavelet was executed by the correlation coefficient method. The gamma component of a raw EEG signal was set as the target signal, and wavelet transforms were executed with the tuned mother wavelet and standard mother wavelets. Simulation results for these wavelets are shown.
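
    The tuning idea — score candidate mother wavelets by the correlation between their filtered output and a target component — can be sketched with PyWavelets. Everything here is an invented toy: the signal, the gamma-band target, the candidate list (standard discrete wavelets rather than the paper's tunable mother wavelet), and the band/level bookkeeping.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    # Hypothetical EEG-like signal with a ~40 Hz (gamma-band) target component.
    rng = np.random.default_rng(5)
    fs = 256
    t = np.arange(0, 4, 1 / fs)
    target = np.sin(2 * np.pi * 40 * t)
    signal = target + rng.normal(scale=0.8, size=t.size)

    def band_reconstruction(sig, wavelet, level=3):
        # Keep only the level-2 detail coefficients (roughly 32-64 Hz at fs=256)
        # and reconstruct the corresponding band-limited signal.
        coeffs = pywt.wavedec(sig, wavelet, level=level)
        kept = [np.zeros_like(c) for c in coeffs]
        kept[2] = coeffs[2]
        return pywt.waverec(kept, wavelet)[: sig.size]

    # "Tuning": pick the mother wavelet whose filtered output correlates best
    # with the target component, as in the correlation coefficient method.
    candidates = ["db2", "db4", "sym5", "coif3"]
    corrs = {w: np.corrcoef(band_reconstruction(signal, w), target)[0, 1] for w in candidates}
    print(max(corrs, key=corrs.get), corrs)
    ```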

  23. Principal Component Analysis as an Efficient Performance ...

    African Journals Online (AJOL)

    This paper uses principal component analysis (PCA) to examine the possibility of using a few explanatory variables (X's) to explain the variation in Y. PCA was applied to assess the performance of students at Abia State Polytechnic, Aba, Nigeria. This was done by estimating the coefficients of eight explanatory variables in a ...

  24. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.jp; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro [Department of Radiation Oncology and Image-applied Therapy, Kyoto University, 54 Shogoin-Kawaharacho, Sakyo, Kyoto 606-8507 (Japan)

    2016-09-15

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.

  25. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    International Nuclear Information System (INIS)

    Matsuo, Yukinori; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-01-01

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
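
    The ANOVA computation behind the note can be written out directly for the balanced one-factor case. A sketch with fabricated setup-error numbers (the patient and fraction counts and the scale parameters are all invented); confidence intervals are omitted.

    ```python
    import numpy as np

    # Balanced one-factor random-effects layout: m patients, n fractions each.
    # All numbers below are fabricated setup errors (mm), purely to show the arithmetic.
    rng = np.random.default_rng(6)
    m, n = 10, 5
    patient_mean = rng.normal(scale=2.0, size=(m, 1))            # systematic part
    errors = patient_mean + rng.normal(scale=1.0, size=(m, n))   # plus random part

    grand = errors.mean()
    row_means = errors.mean(axis=1)
    ms_between = n * ((row_means - grand) ** 2).sum() / (m - 1)
    ms_within = ((errors - row_means[:, None]) ** 2).sum() / (m * (n - 1))

    var_random = ms_within                                   # random component sigma^2
    var_systematic = max((ms_between - ms_within) / n, 0.0)  # systematic component Sigma^2
    print(round(float(np.sqrt(var_systematic)), 2), round(float(np.sqrt(var_random)), 2))
    ```

    The subtraction in the systematic-error estimator is what distinguishes the ANOVA approach from conventional estimators, which the note reports as overestimating the systematic component in hypofractionated settings.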

  26. Independent component analysis for automatic note extraction from musical trills

    Science.gov (United States)

    Brown, Judith C.; Smaragdis, Paris

    2004-05-01

    The method of principal component analysis, which is based on second-order statistics (or linear independence), has long been used for redundancy reduction of audio data. The more recent technique of independent component analysis, enforcing much stricter statistical criteria based on higher-order statistical independence, is introduced and shown to be far superior in separating independent musical sources. This theory has been applied to piano trills and a database of trill rates was assembled from experiments with a computer-driven piano, recordings of a professional pianist, and commercially available compact disks. The method of independent component analysis has thus been shown to be an outstanding, effective means of automatically extracting interesting musical information from a sea of redundant data.

  27. Independent component analysis based filtering for penumbral imaging

    International Nuclear Information System (INIS)

    Chen Yenwei; Han Xianhua; Nozaki, Shinya

    2004-01-01

    We propose a filtering method based on independent component analysis (ICA) for Poisson noise reduction. In the proposed filtering, the image is first transformed to the ICA domain and the noise components are then removed by soft thresholding (shrinkage). The proposed filter, used as a preprocessing step before reconstruction, has been successfully applied to penumbral imaging. Both simulation and experimental results show that the reconstructed image is dramatically improved in comparison to that obtained without the noise-removing filter.
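
    The shrinkage step is the easiest part to show concretely. A minimal sketch of soft thresholding on arbitrary coefficients; in the paper these would be projections of image data onto an ICA basis, which is not reproduced here.

    ```python
    import numpy as np

    def soft_threshold(coeffs, t):
        # Shrink transform-domain coefficients toward zero: small (noise-dominated)
        # coefficients are removed, large (signal) ones are kept with reduced magnitude.
        return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

    # Illustration on arbitrary coefficient values with an arbitrary threshold.
    coeffs = np.array([-3.0, -0.4, 0.1, 0.8, 2.5])
    print(soft_threshold(coeffs, t=0.5))  # large entries shrunk, small entries zeroed
    ```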

  28. Simplified seismic analysis applied to structures, systems and components with limited radioactive inventories

    International Nuclear Information System (INIS)

    Stevenson, J.D.

    1989-01-01

    This paper presents a review of the current status of simplified methods of seismic design and analysis applicable to nuclear facility structures, systems and components important to public health and safety. In particular, the International Atomic Energy Agency IAEA-TECDOC-348 procedure for structures and the bounding spectra concept for equipment, as developed by the Seismic Qualification Utility Group and the Electric Power Research Institute, are discussed in some detail.

  29. Independent component analysis of dynamic contrast-enhanced computed tomography images

    Energy Technology Data Exchange (ETDEWEB)

    Koh, T S [School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Ave, Singapore 639798 (Singapore)]; Yang, X [School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Ave, Singapore 639798 (Singapore)]; Bisdas, S [Department of Diagnostic and Interventional Radiology, Johann Wolfgang Goethe University Hospital, Theodor-Stern-Kai 7, D-60590 Frankfurt (Germany)]; Lim, C C T [Department of Neuroradiology, National Neuroscience Institute, 11 Jalan Tan Tock Seng, Singapore 308433 (Singapore)]

    2006-10-07

    Independent component analysis (ICA) was applied to dynamic contrast-enhanced computed tomography images of cerebral tumours to extract spatial component maps of the underlying vascular structures, which correspond to different haemodynamic phases as depicted by the passage of the contrast medium. The locations of arteries, veins and tumours can be separately identified on these spatial component maps. As the contrast enhancement behaviour of the cerebral tumour differs from that of normal tissues, ICA yields a tumour component map that reveals the location and extent of the tumour. Tumour outlines can be generated from the tumour component maps with relatively simple segmentation methods. (note)

  30. Cloud Masking for Remotely Sensed Data Using Spectral and Principal Components Analysis

    Directory of Open Access Journals (Sweden)

    A. Ahmad

    2012-06-01

    Full Text Available Two methods of cloud masking tuned to tropical conditions have been developed, based on spectral analysis and Principal Components Analysis (PCA) of Moderate Resolution Imaging Spectroradiometer (MODIS) data. In the spectral approach, thresholds were applied to four reflective bands (1, 2, 3, and 4), three thermal bands (29, 31 and 32), the band 2/band 1 ratio, and the difference between bands 29 and 31 in order to detect clouds. The PCA approach applied a threshold to the first principal component derived from the seven quantities used for spectral analysis. Cloud detections were compared with the standard MODIS cloud mask, and their accuracy was assessed using reference images and geographical information on the study area.
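
    The PCA branch of the masking scheme reduces to "project each pixel's seven quantities onto PC1 and threshold". A sketch with random stand-in pixels and an arbitrary percentile cutoff; the paper's actual threshold value is not reproduced here.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Hypothetical per-pixel feature table standing in for the seven quantities
    # (four reflective bands, three thermal-derived quantities), all synthetic.
    rng = np.random.default_rng(7)
    pixels = rng.normal(size=(10000, 7))

    # PCA approach: threshold the first principal component per pixel.
    pc1 = PCA(n_components=1).fit_transform(pixels).ravel()
    threshold = np.percentile(pc1, 90)  # illustrative cutoff, not the paper's value
    cloud_mask = pc1 > threshold
    print(cloud_mask.mean())  # fraction of pixels flagged as cloud
    ```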

  31. Learning Algorithms for Audio and Video Processing: Independent Component Analysis and Support Vector Machine Based Approaches

    National Research Council Canada - National Science Library

    Qi, Yuan

    2000-01-01

    In this thesis, we propose two new machine learning schemes, a subband-based Independent Component Analysis scheme and a hybrid Independent Component Analysis/Support Vector Machine scheme, and apply...

  32. Statistical techniques applied to aerial radiometric surveys (STAARS): principal components analysis user's manual

    International Nuclear Information System (INIS)

    Koch, C.D.; Pirkle, F.L.; Schmidt, J.S.

    1981-01-01

    A Principal Components Analysis (PCA) has been written to aid in the interpretation of multivariate aerial radiometric data collected by the US Department of Energy (DOE) under the National Uranium Resource Evaluation (NURE) program. The variations exhibited by these data have been reduced and classified into a number of linear combinations by using the PCA program. The PCA program then generates histograms and outlier maps of the individual variates. Black and white plots can be made on a Calcomp plotter by the application of follow-up programs. All programs referred to in this guide were written for a DEC-10. From this analysis a geologist may begin to interpret the data structure. Insight into geological processes underlying the data may be obtained.

  33. Multi-component separation and analysis of bat echolocation calls.

    Science.gov (United States)

    DiCecco, John; Gaudette, Jason E; Simmons, James A

    2013-01-01

    The vast majority of animal vocalizations contain multiple frequency modulated (FM) components with varying amounts of non-linear modulation and harmonic instability. This is especially true of biosonar sounds where precise time-frequency templates are essential for neural information processing of echoes. Understanding the dynamic waveform design by bats and other echolocating animals may help to improve the efficacy of man-made sonar through biomimetic design. Bats are known to adapt their call structure based on the echolocation task, proximity to nearby objects, and density of acoustic clutter. To interpret the significance of these changes, a method was developed for component separation and analysis of biosonar waveforms. Techniques for imaging in the time-frequency plane are typically limited due to the uncertainty principle and interference cross terms. This problem is addressed by extending the use of the fractional Fourier transform to isolate each non-linear component for separate analysis. Once separated, empirical mode decomposition can be used to further examine each component. The Hilbert transform may then successfully extract detailed time-frequency information from each isolated component. This multi-component analysis method is applied to the sonar signals of four species of bats recorded in-flight by radiotelemetry along with a comparison of other common time-frequency representations.
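
    The last step of the chain — extracting envelope and instantaneous frequency from one isolated component via the Hilbert transform — can be illustrated with a synthetic FM sweep; the sampling rate and sweep parameters below are loosely bat-like but invented.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    # Synthetic FM sweep standing in for one isolated call component.
    fs = 250_000
    t = np.arange(0, 0.003, 1 / fs)
    sweep = np.cos(2 * np.pi * (80_000 * t - 8e6 * t ** 2))  # ~80 kHz sweeping downward

    # The analytic signal yields the component's envelope and instantaneous frequency,
    # i.e. the detailed time-frequency information described in the abstract.
    analytic = hilbert(sweep)
    envelope = np.abs(analytic)
    inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
    print(round(float(inst_freq[100])), round(float(inst_freq[-100])))  # Hz, early vs late
    ```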

  34. Fatigue Reliability Analysis of Wind Turbine Cast Components

    DEFF Research Database (Denmark)

    Rafsanjani, Hesam Mirzaei; Sørensen, John Dalsgaard; Fæster, Søren

    2017-01-01

    The fatigue life of wind turbine cast components, such as the main shaft in a drivetrain, is generally determined by defects from the casting process. These defects may reduce the fatigue life and they are generally distributed randomly in components. The foundries, cutting facilities and test facilities can affect the verification of properties by testing. Hence, it is important to have a tool to identify which foundry, cutting and/or test facility produces components which, based on the relevant uncertainties, have the largest expected fatigue life or, alternatively, have the largest reliability, and to quantify the relevant uncertainties using available fatigue tests. Illustrative results are presented as obtained by statistical analysis of a large set of fatigue data for casted test components typically used for wind turbines. Furthermore, the SN curves (fatigue life curves based on applied stress) ...

  35. Computer compensation for NMR quantitative analysis of trace components

    International Nuclear Information System (INIS)

    Nakayama, T.; Fujiwara, Y.

    1981-01-01

    A computer program has been written that determines trace components and separates overlapping components in multicomponent NMR spectra. This program uses the Lorentzian curve as a theoretical curve of NMR spectra. The coefficients of the Lorentzian are determined by the method of least squares. Systematic errors such as baseline/phase distortion are compensated and random errors are smoothed by taking moving averages, so that these processes contribute substantially to decreasing the accumulation time of spectral data. The accuracy of quantitative analysis of trace components has been improved by two significant figures. This program was applied to determining the abundance of 13C and the saponification degree of PVA.
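
    The core of such a program is a least-squares fit of Lorentzian lineshapes. A small sketch with a fabricated two-line spectrum (one strong line, one trace line); the baseline/phase compensation and moving-average smoothing steps of the original program are omitted.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def lorentzian(x, amplitude, center, width):
        # Lorentzian lineshape used as the theoretical NMR curve.
        return amplitude * width ** 2 / ((x - center) ** 2 + width ** 2)

    # Fabricated spectrum: a strong line plus a weak (trace) line and noise.
    rng = np.random.default_rng(8)
    x = np.linspace(0, 10, 500)
    y = lorentzian(x, 1.0, 4.0, 0.2) + lorentzian(x, 0.05, 6.0, 0.2)
    y += rng.normal(scale=0.002, size=x.size)

    def two_lines(x, a1, c1, w1, a2, c2, w2):
        # Sum of two overlapping Lorentzians, fitted jointly by least squares.
        return lorentzian(x, a1, c1, w1) + lorentzian(x, a2, c2, w2)

    popt, _ = curve_fit(two_lines, x, y, p0=[1, 4, 0.3, 0.1, 6, 0.3])
    print(np.round(popt, 3))  # recovered amplitudes/centers/widths, incl. the trace line
    ```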

  36. Towards Cognitive Component Analysis

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Ahrendt, Peter; Larsen, Jan

    2005-01-01

    Cognitive component analysis (COCA) is here defined as the process of unsupervised grouping of data such that the ensuing group structure is well-aligned with that resulting from human cognitive activity. We have earlier demonstrated that independent components analysis is relevant for representing...

  37. Principal Component Analysis - A Powerful Tool in Computing Marketing Information

    Directory of Open Access Journals (Sweden)

    Constantin C.

    2014-12-01

    Full Text Available This paper presents instrumental research on a powerful multivariate data analysis method which can be used by researchers to obtain valuable information for decision makers who need to solve the marketing problems a company faces. The literature stresses the need to avoid the multicollinearity phenomenon in multivariate analysis and the capability of Principal Component Analysis (PCA) to reduce a large number of variables that may be correlated with each other to a small number of uncorrelated principal components. In this respect, the paper presents step-by-step the process of applying PCA in marketing research when a large number of naturally collinear variables is used.

  38. COMPARING INDEPENDENT COMPONENT ANALYSIS WITH PRINCIPLE COMPONENT ANALYSIS IN DETECTING ALTERATIONS OF PORPHYRY COPPER DEPOSIT (CASE STUDY: ARDESTAN AREA, CENTRAL IRAN)

    Directory of Open Access Journals (Sweden)

    S. Mahmoudishadi

    2017-09-01

    Full Text Available Image processing techniques in the transform domain are employed as analysis tools for enhancing the detection of mineral deposits. The process of decomposing the image into important components increases the probability of mineral extraction. In this study, the performance of Principal Component Analysis (PCA) and Independent Component Analysis (ICA) has been evaluated for the visible and near-infrared (VNIR) and shortwave infrared (SWIR) subsystems of ASTER data. Ardestan is located in part of the Central Iranian Volcanic Belt, which hosts many well-known porphyry copper deposits. This research investigated the propylitic and argillic alteration zones and the outer mineralogy zone in part of the Ardestan region. The two approaches were applied to discriminate alteration zones from igneous bedrock using the major absorptions of indicator minerals from the alteration and mineralogy zones in the spectral range of the ASTER bands. Specialized PC components (PC2, PC3 and PC6) were used to identify pyrite and the argillic and propylitic zones that are distinguished from igneous bedrock in an RGB color composite image. According to the eigenvalues, components 2, 3 and 6 account for 4.26%, 0.9% and 0.09% of the total variance of the data for the Ardestan scene, respectively. For the purpose of discriminating the alteration and mineralogy zones of the porphyry copper deposit from the bedrock, the information carried by those small percentages of the data is more accurately separated in the ICA independent components IC2, IC3 and IC6 than in the noisy PCA bands. The results of the ICA method also conform to the locations of lithological units in the Ardestan region.

  39. Comparing Independent Component Analysis with Principle Component Analysis in Detecting Alterations of Porphyry Copper Deposit (Case Study: Ardestan Area, Central Iran)

    Science.gov (United States)

    Mahmoudishadi, S.; Malian, A.; Hosseinali, F.

    2017-09-01

    Image processing techniques in the transform domain are employed as analysis tools for enhancing the detection of mineral deposits. The process of decomposing the image into important components increases the probability of mineral extraction. In this study, the performance of Principal Component Analysis (PCA) and Independent Component Analysis (ICA) has been evaluated for the visible and near-infrared (VNIR) and shortwave infrared (SWIR) subsystems of ASTER data. Ardestan is located in part of the Central Iranian Volcanic Belt, which hosts many well-known porphyry copper deposits. This research investigated the propylitic and argillic alteration zones and the outer mineralogy zone in part of the Ardestan region. The two approaches were applied to discriminate alteration zones from igneous bedrock using the major absorptions of indicator minerals from the alteration and mineralogy zones in the spectral range of the ASTER bands. Specialized PC components (PC2, PC3 and PC6) were used to identify pyrite and the argillic and propylitic zones that are distinguished from igneous bedrock in an RGB color composite image. According to the eigenvalues, components 2, 3 and 6 account for 4.26%, 0.9% and 0.09% of the total variance of the data for the Ardestan scene, respectively. For the purpose of discriminating the alteration and mineralogy zones of the porphyry copper deposit from the bedrock, the information carried by those small percentages of the data is more accurately separated in the ICA independent components IC2, IC3 and IC6 than in the noisy PCA bands. The results of the ICA method also conform to the locations of lithological units in the Ardestan region.

  40. ANOVA-principal component analysis and ANOVA-simultaneous component analysis: a comparison.

    NARCIS (Netherlands)

    Zwanenburg, G.; Hoefsloot, H.C.J.; Westerhuis, J.A.; Jansen, J.J.; Smilde, A.K.

    2011-01-01

    ANOVA-simultaneous component analysis (ASCA) is a recently developed tool to analyze multivariate data. In this paper, we enhance the explorative capability of ASCA by introducing a projection of the observations on the principal component subspace to visualize the variation among the measurements.

  41. Cognitive Component Analysis

    DEFF Research Database (Denmark)

    Feng, Ling

    2008-01-01

    This dissertation concerns the investigation of the consistency of statistical regularities in a signaling ecology and human cognition, while inferring appropriate actions for a speech-based perceptual task. It is based on unsupervised Independent Component Analysis providing a rich spectrum of audio contexts along with pattern recognition methods to map components to known contexts. It also involves looking for the right representations for auditory inputs, i.e. the data analytic processing pipelines invoked by human brains. The main ideas refer to Cognitive Component Analysis, defined as the process of unsupervised grouping of generic data such that the ensuing group structure is well-aligned with that resulting from human cognitive activity. Its hypothesis runs ecologically: features which are essentially independent in a context defined ensemble, can be efficiently coded as sparse...

  42. APPLICATION OF PRINCIPAL COMPONENT ANALYSIS TO RELAXOGRAPHIC IMAGES

    International Nuclear Information System (INIS)

    Stoyanova, R.S.; Ochs, M.F.; Brown, T.R.; Rooney, W.D.; Li, X.; Lee, J.H.; Springer, C.S.

    1999-01-01

    Standard analysis methods for processing inversion recovery MR images traditionally have used single pixel techniques. In these techniques each pixel is independently fit to an exponential recovery, and spatial correlations in the data set are ignored. By analyzing the image as a complete dataset, improved error analysis and automatic segmentation can be achieved. Here, the authors apply principal component analysis (PCA) to a series of relaxographic images. This procedure decomposes the 3-dimensional data set into three separate images and corresponding recovery times. They interpret the three images as spatial representations of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) content.

  43. A comparison of response spectrum and direct integration analysis methods as applied to a nuclear component support structure

    International Nuclear Information System (INIS)

    Bryan, B.J.; Flanders, H.E. Jr.

    1992-01-01

    Seismic qualification of Class I nuclear components is accomplished using a variety of analytical methods. This paper compares the results of time history dynamic analyses of a heat exchanger support structure using response spectrum and time history direct integration analysis methods. Dynamic analysis is performed on the detailed component models using the two methods. A nonlinear elastic model is used for both the response spectrum and direct integration methods. A nonlinear model, which includes friction and nonlinear springs, is analyzed using time history input by direct integration. The loads from the three cases are compared.

  44. Independent component analysis: recent advances

    OpenAIRE

    Hyvärinen, Aapo

    2013-01-01

    Independent component analysis is a probabilistic method for learning a linear transform of a random vector. The goal is to find components that are maximally independent and non-Gaussian (non-normal). Its fundamental difference to classical multi-variate statistical methods is in the assumption of non-Gaussianity, which enables the identification of original, underlying components, in contrast to classical methods. The basic theory of independent component analysis was mainly developed in th...

  45. Structural analysis of nuclear components

    International Nuclear Information System (INIS)

    Ikonen, K.; Hyppoenen, P.; Mikkola, T.; Noro, H.; Raiko, H.; Salminen, P.; Talja, H.

    1983-05-01

    The report describes the activities accomplished in the project 'Structural Analysis Project of Nuclear Power Plant Components' during the years 1974-1982 in the Nuclear Engineering Laboratory at the Technical Research Centre of Finland. The objective of the project has been to develop Finnish expertise in structural mechanics related to nuclear engineering. The report describes the starting point of the research work, the organization of the project and the research activities in various subareas. Further, the work done with computer codes is described, as are the problems to which the developed expertise has been applied. Finally, the diploma works, publications and work reports, which are mainly in Finnish, are listed to give a view of the content of the project. (author)

  46. Review on characterization methods applied to HTR-fuel element components

    International Nuclear Information System (INIS)

    Koizlik, K.

    1976-02-01

    One of the difficulties which, on the whole, is of no special scientific interest but which bears a lot of technical problems for the development and production of HTR fuel elements is the proper characterization of the element and its components. Consequently, a lot of work has been done during the past years to develop characterization procedures for the fuel, the fuel kernel, the pyrocarbon for the coatings, the matrix and graphite and their components, binder and filler. This paper tries to give a status report on characterization procedures which are applied to HTR fuel in KFA and cooperating institutions. (orig.)

  47. Principal component regression analysis with SPSS.

    Science.gov (United States)

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces all indices of multicollinearity diagnostics, the basic principle of principal component regression and the determination of the 'best' equation method. The paper uses an example to describe how to do principal component regression analysis with SPSS 10.0, including all calculating processes of the principal component regression and all operations of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity. Simplified, faster and still accurate statistical results are obtained through principal component regression analysis with SPSS.
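
    The SPSS workflow translates directly to other environments. Below is a hedged Python analogue (not SPSS) of the same chain — standardize, extract principal components, regress on them — using fabricated collinear predictors to show why the approach tames multicollinearity.

    ```python
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    # Fabricated collinear predictors: x2 and x3 are near-copies of x1.
    rng = np.random.default_rng(9)
    x1 = rng.normal(size=200)
    X = np.column_stack([x1,
                         x1 + 0.01 * rng.normal(size=200),
                         x1 + 0.01 * rng.normal(size=200)])
    y = 3 * x1 + rng.normal(scale=0.5, size=200)

    # Principal component regression: the single retained component absorbs the
    # shared variation, so the regression no longer suffers from multicollinearity.
    pcr = make_pipeline(StandardScaler(), PCA(n_components=1), LinearRegression())
    pcr.fit(X, y)
    print(round(pcr.score(X, y), 3))  # R^2 of the fit on the components
    ```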

  48. The RAGE Game Software Components Repository for Supporting Applied Game Development

    Directory of Open Access Journals (Sweden)

    Krassen Stefanov

    2017-09-01

    Full Text Available This paper presents the architecture of the RAGE repository, which is a unique and dedicated infrastructure that provides access to a wide variety of advanced technology components for applied game development. The RAGE project, which is the principal Horizon2020 research and innovation project on applied gaming, develops up to three dozen software components (RAGE software assets) that are reusable across a wide diversity of game engines, game platforms and programming languages. The RAGE repository provides storage space for assets and their artefacts and is designed as an asset life-cycle management system for defining, publishing, updating, searching and packaging for distribution of these assets. It will be embedded in a social platform for asset developers and other users. A dedicated Asset Repository Manager provides the main functionality of the repository and its integration with other systems. Tools supporting the Asset Manager are presented and discussed. When the RAGE repository is in full operation, applied game developers will be able to easily enhance the quality of their games by including selected advanced game software assets. Making the RAGE repository system and its variety of software assets available aims to enhance the coherence and decisiveness of the applied game industry.

  49. Integration of independent component analysis with near-infrared spectroscopy for analysis of bioactive components in the medicinal plant Gentiana scabra Bunge

    Directory of Open Access Journals (Sweden)

    Yung-Kun Chuang

    2014-09-01

    Full Text Available Independent component (IC) analysis was applied to near-infrared spectroscopy for analysis of gentiopicroside and swertiamarin, the two bioactive components of Gentiana scabra Bunge. ICs that are highly correlated with the two bioactive components were selected for the analysis of tissue cultures, shoots and roots, which were found to distribute at three different positions within the two-dimensional (2D) and 3D domains constructed by the ICs. This setup could be used for quantitative determination of the respective contents of gentiopicroside and swertiamarin within the plants. For gentiopicroside, the spectral calibration model based on the second derivative spectra produced the best results in the wavelength ranges of 600–700 nm, 1600–1700 nm, and 2000–2300 nm (correlation coefficient of calibration = 0.847, standard error of calibration = 0.865%, and standard error of validation = 0.909%). For swertiamarin, the spectral calibration model based on the first derivative spectra produced the best results in the wavelength ranges of 600–800 nm and 2200–2300 nm (correlation coefficient of calibration = 0.948, standard error of calibration = 0.168%, and standard error of validation = 0.216%). Both models showed satisfactory predictability. This study successfully established qualitative and quantitative correlations for gentiopicroside and swertiamarin with near-infrared spectra, enabling rapid and accurate inspection of the bioactive components of G. scabra Bunge at different growth stages.

  50. Visualizing solvent mediated phase transformation behavior of carbamazepine polymorphs by principal component analysis

    DEFF Research Database (Denmark)

    Tian, Fang; Rades, Thomas; Sandler, Niklas

    2008-01-01

    The purpose of this research is to gain a greater insight into the hydrate formation processes of different carbamazepine (CBZ) anhydrate forms in aqueous suspension, where principal component analysis (PCA) was applied for data analysis. The capability of PCA to visualize and to reveal simplified...

  51. Multiscale principal component analysis

    International Nuclear Information System (INIS)

    Akinduko, A A; Gorban, A N

    2014-01-01

    Principal component analysis (PCA) is an important tool in exploring data. The conventional approach to PCA leads to a solution which favours the structures with large variances. This is sensitive to outliers and could obfuscate interesting underlying structures. One of the equivalent definitions of PCA is that it seeks the subspaces that maximize the sum of squared pairwise distances between data projections. This definition opens up more flexibility in the analysis of principal components which is useful in enhancing PCA. In this paper we introduce scales into PCA by maximizing only the sum of pairwise distances between projections for pairs of datapoints with distances within a chosen interval of values [l,u]. The resulting principal component decompositions in Multiscale PCA depend on point (l,u) on the plane and for each point we define projectors onto principal components. Cluster analysis of these projectors reveals the structures in the data at various scales. Each structure is described by the eigenvectors at the medoid point of the cluster which represent the structure. We also use the distortion of projections as a criterion for choosing an appropriate scale especially for data with outliers. This method was tested on both artificial distribution of data and real data. For data with multiscale structures, the method was able to reveal the different structures of the data and also to reduce the effect of outliers in the principal component analysis

  12. Machine learning of frustrated classical spin models. I. Principal component analysis

    Science.gov (United States)

    Wang, Ce; Zhai, Hui

    2017-10-01

    This work aims at determining whether artificial intelligence can recognize a phase transition without prior human knowledge. If this were successful, it could be applied to, for instance, analyzing data from the quantum simulation of unsolved physical models. Toward this goal, we first need to apply the machine learning algorithm to well-understood models and see whether the outputs are consistent with our prior knowledge, which serves as the benchmark for this approach. In this work, we feed the computer data generated by the classical Monte Carlo simulation for the XY model on frustrated triangular and union-jack lattices, which has two order parameters and exhibits two phase transitions. We show that the outputs of the principal component analysis agree very well with our understanding of different orders in different phases, and the temperature dependences of the major components detect the nature and the locations of the phase transitions. Our work offers promise for using machine learning techniques to study sophisticated statistical models, and our results can be further improved by using principal component analysis with kernel tricks and the neural network method.

  13. Path and correlation analysis of perennial ryegrass (Lolium perenne L.) seed yield components

    DEFF Research Database (Denmark)

    Abel, Simon; Gislum, René; Boelt, Birte

    2017-01-01

    Maximum perennial ryegrass seed production potential is substantially greater than harvested yields with harvested yields representing only 20% of calculated potential. Similar to wheat, maize and other agriculturally important crops, seed yield is highly dependent on a number of interacting seed...... yield components. This research was performed to apply and describe path analysis of perennial ryegrass seed yield components in relation to harvested seed yields. Utilising extensive yield components which included subdividing reproductive inflorescences into five size categories, path analysis...... was undertaken assuming a unidirectional causal-admissible relationship between seed yield components and harvested seed yield in six commercial seed production fields. Both spikelets per inflorescence and florets per spikelet had a significant (p seed yield; however, total...

  14. Principal Component Analysis In Radar Polarimetry

    Directory of Open Access Journals (Sweden)

    A. Danklmayer

    2005-01-01

    Second-order moments of multivariate (often Gaussian) joint probability density functions can be described by the covariance or normalised correlation matrices, or by the Kennaugh matrix (Kronecker matrix). In radar polarimetry the application of the covariance matrix is known as target decomposition theory, which is a special application of the extremely versatile Principal Component Analysis (PCA). The basic idea of PCA is to convert a data set consisting of correlated random variables into a new set of uncorrelated variables and to order the new variables according to the value of their variances. It is important to stress that uncorrelatedness does not necessarily mean independence, which is used in the much stronger concept of Independent Component Analysis (ICA). Both concepts agree for multivariate Gaussian distribution functions, representing the most random and least structured distribution. In this contribution, we propose a new approach to applying the concept of PCA to radar polarimetry. To this end, new uncorrelated random variables are introduced by means of linear transformations with well-determined loading coefficients. This, in turn, allows the decomposition of the original random backscattering target variables into three point targets with new random uncorrelated variables whose variances agree with the eigenvalues of the covariance matrix. This allows a new interpretation of existing decomposition theorems.
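
    The core computation, diagonalizing the polarimetric covariance matrix so that the new uncorrelated variables' variances are its eigenvalues, can be illustrated as follows. This is a hedged sketch on synthetic complex scattering vectors; the three-component vector convention is an assumption for illustration, not taken from the paper.

```python
# Sketch: eigen-decomposition of a polarimetric covariance matrix, the PCA
# step underlying target decomposition (synthetic scattering vectors).
import numpy as np

rng = np.random.default_rng(2)
# Stand-in complex scattering vectors k = [S_hh, sqrt(2) S_hv, S_vv].
k = rng.normal(size=(1000, 3)) + 1j * rng.normal(size=(1000, 3))

C = (k.conj().T @ k) / len(k)            # 3x3 Hermitian covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)     # real eigenvalues, orthonormal eigenvectors
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]   # sort descending

# Each eigenvalue/eigenvector pair corresponds to one uncorrelated point
# target; the variances of the new variables are the eigenvalues.
print("eigenvalues (point-target powers):", eigvals.round(3))
```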

  15. Efficient real time OD matrix estimation based on principal component analysis

    NARCIS (Netherlands)

    Djukic, T.; Flötteröd, G.; Van Lint, H.; Hoogendoorn, S.P.

    2012-01-01

    In this paper we explore the idea of dimensionality reduction and approximation of OD demand based on principal component analysis (PCA). First, we show how we can apply PCA to linearly transform the high-dimensional OD matrices into a lower-dimensional space without significant loss of accuracy.

  16. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    Science.gov (United States)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

    In order to rectify the problems that the component reliability model exhibits deviation and that the evaluation result is low because failure propagation is overlooked in the traditional reliability evaluation of machine center components, a new reliability evaluation method based on cascading failure analysis and failure-influence degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure-influence degrees of the system components are assessed with the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, and the results show the following: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component is positively correlated with its failure-influence degree, which provides a theoretical basis for reliability allocation of machine center systems.
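
    The failure-influence step, running PageRank over the directed cascading-failure graph encoded by the adjacency matrix, can be sketched as below. The adjacency matrix and damping factor are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: failure-influence degrees via PageRank on a directed
# cascading-failure graph (adjacency matrix is illustrative only).
import numpy as np

A = np.array([[0, 1, 1, 0],      # A[i, j] = 1 if failure of i propagates to j
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)

def pagerank(adj, damping=0.85, tol=1e-10):
    n = adj.shape[0]
    outdeg = adj.sum(axis=1, keepdims=True)
    # Row-normalize; nodes without out-edges jump uniformly.
    P = np.where(outdeg > 0, adj / np.where(outdeg == 0, 1.0, outdeg), 1.0 / n)
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - damping) / n + damping * (P.T @ r)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# Rank components by how strongly failures concentrate on them.
print("failure-influence degrees:", pagerank(A).round(4))
```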

  17. Nonlinear fitness-space-structure adaptation and principal component analysis in genetic algorithms: an application to x-ray reflectivity analysis

    International Nuclear Information System (INIS)

    Tiilikainen, J; Tilli, J-M; Bosund, V; Mattila, M; Hakkarainen, T; Airaksinen, V-M; Lipsanen, H

    2007-01-01

    Two novel genetic algorithms implementing principal component analysis and an adaptive nonlinear fitness-space-structure technique are presented and compared with conventional algorithms in x-ray reflectivity analysis. Principal component analysis based on Hessian or interparameter covariance matrices is used to rotate the coordinate frame. The nonlinear adaptation applies nonlinear estimates to reshape the probability distribution of the trial parameters. The simulated x-ray reflectivity of a realistic model of a periodic nanolaminate structure was used as a test case for the fitting algorithms. The novel methods showed significantly faster convergence and less stagnation than conventional non-adaptive genetic algorithms. The covariance approach needs no additional curve calculations compared with conventional methods, and it had better convergence properties than the computationally expensive Hessian approach. These new algorithms can also be applied to other fitting problems where tight interparameter dependence is present.

  18. Principal Component Analysis: Resources for an Essential Application of Linear Algebra

    Science.gov (United States)

    Pankavich, Stephen; Swanson, Rebecca

    2015-01-01

    Principal Component Analysis (PCA) is a highly useful topic within an introductory Linear Algebra course, especially since it can be used to incorporate a number of applied projects. This method represents an essential application and extension of the Spectral Theorem and is commonly used within a variety of fields, including statistics,…

  19. Development of computational methods of design by analysis for pressure vessel components

    International Nuclear Information System (INIS)

    Bao Shiyi; Zhou Yu; He Shuyan; Wu Honglin

    2005-01-01

    Stress classification is not only one of the key steps when a pressure vessel component is designed by analysis, but also a difficulty which has long puzzled engineers and designers. At present, several computational methods of design by analysis have been developed and applied for calculating and categorizing the stress field of pressure vessel components, such as Stress Equivalent Linearization, the Two-Step Approach, the Primary Structure method, the Elastic Compensation method, the GLOSS R-Node method and so on. Moreover, the ASME code also gives an inelastic method of design by analysis for limiting gross plastic deformation only. When pressure vessel components are designed by analysis, there are sometimes huge differences between the results obtained with the different calculation and analysis methods mentioned above. As a consequence, this is the main reason limiting wide application of the design-by-analysis approach. Recently, a new approach, presented in the new proposal of a European standard, CEN's unfired pressure vessel standard EN 13445-3, tries to avoid the problems of stress classification by analyzing the pressure vessel structure's various failure mechanisms directly, based on elastic-plastic theory. In this paper, some of the stress classification methods mentioned above are described briefly, and the computational methods cited in the European pressure vessel standard, such as the Deviatoric Map and nonlinear analysis methods (plastic analysis and limit analysis), are depicted compendiously. Furthermore, the characteristics of the computational methods of design by analysis are summarized to aid selection of the proper computational method when designing a pressure vessel component by analysis. (authors)

  20. Applying of component system development in object methodology

    Directory of Open Access Journals (Sweden)

    Milan Mišovič

    2013-01-01

    -oriented methodology (Arlo, Neust, 2007), (Kan, Müller, 2005), (Krutch, 2003) for problem domains with double-layer process logic. An integration method is indicated, based on a certain meta-model (Applying of the Component system Development in object Methodology) and leading to the formation of the component system. The mentioned meta-model is divided into partial workflows that are located in different stages of a classic object process-based methodology. The consistency of the input and output artifacts in the working practices of the meta-model and the mentioned object methodology is taken into account. This paper focuses on static component systems, while dynamic and mobile component systems are beginning to be explored. In addition, in this contribution the component system is understood as a specific system, with sets, graphs and system algebra used as notation for its system properties and basic terms.

  1. Multiview Bayesian Correlated Component Analysis

    DEFF Research Database (Denmark)

    Kamronn, Simon Due; Poulsen, Andreas Trier; Hansen, Lars Kai

    2015-01-01

    are identical. Here we propose a hierarchical probabilistic model that can infer the level of universality in such multiview data, from completely unrelated representations, corresponding to canonical correlation analysis, to identical representations as in correlated component analysis. This new model, which...... we denote Bayesian correlated component analysis, evaluates favorably against three relevant algorithms in simulated data. A well-established benchmark EEG data set is used to further validate the new model and infer the variability of spatial representations across multiple subjects....

  2. Understanding Oral Reading Fluency among Adults with Low Literacy: Dominance Analysis of Contributing Component Skills

    Science.gov (United States)

    Mellard, Daryl F.; Anthony, Jason L.; Woods, Kari L.

    2012-01-01

    This study extends the literature on the component skills involved in oral reading fluency. Dominance analysis was applied to assess the relative importance of seven reading-related component skills in the prediction of the oral reading fluency of 272 adult literacy learners. The best predictors of oral reading fluency when text difficulty was…

  3. Applying of component system development in object methodology, case study

    Directory of Open Access Journals (Sweden)

    Milan Mišovič

    2013-01-01

    Creating computerized target software as a component system has been a very strong requirement over the last 20 years of software development. After all, architectural components are self-contained units, presenting not only partial and overall system behavior, but also cooperating with each other on the basis of their interfaces. Among other things, components have allowed flexible modification of the processes whose behavior is the foundation of component behavior, without changing the life of the component system. On the other hand, the component system makes it possible, at design time, to create numerous new connections between components and thus create modified system behaviors. This all enables company management to perform, at design time, required behavioral changes of processes in accordance with the requirements of changing production and markets. The development of software, generally referred to as SDP (Software Development Process), contains two directions. The first one, called CBD (Component-Based Development), is dedicated to the development of component-based systems, CBS (Component-Based System); the second is the development of software under the influence of SOA (Service-Oriented Architecture). Both directions are equipped with their own development methodologies. The subject of this paper is only the first direction and the application of component-based system development in its object-oriented methodologies. The requirement of today is to carry out the development of component-based systems within developed object-oriented methodologies as a dominant style. In some of the known methodologies, however, this development is not completely transparent and is not even recognized as dominant. In some cases, it is corrected by special meta-integration models of component system development into an object methodology. This paper presents a case study

  4. Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis

    Science.gov (United States)

    Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.

    2016-01-01

    Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
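
    The robust PCA step, splitting a (low-frequency) image matrix into a low-rank part plus a sparse error, is commonly solved by principal component pursuit. A compact sketch via the inexact augmented Lagrangian method follows; the step-size heuristic and iteration count are common defaults, not the paper's implementation.

```python
# Sketch of robust PCA by principal component pursuit (inexact ALM).
import numpy as np

def shrink(M, tau):                       # soft-thresholding operator
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0)

def svd_shrink(M, tau):                   # singular-value thresholding
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(D, n_iter=200):
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))        # standard sparsity weight
    mu = 0.25 * m * n / np.abs(D).sum()   # common heuristic step size
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(n_iter):
        L = svd_shrink(D - S + Y / mu, 1 / mu)   # low-rank update
        S = shrink(D - L + Y / mu, lam / mu)     # sparse update
        Y = Y + mu * (D - L - S)                 # dual update
    return L, S                           # low-rank part, sparse corruption

D = np.random.default_rng(3).normal(size=(64, 64))
low_rank, sparse = rpca(D)
```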

  5. Representation for dialect recognition using topographic independent component analysis

    Science.gov (United States)

    Wei, Qu

    2004-10-01

    In dialect speech recognition, the feature of tone in one dialect is subject to changes in pitch frequency as well as the length of tone. It is beneficial for the recognition if a representation can be derived to account for the frequency and length changes of tone in an effective and meaningful way. In this paper, we propose a method for learning such a representation from a set of unlabeled speech sentences containing the features of the dialect changed from various pitch frequencies and time length. Topographic independent component analysis (TICA) is applied for the unsupervised learning to produce an emergent result that is a topographic matrix made up of basis components. The dialect speech is topographic in the following sense: the basis components as the units of the speech are ordered in the feature matrix such that components of one dialect are grouped in one axis and changes in time windows are accounted for in the other axis. This provides a meaningful set of basis vectors that may be used to construct dialect subspaces for dialect speech recognition.

  6. Evaluating the Impact of Conservatism in Industrial Fatigue Analysis of Life-Limited Components

    Directory of Open Access Journals (Sweden)

    Hoole Joshua

    2018-01-01

    Full Text Available This paper presents a review of the conservatism approaches applied by different industrial sectors to the stress-life (S-N analysis of ‘life-limited’ or ‘safe-life’ components. A comparison of the fatigue design standards for 6 industrial sectors identified that the conservatism approaches are highly inconsistent when comparing the areas of variability and uncertainty accounted for along with the conservatism magnitude and method of application. Through the use of a case-study based on the SAE keyhole benchmark and 4340 steel S-N data, the industrial sector which introduces the greatest reduction of a component life-limit was identified as the nuclear sector. The results of the case-study also highlighted that conservatism applied to account for scatter in S-N data currently provides the greatest contribution to the reduction of component life-limits.

  7. Task-related component analysis for functional neuroimaging and application to near-infrared spectroscopy data.

    Science.gov (United States)

    Tanaka, Hirokazu; Katura, Takusige; Sato, Hiroki

    2013-01-01

    Reproducibility of experimental results lies at the heart of scientific disciplines. Here we propose a signal processing method that extracts task-related components by maximizing the reproducibility during task periods from neuroimaging data. Unlike hypothesis-driven methods such as general linear models, no specific time courses are presumed, and unlike data-driven approaches such as independent component analysis, no arbitrary interpretation of components is needed. Task-related components are constructed by a linear, weighted sum of multiple time courses, and its weights are optimized so as to maximize inter-block correlations (CorrMax) or covariances (CovMax). Our analysis method is referred to as task-related component analysis (TRCA). The covariance maximization is formulated as a Rayleigh-Ritz eigenvalue problem, and corresponding eigenvectors give candidates of task-related components. In addition, a systematic statistical test based on eigenvalues is proposed, so task-related and -unrelated components are classified objectively and automatically. The proposed test of statistical significance is found to be independent of the degree of autocorrelation in data if the task duration is sufficiently longer than the temporal scale of autocorrelation, so TRCA can be applied to data with autocorrelation without any modification. We demonstrate that simple extensions of TRCA can provide most distinctive signals for two tasks and can integrate multiple modalities of information to remove task-unrelated artifacts. TRCA was successfully applied to synthetic data as well as near-infrared spectroscopy (NIRS) data of finger tapping. There were two statistically significant task-related components; one was a hemodynamic response, and another was a piece-wise linear time course. In summary, we conclude that TRCA has a wide range of applications in multi-channel biophysical and behavioral measurements.
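
    The CovMax variant described above reduces to a generalized (Rayleigh-Ritz) eigenvalue problem: maximize the summed inter-block covariance of the weighted channel sum relative to its total variance. A hedged sketch on synthetic multi-channel data, assuming NumPy/SciPy; the block layout and names are illustrative.

```python
# Hedged sketch of task-related component analysis (CovMax variant).
import numpy as np
from scipy.linalg import eigh

def trca(blocks):
    """blocks: list of (channels, samples) arrays, one per task block."""
    blocks = [b - b.mean(axis=1, keepdims=True) for b in blocks]
    S = np.zeros((blocks[0].shape[0],) * 2)
    for i, bi in enumerate(blocks):          # sum covariances over block pairs
        for j, bj in enumerate(blocks):
            if i != j:
                S += bi @ bj.T / bi.shape[1]
    X = np.concatenate(blocks, axis=1)
    Q = X @ X.T / X.shape[1]                 # total covariance
    eigvals, W = eigh(S, Q)                  # generalized problem S w = lambda Q w
    return W[:, ::-1], eigvals[::-1]         # weights sorted by reproducibility

rng = np.random.default_rng(4)
blocks = [rng.normal(size=(8, 500)) for _ in range(6)]   # 8 channels, 6 blocks
W, eigvals = trca(blocks)
task_component = W[:, 0] @ np.concatenate(blocks, axis=1)
```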

  8. Portable XRF and principal component analysis for bill characterization in forensic science

    International Nuclear Information System (INIS)

    Appoloni, C.R.; Melquiades, F.L.

    2014-01-01

    Several modern techniques have been applied to prevent counterfeiting of money bills. The objective of this study was to demonstrate the potential of the portable X-ray fluorescence (PXRF) technique and the multivariate analysis method of principal component analysis (PCA) for the classification of bills for use in forensic science. Bills of the Dollar, Euro and Real (Brazilian currency) were measured directly at different colored regions, without any previous preparation. Spectra interpretation allowed the identification of Ca, Ti, Fe, Cu, Sr, Y, Zr and Pb. PCA separated the bills into three groups, with subgroups among the Brazilian currency. In conclusion, the samples were classified according to their origin, identifying the elements responsible for the differentiation and the basic pigment composition. PXRF allied to multivariate discriminant methods is a promising technique for rapid and non-destructive identification of false bills in forensic science. - Highlights: • The paper presents a direct method for bill discrimination by EDXRF and principal component analysis. • The bills are analyzed directly, without sample preparation and non-destructively. • The results demonstrate that the methodology is feasible and could be applied in forensic science for identification of origin and of false banknotes. • The novelty is that portable EDXRF is very fast and efficient for bill characterization

  9. Handbook of Applied Analysis

    CERN Document Server

    Papageorgiou, Nikolaos S

    2009-01-01

    Offers an examination of important theoretical methods and procedures in applied analysis. This book details the important theoretical trends in nonlinear analysis and applications to different fields. It is suitable for those working on nonlinear analysis.

  10. Integrating Data Transformation in Principal Components Analysis

    KAUST Repository

    Maadooliat, Mehdi

    2015-01-02

    Principal component analysis (PCA) is a popular dimension reduction method to reduce the complexity and obtain the informative aspects of high-dimensional datasets. When the data distribution is skewed, data transformation is commonly used prior to applying PCA. Such transformation is usually obtained from previous studies, prior knowledge, or trial-and-error. In this work, we develop a model-based method that integrates data transformation in PCA and finds an appropriate data transformation using the maximum profile likelihood. Extensions of the method to handle functional data and missing values are also developed. Several numerical algorithms are provided for efficient computation. The proposed method is illustrated using simulated and real-world data examples.
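
    For contrast with the integrated approach described above, the common two-step practice (transform first, then apply PCA) is easy to sketch. The stand-in below uses a Box-Cox-type transform from scikit-learn; the paper instead estimates the transformation inside PCA via maximum profile likelihood, which this sketch does not reproduce.

```python
# Stand-in two-step sketch: estimate a Box-Cox-type transformation, then PCA.
import numpy as np
from sklearn.preprocessing import PowerTransformer
from sklearn.decomposition import PCA

rng = np.random.default_rng(12)
X = rng.lognormal(mean=0.0, sigma=1.0, size=(200, 10))    # skewed positive data

Xt = PowerTransformer(method="box-cox").fit_transform(X)  # per-variable transform
scores = PCA(n_components=3).fit_transform(Xt)            # PCA on transformed data
```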

  11. Applied longitudinal analysis

    CERN Document Server

    Fitzmaurice, Garrett M; Ware, James H

    2012-01-01

    Praise for the First Edition: ". . . [this book] should be on the shelf of everyone interested in . . . longitudinal data analysis." -Journal of the American Statistical Association. Features newly developed topics and applications of the analysis of longitudinal data. Applied Longitudinal Analysis, Second Edition presents modern methods for analyzing data from longitudinal studies and now features the latest state-of-the-art techniques. The book emphasizes practical, rather than theoretical, aspects of methods for the analysis of diverse types of lo

  12. Risk-informed importance analysis of in-service testing components for Ulchin units 3 and 4

    International Nuclear Information System (INIS)

    Kang, D. I.; Kim, K. Y.; Ha, J. J.

    2001-01-01

    In this paper, we perform a risk-informed importance analysis of in-service testing (IST) components for Ulchin Units 3 and 4. The importance analysis using PSA is performed through Level 1 internal and external, shutdown/low-power operation, and Level 2 internal PSA. A sensitivity analysis is also performed. For the components not modeled in the PSA logic, we develop and apply a new integrated importance analysis method. The importance analysis results for IST valves show that 167 (26.55%) of 629 IST valves are HSSCs and 462 (73.45%) are LSSCs. The importance analysis results for IST pumps show that 28 (70%) of 40 IST pumps are HSSCs and 12 (30%) are LSSCs

  13. Principal component analysis of the nonlinear coupling of harmonic modes in heavy-ion collisions

    Science.gov (United States)

    Bożek, Piotr

    2018-03-01

    The principal component analysis of flow correlations in heavy-ion collisions is studied. The correlation matrix of harmonic flow is generalized to correlations involving several different flow vectors. The method can be applied to study the nonlinear coupling between different harmonic modes in a double differential way in transverse momentum or pseudorapidity. The procedure is illustrated with results from the hydrodynamic model applied to Pb + Pb collisions at √(s_NN) = 2760 GeV. Three examples of generalized correlation matrices in transverse momentum are constructed, corresponding to the coupling of v_2^2 and v_4, of v_2 v_3 and v_5, or of v_2^3, v_3^3, and v_6. The principal component decomposition is applied to the correlation matrices and the dominant modes are calculated.

  14. Bridge Diagnosis by Using Nonlinear Independent Component Analysis and Displacement Analysis

    Science.gov (United States)

    Zheng, Juanqing; Yeh, Yichun; Ogai, Harutoshi

    A daily diagnosis system for bridge monitoring and maintenance is developed based on wireless sensors, signal processing, structure analysis, and displacement analysis. The vibration acceleration data of a bridge are first collected through the wireless sensor network. Nonlinear independent component analysis (ICA) and spectral analysis are used to extract the vibration frequencies of the bridge. After that, the vibration displacement is calculated through a band-pass filter and Simpson's rule, and the vibration model is obtained to diagnose the bridge. Since linear ICA algorithms work efficiently only in linear mixing environments, a nonlinear ICA model, which is more complicated, is more practical for bridge diagnosis systems. In this paper, we first use the post-nonlinear method to transform the signal data, then perform linear separation by FastICA, and calculate the vibration displacement of the bridge. The processed data can be used to understand phenomena like corrosion and cracking, and to evaluate the health condition of the bridge. We apply this system to the Nakajima Bridge in Yahata, Kitakyushu, Japan.
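
    The displacement step, band-pass filtering the acceleration around the extracted vibration mode and integrating twice, can be sketched as below. Cumulative trapezoidal integration is substituted for the paper's Simpson's rule for brevity, and all signal parameters are synthetic stand-ins.

```python
# Sketch of the displacement calculation: band-pass, then double integration.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.integrate import cumulative_trapezoid

fs = 200.0                                   # sampling rate [Hz], illustrative
t = np.arange(0, 30, 1 / fs)
accel = (np.sin(2 * np.pi * 2.5 * t)         # stand-in 2.5 Hz bridge mode
         + 0.3 * np.random.default_rng(5).normal(size=t.size))

# Band-pass around the mode found by nonlinear ICA / spectral analysis.
b, a = butter(4, [1.0, 5.0], btype="bandpass", fs=fs)
accel_f = filtfilt(b, a, accel)

velocity = cumulative_trapezoid(accel_f, t, initial=0.0)
velocity -= velocity.mean()                  # remove integration drift
displacement = cumulative_trapezoid(velocity, t, initial=0.0)
```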

  15. Fast and accurate methods of independent component analysis: A survey

    Czech Academy of Sciences Publication Activity Database

    Tichavský, Petr; Koldovský, Zbyněk

    2011-01-01

    Roč. 47, č. 3 (2011), s. 426-438 ISSN 0023-5954 R&D Projects: GA MŠk 1M0572; GA ČR GA102/09/1278 Institutional research plan: CEZ:AV0Z10750506 Keywords : Blind source separation * artifact removal * electroencephalogram * audio signal processing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.454, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/tichavsky-fast and accurate methods of independent component analysis a survey.pdf

  16. [Principal component analysis and cluster analysis of inorganic elements in sea cucumber Apostichopus japonicus].

    Science.gov (United States)

    Liu, Xiao-Fang; Xue, Chang-Hu; Wang, Yu-Ming; Li, Zhao-Jie; Xue, Yong; Xu, Jie

    2011-11-01

    The present study investigates the feasibility of multi-element analysis in determining the geographical origin of the sea cucumber Apostichopus japonicus, and selects effective tracers for assessing A. japonicus geographical origin. The contents of elements such as Al, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, As, Se, Mo, Cd, Hg and Pb in sea cucumber samples from seven geographical origins were determined by means of ICP-MS. The results were used to develop an element database. Cluster analysis (CA) and principal component analysis (PCA) were applied to differentiate the geographical origins. Three principal components which accounted for over 89% of the total variance were extracted from the standardized data. The results of Q-type cluster analysis showed that the 26 samples could be clustered reasonably into five groups, and the classification results were significantly associated with the marine distribution of the samples. CA and PCA are effective methods for element analysis of sea cucumber samples. The mineral element contents of the samples are good chemical descriptors for differentiating their geographical origins.
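
    The standardize-PCA-cluster pipeline reads directly as code. A minimal sketch with stand-in element concentrations, assuming scikit-learn and SciPy; the sample counts echo the study (26 samples, three PCs, five groups) but the data are synthetic.

```python
# Minimal sketch: standardize element data, extract PCs, Q-type clustering.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(6)
elements = rng.lognormal(size=(26, 15))        # 26 samples x 15 elements (stand-in)

Z = StandardScaler().fit_transform(elements)   # zero mean, unit variance
pca = PCA(n_components=3).fit(Z)               # three PCs, as in the study
scores = pca.transform(Z)
print("variance explained:", pca.explained_variance_ratio_.sum())

tree = linkage(scores, method="ward")          # cluster samples in PC space
groups = fcluster(tree, t=5, criterion="maxclust")   # five groups, as reported
```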

  17. Application of empirical orthogonal functions or principal component analysis to environmental variability data

    International Nuclear Information System (INIS)

    Carvajal Escobar, Yesid; Marco Segura, Juan B

    2005-01-01

    An EOF analysis, or principal component (PC) analysis, was made for monthly precipitation (1972-1998) using 50 stations, and for monthly rate of flow (1951-2000) at 8 stations in the Valle del Cauca state, Colombia. Beforehand, we applied five measures to verify the suitability of the analysis: i) evaluation of the significance level of correlations between variables; ii) the Kaiser-Meyer-Olkin (KMO) test; iii) the Bartlett sphericity test; iv) the measure of sampling adequacy (MSA); and v) the percentage of non-redundant residuals with absolute values > 0.05. For the selection of the significant PCs in every set of variables we applied seven criteria: the graphical method, the explained variance percentage, the mean root, the tests of Velicer and Bartlett, the broken stick test and the cross-validation test. We chose the latter as the best one, being robust and quantitative. Precipitation stations were divided into three homogeneous groups by applying hierarchical cluster analysis, which was verified through the geographic method and discriminant analysis for the first four EOFs of precipitation. The EOF method has many advantages: reduction of the dimensionality of multivariate data, calculation of missing data, evaluation and reduction of multicollinearity, building of homogeneous groups, and detection of outliers. With the first four principal components we can explain 60.34% of the total variance of monthly precipitation for the Valle del Cauca state, and 94% of the total variance for the selected records of rates of flow
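
    An EOF/PC decomposition of a station-by-time matrix is, in practice, an SVD of the standardized anomalies. A minimal sketch with synthetic monthly precipitation; the station and month counts echo the study's setup but the data are random.

```python
# Minimal EOF sketch: EOFs are right-singular vectors of the standardized
# time-by-station anomaly matrix; PC time series are the projections.
import numpy as np

rng = np.random.default_rng(13)
months, stations = 324, 50                     # ~27 years x 12 months, 50 stations
precip = rng.gamma(shape=2.0, scale=50.0, size=(months, stations))

anom = (precip - precip.mean(axis=0)) / precip.std(axis=0)   # standardize stations
U, s, Vt = np.linalg.svd(anom, full_matrices=False)

eofs = Vt[:4]                                  # leading four spatial patterns (EOFs)
pcs = U[:, :4] * s[:4]                         # corresponding PC time series
explained = (s**2 / (s**2).sum())[:4]
print("variance explained by first four EOFs:", explained.round(3))
```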

  18. Multivariate least-squares methods applied to the quantitative spectral analysis of multicomponent samples

    International Nuclear Information System (INIS)

    Haaland, D.M.; Easterling, R.G.; Vopicka, D.A.

    1985-01-01

    In an extension of earlier work, weighted multivariate least-squares methods of quantitative FT-IR analysis have been developed. A linear least-squares approximation to nonlinearities in the Beer-Lambert law is made by allowing the reference spectra to be a set of known mixtures. The incorporation of nonzero intercepts in the relation between absorbance and concentration further improves the approximation of nonlinearities while simultaneously accounting for nonzero spectral baselines. Pathlength variations are also accommodated in the analysis, and under certain conditions unknown sample pathlengths can be determined. All spectral data are used to improve the precision and accuracy of the estimated concentrations. During the calibration phase of the analysis, pure-component spectra are estimated from the standard mixture spectra. These can be compared with the measured pure-component spectra to determine which vibrations exhibit nonlinear behavior. In the prediction phase of the analysis, the calculated spectra are used in our previous least-squares analysis to estimate sample component concentrations. These methods were applied to the analysis of the IR spectra of binary mixtures of esters. Even with severely overlapping spectral bands and nonlinearities in the Beer-Lambert law, the average relative error in the estimated concentrations was <1%
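
    The calibration/prediction idea, estimating pure-component spectra plus nonzero intercepts from known mixtures and then inverting for unknown concentrations, can be sketched with ordinary least squares (the paper's weighting is omitted for brevity). Synthetic data; all names are illustrative.

```python
# Hedged sketch: classical least-squares calibration with baseline intercepts.
import numpy as np

rng = np.random.default_rng(7)
n_std, n_comp, n_wn = 20, 2, 300
C_train = rng.uniform(0, 1, size=(n_std, n_comp))          # known concentrations
K_true = np.abs(rng.normal(size=(n_comp, n_wn)))           # "true" pure spectra
A_train = C_train @ K_true + 0.02 + 0.01 * rng.normal(size=(n_std, n_wn))

# Calibration: estimate pure-component spectra and baseline intercepts.
C_aug = np.hstack([C_train, np.ones((n_std, 1))])          # add intercept column
K_est, *_ = np.linalg.lstsq(C_aug, A_train, rcond=None)    # rows: components + baseline

# Prediction: solve a = [c, 1] @ K_est for an unknown sample's concentrations.
a_unknown = np.array([0.3, 0.7]) @ K_true + 0.02
c_est, *_ = np.linalg.lstsq(K_est.T, a_unknown, rcond=None)
print("estimated concentrations:", c_est[:n_comp].round(3))
```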

  19. Application of principal component analysis to time series of daily air pollution and mortality

    NARCIS (Netherlands)

    Quant C; Fischer P; Buringh E; Ameling C; Houthuijs D; Cassee F; MGO

    2004-01-01

    We investigated whether cause-specific daily mortality can be attributed to specific sources of air pollution. To construct indicators of source-specific air pollution, we applied a principal component analysis (PCA) on routinely collected air pollution data in the Netherlands during the period

  20. PRINCIPAL COMPONENT ANALYSIS (PCA) AND ITS APPLICATION WITH SPSS

    Directory of Open Access Journals (Sweden)

    Hermita Bus Umar

    2009-03-01

    PCA (Principal Component Analysis) comprises statistical techniques applied to a single set of variables when the researcher is interested in discovering which variables in the set form coherent subsets that are relatively independent of one another. Variables that are correlated with one another but largely independent of other subsets of variables are combined into factors. The goal of PCA is to determine the extent to which each variable is explained by each dimension. Steps in PCA include selecting and measuring a set of variables, preparing the correlation matrix, extracting a set of factors from the correlation matrix, rotating the factors to increase interpretability, and interpreting the results.

  1. COPD phenotype description using principal components analysis

    DEFF Research Database (Denmark)

    Roy, Kay; Smith, Jacky; Kolsum, Umme

    2009-01-01

    BACKGROUND: Airway inflammation in COPD can be measured using biomarkers such as induced sputum and Fe(NO). This study set out to explore the heterogeneity of COPD using biomarkers of airway and systemic inflammation and pulmonary function by principal components analysis (PCA). SUBJECTS...... AND METHODS: In 127 COPD patients (mean FEV1 61%), pulmonary function, Fe(NO), plasma CRP and TNF-alpha, sputum differential cell counts and sputum IL8 (pg/ml) were measured. Principal components analysis as well as multivariate analysis was performed. RESULTS: PCA identified four main components (% variance...... associations between the variables within components 1 and 2. CONCLUSION: COPD is a multi dimensional disease. Unrelated components of disease were identified, including neutrophilic airway inflammation which was associated with systemic inflammation, and sputum eosinophils which were related to increased Fe...

  2. Dynamic analysis of the radiolysis of binary component system

    International Nuclear Information System (INIS)

    Katayama, M.; Trumbore, C.N.

    1975-01-01

    Dynamic analysis was performed on a variety of combinations of components in the radiolysis of binary systems, taking the hydrogen-producing reaction with hydrocarbon RH2 as an example. A definite rule could be established from this analysis, which is useful for revealing the reaction mechanism. The combinations were as follows: 1) both components A and B do not interact but serve only as diluents; 2) A is a diluent, and B is a radical captor; 3) both A and B are radical captors; 4-1) A is a diluent, and B decomposes after receiving the excitation energy of A; 4-2) A is a diluent, and B does not decompose after receiving the excitation energy of A; 5-1) A is a radical captor, and B decomposes after receiving the excitation energy of A; 5-2) A is a radical captor, and B does not decompose after receiving the excitation energy of A; 6-1) both A and B decompose after receiving the excitation energy of the partner component; and 6-2) both A and B do not decompose after receiving the excitation energy of the partner component. According to the dynamic analysis of the above nine combinations, it can be pointed out that if excitation transfer participates, phenomena similar to radical capture are apparently presented. It is desirable to measure the yield of radicals experimentally with a system in which excitation transfer need not be given much consideration. An isotope-substituted mixture system is conceived as one such system. This analytical method was applied to systems containing cyclopentanone, such as the cyclopentanone-cyclohexane system. (Iwakiri, K.)

  3. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis

    Science.gov (United States)

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Due to advances in sensor technology, growing large medical image datasets make it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. At the same time, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing irrelevant features are sparse clustering algorithms that use a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which are difficult to determine in practice. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both the liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383

  4. The Components of Income Inequality in Belgium : Applying the Shorrocks-Decomposition with Bootstrapping

    NARCIS (Netherlands)

    Dekkers, G.J.M.; Nelissen, J.H.M.

    2001-01-01

    We look at the contribution of various income components on income inequality and the changes in this in Belgium.Starting from the Shorrocks decomposition, we apply bootstrapping to construct confidence intervals for both the annual decomposition and the changes over time.It appears that the

  5. Application of Principal Component Analysis in Prompt Gamma Spectra for Material Sorting

    Energy Technology Data Exchange (ETDEWEB)

    Im, Hee Jung; Lee, Yun Hee; Song, Byoung Chul; Park, Yong Joon; Kim, Won Ho

    2006-11-15

    For the detection of illicit materials in a very short time by comparing unknown samples' gamma spectra to pre-programmed material signatures, we first selected a method to reduce the noise of the obtained gamma spectra. After noise reduction, a pattern recognition technique was applied to discriminate the illicit materials from the innocuous materials in the noise-reduced data. Principal component analysis was applied for noise reduction and pattern recognition in prompt gamma spectra. A computer program for the detection of illicit materials based on the PCA method was developed in our lab and can be applied to the PGNAA system for baggage checking at all ports of entry in a very short time.
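
    PCA-based noise reduction of spectra amounts to projecting onto the leading components and reconstructing. A minimal sketch on synthetic gamma spectra, assuming scikit-learn; the channel counts and number of retained components are illustrative.

```python
# Illustrative sketch: PCA denoising by low-rank reconstruction of spectra.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
n_spectra, n_channels = 50, 1024
clean = np.exp(-0.5 * ((np.arange(n_channels) - 300) / 8.0) ** 2)  # stand-in peak
spectra = clean + 0.1 * rng.normal(size=(n_spectra, n_channels))   # noisy spectra

pca = PCA(n_components=5).fit(spectra)
denoised = pca.inverse_transform(pca.transform(spectra))  # keep only top-5 PCs

# The denoised spectra (or their PCA scores) can then be compared against
# pre-programmed material signatures for sorting.
```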

  6. Towards intelligent video understanding applied to plasma facing component monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Martin, V.; Bremond, F. [INRIA, Pulsa team-project, Sophia Antipolis (France); Travere, J.M. [CEA IRFM, Saint Paul-lez-Durance (France); Moncada, V.; Dunand, G. [Sophia Conseil Company, Sophia Antipolis (France)

    2011-07-01

    Infrared thermography has become a routine diagnostic in many magnetic fusion devices to monitor the heat loads on the plasma facing components (PFCs) for both physics studies and machine protection. The good results obtained so far with the developed systems motivate the use of imaging diagnostics for control, especially during long-pulse tokamak operation (e.g. lasting several minutes). In this paper, we promote intelligent monitoring for both real-time purposes (machine protection issues) and post-event analysis purposes (PWI understanding). We propose a vision-based system able to automatically detect, and classify into different pre-defined categories, phenomena such as localized hot spots, transient thermal events (e.g. electrical arcing), and unidentified flying objects (UFOs) such as dust, from infrared imaging data of PFCs. This vision system is made intelligent by endowing it with high-level reasoning (i.e. integration of a priori knowledge of thermal event spatial and temporal properties to guide the recognition), self-adaptability to varying conditions (e.g. different plasma scenarios), and learning capabilities (e.g. statistical modelling of thermal event behaviour based on training samples). This approach has already been successfully applied to the recognition of one critical thermal event at Tore Supra. We present here the latest results of its extension to the recognition of other thermal events (e.g. B4C flakes, impact of fast particles, UFOs) and show how the extracted information can be used during plasma operation at Tore Supra to improve the real-time control system, and for further analysis of PFC aging. This document is composed of an abstract followed by the slides of the presentation. (authors)

  7. Structured Sparse Principal Components Analysis With the TV-Elastic Net Penalty.

    Science.gov (United States)

    de Pierrefeu, Amicie; Lofstedt, Tommy; Hadj-Selem, Fouad; Dubois, Mathieu; Jardri, Renaud; Fovet, Thomas; Ciuciu, Philippe; Frouin, Vincent; Duchesnay, Edouard

    2018-02-01

    Principal component analysis (PCA) is an exploratory tool widely used in data analysis to uncover the dominant patterns of variability within a population. Despite its ability to represent a data set in a low-dimensional space, PCA's interpretability remains limited. Indeed, the components produced by PCA are often noisy or exhibit no visually meaningful patterns. Furthermore, the fact that the components are usually non-sparse may also impede interpretation, unless arbitrary thresholding is applied. However, in neuroimaging, it is essential to uncover clinically interpretable phenotypic markers that would account for the main variability in the brain images of a population. Recently, some alternatives to the standard PCA approach, such as sparse PCA (SPCA), have been proposed, their aim being to limit the density of the components. Nonetheless, sparsity alone does not entirely solve the interpretability problem in neuroimaging, since it may yield scattered and unstable components. We hypothesized that the incorporation of prior information regarding the structure of the data may lead to improved relevance and interpretability of brain patterns. We therefore present a simple extension of the popular PCA framework that adds structured sparsity penalties on the loading vectors in order to identify the few stable regions in the brain images that capture most of the variability. Such structured sparsity can be obtained by combining, e.g., ℓ1 and total variation (TV) penalties, where the TV regularization encodes information on the underlying structure of the data. This paper presents the structured SPCA (denoted SPCA-TV) optimization framework and its resolution. We demonstrate SPCA-TV's effectiveness and versatility on three different data sets. It can be applied to any kind of structured data, such as, e.g., N-dimensional array images or meshes of cortical surfaces. The gains of SPCA-TV over unstructured approaches (such as SPCA and ElasticNet PCA) or structured approach

  8. Comparison of Principal Component Analysis and Linear Discriminant Analysis applied to classification of excitation-emission matrices of the selected biological material

    Directory of Open Access Journals (Sweden)

    Maciej Leśkiewicz

    2016-03-01

    The quality of two linear methods (PCA and LDA) applied to reduce the dimensionality of feature analysis is compared, and the efficiency of their algorithms in classifying selected biological materials according to their excitation-emission fluorescence matrices is examined. It has been found that the LDA method reduces the dimensions (the number of significant variables) more effectively than the PCA method. A relatively good discrimination within the examined biological material has been obtained with the LDA algorithm. Keywords: Feature Analysis, Fluorescence Spectroscopy, Biological Material Classification

  9. Project of integrity assessment of flawed components with structural discontinuity (IAF). Data book for residual stress analysis in weld joint. Analysis model of dissimilar metal weld joint applied post weld heat treatment (PWHT)

    International Nuclear Information System (INIS)

    2012-12-01

    The project of Integrity Assessment of Flawed Components with Structural Discontinuity (IAF) was entrusted to the Japan Power Engineering and Inspection Corporation (JAPEIC) by the Nuclear and Industrial Safety Agency (NISA) and started in FY 2001. It was then taken over by the Japan Nuclear Energy Safety Organization (JNES), established in October 2003, and carried out until FY 2007. In the IAF project, weld joints between nickel-based alloys and low-alloy steels around penetrations in the reactor vessel, safe-ends of nozzles and shroud supports were selected from among the components and pipe arrangements in nuclear power plants where high residual stresses are generated due to welding and complex structure. Residual stresses around the weld joints were estimated by the finite element analysis method (FEM) with a general modeling method, and the reasonability and conservativeness were evaluated. In addition, for a postulated surface crack of stress corrosion cracking (SCC), a simple calculation method of the stress intensity factor (K) required to estimate crack growth was proposed and its effectiveness was confirmed. JNES compiled the results of the IAF project into Data Books of Residual Stress Analysis of Weld Joint, and a Data Book of Simplified Stress Intensity Factor Calculation for Penetration of Reactor as a typical Structural Discontinuity, respectively. Data Books of Residual Stress Analysis in Weld Joint: 1. Butt Weld Joint of Small Diameter Cylinder (4B Sch40) (JNES-RE-2012-0005), 2. Dissimilar Metal Weld Joint in Safe End (One-Side Groove Joint) (JNES-RE-2012-0006), 3. Dissimilar Metal Weld Joint in Safe End (Large Diameter Both-Side Groove Joint) (JNES-RE-2012-0007), 4. Weld Joint around Penetrations in Reactor Vessel (Insert Joint) (JNES-RE-2012-0008), 5. Weld Joint in Shroud Support (H8, H9, H10 and H11 Welds) (JNES-RE-2012-0009), 6. Analysis Model of Dissimilar Metal Weld Joint Applied Post Weld Heat Treatment (PWHT) (JNES-RE-2012-0010). Data Book of

  10. Improvement of Binary Analysis Components in Automated Malware Analysis Framework

    Science.gov (United States)

    2017-02-21

    AFRL-AFOSR-JP-TR-2017-0018: Improvement of Binary Analysis Components in Automated Malware Analysis Framework. Keiji Takeda, Keio University. Final report, covering 26 May 2015 to 25 Nov 2016. ... analyze malicious software (malware) with minimum human interaction. The system autonomously analyzes malware samples by analyzing malware binary programs

  11. Iris recognition based on robust principal component analysis

    Science.gov (United States)

    Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong

    2014-11-01

    Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieved competitive performance in both recognition accuracy and computational efficiency.

  12. Development of an Automated LIBS Analytical Test System Integrated with Component Control and Spectrum Analysis Capabilities

    International Nuclear Information System (INIS)

    Ding Yu; Tian Di; Chen Feipeng; Chen Pengfei; Qiao Shujun; Yang Guang; Li Chunsheng

    2015-01-01

    The present paper proposes an automated laser-induced breakdown spectroscopy (LIBS) analytical test system, which consists of a LIBS measurement and control platform based on a modular design concept and LIBS qualitative spectrum analysis software developed in C#. The platform provides flexible interfacing and automated control; it is compatible with component models from different manufacturers and is constructed in modular form for easy expandability. During peak identification, a more robust peak identification method with improved stability has been achieved by applying additional smoothing to the calculated slope before peak identification. For element identification, an improved main-lines analysis method, which detects all elements at the spectral peaks to avoid omission of certain elements without strong spectral lines, is applied to element identification in the tested LIBS samples. This method also increases the identification speed. In this paper, actual applications have been carried out. According to tests, the analytical test system is compatible with components of various models made by different manufacturers. It can automatically control components to acquire experimental data and conduct filtering, peak identification and qualitative analysis, etc., on spectral data. (paper)
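
    The described peak search, smoothing the calculated slope and locating sign changes, might look like the following sketch (in Python rather than the system's C#). The window size, threshold and synthetic spectrum are illustrative assumptions.

```python
# Hedged sketch: peak identification via a smoothed slope's sign changes.
import numpy as np

def find_peaks_by_slope(y, window=7, min_height=0.0):
    slope = np.gradient(y)
    kernel = np.ones(window) / window
    slope_s = np.convolve(slope, kernel, mode="same")    # smooth the slope
    crossings = (slope_s[:-1] > 0) & (slope_s[1:] <= 0)  # + to - sign change
    idx = np.where(crossings)[0]
    return idx[y[idx] >= min_height]                     # reject small bumps

wl = np.linspace(200, 900, 2048)
spectrum = np.exp(-0.5 * ((wl - 589.0) / 0.6) ** 2)      # stand-in emission line
spectrum += 0.02 * np.random.default_rng(9).normal(size=wl.size)
peaks = find_peaks_by_slope(spectrum, min_height=0.3)
print("peak wavelengths:", wl[peaks].round(1))
```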

  13. Sensitivity Analysis on Elbow Piping Components in Seismically Isolated NPP under Seismic Loading

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Hee Kun; Hahm, Dae Gi; Kim, Min Kyu [KAERI, Daejeon (Korea, Republic of); Jeon, Bub Gyu; Kim, Nam Sik [Pusan National University, Busan (Korea, Republic of)

    2016-05-15

    In this study, the FE model is verified using specimen test results, and simulations with parameter variations are conducted. Effective parameters will be randomly sampled and used as input values for simulations to be applied to the fragility analysis. Pipelines are representative interface components because they can undergo large displacements when they are supported on both isolated and non-isolated structures simultaneously. Elbows in particular are critical components of pipes under severe loading conditions such as earthquake action, because strain accumulates in them during the repeated bending of the pipe. Therefore, the seismic performance of pipe elbow components should be examined thoroughly based on fragility analysis. Fragility assessment of interface piping should take different sources of uncertainty into account. However, the selection of important sources and repeated tests with many random input values are very time-consuming and expensive, so numerical analysis is commonly used. In the present study, a finite element (FE) model of the elbow component is validated using the dynamic test results of elbow components. Using the verified model, sensitivity analysis is implemented as a preliminary step in the seismic fragility analysis of the piping system. Several important input parameters are selected, and how their uncertainty is apportioned to the uncertainty of the elbow response is studied. Piping elbows are critical components under cyclic loading conditions as they are subjected to large displacements. In a seismically isolated NPP, the seismic capacity of the piping system should be evaluated with caution. Seismic fragility assessment requires a preliminary parameter sensitivity analysis of the output of interest with different input parameter values.

  14. A first application of independent component analysis to extracting structure from stock returns.

    Science.gov (United States)

    Back, A D; Weigend, A S

    1997-08-01

    This paper explores the application of a signal processing technique known as independent component analysis (ICA) or blind source separation to multivariate financial time series such as a portfolio of stocks. The key idea of ICA is to linearly map the observed multivariate time series into a new space of statistically independent components (ICs). We apply ICA to three years of daily returns of the 28 largest Japanese stocks and compare the results with those obtained using principal component analysis. The results indicate that the estimated ICs fall into two categories, (i) infrequent large shocks (responsible for the major changes in the stock prices), and (ii) frequent smaller fluctuations (contributing little to the overall level of the stocks). We show that the overall stock price can be reconstructed surprisingly well by using a small number of thresholded weighted ICs. In contrast, when using shocks derived from principal components instead of independent components, the reconstructed price is less similar to the original one. ICA is shown to be a potentially powerful method of analyzing and understanding driving mechanisms in financial time series. The application to portfolio optimization is described in Chin and Weigend (1998).
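
    The reconstruction experiment, thresholding the ICs so that only large shocks remain and rebuilding the price path, can be sketched as follows. Synthetic heavy-tailed returns stand in for the stock data; the 2-standard-deviation threshold is an illustrative choice, not the paper's.

```python
# Sketch: FastICA on daily returns, keep only large shocks, rebuild prices.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(10)
returns = rng.standard_t(df=3, size=(750, 8)) * 0.01   # heavy-tailed daily returns

ica = FastICA(n_components=8, random_state=0)
S = ica.fit_transform(returns)                         # independent components
A = ica.mixing_                                        # mixing matrix

# Keep only large shocks: threshold each IC at 2 standard deviations.
S_big = np.where(np.abs(S) > 2 * S.std(axis=0), S, 0.0)
returns_rec = S_big @ A.T + ica.mean_                  # reconstructed returns

stock = 0
price = np.cumsum(returns[:, stock])                   # original (log-)price path
price_rec = np.cumsum(returns_rec[:, stock])           # path from big shocks only
```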

  15. Applying HAZOP analysis in assessing remote handling compatibility of ITER port plugs

    International Nuclear Information System (INIS)

    Duisings, L.P.M.; Til, S. van; Magielsen, A.J.; Ronden, D.M.S.; Elzendoorn, B.S.Q.; Heemskerk, C.J.M.

    2013-01-01

    Highlights: ► We applied HAZOP analysis to assess the criticality of remote handling maintenance activities on port plugs in the ITER Hot Cell facility. ► We identified several weak points in the general upper port plug maintenance concept. ► We made clear recommendations on redesign in port plug design, operational sequence and Hot Cell equipment. ► The use of a HAZOP approach for the ECH UL port can also be applied to ITER port plugs in general. -- Abstract: This paper describes the application of a Hazard and Operability Analysis (HAZOP) methodology in assessing the criticality of remote handling maintenance activities on port plugs in the ITER Hot Cell facility. As part of the ECHUL consortium, the remote handling team at the DIFFER Institute is developing maintenance tools and procedures for critical components of the ECH Upper launcher (UL). Based on NRG's experience with nuclear risk analysis and Hot Cell procedures, early versions of these tool concepts and maintenance procedures were subjected to a HAZOP analysis. The analysis identified several weak points in the general upper port plug maintenance concept and led to clear recommendations on redesigns in port plug design, the operational sequence and ITER Hot Cell equipment. The paper describes the HAZOP methodology and illustrates its application with specific procedures: the Steering Mirror Assembly (SMA) replacement and the exchange of the Mid Shield Optics (MSO) in the ECH UL. A selection of recommended changes to the launcher design associated with the accessibility, maintainability and manageability of replaceable components is presented

  16. Individual differences in anxiety responses to stressful situations : A three-mode component analysis model

    NARCIS (Netherlands)

    Van Mechelen, Iven; Kiers, Henk A.L.

    1999-01-01

    The three-mode component analysis model is discussed as a tool for a contextualized study of personality. When applied to person x situation x response data, the model includes sets of latent dimensions for persons, situations, and responses as well as a so-called core array, which may be considered

  17. Finger crease pattern recognition using Legendre moments and principal component analysis

    Science.gov (United States)

    Luo, Rongfang; Lin, Tusheng

    2007-03-01

    The finger joint lines, defined as finger creases, and their distribution can identify a person. In this paper, we propose a new finger crease pattern recognition method based on Legendre moments and principal component analysis (PCA). After obtaining the region of interest (ROI) for each finger image in the pre-processing stage, Legendre moments under the Radon transform are applied to construct a moment feature matrix from the ROI, which greatly decreases the dimensionality of the ROI and represents the principal components of the finger creases quite well. Then, an approach to finger crease pattern recognition is designed based on the Karhunen-Loeve (K-L) transform. The method applies PCA to the moment feature matrix rather than the original image matrix to obtain the feature vector. The proposed method has been tested on a database of 824 images from 103 individuals using the nearest neighbor classifier. An accuracy of up to 98.584% was obtained when using 4 samples per class for training. The experimental results demonstrate that our proposed approach is feasible and effective in biometrics.
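
    The recognition stage reduces to PCA on a feature matrix followed by a nearest-neighbour decision. A hedged sketch of that stage, with random vectors standing in for the Legendre moment features (the moment extraction itself is not reproduced):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_classes, per_class, n_features = 103, 8, 64     # 824 samples, as in the paper
    X = np.vstack([rng.normal(loc=c, scale=5.0, size=(per_class, n_features))
                   for c in range(n_classes)])
    y = np.repeat(np.arange(n_classes), per_class)

    # 4 training samples per class, mirroring the experimental protocol.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=4 * n_classes, stratify=y, random_state=0)

    pca = PCA(n_components=40).fit(X_tr)              # K-L transform of the features
    clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X_tr), y_tr)
    print("accuracy:", clf.score(pca.transform(X_te), y_te))
    ```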

  18. Thermal analysis of the first canted-undulator front-end components at SSRF

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Zhongmin, E-mail: xuzhongmin@sinap.ac.cn; Feng, Xinkang; Wang, Naxiu; Wu, Guanyuan; Zhang, Min; Wang, Jie

    2015-02-21

    The performance under heat load of three kinds of masks (pre-mask, splitter mask and fixed mask-photon shutter) used in the first canted-undulator front end at SSRF is studied. Because these components are shared by two beamlines, X-rays from both the dual undulators and the bending magnets can strike them. Under these complicated conditions they absorb much more thermal power than they would in a usual beamline, so thermal and stress analysis is indispensable for their mechanical design. The method of applying the non-uniform power density in ANSYS is presented. In the thermal stress analysis, both normal operation and the worst possible case are considered. The finite element analysis results, such as the maximum temperature of the body and the cooling wall and the maximum stress of these components, show that their design is reasonable and safe.

  19. Airborne electromagnetic data levelling using principal component analysis based on flight line difference

    Science.gov (United States)

    Zhang, Qiong; Peng, Cong; Lu, Yiming; Wang, Hao; Zhu, Kaiguang

    2018-04-01

    A novel technique is developed to level airborne geophysical data using principal component analysis based on flight-line differences. In this paper, the flight-line difference is introduced to enhance the features of the levelling error in airborne electromagnetic (AEM) data and to improve the correlation between pseudo tie lines. We therefore apply the levelling to the flight-line difference data instead of directly to the original AEM data. Pseudo tie lines are selected so that they are distributed across the profile direction and avoid anomalous regions. Since the levelling errors of the selected pseudo tie lines are highly correlated, principal component analysis is applied to extract the local levelling errors by reconstruction from the low-order principal components. The levelling errors of the original AEM data are then obtained by inverse differencing after spatial interpolation. This levelling method requires neither flying tie lines nor designing a levelling fitting function. Its effectiveness is demonstrated by the levelling results of survey data, compared with the results from tie-line levelling and flight-line correlation levelling.
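
    The core idea can be sketched in a few lines of numpy: difference the grid along flight lines, reconstruct the correlated part of the pseudo tie lines from the leading principal component, and invert the difference. The toy grid below (a shared profile plus a per-line shift) is an assumption for illustration; real processing additionally selects tie lines away from anomalous regions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_lines, n_samples = 40, 200
    profile = np.sin(np.linspace(0, 6, n_samples))[None, :]
    signal = profile * rng.uniform(0.95, 1.05, (n_lines, 1))      # "geology"
    level_err = rng.normal(size=(n_lines, 1)) * np.ones((1, n_samples))
    data = signal + level_err                                     # AEM grid stand-in

    diff = np.diff(data, axis=0)          # flight-line difference enhances the error
    ties = diff[:, ::10]                  # pseudo tie lines across the profile direction

    # Low-order PCA reconstruction of the tie-line matrix isolates the correlated error.
    U, s, Vt = np.linalg.svd(ties, full_matrices=False)
    err_ties = (U[:, :1] * s[:1]) @ Vt[:1, :]

    # Interpolate back to all samples, then invert the difference (up to a datum).
    cols = np.arange(0, n_samples, 10)
    err_diff = np.apply_along_axis(
        lambda row: np.interp(np.arange(n_samples), cols, row), 1, err_ties)
    err = np.vstack([np.zeros((1, n_samples)), np.cumsum(err_diff, axis=0)])
    levelled = data - err
    print("line-to-line std before/after:",
          np.diff(data, axis=0).std(), np.diff(levelled, axis=0).std())
    ```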

  20. Development of guidelines for inelastic analysis in design of fast reactor components

    International Nuclear Information System (INIS)

    Nakamura, Kyotada; Kasahara, Naoto; Morishita, Masaki; Shibamoto, Hiroshi; Inoue, Kazuhiko; Nakayama, Yasunari

    2008-01-01

    Interim guidelines for the application of inelastic analysis to the design of fast reactor components were developed. These guidelines are referenced from the 'Elevated Temperature Structural Design Guide for Commercialized Fast Reactor (FDS)'. The basic policies of the guidelines are predictions that are more rational than those of the elastic analysis approach, together with a guarantee of conservative results for design conditions. The guidelines recommend two kinds of constitutive equations for estimating strains conservatively. They also provide methods for modeling load histories and for estimating fatigue and creep damage based on the results of inelastic analysis. The guidelines were applied to typical design examples, and the results were summarized as exemplars to support users.

  1. Bayesian Independent Component Analysis

    DEFF Research Database (Denmark)

    Winther, Ole; Petersen, Kaare Brandt

    2007-01-01

    In this paper we present an empirical Bayesian framework for independent component analysis. The framework provides estimates of the sources, the mixing matrix and the noise parameters, and is flexible with respect to choice of source prior and the number of sources and sensors. Inside the engine...

  2. Key components of financial-analysis education for clinical nurses.

    Science.gov (United States)

    Lim, Ji Young; Noh, Wonjung

    2015-09-01

    In this study, we identified key components of financial-analysis education for clinical nurses. We used a literature review, focus group discussions, and a content validity index survey to develop the key components of financial-analysis education. First, a wide range of references were reviewed, and 55 financial-analysis education components were gathered. Second, two focus group discussions were performed; the participants were 11 nurses who had worked for more than 3 years in a hospital, and nine components were agreed upon. Third, 12 professionals, including professors, nurse executives, nurse managers, and an accountant, participated in the content validity index survey. Finally, six key components of financial-analysis education were selected. These key components were as follows: understanding the need for financial analysis, introduction to financial analysis, reading and implementing balance sheets, reading and implementing income statements, understanding the concepts of financial ratios, and interpretation and practice of financial ratio analysis. The results of this study will be used to develop an education program to increase financial-management competency among clinical nurses. © 2015 Wiley Publishing Asia Pty Ltd.

  3. APR1400 DVI break analysis using the MARS 3.1 multi-D component

    International Nuclear Information System (INIS)

    Hwang, Moon-Kyu; Lim, Hong-Sik; Lee, Seung-Wook; Bae, Sung-Won; Chung, Bub-Dong

    2006-01-01

    The current version of MARS 3.1 has a multi-D component intended to simulate asymmetric multidimensional fluid behavior in a reactor core, downcomer or steam generator in a more realistic manner. The feature is implemented in the 1-D module of the code. As opposed to cross-flow junction modeling, the multi-D component allows for lateral momentum transfer as well as shear stress. Thus, a full three-dimensional analysis capability is available, as in RELAP5-3D or CATHARE. In this study the multi-D component is applied to a hypothetical DVI (Direct Vessel Injection) break accident in the APR1400 plant, and the results are analyzed.

  4. Fatigue Analysis of Tubesheet/Shell Juncture Applying the Mitigation Factor for Over-conservatism

    International Nuclear Information System (INIS)

    Kang, Deog Ji; Kim, Kyu Hyoung; Lee, Jae Gon

    2009-01-01

    If the environmental fatigue requirements are applied to the primary components of a nuclear power plant, to which the present ASME Code fatigue curves are applied, some locations with a high cumulative usage factor (CUF) are anticipated not to meet the code criteria. The application of environmental fatigue damage is still particularly controversial for plants with 60-year design lives. It is therefore necessary to develop a detailed fatigue analysis procedure that identifies the conservatisms in the procedure and lowers the cumulative usage factor. Several factors are being considered to mitigate the conservatism, such as three-dimensional finite element modeling. In the present analysis, actual pressure transient data were applied as one of the mitigation factors, instead of conservative maximum and minimum pressure data. Unlike in the general method, individual transient events were considered instead of grouped transient events. The tubesheet/shell juncture in the steam generator assembly is one of the weak locations and was therefore selected as the target for evaluating the mitigation factors in the present analysis.
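
    For reference, a cumulative usage factor is assembled from individual transient events by Miner's rule, CUF = Σ n_i/N_i. A minimal illustration with invented cycle counts, not taken from the analysis:

    ```python
    # (event name, design cycles n_i, allowable cycles N_i from the fatigue curve)
    events = [
        ("heatup/cooldown", 200, 6000),
        ("reactor trip", 400, 25000),
        ("loss of load", 80, 12000),
    ]
    cuf = sum(n / N for _, n, N in events)
    print(f"CUF = {cuf:.4f} (acceptable if <= 1.0)")
    ```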

  5. Euler principal component analysis

    NARCIS (Netherlands)

    Liwicki, Stephan; Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    Principal Component Analysis (PCA) is perhaps the most prominent learning tool for dimensionality reduction in pattern recognition and computer vision. However, the ℓ2-norm employed by standard PCA is not robust to outliers. In this paper, we propose a kernel PCA method for fast and robust PCA,

  6. Applied Behavior Analysis

    Science.gov (United States)

    Szapacs, Cindy

    2006-01-01

    Teaching strategies that work for typically developing children often do not work for those diagnosed with an autism spectrum disorder. However, teaching strategies that work for children with autism do work for typically developing children. In this article, the author explains how the principles and concepts of Applied Behavior Analysis can be…

  7. Identifying the Component Structure of Satisfaction Scales by Nonlinear Principal Components Analysis

    NARCIS (Netherlands)

    Manisera, M.; Kooij, A.J. van der; Dusseldorp, E.

    2010-01-01

    The component structure of 14 Likert-type items measuring different aspects of job satisfaction was investigated using nonlinear Principal Components Analysis (NLPCA). NLPCA allows for analyzing these items at an ordinal or interval level. The participants were 2066 workers from five types of social

  8. Constrained principal component analysis and related techniques

    CERN Document Server

    Takane, Yoshio

    2013-01-01

    In multivariate data analysis, regression techniques predict one set of variables from another while principal component analysis (PCA) finds a subspace of minimal dimensionality that captures the largest variability in the data. How can regression analysis and PCA be combined in a beneficial way? Why and when is it a good idea to combine them? What kind of benefits are we getting from them? Addressing these questions, Constrained Principal Component Analysis and Related Techniques shows how constrained PCA (CPCA) offers a unified framework for these approaches.The book begins with four concre

  9. Comparative Analysis of the Volatile Components of Agrimonia eupatoria from Leaves and Roots by Gas Chromatography-Mass Spectrometry and Multivariate Curve Resolution

    Directory of Open Access Journals (Sweden)

    Xiao-Liang Feng

    2013-01-01

    Full Text Available Gas chromatography-mass spectrometry and multivariate curve resolution were applied to the differential analysis of the volatile components in Agrimonia eupatoria specimens from different plant parts. After extraction by water distillation, the volatile components in Agrimonia eupatoria leaves and roots were detected by GC-MS. The qualitative and quantitative analysis of the volatile components in the main root of Agrimonia eupatoria was then completed with the help of subwindow factor analysis, which resolves the two-dimensional original data into mass spectra and chromatograms. Of the 87 separated constituents in the total ion chromatogram of the volatile components, 68 were identified and quantified, accounting for about 87.03% of the total content. The common peaks in the leaf were then extracted with the orthogonal projection resolution method. Among the components determined, 52 coexisted in the studied samples, although the relative content of each component differed to some extent. The results showed a fair consistency in their GC-MS fingerprints. This was the first application of the orthogonal projection method to the comparison of different plant parts of Agrimonia eupatoria, and it reduced the burden of qualitative analysis as well as its subjectivity. The results proved the combined approach to be powerful for the analysis of complex Agrimonia eupatoria samples. The developed method can be used for further study and quality control of Agrimonia eupatoria.

  10. Comparative Analysis of the Volatile Components of Agrimonia eupatoria from Leaves and Roots by Gas Chromatography-Mass Spectrometry and Multivariate Curve Resolution.

    Science.gov (United States)

    Feng, Xiao-Liang; He, Yun-Biao; Liang, Yi-Zeng; Wang, Yu-Lin; Huang, Lan-Fang; Xie, Jian-Wei

    2013-01-01

    Gas chromatography-mass spectrometry and multivariate curve resolution were applied to the differential analysis of the volatile components in Agrimonia eupatoria specimens from different plant parts. After extraction by water distillation, the volatile components in Agrimonia eupatoria leaves and roots were detected by GC-MS. The qualitative and quantitative analysis of the volatile components in the main root of Agrimonia eupatoria was then completed with the help of subwindow factor analysis, which resolves the two-dimensional original data into mass spectra and chromatograms. Of the 87 separated constituents in the total ion chromatogram of the volatile components, 68 were identified and quantified, accounting for about 87.03% of the total content. The common peaks in the leaf were then extracted with the orthogonal projection resolution method. Among the components determined, 52 coexisted in the studied samples, although the relative content of each component differed to some extent. The results showed a fair consistency in their GC-MS fingerprints. This was the first application of the orthogonal projection method to the comparison of different plant parts of Agrimonia eupatoria, and it reduced the burden of qualitative analysis as well as its subjectivity. The results proved the combined approach to be powerful for the analysis of complex Agrimonia eupatoria samples. The developed method can be used for further study and quality control of Agrimonia eupatoria.

  11. Response spectrum analysis of coupled structural response to a three component seismic disturbance

    International Nuclear Information System (INIS)

    Boulet, J.A.M.; Carley, T.G.

    1977-01-01

    The work discussed herein is a comparison and evaluation of several response spectrum analysis (RSA) techniques as applied to the same structural model with seismic excitation having three spatial components. Lagrange's equations of motion for the system were written in matrix form and uncoupled with the modal matrix. Numerical integration (fourth-order Runge-Kutta) of the resulting modal equations produced time histories of system displacements in response to simultaneous application of three orthogonal components of ground motion, as well as displacement response spectra for each modal coordinate in response to each of the three ground motion components. Five different RSA techniques were used to combine the spectral displacements and the modal matrix into approximations of the maximum system displacements. These approximations were then compared with the maximum system displacements taken from the time histories. The RSA techniques used are the method of absolute sums, the square root of the sum of the squares, the double sum approach, the method of closely spaced modes, and Lin's method. The vectors of maximum system displacements as computed by the time history analysis and the five response spectrum analysis methods are presented. (Auth.)
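
    Two of the simpler combination rules compared above, the absolute sum and the square root of the sum of the squares (SRSS), are easy to state in code; the modal maxima below are invented for illustration:

    ```python
    import numpy as np

    # Maximum modal displacements for one degree of freedom, one value per mode.
    u_max = np.array([0.012, 0.007, 0.003, 0.001])

    abs_sum = np.sum(np.abs(u_max))        # conservative upper bound
    srss = np.sqrt(np.sum(u_max ** 2))     # usual estimate for well-separated modes
    print(f"absolute sum: {abs_sum:.4f}, SRSS: {srss:.4f}")
    ```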

  12. Physicochemical properties of different corn varieties by principal components analysis and cluster analysis

    International Nuclear Information System (INIS)

    Zeng, J.; Li, G.; Sun, J.

    2013-01-01

    Principal components analysis and cluster analysis were used to investigate the properties of different corn varieties. The chemical compositions and physicochemical properties of corn flour processed by dry milling were determined. The results showed that the chemical compositions and physicochemical properties differed significantly among the twenty-six corn varieties. Corn flour quality was associated with five principal components from the principal component analysis; the contribution of the starch pasting properties was the most important, accounting for 48.90%. The twenty-six corn varieties could be classified into four groups by cluster analysis. The consistency between the principal components analysis and the cluster analysis indicated that multivariate analyses are feasible in the study of corn variety properties. (author)

  13. Assessing prescription drug abuse using functional principal component analysis (FPCA) of wastewater data.

    Science.gov (United States)

    Salvatore, Stefania; Røislien, Jo; Baz-Lomba, Jose A; Bramness, Jørgen G

    2017-03-01

    Wastewater-based epidemiology is an alternative method for estimating the collective drug use in a community. We applied functional data analysis, a statistical framework developed for analysing curve data, to investigate weekly temporal patterns in wastewater measurements of three prescription drugs with known abuse potential: methadone, oxazepam and methylphenidate, comparing them to positive and negative control drugs. Sewage samples were collected in February 2014 from a wastewater treatment plant in Oslo, Norway. The weekly pattern of each drug was extracted by fitting generalized additive models, using trigonometric functions to model the cyclic behaviour. From the weekly component, the main temporal features were then extracted using functional principal component analysis. Results are presented through the functional principal components (FPCs) and corresponding FPC scores. Clinically, the most important weekly feature of the wastewater-based epidemiology data was the second FPC, representing the difference between the average midweek level and a weekend peak, indicating possible recreational use of a drug during the weekend. Estimated scores on this FPC indicated recreational use of methylphenidate, with a high weekend peak, but not of methadone or oxazepam. The functional principal component analysis uncovered clinically important temporal features of the weekly patterns of prescription drug use detected from wastewater analysis. This may be used as a post-marketing surveillance method to monitor prescription drugs with abuse potential. Copyright © 2016 John Wiley & Sons, Ltd.
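
    A rough numpy sketch of the FPCA step, assuming each weekly curve has already been smoothed onto a small trigonometric basis (mirroring the cyclic GAM fit); synthetic weekly curves with a variable weekend peak stand in for the wastewater measurements:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 7 * 24, endpoint=False)       # one week, hourly
    basis = np.column_stack([np.ones_like(t),
                             np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                             np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])

    # Simulated weekly curves with a weekend peak of varying size.
    weekend = np.exp(-((t - 6 / 7) ** 2) / 0.002)
    curves = np.array([10 + rng.normal() + rng.gamma(2.0) * weekend
                       + rng.normal(scale=0.3, size=t.size) for _ in range(52)])

    coefs = curves @ np.linalg.pinv(basis).T            # least-squares basis fit
    coefs_c = coefs - coefs.mean(axis=0)
    U, s, Vt = np.linalg.svd(coefs_c, full_matrices=False)
    fpcs = basis @ Vt.T                                 # FPCs as functions of time
    scores = U * s                                      # per-week FPC scores
    print("share of variance:", np.round(s**2 / np.sum(s**2), 3)[:3])
    ```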

  14. Mapping brain activity in gradient-echo functional MRI using principal component analysis

    Science.gov (United States)

    Khosla, Deepak; Singh, Manbir; Don, Manuel

    1997-05-01

    The detection of sites of brain activation in functional MRI has been a topic of immense research interest, and many techniques have been proposed to this end. Recently, principal component analysis (PCA) has been applied to extract the activated regions and their time course of activation. This method is based on the assumption that the activation is orthogonal to other signal variations such as brain motion, physiological oscillations and other uncorrelated noise. A distinct advantage of this method is that it does not require any knowledge of the time course of the true stimulus paradigm. The technique is well suited to EPI image sequences, where the sampling rate is high enough to capture the effects of physiological oscillations. In this work, we propose and apply two methods based on PCA to conventional gradient-echo images and investigate their usefulness as tools to extract reliable information on brain activation. The first method is a conventional technique in which a single image sequence with alternating on and off stages is subjected to a principal component analysis. The second method is a PCA-based approach called the common spatial factor analysis (CSF) technique. As the name suggests, this method relies on common spatial factors between the above fMRI image sequence and a background fMRI. We have applied these methods to identify active brain areas during visual stimulation and motor tasks. The results from these methods are compared to those obtained using the standard cross-correlation technique. We found good agreement in the areas identified as active across all three techniques. The results suggest that the PCA and CSF methods have good potential for detecting true stimulus-correlated changes in the presence of other interfering signals.

  15. Application of independent component analysis to H-1 MR spectroscopic imaging exams of brain tumours

    NARCIS (Netherlands)

    Szabo de Edelenyi, F.; Simonetti, A.W.; Postma, G.; Huo, R.; Buydens, L.M.C.

    2005-01-01

    The low spatial resolution of clinical H-1 MRSI leads to partial volume effects. To overcome this problem, we applied independent component analysis (ICA) to a set of H-1 MRSI exams of brain tumours. With this method, tissue types that yield statistically independent spectra can be separated. Up to

  16. Applied analysis and differential equations

    CERN Document Server

    Cârj, Ovidiu

    2007-01-01

    This volume contains refereed research articles written by experts in the field of applied analysis, differential equations and related topics. Well-known leading mathematicians worldwide and prominent young scientists cover a diverse range of topics, including the most exciting recent developments. A broad range of topics of recent interest are treated: existence, uniqueness, viability, asymptotic stability, viscosity solutions, controllability and numerical analysis for ODE, PDE and stochastic equations. The scope of the book is wide, ranging from pure mathematics to various applied fields such as classical mechanics, biomedicine, and population dynamics.

  17. Social network analysis applied to team sports analysis

    CERN Document Server

    Clemente, Filipe Manuel; Mendes, Rui Sousa

    2016-01-01

    Explaining how graph theory and social network analysis can be applied to team sports analysis, this book presents useful approaches, models and methods that can be used to characterise the overall properties of team networks and identify the prominence of each team player. Exploring the different network metrics that can be utilised in sports analysis, their possible applications and their variance from situation to situation, the chapters present an array of illustrative case studies. Identifying the general concepts of social network analysis and network centrality metrics, readers are shown how to generate a methodological protocol for data collection. As such, the book provides a valuable resource for students of the sport sciences, sports engineering, applied computation and the social sciences.
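
    As a small taste of the kind of metric the book covers, degree centrality on a passing network identifies the most prominent player. A sketch using the networkx library, with an invented list of passes:

    ```python
    import networkx as nx

    passes = [("GK", "DF1"), ("DF1", "MF1"), ("MF1", "FW1"),
              ("MF1", "FW2"), ("DF1", "MF2"), ("MF2", "FW1")]
    G = nx.DiGraph()
    G.add_edges_from(passes)

    centrality = nx.degree_centrality(G)
    prominent = max(centrality, key=centrality.get)
    print("most prominent player:", prominent, centrality)
    ```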

  18. Components selection for ageing management

    International Nuclear Information System (INIS)

    Mingiuc, C.; Vidican, D.

    2002-01-01

    Full text: The paper presents a synthesis of the methods and activities carried out to select the critical components for assuring plant safety and availability (as an electricity supplier). The main selection criteria and the screening process are presented. Different categories of maintenance (condition-oriented, scheduled or corrective) are then applied to the resulting categories of components, as a function of their importance and of the financial effort necessary to fulfil the task. 1. Systems and components screening for plant safety assurance. For the selection of systems from the safety point of view, it was first necessary to define the systems that are dangerous in case of failure (mainly by rupture/release of radioactivity) and the safety systems that have to mitigate the effects. This is done based on the accident analyses (from the Safety Report). The four basic safety principles were also taken into account: reactor shutdown; residual heat removal; confinement of radioactive products; NPP status monitoring in normal and accident conditions. The following step is to establish the safety support systems that must act to assure the operation of the main safety systems. This can be done based on engineering judgement or on a PSA Level 1 analysis. Finally, chains of the support systems that have to work are constructed, down to the primary systems. For the selection of critical components, a Failure Mode and Effect Analysis (FMEA) was performed, considering the effects of component failures on the system safety function. 2. Systems and components screening for plant availability assurance. The work was carried out in two steps: systems screening and components screening. The systems screening included: a general analysis of the plant systems list and definition of those which clearly have to run continuously to assure nominal power; construction of a complex diagram defining the interdependences between the systems (e.g. PHT and auxiliaries, moderator and auxiliaries, plant electrical diagram); Fill of special

  19. Incremental Tensor Principal Component Analysis for Handwritten Digit Recognition

    Directory of Open Access Journals (Sweden)

    Chang Liu

    2014-01-01

    Full Text Available To overcome the shortcomings of traditional dimensionality reduction algorithms, incremental tensor principal component analysis (ITPCA), based on an updated-SVD technique, is proposed in this paper. The paper proves the relationship between PCA, 2DPCA, MPCA, and the graph embedding framework theoretically, and derives in detail the incremental learning procedures for adding a single sample and for adding multiple samples. Experiments on handwritten digit recognition have demonstrated that ITPCA achieves better recognition performance than vector-based principal component analysis (PCA), incremental principal component analysis (IPCA), and multilinear principal component analysis (MPCA) algorithms. At the same time, ITPCA also has lower time and space complexity.
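
    The vector-based incremental PCA baseline that ITPCA is compared against is available directly in scikit-learn; a compact sketch on the built-in digits data set (the tensor-based ITPCA itself is not part of scikit-learn):

    ```python
    from sklearn.datasets import load_digits
    from sklearn.decomposition import IncrementalPCA
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # The model is updated batch by batch instead of from the full data at once.
    ipca = IncrementalPCA(n_components=30, batch_size=200).fit(X_tr)
    clf = KNeighborsClassifier(n_neighbors=1).fit(ipca.transform(X_tr), y_tr)
    print("recognition accuracy:", clf.score(ipca.transform(X_te), y_te))
    ```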

  20. Progress Towards Improved Analysis of TES X-ray Data Using Principal Component Analysis

    Science.gov (United States)

    Busch, S. E.; Adams, J. S.; Bandler, S. R.; Chervenak, J. A.; Eckart, M. E.; Finkbeiner, F. M.; Fixsen, D. J.; Kelley, R. L.; Kilbourne, C. A.; Lee, S.-J.; et al.

    2015-01-01

    The traditional method of applying a digital optimal filter to measure X-ray pulses from transition-edge sensor (TES) devices does not achieve the best energy resolution when the signals have a highly non-linear response to energy, or when the noise is non-stationary during the pulse. We present an implementation of a method to analyze X-ray data from TESs which is based upon principal component analysis (PCA). Our method separates the X-ray signal pulse into orthogonal components that have the largest variance. We typically recover pulse height, arrival time, differences in pulse shape, and the variation of pulse height with detector temperature. These components can then be combined to form a representation of pulse energy. An added value of this method is that by reporting information on more descriptive parameters (as opposed to a single number representing energy), we generate a much more complete picture of the pulse received. Here we report on progress in developing this technique for future implementation on X-ray telescopes. We used a 55Fe source to characterize Mo/Au TESs. On the same dataset, the PCA method recovers a spectral resolution that is better by a factor of two than that achievable with digital optimal filters.

  1. How Many Separable Sources? Model Selection In Independent Components Analysis

    Science.gov (United States)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  2. F4E studies for the electromagnetic analysis of ITER components

    Energy Technology Data Exchange (ETDEWEB)

    Testoni, P., E-mail: pietro.testoni@f4e.europa.eu [Fusion for Energy, Torres Diagonal Litoral B3, c/ Josep Plá n.2, Barcelona (Spain); Cau, F.; Portone, A. [Fusion for Energy, Torres Diagonal Litoral B3, c/ Josep Plá n.2, Barcelona (Spain); Albanese, R. [Associazione EURATOM/ENEA/CREATE, DIETI, Università Federico II di Napoli, Napoli (Italy); Juirao, J. [Numerical Analysis TEChnologies S.L. (NATEC), c/ Marqués de San Esteban, 52 Entlo D Gijón (Spain)

    2014-10-15

    Highlights: • Several ITER components have been analyzed from the electromagnetic point of view. • Categorization of DINA load cases is described. • VDEs, MDs and MFD have been studied. • Integral values of force and moment components versus time have been computed for all the ITER components under study. - Abstract: Fusion for Energy (F4E) is involved in a relevant number of activities in the area of electromagnetic analysis in support of ITER general design and EU in-kind procurement. In particular, several ITER components (vacuum vessel, blanket shield modules and first wall panels, test blanket modules, ICRH antenna) are being analyzed from the electromagnetic point of view. In this paper we give an updated description of our main activities, highlighting the main assumptions, objectives, results and conclusions. The plasma instabilities we consider, typically disruptions and VDEs, can be both toroidally symmetric and asymmetric. This implies that, depending on the specific component and loading conditions, the FE models we use span from a 10° sector up to the full 360° of the ITER machine. The techniques for simulating the electromagnetic phenomena involved in a disruption, and the postprocessing of the results to obtain the loads acting on the structures, are described. Finally, we summarize the typical loads applied to different components and give a critical view of the results.

  3. Structured Performance Analysis for Component Based Systems

    OpenAIRE

    Salmi , N.; Moreaux , Patrice; Ioualalen , M.

    2012-01-01

    The Component Based System (CBS) paradigm is now largely used to design software systems. In addition, performance and behavioural analysis remains a required step for the design and construction of efficient systems. This is especially the case for CBS, which involve interconnected components running concurrent processes. This paper proposes a compositional method for modeling and structured performance analysis of CBS. Modeling is based on Stochastic Well-formed...

  4. Principal component analysis of the Norwegian version of the quality of life in late-stage dementia scale.

    Science.gov (United States)

    Mjørud, Marit; Kirkevold, Marit; Røsvik, Janne; Engedal, Knut

    2014-01-01

    To investigate the factor structure of the Quality of Life in Late-Stage Dementia (QUALID) scale when used among people with dementia (pwd) in nursing homes, and to find out how the symptom load varies across the severity levels of dementia. We included 661 pwd [mean age ± SD, 85.3 ± 8.6 years; 71.4% women]. The QUALID and the Clinical Dementia Rating (CDR) scale were applied. A principal component analysis (PCA) with varimax rotation and Kaiser normalization was applied to test the factor structure. Nonparametric analyses were applied to examine differences in symptom load across the three CDR groups. The mean QUALID score was 21.5 (±7.1), and the CDR scores of the three groups were 1 in 22.5%, 2 in 33.6% and 3 in 43.9%. The statistical measures employed gave the following results: Cronbach's α of the QUALID, 0.74; Bartlett's test of sphericity, significant; Kaiser-Meyer-Olkin measure, 0.77. The PCA resulted in three components accounting for 53% of the variance. The first component was 'tension' ('facial expression of discomfort', 'appears physically uncomfortable', 'verbalization suggests discomfort', 'being irritable and aggressive', 'appears calm', Cronbach's α = 0.69), the second was 'well-being' ('smiles', 'enjoys eating', 'enjoys touching/being touched', 'enjoys social interaction', Cronbach's α = 0.62) and the third was 'sadness' ('appears sad', 'cries', 'facial expression of discomfort', Cronbach's α = 0.65). The mean scores on the components 'tension' and 'well-being' increased significantly with increasing severity of dementia. Three components of quality of life (qol) were identified. Qol decreased with increasing severity of dementia. © 2013 S. Karger AG, Basel.
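
    The analysis step, PCA of the standardized item scores followed by varimax rotation of the loadings, can be sketched as follows. Random scores stand in for the 11 QUALID items, and the varimax routine below is the standard iterative SVD algorithm, not the authors' software:

    ```python
    import numpy as np

    def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
        """Standard iterative SVD algorithm for varimax rotation."""
        p, k = loadings.shape
        rotation, var = np.eye(k), 0.0
        for _ in range(max_iter):
            rotated = loadings @ rotation
            u, s, vt = np.linalg.svd(loadings.T @ (
                rotated ** 3
                - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0))))
            rotation = u @ vt
            if s.sum() < var * (1 + tol):
                break
            var = s.sum()
        return loadings @ rotation

    rng = np.random.default_rng(0)
    items = rng.integers(1, 6, size=(661, 11)).astype(float)   # stand-in item scores
    z = (items - items.mean(axis=0)) / items.std(axis=0)
    eigval, eigvec = np.linalg.eigh(np.corrcoef(z, rowvar=False))
    order = np.argsort(eigval)[::-1][:3]                       # keep three components
    loadings = eigvec[:, order] * np.sqrt(eigval[order])
    print(np.round(varimax(loadings), 2))                      # rotated loadings
    ```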

  5. Analysis the Appropriate using Standard Costing Applying in Land Cost Component of Real Estate Development Activities: A Case Study of PT Subur Agung

    Directory of Open Access Journals (Sweden)

    Elfrida Yanti

    2011-05-01

    Full Text Available Standard costing is generally used in manufacturing businesses, where direct material, labor, and factory overhead are clearly allocated. In the real estate business considered here, PT Subur Agung instead bases its standard costs on three cost categories: raw land, land improvement and interest expense. The developer uses these costs to predict the project cost and to estimate the pre-selling price; testing the variance percentage between standard cost and actual cost shows that, in accordance with the cost estimation classification matrix, the variance range is within the expected accuracy rate. Additional similar projects at PT Subur Agung follow the same scope. All this evidence supports the appropriateness of using standard costing in the land cost component of real estate development activities; this article analyzes how it applies in this particular project using descriptive and exploratory methods. The analysis starts from the conceptual situation of PT Subur Agung, and the data are presented in tables and calculations with detailed explanations.

  6. Principal Component Analysis-Based Pattern Analysis of Dose-Volume Histograms and Influence on Rectal Toxicity

    International Nuclear Information System (INIS)

    Soehn, Matthias; Alber, Markus; Yan Di

    2007-01-01

    Purpose: The variability of dose-volume histogram (DVH) shapes in a patient population can be quantified using principal component analysis (PCA). We applied this to rectal DVHs of prostate cancer patients and investigated the correlation of the PCA parameters with late bleeding. Methods and Materials: PCA was applied to the rectal wall DVHs of 262 patients, who had been treated with a four-field box, conformal adaptive radiotherapy technique. The correlated changes in the DVH pattern were revealed as 'eigenmodes,' which were ordered by their importance to represent data set variability. Each DVH is uniquely characterized by its principal components (PCs). The correlation of the first three PCs and chronic rectal bleeding of Grade 2 or greater was investigated with uni- and multivariate logistic regression analyses. Results: Rectal wall DVHs in four-field conformal RT can primarily be represented by the first two or three PCs, which describe ∼94% or 96% of the DVH shape variability, respectively. The first eigenmode models the total irradiated rectal volume; thus, PC1 correlates to the mean dose. Mode 2 describes the interpatient differences of the relative rectal volume in the two- or four-field overlap region. Mode 3 reveals correlations of volumes with intermediate doses (∼40-45 Gy) and volumes with doses >70 Gy; thus, PC3 is associated with the maximal dose. According to univariate logistic regression analysis, only PC2 correlated significantly with toxicity. However, multivariate logistic regression analysis with the first two or three PCs revealed an increased probability of bleeding for DVHs with more than one large PC. Conclusions: PCA can reveal the correlation structure of DVHs for a patient population as imposed by the treatment technique and provide information about its relationship to toxicity. It proves useful for augmenting normal tissue complication probability modeling approaches

  7. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.

  8. Applied survival analysis using R

    CERN Document Server

    Moore, Dirk F

    2016-01-01

    Applied Survival Analysis Using R covers the main principles of survival analysis, gives examples of how it is applied, and teaches how to put those principles to use to analyze data using R as a vehicle. Survival data, where the primary outcome is time to a specific event, arise in many areas of biomedical research, including clinical trials, epidemiological studies, and studies of animals. Many survival methods are extensions of techniques used in linear regression and categorical data, while other aspects of this field are unique to survival data. This text employs numerous actual examples to illustrate survival curve estimation, comparison of survivals of different groups, proper accounting for censoring and truncation, model variable selection, and residual analysis. Because explaining survival analysis requires more advanced mathematics than many other statistical topics, this book is organized with basic concepts and most frequently used procedures covered in earlier chapters, with more advanced topics...

  9. Principal Component Analysis Based Two-Dimensional (PCA-2D) Correlation Spectroscopy: PCA Denoising for 2D Correlation Spectroscopy

    International Nuclear Information System (INIS)

    Jung, Young Mee

    2003-01-01

    Principal component analysis based two-dimensional (PCA-2D) correlation analysis is applied to FTIR spectra of a polystyrene/methyl ethyl ketone/toluene solution mixture during solvent evaporation. A substantial amount of artificial noise was added to the experimental data to demonstrate the practical noise-suppressing benefit of the PCA-2D technique. 2D correlation analysis of the data matrix reconstructed from the PCA loading vectors and scores successfully extracted only the most important features of synchronicity and asynchronicity, without interference from noise or insignificant minor components. 2D correlation spectra constructed with only one principal component yield a strictly synchronous response with no discernible asynchronous features, while those involving at least two principal components generate meaningful asynchronous 2D correlation spectra. Deliberate manipulation of the rank of the reconstructed data matrix, by choosing the appropriate number and type of PCs, yields potentially more refined 2D correlation spectra.
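
    A numpy sketch of the idea: reconstruct the mean-centred spectra from a few principal components, then form the synchronous spectrum and the asynchronous spectrum via the Hilbert-Noda matrix. The synthetic band shapes below are assumptions standing in for the FTIR data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 25, 300                                   # spectra x wavenumber points
    x = np.linspace(0, 1, n)
    evolve = np.linspace(0, 1, m)[:, None]           # evaporation progress
    spectra = (evolve * np.exp(-((x - 0.3) ** 2) / 0.002)
               + (1 - evolve) * np.exp(-((x - 0.7) ** 2) / 0.002)
               + rng.normal(scale=0.05, size=(m, n)))

    # PCA denoising: keep the first k components of the mean-centred data.
    k = 2
    Y = spectra - spectra.mean(axis=0)
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    Y_rec = (U[:, :k] * s[:k]) @ Vt[:k, :]

    # Synchronous and asynchronous 2D correlation spectra of the reconstruction.
    sync = Y_rec.T @ Y_rec / (m - 1)
    diff = np.subtract.outer(np.arange(m), np.arange(m))       # i - j
    with np.errstate(divide="ignore"):
        noda = np.where(diff == 0, 0.0, 1.0 / (np.pi * (-diff)))
    asyn = Y_rec.T @ noda @ Y_rec / (m - 1)
    print(sync.shape, asyn.shape)
    ```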

  10. Authenticity analysis of citrus essential oils by HPLC-UV-MS on oxygenated heterocyclic components

    Directory of Open Access Journals (Sweden)

    Hao Fan

    2015-03-01

    Full Text Available Citrus essential oils are widely applied in the food industry as the backbone of citrus flavors. Unfortunately, owing to their relatively simple chemical composition and the tremendous price differences among citrus species, adulteration has plagued the industry since its inception. Skilled blenders are capable of making blends that are almost indistinguishable from authentic oils under conventional gas chromatography analysis. A reversed-phase high performance liquid chromatography (HPLC) method was developed for the compositional study of nonvolatile constituents in essential oils from the major citrus species. The nonvolatile oxygenated heterocyclic components identified in citrus oils proved to be more effective markers for adulteration detection than the volatile components. The authors hope that this analysis procedure can serve as a routine quality-control test for authenticity evaluation of citrus essential oils.

  11. Understanding deformation mechanisms during powder compaction using principal component analysis of compression data.

    Science.gov (United States)

    Roopwani, Rahul; Buckner, Ira S

    2011-10-14

    Principal component analysis (PCA) was applied to pharmaceutical powder compaction. A solid fraction parameter (SF(c/d)) and a mechanical work parameter (W(c/d)) representing irreversible compression behavior were determined as functions of applied load. Multivariate analysis of the compression data was carried out using PCA. The first principal component (PC1) showed loadings for the solid fraction and work values that agreed with changes in the relative significance of plastic deformation to consolidation at different pressures. The PC1 scores showed the same rank order as the relative plasticity ranking derived from the literature for common pharmaceutical materials. The utility of PC1 in understanding deformation was extended to binary mixtures using a subset of the original materials. Combinations of brittle and plastic materials were characterized using the PCA method. The relationships between PC1 scores and the weight fractions of the mixtures were typically linear showing ideal mixing in their deformation behaviors. The mixture consisting of two plastic materials was the only combination to show a consistent positive deviation from ideality. The application of PCA to solid fraction and mechanical work data appears to be an effective means of predicting deformation behavior during compaction of simple powder mixtures. Copyright © 2011 Elsevier B.V. All rights reserved.

  12. Effects of Applied Nitrogen Amounts on the Functional Components of Mulberry (Morus alba L.) Leaves.

    Science.gov (United States)

    Sugiyama, Mari; Takahashi, Makoto; Katsube, Takuya; Koyama, Akio; Itamura, Hiroyuki

    2016-09-21

    This study investigated the effects of applied nitrogen amounts on specific functional components in mulberry (Morus alba L.) leaves. The relationships between mineral elements and the functional components in mulberry leaves were examined using mulberry trees cultivated under different soil conditions in four cultured fields. Then, the relationships between the nitrogen levels and the leaf functional components were studied by culturing mulberry in plastic pots and experimental fields. In the common cultured fields, total nitrogen was negatively correlated with the chlorogenic acid content (R² = -0.48) and positively correlated with the 1-deoxynojirimycin content (R² = 0.60). Additionally, differences in nitrogen fertilizer application levels affected each functional component in mulberry leaves. For instance, with increased nitrogen levels, the chlorogenic acid and flavonol contents significantly decreased, but the 1-deoxynojirimycin content significantly increased. Selection of the optimal nitrogen application level is necessary to obtain the desired functional components from mulberry leaves.

  13. Applied regression analysis a research tool

    CERN Document Server

    Pantula, Sastry; Dickey, David

    1998-01-01

    Least squares estimation, when used appropriately, is a powerful research tool. A deeper understanding of the regression concepts is essential for achieving optimal benefits from a least squares analysis. This book builds on the fundamentals of statistical methods and provides appropriate concepts that will allow a scientist to use least squares as an effective research tool. Applied Regression Analysis is aimed at the scientist who wishes to gain a working knowledge of regression analysis. The basic purpose of this book is to develop an understanding of least squares and related statistical methods without becoming excessively mathematical. It is the outgrowth of more than 30 years of consulting experience with scientists and many years of teaching an applied regression course to graduate students. Applied Regression Analysis serves as an excellent text for a service course on regression for non-statisticians and as a reference for researchers. It also provides a bridge between a two-semester introduction to...

  14. A comparative and combined study of EMIS and GPR detectors by the use of Independent Component Analysis

    DEFF Research Database (Denmark)

    Morgenstjerne, Axel; Karlsen, Brian; Larsen, Jan

    2005-01-01

    Independent Component Analysis (ICA) is applied to classify unexploded ordnance (UXO) on laboratory UXO test-field data, acquired by stand-off detection. The data are acquired by an Electromagnetic Induction Spectroscopy (EMIS) metal detector and a ground penetrating radar (GPR) detector. The metal...

  15. A Unified Approach to Functional Principal Component Analysis and Functional Multiple-Set Canonical Correlation.

    Science.gov (United States)

    Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S

    2017-06-01

    Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.

  16. CNN Based Retinal Image Upscaling Using Zero Component Analysis

    Science.gov (United States)

    Nasonov, A.; Chesnakov, K.; Krylov, A.

    2017-05-01

    The aim of the paper is to obtain high-quality image upscaling for noisy images, which are typical in medical image processing. A new training scenario for a convolutional neural network based image upscaling method is proposed. Its main idea is a novel dataset preparation method for deep learning. The dataset contains pairs of noisy low-resolution images and corresponding noiseless high-resolution images. To achieve better results at edges and in textured areas, Zero Component Analysis is applied to these images. The upscaling results are compared with other state-of-the-art methods, such as DCCI, SI-3 and SRCNN, on noisy medical ophthalmological images. Objective evaluation of the results confirms the high quality of the proposed method. Visual analysis shows that fine details and structures such as blood vessels are preserved, the noise level is reduced, and no artifacts or non-existing details are added. These properties are essential for establishing a retinal diagnosis, so the proposed algorithm is recommended for use in real medical applications.
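
    Zero Component Analysis here denotes ZCA whitening, which decorrelates pixels while keeping the result close to the original image. A minimal numpy sketch on random stand-in patches:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    patches = rng.normal(size=(1000, 8 * 8))        # flattened 8x8 patches

    X = patches - patches.mean(axis=0)              # centre each pixel
    cov = X.T @ X / X.shape[0]
    eigval, eigvec = np.linalg.eigh(cov)
    eps = 1e-2                                      # regulariser for small eigenvalues
    W = eigvec @ np.diag(1.0 / np.sqrt(eigval + eps)) @ eigvec.T   # ZCA matrix
    X_zca = X @ W.T                                 # whitened patches
    print(np.round(np.cov(X_zca, rowvar=False)[:3, :3], 2))        # ~ identity
    ```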

  17. Characterization of reflectance variability in the industrial paint application of automotive metallic coatings by using principal component analysis

    Science.gov (United States)

    Medina, José M.; Díaz, José A.

    2013-05-01

    We have applied principal component analysis to examine trial-to-trial variability of reflectances of automotive coatings that contain effect pigments. Reflectance databases were measured from different color batch productions using a multi-angle spectrophotometer. A method to classify the principal components was used based on the eigenvalue spectra. It was found that the eigenvalue spectra follow distinct power laws and depend on the detection angle. The scaling exponent provided an estimation of the correlation between reflectances and it was higher near specular reflection, suggesting a contribution from the deposition of effect pigments. Our findings indicate that principal component analysis can be a useful tool to classify different sources of spectral variability in color engineering.

  18. Multilevel sparse functional principal component analysis.

    Science.gov (United States)

    Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S

    2014-01-29

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations the proposed method is able to discover dominating modes of variations and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.

  19. Conversation Analysis in Applied Linguistics

    DEFF Research Database (Denmark)

    Kasper, Gabriele; Wagner, Johannes

    2014-01-01

    For the last decade, conversation analysis (CA) has increasingly contributed to several established fields in applied linguistics. In this article, we will discuss its methodological contributions. The article distinguishes between basic and applied CA. Basic CA is a sociological endeavor concerned... The article then focuses on applied CA, the application of basic CA's principles, methods, and findings to the study of social domains and practices that are interactionally constituted. We consider three strands—foundational, social problem oriented, and institutional applied CA—before turning to recent developments in CA research on learning and development. In conclusion, we address some emerging themes in the relationship of CA and applied linguistics, including the role of multilingualism, standard social science methods as research objects, CA's potential for direct social intervention, and increasing efforts to complement CA...

  20. Principal component analysis of solar flares in the soft X-ray flux

    International Nuclear Information System (INIS)

    Teuber, D.L.; Reichmann, E.J.; Wilson, R.M.; National Aeronautics and Space Administration, Huntsville, AL

    1979-01-01

    Principal component analysis is a technique for extracting the salient features from a mass of data. It applies, in particular, to the analysis of nonstationary ensembles. Computational schemes for this task require the evaluation of eigenvalues of matrices. We have used EISPACK Matrix Eigen System Routines on an IBM 360-75 to analyze full-disk proportional-counter data from the X-ray event analyzer (X-REA) which was part of the Skylab ATM/S-056 experiment. Empirical orthogonal functions have been derived for events in the soft X-ray spectrum between 2.5 and 20 A during different time frames between June 1973 and January 1974. Results indicate that approximately 90% of the cumulative power of each analyzed flare is contained in the largest eigenvector. The first two largest eigenvectors are sufficient for an empirical curve-fit through the raw data and a characterization of solar flares in the soft X-ray flux. Power spectra of the two largest eigenvectors reveal a previously reported periodicity of approximately 5 min. Similar signatures were also obtained from flares that are synchronized on maximum pulse-height when subjected to a principal component analysis. (orig.)

  1. EXAFS and principal component analysis : a new shell game

    International Nuclear Information System (INIS)

    Wasserman, S.

    1998-01-01

    The use of principal component (factor) analysis in the analysis of EXAFS spectra is described. The components derived from EXAFS spectra share mathematical properties with the original spectra. As a result, the abstract components can be analyzed using standard EXAFS methodology to yield bond distances and other coordination parameters. The number of components that must be analyzed is usually less than the number of original spectra. The method is demonstrated using a series of spectra from aqueous solutions of uranyl ions.

  2. Sensitivity analysis approaches applied to systems biology models.

    Science.gov (United States)

    Zi, Z

    2011-11-01

    With the rising application of systems biology, sensitivity analysis methods have been widely applied to study biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust the biological responses are with respect to changes in biological parameters, and about which model inputs are the key factors affecting the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations in the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models, and the caveats in the interpretation of sensitivity analysis results.
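
    As a small illustration of the local approach, the sketch below computes finite-difference sensitivity coefficients of an ODE model output with respect to its parameters; the two-species model is invented for illustration:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def model(t, y, k1, k2):
        a, b = y
        return [-k1 * a, k1 * a - k2 * b]

    def output(params):
        k1, k2 = params
        sol = solve_ivp(model, (0.0, 10.0), [1.0, 0.0], args=(k1, k2), t_eval=[10.0])
        return sol.y[1, -1]                 # amount of B at t = 10

    p0 = np.array([0.5, 0.2])
    for i, name in enumerate(["k1", "k2"]):
        dp = np.zeros_like(p0)
        dp[i] = 1e-6 * p0[i]
        # normalized local sensitivity coefficient (d ln output / d ln parameter)
        s = (output(p0 + dp) - output(p0 - dp)) / (2 * dp[i]) * p0[i] / output(p0)
        print(f"S_{name} = {s:+.3f}")
    ```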

  3. Classification of calcium supplements through application of principal component analysis: a study by INAA and AAS

    International Nuclear Information System (INIS)

    Waheed, S.; Rahman, S.; Siddique, N.

    2013-01-01

    Different types of Ca supplements are available in the local markets of Pakistan. It is sometimes difficult to classify these with respect to their composition. In the present work the principal component analysis (PCA) technique was applied to classify different Ca supplements on the basis of their elemental data obtained using instrumental neutron activation analysis (INAA) and atomic absorption spectrometry (AAS) techniques. The graphical representation of PCA scores based on the intricate analytical data successfully separated the Ca supplements into four different groups, with similar samples clustered together. These included supplements with CaCO₃ as the Ca source along with vitamin C, supplements with CaCO₃ as the Ca source along with vitamin D, supplements with Ca from bone meal, and supplements with chelated calcium. (author)

  4. Gene Module Identification from Microarray Data Using Nonnegative Independent Component Analysis

    Directory of Open Access Journals (Sweden)

    Ting Gong

    2007-01-01

    Full Text Available Genes mostly interact with each other to form transcriptional modules that perform single or multiple functions. It is important to unravel such transcriptional modules and to determine how disturbances in them may lead to disease. Here, we propose a non-negative independent component analysis (nICA) approach for transcriptional module discovery. The nICA method utilizes the non-negativity constraint to enforce the independence of biological processes within the participating genes. In this way, nICA decomposes the observed gene expression into positive independent components, which fits better with the reality of the corresponding putative biological processes. In conjunction with nICA modeling, a visual statistical data analyzer (VISDA) is applied to group genes into modules in the latent variable space. We demonstrate the usefulness of the approach through the identification of composite modules from yeast data and the discovery of pathway modules in muscle regeneration.

  5. Crude oil price analysis and forecasting based on variational mode decomposition and independent component analysis

    Science.gov (United States)

    E, Jianwei; Bao, Yanling; Ye, Jimin

    2017-10-01

    As one of the most vital energy resources in the world, crude oil plays a significant role in the international economic market. The fluctuation of the crude oil price has attracted academic and commercial attention. Many methods exist for forecasting the trend of the crude oil price, but traditional models have failed to predict it accurately. On this basis, a hybrid method is proposed in this paper which combines variational mode decomposition (VMD), independent component analysis (ICA) and the autoregressive integrated moving average (ARIMA), called VMD-ICA-ARIMA. The purpose of this study is to analyze the factors influencing the crude oil price and to predict its future values. The major steps can be summarized as follows: Firstly, by applying the VMD model to the original signal (the crude oil price), the mode functions are decomposed adaptively. Secondly, independent components are separated by the ICA, and how the independent components affect the crude oil price is analyzed. Finally, the crude oil price is forecast with the ARIMA model; the forecast trend demonstrates that the crude oil price declines periodically. Compared with the benchmark ARIMA and EEMD-ICA-ARIMA, VMD-ICA-ARIMA forecasts the crude oil price more accurately.
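
    A minimal sketch of the final forecasting step, assuming a synthetic price series; the VMD and ICA preprocessing stages of the hybrid scheme are only indicated in comments, since they depend on implementations not specified here:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for a monthly crude oil price series (the paper's data
# and its VMD/ICA preprocessing are not reproduced here).
rng = np.random.default_rng(1)
price = 60 + np.cumsum(rng.normal(0, 1.5, 240))

# In the hybrid scheme, ARIMA would be fitted to the VMD/ICA-processed series;
# here it is fitted to the raw series purely to illustrate the final step.
model = ARIMA(price, order=(1, 1, 1))
result = model.fit()
forecast = result.forecast(steps=12)  # 12-step-ahead price forecast
print(forecast[:3])
```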

  6. 21 CFR 111.455 - What requirements apply to holding components, dietary supplements, packaging, and labels?

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 2 2010-04-01 2010-04-01 false What requirements apply to holding components, dietary supplements, packaging, and labels? 111.455 Section 111.455 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION CURRENT GOOD...

  7. Problems of stress analysis of fuelling machine head components

    International Nuclear Information System (INIS)

    Mathur, D.D.

    1975-01-01

    The problems of stress analysis of fuelling machine head components are discussed. To fulfil the functional requirements, the components are required to have certain shapes for which the stress problems cannot be matched to a catalogue of pre-determined solutions. The areas are identified where complex systems of loading due to hydrostatic pressure, weight, moments and temperature gradients, coupled with the intricate shapes of the components, make it difficult to arrive at satisfactory solutions. In particular, the analysis requirements of the magazine housing, end cover, gravloc clamps and centre support are highlighted. An experimental stress analysis programme together with a theoretical finite element analysis is perhaps the answer. (author)

  8. Trend and pattern analysis of failures of main feedwater system components in United States commercial nuclear power plants

    International Nuclear Information System (INIS)

    Gentillon, C.D.; Meachum, T.R.; Brady, B.M.

    1987-01-01

    The goal of the trend and pattern analysis of MFW (main feedwater) component failure data is to identify component attributes that are associated with relatively high incidences of failure. Manufacturer, valve type, and pump rotational speed are examples of component attributes under study; in addition, the pattern of failures among NPP units is studied. A series of statistical methods is applied to identify trends and patterns in failures and trends in occurrences in time with regard to these component attributes or variables. This process is followed by an engineering evaluation of the statistical results. In the remainder of this paper, the characteristics of the NPRDS that facilitate its use in reliability and risk studies are highlighted, the analysis methods are briefly described, and the lessons learned thus far for improving MFW system availability and reliability are summarized (orig./GL)

  9. An Introductory Application of Principal Components to Cricket Data

    Science.gov (United States)

    Manage, Ananda B. W.; Scariano, Stephen M.

    2013-01-01

    Principal Component Analysis is widely used in applied multivariate data analysis, and this article shows how to motivate student interest in this topic using cricket sports data. Here, principal component analysis is successfully used to rank the cricket batsmen and bowlers who played in the 2012 Indian Premier League (IPL) competition. In…
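
    A hedged sketch of such a ranking, using hypothetical batting statistics rather than the IPL data: players are scored by the first principal component of their standardized statistics, with the sign fixed so that higher scores mean better performance:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical batting statistics (runs, average, strike rate) for five players.
stats = np.array([
    [400, 44.0, 135.0],
    [350, 38.0, 128.0],
    [500, 50.0, 140.0],
    [200, 25.0, 110.0],
    [300, 33.0, 125.0],
])

# First principal component of the standardized stats as a composite score.
z = StandardScaler().fit_transform(stats)
score = PCA(n_components=1).fit_transform(z).ravel()

# PCA components have arbitrary sign; orient the score so it rises with runs.
if np.corrcoef(score, stats[:, 0])[0, 1] < 0:
    score = -score

ranking = np.argsort(-score)  # descending composite score
print("rank order (row indices):", ranking)
```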

  10. Principal components analysis of protein structure ensembles calculated using NMR data

    International Nuclear Information System (INIS)

    Howe, Peter W.A.

    2001-01-01

    One important problem when calculating structures of biomolecules from NMR data is distinguishing converged structures from outlier structures. This paper describes how Principal Components Analysis (PCA) has the potential to classify calculated structures automatically, according to correlated structural variation across the population. PCA has the additional advantage that it highlights regions of proteins which vary across the population. To apply PCA, protein structures have to be reduced in complexity, and this paper describes two different representations of protein structures which achieve this. The calculated structures of a 28 amino acid peptide are used to demonstrate the methods. The two different representations of protein structure are shown to give equivalent results, and correct results are obtained even though the ensemble of structures used as an example contains two different protein conformations. The PCA also correctly identifies the structural differences between the two conformations.

  11. Demonstration of statistical approaches to identify component's ageing by operational data analysis-A case study for the ageing PSA network

    International Nuclear Information System (INIS)

    Rodionov, Andrei; Atwood, Corwin L.; Kirchsteiger, Christian; Patrik, Milan

    2008-01-01

    The paper presents some results of a case study on 'Demonstration of statistical approaches to identify the component's ageing by operational data analysis', which was done in the frame of the EC JRC Ageing PSA Network. Several techniques (visual evaluation, nonparametric and parametric hypothesis tests) were proposed and applied in order to demonstrate the capacity, advantages and limitations of statistical approaches to identify component ageing by operational data analysis. Engineering considerations are outside the scope of the present study.

  12. THE STUDY OF THE CHARACTERIZATION INDICES OF FABRICS BY PRINCIPAL COMPONENT ANALYSIS METHOD

    Directory of Open Access Journals (Sweden)

    HRISTIAN Liliana

    2017-05-01

    Full Text Available The aim of the paper was to prioritize worsted fabric types for the manufacture of outerwear products by means of fabric characterization indices, using the mathematical model of Principal Component Analysis (PCA). A number of variables have a certain influence on the quality of fabrics, but some of these variables are more important than others, so it is useful to identify those variables in order to better understand the factors which can lead to improved fabric quality. A solution to this problem is the application of a method of factorial analysis, the so-called Principal Component Analysis, with the final goal of establishing and analyzing those variables which significantly influence the internal structure of combed wool fabrics according to weave type. By applying PCA, a small number of linear combinations (principal components) is obtained from a set of variables describing the internal structure of the fabrics, which can hold as much information as possible from the original variables. Data analysis is an important initial step in decision making, allowing identification of the causes that lead to decision-making situations. It is thus the action of transforming the initial data in order to extract useful information and to facilitate reaching conclusions. The process of data analysis can be defined as a sequence of steps aimed at formulating hypotheses, collecting primary information and validating it, constructing the mathematical model describing the phenomenon and reaching conclusions about the behavior of this model.

  13. 21 CFR 212.60 - What requirements apply to the laboratories where I test components, in-process materials, and...

    Science.gov (United States)

    2010-04-01

    ... maintenance. Each laboratory must have and follow written procedures to ensure that equipment is routinely... 21 Food and Drugs 4 2010-04-01 2010-04-01 false What requirements apply to the laboratories where...) Laboratory Controls § 212.60 What requirements apply to the laboratories where I test components, in-process...

  14. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    Science.gov (United States)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems in the static as well as the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses an approach to face recognition based on wavelet decomposition combined with principal component analysis. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. The term face recognition stands for identifying a person from his facial features, and bears some resemblance to factor analysis, i.e. the extraction of the principal components of an image. Principal component analysis is subject to some drawbacks, mainly poor discriminatory power and, in particular, a large computational load in finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images in the space and frequency domains. From the experimental results, this face recognition method achieves a significant percentage improvement in recognition rate as well as better computational efficiency.
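
    A minimal sketch of the combination (synthetic images in place of a face database): a single-level 2-D Haar wavelet decomposition supplies compact features, and PCA then builds the low-dimensional face codes:

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

# Stand-in for a face database: 20 grayscale images of size 64x64.
rng = np.random.default_rng(2)
faces = rng.random((20, 64, 64))

# Single-level 2-D Haar wavelet decomposition; keep the approximation subband.
features = np.array([pywt.dwt2(img, "haar")[0].ravel() for img in faces])

# PCA on the wavelet features ("eigenfaces" in the reduced wavelet domain).
pca = PCA(n_components=10)
projections = pca.fit_transform(features)
print(projections.shape)  # (20, 10) low-dimensional face codes
```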

  15. Condition monitoring with Mean field independent components analysis

    DEFF Research Database (Denmark)

    Pontoppidan, Niels Henrik; Sigurdsson, Sigurdur; Larsen, Jan

    2005-01-01

    We discuss condition monitoring based on mean field independent components analysis of acoustic emission energy signals. Within this framework it is possible to formulate a generative model that explains the sources, their mixing and also the noise statistics of the observed signals. By using...... a novelty approach we may detect unseen faulty signals as indeed faulty with high precision, even though the model learns only from normal signals. This is done by evaluating the likelihood that the model generated the signals and adapting a simple threshold for decision. Acoustic emission energy signals...... from a large diesel engine are used to demonstrate this approach. The results show that mean field independent components analysis gives better detection of faults compared to principal components analysis, while at the same time selecting a more compact model...

  16. Abstract interfaces for data analysis - component architecture for data analysis tools

    International Nuclear Information System (INIS)

    Barrand, G.; Binko, P.; Doenszelmann, M.; Pfeiffer, A.; Johnson, A.

    2001-01-01

    The fast turnover of software technologies, in particular in the domain of interactivity (covering user interface and visualisation), makes it difficult for a small group of people to produce complete and polished software tools before the underlying technologies make them obsolete. At the HepVis'99 workshop, a working group was formed to improve the production of software tools for data analysis in HENP. Besides promoting a distributed development organisation, one goal of the group is to systematically design a set of abstract interfaces using modern OO analysis and OO design techniques. An initial domain analysis has come up with several categories (components) found in typical data analysis tools: Histograms, Ntuples, Functions, Vectors, Fitter, Plotter, Analyzer and Controller. Special emphasis was put on reducing the couplings between the categories to a minimum, thus optimising re-use and maintainability of each component individually. The interfaces have been defined in Java and C++, and implementations exist in the form of libraries and tools using C++ (Anaphe/Lizard, OpenScientist) and Java (Java Analysis Studio). A special implementation aims at accessing the Java libraries (through their Abstract Interfaces) from C++. The authors give an overview of the architecture and design of the various components for data analysis as discussed in AIDA.

  17. Component reliability analysis for development of component reliability DB of Korean standard NPPs

    International Nuclear Information System (INIS)

    Choi, S. Y.; Han, S. H.; Kim, S. H.

    2002-01-01

    Reliability data for Korean NPPs that reflect plant-specific characteristics are necessary for PSA and risk-informed applications. We have performed a project to develop a component reliability DB and to calculate component reliability measures such as failure rate and unavailability. We have collected the component operation data and failure/repair data of Korean standard NPPs. We have analyzed the failure data by developing a data analysis method which takes the domestic data situation into account. We have then compared the reliability results with the generic data for foreign NPPs.
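
    As an illustration of the kind of calculation such a database supports (the counts and exposure below are hypothetical), a failure rate with a two-sided chi-square confidence interval can be estimated from pooled failure and operation data:

```python
from scipy.stats import chi2

# Hypothetical pooled data for one component group: 8 failures in 4.38e5 hours.
n_failures, exposure_hours = 8, 4.38e5

# Maximum-likelihood failure rate and a 90% two-sided chi-square interval,
# a common convention in component reliability databases.
lam = n_failures / exposure_hours
lower = chi2.ppf(0.05, 2 * n_failures) / (2 * exposure_hours)
upper = chi2.ppf(0.95, 2 * n_failures + 2) / (2 * exposure_hours)
print(f"failure rate = {lam:.2e}/h, 90% CI = ({lower:.2e}, {upper:.2e})")
```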

  18. Sparse Principal Component Analysis in Medical Shape Modeling

    DEFF Research Database (Denmark)

    Sjöstrand, Karl; Stegmann, Mikkel Bille; Larsen, Rasmus

    2006-01-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims...... analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of sufficiently small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA...
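
    A hedged sketch contrasting dense and sparse loadings on synthetic data (not the medical shape data of the paper), using a standard sparse PCA implementation:

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

# Synthetic "shape" data: 50 samples of 30 landmark coordinates.
rng = np.random.default_rng(3)
X = rng.standard_normal((50, 30))

dense = PCA(n_components=3).fit(X)
sparse = SparsePCA(n_components=3, alpha=1.0, random_state=0).fit(X)

# Sparse loadings localize each mode to a few variables (landmarks),
# unlike the dense PCA loadings, which involve every variable.
print("nonzero loadings per dense component: ",
      (np.abs(dense.components_) > 1e-12).sum(axis=1))
print("nonzero loadings per sparse component:",
      (sparse.components_ != 0).sum(axis=1))
```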

  19. Moving Forward: Positive Behavior Support and Applied Behavior Analysis

    Science.gov (United States)

    Tincani, Matt

    2007-01-01

    A controversy has emerged about the relationship between positive behavior support and applied behavior analysis. Some behavior analysts suggest that positive behavior support and applied behavior analysis are the same (e.g., Carr & Sidener, 2002). Others argue that positive behavior support is harmful to applied behavior analysis (e.g., Johnston,…

  20. The Effect of Malrotation of Tibial Component of Total Knee Arthroplasty on Tibial Insert during High Flexion Using a Finite Element Analysis

    Directory of Open Access Journals (Sweden)

    Kei Osano

    2014-01-01

    Full Text Available One of the most common errors in the total knee arthroplasty procedure is malrotation of the tibial component. The stress on the tibial insert is closely related to polyethylene failure. The objective of this study is to analyze the effect of malrotation of the tibial component on the stress on the tibial insert during high flexion using a finite element analysis. We used the Stryker NRG PS for analysis. Three different initial conditions of the tibial component, namely normal, 15° internal malrotation, and 15° external malrotation, were analyzed. The tibial insert, made from ultra-high-molecular-weight polyethylene, was assumed to be elastic-plastic, while the femoral and tibial metal components were assumed to be rigid. Four nonlinear springs attached to the tibial component represented the soft tissues around the knee. A vertical load was applied to the femoral component, which rotated from 0° to 135°, while a horizontal load along the anterior-posterior axis was applied to the tibial component during flexion. Maximum equivalent stresses on the surface were analyzed. Internal malrotation caused the highest stress, which rose to 160% of that in the normal position. External malrotation also caused higher stress. Implanting the prosthesis in the correct position is important for reducing the risk of abnormal wear and failure.

  1. Applying reliability analysis to design electric power systems for More-electric aircraft

    Science.gov (United States)

    Zhang, Baozhu

    The More-Electric Aircraft (MEA) is a type of aircraft that replaces conventional hydraulic and pneumatic systems with electrically powered components. These changes have significantly challenged the design of the aircraft electric power system. This thesis investigates how reliability analysis can be applied to automatically generate system topologies for the MEA electric power system. We first use the traditional method of reliability block diagrams to analyze the reliability level of different system topologies. We next propose a new methodology in which system topologies, constrained by a specified reliability level, are automatically generated. The path-set method is used for the analysis. Finally, we interface these sets of system topologies with control synthesis tools to automatically create correct-by-construction control logic for the electric power system.
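
    As a minimal sketch of the reliability-block-diagram step (the component reliabilities and the feeder layout are hypothetical), series and parallel blocks combine as products:

```python
from math import prod

def series(reliabilities):
    """All components must work: R = product of R_i."""
    return prod(reliabilities)

def parallel(reliabilities):
    """Redundant components; fails only if all fail: R = 1 - product of (1 - R_i)."""
    return 1 - prod(1 - r for r in reliabilities)

# Hypothetical MEA feeder: two redundant generators feeding one bus and one load.
generators = parallel([0.995, 0.995])
system = series([generators, 0.999, 0.998])  # bus and load in series
print(f"system reliability = {system:.6f}")
```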

  2. Enhancing the discussion of alternatives in EIA using principal component analysis leads to improved public involvement

    International Nuclear Information System (INIS)

    Kamijo, Tetsuya; Huang, Guangwei

    2017-01-01

    The purpose of this study is to show the effectiveness of principal component analysis (PCA) as a method of alternatives analysis useful for improving the discussion of alternatives and public involvement. This study examined public consultations by applying quantitative text analysis (QTA) to the minutes of meetings and showed a positive correlation between the discussion of alternatives and the sense of public involvement. The discussion of alternatives may thus improve public involvement. A table of multiple-criteria analysis for alternatives with detailed scores may exclude the public from involvement due to the general public's limited capacity to understand the mathematical algorithm and to process too much information. PCA allowed for the reduction of multiple criteria down to a small number of uncorrelated variables (principal components) and a display of the merits and demerits of the alternatives, potentially making the identification of preferable alternatives by the stakeholders easier. PCA is likely to enhance the discussion of alternatives and, as a result, lead to improved public involvement.

  3. Principal components and iterative regression analysis of geophysical series: Application to Sunspot number (1750-2004)

    Science.gov (United States)

    Nordemann, D. J. R.; Rigozo, N. R.; de Souza Echer, M. P.; Echer, E.

    2008-11-01

    We present here an implementation of a least squares iterative regression method applied to the sine functions embedded in the principal components extracted from geophysical time series. This method seems to represent a useful improvement for the quantitative periodicity analysis of non-stationary time series. The principal components determination followed by the least squares iterative regression method was implemented in an algorithm written in the Scilab (2006) language. The main result of the method is to obtain the set of sine functions embedded in the analyzed series in decreasing order of significance, from the most important ones, likely to represent the physical processes involved in the generation of the series, to the less important ones that represent noise components. Taking into account the need for deeper knowledge of the Sun's past history and its implications for global climate change, the method was applied to the Sunspot Number series (1750-2004). With the threshold and parameter values used here, the application of the method leads to a total of 441 explicit sine functions, among which 65 were considered significant and were used for a reconstruction that gave a normalized mean squared error of 0.146.
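
    A hedged sketch of one iteration of such a sine fit, on a synthetic stand-in for a principal component of the sunspot series rather than the actual data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-in for a principal component of the sunspot series.
t = np.arange(0, 255, 1.0)  # years
y = (80 * np.sin(2 * np.pi * t / 11 + 0.3)
     + np.random.default_rng(4).normal(0, 5, t.size))

def sine(t, amp, period, phase):
    return amp * np.sin(2 * np.pi * t / period + phase)

# Iterative least-squares fit of one embedded sine function; p0 seeds the search.
params, _ = curve_fit(sine, t, y, p0=[50, 10, 0])
print("amplitude, period, phase:", params)

# In the iterative scheme, the fitted sine would be subtracted and the fit
# repeated on the residual to extract the next most significant periodicity.
residual = y - sine(t, *params)
```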

  4. Dynamic Modal Analysis of Vertical Machining Centre Components

    OpenAIRE

    Anayet U. Patwari; Waleed F. Faris; A. K. M. Nurul Amin; S. K. Loh

    2009-01-01

    The paper presents a systematic procedure and details of the use of experimental and analytical modal analysis techniques for the structural dynamic evaluation of a vertical machining centre. The main results deal with assessment of the mode shapes of the different components of the vertical machining centre. A simplified experimental modal analysis of different components of the milling machine was carried out. This model of the different machine tool structures is made by design software...

  5. Fluvial facies reservoir productivity prediction method based on principal component analysis and artificial neural network

    Directory of Open Access Journals (Sweden)

    Pengyu Gao

    2016-03-01

    Full Text Available It is difficult to forecast well productivity because of the complexity of vertical and horizontal developments in fluvial facies reservoirs. This paper proposes a method based on Principal Component Analysis and an Artificial Neural Network to predict the well productivity of fluvial facies reservoirs. The method summarizes the statistical reservoir factors and engineering factors that affect well productivity, extracts information by applying the principal component analysis method, and exploits the neural network's ability to approximate arbitrary functions to realize an accurate and efficient prediction of fluvial facies reservoir well productivity. This method provides an effective way of forecasting the productivity of fluvial facies reservoirs, which is affected by multiple factors and complex mechanisms. The study result shows that this method is a practical, effective, accurate and indirect productivity forecast method suitable for field application.
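
    A minimal sketch of the PCA-plus-network pipeline on synthetic factors (the reservoir variables and data here are hypothetical):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical reservoir/engineering factors (porosity, permeability, net pay,
# choke size, ...) and measured well productivity for 80 wells.
rng = np.random.default_rng(5)
X = rng.random((80, 8))
y = X @ rng.random(8) + 0.1 * rng.standard_normal(80)

# Standardize, compress the correlated factors with PCA, then let the network
# approximate the (possibly nonlinear) factor-productivity relationship.
model = make_pipeline(StandardScaler(), PCA(n_components=4),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                   random_state=0))
model.fit(X, y)
print("training R^2:", model.score(X, y))
```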

  6. A short study to assess the potential of independent component analysis for motion artifact separation in wearable pulse oximeter signals.

    Science.gov (United States)

    Yao, Jianchu; Warren, Steve

    2005-01-01

    Motion artifact reduction and separation become critical when medical sensors are used in wearable monitoring scenarios. Previous research has demonstrated that independent component analysis (ICA) can be applied to pulse oximeter signals to separate photoplethysmographic (PPG) data from motion artifacts, ambient light, and other interference in low-motion environments. However, ICA assumes that all source signal component pairs are mutually independent. It is important to assess the statistical independence of the source components in PPG data, especially if ICA is to be applied in ambulatory monitoring environments, where motion artifacts can have a substantial effect on the quality of data received from light-based sensors. This paper addresses the statistical relationship between motion artifacts and PPG data by calculating the correlation coefficients between arterial volume variations and motion over a range of stationary to high-motion conditions. Analyses indicate that motion significantly affects arterial flow, so care must be taken when applying ICA to light-based sensor data acquired from wearable platforms.
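
    As a hedged sketch of the separation step (synthetic PPG and motion waveforms, not recorded sensor data), two mixed channels can be unmixed with FastICA, with the source correlation indicating how well the independence assumption holds:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Synthetic two-channel pulse oximeter record: a PPG-like pulse wave mixed
# with a slower motion artifact (sources assumed independent here).
t = np.linspace(0, 10, 2000)
ppg = np.sin(2 * np.pi * 1.2 * t)              # ~72 bpm pulse wave
motion = np.sign(np.sin(2 * np.pi * 0.3 * t))  # low-frequency movement
S = np.c_[ppg, motion]
A = np.array([[1.0, 0.6], [0.4, 1.0]])         # unknown mixing matrix
X = S @ A.T

# ICA recovers the sources only insofar as they really are independent;
# the correlation between the true sources quantifies how far that holds.
print("source correlation:", np.corrcoef(ppg, motion)[0, 1])
recovered = FastICA(n_components=2, random_state=0).fit_transform(X)
```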

  7. Functional Principal Components Analysis of Shanghai Stock Exchange 50 Index

    Directory of Open Access Journals (Sweden)

    Zhiliang Wang

    2014-01-01

    Full Text Available The main purpose of this paper is to explore the principal components of the Shanghai stock exchange 50 index by means of functional principal component analysis (FPCA). Functional data analysis (FDA) deals with random variables (or processes) with realizations in a smooth functional space. One of the most popular FDA techniques is functional principal component analysis, which was introduced for the statistical analysis of a set of financial time series from an explorative point of view. FPCA is the functional analogue of the well-known dimension reduction technique in multivariate statistical analysis, searching for linear transformations of the random vector with maximal variance. In this paper, we studied the monthly return volatility of the Shanghai stock exchange 50 index (SSE50). Using FPCA to reduce the dimension to a finite level, we extracted the most significant components of the data and some relevant statistical features of the related datasets. The calculated results show that regarding the samples as random functions is reasonable. Compared with ordinary principal component analysis, FPCA can solve the problem of different dimensions in the samples, and it is a convenient approach for extracting the main variance factors.

  8. Measuring farm sustainability using data envelope analysis with principal components: the case of Wisconsin cranberry.

    Science.gov (United States)

    Dong, Fengxia; Mitchell, Paul D; Colquhoun, Jed

    2015-01-01

    Measuring farm sustainability performance is a crucial component of improving agricultural sustainability. While extensive assessments and indicators exist that reflect the different facets of agricultural sustainability, because of the relatively large number of measures and interactions among them, a composite indicator that integrates and aggregates over all variables is particularly useful. This paper describes and empirically evaluates a method for constructing a composite sustainability indicator that individually scores and ranks farm sustainability performance. The method first uses non-negative polychoric principal component analysis to reduce the number of variables, to remove correlation among variables and to transform categorical variables to continuous variables. Next, the method applies common-weight data envelope analysis to these principal components to individually score each farm. The method solves for weights endogenously and allows identification of important practices in sustainability evaluation. An empirical application to Wisconsin cranberry farms finds heterogeneity in sustainability practice adoption, implying that some farms could adopt relevant practices to improve the overall sustainability performance of the industry. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Dual-energy x-ray image decomposition by independent component analysis

    Science.gov (United States)

    Jiang, Yifeng; Jiang, Dazong; Zhang, Feng; Zhang, Dengfu; Lin, Gang

    2001-09-01

    The spatial distributions of bone and soft tissue in the human body are separated by independent component analysis (ICA) of dual-energy x-ray images. We can apply this method because the dual-energy imaging model conforms to the ICA model: (1) the absorption in the body is mainly caused by photoelectric absorption and Compton scattering; (2) these take place simultaneously but are mutually independent; and (3) for monochromatic x-ray sources the total attenuation is a linear combination of these two absorption mechanisms. Compared with the conventional method, the proposed one needs no a priori information about the exact x-ray energies used for imaging, while the results of the separation agree well with the conventional one.

  10. Principal Component Analysis for Normal-Distribution-Valued Symbolic Data.

    Science.gov (United States)

    Wang, Huiwen; Chen, Meiling; Shi, Xiaojun; Li, Nan

    2016-02-01

    This paper puts forward a new approach to principal component analysis (PCA) for normal-distribution-valued symbolic data, which has a vast potential of applications in the economic and management fields. We derive a full set of numerical characteristics and the variance-covariance structure for such data, which forms the foundation for our analytical PCA approach. Our approach is able to use all of the variance information in the original data, unlike the prevailing representative-type approach in the literature, which uses only centers, vertices, etc. The paper also provides an accurate approach to constructing the observations in a PC space based on the linear additivity property of the normal distribution. The effectiveness of the proposed method is illustrated by simulated numerical experiments. Finally, our method is applied to explain the puzzle of the risk-return tradeoff in China's stock market.

  11. Probability Density Components Analysis: A New Approach to Treatment and Classification of SAR Images

    Directory of Open Access Journals (Sweden)

    Osmar Abílio de Carvalho Júnior

    2014-04-01

    Full Text Available Speckle noise (salt and pepper) is inherent to synthetic aperture radar (SAR), causing the usual noise-like granular aspect and complicating image classification. In SAR image analysis, spatial information can be a particular benefit for denoising and for mapping classes characterized by a statistical distribution of pixel intensities from a complex and heterogeneous spectral response. This paper proposes Probability Density Components Analysis (PDCA), a new alternative that combines filtering and frequency histograms to improve the classification procedure for single-channel synthetic aperture radar (SAR) images. The method was tested on L-band SAR data from the Advanced Land Observation System (ALOS) Phased-Array Synthetic-Aperture Radar (PALSAR) sensor. The study area is located in the Brazilian Amazon rainforest, northern Rondônia State (municipality of Candeias do Jamari), containing forest and land-use patterns. The proposed algorithm uses a moving window over the image, estimating the probability density curve in different image components. Therefore, a single input image generates an output with multiple components. Initially the multi-component data should be treated by noise-reduction methods, such as maximum noise fraction (MNF) or noise-adjusted principal components (NAPCs). Both methods enable noise reduction as well as ordering of the multi-component data in terms of image quality. In this paper, the NAPC applied to the multi-component data provided large reductions in noise levels, and color composites considering the first NAPCs enhance the classification of different surface features. In the spectral classification, the Spectral Correlation Mapper and Minimum Distance were used. The results obtained were similar to the visual interpretation of optical images from TM-Landsat and Google Maps.

  12. Probabilistic Principal Component Analysis for Metabolomic Data.

    LENUS (Irish Health Repository)

    Nyamundanda, Gift

    2010-11-23

    Background: Data from metabolomic studies are typically complex and high-dimensional. Principal component analysis (PCA) is currently the most widely used statistical technique for analyzing metabolomic data. However, PCA is limited by the fact that it is not based on a statistical model. Results: Here, probabilistic principal component analysis (PPCA), which addresses some of the limitations of PCA, is reviewed and extended. A novel extension of PPCA, called probabilistic principal component and covariates analysis (PPCCA), is introduced which provides a flexible approach to jointly model metabolomic data and additional covariate information. The use of a mixture of PPCA models for discovering the number of inherent groups in metabolomic data is demonstrated. The jackknife technique is employed to construct confidence intervals for estimated model parameters throughout. The optimal number of principal components is determined through the use of the Bayesian Information Criterion model selection tool, which is modified to address the high dimensionality of the data. Conclusions: The methods presented are illustrated through an application to metabolomic data sets. Jointly modeling metabolomic data and covariates was successfully achieved and has the potential to provide deeper insight into the underlying data structure. Examination of confidence intervals for the model parameters, such as loadings, allows for principled and clear interpretation of the underlying data structure. A software package called MetabolAnalyze, freely available through the R statistical software, has been developed to facilitate implementation of the presented methods in the metabolomics field.
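
    As a minimal sketch of the PPCA backbone (synthetic data; this is the closed-form maximum-likelihood solution of Tipping and Bishop, not the PPCCA extension or the MetabolAnalyze package):

```python
import numpy as np

# Synthetic "metabolomic" data: n samples, d variables, q latent dimensions.
rng = np.random.default_rng(6)
n, d, q = 100, 20, 3
X = (rng.standard_normal((n, q)) @ rng.standard_normal((q, d))
     + 0.1 * rng.standard_normal((n, d)))

Xc = X - X.mean(axis=0)
S = Xc.T @ Xc / n                                   # sample covariance
eigvals, eigvecs = np.linalg.eigh(S)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # descending order

# ML noise variance: mean of the discarded eigenvalues.
sigma2 = eigvals[q:].mean()
# ML loading matrix W = U_q (Lambda_q - sigma2 I)^{1/2} (rotation taken as I).
W = eigvecs[:, :q] @ np.sqrt(np.diag(eigvals[:q] - sigma2))
print("noise variance:", sigma2)
print("loading matrix shape:", W.shape)
```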

  13. Effects of physiotherapy treatment on knee osteoarthritis gait data using principal component analysis.

    Science.gov (United States)

    Gaudreault, Nathaly; Mezghani, Neila; Turcot, Katia; Hagemeister, Nicola; Boivin, Karine; de Guise, Jacques A

    2011-03-01

    Interpreting gait data is challenging due to the intersubject variability observed in the gait patterns of both normal and pathological populations. The objective of this study was to investigate the impact of using principal component analysis for grouping knee osteoarthritis (OA) patients' gait data into more homogeneous groups when studying the effect of a physiotherapy treatment. Three-dimensional (3D) knee kinematic and kinetic data were recorded during the gait of 29 participants diagnosed with knee OA before and after they received 12 weeks of physiotherapy treatment. Principal component analysis was applied to extract groups of knee flexion/extension, adduction/abduction and internal/external rotation angle and moment data. The treatment's effect on parameters of interest was assessed using paired t-tests performed before and after grouping the knee kinematic data. Increased quadriceps and hamstring strength was observed following treatment (P < 0.05). The effects of physiotherapy on the gait mechanics of knee osteoarthritis patients may be masked or underestimated if kinematic data are not separated into more homogeneous groups when performing pre- and post-treatment comparisons. Copyright © 2010 Elsevier Ltd. All rights reserved.

  14. Variational Bayesian Learning for Wavelet Independent Component Analysis

    Science.gov (United States)

    Roussos, E.; Roberts, S.; Daubechies, I.

    2005-11-01

    In an exploratory approach to data analysis, it is often useful to consider the observations as generated from a set of latent generators or "sources" via a generally unknown mapping. For the noisy overcomplete case, where we have more sources than observations, the problem becomes extremely ill-posed. Solutions to such inverse problems can, in many cases, be achieved by incorporating prior knowledge about the problem, captured in the form of constraints. This setting is a natural candidate for the application of the Bayesian methodology, allowing us to incorporate "soft" constraints in a natural manner. The work described in this paper is mainly driven by problems in functional magnetic resonance imaging of the brain, for the neuro-scientific goal of extracting relevant "maps" from the data. This can be stated as a 'blind' source separation problem. Recent experiments in the field of neuroscience show that these maps are sparse, in some appropriate sense. The separation problem can be solved by independent component analysis (ICA), viewed as a technique for seeking sparse components, assuming appropriate distributions for the sources. We derive a hybrid wavelet-ICA model, transforming the signals into a domain where the modeling assumption of sparsity of the coefficients with respect to a dictionary is natural. We follow a graphical modeling formalism, viewing ICA as a probabilistic generative model. We use hierarchical source and mixing models and apply Bayesian inference to the problem. This allows us to perform model selection in order to infer the complexity of the representation, as well as automatic denoising. Since exact inference and learning in such a model is intractable, we follow a variational Bayesian mean-field approach in the conjugate-exponential family of distributions, for efficient unsupervised learning in multi-dimensional settings. The performance of the proposed algorithm is demonstrated on some representative experiments.

  15. Detection of explosives on the surface of banknotes by Raman hyperspectral imaging and independent component analysis.

    Science.gov (United States)

    Almeida, Mariana R; Correa, Deleon N; Zacca, Jorge J; Logrado, Lucio Paulo Lima; Poppi, Ronei J

    2015-02-20

    The aim of this study was to develop a methodology using Raman hyperspectral imaging and chemometric methods for the identification of pre- and post-blast explosive residues on banknote surfaces. The explosives studied were of military, commercial and propellant types. After acquisition of the hyperspectral image, independent component analysis (ICA) was applied to extract the pure spectra and the distributions of the corresponding image constituents. The performance of the methodology was evaluated by the explained variance and the lack of fit of the models, by comparing the ICA-recovered spectra with the reference spectra using correlation coefficients, and by the presence of rotational ambiguity in the ICA solutions. The methodology was applied to forensic samples to solve an automated teller machine explosion case. Independent component analysis proved to be a suitable curve resolution method, achieving performance equivalent to that of multivariate curve resolution with alternating least squares (MCR-ALS). At low concentrations, MCR-ALS presents some limitations, as it did not provide the correct solution. The detection limit of the methodology presented in this study was 50 μg cm⁻². Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Clinical usefulness of physiological components obtained by factor analysis

    International Nuclear Information System (INIS)

    Ohtake, Eiji; Murata, Hajime; Matsuda, Hirofumi; Yokoyama, Masao; Toyama, Hinako; Satoh, Tomohiko.

    1989-01-01

    The clinical usefulness of physiological components obtained by factor analysis was assessed in 99mTc-DTPA renography. Using the derived physiological components, other dynamic data could be analyzed. In this paper, dynamic renal function after ESWL (Extracorporeal Shock Wave Lithotripsy) treatment was examined using physiological components in the kidney before ESWL and/or a normal kidney. We could easily evaluate the change in renal function by this method. The usefulness of this new analysis using physiological components is summarized as follows: 1) The change in a dynamic function could be assessed quantitatively as a change in the contribution ratio. 2) The change in a disease state could be evaluated morphologically as a change in the functional image. (author)

  17. A stable systemic risk ranking in China's banking sector: Based on principal component analysis

    Science.gov (United States)

    Fang, Libing; Xiao, Binqing; Yu, Honghai; You, Qixing

    2018-02-01

    In this paper, we compare five popular systemic risk rankings and apply a principal component analysis (PCA) model to provide a stable systemic risk ranking for the Chinese banking sector. Our empirical results indicate that the five methods suggest vastly different systemic risk rankings for the same bank, while the combined systemic risk measure based on PCA provides a reliable ranking. Furthermore, according to the factor loadings of the first component, the PCA combined ranking is based mainly on fundamentals rather than market price data. We clearly find that price-based rankings are not as practical as fundamentals-based ones. The PCA combined ranking directly shows the systemic risk contribution of each bank for banking supervision purposes and helps banks prepare for and cope with financial crises in advance.

  18. Data analysis of x-ray fluorescence holography by subtracting normal component from inverse hologram

    International Nuclear Information System (INIS)

    Happo, Naohisa; Hayashi, Kouichi; Hosokawa, Shinya

    2010-01-01

    X-ray fluorescence holography (XFH) is a powerful technique for determining three-dimensional local atomic arrangements around a specific fluorescing element. However, the raw experimental hologram is predominantly a mixed hologram, i.e., a mixture of holograms generated in both normal and inverse modes, which produces unreliable atomic images. In this paper, we propose a practical method for subtracting the normal component from inverse XFH data by a Fourier transform, demonstrated on the calculated hologram of a model ZnTe cluster. Many spots originating from the normal components could be properly removed using a mask function, and clear atomic images were reconstructed at appropriate positions of the model cluster. This method was successfully applied to the analysis of experimental ZnTe single crystal XFH data. (author)

  19. Portable XRF and principal component analysis for bill characterization in forensic science.

    Science.gov (United States)

    Appoloni, C R; Melquiades, F L

    2014-02-01

    Several modern techniques have been applied to prevent counterfeiting of money bills. The objective of this study was to demonstrate the potential of the portable X-ray fluorescence (PXRF) technique and the multivariate analysis method of Principal Component Analysis (PCA) for the classification of bills for use in forensic science. Bills of the Dollar, Euro and Real (Brazilian currency) were measured directly at different colored regions, without any previous preparation. Spectra interpretation allowed the identification of Ca, Ti, Fe, Cu, Sr, Y, Zr and Pb. PCA separated the bills into three groups, with subgroups among the Brazilian currency. In conclusion, the samples were classified according to their origin, identifying the elements responsible for the differentiation and the basic pigment composition. PXRF allied to multivariate discriminant methods is a promising technique for the rapid and non-destructive identification of false bills in forensic science. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. A Novel Double Cluster and Principal Component Analysis-Based Optimization Method for the Orbit Design of Earth Observation Satellites

    Directory of Open Access Journals (Sweden)

    Yunfeng Dong

    2017-01-01

    Full Text Available The weighted sum and genetic algorithm-based hybrid method (WSGA-based HM), which has been applied to multiobjective orbit optimizations, is negatively influenced by human factors through the artificial choice of the weight coefficients in the weighted sum method, and by the slow convergence of GA. To address these two problems, a cluster and principal component analysis-based optimization method (CPC-based OM) is proposed, in which many candidate orbits are gradually randomly generated until the optimal orbit is obtained using a data mining method, namely cluster analysis based on principal components. Then, a second cluster analysis of the orbital elements is introduced into CPC-based OM to improve the convergence, developing a novel double cluster and principal component analysis-based optimization method (DCPC-based OM). In DCPC-based OM, the cluster analysis based on principal components has the advantage of reducing human influences, and the cluster analysis based on the six orbital elements can reduce the search space to effectively accelerate convergence. The test results from a multiobjective numerical benchmark function and the orbit design of an Earth observation satellite show that DCPC-based OM converges more efficiently than WSGA-based HM, and that DCPC-based OM, to some degree, reduces the influence of the human factors present in WSGA-based HM.

  1. Analysis methods for structure reliability of piping components

    International Nuclear Information System (INIS)

    Schimpfke, T.; Grebner, H.; Sievers, J.

    2004-01-01

    In the frame of the German reactor safety research program of the Federal Ministry of Economics and Labour (BMWA) GRS has started to develop an analysis code named PROST (PRObabilistic STructure analysis) for estimating the leak and break probabilities of piping systems in nuclear power plants. The long-term objective of this development is to provide failure probabilities of passive components for probabilistic safety analysis of nuclear power plants. Up to now the code can be used for calculating fatigue problems. The paper mentions the main capabilities and theoretical background of the present PROST development and presents some of the results of a benchmark analysis in the frame of the European project NURBIM (Nuclear Risk Based Inspection Methodologies for Passive Components). (orig.)

  2. Spatiotemporal analysis of single-trial EEG of emotional pictures based on independent component analysis and source location

    Science.gov (United States)

    Liu, Jiangang; Tian, Jie

    2007-03-01

    The present study combined the Independent Component Analysis (ICA) and low-resolution brain electromagnetic tomography (LORETA) algorithms to identify the spatial distribution and time course of single-trial EEG record differences between neural responses to emotional stimuli and the neutral. Single-trial multichannel (129-sensor) EEG records were collected from 21 healthy, right-handed subjects viewing emotional (pleasant/unpleasant) and neutral pictures selected from the International Affective Picture System (IAPS). For each subject, the single-trial EEG records for each emotional picture type were concatenated with the neutral, and a three-step analysis was applied to each in the same way. First, ICA was performed to decompose each concatenated single-trial EEG record into temporally independent and spatially fixed components, namely independent components (ICs). The ICs associated with artifacts were isolated. Second, clustering analysis classified, across subjects, the temporally and spatially similar ICs into the same clusters, in which a nonparametric permutation test for the Global Field Power (GFP) of IC projection scalp maps identified significantly different temporal segments of each emotional condition vs. the neutral. Third, the brain regions accounting for those significant segments were localized spatially with LORETA analysis. In each cluster, a voxel-by-voxel randomization test identified significantly different brain regions between each emotional condition and the neutral. Compared to the neutral, both types of emotional pictures elicited activation in the visual, temporal, ventromedial and dorsomedial prefrontal cortex and anterior cingulate gyrus. In addition, the pleasant pictures activated the left middle prefrontal cortex and the posterior precuneus, while the unpleasant pictures activated the right orbitofrontal cortex, posterior cingulate gyrus and somatosensory region. Our results were well consistent with other functional imaging

  3. Analysis of Minor Component Segregation in Ternary Powder Mixtures

    Directory of Open Access Journals (Sweden)

    Asachi Maryam

    2017-01-01

    Full Text Available In many powder handling operations, inhomogeneity in powder mixtures caused by segregation can have a significant adverse impact on the quality as well as the economics of production. Segregation of a minor component of a highly active substance can have serious deleterious effects; an example is the segregation of enzyme granules in detergent powders. In this study, the effects of particle properties and bulk cohesion on the segregation tendency of the minor component are analysed. The minor component is made sticky without adversely affecting the flowability of the samples. The extent of segregation is evaluated using image processing of the photographic records taken of the front face of the heap after the pouring process. The optimum average sieve cut size of the components, for which segregation can be reduced, is reported. It is also shown that the extent of segregation is significantly reduced by applying a thin layer of liquid to the surfaces of the minor component.

  4. System diagnostics using qualitative analysis and component functional classification

    International Nuclear Information System (INIS)

    Reifman, J.; Wei, T.Y.C.

    1993-01-01

    A method for detecting and identifying faulty component candidates during off-normal operations of nuclear power plants involves the qualitative analysis of macroscopic imbalances in the conservation equations of mass, energy and momentum in thermal-hydraulic control volumes associated with one or more plant components, together with the functional classification of components. The qualitative analysis of mass and energy is performed through the associated equations of state, while imbalances in momentum are obtained by tracking mass flow rates; these are incorporated into a first knowledge base. The plant components are functionally classified, according to their type, as sources or sinks of mass, energy and momentum, depending upon which of the three balance equations is most strongly affected by a faulty component; this classification is incorporated into a second knowledge base. Information describing the connections among the components of the system forms a third knowledge base. The method is particularly adapted for use in a diagnostic expert system to detect and identify faulty component candidates in the presence of component failures, and is not limited to use in a nuclear power plant but may be used with virtually any type of thermal-hydraulic operating system. 5 figures

  5. A Removal of Eye Movement and Blink Artifacts from EEG Data Using Morphological Component Analysis

    Directory of Open Access Journals (Sweden)

    Balbir Singh

    2017-01-01

    Full Text Available EEG signals contain a large amount of ocular artifacts with different time-frequency properties mixing together in EEGs of interest. Artifact removal has been substantially dealt with by existing decomposition methods known as PCA and ICA, based on the orthogonality of signal vectors or the statistical independence of signal components. We focused on the signal morphology and propose a systematic decomposition method to identify the type of signal components on the basis of sparsity in the time-frequency domain, based on Morphological Component Analysis (MCA), which provides a way of reconstruction that guarantees accuracy by using multiple bases in accordance with the concept of a “dictionary.” MCA was applied to decompose real EEG signals and to clarify the best combination of dictionaries for this purpose. In our proposed semirealistic biological signal analysis with iEEGs recorded from the brain intracranially, those signals were successfully decomposed into original types by a linear expansion of waveforms, such as redundant transforms: UDWT, DCT, LDCT, DST, and DIRAC. Our results demonstrated that the most suitable combination for EEG data analysis was UDWT, DST, and DIRAC, representing the baseline envelope, multifrequency waveforms, and spiking activities individually as representative types of EEG morphologies.

  6. A Removal of Eye Movement and Blink Artifacts from EEG Data Using Morphological Component Analysis

    Science.gov (United States)

    Wagatsuma, Hiroaki

    2017-01-01

    EEG signals contain a large amount of ocular artifacts with different time-frequency properties mixing together in EEGs of interest. Artifact removal has been substantially dealt with by existing decomposition methods known as PCA and ICA, based on the orthogonality of signal vectors or the statistical independence of signal components. We focused on the signal morphology and propose a systematic decomposition method to identify the type of signal components on the basis of sparsity in the time-frequency domain, based on Morphological Component Analysis (MCA), which provides a way of reconstruction that guarantees accuracy by using multiple bases in accordance with the concept of a “dictionary.” MCA was applied to decompose real EEG signals and to clarify the best combination of dictionaries for this purpose. In our proposed semirealistic biological signal analysis with iEEGs recorded from the brain intracranially, those signals were successfully decomposed into original types by a linear expansion of waveforms, such as redundant transforms: UDWT, DCT, LDCT, DST, and DIRAC. Our results demonstrated that the most suitable combination for EEG data analysis was UDWT, DST, and DIRAC, representing the baseline envelope, multifrequency waveforms, and spiking activities individually as representative types of EEG morphologies. PMID:28194221

  7. A hybrid symplectic principal component analysis and central tendency measure method for detection of determinism in noisy time series with application to mechanomyography.

    Science.gov (United States)

    Xie, Hong-Bo; Dokos, Socrates

    2013-06-01

    We present a hybrid symplectic geometry and central tendency measure (CTM) method for detection of determinism in noisy time series. CTM is effective for detecting determinism in short time series and has been applied in many areas of nonlinear analysis. However, its performance significantly degrades in the presence of strong noise. In order to circumvent this difficulty, we propose to use symplectic principal component analysis (SPCA), a new chaotic signal de-noising method, as the first step to recover the system dynamics. CTM is then applied to determine whether the time series arises from a stochastic process or has a deterministic component. Results from numerical experiments, ranging from six benchmark deterministic models to 1/f noise, suggest that the hybrid method can significantly improve detection of determinism in noisy time series by about 20 dB when the data are contaminated by Gaussian noise. Furthermore, we apply our algorithm to study the mechanomyographic (MMG) signals arising from contraction of human skeletal muscle. Results obtained from the hybrid symplectic principal component analysis and central tendency measure demonstrate that the skeletal muscle motor unit dynamics can indeed be deterministic, in agreement with previous studies. However, the conventional CTM method was not able to definitely detect the underlying deterministic dynamics. This result on MMG signal analysis is helpful in understanding neuromuscular control mechanisms and developing MMG-based engineering control applications.

  8. A seismic analysis of nuclear power plant components subjected to multi-excitations of earthquakes

    International Nuclear Information System (INIS)

    Ichiki, T.; Matsumoto, T.; Gunyasu, K.

    1977-01-01

    In this analysis, modal analysis methods are used instead of the direct integration method to determine the seismic responses of structural systems. The results have been compared with those of several other analytical methods, and the accuracy of the numerical results was investigated by applying the methods to components such as the Reactor Pressure Vessel and Reactor Internals of an actual plant. The results of this method of analysis are summarized as follows: (1) A seismic analysis method for systems subjected to multiple earthquake excitations was previously presented at a JSME conference. Although the analytical theory presented there is correct, it has a serious problem with the accuracy of its numerical results, and the associated computer program is impractical because of the computation time required. The method described in this paper overcomes these problems and poses no difficulty with respect to computation time or precision, so it can be applied in practice to the seismic design of an actual nuclear power plant. (2) The feedback effects of the seismic responses of the Reactor Internals on the Reactor Building are small enough that the Reactor Internals model can be separated from the Reactor Building model. (3) The seismic responses of the Reactor Internals are fairly consistent with those obtained from a model coupled with the Reactor Building. (4) The method can be extended to a Reactor Internals model subjected to more than two random earthquake excitations. (5) The method can also be applied to the seismic analysis of three-dimensional systems, such as piping systems, subjected to multiple earthquake excitations.
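
    For readers unfamiliar with the modal approach the abstract relies on, the sketch below shows the generic first step on an invented three-mass shear model: solving the generalized eigenproblem K·φ = ω²·M·φ for natural frequencies, mode shapes, and participation factors. It is a textbook illustration, not the paper's program, and all matrix values are made up.

```python
# Generic modal analysis of a lumped-mass model.
import numpy as np
from scipy.linalg import eigh

m = 1.0e5                       # lumped mass per level [kg] (assumed)
k = 2.0e8                       # storey stiffness [N/m] (assumed)
M = m * np.eye(3)
K = k * np.array([[ 2, -1,  0],
                  [-1,  2, -1],
                  [ 0, -1,  1]], dtype=float)

w2, phi = eigh(K, M)            # eigenvalues w^2 and mass-orthonormal modes
freqs_hz = np.sqrt(w2) / (2 * np.pi)
print("natural frequencies [Hz]:", freqs_hz.round(2))

# modal participation factors for a uniform base excitation
r = np.ones(3)                  # influence vector
gamma = phi.T @ M @ r           # valid because phi is M-orthonormal
print("participation factors:", gamma.round(3))
```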

  9. Kernel Principal Component Analysis and its Applications in Face Recognition and Active Shape Models

    OpenAIRE

    Wang, Quan

    2012-01-01

    Principal component analysis (PCA) is a popular tool for linear dimensionality reduction and feature extraction. Kernel PCA is the nonlinear form of PCA, which better exploits the complicated spatial structure of high-dimensional features. In this paper, we first review the basic ideas of PCA and kernel PCA. Then we focus on the reconstruction of pre-images for kernel PCA. We also give an introduction on how PCA is used in active shape models (ASMs), and discuss how kernel PCA can be applied ...
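
    A brief scikit-learn illustration of the contrast between linear PCA and kernel PCA on a toy nonlinear dataset, including the pre-image reconstruction the paper reviews; the RBF kernel and its parameters are arbitrary choices, not values from the paper.

```python
# Linear PCA vs. kernel PCA on two concentric circles.
import numpy as np
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA

X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

linear = PCA(n_components=2).fit_transform(X)
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10,
                 fit_inverse_transform=True)
X_kpca = kpca.fit_transform(X)
X_back = kpca.inverse_transform(X_kpca)   # approximate pre-images

# the circles are inseparable on linear PC1 but separate on kernel PC1
print("linear PC1 class means:",
      linear[y == 0, 0].mean().round(3), linear[y == 1, 0].mean().round(3))
print("kernel PC1 class means:",
      X_kpca[y == 0, 0].mean().round(3), X_kpca[y == 1, 0].mean().round(3))
print("pre-image reconstruction MSE:", np.mean((X - X_back) ** 2).round(4))
```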

  10. Comparative analysis of quality assurance systems which effectively control, review and verify the quality of components manufactured for liquid metal cooled fast breeder reactors within the EEC

    International Nuclear Information System (INIS)

    Benn, L.A.

    1985-01-01

    Comparative analyses are made of Quality Assurance systems, in terms of the techniques and methodology used, for the manufacture of component parts for the Liquid Metal Cooled Fast Breeder Reactor (LMFBR) within the EEC. Two alternative approaches are presented. The first is a tabulated analytical treatment of 14 Quality Assurance codes and standards applicable to LMFBRs: equivalent clauses are matched across the codes and standards, followed by an analysis of the individual clauses in tabular form with reference to International Standard ISO 6215. A statistical summary and recommendations conclude this analysis. The second is a descriptive analytical method applied to 9 selected Quality Assurance codes and standards, based on the 13 criteria of the IAEA Code of Practice No. 50-C-QA, ''Quality Assurance for Safety in Nuclear Power Plants''. An investigation is then made of the state of the art in the classification of component parts as it bears on Quality Assurance; the classification is divided into General, Safety and Inspection categories. A summary is given of the destructive and non-destructive controls that may be applied during the manufacture of LMFBR components, together with tests that may be applied to selected components, namely the Primary Tank, Secondary Sodium Pump and Primary Cold Trap, allocated to Safety Classes 1, 2 and 3 respectively. The report concludes with a summary of typical records produced at the delivery of a component

  11. Building an applied activation analysis centre

    International Nuclear Information System (INIS)

    Bartosek, J.; Kasparec, I.; Masek, J.

    1972-01-01

    Requirements are defined, and all available background material is reported and discussed, for building a centre of applied activation analysis in Czechoslovakia. A detailed analysis of potential users and of the centre's envisaged availability is also presented as part of the study. A brief economic analysis is annexed. The study covers the situation up to the end of 1972. (J.K.)

  12. Principal Component Analysis of Body Measurements In Three ...

    African Journals Online (AJOL)

    This study was conducted to explore the relationship among body measurements in 3 strains of broilers chicken (Arbor Acre, Marshal and Ross) using principal component analysis with the view of identifying those components that define body conformation in broilers. A total of 180 birds were used, 60 per strain.

  13. Comparing in-service multi-input loads applied on non-stiff components submitted to vibration fatigue to provide specifications for robust design

    Directory of Open Access Journals (Sweden)

    Le Corre Gwenaëlle

    2018-01-01

    Full Text Available This study focuses on automotive applications involving mechanical components subjected to vibration loads. On the one hand, the characterization of loads used to dimension new structures against fatigue is enriched and updated by analysis of customer data. On the other hand, load characterization also aims to provide robust specifications for simulations or physical tests. These specifications are needed early in a project, in order to perform the first durability verification activities, at a time when detailed information about geometry and materials is scarce. Vibration specifications must be adapted to calculation times or physical test durations compatible with the pace imposed by the project timeframe. In the truck industry, dynamic behaviour can vary significantly from one truck configuration to another, as the vehicle architecture affects the load environment of the components. Vibration specifications must therefore be robust to the diversity of vehicles and markets within the scope of the projects. For non-stiff structures, lifetime depends, among other things, on the frequency content of the loads and on the interactions between the components of multi-input loads. In this context, this paper proposes an approach for comparing sets of variable-amplitude multi-input loads applied to non-stiff structures. The comparison is made in terms of damage, with only limited information about the structure to which the load sets are applied. The methodology is presented together with an application, and the activities planned to validate the methodology are also described.
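
    One simple way to compare load channels "in terms of damage" with little structural information, sketched below under strong assumptions, is rainflow counting combined with a Basquin/Miner pseudo-damage sum D = Σ nᵢSᵢᵇ. This quasi-static proxy ignores the frequency content that matters for non-stiff structures, so it only illustrates the comparison idea, not the paper's method; the third-party rainflow package and the exponent b are assumptions.

```python
# Relative pseudo-damage comparison of two load time histories.
import numpy as np
import rainflow  # pip install rainflow

def pseudo_damage(signal, b=5.0):
    """Basquin/Miner-style relative damage number for a stress-like history."""
    return sum(count * cycle_range ** b
               for cycle_range, count in rainflow.count_cycles(signal))

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 10_000)
load_a = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.normal(size=t.size)
load_b = 0.8 * np.sin(2 * np.pi * 5 * t) + 0.5 * rng.normal(size=t.size)

d_a, d_b = pseudo_damage(load_a), pseudo_damage(load_b)
print("damage ratio B/A:", d_b / d_a)  # > 1 means B is the more damaging set
```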

  14. Nonlinear Process Fault Diagnosis Based on Serial Principal Component Analysis.

    Science.gov (United States)

    Deng, Xiaogang; Tian, Xuemin; Chen, Sheng; Harris, Chris J

    2018-03-01

    Many industrial processes contain both linear and nonlinear parts, and kernel principal component analysis (KPCA), widely used in nonlinear process monitoring, may not offer the most effective means for dealing with these nonlinear processes. This paper proposes a new hybrid linear-nonlinear statistical modeling approach for nonlinear process monitoring by closely integrating linear principal component analysis (PCA) and nonlinear KPCA using a serial model structure, which we refer to as serial PCA (SPCA). Specifically, PCA is first applied to extract PCs as linear features, and to decompose the data into the PC subspace and residual subspace (RS). Then, KPCA is performed in the RS to extract the nonlinear PCs as nonlinear features. Two monitoring statistics are constructed for fault detection, based on both the linear and nonlinear features extracted by the proposed SPCA. To effectively perform fault identification after a fault is detected, an SPCA similarity factor method is built for fault recognition, which fuses both the linear and nonlinear features. Unlike PCA and KPCA, the proposed method takes into account both linear and nonlinear PCs simultaneously, and therefore, it can better exploit the underlying process's structure to enhance fault diagnosis performance. Two case studies involving a simulated nonlinear process and the benchmark Tennessee Eastman process demonstrate that the proposed SPCA approach is more effective than the existing state-of-the-art approach based on KPCA alone, in terms of nonlinear process fault detection and identification.
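
    A rough sketch of the serial model structure described above: fit linear PCA, form the residual subspace, then fit kernel PCA on the residuals. The monitoring statistics and the fault-identification similarity factor are omitted, and the component counts and kernel parameters are arbitrary.

```python
# Serial PCA: linear PCA first, then kernel PCA on the residual subspace.
import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
lin = rng.normal(size=(n, 1))
# two linearly related variables, one nonlinear one, and pure noise
data = np.column_stack([lin, 2 * lin, np.sin(3 * lin[:, 0]),
                        rng.normal(size=n)])

X = StandardScaler().fit_transform(data)

pca = PCA(n_components=2).fit(X)               # step 1: linear features
T_lin = pca.transform(X)
residual = X - pca.inverse_transform(T_lin)    # residual subspace (RS)

kpca = KernelPCA(n_components=2, kernel="rbf", gamma=1.0)
T_nonlin = kpca.fit_transform(residual)        # step 2: nonlinear features

print("linear features:", T_lin.shape, "nonlinear features:", T_nonlin.shape)
```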

  15. Fault tree analysis with multistate components

    International Nuclear Information System (INIS)

    Caldarola, L.

    1979-02-01

    A general analytical theory has been developed which allows one to calculate the occurrence probability of the top event of a fault tree with multistate (more than two states) components. It is shown that, in order to correctly describe a system with multistate components, a special type of Boolean algebra is required. This is called 'Boolean algebra with restrictions on variables' and its basic rules are the same as those of the traditional Boolean algebra, with some additional restrictions on the variables. These restrictions are extensively discussed in the paper. Important features of the method are the identification of the complete base and of the smallest irredundant base of a Boolean function, which does not necessarily need to be coherent. It is shown that the identification of the complete base of a Boolean function requires the application of algorithms which are not used in today's computer programmes for fault tree analysis. The problem of statistical dependence among primary components is discussed. The paper includes a small demonstrative example to illustrate the method; the example also includes statistically dependent components. (orig.) [de
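
    For intuition, the toy sketch below computes a top-event probability by brute-force enumeration over the states of two independent three-state components. Caldarola's Boolean algebra with restrictions on variables is precisely what avoids such enumeration (and handles statistical dependence), so this is only a baseline illustration with an invented structure function and invented probabilities.

```python
# Top-event probability of a toy multistate fault tree by enumeration.
from itertools import product

# state probabilities for two independent components, each with three
# mutually exclusive states (0 = working, 1 = degraded, 2 = failed)
p = {"A": [0.90, 0.07, 0.03],
     "B": [0.85, 0.10, 0.05]}

def top_event(a, b):
    """Example structure function: top event occurs if either component
    has failed, or both are degraded."""
    return a == 2 or b == 2 or (a == 1 and b == 1)

prob = sum(p["A"][a] * p["B"][b]
           for a, b in product(range(3), range(3))
           if top_event(a, b))
print("P(top event) =", round(prob, 6))
```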

  16. Genetic algorithm using independent component analysis in x-ray reflectivity curve fitting of periodic layer structures

    International Nuclear Information System (INIS)

    Tiilikainen, J; Bosund, V; Tilli, J-M; Sormunen, J; Mattila, M; Hakkarainen, T; Lipsanen, H

    2007-01-01

    A novel genetic algorithm (GA) utilizing independent component analysis (ICA) was developed for x-ray reflectivity (XRR) curve fitting. EFICA was used to reduce mutual information, or interparameter dependences, during the combinatorial phase. The performance of the new algorithm was studied by fitting trial XRR curves to target curves which were computed using realistic multilayer models. The median convergence properties of conventional GA, GA using principal component analysis and the novel GA were compared. GA using ICA was found to outperform the other methods with problems having 41 parameters or more to be fitted without additional XRR curve calculations. The computational complexity of the conventional methods was linear but the novel method had a quadratic computational complexity due to the applied ICA method which sets a practical limit for the dimensionality of the problem to be solved. However, the novel algorithm had the best capability to extend the fitting analysis based on Parratt's formalism to multiperiodic layer structures

  17. A meta-analysis of executive components of working memory.

    Science.gov (United States)

    Nee, Derek Evan; Brown, Joshua W; Askren, Mary K; Berman, Marc G; Demiralp, Emre; Krawitz, Adam; Jonides, John

    2013-02-01

    Working memory (WM) enables the online maintenance and manipulation of information and is central to intelligent cognitive functioning. Much research has investigated executive processes of WM in order to understand the operations that make WM "work." However, there is yet little consensus regarding how executive processes of WM are organized. Here, we used quantitative meta-analysis to summarize data from 36 experiments that examined executive processes of WM. Experiments were categorized into 4 component functions central to WM: protecting WM from external distraction (distractor resistance), preventing irrelevant memories from intruding into WM (intrusion resistance), shifting attention within WM (shifting), and updating the contents of WM (updating). Data were also sorted by content (verbal, spatial, object). Meta-analytic results suggested that rather than dissociating into distinct functions, 2 separate frontal regions were recruited across diverse executive demands. One region was located dorsally in the caudal superior frontal sulcus and was especially sensitive to spatial content. The other was located laterally in the midlateral prefrontal cortex and showed sensitivity to nonspatial content. We propose that dorsal-"where"/ventral-"what" frameworks that have been applied to WM maintenance also apply to executive processes of WM. Hence, WM can largely be simplified to a dual selection model.

  18. Modeling the variability of solar radiation data among weather stations by means of principal components analysis

    International Nuclear Information System (INIS)

    Zarzo, Manuel; Marti, Pau

    2011-01-01

    Research highlights: Principal components analysis was applied to Rs data recorded at 30 stations. Four principal components explain 97% of the data variability. The latent variables can be fitted according to latitude, longitude and altitude. The PCA approach is more effective for gap infilling than conventional approaches. The proposed method allows daily Rs estimations at locations in the area of study. - Abstract: Measurements of global terrestrial solar radiation (Rs) are commonly recorded in meteorological stations. Daily variability of Rs has to be taken into account for the design of photovoltaic systems and energy-efficient buildings. Principal components analysis (PCA) was applied to Rs data recorded at 30 stations on the Mediterranean coast of Spain. Due to equipment failures and site operation problems, time series of Rs often present data gaps or discontinuities. The PCA approach copes with this problem and allows estimation of present and past values by taking advantage of Rs records from nearby stations. The gap-infilling performance of this methodology is compared with neural networks and alternative conventional approaches. Four principal components explain 66% of the data variability with respect to the average trajectory (97% if non-centered values are considered). A new method based on principal components regression was also developed for Rs estimation if previous measurements are not available. By means of multiple linear regression, it was found that the latent variables associated with the four relevant principal components can be fitted according to the latitude, longitude and altitude of the station where the data were recorded. Additional geographical or climatic variables did not increase the predictive goodness-of-fit. The resulting models allow the estimation of daily Rs values at any location in the area under study and present higher accuracy than artificial neural networks and some conventional approaches
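
    A generic EM-style sketch of PCA-based gap infilling in the spirit of the paper, not the authors' exact procedure: fill gaps with column means, fit a low-rank PCA model, replace the missing entries with the model reconstruction, and iterate. The synthetic "station" data and gap fraction are invented.

```python
# Iterative PCA imputation of missing entries in a station-by-day matrix.
import numpy as np
from sklearn.decomposition import PCA

def pca_infill(X, n_components=4, n_iter=50):
    X = X.copy()
    missing = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[missing] = np.take(col_means, np.where(missing)[1])  # initial fill
    for _ in range(n_iter):
        pca = PCA(n_components=n_components)
        X_hat = pca.inverse_transform(pca.fit_transform(X))
        X[missing] = X_hat[missing]        # update only the gaps
    return X

# synthetic "30 stations" with correlated daily series and 10% gaps
rng = np.random.default_rng(0)
latent = rng.normal(size=(365, 4))
X_true = latent @ rng.normal(size=(4, 30)) + 0.1 * rng.normal(size=(365, 30))
X_obs = X_true.copy()
X_obs[rng.random(X_obs.shape) < 0.10] = np.nan

X_filled = pca_infill(X_obs)
err = np.sqrt(np.mean((X_filled - X_true)[np.isnan(X_obs)] ** 2))
print("RMSE on infilled entries:", round(float(err), 3))
```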

  19. Multistage principal component analysis based method for abdominal ECG decomposition

    International Nuclear Information System (INIS)

    Petrolis, Robertas; Krisciukaitis, Algimantas; Gintautas, Vladas

    2015-01-01

    Reflection of fetal heart electrical activity is present in registered abdominal ECG signals. However, this signal component has noticeably less energy than concurrent signals, especially the maternal ECG. Therefore the traditionally recommended independent component analysis fails to separate these two ECG signals. Multistage principal component analysis (PCA) is proposed for step-by-step extraction of abdominal ECG signal components. Truncated representation and subsequent subtraction of the cardio cycles of the maternal ECG are the first steps. The energy of the fetal ECG component then becomes comparable to, or even exceeds, the energy of the other components in the remaining signal. Second-stage PCA concentrates the energy of the sought signal in one principal component, ensuring its maximal amplitude regardless of the orientation of the fetus in multilead recordings. Third-stage PCA is performed on signal excerpts representing detected fetal heart beats, in order to obtain a truncated representation that reconstructs their shape for further analysis. The algorithm was tested with PhysioNet Challenge 2013 signals and with signals recorded in the Department of Obstetrics and Gynecology, Lithuanian University of Health Sciences. The results of our method on the PhysioNet Challenge 2013 open data set were an average score of 341.503 bpm² and 32.81 ms. (paper)

  20. Dynamic of consumer groups and response of commodity markets by principal component analysis

    Science.gov (United States)

    Nobi, Ashadun; Alam, Shafiqul; Lee, Jae Woo

    2017-09-01

    This study investigates financial states and group dynamics by applying principal component analysis to the cross-correlation coefficients of the daily returns of commodity futures. The eigenvalues of the cross-correlation matrix in the 6-month timeframe display similar values during 2010-2011, but decline after 2012. A sharp drop in an eigenvalue implies a significant change of the market state. Three commodity sectors, energy, metals and agriculture, are projected into the two-dimensional space spanned by the first two principal components (PCs). We observe that they form three distinct clusters corresponding to the sectors. However, commodities with distinct features intermingled with one another and scattered during severe crises, such as the European sovereign debt crisis. We observe notable changes in the positions of the groups in the two-dimensional space during financial crises. By considering the first principal component (PC1) within the 6-month moving timeframe, we observe that commodities of the same group change states in a similar pattern, and the change of state of one group can be used as a warning for the other groups.
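
    The core computation is compact: eigenvalues of the cross-correlation matrix of returns in a moving window, with the largest eigenvalue tracking the market-wide mode. The sketch below uses simulated returns in place of commodity futures data; window length and step are assumptions.

```python
# Rolling-window eigenvalue analysis of a return cross-correlation matrix.
import numpy as np

rng = np.random.default_rng(0)
n_days, n_assets, window = 750, 20, 126           # ~6-month trading window
market = rng.normal(size=n_days)                  # common factor
returns = 0.5 * market[:, None] + rng.normal(size=(n_days, n_assets))

largest_eig = []
for start in range(0, n_days - window, 21):       # step ~1 month
    C = np.corrcoef(returns[start:start + window].T)
    eigvals = np.linalg.eigvalsh(C)               # ascending order
    largest_eig.append(eigvals[-1])               # eigenvalue of PC1

# a sharp drop in the largest eigenvalue would signal a change of market state
print("largest eigenvalue per window:", np.round(largest_eig, 2))
```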

  1. Determining the number of components in principal components analysis: A comparison of statistical, crossvalidation and approximated methods

    NARCIS (Netherlands)

    Saccenti, E.; Camacho, J.

    2015-01-01

    Principal component analysis is one of the most commonly used multivariate tools to describe and summarize data. Determining the optimal number of components in a principal component model is a fundamental problem in many fields of application. In this paper we compare the performance of several

  2. Importance Analysis of In-Service Testing Components for Ulchin Unit 3

    International Nuclear Information System (INIS)

    Dae-Il Kan; Kil-Yoo Kim; Jae-Joo Ha

    2002-01-01

    We performed an importance analysis of In-Service Testing (IST) components for Ulchin Unit 3 using the integrated evaluation method for categorizing component safety significance developed in this study. The importance analysis using the developed method begins by ranking component importance using quantitative PSA information. The importance analysis of the IST components not modeled in the PSA is performed through engineering judgment, based on PSA expertise and on the quantitative and qualitative information available for the IST components. The PSA scope for the importance analysis includes not only the Level 1 and 2 internal-events PSA but also the Level 1 external-events and shutdown/low-power-operation PSA. The importance analysis results for valves show that 167 (26.55%) of the 629 IST valves are HSSCs and 462 (73.45%) are LSSCs. Those for pumps show that 28 (70%) of the 40 IST pumps are HSSCs and 12 (30%) are LSSCs. (authors)

  3. Independent component analysis in non-hypothesis driven metabolomics

    DEFF Research Database (Denmark)

    Li, Xiang; Hansen, Jakob; Zhao, Xinjie

    2012-01-01

    In a non-hypothesis-driven metabolomics approach, plasma samples collected at six different time points (before, during and after an exercise bout) were analyzed by gas chromatography-time of flight mass spectrometry (GC-TOF MS). Since independent component analysis (ICA) does not need a priori information on the investigated process and can moreover separate statistically independent source signals with non-Gaussian distributions, we aimed to elucidate the analytical power of ICA for metabolic pattern analysis and the identification of key metabolites in this exercise study. A novel approach based on descriptive statistics was established to optimize the ICA model. In the GC-TOF MS data set, the number of principal components after whitening and the number of independent components of ICA were optimized and systematically selected by descriptive statistics. The elucidated dominating independent...
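
    A generic scikit-learn sketch of ICA on a metabolomics-like matrix. The fixed numbers of components below stand in for the descriptive-statistics optimization the authors describe, and the data are simulated, not from the study.

```python
# FastICA on a samples-by-metabolites matrix with two hidden source patterns.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_samples, n_metabolites = 60, 200
S = rng.laplace(size=(n_samples, 2))            # non-Gaussian source signals
A = rng.normal(size=(2, n_metabolites))         # mixing across metabolites
X = S @ A + 0.05 * rng.normal(size=(n_samples, n_metabolites))

ica = FastICA(n_components=2, random_state=0)
scores = ica.fit_transform(X)                   # independent component scores
print("scores:", scores.shape, "mixing:", ica.mixing_.shape)

# candidate "key metabolites": largest-magnitude loadings on IC1
top = np.argsort(np.abs(ica.mixing_[:, 0]))[::-1][:5]
print("top metabolite indices for IC1:", top)
```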

  4. Caldwell University's Department of Applied Behavior Analysis.

    Science.gov (United States)

    Reeve, Kenneth F; Reeve, Sharon A

    2016-05-01

    Since 2004, faculty members at Caldwell University have developed three successful graduate programs in Applied Behavior Analysis (i.e., PhD, MA, non-degree programs), increased program faculty from two to six members, developed and operated an on-campus autism center, and begun a stand-alone Applied Behavior Analysis Department. This paper outlines a number of strategies used to advance these initiatives, including those associated with an extensive public relations campaign. We also outline challenges that have limited our programs' growth. These strategies, along with a consideration of potential challenges, might prove useful in guiding academicians who are interested in starting their own programs in behavior analysis.

  5. Vibrational spectroscopy and principal component analysis for conformational study of virus nucleic acids

    Science.gov (United States)

    Dovbeshko, G. I.; Repnytska, O. P.; Pererva, T.; Miruta, A.; Kosenkov, D.

    2004-07-01

    Conformation analysis of mutated DNA-bacteriophages (PLys-23, P23-2, P47; the numbers have been assigned by T. Pererva) induced by MS2 virus incorporated in E. coli AB 259 Hfr 3000 has been carried out. Surface-enhanced infrared absorption (SEIRA) spectroscopy and principal component analysis were applied to this problem. The nucleic acids isolated from the mutated phages had the form of double-stranded DNA with different modifications. The nucleic acid from phage P47 underwent structural rearrangement to the greatest degree. The shape and position of the fine structure of the asymmetric phosphate band at 1071 cm-1, as well as the OH stretching vibration at 3370-3390 cm-1, indicated the appearance of additional OH groups. Z-form features were found in the base vibration region (1694 cm-1) and the sugar region (932 cm-1). It is proposed that the DNA structure of phage P47 is modified by Z-fragments. The P23-2 and PLys-23 phages also showed numerous minor structural changes. On the basis of the SEIRA spectra we determined the characteristic parameters of the nucleic acid marker bands used for construction of the principal components, and estimated the contribution of the different spectral parameters of the nucleic acids to the principal components.

  6. An application of principal component analysis to the clavicle and clavicle fixation devices.

    Science.gov (United States)

    Daruwalla, Zubin J; Courtis, Patrick; Fitzpatrick, Clare; Fitzpatrick, David; Mullett, Hannan

    2010-03-26

    Principal component analysis (PCA) enables the building of statistical shape models of bones and joints. This has been used in conjunction with computer assisted surgery in the past. However, PCA of the clavicle has not been performed. Using PCA, we present a novel method that examines the major modes of size and three-dimensional shape variation in male and female clavicles and suggests a method of grouping the clavicle into size and shape categories. Twenty-one high-resolution computerized tomography scans of the clavicle were reconstructed and analyzed using a specifically developed statistical software package. After performing statistical shape analysis, PCA was applied to study the factors that account for anatomical variation. The first principal component, representing size, accounted for 70.5 percent of anatomical variation. The addition of a further three principal components accounted for almost 87 percent. Using statistical shape analysis, clavicles in males have a greater lateral depth and are longer, wider and thicker than in females. However, the sternal angle in females is larger than in males. PCA confirmed these differences between genders but also noted that men exhibit greater variance, and classified clavicles into five morphological groups. This unique approach is the first that standardizes a clavicular orientation. It provides information that is useful to both the biomedical engineer and the clinician. Other applications include implant design with regard to modifying current or designing future clavicle fixation devices. Our findings support the need for further development of clavicle fixation devices and raise the question of whether gender-specific devices are necessary.
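
    Conceptually, the statistical shape model is PCA on flattened landmark coordinates. The sketch below uses random stand-in "bones" with a built-in size mode, so the first principal component captures size much as reported above; it is illustrative only, with no clavicle data.

```python
# PCA-based statistical shape model on synthetic 3-D landmark sets.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_bones, n_landmarks = 21, 50
mean_shape = rng.normal(size=(n_landmarks, 3))
# dominant "size" mode plus small individual variation
scale = 1.0 + 0.2 * rng.normal(size=n_bones)
shapes = (scale[:, None, None] * mean_shape
          + 0.02 * rng.normal(size=(n_bones, n_landmarks, 3)))

X = shapes.reshape(n_bones, -1)           # one flattened shape vector per bone
pca = PCA().fit(X)
print("variance explained by PC1: %.1f%%"
      % (100 * pca.explained_variance_ratio_[0]))

# a new plausible shape: mean + 2 standard deviations along mode 1
new_shape = (pca.mean_ + 2 * np.sqrt(pca.explained_variance_[0])
             * pca.components_[0]).reshape(n_landmarks, 3)
print("generated shape array:", new_shape.shape)
```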

  7. INCREMENTAL PRINCIPAL COMPONENT ANALYSIS BASED OUTLIER DETECTION METHODS FOR SPATIOTEMPORAL DATA STREAMS

    Directory of Open Access Journals (Sweden)

    A. Bhushan

    2015-07-01

    Full Text Available In this paper, we address outliers in spatiotemporal data streams obtained from sensors placed across geographically distributed locations. Outliers may appear in such sensor data for various reasons, such as instrumental error and environmental change. Real-time detection of these outliers is essential to prevent the propagation of errors in subsequent analyses and results. Incremental Principal Component Analysis (IPCA) is one possible approach for detecting outliers in such spatiotemporal data streams. IPCA has been widely used in many real-time applications such as credit card fraud detection, pattern recognition, and image analysis. However, the suitability of applying IPCA to outlier detection in spatiotemporal data streams is unknown and needs to be investigated. To fill this research gap, this paper contributes by presenting two new IPCA-based outlier detection methods and performing a comparative analysis with existing IPCA-based outlier detection methods to assess their suitability for spatiotemporal sensor data streams.
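
    One possible IPCA-based stream detector, sketched below as a generic illustration rather than either of the paper's two methods: update scikit-learn's IncrementalPCA batch by batch and flag points whose reconstruction error exceeds an adaptive threshold. The threshold rule and the synthetic sensor data are assumptions.

```python
# Streaming outlier flagging with IncrementalPCA reconstruction error.
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)
ipca = IncrementalPCA(n_components=2)
threshold = None

for batch_idx in range(20):
    batch = rng.normal(size=(100, 5))
    batch[:, 1] = batch[:, 0] * 2          # correlated "sensor" structure
    if batch_idx == 10:
        batch[0] += 15.0                   # inject an outlier
    ipca.partial_fit(batch)                # incremental model update
    recon = ipca.inverse_transform(ipca.transform(batch))
    err = np.linalg.norm(batch - recon, axis=1)
    if threshold is not None:
        outliers = np.where(err > threshold)[0]
        if outliers.size:
            print(f"batch {batch_idx}: outlier rows {outliers.tolist()}")
    threshold = err.mean() + 3 * err.std() # assumed adaptive threshold rule
```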

  8. A novel approach to analyzing fMRI and SNP data via parallel independent component analysis

    Science.gov (United States)

    Liu, Jingyu; Pearlson, Godfrey; Calhoun, Vince; Windemuth, Andreas

    2007-03-01

    There is current interest in understanding genetic influences on brain function in both the healthy and the disordered brain. Parallel independent component analysis, a new method for analyzing multimodal data, is proposed in this paper and applied to functional magnetic resonance imaging (fMRI) and a single nucleotide polymorphism (SNP) array. The method aims to identify the independent components of each modality and the relationship between the two modalities. We analyzed 92 participants, including 29 schizophrenia (SZ) patients, 13 unaffected SZ relatives, and 50 healthy controls. We found a correlation of 0.79 between one fMRI component and one SNP component. The fMRI component consists of activations in the cingulate gyrus, multiple frontal gyri, and the superior temporal gyrus. The related SNP component receives significant contributions from 9 SNPs located in a set of genes, including those coding for apolipoproteins A-I and C-III, malate dehydrogenase 1, and the gamma-aminobutyric acid alpha-2 receptor. A significant difference in the presence of this SNP component was found between the SZ group (SZ patients and their relatives) and the control group. In summary, we constructed a framework to identify interactions between brain functional and genetic information; our findings provide new insight into understanding genetic influences on brain function in a common mental disorder.

  9. Registration of dynamic dopamine D2 receptor images using principal component analysis

    Energy Technology Data Exchange (ETDEWEB)

    Acton, P.D.; Ell, P.J. [Institute of Nuclear Medicine, University College London Medical School, London (United Kingdom); Pilowsky, L.S.; Brammer, M.J. [Institute of Psychiatry, De Crespigny Park, London (United Kingdom); Suckling, J. [Clinical Age Research Unit, Kings College School of Medicine and Dentistry, London (United Kingdom)

    1997-11-01

    This paper describes a novel technique for registering a dynamic sequence of single-photon emission tomography (SPET) dopamine D2 receptor images, using principal component analysis (PCA). Conventional methods for registering images, such as count difference and correlation coefficient algorithms, fail to take into account the dynamic nature of the data, resulting in large systematic errors when registering time-varying images. However, by using principal component analysis to extract the temporal structure of the image sequence, misregistration can be quantified by examining the distribution of eigenvalues. The registration procedures were tested using a computer-generated dynamic phantom derived from a high-resolution magnetic resonance image of a realistic brain phantom. Each method was also applied to clinical SPET images of dopamine D2 receptors, using the ligands iodine-123 iodobenzamide and iodine-123 epidepride, to investigate the influence of misregistration on kinetic modelling parameters and the binding potential. The PCA technique gave highly significant (P < 0.001) improvements in image registration, leading to alignment errors in x and y of about 25% of the alternative methods, with reductions in autocorrelations over time. It could also be applied to align image sequences which the other methods failed completely to register, particularly 123I-epidepride scans. The PCA method produced data of much greater quality for subsequent kinetic modelling, with an improvement of nearly 50% in the χ2 of the fit to the compartmental model, and provided superior quality registration of particularly difficult dynamic sequences. (orig.) With 4 figs., 2 tabs., 26 refs.

  10. Approximate Analysis of Multi-State Weighted k-Out-of-n Systems Applied to Transmission Lines

    Directory of Open Access Journals (Sweden)

    Xiaogang Song

    2017-10-01

    Full Text Available Multi-state weighted k-out-of-n systems are widely applied in various scenarios, such as multi-line transmission systems (power or oil transmission lines), where fault tolerance is desirable. However, the complex operating environment and the dynamic features of load demands complicate the evaluation of system reliability. In this paper, a stochastic multiple-valued (SMV) approach is proposed to efficiently predict the reliability of two models of systems: those with non-repairable components and those with dynamically repairable components. The weights/performances and reliabilities of multi-state components (MSCs) are represented by stochastic sequences consisting of a fixed number of multi-state values with randomly permuted positions. Using stochastic sequences with L multiple values, the SMV approach computes the reliability of different multi-state k-out-of-n systems at reasonable accuracy with a computational complexity linear in the parameters n and L, compared to the complexities of universal generating functions (UGF) and fuzzy universal generating functions (FUGF), which increase exponentially with the value of n. The analysis of two benchmarks shows that the proposed SMV approach is more efficient than analysis using UGF or FUGF.

  11. Group-wise Principal Component Analysis for Exploratory Data Analysis

    NARCIS (Netherlands)

    Camacho, J.; Rodriquez-Gomez, Rafael A.; Saccenti, E.

    2017-01-01

    In this paper, we propose a new framework for matrix factorization based on Principal Component Analysis (PCA) where sparsity is imposed. The structure to impose sparsity is defined in terms of groups of correlated variables found in correlation matrices or maps. The framework is based on three new

  12. Post annealing performance evaluation of printable interdigital capacitive sensors by principal component analysis

    KAUST Repository

    Zia, Asif Iqbal

    2015-06-01

    The surface roughness of thin-film gold electrodes induces instability in impedance spectroscopy measurements of capacitive interdigital printable sensors. Post-fabrication thermodynamic annealing was carried out at temperatures ranging from 30 °C to 210 °C in a vacuum oven and the variation in surface morphology of thin-film gold electrodes was observed by scanning electron microscopy. Impedance spectra obtained at different temperatures were translated into equivalent circuit models by applying complex nonlinear least square curve-fitting algorithm. Principal component analysis was applied to deduce the classification of the parameters affected due to the annealing process and to evaluate the performance stability using mathematical model. Physics of the thermodynamic annealing was discussed based on the surface activation energies. The post anneal testing of the sensors validated the achieved stability in impedance measurement. © 2001-2012 IEEE.

  13. Post annealing performance evaluation of printable interdigital capacitive sensors by principal component analysis

    KAUST Repository

    Zia, Asif Iqbal; Mukhopadhyay, Subhas Chandra; Yu, Paklam; Al-Bahadly, Ibrahim H.; Gooneratne, Chinthaka Pasan; Kosel, Jürgen

    2015-01-01

    The surface roughness of thin-film gold electrodes induces instability in impedance spectroscopy measurements of capacitive interdigital printable sensors. Post-fabrication thermodynamic annealing was carried out at temperatures ranging from 30 °C to 210 °C in a vacuum oven and the variation in surface morphology of thin-film gold electrodes was observed by scanning electron microscopy. Impedance spectra obtained at different temperatures were translated into equivalent circuit models by applying complex nonlinear least square curve-fitting algorithm. Principal component analysis was applied to deduce the classification of the parameters affected due to the annealing process and to evaluate the performance stability using mathematical model. Physics of the thermodynamic annealing was discussed based on the surface activation energies. The post anneal testing of the sensors validated the achieved stability in impedance measurement. © 2001-2012 IEEE.

  14. Real Time Engineering Analysis Based on a Generative Component Implementation

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Klitgaard, Jens

    2007-01-01

    The present paper outlines the idea of a conceptual design tool with real time engineering analysis which can be used in the early conceptual design phase. The tool is based on a parametric approach using Generative Components with embedded structural analysis. Each of these components uses the g...

  15. Abstract Interfaces for Data Analysis Component Architecture for Data Analysis Tools

    CERN Document Server

    Barrand, G; Dönszelmann, M; Johnson, A; Pfeiffer, A

    2001-01-01

    The fast turnover of software technologies, in particular in the domain of interactivity (covering user interfaces and visualisation), makes it difficult for a small group of people to produce complete and polished software tools before the underlying technologies make them obsolete. At the HepVis '99 workshop, a working group was formed to improve the production of software tools for data analysis in HENP. Besides promoting a distributed development organisation, one goal of the group is to systematically design a set of abstract interfaces using modern OO analysis and OO design techniques. An initial domain analysis has come up with several categories (components) found in typical data analysis tools: Histograms, Ntuples, Functions, Vectors, Fitter, Plotter, Analyzer and Controller. Special emphasis was put on reducing the couplings between the categories to a minimum, thus optimising re-use and maintainability of each component individually. The interfaces have been defined in Java and C++ and i...

  16. Analysis of the interaction between experimental and applied behavior analysis.

    Science.gov (United States)

    Virues-Ortega, Javier; Hurtado-Parrado, Camilo; Cox, Alison D; Pear, Joseph J

    2014-01-01

    To study the influences between basic and applied research in behavior analysis, we analyzed the coauthorship interactions of authors who published in JABA and JEAB from 1980 to 2010. We paid particular attention to authors who published in both JABA and JEAB (dual authors) as potential agents of cross-field interactions. We present a comprehensive analysis of dual authors' coauthorship interactions using social networks methodology and key word analysis. The number of dual authors more than doubled (26 to 67) and their productivity tripled (7% to 26% of JABA and JEAB articles) between 1980 and 2010. Dual authors stood out in terms of number of collaborators, number of publications, and ability to interact with multiple groups within the field. The steady increase in JEAB and JABA interactions through coauthors and the increasing range of topics covered by dual authors provide a basis for optimism regarding the progressive integration of basic and applied behavior analysis. © Society for the Experimental Analysis of Behavior.

  17. 21 CFR 111.170 - What requirements apply to rejected components, packaging, and labels, and to rejected products...

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs, Vol. 2 (revised 2010-04-01). What requirements apply to rejected components... supplement? 111.170 Section 111.170, Food and Drugs, FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED), FOOD FOR HUMAN CONSUMPTION, CURRENT GOOD MANUFACTURING PRACTICE IN...

  18. Registration of dynamic dopamine D2 receptor images using principal component analysis

    International Nuclear Information System (INIS)

    Acton, P.D.; Ell, P.J.; Pilowsky, L.S.; Brammer, M.J.; Suckling, J.

    1997-01-01

    This paper describes a novel technique for registering a dynamic sequence of single-photon emission tomography (SPET) dopamine D2 receptor images, using principal component analysis (PCA). Conventional methods for registering images, such as count difference and correlation coefficient algorithms, fail to take into account the dynamic nature of the data, resulting in large systematic errors when registering time-varying images. However, by using principal component analysis to extract the temporal structure of the image sequence, misregistration can be quantified by examining the distribution of eigenvalues. The registration procedures were tested using a computer-generated dynamic phantom derived from a high-resolution magnetic resonance image of a realistic brain phantom. Each method was also applied to clinical SPET images of dopamine D2 receptors, using the ligands iodine-123 iodobenzamide and iodine-123 epidepride, to investigate the influence of misregistration on kinetic modelling parameters and the binding potential. The PCA technique gave highly significant (P < 0.001) improvements in image registration, and could be applied to align image sequences which the other methods failed completely to register, particularly 123I-epidepride scans. The PCA method produced data of much greater quality for subsequent kinetic modelling, with an improvement of nearly 50% in the χ2 of the fit to the compartmental model, and provided superior quality registration of particularly difficult dynamic sequences. (orig.)

  19. Component Analysis of Long-Lag, Wide-Pulse Gamma-Ray Burst ...

    Indian Academy of Sciences (India)

    Principal Component Analysis of Long-Lag, Wide-Pulse Gamma-Ray Burst Data. Zhao-Yang Peng & Wen-Shuai Liu, Department of Physics, Yunnan Normal University, Kunming 650500, China. E-mail: pzy@ynao.ac.cn. Abstract: We have carried out a Principal Component Analysis (PCA) of the temporal and spectral ...

  20. A Filtering of Incomplete GNSS Position Time Series with Probabilistic Principal Component Analysis

    Science.gov (United States)

    Gruszczynski, Maciej; Klos, Anna; Bogusz, Janusz

    2018-04-01

    For the first time, we introduce probabilistic principal component analysis (pPCA) for the spatio-temporal filtering of Global Navigation Satellite System (GNSS) position time series, to estimate and remove the Common Mode Error (CME) without interpolation of missing values. We used data from International GNSS Service (IGS) stations which contributed to the latest International Terrestrial Reference Frame (ITRF2014). The efficiency of the proposed algorithm was tested on simulated incomplete time series; CME was then estimated for a set of 25 stations located in Central Europe. The newly applied pPCA was compared with previously used algorithms, which showed that this method is capable of properly spatio-temporally filtering GNSS time series with different observation time spans. We showed that filtering can be carried out with the pPCA method even when two time series in the dataset share fewer than 100 common epochs of observations. The 1st Principal Component (PC) explained more than 36% of the total variance of the time series residuals (series with the deterministic model removed), which, compared with the variances of the other PCs (less than 8% each), means that common signals are significant in GNSS residuals. A clear improvement in the spectral indices of the power-law noise was noticed for the Up component, reflected by an average shift towards white noise from -0.98 to -0.67 (30%). We observed a significant average reduction in the uncertainty of the station velocities estimated from filtered residuals: 35, 28 and 69% for the North, East, and Up components, respectively. The CME series were also analysed in the context of environmental mass loading influences on the filtering results. Subtracting the environmental loading models from the GNSS residuals reduces the estimated CME variance by 20 and 65% for the horizontal and vertical components, respectively.

  1. Classification in hyperspectral images by independent component analysis, segmented cross-validation and uncertainty estimates

    Directory of Open Access Journals (Sweden)

    Beatriz Galindo-Prieto

    2018-02-01

    Full Text Available Independent component analysis combined with various strategies for cross-validation, uncertainty estimation by jack-knifing and estimation of critical Hotelling's T2 limits, as proposed in this paper, is used for classification purposes in hyperspectral images. To the best of our knowledge, the combined approach of methods used in this paper has not previously been applied to hyperspectral imaging analysis for interpretation and classification in the literature. The data analysis performed here aims to distinguish between four different types of plastics, some of them containing brominated flame retardants, from their near-infrared hyperspectral images. The results showed that the approach used here can successfully be applied to unsupervised classification. A comparison of validation approaches, in particular leave-one-out cross-validation and region-of-interest scheme validation, is also presented.

  2. Automotive Exterior Noise Optimization Using Grey Relational Analysis Coupled with Principal Component Analysis

    Science.gov (United States)

    Chen, Shuming; Wang, Dengfeng; Liu, Bo

    This paper investigates the optimal design of the thickness of the sound package applied to a passenger automobile. The major performance indexes selected to evaluate the process are the SPL of the exterior noise and the weight of the sound package, and the corresponding parameters of the sound package are the thickness of the glass wool with aluminum foil for the first layer, the thickness of the glass fiber for the second layer, and the thickness of the PE foam for the third layer. The process fundamentally involves multiple performance characteristics; thus, grey relational analysis, which uses the grey relational grade as the performance index, is employed to determine the optimal combination of layer thicknesses for the designed sound package. Additionally, in order to evaluate the weighting values corresponding to the various performance characteristics, principal component analysis is used to represent their relative importance properly and objectively. The results of the confirmation experiments show that grey relational analysis coupled with principal component analysis can successfully be applied to find the optimal combination of thicknesses for each layer of the sound package material. Therefore, the presented method can be an effective tool to improve vehicle exterior noise and lower the weight of the sound package. In addition, it will also be helpful for other applications in the automotive industry, such as at First Automobile Works in China, Changan Automobile in China, etc.
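
    A compact sketch of the grey relational computation with PCA-derived weights, in the spirit of the paper: normalize smaller-the-better responses, form grey relational coefficients against the ideal sequence, and combine them with first-PC loadings as weights. The experimental matrix and the use of first-PC loadings as the weighting rule are invented for illustration.

```python
# Grey relational analysis with PCA-based weighting of two responses.
import numpy as np

# rows = trials, cols = [exterior SPL (smaller better), weight (smaller better)]
Y = np.array([[72.1, 5.2],
              [70.4, 5.9],
              [71.3, 4.8],
              [69.8, 6.3]])

# smaller-the-better normalization to [0, 1]
Z = (Y.max(axis=0) - Y) / (Y.max(axis=0) - Y.min(axis=0))

# grey relational coefficients with respect to the ideal value 1
delta = np.abs(1.0 - Z)
zeta = 0.5                                  # distinguishing coefficient
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# weights from the first principal component of the normalized responses
cov = np.cov(Z.T)
eigvals, eigvecs = np.linalg.eigh(cov)
w = np.abs(eigvecs[:, -1])
w /= w.sum()

grade = grc @ w                             # weighted grey relational grade
print("grey relational grades:", grade.round(3))
print("best trial:", int(grade.argmax()) + 1)
```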

  3. Application of time series analysis on molecular dynamics simulations of proteins: a study of different conformational spaces by principal component analysis.

    Science.gov (United States)

    Alakent, Burak; Doruker, Pemra; Camurdan, Mehmet C

    2004-09-08

    Time series analysis is applied to the collective coordinates obtained from principal component analysis of independent molecular dynamics simulations of the alpha-amylase inhibitor tendamistat and the immunity protein of colicin E7, based on the Calpha coordinate histories. Even though the principal component directions obtained for each run are considerably different, the dynamics information obtained from these runs is surprisingly similar in terms of time series models and parameters. There are two main differences in the dynamics of the two proteins: the higher density of low frequencies and the larger step sizes for the interminima motions of colicin E7 compared with those of the alpha-amylase inhibitor, which may be attributed to the higher number of residues of colicin E7 and/or the structural differences between the two proteins. The cumulative density function of the low frequencies in each run conforms to the expectations from normal mode analysis. When different runs of the alpha-amylase inhibitor are projected on the same set of eigenvectors, it is found that principal components obtained from a certain conformational region of a protein have moderate explanatory power in other conformational regions, and that the local minima are similar to a certain extent, while the heights of the energy barriers between the minima change significantly. As a final remark, time series analysis tools are further exploited in this study with the motive of explaining the equilibrium fluctuations of proteins. Copyright 2004 American Institute of Physics

  4. Applied Behavior Analysis and Statistical Process Control?

    Science.gov (United States)

    Hopkins, B. L.

    1995-01-01

    Incorporating statistical process control (SPC) methods into applied behavior analysis is discussed. It is claimed that SPC methods would likely reduce applied behavior analysts' intimate contacts with problems and would likely yield poor treatment and research decisions. Cases and data presented by Pfadt and Wheeler (1995) are cited as examples.…

  5. Use of Sparse Principal Component Analysis (SPCA) for Fault Detection

    DEFF Research Database (Denmark)

    Gajjar, Shriram; Kulahci, Murat; Palazoglu, Ahmet

    2016-01-01

    Principal component analysis (PCA) has been widely used for data dimension reduction and process fault detection. However, interpreting the principal components and the outcomes of PCA-based monitoring techniques is a challenging task since each principal component is a linear combination of the ...
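
    The interpretability gain is easy to see side by side: scikit-learn's SparsePCA drives most loadings to exactly zero, so each component involves only a few variables. A synthetic illustration (the data and the penalty value are invented):

```python
# Dense PCA loadings vs. sparse PCA loadings on synthetic factor data.
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(0)
n = 300
f1, f2 = rng.normal(size=n), rng.normal(size=n)
# variables 0-2 driven by factor 1, variables 3-4 by factor 2, plus noise
X = np.column_stack([f1, f1, f1, f2, f2, rng.normal(size=n)])
X -= X.mean(axis=0)

dense = PCA(n_components=2).fit(X)
sparse = SparsePCA(n_components=2, alpha=1.0, random_state=0).fit(X)

print("dense loadings:\n", dense.components_.round(2))
print("sparse loadings:\n", sparse.components_.round(2))  # mostly zeros
```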

  6. Integrating Data Transformation in Principal Components Analysis

    KAUST Repository

    Maadooliat, Mehdi; Huang, Jianhua Z.; Hu, Jianhua

    2015-01-01

    Principal component analysis (PCA) is a popular dimension reduction method to reduce the complexity and obtain the informative aspects of high-dimensional datasets. When the data distribution is skewed, data transformation is commonly used prior

  7. NEPR Principle Component Analysis - NOAA TIFF Image

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This GeoTiff is a representation of seafloor topography in Northeast Puerto Rico derived from a bathymetry model with a principal component analysis (PCA). The area...

  8. Applied multivariate statistical analysis

    CERN Document Server

    Härdle, Wolfgang Karl

    2015-01-01

    Focusing on high-dimensional applications, this 4th edition presents the tools and concepts used in multivariate data analysis in a style that is also accessible for non-mathematicians and practitioners.  It surveys the basic principles and emphasizes both exploratory and inferential statistics; a new chapter on Variable Selection (Lasso, SCAD and Elastic Net) has also been added.  All chapters include practical exercises that highlight applications in different multivariate data analysis fields: in quantitative financial studies, where the joint dynamics of assets are observed; in medicine, where recorded observations of subjects in different locations form the basis for reliable diagnoses and medication; and in quantitative marketing, where consumers’ preferences are collected in order to construct models of consumer behavior.  All of these examples involve high to ultra-high dimensions and represent a number of major fields in big data analysis. The fourth edition of this book on Applied Multivariate ...

  9. PCA: Principal Component Analysis for spectra modeling

    Science.gov (United States)

    Hurley, Peter D.; Oliver, Seb; Farrah, Duncan; Wang, Lingyu; Efstathiou, Andreas

    2012-07-01

    The mid-infrared spectra of ultraluminous infrared galaxies (ULIRGs) contain a variety of spectral features that can be used as diagnostics to characterize the spectra. However, such diagnostics are biased by our prior prejudices about the origin of the features. Moreover, by using only part of the spectrum they do not utilize the full information content of the spectra. Blind statistical techniques such as principal component analysis (PCA) consider the whole spectrum, find correlated features and separate them out into distinct components. This code, written in IDL, classifies principal components of IRS spectra to define a new classification scheme using 5D Gaussian mixture modelling. The five PCs and the average spectra for the four classifications used to classify objects are made available with the code.
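
    The released code is written in IDL; the following is an illustrative Python analogue of the pipeline it describes, with simulated spectra and 2 PCs / 2 classes in place of the paper's 5 PCs and 4 classifications: project spectra onto principal components, then cluster the PC scores with a Gaussian mixture model.

```python
# PCA projection of spectra followed by Gaussian mixture classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
wav = np.linspace(5, 38, 200)                        # pseudo mid-IR grid
continuum = np.outer(rng.uniform(0.5, 2, 120), wav / wav.mean())
feature = np.exp(-0.5 * ((wav - 11.3) / 0.8) ** 2)   # PAH-like emission bump
strength = np.concatenate([rng.uniform(0.0, 0.3, 60),
                           rng.uniform(1.0, 2.0, 60)])
spectra = continuum + np.outer(strength, feature)

scores = PCA(n_components=2).fit_transform(spectra)  # PC scores per object
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(scores)
print("objects per class:", np.bincount(labels))
```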

  10. BUSINESS PROCESS MANAGEMENT SYSTEMS TECHNOLOGY COMPONENTS ANALYSIS

    Directory of Open Access Journals (Sweden)

    Andrea Giovanni Spelta

    2007-05-01

    Full Text Available The information technology that supports the implementation of the business process management approach is called a Business Process Management System (BPMS). The main components of the BPMS solution framework are the process definition repository, process instances repository, transaction manager, connectors framework, process engine and middleware. In this paper we define and characterize the role and importance of the components of the BPMS framework. The research method adopted was the case study, through analysis of the implementation of a BPMS solution in an insurance company called Chubb do Brasil. In the case study, the process "Manage Coinsured Events" is described and characterized, as well as the components of the BPMS solution adopted and implemented by Chubb do Brasil for managing this process.

  11. Constrained Null Space Component Analysis for Semiblind Source Separation Problem.

    Science.gov (United States)

    Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn

    2018-02-01

    The blind source separation (BSS) problem consists of extracting unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on the sources or on how the sources are mixed. The constrained independent component analysis (ICA) approach has been studied as a way to impose constraints on the well-known ICA framework. We introduce an alternative approach based on the null space component analysis (NCA) framework, referred to as the c-NCA approach. We also present the c-NCA algorithm, which uses signal-dependent semidefinite operators, a bilinear mapping, as signatures for operator design in the c-NCA approach. Theoretically, we show that the source estimation of the c-NCA algorithm converges, with a convergence rate that depends on the decay of the sequence obtained by applying the estimated operators to the corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization method, and thus it can take advantage of solvers developed by the optimization community for solving the BSS problem. As examples, we demonstrate that electroencephalogram interference rejection problems can be solved by the c-NCA with proximal splitting algorithms, by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.

  12. Judging complex movement performances for excellence: a principal components analysis-based technique applied to competitive diving.

    Science.gov (United States)

    Young, Cole; Reinkensmeyer, David J

    2014-08-01

    Athletes rely on subjective assessment of complex movements from coaches and judges to improve their motor skills. In some sports, such as diving, snowboard half-pipe, gymnastics, and figure skating, subjective scoring forms the basis for competition. It is currently unclear whether this scoring process can be mathematically modeled; doing so could provide insight into what motor skill is. Principal components analysis has been proposed as a motion analysis method for identifying fundamental units of coordination. We used PCA to analyze the movement quality of dives taken from USA Diving's 2009 World Team Selection Camp, first identifying eigenpostures associated with dives, and then using the eigenpostures and their temporal weighting coefficients, as well as elements commonly assumed to affect scoring - gross body path, splash area, and board tip motion - to identify eigendives. Within this eigendive space we predicted actual judges' scores using linear regression. This technique rated dives with accuracy comparable to that of the human judges. The temporal weighting of the eigenpostures, body center path, splash area, and board tip motion affected the score, but the eigenpostures themselves did not. These results illustrate that (1) subjective scoring in a competitive diving event can be mathematically modeled; (2) the elements commonly assumed to affect dive scoring actually do affect scoring; and (3) skill in elite diving is more associated with the gross body path and the effect of the movement on the board and water than with the units of coordination that PCA extracts, which might reflect the high level of technique these divers had achieved. We also illustrate how eigendives can be used to produce dive animations that an observer can distort continuously from poor to excellent, which is a novel approach to performance visualization. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Clustering analysis of water distribution systems: identifying critical components and community impacts.

    Science.gov (United States)

    Diao, K; Farmani, R; Fu, G; Astaraie-Imani, M; Ward, S; Butler, D

    2014-01-01

    Large water distribution systems (WDSs) are networks with both topological and behavioural complexity. It is therefore usually difficult to identify the key features of the properties of the system, and hence all the critical components within the system, for a given design or control purpose. One way, however, is to visualize the network structure and the interactions between components more explicitly by dividing a WDS into a number of clusters (subsystems). Accordingly, this paper introduces a clustering strategy that decomposes WDSs into clusters with stronger internal connections than external connections. The detected cluster layout is very similar to the community structure of the served urban area. As WDSs may expand along with urban development in a community-by-community manner, the clusters formed in this way may reveal some crucial configurations of WDSs. For verification, the method is applied to identify all the critical links during firefighting for the vulnerability analysis of a real-world WDS. Moreover, both the most critical pipes and the most critical clusters are identified, given the consequences of pipe failure. Compared with the enumeration method, the method used in this study identifies the same group of most critical components and provides similar criticality prioritizations, with far greater computational efficiency.

  14. An application of principal component analysis to the clavicle and clavicle fixation devices

    Directory of Open Access Journals (Sweden)

    Fitzpatrick David

    2010-03-01

    Full Text Available Abstract Background Principal component analysis (PCA) enables the building of statistical shape models of bones and joints. This has been used in conjunction with computer-assisted surgery in the past. However, PCA of the clavicle has not been performed. Using PCA, we present a novel method that examines the major modes of size and three-dimensional shape variation in male and female clavicles and suggests a method of grouping the clavicle into size and shape categories. Materials and methods Twenty-one high-resolution computerized tomography scans of the clavicle were reconstructed and analyzed using a specifically developed statistical software package. After performing statistical shape analysis, PCA was applied to study the factors that account for anatomical variation. Results The first principal component, representing size, accounted for 70.5 percent of anatomical variation. The addition of a further three principal components accounted for almost 87 percent. Using statistical shape analysis, clavicles in males have a greater lateral depth and are longer, wider and thicker than in females. However, the sternal angle in females is larger than in males. PCA confirmed these differences between genders but also noted that men exhibit greater variance; it classified clavicles into five morphological groups. Discussion and Conclusions This unique approach is the first that standardizes a clavicular orientation. It provides information that is useful to both the biomedical engineer and the clinician. Other applications include implant design with regard to modifying current or designing future clavicle fixation devices. Our findings support the need for further development of clavicle fixation devices and raise the question of whether gender-specific devices are necessary.
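
    A rough sketch of a PCA statistical shape model of the kind the record describes, assuming each bone is already represented by corresponding 3-D landmarks; all data here are synthetic:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_bones, n_landmarks = 21, 200                 # 21 scans, as in the record
mean_shape = rng.normal(size=(n_landmarks, 3))
shapes = mean_shape + 0.05 * rng.normal(size=(n_bones, n_landmarks, 3))

X = shapes.reshape(n_bones, -1)                # one row of x-y-z coordinates per bone
pca = PCA(n_components=4).fit(X)
print("variance explained per mode:", pca.explained_variance_ratio_.round(3))

# New plausible shapes are generated by moving along a mode of variation,
# e.g. +2 standard deviations along the first ("size") mode:
b = np.zeros(4)
b[0] = 2.0
new_shape = pca.mean_ + (b * np.sqrt(pca.explained_variance_)) @ pca.components_
new_shape = new_shape.reshape(n_landmarks, 3)
```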

  15. Towards intelligent video understanding applied to plasma facing component monitoring

    International Nuclear Information System (INIS)

    Martin, V.; Travere, J.M.; Moncada, V.; Bremond, F.

    2011-01-01

    In this paper, we promote intelligent plasma facing component (PFC) video monitoring for both real-time purposes (machine protection issues) and post-event analysis purposes (plasma-wall interaction understanding). We propose a vision-based system able to automatically detect and classify into different pre-defined categories thermal phenomena such as localized hot spots or transient thermal events (e.g. electrical arcing) from infrared imaging data of PFCs. This original computer vision system is made intelligent by endowing it with high-level reasoning (i.e. integration of a priori knowledge of thermal event spatio-temporal properties to guide the recognition), self-adaptability to varying conditions (e.g. different thermal scenes and plasma scenarios), and learning capabilities (e.g. statistical modelling of event behaviour based on training samples). (authors)

  16. Efficacy of the Principal Components Analysis Techniques Using ...

    African Journals Online (AJOL)

    Second, the paper reports results of principal components analysis after the artificial data were submitted to three commonly used procedures (scree plot, Kaiser rule, and modified Horn's parallel analysis), and demonstrates the pedagogical utility of using artificial data in teaching advanced quantitative concepts. The results ...

  17. Functional Generalized Structured Component Analysis.

    Science.gov (United States)

    Suk, Hye Won; Hwang, Heungsun

    2016-12-01

    An extension of Generalized Structured Component Analysis (GSCA), called Functional GSCA, is proposed to analyze functional data that are considered to arise from an underlying smooth curve varying over time or other continua. GSCA was developed for the analysis of multivariate data. Accordingly, it cannot deal with functional data, which often involve different measurement occasions across participants and a number of measurement occasions that exceeds the number of participants. Functional GSCA addresses these issues by integrating GSCA with spline basis function expansions that project infinite-dimensional curves onto a finite-dimensional space. For parameter estimation, Functional GSCA minimizes a penalized least squares criterion by using an alternating penalized least squares estimation algorithm. The usefulness of Functional GSCA is illustrated with gait data.
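
    A sketch of the basis-expansion step that Functional GSCA builds on: an irregularly sampled curve is reduced to a finite vector of B-spline coefficients. The curve, knot placement, and spline degree are invented for illustration:

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 1, 37))             # irregular measurement occasions
y = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=t.size)

k = 3                                          # cubic B-splines
knots = np.concatenate([[0.0] * (k + 1), np.linspace(0.1, 0.9, 9), [1.0] * (k + 1)])
n_basis = len(knots) - k - 1

# Design matrix: each column is one basis function evaluated at the t's.
B = np.column_stack([
    BSpline.basis_element(knots[i:i + k + 2], extrapolate=False)(t)
    for i in range(n_basis)
])
B = np.nan_to_num(B)                           # basis functions vanish off their support

coef, *_ = np.linalg.lstsq(B, y, rcond=None)   # finite-dimensional representation
print("curve represented by", n_basis, "spline coefficients")
```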

  18. Exploring functional data analysis and wavelet principal component analysis on ecstasy (MDMA) wastewater data

    Directory of Open Access Journals (Sweden)

    Stefania Salvatore

    2016-07-01

    Full Text Available Abstract Background Wastewater-based epidemiology (WBE) is a novel approach in drug use epidemiology which aims to monitor the extent of use of various drugs in a community. In this study, we investigate functional principal component analysis (FPCA) as a tool for analysing WBE data and compare it to traditional principal component analysis (PCA) and to wavelet principal component analysis (WPCA), which is more flexible temporally. Methods We analysed temporal wastewater data from 42 European cities collected daily over one week in March 2013. The main temporal features of ecstasy (MDMA) were extracted using FPCA with both Fourier and B-spline basis functions and three different smoothing parameters, along with PCA and WPCA with different mother wavelets and shrinkage rules. The stability of FPCA was explored through bootstrapping and analysis of sensitivity to missing data. Results The first three principal components (PCs), functional principal components (FPCs) and wavelet principal components (WPCs) explained 87.5-99.6% of the temporal variation between cities, depending on the choice of basis and smoothing. The extracted temporal features from PCA, FPCA and WPCA were consistent. FPCA using a Fourier basis and common-optimal smoothing was the most stable and least sensitive to missing data. Conclusion FPCA is a flexible and analytically tractable method for analysing temporal changes in wastewater data, and is robust to missing data. WPCA did not reveal any rapid temporal changes in the data not captured by FPCA. Overall the results suggest FPCA with Fourier basis functions and a common-optimal smoothing parameter as the most accurate approach when analysing WBE data.
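
    A simplified FPCA in the spirit of the record: each city's daily series is expanded in a small Fourier basis, and PCA is then run on the basis coefficients. The dimensions follow the abstract (42 cities, 7 days), but the series themselves are synthetic:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n_cities, n_days = 42, 7
t = np.linspace(0, 1, n_days, endpoint=False)

# Fourier design matrix: constant plus sin/cos of the first two harmonics.
F = np.column_stack([np.ones_like(t)] + [
    f(2 * np.pi * h * t) for h in (1, 2) for f in (np.sin, np.cos)
])

series = rng.normal(size=(n_cities, n_days))            # stand-in wastewater loads
coefs = np.linalg.lstsq(F, series.T, rcond=None)[0].T   # smooth representation

fpca = PCA(n_components=3).fit(coefs)
print("temporal variation explained:", fpca.explained_variance_ratio_.sum())

# Each functional PC is itself a curve: map component loadings back through F.
fpc_curves = fpca.components_ @ F.T                     # shape (3, n_days)
```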

  19. Analysis of the frequency components of X-ray images

    International Nuclear Information System (INIS)

    Matsuo, Satoru; Komizu, Mitsuru; Kida, Tetsuo; Noma, Kazuo; Hashimoto, Keiji; Onishi, Hideo; Masuda, Kazutaka

    1997-01-01

    We examined the relation between the frequency components of x-ray images of the chest and phalanges and the read sizes used for digitizing them. Images of the chest and phalanges were radiographed using three types of screens and films, and the noise images in background density were digitized with a drum scanner at varying read sizes. The frequency components of these images were evaluated by applying a two-dimensional Fourier transform to obtain the power spectrum and signal-to-noise ratio (SNR). Varying the cut-off frequency of a low-pass filter applied to the power spectrum, we also examined the frequency components of the images in terms of the normalized mean square error (NMSE) between the inverse-Fourier-transformed image and the original image. Results showed that the frequency components extended to 2.0 cycles/mm for the chest image and 6.0 cycles/mm for the phalanges. Therefore, it is necessary to collect data with read sizes of 200 μm and 50 μm for the chest and phalangeal images, respectively, in order to digitize these images without loss of their frequency components. (author)
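
    A sketch of that pipeline on a synthetic image: 2-D FFT power spectrum, an ideal low-pass filter at a varying cut-off, and the NMSE between the inverse-transformed image and the original:

```python
import numpy as np

rng = np.random.default_rng(4)
img = rng.normal(size=(256, 256))                     # stand-in digitized image

F = np.fft.fftshift(np.fft.fft2(img))
power = np.abs(F) ** 2                                # power spectrum

def lowpass_nmse(cutoff):
    # Ideal low-pass filter with a cut-off radius in frequency pixels.
    ny, nx = img.shape
    y, x = np.ogrid[-ny // 2:ny // 2, -nx // 2:nx // 2]
    mask = (x ** 2 + y ** 2) <= cutoff ** 2
    rec = np.fft.ifft2(np.fft.ifftshift(F * mask)).real
    return np.sum((img - rec) ** 2) / np.sum(img ** 2)   # NMSE

for c in (16, 32, 64, 128):
    print(f"cutoff {c:3d}: NMSE = {lowpass_nmse(c):.4f}")
```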

  20. Independent Component Analysis and Time-Frequency Masking for Speech Recognition in Multitalker Conditions

    Directory of Open Access Journals (Sweden)

    Reinhold Orglmeister

    2010-01-01

    Full Text Available When a number of speakers are simultaneously active, for example in meetings or noisy public places, the sources of interest need to be separated from interfering speakers and from each other in order to be robustly recognized. Independent component analysis (ICA) has proven a valuable tool for this purpose. However, ICA outputs can still contain strong residual components of the interfering speakers whenever noise or reverberation is high. In such cases, nonlinear postprocessing can be applied to the ICA outputs, for the purpose of reducing remaining interferences. In order to improve robustness to the artefacts and loss of information caused by this process, recognition can be greatly enhanced by considering the processed speech feature vector as a random variable with time-varying uncertainty, rather than as deterministic. The aim of this paper is to show the potential to improve recognition of multiple overlapping speech signals through nonlinear postprocessing together with uncertainty-based decoding techniques.
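
    A toy blind source separation in the spirit of the record, using scikit-learn's FastICA on two synthetic mixed signals; the time-frequency masking and uncertainty-decoding stages are beyond this sketch:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 8000)
s1 = np.sign(np.sin(2 * np.pi * 5 * t))               # stand-in source 1
s2 = np.sin(2 * np.pi * 13 * t)                       # stand-in source 2
S = np.c_[s1, s2] + 0.02 * rng.normal(size=(t.size, 2))

A = np.array([[1.0, 0.6], [0.4, 1.0]])                # mixing matrix (two "mics")
X = S @ A.T

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                          # estimated sources
print("correlation of recovered vs. true sources:",
      np.round(np.corrcoef(S_hat.T, S.T)[:2, 2:], 2))
```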

  1. Tomato sorting using independent component analysis on spectral images

    NARCIS (Netherlands)

    Polder, G.; Heijden, van der G.W.A.M.; Young, I.T.

    2003-01-01

    Independent Component Analysis is one of the most widely used methods for blind source separation. In this paper we use this technique to estimate the most important compounds which play a role in the ripening of tomatoes. Spectral images of tomatoes were analyzed. Two main independent components

  2. Independent component analysis reveals new and biologically significant structures in microarray data

    Directory of Open Access Journals (Sweden)

    Veerla Srinivas

    2006-06-01

    Full Text Available Abstract Background An alternative to standard approaches to uncover biologically meaningful structures in microarray data is to treat the data as a blind source separation (BSS) problem. BSS attempts to separate a mixture of signals into their different sources and refers to the problem of recovering signals from several observed linear mixtures. In the context of microarray data, "sources" may correspond to specific cellular responses or to co-regulated genes. Results We applied independent component analysis (ICA) to three different microarray data sets; two tumor data sets and one time series experiment. To obtain reliable components we used iterated ICA to estimate component centrotypes. We found that many of the low ranking components indeed may show a strong biological coherence and hence be of biological significance. Generally, ICA achieved a higher resolution than results based on correlated expression, yielding a larger number of gene clusters significantly enriched for gene ontology (GO) categories. In addition, components characteristic for molecular subtypes and for tumors with specific chromosomal translocations were identified. ICA also identified more than one gene cluster significant for the same GO categories and hence disclosed a higher level of biological heterogeneity, even within coherent groups of genes. Conclusion Although the ICA approach primarily detects hidden variables, these surfaced as highly correlated genes in the time series data and, in one instance, in the tumor data. This further strengthens the biological relevance of latent variables detected by ICA.

  3. LOGICAL CONDITIONS ANALYSIS METHOD FOR DIAGNOSTIC TEST RESULTS DECODING APPLIED TO COMPETENCE ELEMENTS PROFICIENCY

    Directory of Open Access Journals (Sweden)

    V. I. Freyman

    2015-11-01

    Full Text Available Subject of Research. Representation features of education results for competence-based educational programs are analyzed. The importance of decoding and estimating proficiency in the elements and components of discipline-level competences is shown, and the purpose and objectives of the research are formulated. Methods. The paper draws on methods of mathematical logic, Boolean algebra, and parametric analysis of the results of complex diagnostic tests that control proficiency in selected competence elements of a discipline. Results. A method of logical conditions analysis is created. It makes it possible to formulate a logical condition for the proficiency of each discipline competence element controlled by a complex diagnostic test. The normalized test result is divided into non-overlapping zones, and for each zone a logical condition about the proficiency of the controlled elements is formulated. Summary characteristics of the test result zones are given, and an example of forming logical conditions for a diagnostic test with preset features is provided. Practical Relevance. The proposed method of logical conditions analysis is applied in the algorithm for decoding proficiency test diagnoses for discipline competence elements. It makes it possible to automate the search for elements with insufficient proficiency, and it is also usable for estimating the education results of a discipline or a component of a competence-based educational program.

  4. Multi-spectrometer calibration transfer based on independent component analysis.

    Science.gov (United States)

    Liu, Yan; Xu, Hao; Xia, Zhenzhen; Gong, Zhiyong

    2018-02-26

    Calibration transfer is indispensable for practical applications of near infrared (NIR) spectroscopy due to the need for precise and consistent measurements across different spectrometers. In this work, a method for multi-spectrometer calibration transfer is described based on independent component analysis (ICA). A spectral matrix is first obtained by aligning the spectra measured on different spectrometers. Then, using independent component analysis, the aligned spectral matrix is decomposed into the mixing matrix and the independent components of the different spectrometers. The differences in measurements between spectrometers can then be standardized by correcting the coefficients within the independent components. Two NIR datasets of corn and edible oil samples, measured with three and four spectrometers respectively, were used to test the reliability of this method. The results for both datasets reveal that spectra measured on different spectrometers can be transferred simultaneously, and that partial least squares (PLS) models built with the measurements from one spectrometer can correctly predict spectra transferred from another.
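
    A hedged sketch of the transfer idea as described above: spectra from two simulated instruments are decomposed jointly, and the second instrument's component coefficients are mapped onto the first's before reconstruction. This is an interpretation of the method, not the authors' code; all spectra are simulated:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(6)
n, p = 40, 300                                        # samples, wavelengths
sources = rng.normal(size=(3, p))                     # shared spectral signatures
conc = rng.uniform(size=(n, 3))                       # "concentrations"

master = conc @ sources + 0.01 * rng.normal(size=(n, p))
slave = 1.05 * (conc @ sources) + 0.02 + 0.01 * rng.normal(size=(n, p))

# Decompose the aligned (stacked) spectra into shared independent components.
ica = FastICA(n_components=3, random_state=0)
scores = ica.fit_transform(np.vstack([master, slave]))
sc_m, sc_s = scores[:n], scores[n:]

# Standardize the slave instrument: map its component coefficients onto the
# master's by least squares, then rebuild the spectra from the components.
T, *_ = np.linalg.lstsq(sc_s, sc_m, rcond=None)
slave_std = ica.inverse_transform(sc_s @ T)

print("RMSE before:", np.sqrt(np.mean((slave - master) ** 2)))
print("RMSE after :", np.sqrt(np.mean((slave_std - master) ** 2)))
```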

  5. Application of Principal Component Analysis (PCA) to Reduce Multicollinearity Exchange Rate Currency of Some Countries in Asia Period 2004-2014

    Science.gov (United States)

    Rahayu, Sri; Sugiarto, Teguh; Madu, Ludiro; Holiawati; Subagyo, Ahmad

    2017-01-01

    This study aims to apply principal component analysis to reduce multicollinearity among the currency exchange rates of eight Asian countries against the US Dollar: the Yen (Japan), Won (South Korea), Dollar (Hong Kong), Yuan (China), Baht (Thailand), Rupiah (Indonesia), Ringgit (Malaysia), and Dollar (Singapore). It looks at yield…
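
    An illustrative principal component regression on simulated collinear exchange-rate-like series; the predictors are replaced by a few orthogonal components before regressing, which removes the multicollinearity. The data and dimensions are invented:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 120                                              # monthly observations
base = rng.normal(size=(n, 2))                       # two latent market factors
X = base @ rng.normal(size=(2, 8)) + 0.05 * rng.normal(size=(n, 8))  # 8 collinear rates
y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.normal(size=n)

Xs = StandardScaler().fit_transform(X)
corr = np.corrcoef(Xs, rowvar=False)
print("max off-diagonal correlation:", np.max(np.abs(corr - np.eye(8))).round(2))

pcs = PCA(n_components=2).fit_transform(Xs)          # orthogonal by construction
print("PC correlation:", np.corrcoef(pcs, rowvar=False)[0, 1].round(6))

model = LinearRegression().fit(pcs, y)
print("R^2 of the PC regression:", round(model.score(pcs, y), 3))
```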

  6. Independent component analysis applied to pulse oximetry in the estimation of the arterial oxygen saturation (SpO2) - a comparative study

    DEFF Research Database (Denmark)

    Jensen, Thomas; Duun, Sune Bro; Larsen, Jan

    2009-01-01

    We examine various independent component analysis (ICA) digital signal processing algorithms for estimating the arterial oxygen saturation (SpO2) as measured by a reflective pulse oximeter. The ICA algorithms examined are FastICA, Maximum Likelihood ICA (ICAML), Molgedey and Schuster ICA (ICAMS), and Mean Field ICA (ICAMF). The signal processing includes pre-processing bandpass filtering to eliminate noise, and post-processing by calculating the SpO2. The algorithms are compared to the commercial state-of-the-art algorithm Discrete Saturation Transform (DST) by Masimo Corporation...

  7. Fault Localization for Synchrophasor Data using Kernel Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    CHEN, R.

    2017-11-01

    Full Text Available In this paper, based on Kernel Principal Component Analysis (KPCA) of Phasor Measurement Unit (PMU) data, a nonlinear method is proposed for fault location in complex power systems. Resorting to the scaling factor, the derivative for a polynomial kernel is obtained. Then, the contribution of each variable to the T2 statistic is derived to determine whether a bus is the fault component. Compared to previous Principal Component Analysis (PCA) based methods, the proposed version can handle strong nonlinearity and provide precise identification of the fault location. Computer simulations are conducted to demonstrate the improved performance in recognizing the fault component and evaluating its propagation across the system.
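
    A rough sketch of KPCA-based monitoring with a polynomial kernel and a T2-style statistic in the reduced space; the data, threshold, and fault injection are invented, and the paper's variable-wise contribution derivation is omitted:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(8)
normal = rng.normal(size=(500, 10))               # 10 bus measurements, normal ops
faulty = normal[:50].copy()
faulty[:, 3] += 4.0                               # inject a fault at "bus 3"

kpca = KernelPCA(n_components=4, kernel="poly", degree=2).fit(normal)
ref_var = kpca.transform(normal).var(axis=0)      # component variances, normal data

def t2(X):
    # Hotelling-style statistic: squared scores scaled by reference variances.
    z = kpca.transform(X)
    return np.sum(z ** 2 / ref_var, axis=1)

threshold = np.percentile(t2(normal), 99)
print("alarm rate on faulty samples:", np.mean(t2(faulty) > threshold))
```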

  8. Derivation of design response spectra for analysis and testing of components and systems

    International Nuclear Information System (INIS)

    Krutzik, N.

    1996-01-01

    Some institutions participating in the Benchmark Project performed parallel calculations for the WWER-1000 Kozloduy NPP. The investigations were based on various mathematical models and procedures for consideration of soil-structure interaction effects, simultaneously applying uniform soil dynamic and seismological input data. The methods, mathematical models and dynamic response results were evaluated, discussed in detail and finally compared by means of different structural models and soil representations, with the aim of deriving final enveloped and smoothed dynamic response data (benchmark response spectra). These should be used for requalification, by analysis and testing, of the mechanical and electrical components and systems located in this type of reactor building.

  9. Applying Standard Industrial Components for Active Magnetic Bearings

    Directory of Open Access Journals (Sweden)

    Bert-Uwe Koehler

    2017-02-01

    Full Text Available With the increasing number of active magnetic bearing applications, satisfying additional requirements is becoming increasingly important. As with every technology that moves away from being a niche product and reaches a higher level of maturity, these requirements relate to robustness, reliability, availability, safety, security, traceability, certification, handling, flexibility, reporting, costs, and delivery times. Employing standard industrial components, such as those from flexible modular motion control drive systems, is an approach that allows these requirements to be satisfied while achieving rapid technological innovation. In this article, we discuss technical and non-technical aspects of using standard industrial components in magnetic bearing applications.

  10. Effectiveness of thermoluminescence analysis to detect low quantity of gamma-irradiated component in non-irradiated mushroom powders

    International Nuclear Information System (INIS)

    Akram, Kashif; Ahn, Jae-Jun; Shahbaz, Hafiz Muhammad; Jo, Deokjo; Kwon, Joong-Ho

    2013-01-01

    Gamma-irradiated (0–10 kGy) dried mushroom (Lentinus edodes) powders were mixed at different ratios (1–10%) into non-irradiated samples and investigated using photostimulated-luminescence (PSL), electron spin resonance (ESR) and thermoluminescence (TL) techniques. The PSL results were negative for all samples at the 1% mixing ratio, whereas intermediate results were observed for the samples containing a 5% or 10% irradiated component, with the exception (positive) of the 10% mixture of the 10 kGy-irradiated sample. The ESR analysis showed the presence of crystalline sugar radicals in the irradiated samples, but the radiation-specific spectral features were absent in the mixed samples. TL analysis showed the radiation-specific TL glow curves; however, complicated results were observed at 1% mixing of the 2 and 5 kGy-irradiated samples, which required careful evaluation to draw a final conclusion about the irradiation status of the samples. TL ratios could only confirm the results of samples with 5% and 10% mixing of 10 kGy, and 10% mixing of 5 kGy-irradiated components. SEM-EDX analysis showed that feldspar and quartz were the major contaminating minerals responsible for the radiation-specific luminescence characteristics. Highlights: Detection of irradiated food is important to enforce the applied regulations. The effectiveness of TL analysis was investigated to detect an irradiated component. The TL results were compared with those from PSL and ESR analysis. TL analysis was most effective in characterizing the irradiation status of samples. SEM-EDX analysis showed feldspar and quartz as the main source of TL properties.

  12. Applying homotopy analysis method for solving differential-difference equation

    International Nuclear Information System (INIS)

    Wang Zhen; Zou Li; Zhang Hongqing

    2007-01-01

    In this Letter, we apply the homotopy analysis method to the solution of differential-difference equations. A simple but typical example illustrates the validity and the great potential of the generalized homotopy analysis method for solving differential-difference equations. Comparisons are made between the results of the proposed method and exact solutions. The results show that the homotopy analysis method is an attractive method for solving differential-difference equations.

  13. Automatic scatter detection in fluorescence landscapes by means of spherical principal component analysis

    DEFF Research Database (Denmark)

    Kotwa, Ewelina Katarzyna; Jørgensen, Bo Munk; Brockhoff, Per B.

    2013-01-01

    In this paper, we introduce a new method, based on spherical principal component analysis (S-PCA), for the identification of Rayleigh and Raman scatters in fluorescence excitation-emission data. These scatters should be found and eliminated as a pre-step before fitting parallel factor analysis models to the data, in order to avoid model degeneracies. The work is inspired by and based on previous research in which scatter removal was automatic (based on a robust version of PCA called ROBPCA) and required no visual data inspection, but appeared to be computationally intensive. To overcome this drawback, we implement the fast S-PCA in the scatter identification routine. Moreover, an additional pattern interpolation step that complements the method, based on robust regression, is applied. In this way, substantial time savings are gained, and the user's engagement is restricted to a minimum.

  14. Extracting functional components of neural dynamics with Independent Component Analysis and inverse Current Source Density.

    Science.gov (United States)

    Lęski, Szymon; Kublik, Ewa; Swiejkowski, Daniel A; Wróbel, Andrzej; Wójcik, Daniel K

    2010-12-01

    Local field potentials have good temporal resolution but are blurred due to the slow spatial decay of the electric field. For simultaneous recordings on regular grids, one can efficiently reconstruct the current sources (CSD) using the inverse Current Source Density method (iCSD). It is possible to decompose the resultant spatiotemporal information about the current dynamics into functional components using Independent Component Analysis (ICA). We show, on test data modeling recordings of evoked potentials on a grid of 4 × 5 × 7 points, that meaningful results are obtained with spatial ICA decomposition of the reconstructed CSD. The components obtained through decomposition of the CSD are better defined and allow easier physiological interpretation than the results of a similar analysis of the corresponding evoked potentials in the thalamus. We show that spatiotemporal ICA decompositions can perform better for certain types of sources, but this does not seem to be the case for the experimental data studied. Having found an appropriate approach to decomposing neural dynamics into functional components, we use the technique to study the somatosensory evoked potentials recorded on a grid spanning a large part of the forebrain. We discuss two example components associated with the first waves of activation of the somatosensory thalamus. We show that the proposed method brings up new, more detailed information on the time and spatial location of specific activity conveyed through various parts of the somatosensory thalamus in the rat.

  15. IAEA-ASSET's root cause analysis method applied to sodium leakage incident at Monju

    International Nuclear Information System (INIS)

    Watanabe, Norio; Hirano, Masashi

    1997-08-01

    The present study applied the ASSET (Analysis and Screening of Safety Events Team) methodology, which identifies occurrences such as component failures and operator errors, identifies their respective direct/root causes, and determines corrective actions, to the analysis of the sodium leakage incident at Monju, based on reports published mainly by the Science and Technology Agency. The aim was the systematic identification of direct/root causes and corrective actions, and a discussion of the effectiveness and problems of the ASSET methodology. The results revealed the following seven occurrences and showed the direct/root causes and contributing factors for each: failure of the thermometer well tube, delayed reactor manual trip, inadequate continuous monitoring of leakage, misjudgment of the leak rate, a non-required operator action (turbine trip), retarded emergency sodium drainage, and retarded securing of the ventilation system. Most of the occurrences stemmed from deficiencies in emergency operating procedures (EOPs), which were mainly caused by defects in the EOP preparation process and operator training programs. The corrective actions already proposed in the published reports were reviewed, issues to be studied further were identified, and possible corrective actions for these issues were discussed. The present study also demonstrated the effectiveness of the ASSET methodology and pointed out some problems, for example in delineating causal relations among occurrences, when applying it to the detailed and systematic analysis of event direct/root causes and the determination of concrete measures. (J.P.N.)

  16. Experimental modal analysis of components of the LHC experiments

    CERN Document Server

    Guinchard, M; Catinaccio, A; Kershaw, K; Onnela, A

    2007-01-01

    Experimental modal analysis of components of the LHC experiments is performed with the purpose of determining their fundamental frequencies, their damping, and the mode shapes of light and fragile detector components. This process makes it possible to confirm or replace Finite Element analysis in the case of complex structures (with cables and substructure coupling). It helps solve structural mechanical problems to improve operational stability and to determine the acceleration specifications for transport operations. This paper describes the hardware and software equipment used to perform a modal analysis on particular structures such as a particle detector, and the curve-fitting method used to extract the results of the measurements. It also presents the main results obtained for the LHC experiments.

  17. Independent principal component analysis for simulation of soil water content and bulk density in a Canadian Watershed

    Directory of Open Access Journals (Sweden)

    Alaba Boluwade

    2016-09-01

    Full Text Available Accurate characterization of soil properties such as soil water content (SWC) and bulk density (BD) is vital for hydrologic processes; thus, it is important to estimate θ (water content) and ρ (soil bulk density), among other soil surface parameters involved in water retention and infiltration, runoff generation, water erosion, etc. The spatial estimation of these soil properties is important in guiding agricultural management decisions. These soil properties vary both in space and time and are correlated. Therefore, it is important to find an efficient and robust technique to simulate spatially correlated variables. Methods such as principal component analysis (PCA) and independent component analysis (ICA) can be used for the joint simulation of spatially correlated variables, but they are not without their flaws. This study applied a variant of PCA called independent principal component analysis (IPCA), which combines the strengths of both PCA and ICA, to the spatial simulation of SWC and BD using a soil data set from the 11 km² Castor watershed in southern Quebec, Canada. Diagnostic checks using the histograms and cumulative distribution functions (CDFs) of both raw and back-transformed simulations show good agreement. Therefore, the results from this study have potential in characterizing water content variability and bulk density variation for precision agriculture.

  18. Characterizing functional connectivity during rest in multiple sclerosis patients versus healthy volunteers using independent component analysis

    Energy Technology Data Exchange (ETDEWEB)

    Palacio Garcia, L.; Andrzejak, R.; Prchkovska, V.; Rodrigues, P.

    2016-07-01

    It is commonly thought that our brain is not active when it does not receive any external input. However, during rest, there are still certain distant regions of the brain that are functionally correlated with each other: the so-called resting-state networks. This functional connectivity of the brain is disrupted in many neurological diseases. In particular, it has been shown that one of the most studied resting-state networks (the default-mode network) is affected in multiple sclerosis, the most common disabling neurological condition affecting the central nervous system of young adults. In this work, I focus on the study of the differences in the resting-state networks between multiple sclerosis patients and healthy volunteers. In order to study the effects of multiple sclerosis on the functional connectivity of the brain, a numerical method known as independent component analysis (ICA) is applied. This technique divides the resting-state fMRI data into independent components. Nonetheless, noise, which could be due to head motion or physiological artifacts, may corrupt the data by indicating a false activation. Therefore, I create a web user interface that allows the user to manually classify all the independent components for a given subject. The components classified as noise should then be removed from the functional data to prevent them from taking part in any further analysis. (Author)

  19. DEVELOPMENT OF FACE RECOGNITION SOFTWARE USING PRINCIPAL COMPONENTS ANALYSIS

    Directory of Open Access Journals (Sweden)

    Kartika Gunadi

    2001-01-01

    Full Text Available Face recognition is an important research field, and many applications today implement it. Through the development of techniques like Principal Components Analysis (PCA), computers can now outperform humans in many face recognition tasks, particularly those in which large databases of faces must be searched. Principal Components Analysis was used to reduce the dimension of the facial images into fewer variables, which are easier to observe and handle. Those variables were then fed into an artificial neural network using the backpropagation method to recognize the given facial image. The test results show that PCA can provide high face recognition accuracy. For the training faces, a correct identification of 100% could be obtained. Among the network combinations that were tested, the best average correct identification for the test faces was 91.11%, while the worst average result was 46.67% correct identification.

  20. Variability search in M 31 using principal component analysis and the Hubble Source Catalogue

    Science.gov (United States)

    Moretti, M. I.; Hatzidimitriou, D.; Karampelas, A.; Sokolovsky, K. V.; Bonanos, A. Z.; Gavras, P.; Yang, M.

    2018-06-01

    Principal component analysis (PCA) is extensively used in astronomy but has not yet been fully exploited for variability searches. The aim of this work is to investigate the effectiveness of using PCA as a method to search for variable stars in large photometric data sets. We apply PCA to variability indices computed for light curves of 18 152 stars in three fields in M 31 extracted from the Hubble Source Catalogue. The projection of the data onto the principal components is used as a stellar variability detection and classification tool, capable of distinguishing between RR Lyrae stars, long-period variables (LPVs) and non-variables. This projection recovered more than 90 per cent of the known variables and revealed 38 previously unknown variable stars (about 30 per cent more), all LPVs except for one object of uncertain variability type. We conclude that this methodology can indeed successfully identify candidate variable stars.

  1. QIM blind video watermarking scheme based on Wavelet transform and principal component analysis

    Directory of Open Access Journals (Sweden)

    Nisreen I. Yassin

    2014-12-01

    Full Text Available In this paper, a blind scheme for digital video watermarking is proposed. The security of the scheme is established by using one secret key in the retrieval of the watermark. The Discrete Wavelet Transform (DWT) is applied to each video frame, decomposing it into a number of sub-bands. Maximum-entropy blocks are selected and transformed using Principal Component Analysis (PCA). Quantization Index Modulation (QIM) is used to quantize the maximum coefficient of the PCA blocks of each sub-band. Then, the watermark is embedded into the selected suitable quantizer values. The proposed scheme is tested using a number of video sequences. Experimental results show high imperceptibility: the computed average PSNR exceeds 45 dB. Finally, the scheme is applied to two medical videos. The proposed scheme shows high robustness against several attacks such as JPEG coding, Gaussian noise addition, histogram equalization, gamma correction, and contrast adjustment, for both regular and medical videos.
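
    A bare-bones quantization index modulation (QIM) embed/extract pair, the core operation of the scheme: a watermark bit is embedded by snapping a coefficient onto one of two interleaved quantizer lattices. The DWT, block-selection, and PCA stages are omitted, and the step size is an assumption:

```python
import numpy as np

DELTA = 8.0                                        # quantization step (assumed)

def embed(coeff: float, bit: int) -> float:
    # Lattice for bit 1 is offset by +DELTA/4, for bit 0 by -DELTA/4.
    offset = DELTA / 4 if bit else -DELTA / 4
    return DELTA * np.round((coeff - offset) / DELTA) + offset

def extract(coeff: float) -> int:
    # Decide which lattice the received coefficient is closer to.
    d0 = abs(coeff - embed(coeff, 0))
    d1 = abs(coeff - embed(coeff, 1))
    return int(d1 < d0)

rng = np.random.default_rng(9)
c = 123.7                                          # stand-in PCA coefficient
for b in (0, 1):
    marked = embed(c, b)
    noisy = marked + rng.normal(scale=0.5)         # mild attack/noise
    print(f"bit {b}: embedded {marked:.2f}, recovered {extract(noisy)}")
```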

  2. Towards the generation of a parametric foot model using principal component analysis: A pilot study.

    Science.gov (United States)

    Scarton, Alessandra; Sawacha, Zimi; Cobelli, Claudio; Li, Xinshan

    2016-06-01

    There have been many recent developments in patient-specific models, driven by their potential to provide more information on human pathophysiology and by the increase in computational power. However, such models are not yet successfully applied in a clinical setting. One of the main challenges is the time required for mesh creation, which is difficult to automate. The development of parametric models by means of Principal Component Analysis (PCA) represents an appealing solution. In this study, PCA has been applied to the feet of a small cohort of diabetic and healthy subjects, in order to evaluate the possibility of developing parametric foot models and to use them to identify variations and similarities between the two populations. Both the skin and the first metatarsal bones have been examined. Despite the reduced sample of subjects considered in the analysis, the results demonstrate that the method adopted herein constitutes a first step towards the realization of a parametric foot model for biomechanical analysis. Furthermore, the study showed that the methodology can successfully describe features in the foot and evaluate differences in the shape of healthy and diabetic subjects. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.

  3. Oil classification using X-ray scattering and principal component analysis

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Danielle S.; Souza, Amanda S.; Lopes, Ricardo T., E-mail: dani.almeida84@gmail.com, E-mail: ricardo@lin.ufrj.br, E-mail: amandass@bioqmed.ufrj.br [Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, RJ (Brazil); Oliveira, Davi F.; Anjos, Marcelino J., E-mail: davi.oliveira@uerj.br, E-mail: marcelin@uerj.br [Universidade do Estado do Rio de Janeiro (UERJ), Rio de Janeiro, RJ (Brazil). Inst. de Fisica Armando Dias Tavares

    2015-07-01

    X-ray scattering techniques have been considered promising for the classification and characterization of many types of samples. This study employed this technique, combined with chemical analysis and multivariate analysis, to characterize 54 vegetable oil samples (25 of them olive oils) with different properties, obtained in commercial establishments in Rio de Janeiro city. The samples were chemically analyzed using the following indexes: iodine, acidity, saponification and peroxide. In order to obtain the X-ray scattering spectrum, an X-ray tube with a silver anode operating at 40 kV and 50 μA was used. The results showed that the oils can be divided into two large groups: olive oils and non-olive oils. Additionally, in a multivariate analysis (Principal Component Analysis - PCA), two components were obtained that accounted for more than 80% of the variance. One component was associated with the chemical parameters and the other with the scattering profiles of each sample. The results show that the use of X-ray scattering spectra combined with chemical analysis and PCA can be a fast, cheap and efficient method for vegetable oil characterization. (author)

  5. Improving the use of principal component analysis to reduce physiological noise and motion artifacts to increase the sensitivity of task-based fMRI.

    Science.gov (United States)

    Soltysik, David A; Thomasson, David; Rajan, Sunder; Biassou, Nadia

    2015-02-15

    Functional magnetic resonance imaging (fMRI) time series are subject to corruption by many noise sources, especially physiological noise and motion. Researchers have developed many methods to reduce physiological noise, including RETROICOR, which retroactively removes cardiac and respiratory waveforms collected during the scan, and CompCor, which applies principal components analysis (PCA) to remove physiological noise components without any physiological monitoring during the scan. We developed four variants of the CompCor method. The optimized CompCor method applies PCA to time series in a noise mask, but orthogonalizes each component to the BOLD response waveform and uses an algorithm to determine a favorable number of components to use as "nuisance regressors." Whole brain component correction (WCompCor) is similar, except that it applies PCA to time series throughout the whole brain. Low-pass component correction (LCompCor) identifies low-pass filtered components throughout the brain, while high-pass component correction (HCompCor) identifies high-pass filtered components. We compared the new methods with the original CompCor method by examining the resulting functional contrast-to-noise ratio (CNR), sensitivity, and specificity. We found that (1) the optimized CompCor method increased the CNR and sensitivity compared to the original CompCor method, and (2) the application of WCompCor yielded the best improvement in the CNR and sensitivity. The sensitivity of the optimized CompCor, WCompCor, and LCompCor methods exceeded that of the original CompCor method. However, regressing out noise signals had the paradoxical consequence of reducing specificity for all of the noise reduction methods attempted. Published by Elsevier B.V.
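
    A sketch of a CompCor-style cleanup, loosely following the optimized variant described above, on synthetic data: PCA on noise-mask time series, orthogonalization of each component to the task regressor, then nuisance regression:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(10)
n_t = 200
frames = np.arange(n_t)
task = (np.sin(2 * np.pi * frames / 40) > 0).astype(float)  # toy task regressor
drift = np.cumsum(rng.normal(size=n_t))
drift /= np.abs(drift).max()                                # slow structured noise
cardiac = np.sin(2 * np.pi * frames / 12)                   # fast structured noise

# Voxels in a "noise mask" share the physiological/motion signals.
noise_ts = (np.outer(drift, rng.normal(size=300))
            + np.outer(cardiac, rng.normal(size=300))
            + 0.5 * rng.normal(size=(n_t, 300)))
voxel = task + 2 * drift + 2 * cardiac + 0.5 * rng.normal(size=n_t)

comps = PCA(n_components=5).fit_transform(noise_ts)

# Orthogonalize each component to the task waveform so that task signal is
# not removed along with the noise (the optimized variant's safeguard).
tc = task - task.mean()
comps -= np.outer(tc, tc @ comps) / (tc @ tc)

# Nuisance regression: project the voxel series off the noise components.
X = np.column_stack([np.ones(n_t), comps])
beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)
cleaned = voxel - comps @ beta[1:]
print("task correlation before / after:",
      np.corrcoef(task, voxel)[0, 1].round(2),
      np.corrcoef(task, cleaned)[0, 1].round(2))
```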

  6. Developing a complex independent component analysis technique to extract non-stationary patterns from geophysical time-series

    Science.gov (United States)

    Forootan, Ehsan; Kusche, Jürgen

    2016-04-01

    Geodetic/geophysical observations, such as the time series of global terrestrial water storage change or sea level and temperature change, represent samples of physical processes and therefore contain information about complex physical interactions with many inherent time scales. Extracting relevant information from these samples, for example quantifying the seasonality of a physical process or its variability due to large-scale ocean-atmosphere interactions, is not possible with simple time series approaches. In recent decades, decomposition techniques have attracted increasing interest for extracting patterns from geophysical observations. Traditionally, principal component analysis (PCA) and, more recently, independent component analysis (ICA) are common techniques to extract statistically orthogonal (uncorrelated) and independent modes that represent the maximum variance of observations, respectively. PCA and ICA can be classified as stationary signal decomposition techniques, since they are based on decomposing the auto-covariance matrix or diagonalizing higher (than two)-order statistical tensors from centered time series. However, the stationarity assumption is obviously not justifiable for many geophysical and climate variables, even after removing cyclic components, e.g. the seasonal cycles. In this paper, we present a new decomposition method, the complex independent component analysis (CICA, Forootan, PhD-2014), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA (Forootan and Kusche, JoG-2012), where we (i) define a new complex data set using a Hilbert transformation, in which the complex time series contain the observed values in their real part and the temporal rate of variability in their imaginary part, and (ii) apply an ICA algorithm based on the diagonalization of fourth-order cumulants to decompose the new complex data set defined in (i).
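
    A sketch of step (i) on synthetic series: the Hilbert transform builds the complex data set, and a complex PCA (eigen-decomposition of the complex covariance) stands in here for the fourth-order-cumulant complex ICA of step (ii):

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(11)
t = np.linspace(0, 10, 500)
chirp = np.sin(2 * np.pi * (0.5 + 0.05 * t) * t)     # non-stationary component
tone = np.cos(2 * np.pi * 1.0 * t)                   # stationary component
data = np.vstack([chirp, tone]) + 0.05 * rng.normal(size=(2, t.size))

# Analytic signal: real part = observations, imaginary part = Hilbert
# transform, carrying the temporal rate/phase information CICA exploits.
Z = hilbert(data, axis=1)

Zc = Z - Z.mean(axis=1, keepdims=True)
C = Zc @ Zc.conj().T / Z.shape[1]                    # complex covariance (Hermitian)

eigvals, modes = np.linalg.eigh(C)                   # complex "modes"
print("explained variance ratios:", (eigvals / eigvals.sum()).round(3))
```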

  7. Experimental stress analysis for determination of residual stresses and integrity monitoring of components and systems

    International Nuclear Information System (INIS)

    1993-01-01

    For an analysis of the safety-related significance of residual stresses, mechanical, magnetic, ultrasonic and diffraction methods can be applied as testing methods. The results of an interlaboratory test concerning the experimental determination of residual stresses in a railway track are included. Further, questions are analyzed concerning the in-service inspections of components and systems with regard to their operational safety and life. Measurement methods are explained by examples from power plant engineering, nuclear power plant engineering, construction and traffic engineering, as well as aeronautics. (DG)

  8. Prestudy - Development of trend analysis of component failure

    International Nuclear Information System (INIS)

    Poern, K.

    1995-04-01

    The Bayesian trend analysis model that has been used for the computation of initiating event intensities (I-book) is based on the number of events that have occurred during consecutive time intervals. The model itself is a Poisson process with time-dependent intensity. For the analysis of aging it is often more relevant to use times between failures for a given component as input, where by 'time' is meant a quantity that best characterizes the age of the component (calendar time, operating time, number of activations, etc). Therefore, it has been considered necessary to extend the model and the computer code to allow trend analysis of times between events, and also of several sequences of times between events. This report describes this model extension as well as an application to an introductory ageing analysis of centrifugal pumps defined in Table 5 of the T-book. The application in turn draws attention to the need for further development of both the trend model and the data base.

  9. Independent component analysis for understanding multimedia content

    DEFF Research Database (Denmark)

    Kolenda, Thomas; Hansen, Lars Kai; Larsen, Jan

    2002-01-01

    Independent component analysis of combined text and image data from Web pages has potential for search and retrieval applications by providing more meaningful and context dependent content. It is demonstrated that ICA of combined text and image features has a synergistic effect, i.e., the retrieval...

  10. Experimental and principal component analysis of waste ...

    African Journals Online (AJOL)

    The present study is aimed at determining through principal component analysis the most important variables affecting bacterial degradation in ponds. Data were collected from literature. In addition, samples were also collected from the waste stabilization ponds at the University of Nigeria, Nsukka and analyzed to ...

  11. Principal Component Analysis of Process Datasets with Missing Values

    Directory of Open Access Journals (Sweden)

    Kristen A. Severson

    2017-07-01

    Full Text Available Datasets with missing values arising from causes such as sensor failure, inconsistent sampling rates, and merging data from different systems are common in the process industry. Methods for handling missing data typically operate during data pre-processing, but can also occur during model building. This article considers missing data within the context of principal component analysis (PCA), which is a method originally developed for complete data that has widespread industrial application in multivariate statistical process control. Due to the prevalence of missing data and the success of PCA for handling complete data, several PCA algorithms that can act on incomplete data have been proposed. Here, algorithms for applying PCA to datasets with missing values are reviewed. A case study is presented to demonstrate the performance of the algorithms, and suggestions are made with respect to choosing which algorithm is most appropriate for particular settings. An alternating algorithm based on the singular value decomposition achieved the best results in the majority of test cases involving process datasets.
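
    A minimal sketch of an alternating SVD algorithm for PCA with missing values (impute, decompose, reconstruct, repeat); this is a generic EM-style scheme on simulated rank-2 process data, not necessarily the exact algorithm the article recommends:

```python
import numpy as np

rng = np.random.default_rng(12)
true = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 10))   # rank-2 process data
X = true + 0.05 * rng.normal(size=true.shape)
mask = rng.random(X.shape) < 0.2                              # 20% missing
X_obs = np.where(mask, np.nan, X)

filled = np.where(mask, np.nanmean(X_obs, axis=0), X_obs)     # start at column means
for _ in range(50):
    mu = filled.mean(axis=0)
    U, s, Vt = np.linalg.svd(filled - mu, full_matrices=False)
    recon = (U[:, :2] * s[:2]) @ Vt[:2] + mu                  # rank-2 reconstruction
    new_filled = np.where(mask, recon, X_obs)                 # refresh missing cells only
    if np.max(np.abs(new_filled - filled)) < 1e-6:
        break
    filled = new_filled

print("RMSE on the imputed entries:",
      np.sqrt(np.mean((filled[mask] - X[mask]) ** 2)).round(4))
```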

  12. Principal Components Analysis on the spectral Bidirectional Reflectance Distribution Function of ceramic colour standards.

    Science.gov (United States)

    Ferrero, A; Campos, J; Rabal, A M; Pons, A; Hernanz, M L; Corróns, A

    2011-09-26

    The Bidirectional Reflectance Distribution Function (BRDF) is essential for characterizing an object's reflectance properties. This function depends both on the illumination-observation geometry and on the wavelength. As a result, the comprehensive interpretation of the data becomes rather complex. In this work we assess the use of the multivariate analysis technique of Principal Components Analysis (PCA) applied to the experimental BRDF data of a ceramic colour standard. It will be shown that the result may be linked to the various reflection processes occurring on the surface, assuming that the incoming spectral distribution is affected by each of these processes in a specific manner. Moreover, this procedure facilitates the task of interpolating a series of BRDF measurements obtained for a particular sample. © 2011 Optical Society of America

  13. Root cause analysis in support of reliability enhancement of engineering components

    International Nuclear Information System (INIS)

    Kumar, Sachin; Mishra, Vivek; Joshi, N.S.; Varde, P.V.

    2014-01-01

    Reliability-based methods have been widely used for the safety assessment of plant systems, structures and components. These methods provide a quantitative estimate of system reliability but do not give insight into the failure mechanism. Understanding the failure mechanism is a must to avoid the recurrence of events and to enhance system reliability. Root cause analysis provides a tool for gaining detailed insight into the causes of component failure, with particular attention to the identification of faults in component design, operation, surveillance, maintenance, training, procedures and policies which must be improved to prevent the repetition of incidents. Root cause analysis also helps in developing Probabilistic Safety Analysis models. A probabilistic precursor study provides a complement to the root cause analysis approach in event analysis by focusing on how an event might have developed adversely. This paper discusses root cause analysis methodologies and their application in specific case studies for the enhancement of system reliability. (author)

  14. Ancestry inference using principal component analysis and spatial analysis: a distance-based analysis to account for population substructure.

    Science.gov (United States)

    Byun, Jinyoung; Han, Younghun; Gorlov, Ivan P; Busam, Jonathan A; Seldin, Michael F; Amos, Christopher I

    2017-10-16

    Accurate inference of genetic ancestry is of fundamental interest to many biomedical, forensic, and anthropological research areas. Genetic ancestry memberships may relate to genetic disease risks. In a genome association study, failing to account for differences in genetic ancestry between cases and controls may also lead to false-positive results. Although a number of strategies for inferring and taking into account the confounding effects of genetic ancestry are available, applying them to large studies (tens of thousands of samples) is challenging. The goal of this study is to develop an approach for inferring the genetic ancestry of samples with unknown ancestry among closely related populations, and to provide accurate estimates of ancestry for application to large-scale studies. In this study we developed a novel distance-based approach, Ancestry Inference using Principal component analysis and Spatial analysis (AIPS), that incorporates an Inverse Distance Weighted (IDW) interpolation method from spatial analysis to assign individuals to population memberships. We demonstrate the benefits of AIPS in analyzing population substructure in comparison with the four most commonly used tools, EIGENSTRAT, STRUCTURE, fastSTRUCTURE, and ADMIXTURE, using genotype data from various intra-European panels and European-Americans. While these commonly used tools performed poorly in inferring ancestry from a large number of subpopulations, AIPS accurately distinguished variations between and within subpopulations. Our results show that AIPS can be applied to large-scale data sets to discriminate the modest variability among intra-continental populations as well as to characterize inter-continental variation. The method we developed will protect against spurious associations when mapping the genetic basis of a disease. Our approach is a more accurate and computationally efficient method for inferring genetic ancestry in large-scale genetic studies.
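
    A toy version of the AIPS idea: genotype-like data are projected with PCA, and unknown samples are assigned by inverse-distance-weighted votes from reference individuals in PC space. The populations, sample sizes, and IDW power are invented:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(13)
ref = np.vstack([rng.normal(m, 1.0, size=(50, 100)) for m in (0.0, 0.6)])
labels = np.array([0] * 50 + [1] * 50)             # two close subpopulations
query = rng.normal(0.6, 1.0, size=(5, 100))        # unknowns drawn from group 1

pca = PCA(n_components=2).fit(ref)
R, Q = pca.transform(ref), pca.transform(query)

def idw_assign(q, power=2.0):
    d = np.linalg.norm(R - q, axis=1) + 1e-9       # distances to references
    w = 1.0 / d ** power                           # inverse-distance weights
    votes = [w[labels == g].sum() for g in (0, 1)]
    return int(np.argmax(votes))

print("assignments:", [idw_assign(q) for q in Q])  # expect mostly group 1
```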

  15. Preliminary study of soil permeability properties using principal component analysis

    Science.gov (United States)

    Yulianti, M.; Sudriani, Y.; Rustini, H. A.

    2018-02-01

    Soil permeability measurement is undoubtedly important in carrying out soil-water research such as rainfall-runoff modelling, irrigation water distribution systems, etc. It is also known that acquiring reliable soil permeability data is rather laborious, time-consuming, and costly. Therefore, it is desirable to develop a prediction model. Several studies of empirical equations for predicting permeability have been undertaken. These studies derived their models from areas whose soil characteristics differ from those of Indonesian soils, which suggests the possibility that these permeability models are site-specific. The purpose of this study is to identify which soil parameters correspond strongly to soil permeability and to propose a preliminary model for permeability prediction. Principal component analysis (PCA) was applied to 16 parameters analysed at 37 sites, consisting of 91 samples obtained from the Batanghari Watershed. The findings indicated five variables that have a strong correlation with soil permeability, and we recommend a preliminary permeability model, which has potential for further development.

  16. Common cause failures of reactor pressure components

    International Nuclear Information System (INIS)

    Mankamo, T.

    1978-01-01

    The common cause failure is defined as a multiple failure event due to a common cause. The existence of common failure causes may ruin the potential advantages of applying redundancy for reliability improvement. Examples relevant to large mechanical components are presented. Preventive measures against common cause failures, such as physical separation, equipment diversity, quality assurance, and feedback from experience are discussed. Despite the large number of potential interdependencies, the analysis of common cause failures can be done within the framework of conventional reliability analysis, utilizing, for example, the method of deriving minimal cut sets from a system fault tree. Tools for the description and evaluation of dependencies between components are discussed: these include the model of conditional failure causes that are common to many components, and evaluation of the reliability of redundant components subjected to a common load. (author)
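
    The last point, the reliability of redundant components subjected to a common load, can be illustrated numerically. The sketch below is a generic load-strength model, not taken from the record; the distributions and parameters are invented, and it only shows why a shared load inflates the system failure probability relative to the independence assumption.

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

# Toy common-load model: all n redundant components experience the same load L,
# so P(system fails) = E_L[p(L)**n], where p(L) is the per-component fragility.
# This exceeds the naive independence value E_L[p(L)]**n.
load = stats.norm(loc=100.0, scale=15.0)        # hypothetical common load
strength = stats.norm(loc=140.0, scale=10.0)    # hypothetical component strength

x = np.linspace(40.0, 220.0, 4001)
p_fail = strength.cdf(x)                        # component fails when strength < load

for n in (1, 2, 3):
    common = trapezoid(p_fail**n * load.pdf(x), x)
    indep = trapezoid(p_fail * load.pdf(x), x) ** n
    print(f"n={n}: common load {common:.2e} vs independence {indep:.2e}")
```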

  17. The baking analysis for vacuum vessel and plasma facing components of the KSTAR tokamak

    International Nuclear Information System (INIS)

    Lee, K. H.; Woo, H. K.; Im, K. H.; Cho, S. Y.; Kim, J. B.

    2000-01-01

    The base pressure of the vacuum vessel of the KSTAR (Korea Superconducting Tokamak Advanced Research) tokamak must be an ultra-high vacuum, 10⁻⁶∼10⁻⁷ Pa, to produce clean plasma with low impurity content. For this purpose, the KSTAR vacuum vessel and plasma facing components need to be baked up to at least 250 °C and 350 °C, respectively, within 24 hours by hot nitrogen gas from a separate baking/cooling line system to remove impurities from the plasma-material interaction surfaces before plasma operation. By applying an implicit numerical method to the heat balance equations of the system, overall temperature distributions of the KSTAR vacuum vessel and plasma facing components are obtained during the whole baking process. The model for the 2-dimensional baking analysis is segmented into 9 imaginary sectors corresponding to each plasma facing component and has up-down symmetry. Under the resulting combined loads, including dead weight, baking gas pressure, vacuum pressure and thermal loads, thermal stresses in the vacuum vessel during bakeout are calculated using the ANSYS code. It is found that the vacuum vessel and its supports are structurally rigid based on the thermal stress analyses.
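
    The record does not reproduce its heat balance equations, but the implicit treatment it mentions can be illustrated generically. The following sketch applies a backward-Euler step to a lumped nine-sector heat balance; the capacitances, conductances, and heat input are invented values, and the toy model has no heat loss to the environment.

```python
import numpy as np

# Backward-Euler step for lumped heat balances  C dT/dt = -K T + q :
# (C/dt + K) T_new = (C/dt) T_old + q, unconditionally stable for stiff systems.
n, dt = 9, 60.0                        # nine imaginary sectors, 60 s time step
C = np.full(n, 5.0e6)                  # invented heat capacities [J/K]
K = np.zeros((n, n))                   # conductances [W/K], chain-coupled sectors
for i in range(n - 1):
    k = 800.0
    K[i, i] += k; K[i + 1, i + 1] += k
    K[i, i + 1] -= k; K[i + 1, i] -= k

T = np.full(n, 20.0)                   # all sectors start at room temperature [C]
q = np.zeros(n); q[0] = 2.0e5          # hot-nitrogen heat input into sector 0 [W]
A = np.diag(C / dt) + K                # implicit system matrix
for _ in range(24 * 60):               # 24 h of one-minute implicit steps
    T = np.linalg.solve(A, (C / dt) * T + q)
print(T.round(1))                      # temperatures rise steadily (no losses modelled)
```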

  18. The baking analysis for vacuum vessel and plasma facing components of the KSTAR tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Lee, K. H.; Woo, H. K. [Chungnam National Univ., Taejon (Korea, Republic of); Im, K. H.; Cho, S. Y. [Korea Basic Science Institute, Taejon (Korea, Republic of); Kim, J. B. [Hyundai Heavy Industries Co., Ltd., Ulsan (Korea, Republic of)

    2000-07-01

    The base pressure of the vacuum vessel of the KSTAR (Korea Superconducting Tokamak Advanced Research) tokamak must be an ultra-high vacuum, 10⁻⁶∼10⁻⁷ Pa, to produce clean plasma with low impurity content. For this purpose, the KSTAR vacuum vessel and plasma facing components need to be baked up to at least 250 °C and 350 °C, respectively, within 24 hours by hot nitrogen gas from a separate baking/cooling line system to remove impurities from the plasma-material interaction surfaces before plasma operation. By applying an implicit numerical method to the heat balance equations of the system, overall temperature distributions of the KSTAR vacuum vessel and plasma facing components are obtained during the whole baking process. The model for the 2-dimensional baking analysis is segmented into 9 imaginary sectors corresponding to each plasma facing component and has up-down symmetry. Under the resulting combined loads, including dead weight, baking gas pressure, vacuum pressure and thermal loads, thermal stresses in the vacuum vessel during bakeout are calculated using the ANSYS code. It is found that the vacuum vessel and its supports are structurally rigid based on the thermal stress analyses.

  19. Component analysis of a mixed beam generated by vacuum electrospray of an ionic liquid

    International Nuclear Information System (INIS)

    Fujiwara, Yukio; Saito, Naoaki; Nonaka, Hidehiko; Ichimura, Shingo

    2012-01-01

    Vacuum electrospray of a quaternary ammonium ionic liquid, N,N-diethyl-N-methyl-N-(2-methoxyethyl)ammonium bis(trifluoromethanesulfonyl) amide (DEME-TFSA), was investigated to develop a primary ion source for secondary ion mass spectrometry (SIMS). Since the ionic liquid contains many methyl and ethyl groups as well as protons, its beam is expected to efficiently produce protonated molecules for SIMS analysis of organic materials. Experimental results showed that the beam consisted of charged particles of m/z about 1000 and charged droplets of m/z > 10⁵. The current components of both the charged particles and droplets changed with the applied voltage and the flow rate of the ionic liquid. With decreasing flow rate, the current component of the charged droplets increased, whereas that of the charged particles decreased. The m/z values of the charged droplets diminished with decreasing flow rate and increasing capillary voltage. In addition to masses and charge numbers, the numbers of the charged droplets and the charged particles were estimated.

  20. Synergetic Use of Principal Component Analysis Applied to Normed Physicochemical Measurements and GC × GC-MS to Reveal the Stabilization Effect of Selected Essential Oils on Heated Rapeseed Oil.

    Science.gov (United States)

    Sghaier, Lilia; Cordella, Christophe B Y; Rutledge, Douglas N; Lefèvre, Fanny; Watiez, Mickaël; Breton, Sylvie; Sassiat, Patrick; Thiebaut, Didier; Vial, Jérôme

    2017-06-01

    Lipid oxidation leads to the formation of volatile compounds and very often to off-flavors. When rapeseed oil is heated, unpleasant odors, characterized as fishy, are emitted. In this study, two different essential oils (coriander and nutmeg) were added to refined rapeseed oil as odor-masking agents. The aim of this work was to determine a potential antioxidant effect of these essential oils on the thermal stability of rapeseed oil subjected to heating cycles between room temperature and 180 °C. For this purpose, normed determinations of different parameters (peroxide value, anisidine value, and the content of total polar compounds, free fatty acids and tocopherols) were carried out to examine the differences between pure and degraded oil. No significant difference was observed between pure rapeseed oil and rapeseed oil with essential oils for each parameter taken separately. However, a stabilizing effect of the essential oils, stronger for the nutmeg essential oil, was highlighted by principal component analysis applied to the physicochemical data set. Moreover, the analysis of the volatile compounds performed by GC × GC showed a substantial loss of the volatile compounds of the essential oils from the first heating cycle. © 2017 Institute of Food Technologists®.

  1. Performance assessment of air quality monitoring networks using principal component analysis and cluster analysis

    International Nuclear Information System (INIS)

    Lu, Wei-Zhen; He, Hong-Di; Dong, Li-yun

    2011-01-01

    This study aims to evaluate the performance of two statistical methods, principal component analysis and cluster analysis, for the management of the air quality monitoring network of Hong Kong and the reduction of associated expenses. The specific objectives are: (i) to identify city areas with similar air pollution behavior; and (ii) to locate emission sources. The statistical methods were applied to the mass concentrations of sulphur dioxide (SO₂), respirable suspended particulates (RSP) and nitrogen dioxide (NO₂) collected in the monitoring network of Hong Kong from January 2001 to December 2007. The results demonstrate that, for each pollutant, the monitoring stations are grouped into different classes based on their air pollution behavior. Stations located in nearby areas are characterized by similar air pollution characteristics, which suggests that the monitoring system could be managed more efficiently: redundant equipment could be transferred to other monitoring stations to allow further enlargement of the monitored area. Additionally, the existence of different air pollution behaviors in the monitoring network is explained by the variability of wind directions across the region. The results imply that the air quality problem in Hong Kong is not only a local problem stemming mainly from street-level pollution, but also a regional problem originating from the Pearl River Delta region. (author)
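
    A minimal sketch of the kind of workflow described, PCA followed by clustering of monitoring stations, might look as follows; the station-by-feature matrix here is randomly generated, and the choice of three clusters is arbitrary.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical station-by-feature matrix: rows are monitoring stations, columns
# are summary statistics of SO2, RSP and NO2 concentrations (invented numbers).
rng = np.random.default_rng(1)
X = rng.normal(size=(14, 6))

Z = StandardScaler().fit_transform(X)              # standardize before PCA
scores = PCA(n_components=2).fit_transform(Z)      # project stations onto PC1/PC2
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)

for station, g in enumerate(groups):
    # Stations sharing a cluster behave similarly; duplicated coverage within a
    # cluster is a candidate for relocating equipment elsewhere.
    print(f"station {station:2d} -> cluster {g}")
```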

  2. Raman spectroscopy and capillary electrophoresis applied to forensic colour inkjet printer inks analysis.

    Science.gov (United States)

    Król, Małgorzata; Karoly, Agnes; Kościelniak, Paweł

    2014-09-01

    Forensic laboratories are increasingly engaged in the examination of fraudulent documents and, importantly, in many cases these are inkjet-printed documents. That is why systematic approaches to inkjet printer ink comparison and identification have been developed using both non-destructive and destructive methods. In this study, micro-Raman spectroscopy and capillary electrophoresis (CE) were applied to the analysis of colour inkjet printer inks. Micro-Raman spectroscopy was used to study the chemical composition of colour inks in situ on the paper surface. It helps to characterize and differentiate inkjet inks, and can be used to create a spectral database of inks taken from different cartridge brands and cartridge numbers. Capillary electrophoresis in micellar electrokinetic capillary chromatography mode was applied to separate the colour and colourless components of the inks, enabling group identification of those components which occur in sufficient concentration (giving intense peaks). Finally, on the basis of the obtained results, differentiation of the analysed inks was performed. Twenty-three samples of inkjet printer inks were examined, and the discriminating power (DP) values for both presented methods were established as in the routine work of experts during the result interpretation step. DP was found to be 94.0% (Raman) and 95.6% (CE) when all the analysed ink samples were taken into account, and 96.7% (Raman) and 98.4% (CE) when only cartridges with different index numbers were considered. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
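
    The discriminating power quoted above is, in the usual forensic definition, the fraction of sample pairs that a method can distinguish. A small hedged sketch of that computation, with invented ink "profiles" standing in for Raman spectra or CE traces, is shown below.

```python
from itertools import combinations

def discriminating_power(profiles, distinguishable):
    """DP = number of discriminated pairs / total number of sample pairs."""
    pairs = list(combinations(profiles, 2))
    return sum(distinguishable(a, b) for a, b in pairs) / len(pairs)

# Toy usage with invented "profiles" standing in for Raman spectra or CE traces;
# here two inks count as discriminated when their profiles differ at all.
inks = {"ink1": (1, 0, 2), "ink2": (1, 1, 2), "ink3": (1, 0, 2)}
dp = discriminating_power(list(inks.values()), lambda a, b: a != b)
print(f"DP = {dp:.1%}")   # 2 of 3 pairs discriminated -> 66.7%
```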

  3. Sparse logistic principal components analysis for binary data

    KAUST Repository

    Lee, Seokho

    2010-09-01

    We develop a new principal components analysis (PCA) type dimension reduction method for binary data. Unlike standard PCA, which is defined on the observed data, the proposed PCA is defined on the logit transform of the success probabilities of the binary observations. Sparsity is introduced to the principal component (PC) loading vectors for enhanced interpretability and more stable extraction of the principal components. Our sparse PCA is formulated as solving an optimization problem with a criterion function motivated by a penalized Bernoulli likelihood. A Majorization-Minimization algorithm is developed to efficiently solve the optimization problem. The effectiveness of the proposed sparse logistic PCA method is illustrated by application to a single nucleotide polymorphism data set and a simulation study. © Institute of Mathematical Statistics, 2010.

  4. Detailed analysis of surface asperity deformation mechanism in diffusion bonding of steel hollow structural components

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, C. [School of Materials Science and Engineering, Northwestern Polytechnical University, Xi’an 710072 (China); Laboratoire de Mecanique des Contacts et des Structures (LaMCoS), INSA Lyon, 20 Avenue des Sciences, F-69621 Villeurbanne Cedex (France); Li, H. [School of Materials Science and Engineering, Northwestern Polytechnical University, Xi’an 710072 (China); Li, M.Q., E-mail: zc9997242256@126.com [School of Materials Science and Engineering, Northwestern Polytechnical University, Xi’an 710072 (China)

    2016-05-15

    Graphical abstract: This study focused on a detailed analysis of the surface asperity deformation mechanism in diffusion bonding of a steel hollow structural component. A special surface with regular patterns was processed before joining so as to observe the extent of surface asperity deformation under different applied bonding pressures. Fracture surface characteristics combined with surface roughness profiles distinctly revealed enhanced surface asperity deformation as the applied pressure increases. The influence of the surface asperity deformation mechanism on joint formation was analyzed: (a) surface asperity deformation not only directly expanded the interfacial contact areas, but also released deformation heat and caused defects, indirectly accelerating atomic diffusion and thus benefiting void shrinkage; (b) surface asperity deformation readily introduced a stored energy difference between the two opposite sides of the interface grain boundary, resulting in strain-induced interface grain boundary migration. In addition, the influence of voids on interface grain boundary migration was analyzed in detail.
    Highlights:
    • A high quality hollow structural component has been fabricated by diffusion bonding.
    • Surface asperity deformation not only expands the interfacial contact areas, but also produces deformation heat and defects that improve atomic diffusion.
    • Surface asperity deformation introduces a stored energy difference between the two opposite sides of the interface grain boundary, leading to strain-induced interface grain boundary migration.
    • Voids exert a dragging force on the interface grain boundary that retards or stops interface grain boundary migration.
    Abstract: This study focused on a detailed analysis of the surface asperity deformation mechanism in similar diffusion bonding as well as on the fabrication of high quality martensitic stainless steel hollow structural components. A special surface with regular patterns was processed before joining so as to observe the extent of surface asperity deformation under different applied bonding pressures.

  5. Combined principal component preprocessing and n-tuple neural networks for improved classification

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar; Linneberg, Christian

    2000-01-01

    We present a combined principal component analysis/neural network scheme for classification. The data used to illustrate the method consist of spectral fluorescence recordings from seven different production facilities, and the task is to relate an unknown sample to one of these seven factories. The data are first preprocessed by performing an individual principal component analysis on each of the seven groups of data. The components found are then used for classifying the data, but instead of making a single multiclass classifier, we follow the idea of turning a multiclass problem into a number of two-class problems. For each possible pair of classes we further apply a transformation to the calculated principal components in order to increase the separation between the classes. Finally we apply the so-called n-tuple neural network to the transformed data in order to give the classification.

  6. A survival analysis on critical components of nuclear power plants

    International Nuclear Information System (INIS)

    Durbec, V.; Pitner, P.; Riffard, T.

    1995-06-01

    Some tubes of heat exchangers in nuclear power plants may be affected by Primary Water Stress Corrosion Cracking (PWSCC) in highly stressed areas. These defects can shorten the lifetime of the component and lead to its replacement. In order to reduce the risk of cracking, a preventive remedial operation called shot peening was applied on the French reactors between 1985 and 1988. To assess and investigate the effects of shot peening, a statistical analysis was carried out on the tube degradation results obtained from the in-service inspections that are regularly conducted using non-destructive tests. The statistical method used is based on the Cox proportional hazards model, a powerful tool in the analysis of survival data, implemented in PROC PHREG, recently made available in SAS/STAT. This technique has a number of major advantages, including the ability to deal with censored failure-time data and with the complication of time-dependent covariables. The paper focuses on the modelling and a presentation of the results given by SAS. They provide estimates of how the relative risk of degradation changes after peening and indicate for which values of the analyzed prognostic factors the treatment is likely to be most beneficial. (authors). 2 refs., 3 figs., 6 tabs
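
    The record fits the Cox model with SAS PROC PHREG; an analogous fit can be sketched in Python with the lifelines package. The inspection data below are invented, and the binary "peened" covariate is a stand-in for the treatment indicator described in the abstract.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Invented inspection data: time to first detected crack (years), an event flag
# (0 = censored, still intact at last inspection) and a shot-peening indicator.
df = pd.DataFrame({
    "years":   [3.1, 8.0, 5.4, 9.2, 2.5, 7.7, 6.3, 9.0],
    "cracked": [1,   0,   1,   0,   1,   1,   1,   0  ],
    "peened":  [0,   1,   0,   1,   0,   1,   0,   1  ],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="cracked")
cph.print_summary()   # the hazard ratio for "peened" estimates the peening effect
```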

  7. Interpretable functional principal component analysis.

    Science.gov (United States)

    Lin, Zhenhua; Wang, Liangliang; Cao, Jiguo

    2016-09-01

    Functional principal component analysis (FPCA) is a popular approach to explore major sources of variation in a sample of random curves. These major sources of variation are represented by functional principal components (FPCs). The intervals where the values of FPCs are significant are interpreted as where sample curves have major variations. However, these intervals are often hard for naïve users to identify, because of the vague definition of "significant values". In this article, we develop a novel penalty-based method to derive FPCs that are only nonzero precisely in the intervals where the values of FPCs are significant, whence the derived FPCs possess better interpretability than the FPCs derived from existing methods. To compute the proposed FPCs, we devise an efficient algorithm based on projection deflation techniques. We show that the proposed interpretable FPCs are strongly consistent and asymptotically normal under mild conditions. Simulation studies confirm that with a competitive performance in explaining variations of sample curves, the proposed FPCs are more interpretable than the traditional counterparts. This advantage is demonstrated by analyzing two real datasets, namely, electroencephalography data and Canadian weather data. © 2015, The International Biometric Society.

  8. Combined approach based on principal component analysis and canonical discriminant analysis for investigating hyperspectral plant response

    Directory of Open Access Journals (Sweden)

    Anna Maria Stellacci

    2012-07-01

    Hyperspectral (HS) data represent an extremely powerful means for rapidly detecting crop stress and thus aiding the rational management of natural resources in agriculture. However, the large volume of data poses a challenge for processing and for extracting crucial information. Multivariate statistical techniques can play a key role in the analysis of HS data, as they allow both the elimination of redundant information and the identification of synthetic indices which maximize differences among levels of stress. In this paper we propose an integrated approach, based on the combined use of Principal Component Analysis (PCA) and Canonical Discriminant Analysis (CDA), to investigate HS plant response and discriminate plant status. The approach was preliminarily evaluated on a data set collected on durum wheat plants grown under different nitrogen (N) stress levels. Hyperspectral measurements were performed at anthesis with a high resolution field spectroradiometer, ASD FieldSpec HandHeld, covering the 325-1075 nm region. Reflectance data were first restricted to the interval 510-1000 nm and then divided into five bands of the electromagnetic spectrum [green: 510-580 nm; yellow: 581-630 nm; red: 631-690 nm; red-edge: 705-770 nm; near-infrared (NIR): 771-1000 nm]. PCA was applied to each spectral interval. CDA was performed on the extracted components to identify the factors maximizing the differences among plants fertilised with increasing N rates. Within the green, yellow and red intervals only the first principal component (PC) had an eigenvalue greater than 1 and explained more than 95% of the total variance; within the red-edge and NIR ranges, the first two PCs had eigenvalues higher than 1. Two canonical variables explained cumulatively more than 81% of the total variance and the first was able to discriminate wheat plants fertilised differently, as confirmed also by the significant correlation with aboveground biomass and grain yield parameters. The combined
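
    A simplified sketch of this two-stage PCA-then-CDA pipeline, using scikit-learn's LinearDiscriminantAnalysis as the CDA step and random numbers in place of real reflectance spectra, could look like this; the band limits follow the abstract, everything else is invented.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Invented reflectance "spectra": 60 plants x wavelengths 510-1000 nm, grown
# under three nitrogen levels (the class labels); the class effect is simulated.
rng = np.random.default_rng(2)
wl = np.arange(510, 1001)
y = np.repeat([0, 1, 2], 20)
X = rng.normal(size=(60, wl.size)) + 0.05 * y[:, None]

bands = {"green": (510, 580), "yellow": (581, 630), "red": (631, 690),
         "red-edge": (705, 770), "NIR": (771, 1000)}
feats = []
for lo, hi in bands.values():
    sub = X[:, (wl >= lo) & (wl <= hi)]
    sub = (sub - sub.mean(0)) / sub.std(0)                    # correlation-matrix PCA
    pca = PCA().fit(sub)
    keep = max(1, int((pca.explained_variance_ > 1).sum()))   # Kaiser criterion
    feats.append(pca.transform(sub)[:, :keep])
F = np.hstack(feats)

# CDA step: canonical variables from linear discriminant analysis on the PCs.
lda = LinearDiscriminantAnalysis(n_components=2).fit(F, y)
print(lda.transform(F)[:3])
```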

  9. New trends in applied harmonic analysis sparse representations, compressed sensing, and multifractal analysis

    CERN Document Server

    Cabrelli, Carlos; Jaffard, Stephane; Molter, Ursula

    2016-01-01

    This volume is a selection of written notes corresponding to courses taught at the CIMPA School: "New Trends in Applied Harmonic Analysis: Sparse Representations, Compressed Sensing and Multifractal Analysis". New interactions between harmonic analysis and signal and image processing have seen striking development in the last 10 years, and several technological deadlocks have been solved through the resolution of deep theoretical problems in harmonic analysis. New Trends in Applied Harmonic Analysis focuses on two particularly active areas that are representative of such advances: multifractal analysis, and sparse representation and compressed sensing. The contributions are written by leaders in these areas and cover both theoretical aspects and applications. This work should prove useful not only to PhD students and postdocs in mathematics and signal and image processing, but also to researchers working on related topics.

  10. Development of a simple method for classifying the degree of importance of components in nuclear power plants using probabilistic analysis technique

    International Nuclear Information System (INIS)

    Shimada, Yoshio; Miyazaki, Takamasa

    2006-01-01

    In order to analyze large amounts of trouble information from overseas nuclear power plants, it is necessary to select information that is significant in terms of both safety and reliability. In this research, a method was developed for efficiently and simply classifying the degrees of importance of components in terms of safety and reliability, paying attention to the root-cause components appearing in the information. Regarding safety, the reactor core damage frequency (CDF), used in the probabilistic analysis of a reactor, was adopted. Regarding reliability, the automatic plant trip probability (APTP), used in the probabilistic analysis of automatic reactor trips, was adopted. These two aspects were reflected in the development of criteria for classifying the degrees of importance of components. By applying these criteria, a method of quantitatively and simply judging the significance of trouble information from overseas nuclear power plants was developed. (author)

  11. IAEA-ASSET's root cause analysis method applied to sodium leakage incident at Monju

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, Norio; Hirano, Masashi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1997-08-01

    The present study applied the ASSET (Analysis and Screening of Safety Events Team) methodology (this method identifies occurrences such as component failures and operator errors, identifies their respective direct/root causes, and determines corrective actions) to the analysis of the sodium leakage incident at Monju, based mainly on reports published by the Science and Technology Agency, aiming at the systematic identification of direct/root causes and corrective actions, and discussed the effectiveness and problems of the ASSET methodology. The results revealed the following seven occurrences and identified the direct/root causes and contributing factors for each: failure of the thermometer well tube, delayed reactor manual trip, inadequate continuous monitoring of leakage, misjudgment of the leak rate, a non-required operator action (turbine trip), retarded emergency sodium drainage, and retarded securing of the ventilation system. Most of the occurrences stemmed from deficiencies in the emergency operating procedures (EOPs), which were mainly caused by defects in the EOP preparation process and in operator training programs. The corrective actions already proposed in the published reports were reviewed, identifying issues to be studied further, and possible corrective actions for these issues were discussed. The present study also demonstrated the effectiveness of the ASSET methodology and pointed out some problems, for example in delineating causal relations among occurrences, in applying it to the detailed and systematic analysis of event direct/root causes and the determination of concrete measures. (J.P.N.)

  12. Topological data analysis (TDA) applied to reveal pedogenetic principles of European topsoil system.

    Science.gov (United States)

    Savic, Aleksandar; Toth, Gergely; Duponchel, Ludovic

    2017-05-15

    Recent developments in applied mathematics are bringing new tools that are capable of synthesizing knowledge in various disciplines and of finding hidden relationships between variables. One such technique is topological data analysis (TDA), a fusion of classical exploration techniques, such as principal component analysis (PCA), with a topological point of view applied to the clustering of results. Various phenomena have already received new interpretations thanks to TDA, from the proper choice of sports teams to cancer treatments. Here, for the first time, the technique has been applied in soil science, to show the interaction between physical and chemical soil attributes and the main soil-forming factors, such as climate and land use. The topsoil data set of the Land Use/Land Cover Area Frame survey (LUCAS) was used as a comprehensive database consisting of approximately 20,000 samples, each described by 12 physical and chemical parameters. After the application of TDA, the results obtained were cross-checked against known grouping parameters, including five types of land cover, nine types of climate, and the organic carbon content of the soil. Some of the grouping characteristics observed using standard approaches were confirmed by TDA (e.g., organic carbon content), but novel, subtler relationships (e.g., the magnitude of the anthropogenic effect in soil formation) were discovered as well. The importance of this finding is that TDA is a unique mathematical technique capable of extracting complex relations hidden in soil science data sets, giving the opportunity to see the influence of physicochemical, biotic and abiotic factors on topsoil formation through fresh eyes. Copyright © 2017 Elsevier B.V. All rights reserved.
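
    TDA pipelines of this kind are typically built on the Mapper construction. The following is a minimal Mapper-style sketch, not the tool used in the study: the lens, cover, and clustering parameters are arbitrary, and the random data stand in for the 12 soil parameters.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN

# Mapper in miniature: lens = first principal component, cover = overlapping
# intervals over the lens, graph nodes = clusters found within each interval.
rng = np.random.default_rng(3)
X = rng.normal(size=(500, 12))                 # stand-in for 12 topsoil parameters

lens = PCA(n_components=1).fit_transform(X).ravel()
lo, hi = lens.min(), lens.max()
n_bins, overlap = 8, 0.3
width = (hi - lo) / n_bins

nodes = []
for i in range(n_bins):
    a = lo + i * width - overlap * width
    b = lo + (i + 1) * width + overlap * width
    idx = np.where((lens >= a) & (lens <= b))[0]
    if idx.size < 5:
        continue
    labels = DBSCAN(eps=3.5, min_samples=5).fit_predict(X[idx])
    for lab in set(labels) - {-1}:             # each cluster becomes a node
        nodes.append(set(idx[labels == lab]))

# Nodes sharing samples are connected; the resulting graph is the TDA summary.
edges = [(i, j) for i in range(len(nodes)) for j in range(i + 1, len(nodes))
         if nodes[i] & nodes[j]]
print(len(nodes), "nodes,", len(edges), "edges")
```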

  13. Facilitating in vivo tumor localization by principal component analysis based on dynamic fluorescence molecular imaging

    Science.gov (United States)

    Gao, Yang; Chen, Maomao; Wu, Junyu; Zhou, Yuan; Cai, Chuangjian; Wang, Daliang; Luo, Jianwen

    2017-09-01

    Fluorescence molecular imaging has been used to target tumors in mice with xenograft tumors. However, tumor imaging is largely distorted by the aggregation of fluorescent probes in the liver. A principal component analysis (PCA)-based strategy was applied to the in vivo dynamic fluorescence imaging results of three mice with xenograft tumors to facilitate tumor imaging, with the help of a tumor-specific fluorescent probe. Tumor-relevant features were extracted from the original images by PCA and represented by the principal component (PC) maps. The second principal component (PC2) map represented the tumor-related features, while the first principal component (PC1) map retained the original pharmacokinetic profiles, especially of the liver. The distribution patterns on the PC2 maps of the tumor-bearing mice were in good agreement with the actual tumor locations. The tumor-to-liver ratio and contrast-to-noise ratio were significantly higher on the PC2 map than on the original images, distinguishing the tumor from the nearby fluorescence noise of the liver. The results suggest that the PC2 map could serve as a bioimaging marker to facilitate in vivo tumor localization, and that dynamic fluorescence molecular imaging with PCA could be a valuable tool for future studies of in vivo tumor metabolism and progression.
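
    The essence of the strategy, running PCA across the temporal dimension of an image stack and mapping component loadings back to pixel space, can be sketched as follows. The synthetic "liver" and "tumor" kinetics are invented for illustration and do not reproduce the paper's data.

```python
import numpy as np
from sklearn.decomposition import PCA

# Invented dynamic stack: T frames of an H x W image with a monotonically
# accumulating "liver" blob and a rise-and-fall "tumor" blob plus noise.
T, H, W = 60, 32, 32
t = np.linspace(0, 1, T)
Y, Xg = np.mgrid[0:H, 0:W]
liver = np.exp(-((Y - 8) ** 2 + (Xg - 8) ** 2) / 20.0)
tumor = np.exp(-((Y - 24) ** 2 + (Xg - 24) ** 2) / 20.0)
rng = np.random.default_rng(4)
stack = (t[:, None, None] * liver
         + np.sin(np.pi * t)[:, None, None] * tumor
         + 0.05 * rng.normal(size=(T, H, W)))

# Treat each pixel's time course as a variable: PCA over the (T x pixels) matrix,
# then reshape the loadings back into images ("PC maps").
pca = PCA(n_components=2).fit(stack.reshape(T, -1))
pc1_map = pca.components_[0].reshape(H, W)
pc2_map = pca.components_[1].reshape(H, W)
print("tumor region |PC2|:", np.abs(pc2_map[20:28, 20:28]).mean().round(3))
print("liver region |PC2|:", np.abs(pc2_map[4:12, 4:12]).mean().round(3))
```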

  14. Comparative analysis of quality assurance requirements for selected LMFBR components of classes 1, 2 and 3

    International Nuclear Information System (INIS)

    Kleinert, K.P.

    1992-01-01

    The study analyses and compares German, French, British and Italian practices and procedures applied in various LMFBR projects, both with respect to the quality assurance system and with respect to the particular class of component: Class 1: primary reactor vessel; Class 2: secondary sodium pump; Class 3: primary cold trap. Various areas of analysis and comparison were selected to identify the underlying concepts of grading of requirements and measures, to identify the similarities and differences, and to give recommendations for further actions concerning quality assurance requirements. 60 refs., 21 tabs., 6 figs

  15. An eco design strategy for high pressure die casting components: microstructural analysis applied to mass reducing processes

    International Nuclear Information System (INIS)

    Suarez-Pena, B.; Asensio-Lozano, J.

    2009-01-01

    This work studied the possible use of new aluminium alloys with optimized microstructures that ensure the mechanical properties required for cast components made by high pressure die casting. The objective was to check the feasibility of manufacturing structurally sound eco-steps for escalators of reduced weight, the result of a redesign of the traditional steps aiming at a significant weight reduction. The experimental results show that it is feasible to cut the use of materials during processing and therefore to reduce the impact of the components during their lifetime, whilst performance and safety standards are kept identical or even improved. (Author) 17 refs.

  16. Detection and Characterization of Ground Displacement Sources from Variational Bayesian Independent Component Analysis of GPS Time Series

    Science.gov (United States)

    Gualandi, A.; Serpelloni, E.; Belardinelli, M. E.

    2014-12-01

    A critical point in the analysis of ground displacement time series is the development of data-driven methods that allow one to discern and characterize the different sources that generate the observed displacements. A widely used multivariate statistical technique is Principal Component Analysis (PCA), which reduces the dimensionality of the data space while retaining most of the explained variance of the dataset. It reproduces the original data using a limited number of Principal Components, but it also shows some deficiencies. Indeed, PCA does not perform well in solving the so-called Blind Source Separation (BSS) problem, i.e. in recovering and separating the original sources that generated the observed data. This is mainly due to the assumptions on which PCA relies: it looks for a new Euclidean space where the projected data are uncorrelated. Usually, the uncorrelation condition is not strong enough, and it has been proven that the BSS problem can be tackled by requiring the components to be independent. Independent Component Analysis (ICA) is, in fact, another popular technique adopted to approach this problem, and it can be used in all those fields where PCA is also applied. An ICA approach enables us to explain the time series while imposing fewer constraints on the model, and to reveal anomalies in the data such as transient signals. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, we use a variational Bayesian ICA (vbICA) method, which models the probability density function (pdf) of each source signal using a mixture of Gaussian distributions. This technique allows for more flexibility in the description of the pdf of the sources, giving a more reliable estimate of them. Here we present the application of the vbICA technique to GPS position time series. First, we use vbICA on synthetic data that simulate a seismic cycle
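
    vbICA itself is not available in common Python libraries; as a hedged stand-in for the independence idea, the sketch below contrasts PCA and FastICA on a synthetic mixture of a seasonal signal and a transient, loosely mimicking the seismic-cycle simulation mentioned at the end. The mixing matrix and noise level are invented.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

# Two independent sources (a seasonal signal and a post-t=2 transient ramp)
# mixed into three observed displacement series.
rng = np.random.default_rng(5)
t = np.linspace(0, 4, 800)
S = np.c_[np.sin(2 * np.pi * t), np.clip(t - 2.0, 0.0, None)]
A = np.array([[1.0, 0.5], [0.6, 1.0], [0.9, 0.2]])     # invented station responses
X = S @ A.T + 0.02 * rng.normal(size=(t.size, 3))

pca_rec = PCA(n_components=2).fit_transform(X)
ica_rec = FastICA(n_components=2, random_state=0).fit_transform(X)

# Cross-correlation of recovered components with the true sources: ICA components
# tend to align with one source each, whereas PCA components mix them.
for name, R in (("PCA", pca_rec), ("ICA", ica_rec)):
    C = np.corrcoef(R.T, S.T)[:2, 2:]
    print(name, np.abs(C).round(2))
```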

  17. Principal Component Clustering Approach to Teaching Quality Discriminant Analysis

    Science.gov (United States)

    Xian, Sidong; Xia, Haibo; Yin, Yubo; Zhai, Zhansheng; Shang, Yan

    2016-01-01

    Teaching quality is the lifeline of higher education. Many universities have made effective achievements in evaluating teaching quality. In this paper, we establish a Students' Evaluation of Teaching (SET) discriminant analysis model and algorithm based on principal component clustering analysis. Additionally, we classify the SET…

  18. A virtual component method in numerical computation of cascades for isotope separation

    International Nuclear Information System (INIS)

    Zeng Shi; Cheng Lu

    2014-01-01

    The analysis, optimization, design and operation of cascades for isotope separation involve computations of cascades. In the analytical analysis of cascades, the use of virtual components is a very useful method. For complicated cascades, numerical analysis has to be employed. However, bound by the conventional idea that the concentration of a virtual component should be vanishingly small, virtual components have not yet been applied to numerical computations. Here, a way of introducing virtual components into numerical computations is elucidated, and its application to a few types of cascades is explained and tested by means of numerical experiments. The results show that the concentration of a virtual component is not restricted at all by the 'vanishingly small' idea. For the same requirements on cascades, the cascades obtained do not depend on the concentrations of the virtual components. (authors)

  19. Analysis of diffusivity of the oscillating reaction components in a microreactor system

    Directory of Open Access Journals (Sweden)

    Martina Šafranko

    2017-01-01

    When performing oscillating reactions, periodic changes in the concentrations of reactants, intermediates, and products take place. Owing to these periodic concentration changes, information about the diffusivity of the components involved in oscillating reactions is very important for their control. Non-linear dynamics makes oscillating reactions very interesting for analysis in different reactor systems. In this paper, the diffusivity of the oscillating reaction components was analysed in a microreactor, with the aim of identifying the limiting component. The geometry of the microreactor microchannel and a well-defined flow profile ensure optimal conditions for analysing diffusion phenomena, because diffusion profiles in a microreactor depend only on the residence time. The analysis was performed in a microreactor equipped with 2 Y-shaped inlets and 2 Y-shaped outlets, with an active volume of V = 4 μL, at different residence times.

  20. Principal Component Analysis: Most Favourite Tool in Chemometrics

    Indian Academy of Sciences (India)

    Abstract. Principal component analysis (PCA) is the most commonly used chemometric technique. It is an unsupervised pattern recognition technique. PCA has found applications in chemistry, biology, medicine and economics. The present work attempts to explain how PCA works and how we can interpret its results.
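
    For readers who want to see the mechanics, a bare-bones PCA via the singular value decomposition can be written in a few lines; the toy data below are random and purely illustrative.

```python
import numpy as np

def pca(X, k):
    """Plain PCA via the SVD: center, decompose, return scores and loadings."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :k] * s[:k]              # sample coordinates on the PCs
    loadings = Vt[:k].T                    # variable contributions to each PC
    explained = s ** 2 / np.sum(s ** 2)    # fraction of variance per component
    return scores, loadings, explained[:k]

# Toy chemometric usage: 10 samples x 4 measured variables (random numbers).
X = np.random.default_rng(6).normal(size=(10, 4))
scores, loadings, ev = pca(X, 2)
print(ev.round(3))                         # variance explained by PC1 and PC2
```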

  1. Support vector machine and principal component analysis for microarray data classification

    Science.gov (United States)

    Astuti, Widi; Adiwijaya

    2018-03-01

    Cancer is a leading cause of death worldwide, although a significant proportion of cases can be cured if detected early. In recent decades, a technology called microarray has taken an important role in the diagnosis of cancer. Using data mining techniques, microarray data classification can be performed to improve the accuracy of cancer diagnosis compared to traditional techniques. Microarray data are characterized by small sample sizes but huge dimensionality, which challenges researchers to provide microarray classification solutions with high performance in both accuracy and running time. This research proposes the use of Principal Component Analysis (PCA) as a dimension reduction method along with a Support Vector Machine (SVM), optimized by kernel functions, as a classifier for microarray data classification. The proposed scheme was applied to seven data sets using 5-fold cross validation, and then evaluated and analysed in terms of both accuracy and running time. The results show that the scheme obtained 100% accuracy for the Ovarian and Lung Cancer data when linear and cubic kernel functions are used. In terms of running time, PCA greatly reduced the running time for every data set.
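
    A hedged sketch of the scheme described (PCA for dimension reduction feeding an SVM with different kernels, evaluated by 5-fold cross validation) is given below with scikit-learn; the "microarray" matrix is simulated, and the number of retained components is an arbitrary choice.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Simulated stand-in for a microarray set: 100 samples x 2000 genes, two classes,
# with a weak differential-expression signal in the first 25 genes.
rng = np.random.default_rng(7)
X = rng.normal(size=(100, 2000))
y = rng.integers(0, 2, size=100)
X[y == 1, :25] += 0.8

# PCA shrinks 2000 dimensions to a handful before the kernel SVM sees the data.
for kernel in ("linear", "poly", "rbf"):           # "poly" with degree=3 ~ cubic
    clf = make_pipeline(PCA(n_components=20), SVC(kernel=kernel, degree=3))
    acc = cross_val_score(clf, X, y, cv=5).mean()  # 5-fold CV as in the record
    print(f"{kernel:6s} kernel: {acc:.2f} mean accuracy")
```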

  2. Improved Principal Component Analysis for Anomaly Detection: Application to an Emergency Department

    KAUST Repository

    Harrou, Fouzi

    2015-07-03

    Monitoring of production systems, such as those in hospitals, is essential for ensuring sound management and maintaining the desired product quality. Detection of emergent abnormalities allows preemptive actions that can prevent more serious consequences. The principal component analysis (PCA)-based anomaly-detection approach has been used successfully for monitoring systems with highly correlated variables. However, conventional PCA-based detection indices, such as Hotelling's T² and the Q statistics, are ill suited to detect small abnormalities because they use only information from the most recent observations. Other multivariate statistical metrics, such as the multivariate cumulative sum (MCUSUM) control scheme, are more suitable for detecting small anomalies. In this paper, a generic anomaly detection scheme based on PCA is proposed to monitor demands to an emergency department. In this framework, the MCUSUM control chart is applied to the uncorrelated residuals obtained from the PCA model. The proposed PCA-based MCUSUM anomaly detection strategy is successfully applied to practical data collected from the database of the pediatric emergency department in the Lille Regional Hospital Centre, France. The detection results show that the proposed method is more effective than conventional PCA-based anomaly-detection methods.
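
    The sketch below illustrates the spirit of the approach with a simplified scalar CUSUM applied to the squared PCA reconstruction residuals; the record's actual chart is a multivariate CUSUM, and the training data, shift size, and chart constants here are all invented.

```python
import numpy as np
from sklearn.decomposition import PCA

# Fit PCA on in-control data, then run a one-sided CUSUM on the squared
# reconstruction residuals (a scalar simplification of the MCUSUM chart).
rng = np.random.default_rng(8)
M = rng.normal(size=(6, 6))                 # invented mixing -> correlated variables
train = rng.normal(size=(500, 6)) @ M
pca = PCA(n_components=3).fit(train)

def q_stat(X):
    """Squared reconstruction error (Q-like residual statistic) per sample."""
    R = X - pca.inverse_transform(pca.transform(X))
    return (R ** 2).sum(axis=1)

q0 = q_stat(train)
k = q0.mean() + 0.5 * q0.std()              # reference value (ad hoc choice)
h = 5.0 * q0.std()                          # decision threshold (ad hoc choice)

test = rng.normal(size=(200, 6)) @ M
test[100:] += 0.8                           # small mean shift from sample 100 on
s, alarm = 0.0, None
for i, q in enumerate(q_stat(test)):
    s = max(0.0, s + q - k)                 # CUSUM accumulates small exceedances
    if s > h and alarm is None:
        alarm = i
print("first alarm at sample:", alarm)
```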

  3. Improved Principal Component Analysis for Anomaly Detection: Application to an Emergency Department

    KAUST Repository

    Harrou, Fouzi; Kadri, Farid; Chaabane, Sondès; Tahon, Christian; Sun, Ying

    2015-01-01

    Monitoring of production systems, such as those in hospitals, is essential for ensuring sound management and maintaining the desired product quality. Detection of emergent abnormalities allows preemptive actions that can prevent more serious consequences. The principal component analysis (PCA)-based anomaly-detection approach has been used successfully for monitoring systems with highly correlated variables. However, conventional PCA-based detection indices, such as Hotelling's T² and the Q statistics, are ill suited to detect small abnormalities because they use only information from the most recent observations. Other multivariate statistical metrics, such as the multivariate cumulative sum (MCUSUM) control scheme, are more suitable for detecting small anomalies. In this paper, a generic anomaly detection scheme based on PCA is proposed to monitor demands to an emergency department. In this framework, the MCUSUM control chart is applied to the uncorrelated residuals obtained from the PCA model. The proposed PCA-based MCUSUM anomaly detection strategy is successfully applied to practical data collected from the database of the pediatric emergency department in the Lille Regional Hospital Centre, France. The detection results show that the proposed method is more effective than conventional PCA-based anomaly-detection methods.

  4. Characterization of Ground Displacement Sources from Variational Bayesian Independent Component Analysis of Space Geodetic Time Series

    Science.gov (United States)

    Gualandi, Adriano; Serpelloni, Enrico; Elina Belardinelli, Maria; Bonafede, Maurizio; Pezzo, Giuseppe; Tolomei, Cristiano

    2015-04-01

    A critical point in the analysis of ground displacement time series, as measured by modern space geodetic techniques (primarily continuous GPS/GNSS and InSAR), is the development of data-driven methods that allow one to discern and characterize the different sources that generate the observed displacements. A widely used multivariate statistical technique is Principal Component Analysis (PCA), which reduces the dimensionality of the data space while retaining most of the explained variance of the dataset. It reproduces the original data using a limited number of Principal Components, but it also shows some deficiencies, since PCA does not perform well in solving the so-called Blind Source Separation (BSS) problem. Recovering and separating the different sources that generate the observed ground deformation is a fundamental task in order to give a physical meaning to the possible different sources. PCA fails in the BSS problem since it looks for a new Euclidean space where the projected data are uncorrelated. Usually, the uncorrelation condition is not strong enough, and it has been proven that the BSS problem can be tackled by requiring the components to be independent. Independent Component Analysis (ICA) is, in fact, another popular technique adopted to approach this problem, and it can be used in all those fields where PCA is also applied. An ICA approach enables us to explain the displacement time series while imposing fewer constraints on the model, and to reveal anomalies in the data such as transient deformation signals. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, we use a variational Bayesian ICA (vbICA) method, which models the probability density function (pdf) of each source signal using a mixture of Gaussian distributions. This technique allows for more flexibility in the description of the pdf of the sources

  5. Discriminatory components retracing strategy for monitoring the preparation procedure of Chinese patent medicines by fingerprint and chemometric analysis.

    Directory of Open Access Journals (Sweden)

    Shuai Yao

    Full Text Available Chinese patent medicines (CPM, generally prepared from several traditional Chinese medicines (TCMs in accordance with specific process, are the typical delivery form of TCMs in Asia. To date, quality control of CPMs has typically focused on the evaluation of the final products using fingerprint technique and multi-components quantification, but rarely on monitoring the whole preparation process, which was considered to be more important to ensure the quality of CPMs. In this study, a novel and effective strategy labeling "retracing" way based on HPLC fingerprint and chemometric analysis was proposed with Shenkang injection (SKI serving as an example to achieve the quality control of the whole preparation process. The chemical fingerprints were established initially and then analyzed by similarity, principal component analysis (PCA and partial least squares-discriminant analysis (PLS-DA to evaluate the quality and to explore discriminatory components. As a result, the holistic inconsistencies of ninety-three batches of SKIs were identified and five discriminatory components including emodic acid, gallic acid, caffeic acid, chrysophanol-O-glucoside, and p-coumaroyl-O-galloyl-glucose were labeled as the representative targets to explain the retracing strategy. Through analysis of the targets variation in the corresponding semi-products (ninety-three batches, intermediates (thirty-three batches, and the raw materials, successively, the origins of the discriminatory components were determined and some crucial influencing factors were proposed including the raw materials, the coextraction temperature, the sterilizing conditions, and so on. Meanwhile, a reference fingerprint was established and subsequently applied to the guidance of manufacturing. It was suggested that the production process should be standardized by taking the concentration of the discriminatory components as the diagnostic marker to ensure the stable and consistent quality for multi

  6. A Note on McDonald's Generalization of Principal Components Analysis

    Science.gov (United States)

    Shine, Lester C., II

    1972-01-01

    It is shown that McDonald's generalization of Classical Principal Components Analysis to groups of variables maximally channels the total variance of the original variables through the groups of variables acting as groups. An equation is obtained for determining the vectors of correlations of the L2 components with the original variables.…

  7. Specialized data analysis for the Space Shuttle Main Engine and diagnostic evaluation of advanced propulsion system components

    Science.gov (United States)

    1993-01-01

    The Marshall Space Flight Center is responsible for the development and management of advanced launch vehicle propulsion systems, including the Space Shuttle Main Engine (SSME), which is presently operational, and the Space Transportation Main Engine (STME) under development. The SSME's provide high performance within stringent constraints on size, weight, and reliability. Based on operational experience, continuous design improvement is in progress to enhance system durability and reliability. Specialized data analysis and interpretation is required in support of SSME and advanced propulsion system diagnostic evaluations. Comprehensive evaluation of the dynamic measurements obtained from test and flight operations is necessary to provide timely assessment of the vibrational characteristics indicating the operational status of turbomachinery and other critical engine components. Efficient performance of this effort is critical due to the significant impact of dynamic evaluation results on ground test and launch schedules, and requires direct familiarity with SSME and derivative systems, test data acquisition, and diagnostic software. Detailed analysis and evaluation of dynamic measurements obtained during SSME and advanced system ground test and flight operations was performed including analytical/statistical assessment of component dynamic behavior, and the development and implementation of analytical/statistical models to efficiently define nominal component dynamic characteristics, detect anomalous behavior, and assess machinery operational condition. In addition, the SSME and J-2 data will be applied to develop vibroacoustic environments for advanced propulsion system components, as required. This study will provide timely assessment of engine component operational status, identify probable causes of malfunction, and indicate feasible engineering solutions. This contract will be performed through accomplishment of negotiated task orders.

  8. Derivation of the reduced reaction mechanisms of ozone depletion events in the Arctic spring by using concentration sensitivity analysis and principal component analysis

    Directory of Open Access Journals (Sweden)

    L. Cao

    2016-12-01

    unimportant in the concentration sensitivity analysis; in addition, nine reactions were found to contribute only little to the total response of the system. They can therefore be eliminated from the original reaction scheme. The results computed by applying the reduced reaction mechanism derived after the principal component analysis agree well with those obtained using the original reaction scheme. The maximum deviation of the mixing ratio of the principal bromine species is found to be less than 10%, which is guaranteed by the selection criterion adopted in the simplification process. Moreover, the principal component analysis shows that O(¹D) in the mechanism of ODEs is in a quasi-steady state, which enables a further simplification of the reduced reaction mechanism obtained in the present study.

  9. Applied structural and solid mechanics section: 1983 review and 1984 programs

    International Nuclear Information System (INIS)

    Chadha, J.A.

    1984-01-01

    This report briefly reviews the applied research and problem-solving work carried out by the Applied Structural and Solid Mechanics Section during 1983. In 1983 there was a strong demand for services in the areas of theoretical and experimental stress analysis, heat transfer analysis, nonlinear analysis, and general structural analyses related to nuclear and thermal power plant and transmission line components. Development of capabilities in these areas progressed well. Proposed work programs for 1984 are also outlined in this report.

  10. Applied research of environmental monitoring using instrumental neutron activation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Young Sam; Moon, Jong Hwa; Chung, Young Ju

    1997-08-01

    This technical report is written as a guide book for applied research on environmental monitoring using Instrumental Neutron Activation Analysis. The contents are as follows: sampling and sample preparation of airborne particulate matter; analytical methodologies; data evaluation and interpretation; and basic statistical methods of data analysis applied in environmental pollution studies. (author). 23 refs., 7 tabs., 9 figs.

  11. Functional principal component analysis of glomerular filtration rate curves after kidney transplant.

    Science.gov (United States)

    Dong, Jianghu J; Wang, Liangliang; Gill, Jagbir; Cao, Jiguo

    2017-01-01

    This article is motivated by longitudinal clinical data on kidney transplant recipients, where kidney function progression is recorded as the estimated glomerular filtration rate at multiple time points post kidney transplantation. We propose to use the functional principal component analysis method to explore the major sources of variation in glomerular filtration rate curves. We find that the estimated functional principal component scores can be used to cluster glomerular filtration rate curves. Ordering the functional principal component scores can detect abnormal glomerular filtration rate curves. Finally, functional principal component analysis can effectively estimate missing glomerular filtration rate values and predict future glomerular filtration rate values.
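
    A crude stand-in for FPCA on densely sampled curves is ordinary PCA on the discretized trajectories. The sketch below generates invented eGFR-like curves, clusters the component scores, and uses the truncated reconstruction as a simple imputation of curve values; none of it reproduces the paper's method in detail.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Discretized stand-in for FPCA: each row is one recipient's eGFR curve sampled
# on a common monthly grid after transplant (all values invented).
rng = np.random.default_rng(9)
months = np.arange(1, 37)
n = 80
level = 55 + 10 * rng.normal(size=(n, 1))        # subject-specific baseline
slope = rng.normal(0.0, 0.3, size=(n, 1))        # subject-specific progression
curves = level + slope * months + 2 * rng.normal(size=(n, months.size))

pca = PCA(n_components=2).fit(curves)
scores = pca.transform(curves)                   # "FPC scores" per recipient
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
print(np.bincount(clusters))                     # groups of similar trajectories

# Reconstruction from a few scores smooths a curve and can fill in values.
recon = pca.inverse_transform(scores[:1])
print(np.abs(recon - curves[:1]).mean().round(2))
```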

  12. Component mode synthesis in structural dynamics

    International Nuclear Information System (INIS)

    Reddy, G.R.; Vaze, K.K.; Kushwaha, H.S.

    1993-01-01

    In the seismic analysis of nuclear reactor structures and equipment, the eigensolution requires large amounts of computer time. Component mode synthesis is an efficient technique with which one can evaluate the dynamic characteristics of a large structure with minimum computer time. It therefore makes possible a coupled analysis of structure and equipment that takes interaction effects into account. Basically, in this method a large structure is divided into small substructures and the dynamic characteristics of each substructure are determined. The dynamic characteristics of the entire structure are then evaluated by synthesising the individual substructure characteristics. In this paper, component mode synthesis is applied to the analysis of a tall heavy water upgrading tower. The use of fixed-interface normal modes, constraint modes and attachment modes in component mode synthesis based on energy principles and on Ritz vectors is discussed. The validity of the method is established by solving a fixed-fixed beam and comparing the results obtained by the conventional and classical methods. The eigenvalue problem is solved using the simultaneous iteration method. (author)
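
    A minimal fixed-interface (Craig-Bampton style) component mode synthesis can be demonstrated on a spring-mass chain split into two substructures; this generic textbook construction is offered as an illustration only, and the chain size, interface location, and number of kept modes are arbitrary.

```python
import numpy as np
from scipy.linalg import eigh

def chain_K(n, fix_left, fix_right):
    """Stiffness of n unit masses joined by unit springs, ends optionally fixed."""
    K = np.zeros((n, n))
    for j in range(n - 1):
        K[j, j] += 1.0; K[j + 1, j + 1] += 1.0
        K[j, j + 1] -= 1.0; K[j + 1, j] -= 1.0
    if fix_left:  K[0, 0] += 1.0
    if fix_right: K[-1, -1] += 1.0
    return K

def craig_bampton(K, M, b, n_modes):
    """Reduce (K, M) keeping n_modes fixed-interface modes plus boundary DOFs b."""
    i = np.setdiff1d(np.arange(K.shape[0]), b)
    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    _, Phi = eigh(Kii, M[np.ix_(i, i)])          # fixed-interface normal modes
    Phi = Phi[:, :n_modes]
    Psi = -np.linalg.solve(Kii, Kib)             # static constraint modes
    T = np.zeros((len(i) + len(b), n_modes + len(b)))
    T[:len(i), :n_modes] = Phi
    T[:len(i), n_modes:] = Psi
    T[len(i):, n_modes:] = np.eye(len(b))
    p = np.r_[i, b]                              # reorder to [interior, boundary]
    return T.T @ K[np.ix_(p, p)] @ T, T.T @ M[np.ix_(p, p)] @ T

n, b, m = 20, 10, 4                              # 20 masses, interface at DOF 10
Kf, Mf = chain_K(n, True, True), np.eye(n)       # full fixed-fixed reference model

KA, MA = chain_K(b + 1, True, False), np.eye(b + 1); MA[-1, -1] = 0.5
KB, MB = chain_K(n - b, False, True), np.eye(n - b); MB[0, 0] = 0.5
KAr, MAr = craig_bampton(KA, MA, np.array([b]), m)
KBr, MBr = craig_bampton(KB, MB, np.array([0]), m)

N = 2 * m + 1                                    # modal coords + shared interface
sA, sB = np.r_[:m, [N - 1]], np.r_[m:2 * m, [N - 1]]
Kg, Mg = np.zeros((N, N)), np.zeros((N, N))
Kg[np.ix_(sA, sA)] += KAr; Mg[np.ix_(sA, sA)] += MAr
Kg[np.ix_(sB, sB)] += KBr; Mg[np.ix_(sB, sB)] += MBr

w_full = np.sqrt(eigh(Kf, Mf, eigvals_only=True)[:5])
w_cms = np.sqrt(eigh(Kg, Mg, eigvals_only=True)[:5])
print(np.c_[w_full, w_cms].round(4))             # low modes match closely
```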

  13. Aging evaluation of active components by using performance evaluation

    International Nuclear Information System (INIS)

    Jung, S. K.; Jin, T. E.; Kim, J. S.; Jung, I. S.; Kim, T. R.

    2003-01-01

    Risk analysis and performance evaluation methodologies were applied to the aging evaluation of active components in the periodic safety review of Wolsung unit 1. We conclude that evaluating performance is more effective for discriminating the aging degradation of active components than evaluating aging mechanisms. It is essential to analyze the common cause failures of low-performance components in order to evaluate the adequacy of the present maintenance system. The failure history of the past 10 years was used to establish the performance criteria, and the failure history of the past 2 years was used to evaluate the recent performance condition. We analyzed the failure modes of the components to improve the maintenance system. The performance evaluation methodology is useful for the quantitative evaluation of the aging degradation of active components, and analysis of repeated failures can provide useful feedback to the maintenance plan and intervals.

  14. An elementary components of variance analysis for multi-center quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1977-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality control (QC) studies. Statistical analysis methods for such studies using an 'analysis of variance with components of variance estimation' are discussed. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Components of variance analysis also provides an intelligent way to combine the results of several QC samples run at different levels, from which we may decide whether any component varies systematically with dose level; if not, pooling of estimates becomes possible. We consider several possible relationships of the standard deviation to the laboratory mean. Each relationship corresponds to an underlying statistical model and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine whether an appropriate model has been chosen, although the exact functional relationship of standard deviation to laboratory mean may be difficult to establish. Appropriate graphical display of the data aids visual understanding of the data. A plot of the ranked standard deviation versus the ranked laboratory mean is a convenient way to summarize a QC study; this plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean. (orig.) [de]

  15. The baking analysis for vacuum vessel and plasma facing components of the KSTAR tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Lee, K.H. [Chungnam National University Graduate School, Taejeon (Korea); Im, K.H.; Cho, S.Y. [Korea Basic Science Institute, Taejeon (Korea); Kim, J.B. [Hyundai Heavy Industries Co., Ltd. (Korea); Woo, H.K. [Chungnam National University, Taejeon (Korea)

    2000-11-01

    The base pressure of the vacuum vessel of the KSTAR (Korea Superconducting Tokamak Advanced Research) tokamak must be an ultra-high vacuum, 10⁻⁶∼10⁻⁷ Pa, to produce clean plasma with low impurity content. For this purpose, the KSTAR vacuum vessel and plasma facing components need to be baked up to at least 250 °C and 350 °C, respectively, within 24 hours by hot nitrogen gas from a separate baking/cooling line system to remove impurities from the plasma-material interaction surfaces before plasma operation. By applying an implicit numerical method to the heat balance equations of the system, overall temperature distributions of the KSTAR vacuum vessel and plasma facing components are obtained during the whole baking process. The model for the 2-dimensional baking analysis is segmented into 9 imaginary sectors corresponding to each plasma facing component and has up-down symmetry. Under the resulting combined loads, including dead weight, baking gas pressure, vacuum pressure and thermal loads, thermal stresses in the vacuum vessel during bakeout are calculated using the ANSYS code. It is found that the vacuum vessel and its supports are structurally rigid based on the thermal stress analyses. (author). 9 refs., 11 figs., 1 tab.

  16. Time-domain ultra-wideband radar, sensor and components theory, analysis and design

    CERN Document Server

    Nguyen, Cam

    2014-01-01

    This book presents the theory, analysis, and design of ultra-wideband (UWB) radar and sensor systems (in short, UWB systems) and their components. UWB systems find numerous applications in the military, security, civilian, commercial and medical fields. This book addresses five main topics of UWB systems: System Analysis, Transmitter Design, Receiver Design, Antenna Design and System Integration and Test. The development of a practical UWB system and its components using microwave integrated circuits, as well as various measurements, is described in detail to demonstrate the theory, analysis and design techniques. Essentially, this book will enable readers to design their own UWB systems and components. In the System Analysis chapter, the UWB principle of operation as well as the power budget analysis and range resolution analysis are presented. In the UWB Transmitter Design chapter, the design, fabrication and measurement of impulse and monocycle pulse generators are covered. The UWB Receiver Design cha...

  17. Study on determination of durability analysis process and fatigue damage parameter for rubber component

    International Nuclear Information System (INIS)

    Moon, Seong In; Cho, Il Je; Woo, Chang Su; Kim, Wan Doo

    2011-01-01

    Rubber components, which have been widely used in the automotive industry as anti-vibration components for many years, are subjected to fluctuating loads, often failing due to the nucleation and growth of defects or cracks. To prevent such failures, it is necessary to understand the fatigue failure mechanism of rubber materials and to evaluate the fatigue life of rubber components. The objective of this study is to develop a durability analysis process for vulcanized rubber components that can predict fatigue life at the initial product design step. A method for determining the nonlinear material constants for FE analysis was proposed. Also, to investigate the applicability of the commonly used damage parameters, fatigue tests and corresponding finite element analyses were carried out, and the normal and shear strains were proposed as the fatigue damage parameters for rubber components. Fatigue analysis for automotive rubber components was performed, and the durability analysis process was reviewed.

  18. Applied research of Primary Pump Mission Profile construction

    International Nuclear Information System (INIS)

    Zheng, Gang-yang; Zhang, Zhi-jian; Ye, Quan-liu; Du, Zhi-hao; Ma, Ying-fei; Zhang, Hua-zhi

    2017-01-01

    Highlights: • Minimum Associated Subtask (MAS) and Minimum Effective Component (MEC) are introduced into Mission Profile analysis. • By applying MAS and MEC, the Mission Profile plays a more important role in complex system reliability analysis. • The Mission Profile has already been used in the reliability analysis of the localized Chinese 1000 MW NPP Primary Pump. - Abstract: Traditional Mission Profile analysis did not clarify the concepts of minimum subtask and minimum component. However, certain components can be key elements influencing system reliability, and certain subtasks can serve as basic and crucial missions. In this paper, the traditional Mission Profile method has been extended by incorporating two new ideas: the Minimum Associated Subtask (MAS) and the Minimum Effective Component (MEC). This method of Mission Profile modeling is derived from the localization of the Chinese 1000 MW NPP Primary Pump. A case study on Primary Pump reliability is presented, in which MAS and MEC serve as vital elements in the lifecycle profile construction. By means of MAS and MEC, the Mission Profile plays a more important role in complex system (Primary Pump) reliability analysis.

  19. A component analysis of positive behaviour support plans.

    Science.gov (United States)

    McClean, Brian; Grey, Ian

    2012-09-01

    Positive behaviour support (PBS) emphasises multi-component interventions by natural intervention agents to help people overcome challenging behaviours. This paper investigates which components are most effective and which factors might mediate effectiveness. Sixty-one staff working with individuals with intellectual disability and challenging behaviours completed longitudinal competency-based training in PBS. Each staff participant conducted a functional assessment and developed and implemented a PBS plan for one prioritised individual. A total of 1,272 interventions were available for analysis. Measures of challenging behaviour were taken at baseline, after 6 months, and at an average of 26 months follow-up. There was a significant reduction in the frequency, management difficulty, and episodic severity of challenging behaviour over the duration of the study. Escape was identified by staff as the most common function, accounting for 77% of challenging behaviours. The most commonly implemented components of intervention were setting event changes and quality-of-life-based interventions. Only treatment acceptability was found to be related to decreases in behavioural frequency. No single intervention component was found to have a greater association with reductions in challenging behaviour.

  20. Separation of GRACE geoid time-variations using Independent Component Analysis

    Science.gov (United States)

    Frappart, F.; Ramillien, G.; Maisongrande, P.; Bonnet, M.

    2009-12-01

    Independent Component Analysis (ICA) is a blind separation method based on the simple assumptions of the independence of the sources and the non-Gaussianity of the observations. An approach based on this numerical method is used here to extract hydrological signals over land and ocean from the polluting striping noise, due to orbit repetitiveness, that is present in the GRACE global mass anomalies. We took advantage of the availability of monthly Level-2 solutions from three official providers (i.e., CSR, JPL and GFZ) that can be considered as different observations of the same phenomenon. The efficiency of the methodology is first demonstrated on a synthetic case. Applied to one month of GRACE solutions, it clearly separates the total water storage change from the meridionally oriented spurious gravity signals on the continents, but not on the oceans. For continental water storage, this technique gives results equivalent to those of the destriping method, with less smoothing of the hydrological patterns. The methodology is then used to filter the complete series of the 2002-2009 GRACE solutions.
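
    As a toy analogue of this separation step (scikit-learn's FastICA on synthetic series standing in for the CSR/JPL/GFZ solutions, not the authors' implementation), three noisy mixtures of a smooth "hydrological" signal and a high-frequency "stripe" artifact are unmixed:

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(1)
        t = np.linspace(0, 1, 500)
        hydro = np.sin(2 * np.pi * 1.0 * t)            # smooth annual-like signal
        stripes = np.sign(np.sin(2 * np.pi * 40 * t))  # high-frequency artifact
        S = np.c_[hydro, stripes]

        # Three "providers" observe different mixtures of the same two sources
        A = np.array([[1.0, 0.6], [0.8, 0.9], [1.2, 0.4]])
        X = S @ A.T + 0.05 * rng.standard_normal((500, 3))

        ica = FastICA(n_components=2, random_state=0)
        S_est = ica.fit_transform(X)                   # estimated independent sources

        # Match each estimated component to the closest true source by correlation
        for j in range(2):
            c = [abs(np.corrcoef(S_est[:, j], S[:, k])[0, 1]) for k in range(2)]
            print(f"component {j}: best match source {int(np.argmax(c))}, |r| = {max(c):.2f}")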

  1. Blind Separation of Acoustic Signals Combining SIMO-Model-Based Independent Component Analysis and Binary Masking

    Directory of Open Access Journals (Sweden)

    Hiekata Takashi

    2006-01-01

    A new two-stage blind source separation (BSS) method for convolutive mixtures of speech is proposed, in which a single-input multiple-output (SIMO) model-based independent component analysis (ICA) and a new SIMO-model-based binary masking are combined. SIMO-model-based ICA enables us to separate the mixed signals, not into monaural source signals, but into SIMO-model-based signals from independent sources in their original form at the microphones. Thus, the separated signals of SIMO-model-based ICA can maintain the spatial qualities of each sound source. Owing to this attractive property, our novel SIMO-model-based binary masking can be applied to efficiently remove the residual interference components after SIMO-model-based ICA. The experimental results reveal that the separation performance can be considerably improved by the proposed method compared with that achieved by conventional BSS methods. In addition, a real-time implementation of the proposed BSS is illustrated.
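
    The binary-masking stage can be sketched generically (a plain magnitude-comparison mask on STFT bins of synthetic tones, not the paper's SIMO-model-based mask): each time-frequency bin of the mixture is kept for whichever rough source estimate dominates there.

        import numpy as np
        from scipy.signal import stft, istft

        fs = 8000
        t = np.arange(fs) / fs
        est1 = np.sin(2 * np.pi * 440 * t)   # rough estimate of source 1
        est2 = np.sin(2 * np.pi * 1320 * t)  # rough estimate of source 2
        mix = est1 + est2                    # observed mixture

        f, tt, Z_mix = stft(mix, fs=fs, nperseg=256)
        _, _, Z1 = stft(est1, fs=fs, nperseg=256)
        _, _, Z2 = stft(est2, fs=fs, nperseg=256)

        mask1 = np.abs(Z1) > np.abs(Z2)      # bins where estimate 1 dominates
        _, y1 = istft(Z_mix * mask1, fs=fs, nperseg=256)
        _, y2 = istft(Z_mix * ~mask1, fs=fs, nperseg=256)

        # Each reconstruction should retain mostly one tone
        print("energy ratio y1/y2:", np.sum(y1**2) / np.sum(y2**2))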

  2. Principal Component Analysis of Working Memory Variables during Child and Adolescent Development.

    Science.gov (United States)

    Barriga-Paulino, Catarina I; Rodríguez-Martínez, Elena I; Rojas-Benjumea, María Ángeles; Gómez, Carlos M

    2016-10-03

    Correlation and Principal Component Analysis (PCA) of behavioral measures from two experimental tasks (Delayed Match-to-Sample and Oddball), and standard scores from a neuropsychological test battery (Working Memory Test Battery for Children), were performed on data from participants between 6 and 18 years old. In the correlation analysis (p < .05) and the PCA (retaining components with eigenvalues > 1), the scores of the first extracted component were significantly correlated (p < .05) with most behavioral measures, suggesting some commonality in the processes of age-related change in the measured variables. The results suggest that this first component is related to age but also to individual differences during the cognitive maturation process across the childhood and adolescence stages. The fourth component appears to represent the speed-accuracy trade-off phenomenon, as it presents loadings with different signs for reaction times and errors.

  3. A systematic concept of assuring structural integrity of components and parts for applying to highly ductile materials through brittle material

    International Nuclear Information System (INIS)

    Suzuki, Kazuhiko

    2007-09-01

    Concepts for assuring the structural integrity of plant components have been developed under the limited conditions of either highly ductile or brittle materials. In some cases, however, operation under increasingly severe conditions causes a significant reduction in ductility for materials that were highly ductile before service, and the industrial use of high-strength steels with relatively reduced ductility is increasing. Current concepts of structural integrity assurance, which are limited to particular material properties or which require that material properties not change significantly even after long service, will fail to accommodate expected technological innovations. A systematic concept of structural integrity assurance should therefore be developed that applies across the full range from highly ductile through brittle materials. The objectives of the ongoing research are to propose such a systematic concept, considering how it can be developed without restricting the materials and how it can systematically cover a broad range of material properties from highly ductile through brittle. First, the background of existing structural codes for components of highly ductile materials and for structural parts of brittle materials is discussed. Next, issues in the existing code for parts of brittle materials are identified, and resolutions to these issues are proposed. Based on these discussions and proposals, a systematic concept is proposed for application to components of reduced-ductility materials and to components whose material properties change significantly during long service. (author)

  4. The Significance of Regional Analysis in Applied Geography.

    Science.gov (United States)

    Sommers, Lawrence M.

    Regional analysis is central to applied geographic research, contributing to better planning and policy development for a variety of societal problems facing the United States. The development of energy policy serves as an illustration of the capabilities of this type of analysis. The United States has had little success in formulating a national…

  5. Grouping and the pitch of a mistuned fundamental component: Effects of applying simultaneous multiple mistunings to the other harmonics.

    Science.gov (United States)

    Roberts, Brian; Holmes, Stephen D

    2006-12-01

    Mistuning a harmonic produces an exaggerated change in its pitch. This occurs because the component becomes inconsistent with the regular pattern that causes the other harmonics (constituting the spectral frame) to integrate perceptually. These pitch shifts were measured when the fundamental (F0) component of a complex tone (nominal F0 frequency = 200 Hz) was mistuned by +8% and -8%. The pitch-shift gradient was defined as the difference between these values and its magnitude was used as a measure of frame integration. An independent and random perturbation (spectral jitter) was applied simultaneously to most or all of the frame components. The gradient magnitude declined gradually as the degree of jitter increased from 0% to +/-40% of F0. The component adjacent to the mistuned target made the largest contribution to the gradient, but more distant components also contributed. The stimuli were passed through an auditory model, and the exponential height of the F0-period peak in the averaged summary autocorrelation function correlated well with the gradient magnitude. The fit improved when the weighting on more distant channels was attenuated by a factor of three per octave. The results are consistent with a grouping mechanism that computes a weighted average of periodicity strength across several components.
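
    A much-simplified illustration of the periodicity-strength idea (plain autocorrelation of a synthetic complex tone, standing in for the paper's auditory model and summary autocorrelation function): mistuning the F0 component lowers the autocorrelation peak at the 5-ms period of a 200-Hz complex.

        import numpy as np

        fs = 16000
        t = np.arange(int(0.2 * fs)) / fs   # 200 ms of signal
        f0 = 200.0

        def complex_tone(mistune_pct):
            """Harmonics 1-10 of 200 Hz, with only the fundamental mistuned."""
            x = np.zeros_like(t)
            for h in range(1, 11):
                f = f0 * (1 + mistune_pct / 100.0) if h == 1 else f0 * h
                x += np.sin(2 * np.pi * f * t)
            return x

        lag = int(round(fs / f0))           # lag of the 5-ms F0 period (80 samples)
        for m in (0.0, 8.0):
            x = complex_tone(m)
            acf = np.correlate(x, x, mode="full")[len(x) - 1:]
            acf /= acf[0]                   # normalize so acf[0] = 1
            print(f"mistuning {m:+.0f}%: ACF at F0 period = {acf[lag]:.3f}")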

  6. Analysis of Queues with Rational Arrival Process Components - A General Approach

    DEFF Research Database (Denmark)

    Bean, Nigel; Nielsen, Bo Friis

    In a previous paper we demonstrated that the well known matrix-geometric solution of Quasi-Birth-and-Death processes is valid also if we introduce Rational Arrival Process (RAP) components. Here we extend those results and we offer an alternative proof by using results obtained by Tweedie. We prove the matrix-geometric form for a certain kind of operators on the stationary measure for discrete time Markov chains of GI/M/1 type. We apply this result to an embedded chain with RAP components. We then discuss the straightforward modification of the standard algorithms for calculating the matrix R...
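
    For orientation, the rate matrix R of an ordinary discrete-time QBD (without the RAP components that are this paper's contribution) can be computed by the classical fixed-point iteration R ← A0 + R·A1 + R²·A2; the blocks below are an assumed toy example:

        import numpy as np

        # Transition blocks of a discrete-time QBD: A0 (level up), A1 (same
        # level), A2 (level down). Rows of A0 + A1 + A2 sum to 1 (toy example).
        A0 = np.array([[0.2, 0.1], [0.0, 0.2]])
        A1 = np.array([[0.3, 0.1], [0.2, 0.3]])
        A2 = np.array([[0.2, 0.1], [0.1, 0.2]])
        assert np.allclose((A0 + A1 + A2).sum(axis=1), 1.0)

        # Minimal nonnegative solution of R = A0 + R A1 + R^2 A2
        R = np.zeros_like(A0)
        for _ in range(1000):
            R_next = A0 + R @ A1 + R @ R @ A2
            if np.max(np.abs(R_next - R)) < 1e-12:
                break
            R = R_next

        print("R =", R)
        print("spectral radius:", max(abs(np.linalg.eigvals(R))))  # < 1 if stable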

  7. Scalable Robust Principal Component Analysis Using Grassmann Averages

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi

    2016-01-01

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately...

  8. Analysis and Classification of Acoustic Emission Signals During Wood Drying Using the Principal Component Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Ho Yang [Korea Research Institute of Standards and Science, Daejeon (Korea, Republic of); Kim, Ki Bok [Chungnam National University, Daejeon (Korea, Republic of)

    2003-06-15

    In this study, acoustic emission (AE) signals due to surface cracking and moisture movement in flat-sawn boards of oak (Quercus variabilis) during drying under ambient conditions were analyzed and classified using principal component analysis. The AE signals corresponding to surface cracking were higher in peak amplitude and peak frequency, and shorter in rise time, than those corresponding to moisture movement. To reduce the multicollinearity among the AE features and to extract the significant AE parameters, correlation analysis was performed. Over 99% of the variance of the AE parameters could be accounted for by the first four principal components. The classification feasibility and success rate were investigated for two statistical classifiers, one having six independent variables (AE parameters) and one having six principal components. As a result, the statistical classifier based on the AE parameters showed a success rate of 70.0%, while the classifier based on the principal components showed a success rate of 87.5%, considerably higher than that of the classifier based on the AE parameters.
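
    A generic version of this classify-on-principal-components workflow (synthetic two-class AE feature data and scikit-learn in place of the authors' statistical classifiers) might look as follows: standardize the six features, project onto a few principal components, and cross-validate a linear discriminant classifier.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(2)
        n = 200  # synthetic AE events per class

        # Six correlated AE features per event (amplitude, frequency, rise time, ...)
        cov = np.diag([50.0, 900.0, 1.0, 30.0, 4.0, 0.5])
        crack = rng.multivariate_normal([80, 300, 5, 40, 10, 2], cov, n)
        moisture = rng.multivariate_normal([60, 150, 9, 35, 9, 2], cov, n)
        X = np.vstack([crack, moisture])
        y = np.array([0] * n + [1] * n)

        clf = make_pipeline(StandardScaler(), PCA(n_components=4),
                            LinearDiscriminantAnalysis())
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"cross-validated success rate: {scores.mean():.1%}")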

  9. Analysis and Classification of Acoustic Emission Signals During Wood Drying Using the Principal Component Analysis

    International Nuclear Information System (INIS)

    Kang, Ho Yang; Kim, Ki Bok

    2003-01-01

    In this study, acoustic emission (AE) signals due to surface cracking and moisture movement in flat-sawn boards of oak (Quercus variabilis) during drying under ambient conditions were analyzed and classified using principal component analysis. The AE signals corresponding to surface cracking were higher in peak amplitude and peak frequency, and shorter in rise time, than those corresponding to moisture movement. To reduce the multicollinearity among the AE features and to extract the significant AE parameters, correlation analysis was performed. Over 99% of the variance of the AE parameters could be accounted for by the first four principal components. The classification feasibility and success rate were investigated for two statistical classifiers, one having six independent variables (AE parameters) and one having six principal components. As a result, the statistical classifier based on the AE parameters showed a success rate of 70.0%, while the classifier based on the principal components showed a success rate of 87.5%, considerably higher than that of the classifier based on the AE parameters.

  10. Nonparametric inference in nonlinear principal components analysis : exploration and beyond

    NARCIS (Netherlands)

    Linting, Mariëlle

    2007-01-01

    In the social and behavioral sciences, data sets often do not meet the assumptions of traditional analysis methods. Therefore, nonlinear alternatives to traditional methods have been developed. This thesis starts with a didactic discussion of nonlinear principal components analysis (NLPCA),

  11. Applied systems analysis. No. 22

    International Nuclear Information System (INIS)

    1980-12-01

    Based on a detailed analysis of demand in the Cologne/Frankfurt area, the quantities of the system products were ascertained which, taking technical conditions and entrepreneurial aspects into account, appeared to be marketable in this region at cost parity with competing energy supplies. Based on these data, the technical components of the system, the site and the piping were fixed, and capital and operating costs were determined. To judge the economics, the key figures net present value, internal rate of return and cost recovery rate were determined from the difference in costs between the nuclear long-distance energy system and alternative facilities. Furthermore, the specific production costs, associated prices and contribution margins are presented for each product. (orig.)

  12. A Novel Method for Surface Defect Detection of Photovoltaic Module Based on Independent Component Analysis

    Directory of Open Access Journals (Sweden)

    Xuewu Zhang

    2013-01-01

    This paper proposes a new method for surface defect detection of photovoltaic modules based on an independent component analysis (ICA) reconstruction algorithm. First, a faultless image is used as the training image, and the de-mixing matrix and the corresponding ICs are obtained by applying ICA to the training image. The ICs are then reordered according to their range values, and the de-mixing matrix is re-formed; the re-formed de-mixing matrix is used to reconstruct the defect image. The resulting image removes the background structures and enhances the local anomalies. Experimental results show that the proposed method can effectively detect the presence of defects in periodically patterned surfaces.

  13. Principal Component Analysis of Chinese Porcelains from the Five Dynasties to the Qing Dynasty

    Science.gov (United States)

    Yap, C. T.; Hua, Younan

    1992-10-01

    This is a study of the possibility of identifying antique Chinese porcelains according to period or dynasty, using major and minor chemical components (SiO2, Al2O3, Fe2O3, K2O, Na2O, CaO and MgO) from the body of the porcelain. Principal component analysis is applied to published data on 66 pieces of Chinese porcelain made in Jingdezhen during the Five Dynasties and the Song, Yuan, Ming and Qing Dynasties. It is shown that porcelains made during the Five Dynasties and the Yuan (or Ming) and Qing Dynasties can be segregated completely without any overlap. However, there is appreciable overlap between the Five Dynasties and the Song Dynasty, some overlap between the Song and Ming Dynasties, and also between the Yuan and Ming Dynasties. Interestingly, Qing porcelains are well separated from all the others. The percentage of silica in the porcelain body decreases, and that of alumina increases, with recency, except between the Yuan and Ming Dynasties, where this trend is reversed.

  14. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yihang Yin

    2015-08-01

    Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inter-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission and prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing, using a novel similarity measure metric. Next, sensor data in one cluster are aggregated at the cluster-head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error-bound guarantee to compress the data while retaining the definite variance. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
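
    The PCA-with-error-bound step can be sketched generically (plain SVD on synthetic correlated sensor readings; the clustering and cluster-head strategy are omitted): just enough components are kept so that the discarded variance stays below a bound.

        import numpy as np

        rng = np.random.default_rng(3)
        n_samples, n_sensors = 500, 12

        # Correlated sensor readings: a few latent signals plus noise
        latent = rng.standard_normal((n_samples, 3))
        mixing = rng.standard_normal((3, n_sensors))
        X = latent @ mixing + 0.1 * rng.standard_normal((n_samples, n_sensors))

        mu = X.mean(axis=0)
        Xc = X - mu
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

        error_bound = 0.05  # leave at most 5% of total variance as residual
        explained = np.cumsum(s**2) / np.sum(s**2)
        k = int(np.searchsorted(explained, 1 - error_bound) + 1)

        Z = Xc @ Vt[:k].T        # compressed representation (k values per sample)
        X_hat = Z @ Vt[:k] + mu  # reconstruction at the sink
        mse = np.mean((X - X_hat) ** 2)
        print(f"components kept: {k} of {n_sensors}, reconstruction MSE: {mse:.4f}")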

  15. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks.

    Science.gov (United States)

    Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong

    2015-08-07

    Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inter-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission and prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing, using a novel similarity measure metric. Next, sensor data in one cluster are aggregated at the cluster-head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error-bound guarantee to compress the data while retaining the definite variance. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.

  16. Determination of arterial input function in dynamic susceptibility contrast MRI using group independent component analysis technique

    International Nuclear Information System (INIS)

    Chen, S.; Liu, H.-L.; Yang Yihong; Hsu, Y.-Y.; Chuang, K.-S.

    2006-01-01

    Quantification of cerebral blood flow (CBF) with dynamic susceptibility contrast (DSC) magnetic resonance imaging (MRI) requires the determination of the arterial input function (AIF). Segmentation from the surrounding tissue by manual selection is error-prone due to partial volume artifacts. Independent component analysis (ICA) has the advantage of automatically decomposing the signals into interpretable components. Recently, the group ICA technique has been applied to fMRI studies and has shown reduced variance caused by motion artifacts and noise. In this work, we investigated the feasibility and efficacy of using the group ICA technique to extract the AIF. Both simulated and in vivo data were analyzed in this study. The simulation data of eight phantoms were generated using randomized lesion locations and time-activity curves. The clinical data were obtained from spin-echo EPI MR scans performed on seven normal subjects. The group ICA technique was applied to analyze the data by concatenating across the seven subjects. The AIFs were calculated from the weighted average of the signals in the region selected by ICA. Preliminary results of this study showed that the group ICA technique could not extract accurate AIF information from regions around the vessel; the mismatched locations of vessels within the group reduced the benefits of the group study.

  17. Application of principal component analysis to ecodiversity assessment of postglacial landscape (on the example of Debnica Kaszubska commune, Middle Pomerania)

    Science.gov (United States)

    Wojciechowski, Adam

    2017-04-01

    In order to assess ecodiversity, understood as a comprehensive natural landscape factor (Jedicke 2001), it is necessary to apply research methods that treat the environment holistically. Principal component analysis may be considered one such method, as it allows the main factors determining landscape diversity to be distinguished on the one hand, and reveals the regularities shaping the relationships between the various elements of the environment under study on the other. The procedure adopted to assess ecodiversity with the use of principal component analysis involves: a) determining and selecting appropriate factors of the assessed environment qualities (hypsometric, geological, hydrographic, plant, and others); b) calculating the absolute value of individual qualities for the basic areas under analysis (e.g. river length, forest area, altitude differences, etc.); c) principal component analysis and obtaining factor maps (maps of selected components); d) generating a resultant, detailed map and isolating several classes of ecodiversity. An assessment of ecodiversity with the use of principal component analysis was conducted in a test area of 299.67 km2 in Debnica Kaszubska commune. The whole commune is situated in the Weichselian glaciation area of high hypsometric and morphological diversity as well as high geo- and biodiversity. The analysis was based on topographical maps of the commune area at scale 1:25000 and maps of forest habitats. Nine factors reflecting basic environment elements were calculated: maximum height (m), minimum height (m), average height (m), the length of watercourses (km), the area of water reservoirs (m2), total forest area (ha), coniferous forest habitat area (ha), deciduous forest habitat area (ha), and alder habitat area (ha). The values of the individual factors were analysed for 358 grid cells of 1 km2. Based on the principal component analysis, four major factors affecting commune ecodiversity...

  18. Chemical fingerprinting of terpanes and steranes by chromatographic alignment and principal component analysis

    International Nuclear Information System (INIS)

    Christensen, J.H.; Hansen, A.B.; Andersen, O.

    2005-01-01

    Biomarkers such as steranes and terpanes are abundant in crude oils, particularly in heavy distillate petroleum products. They are useful for matching highly weathered oil samples when other groups of petroleum hydrocarbons fail to distinguish oil samples. In this study, time warping and principal component analysis (PCA) were applied for oil hydrocarbon fingerprinting based on relative amounts of terpane and sterane isomers analyzed by gas chromatography and mass spectrometry. The 4 principal components were boiling point range, clay content, marine or organic terrestrial matter, and maturity based on differences in the terpane and sterane isomer patterns. This study is an extension of a previous fingerprinting study for identifying the sources of oil spill samples based only on the profiles of sterane isomers. Spill samples from the Baltic Carrier oil spill were correctly identified by inspection of score plots. The interpretation of the loading and score plots offered further chemical information about correlations between changes in the amounts of sterane and terpane isomers. It was concluded that this method is an objective procedure for analyzing chromatograms with more comprehensive data usage compared to other fingerprinting methods. 20 refs., 4 figs

  19. Chemical fingerprinting of terpanes and steranes by chromatographic alignment and principal component analysis

    Energy Technology Data Exchange (ETDEWEB)

    Christensen, J.H. [Royal Veterinary and Agricultural Univ., Thorvaldsensvej (Denmark). Dept. of Natural Sciences; Hansen, A.B. [National Environmental Research Inst., Roskilde (Denmark). Dept. of Environmental Chemistry and Microbiology; Andersen, O. [Roskilde Univ., Roskilde (Denmark). Dept. of Life Sciences and Chemistry

    2005-07-01

    Biomarkers such as steranes and terpanes are abundant in crude oils, particularly in heavy distillate petroleum products. They are useful for matching highly weathered oil samples when other groups of petroleum hydrocarbons fail to distinguish oil samples. In this study, time warping and principal component analysis (PCA) were applied for oil hydrocarbon fingerprinting based on relative amounts of terpane and sterane isomers analyzed by gas chromatography and mass spectrometry. The 4 principal components were boiling point range, clay content, marine or organic terrestrial matter, and maturity based on differences in the terpane and sterane isomer patterns. This study is an extension of a previous fingerprinting study for identifying the sources of oil spill samples based only on the profiles of sterane isomers. Spill samples from the Baltic Carrier oil spill were correctly identified by inspection of score plots. The interpretation of the loading and score plots offered further chemical information about correlations between changes in the amounts of sterane and terpane isomers. It was concluded that this method is an objective procedure for analyzing chromatograms with more comprehensive data usage compared to other fingerprinting methods. 20 refs., 4 figs.

  20. Applied decision analysis and risk evaluation

    International Nuclear Information System (INIS)

    Ferse, W.; Kruber, S.

    1995-01-01

    During 1994 the workgroup 'Applied Decision Analysis and Risk Evaluation' continued the work on the knowledge-based decision support system XUMA-GEFA for the evaluation of the hazard potential of contaminated sites. Additionally, a new research direction was started which aims at supporting a later stage of the treatment of contaminated sites: the clean-up decision. To support the decisions arising at this stage, the methods of decision analysis will be used. Computational aids for evaluation and decision support were implemented, and a case study was initiated at a waste disposal site in Saxony which has turned out to endanger the surrounding groundwater resource. (orig.)

  1. Independent Component Analysis in Multimedia Modeling

    DEFF Research Database (Denmark)

    Larsen, Jan

    2003-01-01

    Modeling of multimedia and multimodal data becomes increasingly important with the digitalization of the world. The objective of this paper is to demonstrate the potential of independent component analysis and blind source separation methods for modeling and understanding of multimedia data, which largely refers to text, images/video, audio and combinations of such data. We review a number of applications within single and combined media with the hope that this might provide inspiration for further research in this area. Finally, we provide a detailed presentation of our own recent work on modeling...

  2. Probabilistic structural analysis methods for select space propulsion system components

    Science.gov (United States)

    Millwater, H. R.; Cruse, T. A.

    1989-01-01

    The Probabilistic Structural Analysis Methods (PSAM) project developed at the Southwest Research Institute integrates state-of-the-art structural analysis techniques with probability theory for the design and analysis of complex large-scale engineering structures. An advanced efficient software system (NESSUS) capable of performing complex probabilistic analysis has been developed. NESSUS contains a number of software components to perform probabilistic analysis of structures. These components include: an expert system, a probabilistic finite element code, a probabilistic boundary element code and a fast probability integrator. The NESSUS software system is shown. An expert system is included to capture and utilize PSAM knowledge and experience. NESSUS/EXPERT is an interactive menu-driven expert system that provides information to assist in the use of the probabilistic finite element code NESSUS/FEM and the fast probability integrator (FPI). The expert system menu structure is summarized. The NESSUS system contains a state-of-the-art nonlinear probabilistic finite element code, NESSUS/FEM, to determine the structural response and sensitivities. A broad range of analysis capabilities and an extensive element library is present.
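
    The job of the fast probability integrator (estimating the probability that a structural response exceeds a limit) can be illustrated by brute-force Monte Carlo over an assumed limit state; FPI itself replaces this sampling with much faster approximate integration.

        import numpy as np
        from statistics import NormalDist

        rng = np.random.default_rng(4)
        n = 1_000_000  # brute-force sample count

        # Assumed limit state g = R - S (capacity minus load), both random
        R = rng.normal(300.0, 30.0, n)                  # strength, MPa
        S = rng.lognormal(np.log(200.0), 0.15, n)       # stress, MPa

        pf = np.mean(R - S < 0.0)                       # failure probability
        beta = -NormalDist().inv_cdf(pf)                # equivalent reliability index
        print(f"P(failure) ~ {pf:.4f}, beta ~ {beta:.2f}")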

  3. Principal component and spatial correlation analysis of spectroscopic-imaging data in scanning probe microscopy

    International Nuclear Information System (INIS)

    Jesse, Stephen; Kalinin, Sergei V

    2009-01-01

    An approach for the analysis of multi-dimensional, spectroscopic-imaging data based on principal component analysis (PCA) is explored. PCA selects and ranks relevant response components based on variance within the data. It is shown that for examples with small relative variations between spectra, the first few PCA components closely coincide with results obtained using model fitting, and this is achieved at rates approximately four orders of magnitude faster. For cases with strong response variations, PCA allows an effective approach to rapidly process, de-noise, and compress data. The prospects for PCA combined with correlation function analysis of component maps as a universal tool for data analysis and representation in microscopy are discussed.

  4. Probabilistic Structural Analysis Methods for select space propulsion system components (PSAM). Volume 2: Literature surveys of critical Space Shuttle main engine components

    Science.gov (United States)

    Rajagopal, K. R.

    1992-01-01

    The technical effort and computer code development is summarized. Several formulations for Probabilistic Finite Element Analysis (PFEA) are described with emphasis on the selected formulation. The strategies being implemented in the first-version computer code to perform linear, elastic PFEA is described. The results of a series of select Space Shuttle Main Engine (SSME) component surveys are presented. These results identify the critical components and provide the information necessary for probabilistic structural analysis. Volume 2 is a summary of critical SSME components.

  5. The role of damage analysis in the assessment of service-exposed components

    International Nuclear Information System (INIS)

    Bendick, W.; Muesch, H.; Weber, H.

    1987-01-01

    Components in power stations are subjected to service conditions under which creep processes take place, limiting the component's lifetime by material exhaustion. To ensure safe and economic plant operation it is necessary to obtain information about the degree of exhaustion of single components as well as of the whole plant. A comprehensive lifetime assessment requires complete knowledge of the service parameters, the component's deformation behavior, and the change in material properties caused by long-time exposure to high service temperatures. A basis for the evaluation is given by: 1) determination of material exhaustion by calculation, 2) investigation of the material properties, and 3) damage analysis. The purpose of this report is to show the role which damage analysis can play in the assessment of service-exposed components. As an example, the test results of a damaged pipe bend are discussed. (orig./MM)

  6. Improvement of retinal blood vessel detection using morphological component analysis.

    Science.gov (United States)

    Imani, Elaheh; Javidi, Malihe; Pourreza, Hamid-Reza

    2015-03-01

    Detection and quantitative measurement of variations in the retinal blood vessels can help diagnose several diseases, including diabetic retinopathy. Intrinsic characteristics of abnormal retinal images make blood vessel detection difficult. The major problem with traditional vessel segmentation algorithms is that they produce false positive vessels in the presence of diabetic retinopathy lesions. To overcome this problem, a novel scheme for extracting retinal blood vessels based on the morphological component analysis (MCA) algorithm is presented in this paper. MCA was developed based on sparse representation of signals. This algorithm assumes that each signal is a linear combination of several morphologically distinct components. In the proposed method, the MCA algorithm with appropriate transforms is adopted to separate vessels and lesions from each other. Afterwards, the Morlet wavelet transform is applied to enhance the retinal vessels. The final vessel map is obtained by adaptive thresholding. The performance of the proposed method is measured on the publicly available DRIVE and STARE datasets and compared with several state-of-the-art methods. Accuracies of 0.9523 and 0.9590 have been achieved on the DRIVE and STARE datasets, respectively, which are not only greater than those of most methods, but are also superior to the second human observer's performance. The results show that the proposed method can achieve improved detection in abnormal retinal images and decrease false positive vessels in pathological regions compared to other methods. The robustness of the method in the presence of noise is also shown experimentally. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  7. Animal Research in the "Journal of Applied Behavior Analysis"

    Science.gov (United States)

    Edwards, Timothy L.; Poling, Alan

    2011-01-01

    This review summarizes the 6 studies with nonhuman animal subjects that have appeared in the "Journal of Applied Behavior Analysis" and offers suggestions for future research in this area. Two of the reviewed articles described translational research in which pigeons were used to illustrate and examine behavioral phenomena of applied significance…

  8. Analysis of Computing Technology on Principal Components Method in SPSS

    Institute of Scientific and Technical Information of China (English)

    王春枝

    2011-01-01

    In view of the errors found in many textbooks and published articles that apply SPSS software for principal component analysis, this paper analyzes the basic principles and mathematical process of the method and, on this basis, demonstrates with a worked example how principal component analysis is carried out in SPSS.
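
    The computation such articles walk through, eigendecomposition of the correlation matrix of standardized variables, can be sketched as follows (a generic textbook algorithm in Python, not SPSS itself; the score convention shown is only one common choice):

        import numpy as np

        rng = np.random.default_rng(5)
        # 100 cases of 5 correlated variables (synthetic data)
        X = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 5))
        X += 0.3 * rng.standard_normal((100, 5))

        # Standardize, then eigendecompose the correlation matrix
        Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
        R = np.corrcoef(X, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(R)
        order = np.argsort(eigvals)[::-1]        # largest eigenvalue first
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]

        loadings = eigvecs * np.sqrt(eigvals)    # component loadings
        scores = Z @ eigvecs / np.sqrt(eigvals)  # standardized component scores

        print("explained variance (%):", np.round(100 * eigvals / eigvals.sum(), 1))
        print("first-component loadings:", np.round(loadings[:, 0], 2))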

  9. Robust LOD scores for variance component-based linkage analysis.

    Science.gov (United States)

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  10. Analysis of spiral components in 16 galaxies

    International Nuclear Information System (INIS)

    Considere, S.; Athanassoula, E.

    1988-01-01

    A Fourier analysis of the intensity distributions in the plane of 16 spiral galaxies of morphological types from 1 to 7 is performed. The galaxies processed are NGC 300, 598, 628, 2403, 2841, 3031, 3198, 3344, 5033, 5055, 5194, 5247, 6946, 7096, 7217, and 7331. The method, mathematically based upon a decomposition of a distribution into a superposition of individual logarithmic spiral components, is first used to determine for each galaxy the position angle PA and the inclination ω of the galaxy plane onto the sky plane. Our results, in good agreement with those obtained by the usual methods in the literature, are discussed. The decomposition of the deprojected galaxies into individual spiral components reveals that the two-armed component is everywhere dominant. Our pitch angles are then compared to the previously published ones, and their quality is checked by drawing each individual logarithmic spiral on the actual deprojected galaxy images. Finally, the surface intensities for the angular periodicities of interest are calculated. A choice of a few of the most important ones is used to construct a composite image that well represents the main spiral features observed in the deprojected galaxies.

  11. Independent component and pathway-based analysis of miRNA-regulated gene expression in a model of type 1 diabetes

    Directory of Open Access Journals (Sweden)

    Hagedorn Peter H

    2011-02-01

    Background: Several approaches have been developed for miRNA target prediction, including methods that incorporate expression profiling. However, the methods are still in need of improvement due to a high false discovery rate. So far, none of the methods have used independent component analysis (ICA). Here, we developed a novel target prediction method based on ICA that incorporates both seed matching and expression profiling of miRNA and mRNA. The method was applied to a cellular model of type 1 diabetes. Results: Microarray profiling identified eight miRNAs (miR-124/128/192/194/204/375/672/708) with differential expression. Applying ICA to the mRNA profiling data revealed five significant independent components (ICs) correlating to the experimental conditions. The five ICs also captured the miRNA expressions by explaining >97% of their variance. Using ICA, seven of the eight miRNAs showed significant enrichment of sequence-predicted targets, compared to only four miRNAs when using simple negative correlation. The ICs were enriched for miRNA targets that function in diabetes-relevant pathways, e.g. type 1 and type 2 diabetes and maturity onset diabetes of the young (MODY). Conclusions: In this study, ICA was applied in an attempt to separate the various factors that influence mRNA expression in order to identify miRNA targets. The results suggest that ICA is better at identifying miRNA targets than negative correlation. Additionally, combining ICA and pathway analysis constitutes a means of prioritizing between the predicted miRNA targets. Applying the method to a model of type 1 diabetes resulted in the identification of eight miRNAs that appear to affect pathways of relevance to disease mechanisms in diabetes.

  12. Modern problems in applied analysis

    CERN Document Server

    Rogosin, Sergei

    2018-01-01

    This book features a collection of recent findings in Applied Real and Complex Analysis that were presented at the 3rd International Conference “Boundary Value Problems, Functional Equations and Applications” (BAF-3), held in Rzeszow, Poland on 20-23 April 2016. The contributions presented here develop a technique related to the scope of the workshop and touching on the fields of differential and functional equations, complex and real analysis, with a special emphasis on topics related to boundary value problems. Further, the papers discuss various applications of the technique, mainly in solid mechanics (crack propagation, conductivity of composite materials), biomechanics (viscoelastic behavior of the periodontal ligament, modeling of swarms) and fluid dynamics (Stokes and Brinkman type flows, Hele-Shaw type flows). The book is addressed to all readers who are interested in the development and application of innovative research results that can help solve theoretical and real-world problems.

  13. Fourier convergence analysis applied to neutron diffusion Eigenvalue problem

    International Nuclear Information System (INIS)

    Lee, Hyun Chul; Noh, Jae Man; Joo, Hyung Kook

    2004-01-01

    Fourier error analysis has been a standard technique for the stability and convergence analysis of linear and nonlinear iterative methods. Though the technique can be applied to eigenvalue problems as well, all Fourier convergence analyses so far have been performed only for fixed-source problems, and a Fourier convergence analysis for an eigenvalue problem has never been reported. Lee et al. proposed new 2-D/1-D coupling methods and showed that the new ones are unconditionally stable, while one of the two existing ones is unstable at small mesh sizes, and that the new ones converge faster than the existing ones. In this paper, the convergence of method A in reference 4 for the diffusion eigenvalue problem was analyzed by Fourier analysis. The Fourier convergence analysis presented in this paper is, to the best of our knowledge, the first one applied to a neutronics eigenvalue problem.

  14. Reliability for systems of degrading components with distinct component shock sets

    International Nuclear Information System (INIS)

    Song, Sanling; Coit, David W.; Feng, Qianmei

    2014-01-01

    This paper studies reliability for multi-component systems subject to dependent competing risks of degradation wear and random shocks, with distinct shock sets. In practice, many systems are exposed to distinct types of shocks that can be categorized according to their size, function, affected components, etc. Previous research primarily focuses on simple systems with independent failure processes, systems with independent component times-to-failure, or components that share the same shock set or type of shocks. In our new model, we classify random shocks into different sets based on their size or function. Shocks of a specific size or function can selectively affect one or more components in the system, but not necessarily all components. Additionally, the shocks from the different shock sets can arrive at different rates and have different relative magnitudes. Preventive maintenance (PM) optimization is conducted for the system with different component shock sets. Decision variables for two different maintenance scheduling problems, the PM replacement time interval and the PM inspection time interval, are determined by minimizing a defined system cost rate. Sensitivity analysis is performed to provide insight into the behavior of the proposed maintenance policies. These models can be applied directly or customized for many complex systems that experience dependent competing failure processes with different component shock sets. A MEMS (micro-electro-mechanical systems) oscillator is a typical system subject to dependent and competing failure processes, and it is used as a numerical example to illustrate the new reliability and maintenance models.
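
    A minimal Monte Carlo sketch of a single component under one such dependent competing-risk model (assumed linear wear plus a Poisson stream of shocks; the parameters are illustrative, and the paper's multi-component structure and maintenance optimization are omitted): the component fails if accumulated wear plus shock damage crosses a soft-failure threshold, or if any single shock exceeds a hard limit.

        import numpy as np

        rng = np.random.default_rng(6)
        n_sim, horizon = 20_000, 10.0  # number of histories; mission time (years)

        # Assumed illustrative parameters
        wear_rate = rng.normal(1.0, 0.2, n_sim).clip(min=0.0)  # wear per year
        soft_threshold = 30.0          # total-damage (soft) failure threshold
        hard_limit = 6.0               # a single shock this large is fatal
        shock_rate = 0.8               # shock arrivals per year (Poisson)

        survived = np.zeros(n_sim, dtype=bool)
        for i in range(n_sim):
            n_shocks = rng.poisson(shock_rate * horizon)
            shocks = rng.gamma(2.0, 1.0, n_shocks)   # shock damage sizes
            total = wear_rate[i] * horizon + shocks.sum()
            survived[i] = total < soft_threshold and (shocks < hard_limit).all()

        print(f"estimated reliability at t = {horizon:.0f} y: {survived.mean():.3f}")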

  15. On Bayesian Principal Component Analysis

    Czech Academy of Sciences Publication Activity Database

    Šmídl, Václav; Quinn, A.

    2007-01-01

    Roč. 51, č. 9 (2007), s. 4101-4123 ISSN 0167-9473 R&D Projects: GA MŠk(CZ) 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords: Principal component analysis (PCA) * Variational Bayes (VB) * von-Mises–Fisher distribution Subject RIV: BC - Control Systems Theory Impact factor: 1.029, year: 2007 http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V8V-4MYD60N-6&_user=10&_coverDate=05%2F15%2F2007&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=b8ea629d48df926fe18f9e5724c9003a

  16. Study of Seasonal Variation in Groundwater Quality of Sagar City (India by Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Hemant Pathak

    2011-01-01

    Groundwater is one of the major sources of drinking water in Sagar city (India). In this study, 15 sampling stations were selected for investigations of 14 chemical parameters. The work was carried out during different months of the pre-monsoon, monsoon and post-monsoon seasons from June 2009 to June 2010. Multivariate statistical techniques such as principal component analysis and cluster analysis were applied to the datasets to investigate seasonal variations in groundwater quality. Principal axis factoring was used to observe the mode of association of the parameters and their interrelationships, for evaluating water quality. The average values of BOD, COD, ammonia and iron were high during the entire study period. Elevated values of BOD and ammonia in the monsoon, slightly higher BOD in the post-monsoon, and elevated BOD, ammonia and iron in the pre-monsoon period reflected a temporal effect on the groundwater. Results of the principal component analysis showed that all the parameters contribute equally and significantly to the groundwater quality variations. Factor 1 and factor 2 revealed that the DO value deteriorates due to organic load (BOD/ammonia) in the different seasons. Hierarchical cluster analysis grouped the 15 stations into four clusters in the monsoon, five clusters in the post-monsoon and five clusters in the pre-monsoon with similar water quality features. In each season, one clustered station exhibited significant spatial variation in physicochemical composition, attributable to anthropogenic nitrogenous species released as fallout from modernization activities. The study indicated that the groundwater is sufficiently well oxygenated and nutrient-rich at the study sites.
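
    The season-wise station grouping can be imitated with a generic hierarchical cluster analysis (scipy's Ward linkage on synthetic standardized water-quality data; the real study used 14 measured parameters at 15 stations):

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage

        rng = np.random.default_rng(7)
        # Synthetic standardized data: 15 stations x 14 water-quality parameters
        stations = rng.standard_normal((15, 14))
        stations[5:9] += 2.0  # four stations with a distinct pollution signature

        Z = linkage(stations, method="ward")             # agglomerative clustering
        labels = fcluster(Z, t=4, criterion="maxclust")  # cut the tree into 4 groups
        for k in sorted(set(labels)):
            members = np.where(labels == k)[0] + 1       # 1-based station numbers
            print(f"cluster {k}: stations {list(members)}")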

  17. Response spectrum analysis of coupled structural response to a three component seismic disturbance

    International Nuclear Information System (INIS)

    Boulet, J.A.M.; Carley, T.G.

    1977-01-01

    The work discussed herein is a comparison and evaluation of several response spectrum analysis (RSA) techniques as applied to the same structural model with seismic excitation having three spatial components. The structural model includes five lumped masses (floors) connected by four elastic members. The base is supported by three translational springs and two horizontal torsional springs. In general, the mass center and shear center of a building floor are distinct locations. Hence, inertia forces, which act at the mass center, induce twisting in the structure. Through this induced torsion, the lateral (x and y) displacements of the mass elements are coupled. The ground motion components used for this study are artificial earthquake records generated from recorded accelerograms by a spectrum modification technique. The accelerograms have response spectra which are compatible with U.S. NRC Regulatory Guide 1.60. Lagrange's equations of motion for the system were written in matrix form and uncoupled with the modal matrix. Numerical integration (fourth-order Runge-Kutta) of the resulting modal equations produced time histories of system displacements in response to simultaneous application of three orthogonal components of ground motion, and displacement response spectra for each modal coordinate in response to each of the three ground motion components. Five different RSA techniques were used to combine the spectral displacements and the modal matrix to give approximations of the maximum system displacements. These approximations were then compared with the maximum system displacements taken from the time histories. The RSA techniques used are the method of absolute sums, the square root of the sum of the squares, the double sum approach, the method of closely spaced modes, and Lin's method.
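
    Two of the simplest combination rules in such comparisons, the absolute sum and the square root of the sum of the squares (SRSS), can be sketched with assumed peak modal responses (illustrative numbers, not the paper's five-mass model):

        import numpy as np

        # Assumed peak modal responses of one displacement DOF (mm), taken
        # from a response spectrum for each of five modes
        peak_modal = np.array([12.0, 6.5, 2.1, 0.9, 0.4])

        abs_sum = np.sum(np.abs(peak_modal))     # absolute sum: an upper bound
        srss = np.sqrt(np.sum(peak_modal ** 2))  # SRSS: assumes well-separated modes

        print(f"ABS  estimate: {abs_sum:.1f} mm")
        print(f"SRSS estimate: {srss:.1f} mm")

        # A double-sum rule with modal cross-correlation coefficients refines
        # SRSS for closely spaced modes; with zero correlation it reduces to SRSS.
        rho = np.eye(len(peak_modal))            # assumed: no modal correlation
        double_sum = np.sqrt(peak_modal @ rho @ peak_modal)
        assert np.isclose(double_sum, srss)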

  18. Biclustered Independent Component Analysis for Complex Biomarker and Subtype Identification from Structural Magnetic Resonance Images in Schizophrenia

    Directory of Open Access Journals (Sweden)

    Cota Navin Gupta

    2017-09-01

    Clinical and cognitive symptom-domain-based subtyping in schizophrenia (Sz) has been critiqued due to the lack of neurobiological correlates and the heterogeneity in symptom scores. We therefore present a novel data-driven framework using biclustered independent component analysis to detect subtypes from the reliable and stable gray matter concentration (GMC) of patients with Sz. The developed methodology consists of the following steps: source-based morphometry (SBM) decomposition, selection and sorting of two component loadings, and subtype component reconstruction using group information-guided ICA (GIG-ICA). This framework was applied to the top two group-discriminative components, namely the insula/superior temporal gyrus/inferior frontal gyrus (I-STG-IFG) component and the superior frontal gyrus/middle frontal gyrus/medial frontal gyrus (SFG-MiFG-MFG) component, from our previous SBM study, which showed a diagnostic group difference and had the highest effect sizes. The aggregated multisite dataset consisted of 382 patients with Sz, with age, gender, and site regressed out voxelwise. We observed two subtypes (i.e., two different subsets of subjects), each heavily weighted on one of these two components, respectively. These subsets of subjects were characterized by significant differences in positive and negative syndrome scale (PANSS) positive clinical symptoms (p = 0.005). We also observed an overlapping subtype weighing heavily on both of these components. The PANSS general clinical symptom score of this subtype was correlated at trend level with the loading coefficients of the SFG-MiFG-MFG component (r = 0.25; p = 0.07). The reconstructed subtype-specific components using GIG-ICA showed variations in voxel regions when compared to the group components. We observed deviations from mean GMC along with a conjunction of features from the two components characterizing each deciphered subtype. These inherent variations in GMC among patients with Sz could possibly indicate the...

  19. [Sample preparation methods for chromatographic analysis of organic components in atmospheric particulate matter].

    Science.gov (United States)

    Hao, Liang; Wu, Dapeng; Guan, Yafeng

    2014-09-01

    The determination of the organic composition of atmospheric particulate matter (PM) is of great importance in understanding how PM affects human health, the environment, climate, and ecosystems. Organic components are also the scientific basis for emission source tracking, PM regulation and risk management. Therefore, the molecular characterization of the organic fraction of PM has become one of the priority research issues in the field of environmental analysis. Due to the extreme complexity of PM samples, chromatographic methods have been the methods of choice. The common procedure for the analysis of organic components in PM includes several steps: sample collection on fiber filters, sample preparation (transforming the sample into a form suitable for chromatographic analysis), and analysis by chromatographic methods. Among these steps, the sample preparation methods largely determine the throughput and the data quality. Solvent extraction methods followed by sample pretreatment (e.g. pre-separation, derivatization, pre-concentration) have long been used for PM sample analysis, while thermal desorption methods have mainly focused on the non-polar organic components in PM. In this paper, the sample preparation methods used prior to chromatographic analysis of organic components in PM are reviewed comprehensively, and the corresponding merits and limitations of each method are briefly discussed.

  20. Animal research in the Journal of Applied Behavior Analysis.

    Science.gov (United States)

    Edwards, Timothy L; Poling, Alan

    2011-01-01

    This review summarizes the 6 studies with nonhuman animal subjects that have appeared in the Journal of Applied Behavior Analysis and offers suggestions for future research in this area. Two of the reviewed articles described translational research in which pigeons were used to illustrate and examine behavioral phenomena of applied significance (say-do correspondence and fluency), 3 described interventions that changed animals' behavior (self-injury by a baboon, feces throwing and spitting by a chimpanzee, and unsafe trailer entry by horses) in ways that benefited the animals and the people in charge of them, and 1 described the use of trained rats that performed a service to humans (land-mine detection). We suggest that each of these general research areas merits further attention and that the Journal of Applied Behavior Analysis is an appropriate outlet for some of these publications.

  1. Synchrotron-Based Microspectroscopic Analysis of Molecular and Biopolymer Structures Using Multivariate Techniques and Advanced Multi-Components Modeling

    International Nuclear Information System (INIS)

    Yu, P.

    2008-01-01

    More recently, an advanced synchrotron radiation-based bioanalytical technique (SRFTIRM) has been applied as a novel non-invasive analysis tool to study molecular and functional group chemistry, biopolymer chemistry, nutrient make-up and structural conformation in biomaterials. This novel synchrotron technique, taking advantage of bright synchrotron light (which is a million times brighter than sunlight), is capable of exploring biomaterials at the molecular and cellular levels. However, with the synchrotron SRFTIRM technique, a large number of molecular spectral data are usually collected. The objective of this article was to illustrate how to use two multivariate statistical techniques: (1) agglomerative hierarchical cluster analysis (AHCA) and (2) principal component analysis (PCA), and two advanced multi-component modeling methods: (1) Gaussian and (2) Lorentzian multi-component peak modeling, for molecular spectrum analysis of bio-tissues. The studies indicated that the two multivariate analyses (AHCA, PCA) are able to create molecular spectral classifications by including not just one intensity or frequency point of a molecular spectrum, but by utilizing the entire spectral information. Gaussian and Lorentzian modeling techniques are able to quantify component peaks of molecular structure, functional groups and biopolymers. By applying these four statistical methods, the multivariate techniques and Gaussian and Lorentzian modeling, inherent molecular structure, functional group and biopolymer conformation between and among biological samples can be quantified, discriminated and classified with great efficiency.
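
    As a minimal illustration of the two multivariate techniques named above, the following Python sketch runs PCA and then agglomerative hierarchical clustering on synthetic spectra; the spectra, peak positions and noise levels are invented stand-ins, not the SRFTIRM data.

      import numpy as np
      from scipy.cluster.hierarchy import fcluster, linkage
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      wavenumbers = np.linspace(900, 1800, 300)

      def peak(center, width=40.0):
          return np.exp(-((wavenumbers - center) ** 2) / (2 * width ** 2))

      # Two synthetic "tissue" groups with slightly different band intensities
      group_a = peak(1650) + 0.5 * peak(1240) + 0.02 * rng.standard_normal((10, 300))
      group_b = peak(1655) + 0.8 * peak(1240) + 0.02 * rng.standard_normal((10, 300))
      spectra = np.vstack([group_a, group_b])

      # PCA uses the whole spectral profile, not a single intensity or frequency point
      scores = PCA(n_components=2).fit_transform(spectra)

      # AHCA (Ward linkage) on the PCA scores recovers the two groups
      labels = fcluster(linkage(scores, method="ward"), t=2, criterion="maxclust")
      print(labels)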

  2. A simple component-connection method for building binary decision diagrams encoding a fault tree

    International Nuclear Information System (INIS)

    Way, Y.-S.; Hsia, D.-Y.

    2000-01-01

    A simple new method for building binary decision diagrams (BDDs) encoding a fault tree (FT) is provided in this study. We first decompose the FT into FT-components, each of which is a single-descendant (SD) gate-sequence. Following the node-connection rule, the BDD-component encoding each SD FT-component is itself an SD node-sequence. By successively connecting the BDD-components one by one, the BDD for the entire FT is obtained. During the node-connection and component-connection steps, reduction rules may need to be applied. An example FT is used throughout the article to explain the procedure step by step. The proposed method is thus a hybrid approach to FT analysis: some algorithms or techniques used in conventional FT analysis or in the newer BDD approach may be applied to our case, and the ideas presented in this article may in turn be of use to both of those methods.
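
    For readers unfamiliar with the BDD machinery, here is a generic hash-consed BDD sketch (reduction rule plus a Shannon-expansion apply operator), not the authors' component-connection algorithm; the three-event fault tree at the end is a made-up example.

      # Terminals are the Python booleans; internal nodes are (var, low, high) tuples.
      _unique = {}  # hash-consing table: guarantees reduced, shared subgraphs

      def mk(var, low, high):
          if low == high:                       # reduction rule: drop redundant tests
              return low
          return _unique.setdefault((var, low, high), (var, low, high))

      def top(u):
          return u[0] if isinstance(u, tuple) else float("inf")

      def apply_op(op, u, v, memo=None):
          memo = {} if memo is None else memo
          if isinstance(u, bool) and isinstance(v, bool):
              return op(u, v)
          if (u, v) in memo:
              return memo[(u, v)]
          i = min(top(u), top(v))               # branch on the lowest variable index
          u0, u1 = (u[1], u[2]) if top(u) == i else (u, u)
          v0, v1 = (v[1], v[2]) if top(v) == i else (v, v)
          r = mk(i, apply_op(op, u0, v0, memo), apply_op(op, u1, v1, memo))
          memo[(u, v)] = r
          return r

      AND = lambda a, b: a and b
      OR = lambda a, b: a or b

      x = [mk(i, False, True) for i in range(3)]          # basic events x0, x1, x2
      top_event = apply_op(AND, x[0], apply_op(OR, x[1], x[2]))
      print(top_event)   # (0, False, (1, (2, False, True), True))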

  3. Numerical analysis of magnetoelastic coupled buckling of fusion reactor components

    International Nuclear Information System (INIS)

    Demachi, K.; Yoshida, Y.; Miya, K.

    1994-01-01

    For a tokamak fusion reactor, establishing a structural design whose components can withstand the strong magnetic forces induced by plasma disruption is one of the most important subjects. A number of magnetostructural analyses of fusion reactor components have been performed recently. However, in these studies the structural behavior was calculated based on small-deformation theory, in which the nonlinearity is neglected. It is known, though, that some kinds of structures easily enter the geometrically nonlinear regime. In this paper, the deflection and the magnetoelastic buckling load of fusion reactor components during plasma disruption were calculated.

  4. A further component analysis for illicit drugs mixtures with THz-TDS

    Science.gov (United States)

    Xiong, Wei; Shen, Jingling; He, Ting; Pan, Rui

    2009-07-01

    A new method for the quantitative analysis of mixtures of illicit drugs with THz time-domain spectroscopy was proposed and verified experimentally. The traditional method requires the fingerprints of all the pure chemical components. In practice, only the objective components in a mixture and their absorption features are known, so a more practical technique for detection and identification is necessary and important. Our new method for the quantitative inspection of mixtures of illicit drugs is based on the derivative spectrum. In this method, the ratio of objective components in a mixture can be obtained on the assumption that all objective components in the mixture and their absorption features are known, while knowledge of the unknown components is not needed. Methamphetamine and flour, an illicit drug and a common adulterant, were then selected for our experiment. The experimental result verified the effectiveness of the method, suggesting that it could be an effective approach for the quantitative identification of illicit drugs. This THz spectroscopy technique is of great significance for real-world applications of quantitative illicit drug analysis, and could be an effective method in the fields of security and pharmaceutical inspection.
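
    A hedged sketch of the underlying idea: when the absorption features of the objective components are known, their proportions can be estimated by non-negative least squares on derivative spectra, with any unknown components left in the residual. The spectra below are synthetic stand-ins rather than measured THz data.

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(1)
      freqs = np.linspace(0.2, 2.6, 240)             # THz axis (illustrative)

      def band(center, width=0.08):
          return np.exp(-((freqs - center) ** 2) / (2 * width ** 2))

      drug = band(1.2) + 0.6 * band(1.8)             # stand-in for drug absorption features
      adulterant = 0.3 * band(0.9)                   # stand-in for a broad adulterant response
      mixture = 0.35 * drug + 0.65 * adulterant + 0.005 * rng.standard_normal(freqs.size)

      # First-derivative spectra suppress slowly varying baselines
      D = np.gradient(np.vstack([drug, adulterant]), freqs, axis=1).T   # (n_freqs, 2)
      d = np.gradient(mixture, freqs)

      coef, _ = nnls(D, d)
      print(coef / coef.sum())                       # estimated fractions, ~[0.35, 0.65]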

  5. Fuzzy cluster quantitative computations of component mass transfer in rocks or minerals

    International Nuclear Information System (INIS)

    Liu Dezheng

    2000-01-01

    The author advances a new method for the quantitative computation of component mass transfer, based on the closure property of the mass percentages of components in rocks or minerals. Using fuzzy dynamic cluster analysis, calculating the restored closure difference, determining the type of difference, and drawing on relevant diagnostic parameters, the method gradually screens out the true constant components. The true mass percentages and mass transfer quantities of the components of metamorphic rocks or minerals are then calculated by applying the true constant component fixed coefficient. This method is called the true constant component fixed method (TCF method).

  6. Prefrontal cortex and somatosensory cortex in tactile crossmodal association: an independent component analysis of ERP recordings.

    Directory of Open Access Journals (Sweden)

    Yixuan Ku

    2007-08-01

    Full Text Available Our previous studies on scalp-recorded event-related potentials (ERPs showed that somatosensory N140 evoked by a tactile vibration in working memory tasks was enhanced when human subjects expected a coming visual stimulus that had been paired with the tactile stimulus. The results suggested that such enhancement represented the cortical activities involved in tactile-visual crossmodal association. In the present study, we further hypothesized that the enhancement represented the neural activities in somatosensory and frontal cortices in the crossmodal association. By applying independent component analysis (ICA to the ERP data, we found independent components (ICs located in the medial prefrontal cortex (around the anterior cingulate cortex, ACC and the primary somatosensory cortex (SI. The activity represented by the IC in SI cortex showed enhancement in expectation of the visual stimulus. Such differential activity thus suggested the participation of SI cortex in the task-related crossmodal association. Further, the coherence analysis and the Granger causality spectral analysis of the ICs showed that SI cortex appeared to cooperate with ACC in attention and perception of the tactile stimulus in crossmodal association. The results of our study support with new evidence an important idea in cortical neurophysiology: higher cognitive operations develop from the modality-specific sensory cortices (in the present study, SI cortex that are involved in sensation and perception of various stimuli.
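
    A minimal FastICA sketch on simulated multi-channel data, analogous in spirit to decomposing ERP recordings into independent components; the sources, channel count and mixing matrix are all invented for illustration.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(2)
      t = np.linspace(0.0, 1.0, 1000)

      # Two "cortical" sources: an N140-like transient and a slow oscillation
      s1 = -np.exp(-((t - 0.14) ** 2) / (2 * 0.02 ** 2))    # sharp negativity near 140 ms
      s2 = np.sin(2 * np.pi * 3 * t)
      S = np.c_[s1, s2] + 0.05 * rng.standard_normal((1000, 2))

      A = rng.random((8, 2))            # mixing into 8 scalp channels
      X = S @ A.T                       # observed channel data, shape (1000, 8)

      ica = FastICA(n_components=2, random_state=0)
      S_est = ica.fit_transform(X)      # recovered source time courses
      print(ica.mixing_.shape)          # (8, 2): per-channel weights ("topographies")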

  7. Protein structure similarity from principle component correlation analysis

    Directory of Open Access Journals (Sweden)

    Chou James

    2006-01-01

    Full Text Available Abstract Background Owing to the rapid expansion of protein structure databases in recent years, methods of structure comparison are becoming increasingly effective and important in revealing novel information on the functional properties of proteins and their roles in the grand scheme of evolutionary biology. Currently, the structural similarity between two proteins is measured by the root-mean-square deviation (RMSD) of their best-superimposed atomic coordinates. RMSD is the golden rule for measuring structural similarity when the structures are nearly identical; it, however, fails to detect higher-order topological similarities in proteins that have evolved into different shapes. We propose new algorithms for extracting geometrical invariants of proteins that can be effectively used to identify homologous protein structures or topologies in order to quantify both close and remote structural similarities. Results We measure structural similarity between proteins by correlating the principle components of their secondary structure interaction matrices. In our approach, termed Principle Component Correlation (PCC) analysis, a symmetric interaction matrix for a protein structure is constructed from relationship parameters between secondary elements that can take the form of distance, orientation, or other relevant structural invariants. When using a distance-based construction, in the presence or absence of an encoded N-to-C terminal sense, there are strong correlations between the principle components of the interaction matrices of structurally or topologically similar proteins. Conclusion The PCC method is extensively tested for protein structures that belong to the same topological class but are significantly different by the RMSD measure. The PCC analysis can also differentiate proteins having similar shapes but different topological arrangements. Additionally, we demonstrate that when using two independently defined interaction matrices, comparison of their maximum...
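
    A rough sketch of the correlation idea under stated assumptions: build a symmetric distance matrix over secondary-structure elements for each protein, take the leading eigenvectors (principle components), and correlate them. The coordinates are random placeholders, and the construction omits the paper's details.

      import numpy as np

      def leading_components(coords, k=3):
          """Leading eigenvectors of the pairwise distance matrix between elements."""
          diff = coords[:, None, :] - coords[None, :, :]
          D = np.sqrt((diff ** 2).sum(-1))            # symmetric interaction matrix
          w, v = np.linalg.eigh(D)
          return v[:, np.argsort(-np.abs(w))[:k]]     # order by eigenvalue magnitude

      rng = np.random.default_rng(3)
      prot_a = rng.random((12, 3)) * 30.0             # 12 secondary elements (placeholder)
      prot_b = prot_a + 0.5 * rng.standard_normal((12, 3))   # mildly perturbed homolog

      pa, pb = leading_components(prot_a), leading_components(prot_b)
      # Correlate matched components; eigenvector sign is arbitrary, hence abs()
      print([abs(np.corrcoef(pa[:, i], pb[:, i])[0, 1]) for i in range(3)])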

  8. Global sensitivity analysis of bogie dynamics with respect to suspension components

    Energy Technology Data Exchange (ETDEWEB)

    Mousavi Bideleh, Seyed Milad, E-mail: milad.mousavi@chalmers.se; Berbyuk, Viktor, E-mail: viktor.berbyuk@chalmers.se [Chalmers University of Technology, Department of Applied Mechanics (Sweden)

    2016-06-15

    The effects of bogie primary and secondary suspension stiffness and damping components on the dynamic behavior of a high speed train are scrutinized based on the multiplicative dimensional reduction method (M-DRM). A one-car railway vehicle model is chosen for the analysis at two levels of the bogie suspension system: symmetric and asymmetric configurations. Several operational scenarios including straight and circular curved tracks are considered, and measurement data are used as the track irregularities in the different directions. Ride comfort, safety, and wear objective functions are specified to evaluate the vehicle’s dynamic performance in the prescribed operational scenarios. In order to have an appropriate cut center for the sensitivity analysis, a genetic algorithm optimization routine is employed to optimize the primary and secondary suspension components in terms of wear and comfort, respectively. Global sensitivity indices are introduced, and Gaussian quadrature integrals are employed to evaluate the simplified sensitivity indices correlated to the objective functions. In each scenario, the most influential suspension components for bogie dynamics are identified and a thorough analysis of the results is given. The outcomes of the current research provide informative data that can be beneficial in the design and optimization of passive and active suspension components for high speed train bogies.

  10. Bayesian statistics applied to neutron activation data for reactor flux spectrum analysis

    International Nuclear Information System (INIS)

    Chiesa, Davide; Previtali, Ezio; Sisti, Monica

    2014-01-01

    Highlights: • Bayesian statistics to analyze the neutron flux spectrum from activation data. • Rigorous statistical approach for accurate evaluation of the neutron flux groups. • Cross section and activation data uncertainties included for the problem solution. • Flexible methodology applied to analyze different nuclear reactor flux spectra. • The results are in good agreement with the MCNP simulations of neutron fluxes. - Abstract: In this paper, we present a statistical method, based on Bayesian statistics, to analyze the neutron flux spectrum from the activation data of different isotopes. The experimental data were acquired during a neutron activation experiment performed at the TRIGA Mark II reactor of Pavia University (Italy) in four irradiation positions characterized by different neutron spectra. In order to evaluate the neutron flux spectrum, subdivided into energy groups, a system of linear equations containing the group effective cross sections and the activation rate data has to be solved. However, since the system’s coefficients are experimental data affected by uncertainties, a rigorous statistical approach is fundamental for an accurate evaluation of the neutron flux groups. For this purpose, we applied Bayesian statistical analysis, which makes it possible to include the uncertainties of the coefficients and the a priori information about the neutron flux. A program for the analysis of Bayesian hierarchical models, based on Markov Chain Monte Carlo (MCMC) simulations, was used to define the statistical model of the problem and solve it. The first analysis involved the determination of the thermal, resonance-intermediate and fast flux components, and the dependence of the results on the choice of prior distribution was investigated to confirm the reliability of the Bayesian analysis. After that, the main resonances of the activation cross sections were analyzed to implement multi-group models with finer energy subdivisions that would allow the neutron flux spectrum to be determined with higher energy resolution.
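
    To make the setup concrete, here is a toy random-walk Metropolis sampler for a two-group flux from activation-rate equations R = sigma · phi, with a Gaussian likelihood on the measured rates and a flat positivity prior; the cross sections, rates and uncertainties are invented numbers, and the actual study used a hierarchical MCMC model with a full uncertainty treatment.

      import numpy as np

      rng = np.random.default_rng(4)

      # Toy problem: 3 activated isotopes, 2 flux groups (thermal, fast); R = sigma @ phi
      sigma = np.array([[5.0, 0.1],
                        [2.0, 0.5],
                        [0.2, 1.5]])                 # group effective cross sections
      phi_true = np.array([3.0, 1.0])
      R_obs = sigma @ phi_true * (1 + 0.02 * rng.standard_normal(3))
      R_err = 0.02 * sigma @ phi_true                # assumed rate uncertainties

      def log_post(phi):
          if np.any(phi <= 0):                       # flat prior restricted to phi > 0
              return -np.inf
          resid = (R_obs - sigma @ phi) / R_err
          return -0.5 * np.sum(resid ** 2)

      phi = np.array([1.0, 1.0])
      lp = log_post(phi)
      samples = []
      for _ in range(20000):                         # random-walk Metropolis
          prop = phi + 0.05 * rng.standard_normal(2)
          lp_prop = log_post(prop)
          if np.log(rng.random()) < lp_prop - lp:
              phi, lp = prop, lp_prop
          samples.append(phi)
      post = np.array(samples[5000:])                # discard burn-in
      print(post.mean(axis=0), post.std(axis=0))     # posterior flux groups near [3, 1]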

  11. Evaluation of chemical transport model predictions of primary organic aerosol for air masses classified by particle-component-based factor analysis

    OpenAIRE

    C. A. Stroud; M. D. Moran; P. A. Makar; S. Gong; W. Gong; J. Zhang; J. G. Slowik; J. P. D. Abbatt; G. Lu; J. R. Brook; C. Mihele; Q. Li; D. Sills; K. B. Strawbridge; M. L. McGuire

    2012-01-01

    Observations from the 2007 Border Air Quality and Meteorology Study (BAQS-Met 2007) in Southern Ontario, Canada, were used to evaluate predictions of primary organic aerosol (POA) and two other carbonaceous species, black carbon (BC) and carbon monoxide (CO), made for this summertime period by Environment Canada's AURAMS regional chemical transport model. Particle component-based factor analysis was applied to aerosol mass spectrometer measurements made at one urban site (Windsor, ON) and two...

  12. Rehabilitation of the central executive component of working memory: a re-organisation approach applied to a single case.

    Science.gov (United States)

    Duval, J; Coyette, F; Seron, X

    2008-08-01

    This paper describes and evaluates a programme of neuropsychological rehabilitation which aims to improve three sub-components of the working memory central executive: processing load, updating and dual-task monitoring, through the acquisition of three re-organisation strategies (double coding, serial processing and speed reduction). Our programme has two stages: a cognitive rehabilitation (graduated exercises subdivided into three sub-programmes, each corresponding to a sub-component) which enables the patient to acquire the three specific strategies; and an ecological rehabilitation, including analyses of scenarios and simulations of real-life situations, which aims to transfer the strategies learned to everyday life. The programme also includes information meetings. It was applied to a single case with working memory deficits following a surgical operation for a ganglioglioma in his left internal temporal lobe. Multiple baseline tests were used to measure the effectiveness of the rehabilitation. The programme proved to be effective for all three working memory components; a generalisation of its effects to everyday life was observed, and the effects were undiminished three months later.

  13. Priority of VHS Development Based in Potential Area using Principal Component Analysis

    Science.gov (United States)

    Meirawan, D.; Ana, A.; Saripudin, S.

    2018-02-01

    The current condition of VHS is still inadequate in quality, quantity and relevance. The purpose of this research is to analyse the development of VHS based on the development of regional potential by using principal component analysis (PCA) in Bandung, Indonesia. This study used descriptive qualitative data analysis based on principal component reduction of secondary data. The method used is PCA, carried out with the Minitab statistics software. The results of this study indicate that the areas with the lowest scores are the priorities for the construction and development of VHS, with programs of majors matched to the development of regional potential. Based on the PCA scores, the main priorities for the development of VHS in Bandung are Saguling, which has the lowest PCA value of 416.92 in area 1, Cihampelas, with the lowest PCA value in region 2, and Padalarang, with the lowest PCA value in its region.

  14. Macrophage biospecific extraction and HPLC-ESI-MSn analysis for screening immunological active components in Smilacis Glabrae Rhizoma.

    Science.gov (United States)

    Zheng, Zhao-Guang; Duan, Ting-Ting; He, Bao; Tang, Dan; Jia, Xiao-Bin; Wang, Ru-Shang; Zhu, Jia-Xiao; Xu, You-Hua; Zhu, Quan; Feng, Liang

    2013-04-15

    A cell-permeable membrane approach, as typified by Transwell insert permeable supports, which permit accurate and repeatable invasion assays, has been developed as a tool for screening immunologically active components in Smilacis Glabrae Rhizoma (SGR). In this research, components in the water extract of SGR (ESGR) that might conjugate with receptors or other targets on macrophages were allowed to bind to macrophages that had invaded the Transwell inserts, and the eluate containing the components biospecifically bound to macrophages was then identified by HPLC-ESI-MS(n) analysis. Six compounds that could interact with macrophages were detected and identified. Among these compounds, taxifolin (2) and astilbin (4) were identified by comparison with the chromatography of standards, while the other four, including 5-O-caffeoylshikimic acid (1), neoastilbin (3), neoisoastilbin (5) and isoastilbin (6), were elucidated from their structural cleavage characteristics in tandem mass spectrometry. Compound 1 was then isolated and purified from SGR and, along with 2 and 4, was applied to the macrophage migration and adhesion assay in a HUVEC (human umbilical vein endothelial cell)-macrophage co-cultured Transwell system for immunological activity assessment. The results showed that compounds 1, 2 and 4 at concentrations of 5 μM (H), 500 nM (M) and 50 nM (L) could remarkably inhibit macrophage migration and adhesion (vs the AGEs (Advanced Glycation End Products) group; 1-L, 2-H and 4-L groups: p < 0.05; other groups: p < 0.01). These results indicated that macrophage biospecific extraction coupled with HPLC-ESI-MS(n) analysis is a rapid, simple and reliable method for screening immunologically active components from Traditional Chinese Medicine. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Infrared and visible image fusion based on robust principal component analysis and compressed sensing

    Science.gov (United States)

    Li, Jun; Song, Minghui; Peng, Yuanxi

    2018-03-01

    Current infrared and visible image fusion methods do not achieve adequate information extraction; i.e., they cannot extract the target information from infrared images while retaining the background information from visible images. Moreover, most of them have high complexity and are time-consuming. This paper proposes an efficient image fusion framework for infrared and visible images on the basis of robust principal component analysis (RPCA) and compressed sensing (CS). The novel framework consists of three phases. First, RPCA decomposition is applied to the infrared and visible images to obtain their sparse and low-rank components, which represent the salient features and background information of the images, respectively. Second, the sparse and low-rank coefficients are fused by different strategies. On the one hand, the measurements of the sparse coefficients are obtained by a random Gaussian matrix and are then fused by a standard deviation (SD) based fusion rule; the fused sparse component is then obtained by reconstructing the result of the fused measurements using the fast continuous linearized augmented Lagrangian algorithm (FCLALM). On the other hand, the low-rank coefficients are fused using the max-absolute rule. Subsequently, the fused image is obtained by superposing the fused sparse and low-rank components. For comparison, several popular fusion algorithms are tested experimentally. By comparing the fused results subjectively and objectively, we find that the proposed framework can extract the infrared targets while retaining the background information in the visible images. Thus, it exhibits state-of-the-art performance in terms of both fusion effects and timeliness.
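
    The paper's exact algorithm is not reproduced here, but the RPCA decomposition step can be sketched with a standard inexact-ALM principal component pursuit, which splits a matrix into the low-rank and sparse components described above.

      import numpy as np

      def rpca_pcp(M, lam=None, mu=None, tol=1e-7, max_iter=500):
          """Principal component pursuit via inexact ALM: M = L (low-rank) + S (sparse)."""
          m, n = M.shape
          lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
          mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()
          Y = np.zeros_like(M); S = np.zeros_like(M); L = np.zeros_like(M)
          for _ in range(max_iter):
              # Singular value thresholding gives the low-rank update
              U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
              L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
              # Elementwise soft thresholding gives the sparse update
              R = M - L + Y / mu
              S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
              Y += mu * (M - L - S)
              if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
                  break
          return L, S

      rng = np.random.default_rng(5)
      truth = rng.standard_normal((60, 1)) @ rng.standard_normal((1, 40))   # rank 1
      spikes = (rng.random((60, 40)) < 0.05) * 5.0                          # sparse "targets"
      L, S = rpca_pcp(truth + spikes)
      print(np.linalg.matrix_rank(L, tol=1e-3), int((np.abs(S) > 1e-3).sum()))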

  16. Statistical Techniques Applied to Aerial Radiometric Surveys (STAARS): cluster analysis. National Uranium Resource Evaluation

    International Nuclear Information System (INIS)

    Pirkle, F.L.; Stablein, N.K.; Howell, J.A.; Wecksung, G.W.; Duran, B.S.

    1982-11-01

    One objective of the aerial radiometric surveys flown as part of the US Department of Energy's National Uranium Resource Evaluation (NURE) program was to ascertain the regional distribution of near-surface radioelement abundances. Some method for identifying groups of observations with similar radioelement values was therefore required. It is shown in this report that cluster analysis can identify such groups even when no a priori knowledge of the geology of an area exists. A method of convergent k-means cluster analysis coupled with a hierarchical cluster analysis is used to classify 6991 observations (three radiometric variables at each observation location) from the Precambrian rocks of the Copper Mountain, Wyoming, area. Another method, one that combines a principal components analysis with a convergent k-means analysis, is applied to the same data. These two methods are compared with a convergent k-means analysis that utilizes available geologic knowledge. All three methods identify four clusters. Three of the clusters represent background values for the Precambrian rocks of the area, and one represents outliers (anomalously high ²¹⁴Bi). A segmentation of the data corresponding to geologic reality as discovered by other methods has been achieved based solely on analysis of aerial radiometric data. The techniques employed are composites of classical clustering methods designed to handle the special problems presented by large data sets. 20 figures, 7 tables
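
    A compact sketch of the composite approach (principal components followed by k-means), assuming a matrix with three radiometric variables per observation; the synthetic populations below merely mimic the structure of the 6991-observation survey data.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(6)
      # Three synthetic "background" populations plus a small outlier population,
      # 6991 observations of three radiometric variables in total
      background = np.vstack([
          rng.normal(loc, 0.3, size=(2300, 3))
          for loc in ([1.0, 1.0, 1.0], [2.0, 1.5, 1.0], [1.5, 2.0, 2.0])
      ])
      outliers = rng.normal([1.5, 1.5, 6.0], 0.3, size=(91, 3))   # anomalously high channel 3
      X = np.vstack([background, outliers])

      # Principal components first, then convergent k-means on the scores
      scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))
      labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
      print(np.bincount(labels))     # three large background clusters + one small cluster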

  17. MULTIVARIATE TECHNIQUES APPLIED TO EVALUATION OF LIGNOCELLULOSIC RESIDUES FOR BIOENERGY PRODUCTION

    Directory of Open Access Journals (Sweden)

    Thiago de Paula Protásio

    2013-12-01

    Full Text Available http://dx.doi.org/10.5902/1980509812361 The evaluation of lignocellulosic wastes for bioenergy production demands the consideration of several characteristics and properties that may be correlated. This calls for the use of various multivariate analysis techniques that allow the evaluation of the relevant energetic factors. This work aimed to apply cluster analysis and principal components analysis to the selection and evaluation of lignocellulosic wastes for bioenergy production. Eight types of residual biomass were used, for which the elemental composition (C, H, O, N, S), lignin, total extractives and ash contents, basic density, and higher and lower heating values were determined. Both multivariate techniques applied for the evaluation and selection of lignocellulosic wastes were efficient, and similarities were observed between the biomass groups they formed. Through the interpretation of the first principal component obtained, it was possible to create a global development index for evaluating the viability of energetic uses of a biomass. The interpretation of the second principal component allowed a contrast of nitrogen and sulfur contents with oxygen content.

  18. On an efficient modification of singular value decomposition using independent component analysis for improved MRS denoising and quantification

    International Nuclear Information System (INIS)

    Stamatopoulos, V G; Karras, D A; Mertzios, B G

    2009-01-01

    An efficient modification of singular value decomposition (SVD) is proposed in this paper, aiming at denoising and, more importantly, at more accurately quantifying the statistically independent spectra of metabolite sources in magnetic resonance spectroscopy (MRS). Although SVD is known in MRS applications and several efficient algorithms exist for estimating the SVD summation terms in which the raw MRS data are analyzed, such an analysis would benefit from techniques with the ability to estimate statistically independent spectra. SVD is known to separate signal and noise subspaces, but it assumes orthogonality for the components comprising the signal subspace, which is not always the case and might impose heavy constraints in the MRS setting. A much more relaxed constraint is to assume statistically independent components. Therefore, a modification of the main methodology incorporating techniques for calculating the assumed statistically independent spectra is proposed, by applying SVD to the MRS spectrogram obtained through the short time Fourier transform (STFT). This approach is based on combining SVD of the STFT spectrogram with an iterative application of independent component analysis (ICA). Moreover, it is shown that the proposed methodology, combined with a regression analysis, leads to improved quantification of the MRS signals. An experimental study based on synthetic MRS signals has been conducted to evaluate the proposed methodologies. The results obtained have been discussed and are shown to be quite promising.
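
    A schematic of the pipeline on a synthetic decaying-resonance signal: STFT spectrogram, SVD to isolate the dominant subspace, then ICA to relax orthogonality toward statistical independence. Window lengths and component counts are arbitrary, and the iterative ICA refinement and regression-based quantification are omitted.

      import numpy as np
      from scipy.signal import stft
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(7)
      fs = 1000.0
      t = np.arange(2048) / fs

      # Synthetic FID-like signal: two decaying "metabolite" resonances plus noise
      sig = (np.cos(2 * np.pi * 120 * t) * np.exp(-t / 0.3)
             + np.cos(2 * np.pi * 260 * t) * np.exp(-t / 0.15)
             + 0.2 * rng.standard_normal(t.size))

      # STFT spectrogram, then SVD to separate signal and noise subspaces
      f, tau, Z = stft(sig, fs=fs, nperseg=256)
      U, s, Vt = np.linalg.svd(np.abs(Z), full_matrices=False)
      signal_sub = U[:, :4] * s[:4]              # dominant (signal) subspace only

      # ICA relaxes SVD's orthogonality assumption toward statistical independence
      spectra = FastICA(n_components=2, random_state=0).fit_transform(signal_sub)
      print(f[np.argmax(np.abs(spectra), axis=0)])   # peaks should fall near 120 and 260 Hz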

  19. A method for independent component graph analysis of resting-state fMRI

    DEFF Research Database (Denmark)

    de Paula, Demetrius Ribeiro; Ziegler, Erik; Abeyasinghe, Pubuditha M.

    2017-01-01

    Introduction Independent component analysis (ICA) has been extensively used for reducing task-free BOLD fMRI recordings into spatial maps and their associated time-courses. The spatially identified independent components can be considered as intrinsic connectivity networks (ICNs) of non-contiguous regions. To date, the spatial patterns of the networks have been analyzed with techniques developed for volumetric data. Objective Here, we detail a graph building technique that allows these ICNs to be analyzed with graph theory. Methods First, ICA was performed at the single-subject level in 15 healthy... parcellated regions. Third, between-node functional connectivity was established by building edge weights for each network. Group-level graph analysis was finally performed for each network and compared to the classical network. Results Network graph comparison between the classically constructed network...

  20. Applied Behavior Analysis: Beyond Discrete Trial Teaching

    Science.gov (United States)

    Steege, Mark W.; Mace, F. Charles; Perry, Lora; Longenecker, Harold

    2007-01-01

    We discuss the problem of autism-specific special education programs representing themselves as Applied Behavior Analysis (ABA) programs when the only ABA intervention employed is Discrete Trial Teaching (DTT), and often for limited portions of the school day. Although DTT has many advantages to recommend its use, it is not well suited to teach…

  1. Summary of component reliability data for probabilistic safety analysis of Korean standard nuclear power plant

    International Nuclear Information System (INIS)

    Choi, S. Y.; Han, S. H.

    2004-01-01

    Component reliability data that reflect the plant-specific characteristics of Korean NPPs are necessary for the PSA of Korean nuclear power plants. We have performed a study to develop a component reliability database and software for component reliability analysis. Based on this system, we collected component operation data and failure/repair data over the period from the start of plant operation to 1998 and 2000 for YGN 3,4 and UCN 3,4, respectively. Recently, we upgraded the database by collecting additional data up to 2002 for Korean standard nuclear power plants and performed the component reliability analysis and Bayesian analysis again. In this paper, we present a summary of the component reliability data for the probabilistic safety analysis of Korean standard nuclear power plants and describe the plant-specific characteristics compared to the generic data.

  2. DATE analysis: A general theory of biological change applied to microarray data.

    Science.gov (United States)

    Rasnick, David

    2009-01-01

    In contrast to conventional data mining, which searches for specific subsets of genes (extensive variables) to correlate with specific phenotypes, DATE analysis correlates intensive state variables calculated from the same datasets. At the heart of DATE analysis are two biological equations of state not dependent on genetic pathways. This result distinguishes DATE analysis from other bioinformatics approaches. The dimensionless state variable F quantifies the relative overall cellular activity of test cells compared to well-chosen reference cells. The variable π_i is the fold-change in the expression of the i-th gene of test cells relative to reference. It is the fraction φ of the genome undergoing differential expression, not the magnitude π, that controls biological change. The state variable φ is equivalent to the control strength of metabolic control analysis. For tractability, DATE analysis assumes a linear system of enzyme-connected networks and exploits the small average contribution of each cellular component. This approach was validated by reproducible values of the state variables F, RNA index, and φ calculated from random subsets of transcript microarray data. Using published microarray data, F, RNA index, and φ were correlated with: (1) the blood-feeding cycle of the malaria parasite, (2) embryonic development of the fruit fly, (3) temperature adaptation of killifish, (4) exponential growth of cultured S. pneumoniae, and (5) human cancers. DATE analysis was applied to aCGH data from the great apes. A good example of the power of DATE analysis is its application to genomically unstable cancers, which have been refractory to data mining strategies. © 2009 American Institute of Chemical Engineers.

  3. Multigroup Moderation Test in Generalized Structured Component Analysis

    Directory of Open Access Journals (Sweden)

    Angga Dwi Mulyanto

    2016-05-01

    Full Text Available Generalized Structured Component Analysis (GSCA) is an alternative method for structural modeling using alternating least squares. GSCA can be used for complex analyses, including multigroup analysis. GSCA can be run with a free software package called GeSCA, but GeSCA offers no multigroup moderation test for comparing effects between groups. In this research we propose using the t test from PLS for testing multigroup moderation in GSCA. The t test only requires the sample size, estimated path coefficient, and standard error of each group, all of which are already available in the GeSCA output, and the formula is simple, so the analysis requires little of the user's time.
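
    A small sketch of a parametric between-group test of the kind proposed, using the pooled-standard-error t statistic commonly cited for PLS multigroup comparison (assumed here; the authors may use a variant); it takes only each group's sample size, estimated path coefficient and standard error, as available from GeSCA output. The numbers are illustrative.

      import math
      from scipy import stats

      def multigroup_t(path1, se1, n1, path2, se2, n2):
          """Pooled-SE parametric test for one path coefficient across two groups."""
          pooled = math.sqrt(
              ((n1 - 1) ** 2 / (n1 + n2 - 2)) * se1 ** 2
              + ((n2 - 1) ** 2 / (n1 + n2 - 2)) * se2 ** 2
          ) * math.sqrt(1.0 / n1 + 1.0 / n2)
          t = (path1 - path2) / pooled
          df = n1 + n2 - 2
          return t, 2 * stats.t.sf(abs(t), df)       # two-sided p-value

      # Quantities as they would be read off GeSCA output (illustrative numbers only)
      t, p = multigroup_t(0.42, 0.08, 150, 0.21, 0.10, 120)
      print(f"t = {t:.2f}, p = {p:.4f}")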

  4. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    Science.gov (United States)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

    In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for the reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated, using the perturbation method, the response surface method, the Edgeworth series and a sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability-analysis-based finite element modeling in engineering practice.

  5. [CoCuMnOx Photocatalyzed Oxidation of Multi-component VOCs and Kinetic Analysis].

    Science.gov (United States)

    Meng, Hai-long; Bo, Long-li; Liu, Jia-dong; Gao, Bo; Feng, Qi-qi; Tan, Na; Xie, Shuai

    2016-05-15

    A solar energy absorption coating, CoCuMnOx, was prepared by a co-precipitation method and applied to photodegrade multi-component VOCs, including toluene, ethyl acetate and acetone, under visible light irradiation. The photocatalytic oxidation performance for toluene, ethyl acetate and acetone was analyzed, and the reaction kinetics of the VOCs were investigated simultaneously. The research indicated that the removal rates of single-component toluene, ethyl acetate and acetone were 57%, 62% and 58%, respectively, under the conditions of 400 mg · m⁻³ initial concentration, 120 mm illumination distance, 1 g/350 cm² dosage of CoCuMnOx and 6 h of irradiation by a 100 W tungsten halogen lamp. Due to the competition among different VOCs, removal efficiencies in the three-component mixture were reduced by 5%-26% compared with the single VOCs. The degradation processes of the single-component VOC and the three-component VOCs both fitted pseudo-first-order reaction kinetics, and the kinetic constants of toluene, ethyl acetate and acetone were 0.002, 0.0028 and 0.00233 min⁻¹, respectively, under single-component conditions. The reaction rates of the VOCs in the three-component mixture were 0.49-0.88 times those of the single components.
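
    For the kinetics step, a pseudo-first-order model C(t) = C0 · exp(-kt) can be fitted directly to concentration-time data; the data points below are fabricated for illustration, not the measured VOC concentrations.

      import numpy as np
      from scipy.optimize import curve_fit

      def first_order(t, c0, k):
          return c0 * np.exp(-k * t)

      t_min = np.array([0, 60, 120, 180, 240, 300, 360], float)     # irradiation time (min)
      c = np.array([400, 355, 320, 283, 255, 228, 205], float)      # concentration (mg/m3)

      (c0, k), _ = curve_fit(first_order, t_min, c, p0=(400.0, 0.002))
      print(f"k = {k:.5f} 1/min")    # on the order of the reported 0.002 min^-1 constants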

  6. Positive Behavior Support and Applied Behavior Analysis

    Science.gov (United States)

    Johnston, J. M.; Foxx, R. M.; Jacobson, J. W.; Green, G.; Mulick, J. A.

    2006-01-01

    This article reviews the origins and characteristics of the positive behavior support (PBS) movement and examines those features in the context of the field of applied behavior analysis (ABA). We raise a number of concerns about PBS as an approach to delivery of behavioral services and its impact on how ABA is viewed by those in human services. We…

  7. Progressive-Ratio Schedules and Applied Behavior Analysis

    Science.gov (United States)

    Poling, Alan

    2010-01-01

    Establishing appropriate relations between the basic and applied areas of behavior analysis has been of long and persistent interest to the author. In this article, the author illustrates that there is a direct relation between how hard an organism will work for access to an object or activity, as indexed by the largest ratio completed under a…

  8. Key issues in decomposing fMRI during naturalistic and continuous music experience with independent component analysis.

    Science.gov (United States)

    Cong, Fengyu; Puoliväli, Tuomas; Alluri, Vinoo; Sipola, Tuomo; Burunat, Iballa; Toiviainen, Petri; Nandi, Asoke K; Brattico, Elvira; Ristaniemi, Tapani

    2014-02-15

    Independent component analysis (ICA) has often been used to decompose fMRI data, mostly for resting-state, block and event-related designs, due to its outstanding advantages. For fMRI data from free-listening experiences, only a few exploratory studies have applied ICA. For processing the fMRI data elicited by a 512-s modern tango, an FFT-based band-pass filter was used to further pre-process the fMRI data to remove sources of no interest and noise. Then, a fast model order selection method was applied to estimate the number of sources. Next, both individual ICA and group ICA were performed. Subsequently, ICA components whose temporal courses were significantly correlated with musical features were selected. Finally, for individual ICA, components common across the majority of participants were found by diffusion map and spectral clustering. The spatial maps extracted by the new ICA approach that were common across most participants evidenced slightly right-lateralized activity within and surrounding the auditory cortices, and were found to be associated with the musical features. Compared with the conventional ICA approach, more participants were found to share the common spatial maps extracted by the new ICA approach. Conventional model order selection methods underestimated the true number of sources in the conventionally pre-processed fMRI data for individual ICA. Pre-processing the fMRI data with a reasonable band-pass digital filter can greatly benefit the subsequent model order selection and ICA for fMRI data from naturalistic paradigms. Diffusion map and spectral clustering are straightforward tools for finding common ICA spatial maps. Copyright © 2013 Elsevier B.V. All rights reserved.
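
    A minimal version of the FFT-based band-pass pre-processing step, applied here to a single synthetic voxel time course; the sampling rate and cut-off frequencies are placeholders, since the paper's exact band is not restated here.

      import numpy as np

      def fft_bandpass(x, fs, f_lo, f_hi):
          """Keep only Fourier components within [f_lo, f_hi] Hz (real-valued input)."""
          spectrum = np.fft.rfft(x)
          freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
          spectrum[(freqs < f_lo) | (freqs > f_hi)] = 0.0
          return np.fft.irfft(spectrum, n=x.size)

      fs = 0.5                                     # e.g., one volume every 2 s
      voxel_ts = np.random.default_rng(8).standard_normal(256)
      filtered = fft_bandpass(voxel_ts, fs, f_lo=0.01, f_hi=0.1)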

  9. Different approaches in Partial Least Squares and Artificial Neural Network models applied for the analysis of a ternary mixture of Amlodipine, Valsartan and Hydrochlorothiazide

    Science.gov (United States)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2014-03-01

    Different chemometric models were applied for the quantitative analysis of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in a ternary mixture, namely Partial Least Squares (PLS) as a traditional chemometric model and Artificial Neural Networks (ANN) as an advanced model. PLS and ANN were applied with and without a variable selection procedure (Genetic Algorithm, GA) and a data compression procedure (Principal Component Analysis, PCA). The chemometric methods applied were PLS-1, GA-PLS, ANN, GA-ANN and PCA-ANN. The methods were used for the quantitative analysis of the drugs in raw materials and in a pharmaceutical dosage form by processing the UV spectral data. A 3-factor 5-level experimental design was established, resulting in 25 mixtures containing different ratios of the drugs. Fifteen mixtures were used as a calibration set and the other ten mixtures were used as a validation set to validate the prediction ability of the suggested methods. The validity of the proposed methods was assessed using the standard addition technique.
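
    A minimal PLS-1 sketch on simulated UV spectra of a ternary mixture, one model per analyte as described above; the spectra, concentration design and component count are invented, and the GA variable selection and ANN variants are not shown.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(9)
      wl = np.linspace(220, 350, 130)                        # wavelengths (nm)
      bands = np.array([np.exp(-((wl - c) ** 2) / 600.0)     # AML, VAL, HCT stand-ins
                        for c in (240.0, 270.0, 315.0)])

      C = rng.uniform(0.2, 1.0, size=(25, 3))                # 25 designed mixtures
      X = C @ bands + 0.01 * rng.standard_normal((25, wl.size))

      # PLS-1: a separate model per analyte, predicting concentration from the spectrum
      pls = PLSRegression(n_components=3).fit(X, C[:, 0])
      x_new = np.array([0.5, 0.3, 0.8]) @ bands              # a "validation" mixture spectrum
      print(pls.predict(x_new.reshape(1, -1)))               # ~0.5 for the first analyte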

  10. Radionuclide X-ray fluorescence analysis of components of the environment

    International Nuclear Information System (INIS)

    Toelgyessy, J.; Havranek, E.; Dejmkova, E.

    1983-12-01

    The physical foundations and methodology of radionuclide X-ray fluorescence analysis are described. The sources of air, water and soil pollution are listed, and the transfer of impurities into biological materials is described. A detailed description is presented of the sampling of air, soil and biological materials and their preparation for analysis. Greatest attention is devoted to the radionuclide X-ray fluorescence analysis of the components of the environment. (ES)

  11. Sparse principal component analysis in medical shape modeling

    Science.gov (United States)

    Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus

    2006-03-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
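
    scikit-learn ships a sparse PCA implementation that can be contrasted with simple thresholding of ordinary PCA loadings, along the lines discussed above; the data here are random placeholders for shape coordinates.

      import numpy as np
      from sklearn.decomposition import PCA, SparsePCA

      X = np.random.default_rng(10).standard_normal((50, 30))   # 50 shapes x 30 variables
      X -= X.mean(axis=0)

      dense = PCA(n_components=5).fit(X)
      # Naive alternative: zero out small loadings of ordinary PCA
      thresholded = np.where(np.abs(dense.components_) < 0.1, 0.0, dense.components_)

      sparse = SparsePCA(n_components=5, alpha=1.0, random_state=0).fit(X)
      # Sparse loadings: each mode involves only a subset of the original variables
      print((sparse.components_ != 0).sum(axis=1))               # nonzero loadings per mode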

  12. Shaking table test and dynamic response analysis of 3-D component base isolation system using multi-layer rubber bearings and coil springs

    Energy Technology Data Exchange (ETDEWEB)

    Tsutsumi, Hideaki; Yamada, Hiroyuki; Ebisawa, Katsumi; Shibata, Katsuyuki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Fujimoto, Shigeru [Toshiba Corp., Tokyo (Japan)

    2001-06-01

    Introduction of the base isolation technique into the seismic design of nuclear power plant components, as well as buildings, has been expected to be one of the effective countermeasures for reducing the seismic force applied to components. A research program on the base isolation of nuclear components has been carried out at the Japan Atomic Energy Research Institute (JAERI) since 1991. A methodology and a computer code (EBISA: Equipment Base Isolation System Analysis) for evaluating the failure frequency of nuclear components with base isolation were developed. In addition, a test program related to the above development, aimed at improving the failure frequency analysis models in the code, has been conducted since 1996 to investigate the dynamic behavior and to verify the effectiveness of component base isolation systems. Two base isolation test systems with different characteristics were fabricated, and their static and dynamic characteristics were measured by static loading and free vibration tests. One system, consisting of ball bearings and air springs, was installed on a test bed to observe the dynamic response under natural earthquake motion, and the effect of this base isolation system has been observed for several earthquakes. The three-dimensional response and base isolation effect of the other system, using multi-layer rubber bearings and coil springs, have been investigated under various large earthquake motions by shaking table tests. This report describes the results of the shaking table tests and the dynamic response analysis. (author)

  13. Fast principal component analysis for stacking seismic data

    Science.gov (United States)

    Wu, Juan; Bai, Min

    2018-04-01

    Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and enhance the principal components to a great extent. Traditional average-based seismic stacking methods cannot obtain optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data that is not sensitive to the noise level. Considering the computational bottleneck of the classic PCA algorithm in processing massive seismic data, we propose an efficient PCA algorithm to make the method readily applicable for industrial applications. Two numerically designed examples and one real seismic data set are used to demonstrate the performance of the presented method.
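
    One way to read a PCA stack: treat the gather as a matrix of traces and take its rank-1 SVD approximation, whose scaled left singular vector serves as the stacked trace. This sketch uses a synthetic gather and a plain SVD rather than the authors' fast algorithm.

      import numpy as np

      rng = np.random.default_rng(11)
      n_t, n_traces = 500, 40
      wavelet = np.zeros(n_t)
      wavelet[200:220] = np.hanning(20)              # one simple "reflection" event

      gather = (np.tile(wavelet[:, None], (1, n_traces))
                + 0.3 * rng.standard_normal((n_t, n_traces)))    # moderately noisy gather

      mean_stack = gather.mean(axis=1)

      # PCA stack: rank-1 SVD approximation, averaged over traces
      U, s, Vt = np.linalg.svd(gather, full_matrices=False)
      pca_stack = s[0] * U[:, 0] * Vt[0].mean()

      # Residual noise outside the event window is typically weaker in the PCA stack
      print(np.abs(mean_stack[:150]).mean(), np.abs(pca_stack[:150]).mean())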

  14. Applying multi-criteria analysis to radiation protection optimisation of low and intermediate level radioactive waste disposal

    International Nuclear Information System (INIS)

    Pages, P.; Schneider, T.; Lombard, J.

    1991-01-01

    Introduction of ALARA principles in the field of radioactive waste management implies a definition of the main characteristics of the decisional framework. Specific aspects should be taken into account: long term effects, large uncertainties and/or probabilistic events, with particular attention to the public and the political authorities. Traditional cost-benefit analysis is not qualified to deal with these different dimensions of the risk. The aim of this paper is to describe the principles of multi-criteria analysis applied to low and intermediate level radioactive waste disposal. Three categories of barriers can be distinguished acting at different protection levels: site characteristics, waste package and disposal system. A set of possible solutions can be identified, but the selection of the 'optimum' is not easy because of the diversity of the factors to be allowed for. For example, the following problem needs to be addressed: is it preferable to limit public radiation exposure several hundred years ahead or to reduce occupational exposure during the monitoring period of the disposal facility? An optimisation study is currently being performed on the various components of the structure, assuming given site and waste package characteristics. Four steps are distinguished: identification and analysis of options for the structure; selection and estimation of the qualitative and quantitative criteria; determination of the 'most interesting' solutions using multi-criteria analysis; sensitivity analysis and discussion on uncertainties related to the various assumptions. Based on the preliminary findings, the paper focuses on practical solutions to address the methodological issues raised in applying the optimisation procedures to radioactive waste management. (au)

  15. Failure trend analysis for safety related components of Korean standard NPPs

    International Nuclear Information System (INIS)

    Choi, Sun Yeong; Han, Sang Hoon

    2005-01-01

    The component reliability data of Korean NPP that reflects the plant specific characteristics is required necessarily for PSA of Korean nuclear power plants. We have performed a project to develop the component reliability database (KIND, Korea Integrated Nuclear Reliability Database) and S/W for database management and component reliability analysis. Based on the system, we have collected the component operation data and failure/repair data during from plant operation date to 2002 for YGN 3, 4 and UCN 3, 4 plants. Recently, we provided the component failure rate data for UCN 3, 4 standard PSA model from the KIND. We evaluated the components that have high-ranking failure rates with the component reliability data from plant operation date to 1998 and 2000 for YGN 3,4 and UCN 3, 4 respectively. We also identified their failure mode that occurred frequently. In this study, we analyze the component failure trend and perform site comparison based on the generic data by using the component reliability data which is extended to 2002 for UCN 3, 4 and YGN 3, 4 respectively. We focus on the major safety related rotating components such as pump, EDG etc

  16. Strategic decision analysis applied to borehole seismology

    International Nuclear Information System (INIS)

    Menke, M.M.; Paulsson, B.N.P.

    1994-01-01

    Strategic Decision Analysis (SDA) is the evolving body of knowledge on how to achieve high quality in the decisions that shape an organization's future. SDA comprises philosophy, process concepts, methodology, and tools for making good decisions. It specifically incorporates many concepts and tools from economic evaluation and risk analysis. Chevron Petroleum Technology Company (CPTC) has applied SDA to evaluate and prioritize a number of its most important and most uncertain R and D projects, including borehole seismology. Before SDA, there were significant issues and concerns about the value to CPTC of continuing to work on borehole seismology. The SDA process created a cross-functional team of experts to structure and evaluate this project. A credible economic model was developed, discrete risks and continuous uncertainties were assessed, and an extensive sensitivity analysis was performed. The results, even applied to a very restricted drilling program for a few years, were good enough to demonstrate the value of continuing the project. This paper explains the SDA philosophy, concepts, and process, and demonstrates the methodology and tools using the borehole seismology project example. SDA is useful in the upstream industry not just for R and D/technology decisions, but also for major exploration and production decisions. Since a major challenge for upstream companies today is to create and realize value, the SDA approach should have very broad applicability.

  17. A Genealogical Interpretation of Principal Components Analysis

    Science.gov (United States)

    McVean, Gil

    2009-01-01

    Principal components analysis, PCA, is a statistical method commonly used in population genetics to identify structure in the distribution of genetic variation across geographical location and ethnic background. However, while the method is often used to inform about historical demographic processes, little is known about the relationship between fundamental demographic parameters and the projection of samples onto the primary axes. Here I show that for SNP data the projection of samples onto the principal components can be obtained directly from considering the average coalescent times between pairs of haploid genomes. The result provides a framework for interpreting PCA projections in terms of underlying processes, including migration, geographical isolation, and admixture. I also demonstrate a link between PCA and Wright's F_ST and show that SNP ascertainment has a largely simple and predictable effect on the projection of samples. Using examples from human genetics, I discuss the application of these results to empirical data and the implications for inference. PMID:19834557

  18. Ecological Safety Evaluation of Land Use in Ji’an City Based on the Principal Component Analysis

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    According to the ecological safety evaluation index data of land-use change in Ji’an City from 1999 to 2008, the selected reverse indices are converted to positive form by the reciprocal method. Meanwhile, the index method is used to standardize the selected indices, and principal component analysis is applied on a yearly basis. FB is obtained, which relates to the ecological safety of land-use change from 1999 to 2008. According to scientific, integrative, hierarchical, practical and dynamic principles, an ecological safety evaluation index system of land-use change in Ji’an City is established. Principal component analysis and an evaluation model are used to calculate four parameters: the natural resources safety index of land use, the socio-economic safety index of land use, the eco-environmental safety index of land use, and the ecological safety degree of land use in Ji’an City. The results indicate that the ecological safety degree of land use in Ji’an City shows a slow upward trend as a whole. At the same time, the ecological safety degree of land-use change in Ji’an City is relatively low, with a safety value of 0.645, which places it in a weak safety zone needing further monitoring and maintenance.

  19. Sensitivity analysis on the component cooling system of the Angra 1 NPP

    International Nuclear Information System (INIS)

    Castro Silva, Luiz Euripedes Massiere de

    1995-01-01

    The component cooling system has been studied within the scope of the Probabilistic Safety Analysis of the Angra 1 NPP, in order to assure that the proposed modelling matches the actual functioning of the system and its availability aspects as closely as possible. To this end, a sensitivity analysis was performed on the equivalence between the operating modes of the component cooling system, and its results show the adequacy of the model. (author). 4 refs, 3 figs, 3 tabs

  20. A practical guide to propensity score analysis for applied clinical research.

    Science.gov (United States)

    Lee, Jaehoon; Little, Todd D

    2017-11-01

    Observational studies are often the only viable options in many clinical settings, especially when it is unethical or infeasible to randomly assign participants to different treatment régimes. In such cases, propensity score (PS) analysis can be applied to account for possible selection bias and thereby address questions of causal inference. Many PS methods exist, yet few guidelines are available to aid applied researchers in their conduct and evaluation of a PS analysis. In this article we give an overview of available techniques for PS estimation and application, balance diagnostics, treatment effect estimation, and sensitivity assessment, as well as recent advances. We also offer a tutorial that can be used to emulate the steps of a PS analysis. Our goal is to provide information that will bring PS analysis within the reach of applied clinical researchers and practitioners. Copyright © 2017 Elsevier Ltd. All rights reserved.
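
    A minimal illustration of PS estimation followed by inverse-probability-of-treatment weighting, one of the many PS methods surveyed; the confounders, treatment model and outcome below are simulated.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(12)
      n = 2000
      x = rng.standard_normal((n, 3))                          # observed confounders
      p_treat = 1.0 / (1.0 + np.exp(-(x[:, 0] + 0.5 * x[:, 1])))
      treated = rng.random(n) < p_treat                        # confounded assignment
      outcome = 2.0 * treated + x[:, 0] + rng.standard_normal(n)   # true effect = 2.0

      # Step 1: estimate the propensity score from the confounders
      ps = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]

      # Step 2: inverse-probability-of-treatment weighting
      w = np.where(treated, 1.0 / ps, 1.0 / (1.0 - ps))
      ate = (np.average(outcome[treated], weights=w[treated])
             - np.average(outcome[~treated], weights=w[~treated]))
      naive = outcome[treated].mean() - outcome[~treated].mean()
      print(f"naive: {naive:.2f}, IPW: {ate:.2f}")             # IPW is close to 2.0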

  1. Reliability Analysis of 6-Component Star Markov Repairable System with Spatial Dependence

    Directory of Open Access Journals (Sweden)

    Liying Wang

    2017-01-01

    Full Text Available Star repairable systems with spatial dependence consist of a center component and several peripheral components. The peripheral components are arranged around the center component, and the performance of each component depends on its spatial “neighbors.” A vector Markov process is adopted to describe the performance of the system. The state space and transition rate matrix corresponding to the 6-component star Markov repairable system with spatial dependence are presented via a probability analysis method. Several reliability indices, such as the availability and the probabilities of visiting the safety, degradation, alert, and failed state sets, are obtained by the Laplace transform method, and a numerical example is provided to illustrate the results.
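
    A toy continuous-time Markov sketch of the availability computation: build the transition rate matrix Q for a small repairable system and solve pi Q = 0 with a normalization constraint. The 2-unit, 3-state system and its rates are placeholders for the paper's 6-component star system.

      import numpy as np

      # States: 0 = both units up, 1 = one down, 2 = both down (system failed)
      lam, mu = 0.01, 0.5                  # failure and repair rates (per hour)
      Q = np.array([
          [-2 * lam,       2 * lam,   0.0],
          [      mu,  -(mu + lam),    lam],
          [     0.0,           mu,    -mu],
      ])

      # Stationary distribution: pi Q = 0 subject to sum(pi) = 1
      A = np.vstack([Q.T, np.ones(3)])
      b = np.array([0.0, 0.0, 0.0, 1.0])
      pi, *_ = np.linalg.lstsq(A, b, rcond=None)

      availability = pi[0] + pi[1]         # up unless both units are down
      print(pi, availability)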

  2. Electronic components

    CERN Document Server

    Colwell, Morris A

    1976-01-01

    Electronic Components provides a basic grounding in the practical aspects of using and selecting electronics components. The book describes the basic requirements needed to start practical work on electronic equipment, resistors and potentiometers, capacitance, and inductors and transformers. The text discusses semiconductor devices such as diodes, thyristors and triacs, transistors and heat sinks, logic and linear integrated circuits (I.C.s) and electromechanical devices. Common abbreviations applied to components are provided. Constructors and electronics engineers will find the book useful

  3. Optimization benefits analysis in production process of fabrication components

    Science.gov (United States)

    Prasetyani, R.; Rafsanjani, A. Y.; Rimantho, D.

    2017-12-01

    The determination of an optimal number of product combinations is important. The main problem at the part and service department of PT. United Tractors Pandu Engineering (shortened to PT. UTPE) is the optimization of the combination of fabrication component products (known as the Liner Plate), which influences the profit that the company will obtain. The Liner Plate is a fabrication component that serves as a protector of the core structure of heavy-duty attachments, such as the HD Vessel, HD Bucket, HD Shovel, and HD Blade. The graph of liner plate sales from January to December 2016 fluctuates, and no direct conclusion can be drawn about the optimal production level of such fabrication components. The optimal product combination can be achieved by calculating and plotting the amount of production output and input appropriately. The method used in this study is linear programming with primal, dual, and sensitivity analysis, using QM software for Windows to obtain the optimal mix of fabrication components. At the optimal combination of components, PT. UTPE obtains a profit increase of Rp. 105,285,000.00, for a total of Rp. 3,046,525,000.00 per month, with a production combination totaling 71 units per product variant per month.
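
    A minimal sketch of the primal/dual linear-programming analysis described above, using SciPy in place of the QM for Windows package. The profit coefficients, resource constraints, and capacities are invented for illustration; the dual values (shadow prices) printed at the end are the quantities a sensitivity analysis inspects, and the `ineqlin.marginals` field assumes a recent SciPy with the HiGHS solver.

```python
from scipy.optimize import linprog

# Hypothetical product-mix LP: profit per unit of three liner-plate
# variants (in Rp millions), maximized subject to two resource limits.
profit = [-40, -30, -55]           # linprog minimizes, so negate profits
A_ub = [[2, 1, 3],                 # machine hours per unit
        [4, 3, 2]]                 # steel plate sheets per unit
b_ub = [120, 180]                  # monthly capacity of each resource

res = linprog(c=profit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print("optimal mix:", res.x.round(1), "profit:", -res.fun)

# Dual values (shadow prices): the extra profit one more unit of each
# resource would yield - the starting point of sensitivity analysis.
print("shadow prices:", res.ineqlin.marginals)
```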

  4. An Inverse Kinematic Approach Using Groebner Basis Theory Applied to Gait Cycle Analysis

    Science.gov (United States)

    2013-03-01

    An Inverse Kinematic Approach Using Groebner Basis Theory Applied to Gait Cycle Analysis. Thesis by Anum Barki, BS (AFIT-ENP-13-M-02). Approved: Dr. Ronald F. Tuttle (Chairman) and Dr. Kimberly Kendricks.

  5. Linearization of the Principal Component Analysis method for radiative transfer acceleration: Application to retrieval algorithms and sensitivity studies

    International Nuclear Information System (INIS)

    Spurr, R.; Natraj, V.; Lerot, C.; Van Roozendael, M.; Loyola, D.

    2013-01-01

    Principal Component Analysis (PCA) is a promising tool for enhancing radiative transfer (RT) performance. When applied to binned optical property data sets, PCA exploits redundancy in the optical data, and restricts the number of full multiple-scatter calculations to those optical states corresponding to the most important principal components, yet still maintaining high accuracy in the radiance approximations. We show that the entire PCA RT enhancement process is analytically differentiable with respect to any atmospheric or surface parameter, thus allowing for accurate and fast approximations of Jacobian matrices, in addition to radiances. This linearization greatly extends the power and scope of the PCA method to many remote sensing retrieval applications and sensitivity studies. In the first example, we examine accuracy for PCA-derived UV-backscatter radiance and Jacobian fields over a 290–340 nm window. In a second application, we show that performance for UV-based total ozone column retrieval is considerably improved without compromising the accuracy. -- Highlights: •Principal Component Analysis (PCA) of spectrally-binned atmospheric optical properties. •PCA-based accelerated radiative transfer with 2-stream model for fast multiple-scatter. •Atmospheric and surface property linearization of this PCA performance enhancement. •Accuracy of PCA enhancement for radiances and bulk-property Jacobians, 290–340 nm. •Application of PCA speed enhancement to UV backscatter total ozone retrievals

  6. Functional Connectivity Parcellation of the Human Thalamus by Independent Component Analysis.

    Science.gov (United States)

    Zhang, Sheng; Li, Chiang-Shan R

    2017-11-01

    As a key structure to relay and integrate information, the thalamus supports multiple cognitive and affective functions through the connectivity between its subnuclei and cortical and subcortical regions. Although extant studies have largely described thalamic regional functions in anatomical terms, evidence accumulates to suggest a more complex picture of subareal activities and connectivities of the thalamus. In this study, we aimed to parcellate the thalamus and examine whole-brain connectivity of its functional clusters. With resting state functional magnetic resonance imaging data from 96 adults, we used independent component analysis (ICA) to parcellate the thalamus into 10 components. On the basis of the independence assumption, ICA helps to identify how subclusters overlap spatially. Whole-brain functional connectivity of each subdivision was computed for the independent component's time course (ICtc), a unique time series that represents an IC. For comparison, we computed seed-region-based functional connectivity using the averaged time course across all voxels within a thalamic subdivision. The results showed that ICtc analysis revealed patterns of connectivity that were more clearly distinguished between thalamic clusters than those from seed-region analysis. ICtc analysis demonstrated thalamic connectivity to the primary motor cortex, which had eluded seed-region analysis as well as previous studies based on averaged time series, and clarified thalamic connectivity to the hippocampus, caudate nucleus, and precuneus. The new findings elucidate the functional organization of the thalamus and suggest that ICA clustering in combination with ICtc, rather than seed-region analysis, better distinguishes whole-brain connectivities among functional clusters of a brain region.
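
    The ICtc idea admits a compact sketch: run spatial ICA on the thalamic voxels, keep each component's time course, and correlate it with every brain voxel. Below, random arrays stand in for preprocessed fMRI data (ICA is only meaningful on real, non-Gaussian sources); the array sizes are invented.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Placeholder data: T timepoints for V thalamic voxels and W brain voxels.
rng = np.random.default_rng(2)
T, V, W = 200, 300, 1000
thalamus = rng.normal(size=(T, V))
brain = rng.normal(size=(T, W))

# Parcellate the thalamus into 10 spatially overlapping components;
# with time as the sample axis, fit_transform returns the IC time courses.
ica = FastICA(n_components=10, random_state=0)
ictc = ica.fit_transform(thalamus)        # (T, 10), one ICtc per component

# Whole-brain connectivity of each ICtc: Pearson correlation with
# every brain voxel, computed via z-scored dot products.
z = lambda a: (a - a.mean(0)) / a.std(0)
conn = z(ictc).T @ z(brain) / T           # (10, W) correlation maps
print(conn.shape)
```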

  7. Applied spectrophotometry: analysis of a biochemical mixture.

    Science.gov (United States)

    Trumbo, Toni A; Schultz, Emeric; Borland, Michael G; Pugh, Michael Eugene

    2013-01-01

    Spectrophotometric analysis is essential for determining the biomolecule concentration of a solution and is employed ubiquitously in biochemistry and molecular biology. The application of the Beer-Lambert-Bouguer Law is routinely used to determine the concentration of DNA, RNA or protein. There is, however, a significant difference between determining the concentration of a given species (RNA, DNA, protein) in isolation (a contrived circumstance) and determining that concentration in the presence of other species (a more realistic situation). To present the student with a more realistic laboratory experience, and also to fill a hole that we believe exists in student experience prior to reaching a biochemistry course, we have devised a three-week laboratory experience designed so that students learn to: connect laboratory practice with theory, apply the Beer-Lambert-Bouguer Law to biochemical analyses, demonstrate the utility and limitations of example quantitative colorimetric assays, demonstrate the utility and limitations of UV analyses for biomolecules, develop strategies for analysis of a solution of unknown biomolecular composition, use digital micropipettors to make accurate and precise measurements, and apply graphing software. Copyright © 2013 Wiley Periodicals, Inc.
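
    For a mixture, the Beer-Lambert-Bouguer law A(λ) = Σᵢ εᵢ(λ)·l·cᵢ turns into a linear system once absorbance is measured at as many wavelengths as there are components. A minimal two-component worked example follows; the molar absorptivities and absorbance readings are hypothetical, not values from the exercise.

```python
import numpy as np

# Hypothetical molar absorptivities (L mol^-1 cm^-1) of two species
# at two wavelengths, and the measured mixture absorbances.
eps = np.array([[6600.0, 1200.0],    # at 260 nm: species 1, species 2
                [ 800.0, 5500.0]])   # at 280 nm
l = 1.0                              # path length in cm
A = np.array([0.75, 0.42])           # measured absorbances at 260/280 nm

# Solve A = (eps * l) @ c for the concentration vector c (mol/L).
c = np.linalg.solve(eps * l, A)
print(c)
```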

  8. Determining the optimal number of independent components for reproducible transcriptomic data analysis.

    Science.gov (United States)

    Kairov, Ulykbek; Cantini, Laura; Greco, Alessandro; Molkenov, Askhat; Czerwinska, Urszula; Barillot, Emmanuel; Zinovyev, Andrei

    2017-09-11

    Independent Component Analysis (ICA) is a method that models gene expression data as an action of a set of statistically independent hidden factors. The output of ICA depends on a fundamental parameter: the number of components (factors) to compute. The optimal choice of this parameter, related to determining the effective data dimension, remains an open question in the application of blind source separation techniques to transcriptomic data. Here we address the question of optimizing the number of statistically independent components in the analysis of transcriptomic data for reproducibility of the components in multiple runs of ICA (within the same or within varying effective dimensions) and in multiple independent datasets. To this end, we introduce ranking of independent components based on their stability in multiple ICA computation runs and define a distinguished number of components (Most Stable Transcriptome Dimension, MSTD) corresponding to the point of the qualitative change of the stability profile. Based on a large body of data, we demonstrate that a sufficient number of dimensions is required for biological interpretability of the ICA decomposition and that the most stable components with ranks below MSTD have more chances to be reproduced in independent studies compared to the less stable ones. At the same time, we show that a transcriptomics dataset can be reduced to a relatively high number of dimensions without losing the interpretability of ICA, even though higher dimensions give rise to components driven by small gene sets. We suggest a protocol of ICA application to transcriptomics data with a possibility of prioritizing components with respect to their reproducibility that strengthens the biological interpretation. Computing too few components (much less than MSTD) is not optimal for interpretability of the results. The components ranked within MSTD range have more chances to be reproduced in independent studies.
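
    A simplified stand-in for the stability ranking described above: ICA is re-estimated over multiple runs and components are scored by their reproducibility across runs. The actual MSTD procedure (clustering components over many runs and over a range of effective dimensions) is richer than this sketch, and the toy data are random.

```python
import numpy as np
from sklearn.decomposition import FastICA

def stability_profile(X, k, runs=10):
    """Score k independent components by stability: each component's
    best absolute correlation with the components of a reference run,
    averaged over re-runs with different seeds."""
    ref = FastICA(n_components=k, random_state=0).fit_transform(X)
    ref = (ref - ref.mean(0)) / ref.std(0)
    stab = np.zeros(k)
    for seed in range(1, runs):
        S = FastICA(n_components=k, random_state=seed).fit_transform(X)
        S = (S - S.mean(0)) / S.std(0)
        C = np.abs(ref.T @ S) / X.shape[0]     # |corr| between runs
        stab += C.max(axis=1)
    return np.sort(stab / (runs - 1))[::-1]    # most stable first

# Toy stand-in for an expression matrix (samples x genes).
X = np.random.default_rng(3).normal(size=(500, 40))
print(stability_profile(X, k=5).round(2))
```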

  9. Thermal Analysis of the Driving Component Based on the Thermal Network Method in a Lunar Drilling System and Experimental Verification

    Directory of Open Access Journals (Sweden)

    Dewei Tang

    2017-03-01

    The main task of the third Chinese lunar exploration project is to obtain soil samples that are greater than two meters in length and to acquire bedding information from the surface of the moon. The driving component is the power output unit of the drilling system in the lander; it provides drilling power for the core drilling tools. High temperatures can cause the sensors, permanent magnet, gears, and bearings to suffer irreversible damage. In this paper, a thermal analysis model for this driving component, based on the thermal network method (TNM), was established, and the model was solved using the quasi-Newton method. A vacuum test platform was built and an experimental verification method (EVM) was applied to measure the surface temperature of the driving component. Then, the TNM was optimized based on the principle of heat distribution. Through comparative analyses, the reasonableness of the TNM is validated. Finally, the static temperature field of the driving component was predicted and the “safe working times” of every mode are given.
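
    In a thermal network model the component is lumped into nodes whose steady-state temperatures satisfy nonlinear heat-balance equations (nonlinear because of radiation), which a quasi-Newton-type root finder can solve. The sketch below is a hypothetical 3-node network: the conductances, dissipations, and radiative parameters are invented and unrelated to the actual driving component.

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical 3-node thermal network: conductive links G (W/K),
# radiation to a 200 K sink, internal dissipation P (W) per node.
G = np.array([[0.0, 0.8, 0.2],
              [0.8, 0.0, 0.5],
              [0.2, 0.5, 0.0]])
P = np.array([15.0, 5.0, 0.0])
sigma_eA = 1e-9          # lumped emissivity * area * Stefan-Boltzmann
T_sink = 200.0

def residual(T):
    # Steady state: dissipation = conductive exchange + radiative loss.
    cond = (G * (T[:, None] - T[None, :])).sum(axis=1)
    rad = sigma_eA * (T**4 - T_sink**4)
    return P - cond - rad

# fsolve's hybrid Powell scheme plays the role of the quasi-Newton solver.
T = fsolve(residual, x0=np.full(3, 300.0))
print("steady-state node temperatures (K):", T.round(1))
```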

  10. Applied Drama and the Higher Education Learning Spaces: A Reflective Analysis

    Science.gov (United States)

    Moyo, Cletus

    2015-01-01

    This paper explores Applied Drama as a teaching approach in Higher Education learning spaces. The exploration takes a reflective analysis approach by first examining the impact that Applied Drama has had on my career as a Lecturer/Educator/Teacher working in Higher Education environments. My engagement with Applied Drama practice and theory is…

  11. A Neuro-Fuzzy Inference System Combining Wavelet Denoising, Principal Component Analysis, and Sequential Probability Ratio Test for Sensor Monitoring

    International Nuclear Information System (INIS)

    Na, Man Gyun; Oh, Seungrohk

    2002-01-01

    A neuro-fuzzy inference system combined with the wavelet denoising, principal component analysis (PCA), and sequential probability ratio test (SPRT) methods has been developed to monitor a given sensor using the information of other sensors. The parameters of the neuro-fuzzy inference system that estimates the relevant sensor signal are optimized by a genetic algorithm and a least-squares algorithm. The wavelet denoising technique was applied to remove noise components from the input signals to the neuro-fuzzy system. By reducing the dimension of the input space without losing a significant amount of information, PCA was used to shorten the training time of the neuro-fuzzy system, simplify its structure, and ease the selection of its input signals. Using the residual signals between the estimated and measured signals, the SPRT is applied to detect whether the sensors are degraded. The proposed sensor-monitoring algorithm was verified through applications to the pressurizer water level, pressurizer pressure, and hot-leg temperature sensors in pressurized water reactors.
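
    The SPRT step admits a compact sketch: for Gaussian residuals, the log-likelihood ratio between a healthy hypothesis (zero mean) and a degraded hypothesis (shifted mean) is accumulated until it crosses thresholds set by the desired error rates. The drift size and error rates below are assumptions, not the paper's settings.

```python
import numpy as np

def sprt(residuals, sigma, m=0.5, alpha=0.01, beta=0.01):
    """Wald's SPRT on estimation residuals: H0 mean 0 vs H1 mean m*sigma.
    Returns 'degraded', 'healthy', or 'continue' for the residual stream."""
    A, B = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr = 0.0
    for r in residuals:
        # Log-likelihood ratio increment for a Gaussian residual r.
        llr += (m * sigma * r - 0.5 * (m * sigma) ** 2) / sigma**2
        if llr >= A:
            return "degraded"
        if llr <= B:
            return "healthy"
    return "continue"

rng = np.random.default_rng(4)
print(sprt(rng.normal(0.0, 1.0, 200), sigma=1.0))   # healthy sensor
print(sprt(rng.normal(0.8, 1.0, 200), sigma=1.0))   # drifted sensor
```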

  12. A Cure for Variance Inflation in High Dimensional Kernel Principal Component Analysis

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    Small sample high-dimensional principal component analysis (PCA) suffers from variance inflation and lack of generalizability. It has earlier been pointed out that a simple leave-one-out variance renormalization scheme can cure the problem. In this paper we generalize the cure in two directions: first, we propose a computationally less intensive approximate leave-one-out estimator; secondly, we show that variance inflation is also present in kernel principal component analysis (kPCA) and we provide a non-parametric renormalization scheme which can quite efficiently restore generalizability in kPCA. As for PCA, our analysis also suggests a simplified approximate expression. © 2011 Trine J. Abrahamsen and Lars K. Hansen.

  13. Fault Diagnosis Method Based on Information Entropy and Relative Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Xiaoming Xu

    2017-01-01

    In traditional principal component analysis (PCA), because the influence of the dimensions of the different variables in the system is neglected, the selected principal components (PCs) often fail to be representative. While relative-transformation PCA can solve this problem, it is not easy to calculate the weight for each characteristic variable. To address this, this paper proposes a fault diagnosis method based on information entropy and Relative Principal Component Analysis. First, the algorithm calculates the information entropy of each characteristic variable in the original dataset based on the information gain algorithm. Second, it standardizes the dimension of every variable in the dataset. Then, according to the information entropy, it allocates a weight to each standardized characteristic variable. Finally, it utilizes the established relative-principal-components model for fault diagnosis. Simulation experiments based on the Tennessee Eastman process and Wine datasets demonstrate the feasibility and effectiveness of the new method.
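
    One concrete reading of the entropy-based weighting step is the classic entropy-weight construction sketched below; the paper's information-gain variant may differ in detail, and the process data here are random stand-ins.

```python
import numpy as np

def entropy_weights(X):
    """Entropy-weight method: variables whose values are spread more
    unevenly across samples carry more information and get more weight."""
    P = X / X.sum(axis=0)                 # column-wise proportions
    P = np.clip(P, 1e-12, None)
    e = -(P * np.log(P)).sum(axis=0) / np.log(len(X))   # entropy in [0, 1]
    return (1 - e) / (1 - e).sum()

rng = np.random.default_rng(5)
X = np.abs(rng.normal(size=(100, 6)))     # hypothetical process variables
Z = (X - X.mean(0)) / X.std(0)            # standardize the dimensions
w = entropy_weights(X)
Zw = Z * np.sqrt(w)                       # entropy-weighted variables

# PCA of the weighted data: eigen-decomposition of its covariance.
vals, vecs = np.linalg.eigh(np.cov(Zw.T))
order = np.argsort(vals)[::-1]
print("variable weights:", w.round(3))
print("explained variance ratio:", (vals[order] / vals.sum()).round(3))
```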

  14. Graphical Methodology of Global Pollution Index for the Environmental Impact Assessment Using Two Environmental Components

    OpenAIRE

    Corneliu Cojocaru; Diana Mariana Cocârţă; Irina Aura Istrate; Igor Creţescu

    2017-01-01

    One of the applied methods for environmental impact assessment is the index of global pollution (IGP) proposed by Rojanschi in 1991. This methodology enables the global estimation for the ecosystem state affected more or less by human activities. Unfortunately, Rojanschi’s method has a limitation; it can be applied only if at least three environmental components are considered. Frequently, many environmental impact assessment applications rely on analysis of only two environmental components....

  15. A Fault Prognosis Strategy Based on Time-Delayed Digraph Model and Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Ningyun Lu

    2012-01-01

    Because of the interlinking of process equipment in the process industry, event information may propagate through the plant and affect many downstream process variables. Specifying the causality and estimating the time delays among process variables are critically important for data-driven fault prognosis. They are not only helpful for finding the root cause when a plant-wide disturbance occurs, but also reveal how an abnormal event evolves as it propagates through the plant. This paper addresses the information flow directionality and time-delay estimation problems in the process industry and presents an information synchronization technique to assist fault prognosis. Time-delayed mutual information (TDMI) is used for both causality analysis and time-delay estimation. To represent the causality structure of high-dimensional process variables, a time-delayed signed digraph (TD-SDG) model is developed. A general fault prognosis strategy is then developed based on the TD-SDG model and principal component analysis (PCA). The proposed method is applied to an air separation unit and has achieved satisfying results in predicting the frequently occurring “nitrogen-block” fault.
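
    Time-delayed mutual information has a compact implementation: estimate the mutual information between x(t) and y(t + lag) over a range of lags and take the lag at which it peaks as the propagation delay. The histogram estimator and the toy delayed signal below are illustrative choices, not the paper's exact estimator.

```python
import numpy as np

def mutual_info(x, y, bins=16):
    """Histogram estimate of the mutual information between two series."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return (pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum()

def tdmi_delay(x, y, max_lag=50):
    """Lag at which MI between x(t) and y(t+lag) peaks: the estimated
    propagation delay from x to y."""
    lags = range(1, max_lag + 1)
    mi = [mutual_info(x[:-k], y[k:]) for k in lags]
    return lags[int(np.argmax(mi))], mi

rng = np.random.default_rng(6)
x = rng.normal(size=2000)
y = np.roll(x, 12) + 0.3 * rng.normal(size=2000)   # y lags x by 12 samples
delay, _ = tdmi_delay(x, y)
print("estimated delay:", delay)
```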

  16. MULTI-COMPONENT ANALYSIS OF POSITION-VELOCITY CUBES OF THE HH 34 JET

    International Nuclear Information System (INIS)

    Rodríguez-González, A.; Esquivel, A.; Raga, A. C.; Cantó, J.; Curiel, S.; Riera, A.; Beck, T. L.

    2012-01-01

    We present an analysis of Hα spectra of the HH 34 jet with two-dimensional spectral resolution. We carry out multi-Gaussian fits to the spatially resolved line profiles and derive maps of the intensity, radial velocity, and velocity width of each of the components. We find that close to the outflow source we have three components: a high (negative) radial velocity component with a well-collimated, jet-like morphology; an intermediate velocity component with a broader morphology; and a positive radial velocity component with a non-collimated morphology and large linewidth. We suggest that this positive velocity component is associated with jet emission scattered in stationary dust present in the circumstellar environment. Farther away from the outflow source, we find only two components (a high, negative radial velocity component, which has a narrower spatial distribution than an intermediate velocity component). The fitting procedure was carried out with the new AGA-V1 code, which is available online and is described in detail in this paper.
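
    As an illustration of the multi-Gaussian decomposition of a spatially resolved line profile, the sketch below fits a sum of three Gaussians (intensity, radial-velocity centroid, and velocity width per component) to a synthetic Hα profile. It uses SciPy's least-squares curve_fit rather than the paper's AGA-V1 code, and the profile parameters are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def three_gaussians(v, *p):
    """Sum of three Gaussian velocity components; p packs
    (intensity, centroid, width) triples."""
    return sum(a * np.exp(-0.5 * ((v - mu) / s) ** 2)
               for a, mu, s in zip(p[0::3], p[1::3], p[2::3]))

# Synthetic profile (km/s): high-velocity jet, intermediate, and
# broad positive-velocity (scattered) components, plus noise.
v = np.linspace(-300, 300, 400)
true = three_gaussians(v, 1.0, -150, 25, 0.6, -60, 50, 0.3, 40, 80)
rng = np.random.default_rng(7)
obs = true + 0.02 * rng.normal(size=v.size)

p0 = [1, -140, 30, 0.5, -50, 60, 0.2, 30, 90]      # initial guesses
popt, _ = curve_fit(three_gaussians, v, obs, p0=p0)
print(np.round(popt, 1))
```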

  17. Functional Data Analysis Applied in Chemometrics

    DEFF Research Database (Denmark)

    Muller, Martha

    …nutritional status and metabolic phenotype. We want to understand how metabolomic spectra can be analysed using functional data analysis to detect the influence of different factors on specific metabolites. These factors can include, for example, gender, diet culture or dietary intervention. In Paper I we apply … representation of each spectrum. Subset selection of wavelet coefficients generates the input to mixed models. Mixed-model methodology enables us to take the study design into account while modelling covariates. Bootstrap-based inference preserves the correlation structure between curves and enables the estimation…

  18. Enantiomer-specific analysis of multi-component mixtures by correlated electron imaging-ion mass spectrometry

    NARCIS (Netherlands)

    Rafiee Fanood, M.M.; Ram, N.B.; Lehmann, C.S.; Powis, I.; Janssen, M.H.M.

    2015-01-01

    Simultaneous, enantiomer-specific identification of chiral molecules in multi-component mixtures is extremely challenging. Many established techniques for single-component analysis fail to provide selectivity in multi-component mixtures and lack sensitivity for dilute samples. Here we show how

  19. Identification of components of fibroadenoma in cytology preparations using texture analysis: a morphometric study.

    Science.gov (United States)

    Singh, S; Gupta, R

    2012-06-01

    To evaluate the utility of image analysis using textural parameters obtained from a co-occurrence matrix in differentiating the three components of fibroadenoma of the breast in fine needle aspirate smears. Sixty cases of histologically proven fibroadenoma were included in this study. Of these, 40 cases were used as a training set and 20 cases were taken as a test set for the discriminant analysis. Digital images were acquired from cytological preparations of all the cases, and three components of fibroadenoma (namely, monolayered cell clusters, stromal fragments, and background with bare nuclei) were selected for image analysis. A co-occurrence matrix was generated and a texture parameter vector (sum mean, energy, entropy, contrast, cluster tendency and homogeneity) was calculated for each pixel. The percentage of pixels correctly classified to a component of fibroadenoma on discriminant analysis was noted. The textural parameters, when considered in isolation, showed considerable overlap in their values across the three cytological components of fibroadenoma. However, stepwise discriminant analysis revealed that all six textural parameters contributed significantly to the discriminant functions. Discriminant analysis using all six parameters showed that the numbers of pixels correctly classified in the training and test sets were 96.7% and 93.0%, respectively. Textural analysis using a co-occurrence matrix appears to be useful in differentiating the three cytological components of fibroadenoma. These results could further be utilized in developing algorithms for image segmentation and automated diagnosis, but need to be confirmed in further studies. © 2011 Blackwell Publishing Ltd.
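
    Co-occurrence (GLCM) texture features of the kind listed above can be computed with scikit-image, as sketched below on a random patch standing in for a cytology image (the function is named graycomatrix in recent releases, greycomatrix in older ones). Sum mean and entropy are computed by hand, since graycoprops does not provide them, and the sum-mean formula used is one common definition.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Placeholder 8-bit patch; in practice, a region containing one
# candidate fibroadenoma component from a smear image.
rng = np.random.default_rng(8)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# Co-occurrence matrix for one pixel offset, normalized to probabilities.
glcm = graycomatrix(patch, distances=[1], angles=[0],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop)[0, 0]
            for prop in ("energy", "contrast", "homogeneity")}
P = glcm[:, :, 0, 0]
features["entropy"] = -(P[P > 0] * np.log2(P[P > 0])).sum()
i, j = np.indices(P.shape)
features["sum mean"] = ((i + j) * P).sum() / 2
print(features)
```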

  20. The Cost Analysis of Corrosion Protection Solutions for Steel Components in Terms of the Object Life Cycle Cost

    Directory of Open Access Journals (Sweden)

    Kowalski Dariusz

    2017-09-01

    Steel materials, due to their numerous advantages - high availability, ease of processing, and the possibility of almost arbitrary shaping - are commonly applied in construction for building basic load-carrying systems and auxiliary structures. However, the major disadvantage of this material is its high susceptibility to corrosion, which depends strictly on the local conditions of the facility and the type of corrosion protection system applied. The paper presents an analysis of the life cycle costs of structures installed on bridges used in road-lane conditions. Three anti-corrosion protection systems were considered, and their essential cost components were analyzed. The possibility of significantly reducing the costs associated with anti-corrosion protection at the maintenance stage of steel barriers over a period of 30 years is indicated. The possibility of using a new approach based on life cycle cost estimation in the anti-corrosion protection of steel elements is presented. The relationship between the method of steel barrier protection, the scope of repair and renewal work, and the costs is shown. The article proposes an optimal solution which, while reducing the cost of maintaining road infrastructure components in the area of corrosion protection, maintains certain safety standards for the steel barriers installed on bridges.

  1. The Cost Analysis of Corrosion Protection Solutions for Steel Components in Terms of the Object Life Cycle Cost

    Science.gov (United States)

    Kowalski, Dariusz; Grzyl, Beata; Kristowski, Adam

    2017-09-01

    Steel materials, due to their numerous advantages - high availability, ease of processing, and the possibility of almost arbitrary shaping - are commonly applied in construction for building basic load-carrying systems and auxiliary structures. However, the major disadvantage of this material is its high susceptibility to corrosion, which depends strictly on the local conditions of the facility and the type of corrosion protection system applied. The paper presents an analysis of the life cycle costs of structures installed on bridges used in road-lane conditions. Three anti-corrosion protection systems were considered, and their essential cost components were analyzed. The possibility of significantly reducing the costs associated with anti-corrosion protection at the maintenance stage of steel barriers over a period of 30 years is indicated. The possibility of using a new approach based on life cycle cost estimation in the anti-corrosion protection of steel elements is presented. The relationship between the method of steel barrier protection, the scope of repair and renewal work, and the costs is shown. The article proposes an optimal solution which, while reducing the cost of maintaining road infrastructure components in the area of corrosion protection, maintains certain safety standards for the steel barriers installed on bridges.
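
    The life-cycle-cost comparison underlying this record amounts to discounting each protection system's initial application and renewal costs over the analysis period. A minimal sketch follows; the cost figures, renewal schedules, and discount rate are invented and not taken from the paper.

```python
def life_cycle_cost(initial, renewals, rate=0.04):
    """Discounted life cycle cost of a corrosion protection system:
    initial application plus renewals at (year, cost) over 30 years."""
    return initial + sum(cost / (1 + rate) ** yr for yr, cost in renewals)

# Hypothetical cost profiles (per m2) for three anti-corrosion systems.
systems = {
    "paint only":       life_cycle_cost(30, [(10, 25), (20, 25)]),
    "zinc + paint":     life_cycle_cost(45, [(15, 20)]),
    "hot-dip galvanic": life_cycle_cost(60, []),
}
for name, lcc in sorted(systems.items(), key=lambda kv: kv[1]):
    print(f"{name}: {lcc:.1f} per m2 over 30 years")
```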

  2. Compressive Online Robust Principal Component Analysis with Multiple Prior Information

    DEFF Research Database (Denmark)

    Van Luong, Huynh; Deligiannis, Nikos; Seiler, Jürgen

    …low-rank components. Unlike conventional batch RPCA, which processes all the data directly, our method considers a small set of measurements taken per data vector (frame). Moreover, our method incorporates multiple prior information signals, namely previously reconstructed frames, to improve the separation and thereafter updates the prior information for the next frame. Using experiments on synthetic data, we evaluate the separation performance of the proposed algorithm. In addition, we apply the proposed algorithm to online video foreground and background separation from compressive measurements. The results show…

  3. Characterization and Discrimination of Gram-Positive Bacteria Using Raman Spectroscopy with the Aid of Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Alia Colniță

    2017-09-01

    Raman scattering and its particular effect, surface-enhanced Raman scattering (SERS), are whole-organism fingerprinting spectroscopic techniques that are gaining more and more popularity in bacterial detection. In this work, two relevant Gram-positive bacterial species, Lactobacillus casei (L. casei) and Listeria monocytogenes (L. monocytogenes), were characterized based on their Raman and SERS spectral fingerprints. The SERS spectra were used to identify the biochemical structures of the bacterial cell wall. Two synthesis methods for the SERS-active nanomaterials were used and the recorded spectra were analyzed. L. casei and L. monocytogenes were successfully discriminated by applying Principal Component Analysis (PCA) to their specific spectral data.

  4. Component analysis and initial validity of the exercise fear avoidance scale.

    Science.gov (United States)

    Wingo, Brooks C; Baskin, Monica; Ard, Jamy D; Evans, Retta; Roy, Jane; Vogtle, Laura; Grimley, Diane; Snyder, Scott

    2013-01-01

    To develop the Exercise Fear Avoidance Scale (EFAS) to measure fear of exercise-induced discomfort. We conducted principal component analysis to determine the component structure and computed Cronbach's alpha to assess the internal consistency of the EFAS. Relationships between EFAS scores, BMI, physical activity, and pain were analyzed using multivariate regression. The best fit was a three-component structure: weight-specific fears, cardiorespiratory fears, and musculoskeletal fears. Cronbach's alpha for the EFAS was α = .86. EFAS scores significantly predicted BMI, physical activity, and PDI scores. The psychometric properties of this scale suggest it may be useful for tailoring exercise prescriptions to address fear of exercise-related discomfort.
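
    The internal-consistency check reported above uses Cronbach's alpha, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal computation on simulated Likert-style data follows; the item count and data are invented.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(9)
latent = rng.normal(size=(200, 1))                   # shared "fear" factor
items = latent + 0.8 * rng.normal(size=(200, 10))    # 10 simulated items
print(f"alpha = {cronbach_alpha(items):.2f}")
```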

  5. Applied Behavior Analysis: Current Myths in Public Education

    Science.gov (United States)

    Fielding, Cheryl; Lowdermilk, John; Lanier, Lauren L.; Fannin, Abigail G.; Schkade, Jennifer L.; Rose, Chad A.; Simpson, Cynthia G.

    2013-01-01

    The effective use of behavior management strategies and related policies continues to be a debated issue in public education. Despite overwhelming evidence espousing the benefits of the implementation of procedures derived from principles based on the science of applied behavior analysis (ABA), educators often indicate many common misconceptions…

  6. Fusion-component lifetime analysis

    International Nuclear Information System (INIS)

    Mattas, R.F.

    1982-09-01

    A one-dimensional computer code has been developed to examine the lifetime of first-wall and impurity-control components. The code incorporates the operating and design parameters, the material characteristics, and the appropriate failure criteria for the individual components. The major emphasis of the modeling effort has been to calculate the temperature-stress-strain-radiation effects history of a component so that the synergistic effects between sputtering erosion, swelling, creep, fatigue, and crack growth can be examined. The general forms of the property equations are the same for all materials in order to provide the greatest flexibility for materials selection in the code. The individual coefficients within the equations are different for each material. The code is capable of determining the behavior of a plate, composed of either a single or dual material structure, that is either totally constrained or constrained from bending but not from expansion. The code has been utilized to analyze the first walls for FED/INTOR and DEMO and to analyze the limiter for FED/INTOR

  7. Effect of applied voltage on phase components of composite coatings prepared by micro-arc oxidation

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Wenjun [Department of Prosthodontics, Guanghua School of Stomatology, Sun Yat-sen University, Guangzhou 510055 (China); Fang, Yu-Jing [Department of Colorectal Surgery, State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Guangzhou 510060 (China); Zheng, Huade [College of Materials Science and Engineering, South China University of Technology, Guangzhou 510641 (China); Tan, Guoxin [Guangdong University of Technology, Guangdong Province 510006 (China); Cheng, Haimei [College of Materials Science and Engineering, South China University of Technology, Guangzhou 510641 (China); Ning, Chengyun, E-mail: imcyning@scut.edu.cn [College of Materials Science and Engineering, South China University of Technology, Guangzhou 510641 (China)

    2013-10-01

    In this report, we present results from our experiments on composite coatings formed on biomedical titanium substrates by micro-arc oxidation (MAO) in constant-voltage mode. The coatings were prepared on the substrates in an aqueous electrolyte containing calcium acetate and β-glycerol phosphate disodium salt pentahydrate (β-GP). We analyzed the element distribution and phase components of the coatings prepared at different voltages by X-ray diffraction, thin-coating X-ray diffraction, electron-probe microanalysis, and Fourier-transform infrared spectroscopy. The results show that the composite coatings formed at 500 V consist of titania (TiO₂), hydroxylapatite (HA), and calcium carbonate (CaCO₃). Furthermore, the concentrations of Ca, P, and Ti gradually change with increasing applied voltage, and the phase components of the composite coatings gradually change from the bottom of the coating to the top: the bottom layer consists of TiO₂, the middle layer consists of TiO₂ and HA, and the top layer consists of HA and a small amount of CaCO₃. The formation of HA directly on the coating surface by the MAO technique can greatly enhance the surface bioactivity. - Highlights: • Coatings prepared on biomedical titanium substrates by micro-arc oxidation • Coatings composed of titania, hydroxyapatite and calcium carbonate • Hydroxyapatite on the coating surface can enhance the surface bioactivity.

  8. Stabilizing bidirectional associative memory with Principles in Independent Component Analysis and Null Space (PICANS)

    Science.gov (United States)

    LaRue, James P.; Luzanov, Yuriy

    2013-05-01

    A new extension to the way in which Bidirectional Associative Memory (BAM) algorithms are implemented is presented here. We show that by utilizing the singular value decomposition (SVD) and integrating principles of independent component analysis (ICA) into the null space (NS), we have created a novel approach to mitigating spurious attractors. We demonstrate this with two applications. The first application utilizes a one-layer association, while the second is modeled after the several hierarchical associations of the ventral pathways. The first application details the way in which we manage the associations in terms of matrices. The second application takes what we have learned from the first example and applies it to a cascade of a convolutional neural network (CNN) and a perceptron, this cascade being our signal processing model of the ventral pathways, i.e., the visual system.

  9. Error Ellipsoid Analysis for the Diameter Measurement of Cylindroid Components Using a Laser Radar Measurement System

    Directory of Open Access Journals (Sweden)

    Zhengchun Du

    2016-05-01

    The use of three-dimensional (3D) data in the industrial measurement field is becoming increasingly popular because of the rapid development of laser scanning techniques based on the time-of-flight principle. However, the accuracy and uncertainty of these types of measurement methods are seldom investigated. In this study, a mathematical uncertainty evaluation model for the diameter measurement of standard cylindroid components is proposed and applied to a 3D laser radar measurement system (LRMS). First, a single-point error ellipsoid analysis for the LRMS was established. An error ellipsoid model and algorithm for the diameter measurement of cylindroid components was then proposed based on the single-point error ellipsoid. Finally, four experiments were conducted using the LRMS to measure the diameter of a standard cylinder in the laboratory. The experimental uncertainty evaluation results consistently matched the predictions well. The proposed uncertainty evaluation model for cylindrical diameters can provide a reliable method for actual measurements and support further accuracy improvement of the LRMS.

  10. Development of the Inspection and Diagnosis Technology for the NSSS Components Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Hee; Eom, Heung Soup; Lee, Jae Cheol and others

    2005-02-15

    This project aims at the development of new technologies for the monitoring, inspection, diagnosis and evaluation of the safety-related components in nuclear power plants. These technologies are required to detect defects in the components of nuclear power plants and to prepare thoroughly against accidents. We performed the first stage of the study on four issues of recent focus. Thus, we developed an analysis model of the dynamic characteristics of the reactor internals, an on-line monitoring technology using ultrasonic guided waves, a network-based remote inspection system, and an inspection robot for control rod guide tube support pins. We also performed lifetime estimation and degradation analysis of NPP cables through accelerated degradation tests. The technologies developed in this project are applied to the components of nuclear power plants. The applications include localization of the NSSS integrity monitoring system, replacement of in-service inspection by on-line monitoring, remote inspection of the major components of the plants, lifetime estimation of degraded plant cables, and so on. The elemental technologies obtained through the project can have great ripple effects in general industry and can be applied to the inspection and diagnosis of components in other industries.

  11. Development of the Inspection and Diagnosis Technology for the NSSS Components Integrity

    International Nuclear Information System (INIS)

    Kim, Jae Hee; Eom, Heung Soup; Lee, Jae Cheol and others

    2005-02-01

    This project aims at the development of new technologies for the monitoring, inspection, diagnosis and evaluation of the safety-related components in nuclear power plants. These technologies are required to detect defects in the components of nuclear power plants and to prepare thoroughly against accidents. We performed the first stage of the study on four issues of recent focus. Thus, we developed an analysis model of the dynamic characteristics of the reactor internals, an on-line monitoring technology using ultrasonic guided waves, a network-based remote inspection system, and an inspection robot for control rod guide tube support pins. We also performed lifetime estimation and degradation analysis of NPP cables through accelerated degradation tests. The technologies developed in this project are applied to the components of nuclear power plants. The applications include localization of the NSSS integrity monitoring system, replacement of in-service inspection by on-line monitoring, remote inspection of the major components of the plants, lifetime estimation of degraded plant cables, and so on. The elemental technologies obtained through the project can have great ripple effects in general industry and can be applied to the inspection and diagnosis of components in other industries.

  12. A multi-dimensional functional principal components analysis of EEG data.

    Science.gov (United States)

    Hasenstab, Kyle; Scheffler, Aaron; Telesca, Donatello; Sugar, Catherine A; Jeste, Shafali; DiStefano, Charlotte; Şentürk, Damla

    2017-09-01

    The electroencephalography (EEG) data created in event-related potential (ERP) experiments have a complex high-dimensional structure. Each stimulus presentation, or trial, generates an ERP waveform which is an instance of functional data. The experiments are made up of sequences of multiple trials, resulting in longitudinal functional data and moreover, responses are recorded at multiple electrodes on the scalp, adding an electrode dimension. Traditional EEG analyses involve multiple simplifications of this structure to increase the signal-to-noise ratio, effectively collapsing the functional and longitudinal components by identifying key features of the ERPs and averaging them across trials. Motivated by an implicit learning paradigm used in autism research in which the functional, longitudinal, and electrode components all have critical interpretations, we propose a multidimensional functional principal components analysis (MD-FPCA) technique which does not collapse any of the dimensions of the ERP data. The proposed decomposition is based on separation of the total variation into subject and subunit level variation which are further decomposed in a two-stage functional principal components analysis. The proposed methodology is shown to be useful for modeling longitudinal trends in the ERP functions, leading to novel insights into the learning patterns of children with Autism Spectrum Disorder (ASD) and their typically developing peers as well as comparisons between the two groups. Finite sample properties of MD-FPCA are further studied via extensive simulations. © 2017, The International Biometric Society.

  13. Aeromagnetic Compensation Algorithm Based on Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Peilin Wu

    2018-01-01

    Aeromagnetic exploration is an important exploration method in geophysics. The data are typically measured by an optically pumped magnetometer mounted on an aircraft, but any aircraft produces significant levels of magnetic interference; therefore, aeromagnetic compensation is important in aeromagnetic exploration. However, multicollinearity of the aeromagnetic compensation model degrades the performance of the compensation. To address this issue, a novel aeromagnetic compensation method based on principal component analysis is proposed. In this algorithm, the correlation in the feature matrix is eliminated and the principal components are used to construct the hyperplane that compensates the platform-generated magnetic fields. The algorithm was tested using a helicopter, and the obtained improvement ratio is 9.86. The compensation quality is almost the same as, or slightly better than, that of ridge regression. The validity of the proposed method was experimentally demonstrated.

  14. New approaches to the modelling of multi-component fuel droplet heating and evaporation

    KAUST Repository

    Sazhin, Sergei S

    2015-02-25

    The previously suggested quasi-discrete model for the heating and evaporation of complex multi-component hydrocarbon fuel droplets is described. The dependence of the density, viscosity, heat capacity and thermal conductivity of the liquid components on carbon number n and temperature is taken into account. The effects of the temperature gradient and quasi-component diffusion inside droplets are taken into account. The analysis is based on the Effective Thermal Conductivity/Effective Diffusivity (ETC/ED) model. This model is applied to the analysis of Diesel and gasoline fuel droplet heating and evaporation. Components with relatively close n are replaced by quasi-components with properties calculated as the average properties of a priori defined groups of actual components. Thus, the analysis of the heating and evaporation of droplets consisting of many components is replaced with the analysis of the heating and evaporation of droplets consisting of relatively few quasi-components. It is demonstrated that the predictions of the model based on five quasi-components are almost indistinguishable from those of the model based on twenty quasi-components for Diesel fuel droplets, and are very close to those of the model based on thirteen quasi-components for gasoline fuel droplets. It is recommended that, in the modelling of both Diesel and gasoline spray combustion, the analysis of droplet heating and evaporation be based on as few as five quasi-components.

  15. Structural analysis of NPP components and structures

    International Nuclear Information System (INIS)

    Saarenheimo, A.; Keinaenen, H.; Talja, H.

    1998-01-01

    Capabilities for effective structural integrity assessment have been created and extended in several important cases. The applications presented in this paper deal with pressurised thermal shock (PTS) loading and with severe dynamic loading cases of the containment, reinforced concrete structures, and piping components. Hydrogen combustion within the containment is considered in some severe accident scenarios. Can a steel containment withstand the postulated hydrogen detonation loads and still maintain its integrity? This is the topic of Chapter 2. The following Chapter 3 deals with a reinforced concrete floor subjected to jet impingement caused by a postulated rupture of a nearby high-energy pipe, and Chapter 4 deals with the dynamic loading resistance of pipe lines under postulated pressure transients due to water hammer. The reliability of the structural integrity analysis methods and capabilities which have been developed for application in NPP component assessment must be evaluated and verified. The resources available within the RATU2 programme alone do not allow the large-scale experiments needed for that purpose to be performed. Thus, the verification of the PTS analysis capabilities has been conducted through participation in international co-operative programmes. Participation in the European Network for Evaluating Steel Components (NESC) is the topic of a parallel paper in this symposium. The results obtained in two other international programmes are summarised in Chapters 5 and 6 of this paper, where PTS tests with a model vessel and a benchmark assessment of RPV nozzle integrity are described. (author)

  16. Quantitative descriptive analysis and principal component analysis for sensory characterization of Indian milk product cham-cham.

    Science.gov (United States)

    Puri, Ritika; Khamrui, Kaushik; Khetra, Yogesh; Malhotra, Ravinder; Devraja, H C

    2016-02-01

    Promising development and expansion in the market of cham-cham, a traditional Indian dairy product, is expected in the coming years with the organized production of this milk product by some large dairies. The objective of this study was to document the extent of variation in the sensory properties of market samples of cham-cham collected from four different locations known for their excellence in cham-cham production, and to find out the attributes that govern much of the variation in the sensory scores of this product, using quantitative descriptive analysis (QDA) and principal component analysis (PCA). QDA revealed significant differences in the sensory attributes of cham-cham among the market samples. PCA identified four significant principal components that accounted for 72.4 % of the variation in the sensory data. Factor scores of each of the four principal components, which primarily correspond to sweetness/shape/dryness of interior, surface appearance/surface dryness, rancidity, and firmness attributes, specify the location of each market sample along each of the axes in 3-D graphs. These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring the attributes of cham-cham that contribute most to its sensory acceptability.

  17. Structural reliability analysis applied to pipeline risk analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gardiner, M. [GL Industrial Services, Loughborough (United Kingdom); Mendes, Renato F.; Donato, Guilherme V.P. [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil)

    2009-07-01

    Quantitative Risk Assessment (QRA) of pipelines requires two main components to be provided. These are models of the consequences that follow from some loss-of-containment incident, and models for the likelihood of such incidents occurring. This paper describes how PETROBRAS has used Structural Reliability Analysis for the second of these, to provide pipeline- and location-specific predictions of failure frequency for a number of pipeline assets. The paper presents an approach to estimating failure rates for liquid and gas pipelines, using Structural Reliability Analysis (SRA) to analyze the credible basic mechanisms of failure such as corrosion and mechanical damage. SRA is a probabilistic limit state method: for a given failure mechanism it quantifies the uncertainty in the parameters of mathematical models of the load-resistance state of a structure and then evaluates the probability of load exceeding resistance. SRA can benefit the pipeline risk management process by optimizing in-line inspection schedules, and as part of the design process for new construction in pipeline rights of way that already contain multiple lines. A case study is presented to show how the SRA approach has recently been used on PETROBRAS pipelines and the benefits obtained from it. (author)

  18. Principal component analysis networks and algorithms

    CERN Document Server

    Kong, Xiangyu; Duan, Zhansheng

    2017-01-01

    This book not only provides a comprehensive introduction to neural-based PCA methods in control science, but also presents many novel PCA algorithms and their extensions and generalizations, e.g., dual purpose, coupled PCA, GED, neural based SVD algorithms, etc. It also discusses in detail various analysis methods for the convergence, stabilizing, self-stabilizing property of algorithms, and introduces the deterministic discrete-time systems method to analyze the convergence of PCA/MCA algorithms. Readers should be familiar with numerical analysis and the fundamentals of statistics, such as the basics of least squares and stochastic algorithms. Although it focuses on neural networks, the book only presents their learning law, which is simply an iterative algorithm. Therefore, no a priori knowledge of neural networks is required. This book will be of interest and serve as a reference source to researchers and students in applied mathematics, statistics, engineering, and other related fields.

  19. Principal component analysis of FDG PET in amnestic MCI

    International Nuclear Information System (INIS)

    Nobili, Flavio; Girtler, Nicola; Brugnolo, Andrea; Dessi, Barbara; Rodriguez, Guido; Salmaso, Dario; Morbelli, Silvia; Piccardo, Arnoldo; Larsson, Stig A.; Pagani, Marco

    2008-01-01

    The purpose of the study is to evaluate the combined accuracy of episodic memory performance and ¹⁸F-FDG PET in identifying patients with amnestic mild cognitive impairment (aMCI) converting to Alzheimer's disease (AD), aMCI non-converters, and controls. Thirty-three patients with aMCI and 15 controls (CTR) were followed up for a mean of 21 months. Eleven patients developed AD (MCI/AD) and 22 remained with aMCI (MCI/MCI). ¹⁸F-FDG PET volumetric regions of interest underwent principal component analysis (PCA) that identified 12 principal components (PC), expressed by coarse component scores (CCS). Discriminant analysis was performed using the significant PCs and episodic memory scores. PCA highlighted relative hypometabolism in PC5, including the bilateral posterior cingulate and left temporal pole, and in PC7, including the bilateral orbitofrontal cortex, both in MCI/MCI and MCI/AD vs CTR. PC5 itself plus PC12, including the left lateral frontal cortex (LFC: BAs 44, 45, 46, 47), were significantly different between MCI/AD and MCI/MCI. By a three-group discriminant analysis, CTR were more accurately identified by PET-CCS + delayed recall score (100%), MCI/MCI by PET-CCS + either immediate or delayed recall scores (91%), while MCI/AD was identified by PET-CCS alone (82%). PET increased by 25% the correct allocations achieved by memory scores, while memory scores increased by 15% the correct allocations achieved by PET. Combining memory performance and ¹⁸F-FDG PET yielded a higher accuracy than each single tool in identifying CTR and MCI/MCI. The PC containing the bilateral posterior cingulate and left temporal pole was the hallmark of MCI/MCI patients, while the PC including the left LFC was the hallmark of conversion to AD. (orig.)

  20. Principal component analysis of FDG PET in amnestic MCI

    Energy Technology Data Exchange (ETDEWEB)

    Nobili, Flavio; Girtler, Nicola; Brugnolo, Andrea; Dessi, Barbara; Rodriguez, Guido [University of Genoa, Clinical Neurophysiology, Department of Endocrinological and Medical Sciences, Genoa (Italy); S. Martino Hospital, Alzheimer Evaluation Unit, Genoa (Italy); S. Martino Hospital, Head-Neck Department, Genoa (Italy); Salmaso, Dario [CNR, Institute of Cognitive Sciences and Technologies, Rome (Italy); CNR, Institute of Cognitive Sciences and Technologies, Padua (Italy); Morbelli, Silvia [University of Genoa, Nuclear Medicine Unit, Department of Internal Medicine, Genoa (Italy); Piccardo, Arnoldo [Galliera Hospital, Nuclear Medicine Unit, Department of Imaging Diagnostics, Genoa (Italy); Larsson, Stig A. [Karolinska Hospital, Department of Nuclear Medicine, Stockholm (Sweden); Pagani, Marco [CNR, Institute of Cognitive Sciences and Technologies, Rome (Italy); CNR, Institute of Cognitive Sciences and Technologies, Padua (Italy); Karolinska Hospital, Department of Nuclear Medicine, Stockholm (Sweden)

    2008-12-15

    The purpose of the study is to evaluate the combined accuracy of episodic memory performance and ¹⁸F-FDG PET in identifying patients with amnestic mild cognitive impairment (aMCI) converting to Alzheimer's disease (AD), aMCI non-converters, and controls. Thirty-three patients with aMCI and 15 controls (CTR) were followed up for a mean of 21 months. Eleven patients developed AD (MCI/AD) and 22 remained with aMCI (MCI/MCI). ¹⁸F-FDG PET volumetric regions of interest underwent principal component analysis (PCA) that identified 12 principal components (PC), expressed by coarse component scores (CCS). Discriminant analysis was performed using the significant PCs and episodic memory scores. PCA highlighted relative hypometabolism in PC5, including the bilateral posterior cingulate and left temporal pole, and in PC7, including the bilateral orbitofrontal cortex, both in MCI/MCI and MCI/AD vs CTR. PC5 itself plus PC12, including the left lateral frontal cortex (LFC: BAs 44, 45, 46, 47), were significantly different between MCI/AD and MCI/MCI. By a three-group discriminant analysis, CTR were more accurately identified by PET-CCS + delayed recall score (100%), MCI/MCI by PET-CCS + either immediate or delayed recall scores (91%), while MCI/AD was identified by PET-CCS alone (82%). PET increased by 25% the correct allocations achieved by memory scores, while memory scores increased by 15% the correct allocations achieved by PET. Combining memory performance and ¹⁸F-FDG PET yielded a higher accuracy than each single tool in identifying CTR and MCI/MCI. The PC containing the bilateral posterior cingulate and left temporal pole was the hallmark of MCI/MCI patients, while the PC including the left LFC was the hallmark of conversion to AD. (orig.)

  1. Time-Frequency Data Reduction for Event Related Potentials: Combining Principal Component Analysis and Matching Pursuit

    Directory of Open Access Journals (Sweden)

    Selin Aviyente

    2010-01-01

    Joint time-frequency representations offer a rich representation of event related potentials (ERPs) that cannot be obtained through individual time or frequency domain analysis. This representation, however, comes at the expense of increased data volume and the difficulty of interpreting the resulting representations. Therefore, methods that can reduce the large amount of time-frequency data to experimentally relevant components are essential. In this paper, we present a method that reduces the large volume of ERP time-frequency data to a few significant time-frequency parameters. The proposed method is based on applying the widely used matching pursuit (MP) approach, with a Gabor dictionary, to principal components extracted from the time-frequency domain. The proposed PCA-Gabor decomposition is compared with other time-frequency data reduction methods, such as the time-frequency PCA approach alone and standard matching pursuit methods using a Gabor dictionary, for both simulated and biological data. The results show that the proposed PCA-Gabor approach performs better than either the PCA alone or the standard MP data reduction methods, by using the smallest amount of ERP data variance to produce the strongest statistical separation between experimental conditions.

  2. Fluorescence lifetime selectivity in excitation-emission matrices for qualitative analysis of a two-component system

    International Nuclear Information System (INIS)

    Millican, D.W.; McGown, L.B.

    1989-01-01

    Steady-state fluorescence excitation-emission matrices (EEMs) and phase-resolved EEMs (PREEMs), collected at modulation frequencies of 6, 18, and 30 MHz, were used for the qualitative analysis of mixtures of benzo[k]fluoranthene (τ = 8 ns) and benzo[b]fluoranthene (τ = 29 ns) in ethanol. The EEMs of the individual components were extracted from mixture EEMs by means of wavelength component vector-gram (WCV) analysis. Phase resolution was found to be superior to steady-state measurements for the extraction of the component spectra, for mixtures in which the intensity contributions of the two components are unequal

  3. Spectral analysis and filter theory in applied geophysics

    CERN Document Server

    Buttkus, Burkhard

    2000-01-01

    This book is intended to be an introduction to the fundamentals and methods of spectral analysis and filter theory and their applications in geophysics. The principles and theoretical basis of the various methods are described, their efficiency and effectiveness evaluated, and instructions provided for their practical application. Besides the conventional methods, newer methods are discussed, such as the spectral analysis of random processes by fitting models to the observed data, maximum-entropy spectral analysis and maximum-likelihood spectral analysis, the Wiener and Kalman filtering methods, homomorphic deconvolution, and adaptive methods for nonstationary processes. Multidimensional spectral analysis and filtering, as well as multichannel filters, are given extensive treatment. The book provides a survey of the state-of-the-art of spectral analysis and filter theory. The importance and possibilities of spectral analysis and filter theory in geophysics for data acquisition, processing an...

  4. The analysis of multivariate group differences using common principal components

    NARCIS (Netherlands)

    Bechger, T.M.; Blanca, M.J.; Maris, G.

    2014-01-01

    Although it is simple to determine whether multivariate group differences are statistically significant or not, such differences are often difficult to interpret. This article is about common principal components analysis as a tool for the exploratory investigation of multivariate group differences

  5. Source Signals Separation and Reconstruction Following Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    WANG Cheng

    2014-02-01

    Full Text Available For separation and reconstruction of source signals from observed signals problem, the physical significance of blind source separation modal and independent component analysis is not very clear, and its solution is not unique. Aiming at these disadvantages, a new linear and instantaneous mixing model and a novel source signals separation reconstruction solving method from observed signals based on principal component analysis (PCA are put forward. Assumption of this new model is statistically unrelated rather than independent of source signals, which is different from the traditional blind source separation model. A one-to-one relationship between linear and instantaneous mixing matrix of new model and linear compound matrix of PCA, and a one-to-one relationship between unrelated source signals and principal components are demonstrated using the concept of linear separation matrix and unrelated of source signals. Based on this theoretical link, source signals separation and reconstruction problem is changed into PCA of observed signals then. The theoretical derivation and numerical simulation results show that, in despite of Gauss measurement noise, wave form and amplitude information of unrelated source signal can be separated and reconstructed by PCA when linear mixing matrix is column orthogonal and normalized; only wave form information of unrelated source signal can be separated and reconstructed by PCA when linear mixing matrix is column orthogonal but not normalized, unrelated source signal cannot be separated and reconstructed by PCA when mixing matrix is not column orthogonal or linear.

  6. A novel principal component analysis for spatially misaligned multivariate air pollution data.

    Science.gov (United States)

    Jandarov, Roman A; Sheppard, Lianne A; Sampson, Paul D; Szpiro, Adam A

    2017-01-01

    We propose novel methods for predictive (sparse) PCA with spatially misaligned data. These methods identify principal component loading vectors that explain as much variability in the observed data as possible, while also ensuring the corresponding principal component scores can be predicted accurately by means of spatial statistics at locations where air pollution measurements are not available. This will make it possible to identify important mixtures of air pollutants and to quantify their health effects in cohort studies, where currently available methods cannot be used. We demonstrate the utility of predictive (sparse) PCA in simulated data and apply the approach to annual averages of particulate matter speciation data from national Environmental Protection Agency (EPA) regulatory monitors.

  7. Research on criticality analysis method of CNC machine tools components under fault rate correlation

    Science.gov (United States)

    Gui-xiang, Shen; Xian-zhuo, Zhao; Zhang, Ying-zhi; Chen-yu, Han

    2018-02-01

    In order to determine the key components of CNC machine tools under fault rate correlation, a system component criticality analysis method is proposed. Based on the fault mechanism analysis, the component fault relation is determined, and the adjacency matrix is introduced to describe it. Then, the fault structure relation is hierarchical by using the interpretive structure model (ISM). Assuming that the impact of the fault obeys the Markov process, the fault association matrix is described and transformed, and the Pagerank algorithm is used to determine the relative influence values, combined component fault rate under time correlation can obtain comprehensive fault rate. Based on the fault mode frequency and fault influence, the criticality of the components under the fault rate correlation is determined, and the key components are determined to provide the correct basis for equationting the reliability assurance measures. Finally, taking machining centers as an example, the effectiveness of the method is verified.

  8. The derivative assay--an analysis of two fast components of DNA rejoining kinetics

    International Nuclear Information System (INIS)

    Sandstroem, B.E.

    1989-01-01

    The DNA rejoining kinetics of human U-118 MG cells were studied after gamma-irradiation with 4 Gy. The analysis of the sealing rate of the induced DNA strand breaks was made with a modification of the DNA unwinding technique. The modification meant that rather than just monitoring the number of existing breaks at each time of analysis, the velocity, at which the rejoining process proceeded, was determined. Two apparent first-order components of single-strand break repair could be identified during the 25 min of analysis. The half-times for the two components were 1.9 and 16 min, respectively

  9. Quantifying biological samples using Linear Poisson Independent Component Analysis for MALDI-ToF mass spectra

    Science.gov (United States)

    Deepaisarn, S; Tar, P D; Thacker, N A; Seepujak, A; McMahon, A W

    2018-01-01

    Abstract Motivation Matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI) facilitates the analysis of large organic molecules. However, the complexity of biological samples and MALDI data acquisition leads to high levels of variation, making reliable quantification of samples difficult. We present a new analysis approach that we believe is well-suited to the properties of MALDI mass spectra, based upon an Independent Component Analysis derived for Poisson sampled data. Simple analyses have been limited to studying small numbers of mass peaks, via peak ratios, which is known to be inefficient. Conventional PCA and ICA methods have also been applied, which extract correlations between any number of peaks, but we argue makes inappropriate assumptions regarding data noise, i.e. uniform and Gaussian. Results We provide evidence that the Gaussian assumption is incorrect, motivating the need for our Poisson approach. The method is demonstrated by making proportion measurements from lipid-rich binary mixtures of lamb brain and liver, and also goat and cow milk. These allow our measurements and error predictions to be compared to ground truth. Availability and implementation Software is available via the open source image analysis system TINA Vision, www.tina-vision.net. Contact paul.tar@manchester.ac.uk Supplementary information Supplementary data are available at Bioinformatics online. PMID:29091994

  10. Radar fall detection using principal component analysis

    Science.gov (United States)

    Jokanovic, Branka; Amin, Moeness; Ahmad, Fauzia; Boashash, Boualem

    2016-05-01

    Falls are a major cause of fatal and nonfatal injuries in people aged 65 years and older. Radar has the potential to become one of the leading technologies for fall detection, thereby enabling the elderly to live independently. Existing techniques for fall detection using radar are based on manual feature extraction and require significant parameter tuning in order to provide successful detections. In this paper, we employ principal component analysis for fall detection, wherein eigen images of observed motions are employed for classification. Using real data, we demonstrate that the PCA based technique provides performance improvement over the conventional feature extraction methods.

  11. A hybrid sales forecasting scheme by combining independent component analysis with K-means clustering and support vector regression.

    Science.gov (United States)

    Lu, Chi-Jie; Chang, Chi-Chang

    2014-01-01

    Sales forecasting plays an important role in operating a business since it can be used to determine the required inventory level to meet consumer demand and avoid the problem of under/overstocking. Improving the accuracy of sales forecasting has become an important issue of operating a business. This study proposes a hybrid sales forecasting scheme by combining independent component analysis (ICA) with K-means clustering and support vector regression (SVR). The proposed scheme first uses the ICA to extract hidden information from the observed sales data. The extracted features are then applied to K-means algorithm for clustering the sales data into several disjoined clusters. Finally, the SVR forecasting models are applied to each group to generate final forecasting results. Experimental results from information technology (IT) product agent sales data reveal that the proposed sales forecasting scheme outperforms the three comparison models and hence provides an efficient alternative for sales forecasting.

  12. Detection of gear cracks in a complex gearbox of wind turbines using supervised bounded component analysis of vibration signals collected from multi-channel sensors

    Science.gov (United States)

    Li, Zhixiong; Yan, Xinping; Wang, Xuping; Peng, Zhongxiao

    2016-06-01

    In the complex gear transmission systems, in wind turbines a crack is one of the most common failure modes and can be fatal to the wind turbine power systems. A single sensor may suffer with issues relating to its installation position and direction, resulting in the collection of weak dynamic responses of the cracked gear. A multi-channel sensor system is hence applied in the signal acquisition and the blind source separation (BSS) technologies are employed to optimally process the information collected from multiple sensors. However, literature review finds that most of the BSS based fault detectors did not address the dependence/correlation between different moving components in the gear systems; particularly, the popular used independent component analysis (ICA) assumes mutual independence of different vibration sources. The fault detection performance may be significantly influenced by the dependence/correlation between vibration sources. In order to address this issue, this paper presents a new method based on the supervised order tracking bounded component analysis (SOTBCA) for gear crack detection in wind turbines. The bounded component analysis (BCA) is a state of art technology for dependent source separation and is applied limitedly to communication signals. To make it applicable for vibration analysis, in this work, the order tracking has been appropriately incorporated into the BCA framework to eliminate the noise and disturbance signal components. Then an autoregressive (AR) model built with prior knowledge about the crack fault is employed to supervise the reconstruction of the crack vibration source signature. The SOTBCA only outputs one source signal that has the closest distance with the AR model. Owing to the dependence tolerance ability of the BCA framework, interfering vibration sources that are dependent/correlated with the crack vibration source could be recognized by the SOTBCA, and hence, only useful fault information could be preserved in

  13. Research in applied mathematics, numerical analysis, and computer science

    Science.gov (United States)

    1984-01-01

    Research conducted at the Institute for Computer Applications in Science and Engineering (ICASE) in applied mathematics, numerical analysis, and computer science is summarized and abstracts of published reports are presented. The major categories of the ICASE research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software, especially vector and parallel computers.

  14. Predicting Insolvency : A comparison between discriminant analysis and logistic regression using principal components

    OpenAIRE

    Geroukis, Asterios; Brorson, Erik

    2014-01-01

    In this study, we compare the two statistical techniques logistic regression and discriminant analysis to see how well they classify companies based on clusters – made from the solvency ratio ­– using principal components as independent variables. The principal components are made with different financial ratios. We use cluster analysis to find groups with low, medium and high solvency ratio of 1200 different companies found on the NASDAQ stock market and use this as an apriori definition of ...

  15. Group-wise ANOVA simultaneous component analysis for designed omics experiments

    NARCIS (Netherlands)

    Saccenti, Edoardo; Smilde, Age K.; Camacho, José

    2018-01-01

    Introduction: Modern omics experiments pertain not only to the measurement of many variables but also follow complex experimental designs where many factors are manipulated at the same time. This data can be conveniently analyzed using multivariate tools like ANOVA-simultaneous component analysis

  16. Applied behavior analysis: understanding and changing behavior in the community-a representative review.

    Science.gov (United States)

    Luyben, Paul D

    2009-01-01

    Applied behavior analysis, a psychological discipline, has been characterized as the science of behavior change (Chance, 2006). Research in applied behavior analysis has been published for approximately 40 years since the initial publication of the Journal of Applied Behavior Analysis in 1968. The field now encompasses a wide range of human behavior. Although much of the published research centers on problem behaviors that occur in schools and among people with disabilities, a substantial body of knowledge has emerged in community settings. This article provides a review of the behavioral community research published in the Journal of Applied Behavior Analysis as representative of this work, including research in the areas of home and family, health, safety, community involvement and the environment, recreation and sports, crime and delinquency, and organizations. In the interest of space, research in schools and with people with disabilities has been excluded from this review.

  17. Probabilistic methods in nuclear power plant component ageing analysis

    International Nuclear Information System (INIS)

    Simola, K.

    1992-03-01

    The nuclear power plant ageing research is aimed to ensure that the plant safety and reliability are maintained at a desired level through the designed, and possibly extended lifetime. In ageing studies, the reliability of components, systems and structures is evaluated taking into account the possible time- dependent decrease in reliability. The results of analyses can be used in the evaluation of the remaining lifetime of components and in the development of preventive maintenance, testing and replacement programmes. The report discusses the use of probabilistic models in the evaluations of the ageing of nuclear power plant components. The principles of nuclear power plant ageing studies are described and examples of ageing management programmes in foreign countries are given. The use of time-dependent probabilistic models to evaluate the ageing of various components and structures is described and the application of models is demonstrated with two case studies. In the case study of motor- operated closing valves the analysis are based on failure data obtained from a power plant. In the second example, the environmentally assisted crack growth is modelled with a computer code developed in United States, and the applicability of the model is evaluated on the basis of operating experience

  18. Development of component failure data for seismic risk analysis

    International Nuclear Information System (INIS)

    Fray, R.R.; Moulia, T.A.

    1981-01-01

    This paper describes the quantification and utilization of seismic failure data used in the Diablo Canyon Seismic Risk Study. A single variable representation of earthquake severity that uses peak horizontal ground acceleration to characterize earthquake severity was employed. The use of a multiple variable representation would allow direct consideration of vertical accelerations and the spectral nature of earthquakes but would have added such complexity that the study would not have been feasible. Vertical accelerations and spectral nature were indirectly considered because component failure data were derived from design analyses, qualification tests and engineering judgment that did include such considerations. Two types of functions were used to describe component failure probabilities. Ramp functions were used for components, such as piping and structures, qualified by stress analysis. 'Anchor points' for ramp functions were selected by assuming a zero probability of failure at code allowable stress levels and unity probability of failure at ultimate stress levels. The accelerations corresponding to allowable and ultimate stress levels were determined by conservatively assuming a linear relationship between seismic stress and ground acceleration. Step functions were used for components, such as mechanical and electrical equipment, qualified by testing. Anchor points for step functions were selected by assuming a unity probability of failure above the qualification acceleration. (orig./HP)

  19. Investigation of inversion polymorphisms in the human genome using principal components analysis.

    Science.gov (United States)

    Ma, Jianzhong; Amos, Christopher I

    2012-01-01

    Despite the significant advances made over the last few years in mapping inversions with the advent of paired-end sequencing approaches, our understanding of the prevalence and spectrum of inversions in the human genome has lagged behind other types of structural variants, mainly due to the lack of a cost-efficient method applicable to large-scale samples. We propose a novel method based on principal components analysis (PCA) to characterize inversion polymorphisms using high-density SNP genotype data. Our method applies to non-recurrent inversions for which recombination between the inverted and non-inverted segments in inversion heterozygotes is suppressed due to the loss of unbalanced gametes. Inside such an inversion region, an effect similar to population substructure is thus created: two distinct "populations" of inversion homozygotes of different orientations and their 1:1 admixture, namely the inversion heterozygotes. This kind of substructure can be readily detected by performing PCA locally in the inversion regions. Using simulations, we demonstrated that the proposed method can be used to detect and genotype inversion polymorphisms using unphased genotype data. We applied our method to the phase III HapMap data and inferred the inversion genotypes of known inversion polymorphisms at 8p23.1 and 17q21.31. These inversion genotypes were validated by comparing with literature results and by checking Mendelian consistency using the family data whenever available. Based on the PCA-approach, we also performed a preliminary genome-wide scan for inversions using the HapMap data, which resulted in 2040 candidate inversions, 169 of which overlapped with previously reported inversions. Our method can be readily applied to the abundant SNP data, and is expected to play an important role in developing human genome maps of inversions and exploring associations between inversions and susceptibility of diseases.

  20. B. F. Skinner's Contributions to Applied Behavior Analysis

    Science.gov (United States)

    Morris, Edward K.; Smith, Nathaniel G.; Altus, Deborah E.

    2005-01-01

    Our paper reviews and analyzes B. F. Skinner's contributions to applied behavior analysis in order to assess his role as the field's originator and founder. We found, first, that his contributions fall into five categorizes: the style and content of his science, his interpretations of typical and atypical human behavior, the implications he drew…

  1. Applied research in uncertainty modeling and analysis

    CERN Document Server

    Ayyub, Bilal

    2005-01-01

    Uncertainty has been a concern to engineers, managers, and scientists for many years. For a long time uncertainty has been considered synonymous with random, stochastic, statistic, or probabilistic. Since the early sixties views on uncertainty have become more heterogeneous. In the past forty years numerous tools that model uncertainty, above and beyond statistics, have been proposed by several engineers and scientists. The tool/method to model uncertainty in a specific context should really be chosen by considering the features of the phenomenon under consideration, not independent of what is known about the system and what causes uncertainty. In this fascinating overview of the field, the authors provide broad coverage of uncertainty analysis/modeling and its application. Applied Research in Uncertainty Modeling and Analysis presents the perspectives of various researchers and practitioners on uncertainty analysis and modeling outside their own fields and domain expertise. Rather than focusing explicitly on...

  2. Principal-component analysis of two-particle azimuthal correlations in PbPb and pPb collisions at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Sirunyan, Albert M; et al.

    2017-08-23

    For the first time a principle-component analysis is used to separate out different orthogonal modes of the two-particle correlation matrix from heavy ion collisions. The analysis uses data from sqrt(s[NN]) = 2.76 TeV PbPb and sqrt(s[NN]) = 5.02 TeV pPb collisions collected by the CMS experiment at the LHC. Two-particle azimuthal correlations have been extensively used to study hydrodynamic flow in heavy ion collisions. Recently it has been shown that the expected factorization of two-particle results into a product of the constituent single-particle anisotropies is broken. The new information provided by these modes may shed light on the breakdown of flow factorization in heavy ion collisions. The first two modes ("leading" and "subleading") of two-particle correlations are presented for elliptical and triangular anisotropies in PbPb and pPb collisions as a function of pt over a wide range of event activity. The leading mode is found to be essentially equivalent to the anisotropy harmonic previously extracted from two-particle correlation methods. The subleading mode represents a new experimental observable and is shown to account for a large fraction of the factorization breaking recently observed at high transverse momentum. The principle-component analysis technique has also been applied to multiplicity fluctuations. These also show a subleading mode. The connection of these new results to previous studies of factorization is discussed.

  3. Adaptive tools in virtual environments: Independent component analysis for multimedia

    DEFF Research Database (Denmark)

    Kolenda, Thomas

    2002-01-01

    The thesis investigates the role of independent component analysis in the setting of virtual environments, with the purpose of finding properties that reflect human context. A general framework for performing unsupervised classification with ICA is presented in extension to the latent semantic in...... were compared to investigate computational differences and separation results. The ICA properties were finally implemented in a chat room analysis tool and briefly investigated for visualization of search engines results....

  4. BAYESIAN SEMI-BLIND COMPONENT SEPARATION FOR FOREGROUND REMOVAL IN INTERFEROMETRIC 21 cm OBSERVATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Le; Timbie, Peter T. [Department of Physics, University of Wisconsin, Madison, WI 53706 (United States); Bunn, Emory F. [Physics Department, University of Richmond, Richmond, VA 23173 (United States); Karakci, Ata; Korotkov, Andrei; Tucker, Gregory S. [Department of Physics, Brown University, 182 Hope Street, Providence, RI 02912 (United States); Sutter, P. M. [Center for Cosmology and Astro-Particle Physics, Ohio State University, Columbus, OH 43210 (United States); Wandelt, Benjamin D., E-mail: lzhang263@wisc.edu [Department of Physics, University of Illinois at Urbana-Champaign, 1110 W Green Street, Urbana, IL 61801 (United States)

    2016-01-15

    In this paper, we present a new Bayesian semi-blind approach for foreground removal in observations of the 21 cm signal measured by interferometers. The technique, which we call H i Expectation–Maximization Independent Component Analysis (HIEMICA), is an extension of the Independent Component Analysis technique developed for two-dimensional (2D) cosmic microwave background maps to three-dimensional (3D) 21 cm cosmological signals measured by interferometers. This technique provides a fully Bayesian inference of power spectra and maps and separates the foregrounds from the signal based on the diversity of their power spectra. Relying only on the statistical independence of the components, this approach can jointly estimate the 3D power spectrum of the 21 cm signal, as well as the 2D angular power spectrum and the frequency dependence of each foreground component, without any prior assumptions about the foregrounds. This approach has been tested extensively by applying it to mock data from interferometric 21 cm intensity mapping observations under idealized assumptions of instrumental effects. We also discuss the impact when the noise properties are not known completely. As a first step toward solving the 21 cm power spectrum analysis problem, we compare the semi-blind HIEMICA technique to the commonly used Principal Component Analysis. Under the same idealized circumstances, the proposed technique provides significantly improved recovery of the power spectrum. This technique can be applied in a straightforward manner to all 21 cm interferometric observations, including epoch of reionization measurements, and can be extended to single-dish observations as well.

  5. Personality disorders in substance abusers: Validation of the DIP-Q through principal components factor analysis and canonical correlation analysis

    Directory of Open Access Journals (Sweden)

    Hesse Morten

    2005-05-01

    Full Text Available Abstract Background Personality disorders are common in substance abusers. Self-report questionnaires that can aid in the assessment of personality disorders are commonly used in assessment, but are rarely validated. Methods The Danish DIP-Q as a measure of co-morbid personality disorders in substance abusers was validated through principal components factor analysis and canonical correlation analysis. A 4 components structure was constructed based on 238 protocols, representing antagonism, neuroticism, introversion and conscientiousness. The structure was compared with (a a 4-factor solution from the DIP-Q in a sample of Swedish drug and alcohol abusers (N = 133, and (b a consensus 4-components solution based on a meta-analysis of published correlation matrices of dimensional personality disorder scales. Results It was found that the 4-factor model of personality was congruent across the Danish and Swedish samples, and showed good congruence with the consensus model. A canonical correlation analysis was conducted on a subset of the Danish sample with staff ratings of pathology. Three factors that correlated highly between the two variable sets were found. These variables were highly similar to the three first factors from the principal components analysis, antagonism, neuroticism and introversion. Conclusion The findings support the validity of the DIP-Q as a measure of DSM-IV personality disorders in substance abusers.

  6. Reformulating Component Identification as Document Analysis Problem

    NARCIS (Netherlands)

    Gross, H.G.; Lormans, M.; Zhou, J.

    2007-01-01

    One of the first steps of component procurement is the identification of required component features in large repositories of existing components. On the highest level of abstraction, component requirements as well as component descriptions are usually written in natural language. Therefore, we can

  7. Thermogravimetric analysis of combustible waste components

    DEFF Research Database (Denmark)

    Munther, Anette; Wu, Hao; Glarborg, Peter

    In order to gain fundamental knowledge about the co-combustion of coal and waste derived fuels, the pyrolytic behaviors of coal, four typical waste components and their mixtures have been studied by a simultaneous thermal analyzer (STA). The investigated waste components were wood, paper, polypro......In order to gain fundamental knowledge about the co-combustion of coal and waste derived fuels, the pyrolytic behaviors of coal, four typical waste components and their mixtures have been studied by a simultaneous thermal analyzer (STA). The investigated waste components were wood, paper...

  8. Reliability Analysis of Fatigue Fracture of Wind Turbine Drivetrain Components

    DEFF Research Database (Denmark)

    Berzonskis, Arvydas; Sørensen, John Dalsgaard

    2016-01-01

    in the volume of the casted ductile iron main shaft, on the reliability of the component. The probabilistic reliability analysis conducted is based on fracture mechanics models. Additionally, the utilization of the probabilistic reliability for operation and maintenance planning and quality control is discussed....

  9. Identifying effective components of child maltreatment interventions: A meta-analysis

    NARCIS (Netherlands)

    van der Put, C.E.; Assink, M.; Gubbels, J.; Boekhout van Solinge, N.F.

    There is a lack of knowledge about specific components that make interventions effective in preventing or reducing child maltreatment. The aim of the present meta-analysis was to increase this knowledge by summarizing findings on effects of interventions for child maltreatment and by examining

  10. Principal component analysis of image gradient orientations for face recognition

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    We introduce the notion of Principal Component Analysis (PCA) of image gradient orientations. As image data is typically noisy, but noise is substantially different from Gaussian, traditional PCA of pixel intensities very often fails to estimate reliably the low-dimensional subspace of a given data

  11. Interpretation of risk significance of passive component aging using probabilistic structural analysis

    International Nuclear Information System (INIS)

    Phillips, J.H.; Atwood, C.L.

    1993-01-01

    The probabilistic risk assessments (PRAs) being developed at most nuclear power plants to calculate the risk of core damage generally focus on the possible failure of active components. Except as initiating events, the possible failure of passive components is given little consideration. The NRC is sponsoring a project at INEL to investigate the risk significance of passive components as they age. For this project, we developed a technique to calculate the failure probability of passive components over time, and demonstrated the technique by applying it to a weld in the auxiliary feedwater (AFW) system. A decreasing yearly rupture rate for this weld was calculated instead of the increasing rupture rate trend one might expect. We attribute this result to infant mortality; that is, most of those initial flaws that will eventually lead to rupture will do so early in life. This means that although each weld in a population may be wearing out, the population as a whole can exhibit a decreasing rupture rate. This observation has implications for passive components in commercial nuclear plants and other facilities where aging is a concern. For the population of passive components that exhibit a decreasing failure rate, risk increase is not a concern. The next step of the work is to identify the attributes that contribute to this decreasing rate, and to determine any attributes that would contribute to an increasing failure rate and thus to an increased risk

  12. Use of a Principal Components Analysis for the Generation of Daily Time Series.

    Science.gov (United States)

    Dreveton, Christine; Guillou, Yann

    2004-07-01

    A new approach for generating daily time series is considered in response to the weather-derivatives market. This approach consists of performing a principal components analysis to create independent variables, the values of which are then generated separately with a random process. Weather derivatives are financial or insurance products that give companies the opportunity to cover themselves against adverse climate conditions. The aim of a generator is to provide a wider range of feasible situations to be used in an assessment of risk. Generation of a temperature time series is required by insurers or bankers for pricing weather options. The provision of conditional probabilities and a good representation of the interannual variance are the main challenges of a generator when used for weather derivatives. The generator was developed according to this new approach using a principal components analysis and was applied to the daily average temperature time series of the Paris-Montsouris station in France. The observed dataset was homogenized and the trend was removed to represent correctly the present climate. The results obtained with the generator show that it represents correctly the interannual variance of the observed climate; this is the main result of the work, because one of the main discrepancies of other generators is their inability to represent accurately the observed interannual climate variance—this discrepancy is not acceptable for an application to weather derivatives. The generator was also tested to calculate conditional probabilities: for example, the knowledge of the aggregated value of heating degree-days in the middle of the heating season allows one to estimate the probability if reaching a threshold at the end of the heating season. This represents the main application of a climate generator for use with weather derivatives.

  13. Reliability analysis of nuclear component cooling water system using semi-Markov process model

    International Nuclear Information System (INIS)

    Veeramany, Arun; Pandey, Mahesh D.

    2011-01-01

    Research highlights: → Semi-Markov process (SMP) model is used to evaluate system failure probability of the nuclear component cooling water (NCCW) system. → SMP is used because it can solve reliability block diagram with a mixture of redundant repairable and non-repairable components. → The primary objective is to demonstrate that SMP can consider Weibull failure time distribution for components while a Markov model cannot → Result: the variability in component failure time is directly proportional to the NCCW system failure probability. → The result can be utilized as an initiating event probability in probabilistic safety assessment projects. - Abstract: A reliability analysis of nuclear component cooling water (NCCW) system is carried out. Semi-Markov process model is used in the analysis because it has potential to solve a reliability block diagram with a mixture of repairable and non-repairable components. With Markov models it is only possible to assume an exponential profile for component failure times. An advantage of the proposed model is the ability to assume Weibull distribution for the failure time of components. In an attempt to reduce the number of states in the model, it is shown that usage of poly-Weibull distribution arises. The objective of the paper is to determine system failure probability under these assumptions. Monte Carlo simulation is used to validate the model result. This result can be utilized as an initiating event probability in probabilistic safety assessment projects.

  14. Multi parametric sensitivity study applied to temperature measurement of metallic plasma facing components in fusion devices

    International Nuclear Information System (INIS)

    Aumeunier, M-H.; Corre, Y.; Firdaouss, M.; Gauthier, E.; Loarer, T.; Travere, J-M.; Gardarein, J-L.; EFDA JET Contributor

    2013-06-01

    In nuclear fusion experiments, the protection system of the Plasma Facing Components (PFCs) is commonly ensured by infrared (IR) thermography. Nevertheless, the surface monitoring of new metallic plasma facing component, as in JET and ITER is being challenging. Indeed, the analysis of infrared signals is made more complicated in such a metallic environment since the signals will be perturbed by the reflected photons coming from high temperature regions. To address and anticipate this new measurement environment, predictive photonic models, based on Monte-Carlo ray tracing (SPEOS R CAA V5 Based), have been performed to assess the contribution of the reflective part in the total flux collected by the camera and the resulting temperature error. This paper deals with the effects of metals features, as the emissivity and reflectivity models, on the accuracy of the surface temperature estimation. The reliability of the features models is discussed by comparing the simulation with experimental data obtained with the wide angle IR thermography system of JET ITER like wall. The impact of the temperature distribution is studied by considering two different typical plasma scenarios, in limiter (ITER start-up scenario) and in X-point configurations (standard divertor scenario). The achievable measurement performances of IR system and risks analysis on its functionalities are discussed. (authors)

  15. X-ray fluorescence spectrometry applied to soil analysis

    International Nuclear Information System (INIS)

    Salvador, Vera Lucia Ribeiro; Sato, Ivone Mulako; Scapin Junior, Wilson Santo; Scapin, Marcos Antonio; Imakima, Kengo

    1997-01-01

    This paper studies the X-ray fluorescence spectrometry applied to the soil analysis. A comparative study of the WD-XRFS and ED-XRFS techniques was carried out by using the following soil samples: SL-1, SOIL-7 and marine sediment SD-M-2/TM, from IAEA, and clay, JG-1a from Geological Survey of Japan (GSJ)

  16. Neutron activation analysis applied to energy and environment

    International Nuclear Information System (INIS)

    Lyon, W.S.

    1975-01-01

    Neutron activation analysis was applied to a number of problems concerned with energy production and the environment. Burning of fossil fuel, the search for new sources of uranium, possible presence of toxic elements in food and water, and the relationship of trace elements to cardiovascular disease are some of the problems in which neutron activation was used. (auth)

  17. Research in progress in applied mathematics, numerical analysis, and computer science

    Science.gov (United States)

    1990-01-01

    Research conducted at the Institute in Science and Engineering in applied mathematics, numerical analysis, and computer science is summarized. The Institute conducts unclassified basic research in applied mathematics in order to extend and improve problem solving capabilities in science and engineering, particularly in aeronautics and space.

  18. CLASSIFICATION OF LIDAR DATA OVER BUILDING ROOFS USING K-MEANS AND PRINCIPAL COMPONENT ANALYSIS

    Directory of Open Access Journals (Sweden)

    Renato César dos Santos

    Full Text Available Abstract: The classification is an important step in the extraction of geometric primitives from LiDAR data. Normally, it is applied for the identification of points sampled on geometric primitives of interest. In the literature there are several studies that have explored the use of eigenvalues to classify LiDAR points into different classes or structures, such as corner, edge, and plane. However, in some works the classes are defined considering an ideal geometry, which can be affected by the inadequate sampling and/or by the presence of noise when using real data. To overcome this limitation, in this paper is proposed the use of metrics based on eigenvalues and the k-means method to carry out the classification. So, the concept of principal component analysis is used to obtain the eigenvalues and the derived metrics, while the k-means is applied to cluster the roof points in two classes: edge and non-edge. To evaluate the proposed method four test areas with different levels of complexity were selected. From the qualitative and quantitative analyses, it could be concluded that the proposed classification procedure gave satisfactory results, resulting in completeness and correctness above 92% for the non-edge class, and between 61% to 98% for the edge class.

  19. Applied data analysis and modeling for energy engineers and scientists

    CERN Document Server

    Reddy, T Agami

    2011-01-01

    ""Applied Data Analysis and Modeling for Energy Engineers and Scientists"" discusses mathematical models, data analysis, and decision analysis in modeling. The approach taken in this volume focuses on the modeling and analysis of thermal systems in an engineering environment, while also covering a number of other critical areas. Other material covered includes the tools that researchers and engineering professionals will need in order to explore different analysis methods, use critical assessment skills and reach sound engineering conclusions. The book also covers process and system design and

  20. Principal component analysis of tomato genotypes based on some morphological and biochemical quality indicators

    Directory of Open Access Journals (Sweden)

    Glogovac Svetlana

    2012-01-01

    Full Text Available This study investigates variability of tomato genotypes based on morphological and biochemical fruit traits. Experimental material is a part of tomato genetic collection from Institute of Filed and Vegetable Crops in Novi Sad, Serbia. Genotypes were analyzed for fruit mass, locule number, index of fruit shape, fruit colour, dry matter content, total sugars, total acidity, lycopene and vitamin C. Minimum, maximum and average values and main indicators of variability (CV and σ were calculated. Principal component analysis was performed to determinate variability source structure. Four principal components, which contribute 93.75% of the total variability, were selected for analysis. The first principal component is defined by vitamin C, locule number and index of fruit shape. The second component is determined by dry matter content, and total acidity, the third by lycopene, fruit mass and fruit colour. Total sugars had the greatest part in the fourth component.

  1. Reliability prediction system based on the failure rate model for electronic components

    International Nuclear Information System (INIS)

    Lee, Seung Woo; Lee, Hwa Ki

    2008-01-01

    Although many methodologies for predicting the reliability of electronic components have been developed, their reliability might be subjective according to a particular set of circumstances, and therefore it is not easy to quantify their reliability. Among the reliability prediction methods are the statistical analysis based method, the similarity analysis method based on an external failure rate database, and the method based on the physics-of-failure model. In this study, we developed a system by which the reliability of electronic components can be predicted by creating a system for the statistical analysis method of predicting reliability most easily. The failure rate models that were applied are MILHDBK- 217F N2, PRISM, and Telcordia (Bellcore), and these were compared with the general purpose system in order to validate the effectiveness of the developed system. Being able to predict the reliability of electronic components from the stage of design, the system that we have developed is expected to contribute to enhancing the reliability of electronic components

  2. Development status of component reliability database for Korean NPPs and a case study

    International Nuclear Information System (INIS)

    Choi, S. Y.; Yang, S. H.; Lee, S. C.; Kim, S. H.; Han, S. H.

    1999-01-01

    We have applied a generic database to the PSA (Probabilistic Safety Assessment) for the Korean Standard NPPs (Nuclear Power Plant) since there is no specific component reliability database. However generic data is not enough to reflect the specific characteristics of domestic plants since it is collected by foreign plants. Therefore we are developing the plant-specific component reliability database for domestic NPPs. In this paper, we describe the development status of the component reliability database and the approach method of data collection and component failure analysis. We also summarize a case study of component failure analysis. We first collect the failure and repair data from the TR (Trouble Report) electronic database and the daily operation report sheet. Now we add a data collection method that checks the original TR sheet to improve the data quality. We input the component failure and repair data of principal components of about 30 systems into the component reliability database. Now, we are analyzing the component failure data of 11 safety systems among the systems to calculate component failure rate and unavailability etc

  3. Principal Component Analysis to Explore Climatic Variability and Dengue Outbreak in Lahore

    Directory of Open Access Journals (Sweden)

    Syed Afrozuddin Ahmed

    2014-08-01

    Full Text Available Normal 0 false false false EN-US X-NONE X-NONE Various studies have reported that global warming causes unstable climate and many serious impact to physical environment and public health. The increasing incidence of dengue incidence is now a priority health issue and become a health burden of Pakistan.  In this study it has been investigated that spatial pattern of environment causes the emergence or increasing rate of dengue fever incidence that effects the population and its health. Principal component analysis is performed for the purpose of finding if there is/are any general environmental factor/structure which could be affected in the emergence of dengue fever cases in Pakistani climate. Principal component is applied to find structure in data for all four periods i.e. 1980 to 2012, 1980 to 1995 and 1996 to 2012.  The first three PCs for the period (1980-2012, 1980-1994, 1995-2012 are almost the same and it represent hot and windy weather. The PC1s of all dengue periods are different to each other. PC2 for all period are same and it is wetness in weather. PC3s are different and it is the combination of wetness and windy weather. PC4s for all period show humid but no rain in weather. For climatic variable only minimum temperature and maximum temperature are significantly correlated with daily dengue cases.  PC1, PC3 and PC4 are highly significantly correlated with daily dengue cases 

  4. Application of the PISC results and methodology to assess the effectiveness of NDT techniques applied on non nuclear components

    International Nuclear Information System (INIS)

    Maciga, G.; Papponetti, M.; Crutzen, S.; Jehenson, P.

    1990-01-01

    Performance demonstration for NDT has been an active topic for several years. Interest in it came to the fore in the early 1980's when several institutions started to propose to use of realistic training assemblies and the formal approach of Validation Centers. These steps were justified for example by the results of the PISC exercises which concluded that there was a need for performance demonstration starting with capability assessment of techniques and procedure as they were routinely applied. If the PISC programme is put under the general ''Nuclear Motivation'', the PISC Methodology could be extended to problems to structural components in general, such as on conventional power plants, chemical, aerospace and offshore industries, where integrity and safety have regarded as being of great importance. Some themes of NDT inspections of fossil power plant and offshore components that could be objects of validation studies will be illustrated. (author)

  5. Fatigue characterization of mechanical components in service

    Directory of Open Access Journals (Sweden)

    G. Fargione

    2013-10-01

    Full Text Available The quickly identify of fatigue limit of a mechanical component with good approximation is currently a significant practical problem not yet resolved in a satisfactory way. Generally, for a mechanical component, the fatigue strength reduction factor (i is difficult to evaluate especially when it is in service.In this paper, the procedures for crack paths individuation and consequently damage evaluation (adopted in laboratory for stressed specimens with planned load histories are applied to mechanical components, already failed during service. The energy parameters, proposed by the authors for the evaluation of the fatigue behavior of the materials [1-5], are defined on specimens derived from a flange bolts. The flange connecting pipes at high temperature and pressure. Due to the loss of the seal, the bolts have been subjected to a hot flow steam addition to the normal stress.The numerical analysis coupled experimental analysis (measurement of surface temperature during static and dynamic tests of specimens taken from damaged tie rods, has helped to determine the causes of failure of the tie rods.The determination of an energy parameter for the evaluation of the damage showed that factors related to the heat release of the material (loaded may also help to understand the causes of failure of mechanical components.

  6. The Apply of Frequency Divider Circuit in Nuclear Electronics

    International Nuclear Information System (INIS)

    LIU Hefan; Zeng Bing; Zhang Ziliang; Ge Liangquan

    2009-01-01

    Different components in a digital system often need different working frequencies, the way we often used is clock division from the system clock. Through the analysis of frequency divider principle, a applied integer frequency dividing circuit with SE120A is proposed. It can divide the frequency multiple from 2 to 64. It's usually used in nuclear electronics. It's testing and analysis is displayed that it has no noise, good frequency division effect and stability. (authors)

  7. Fracture toughness evaluation of elastic-plastic J-integral for high temperature components of gas turbine in power plants

    International Nuclear Information System (INIS)

    Chung, Nam Yong; Kim, Moon Young; Kim, Jong Woo

    1999-01-01

    In the study, the analysis of elastic-plastic J-integral was performed in high temperature components for gas turbine based on elastic-plastic fracture mechanics. It had been operated on the range of about 700 deg C and degraded by high temperature. It was tested for material properties of used component because of material properties changing at high temperature condition. The elastic-plastic fracture mechanics parameter, J is obtained with finite element method. A method is suggested which determines J Ic applying analysis of elastic-plastic finite element method and results of experimental load-displacements with CT specimen. It is also investigated that J-integral is applied for the elastic-plastic analysis in high temperature components. The elastic-plastic fracture toughness. J Ic determined by finite element was obtained with high accuracy using the experimental method.=20

  8. Efficient training of multilayer perceptrons using principal component analysis

    International Nuclear Information System (INIS)

    Bunzmann, Christoph; Urbanczik, Robert; Biehl, Michael

    2005-01-01

    A training algorithm for multilayer perceptrons is discussed and studied in detail, which relates to the technique of principal component analysis. The latter is performed with respect to a correlation matrix computed from the example inputs and their target outputs. Typical properties of the training procedure are investigated by means of a statistical physics analysis in models of learning regression and classification tasks. We demonstrate that the procedure requires by far fewer examples for good generalization than traditional online training. For networks with a large number of hidden units we derive the training prescription which achieves, within our model, the optimal generalization behavior

  9. Estimation of compound distribution in spectral images of tomatoes using independent component analysis

    NARCIS (Netherlands)

    Polder, G.; Heijden, van der G.W.A.M.

    2003-01-01

    Independent Component Analysis (ICA) is one of the most widely used methods for blind source separation. In this paper we use this technique to estimate the important compounds which play a role in the ripening of tomatoes. Spectral images of tomatoes were analyzed. Two main independent components

  10. Demixed principal component analysis of neural population data.

    Science.gov (United States)

    Kobak, Dmitry; Brendel, Wieland; Constantinidis, Christos; Feierstein, Claudia E; Kepecs, Adam; Mainen, Zachary F; Qi, Xue-Lian; Romo, Ranulfo; Uchida, Naoshige; Machens, Christian K

    2016-04-12

    Neurons in higher cortical areas, such as the prefrontal cortex, are often tuned to a variety of sensory and motor variables, and are therefore said to display mixed selectivity. This complexity of single neuron responses can obscure what information these areas represent and how it is represented. Here we demonstrate the advantages of a new dimensionality reduction technique, demixed principal component analysis (dPCA), that decomposes population activity into a few components. In addition to systematically capturing the majority of the variance of the data, dPCA also exposes the dependence of the neural representation on task parameters such as stimuli, decisions, or rewards. To illustrate our method we reanalyze population data from four datasets comprising different species, different cortical areas and different experimental tasks. In each case, dPCA provides a concise way of visualizing the data that summarizes the task-dependent features of the population response in a single figure.

  11. Fetal source extraction from magnetocardiographic recordings by dependent component analysis

    Energy Technology Data Exchange (ETDEWEB)

    Araujo, Draulio B de [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Barros, Allan Kardec [Department of Electrical Engineering, Federal University of Maranhao, Sao Luis, Maranhao (Brazil); Estombelo-Montesco, Carlos [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Zhao, Hui [Department of Medical Physics, University of Wisconsin, Madison, WI (United States); Filho, A C Roque da Silva [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Baffa, Oswaldo [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Wakai, Ronald [Department of Medical Physics, University of Wisconsin, Madison, WI (United States); Ohnishi, Noboru [Department of Information Engineering, Nagoya University (Japan)

    2005-10-07

    Fetal magnetocardiography (fMCG) has been extensively reported in the literature as a non-invasive, prenatal technique that can be used to monitor various functions of the fetal heart. However, fMCG signals often have low signal-to-noise ratio (SNR) and are contaminated by strong interference from the mother's magnetocardiogram signal. A promising, efficient tool for extracting signals, even under low SNR conditions, is blind source separation (BSS), or independent component analysis (ICA). Herein we propose an algorithm based on a variation of ICA, where the signal of interest is extracted using a time delay obtained from an autocorrelation analysis. We model the system using autoregression, and identify the signal component of interest from the poles of the autocorrelation function. We show that the method is effective in removing the maternal signal, and is computationally efficient. We also compare our results to more established ICA methods, such as FastICA.

  12. Constrained independent component analysis approach to nonobtrusive pulse rate measurements

    Science.gov (United States)

    Tsouri, Gill R.; Kyal, Survi; Dianat, Sohail; Mestha, Lalit K.

    2012-07-01

    Nonobtrusive pulse rate measurement using a webcam is considered. We demonstrate how state-of-the-art algorithms based on independent component analysis suffer from a sorting problem that hinders their performance, and propose a novel algorithm based on constrained independent component analysis to improve performance. We show how the proposed algorithm extracts a photoplethysmography signal and resolves the sorting problem. In addition, we perform a comparative study between the proposed algorithm and state-of-the-art algorithms over 45 video streams, using a finger probe oximeter for reference measurements. The proposed algorithm provides improved accuracy: the root mean square error is decreased from 20.6 and 9.5 beats per minute (bpm) for existing algorithms to 3.5 bpm for the proposed algorithm. An error of 3.5 bpm is within the inaccuracy expected from the reference measurements, implying that the proposed algorithm matched the accuracy of the finger probe oximeter.
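
    As a rough sketch of the problem setting, the following runs plain ICA on mean RGB traces and resolves the sorting problem naively by picking the component with the strongest spectral peak in a plausible heart-rate band; the paper's constrained ICA is more principled, and all data here are hypothetical.

    import numpy as np
    from sklearn.decomposition import FastICA

    fs = 30.0                                     # hypothetical webcam frame rate (Hz)
    rgb = np.random.randn(900, 3)                 # 30 s of mean R, G, B over a face region

    sources = FastICA(n_components=3, random_state=0).fit_transform(rgb)

    freqs = np.fft.rfftfreq(sources.shape[0], d=1 / fs)
    power = np.abs(np.fft.rfft(sources, axis=0)) ** 2
    band = (freqs >= 0.75) & (freqs <= 4.0)       # 45-240 bpm

    best = int(np.argmax(power[band].max(axis=0)))        # component with strongest peak
    pulse_hz = freqs[band][np.argmax(power[band][:, best])]
    print(f"estimated pulse rate: {60 * pulse_hz:.1f} bpm")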

  13. Applications of the TVO piping and component analysis and monitoring system (PAMS)

    Energy Technology Data Exchange (ETDEWEB)

    Smeekes, P. (Teollisuuden Voima Oy, Olkiluoto (Finland)); Kuuluvainen, O. (Rostedt Oy, Luvia (Finland)); Torkkeli, E. (FEMdata Oy, Haukilahti (Finland))

    2010-05-15

    The amount of data that must be managed to make fitness, safety and lifetime related assessments for piping and components keeps growing. At the same time it is essential that the data are reliable, up to date, well traceable, and quick and easy to obtain. At present the main focus of PAMS is still on piping, but the component-related databases and applications will be developed further in the future. This paper presents a piping and component database system, consisting of separate geometry, material, loading, result and document databases, as well as current and future applications of the system. By means of a user-configurable interface program, the user can generate input files, run application programs and define which data to write back into the result database. The data in the result database can subsequently be used in new input files to post-process previous results, for instance in fatigue analysis, crack growth analysis or RI-ISI. The system is intended to facilitate the analysis of piping and components and to generate well-documented appendices comprising significant parts of the input and output and the associated source references. (orig.)
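
    As a purely hypothetical sketch of the loop such a system automates (every table, file format and command below is invented), the following pulls records from a piping database, writes an input file per item, and stores a stand-in result back for later post-processing.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE piping (id, geometry, material, loading)")
    db.execute("CREATE TABLE results (id, max_stress)")
    db.execute("INSERT INTO piping VALUES (1, 'DN100 bend', 'SA-312', 'thermal case 1')")

    for pipe_id, geometry, material, loading in db.execute(
            "SELECT id, geometry, material, loading FROM piping").fetchall():
        with open(f"run_{pipe_id}.inp", "w") as f:    # generated analysis input file
            f.write(f"{geometry}\n{material}\n{loading}\n")
        # ... run the analysis program on run_<id>.inp and parse its output ...
        max_stress = 123.4                            # stand-in for a parsed result
        db.execute("INSERT INTO results VALUES (?, ?)", (pipe_id, max_stress))
    db.commit()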

  14. Principal component analysis of dynamic fluorescence images for diagnosis of diabetic vasculopathy

    Science.gov (United States)

    Seo, Jihye; An, Yuri; Lee, Jungsul; Ku, Taeyun; Kang, Yujung; Ahn, Chulwoo; Choi, Chulhee

    2016-04-01

    Indocyanine green (ICG) fluorescence imaging has been clinically used for noninvasive visualizations of vascular structures. We have previously developed a diagnostic system based on dynamic ICG fluorescence imaging for sensitive detection of vascular disorders. However, because high-dimensional raw data were used, the analysis of the ICG dynamics proved difficult. We used principal component analysis (PCA) in this study to extract important elements without significant loss of information. We examined ICG spatiotemporal profiles and identified critical features related to vascular disorders. PCA time courses of the first three components showed a distinct pattern in diabetic patients. Among the major components, the second principal component (PC2) represented arterial-like features. The explained variance of PC2 in diabetic patients was significantly lower than in normal controls. To visualize the spatial pattern of PCs, pixels were mapped with red, green, and blue channels. The PC2 score showed an inverse pattern between normal controls and diabetic patients. We propose that PC2 can be used as a representative bioimaging marker for the screening of vascular diseases. It may also be useful in simple extractions of arterial-like features.
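
    As a hedged sketch of the analysis described here, the following treats each pixel's intensity time course as an observation, keeps the first three principal components, and stacks their score maps as RGB channels; the data, dimensions and names are hypothetical.

    import numpy as np
    from sklearn.decomposition import PCA

    frames = np.random.rand(200, 128, 128)        # hypothetical stack: time x height x width
    X = frames.reshape(frames.shape[0], -1).T     # pixels as observations, time as variables

    pca = PCA(n_components=3)
    scores = pca.fit_transform(X)                 # per-pixel PC scores

    # Normalize each score map to [0, 1] and stack as an RGB image.
    maps = scores.reshape(128, 128, 3)
    maps = (maps - maps.min(axis=(0, 1))) / np.ptp(maps, axis=(0, 1))
    explained = pca.explained_variance_ratio_     # e.g. compare PC2 across subject groups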

  15. Repair process and a repaired component

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, III, Herbert Chidsey; Simpson, Stanley F.

    2018-02-20

    Matrix composite component repair processes are disclosed. The matrix composite repair process includes applying a repair material to a matrix composite component, securing the repair material to the matrix composite component with an external securing mechanism, and curing the repair material to bond it to the matrix composite component while it is secured by the external securing mechanism. The matrix composite component is selected from the group consisting of a ceramic matrix composite, a polymer matrix composite, and a metal matrix composite. In another embodiment, the repair process includes applying a partially-cured repair material to a matrix composite component and curing the repair material to bond it to the matrix composite component, with an external securing mechanism securing the repair material throughout the curing period. In another embodiment, the external securing mechanism is consumed or decomposed during the repair process.

  16. Nonlinear principal component analysis and its applications

    CERN Document Server

    Mori, Yuichi; Makino, Naomichi

    2016-01-01

    This book expounds the principle and related applications of nonlinear principal component analysis (PCA), a useful method for analyzing data with mixed measurement levels. In the part dealing with the principle, after a brief introduction to ordinary PCA, a PCA for categorical data (nominal and ordinal) is introduced as nonlinear PCA, in which an optimal scaling technique is used to quantify the categorical variables. Alternating least squares (ALS) is the main algorithm in the method. Multiple correspondence analysis (MCA), a special case of nonlinear PCA, is also introduced. All formulations in these methods are integrated in the same manner as matrix operations. Because data of any measurement level can be treated consistently as numerical data, and ALS is a very powerful tool for estimation, the methods can be utilized in a variety of fields such as biometrics, econometrics, psychometrics, and sociology. In the applications part of the book, four applications are introduced: variable selection for mixed...
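
    As a toy illustration of ALS-based optimal scaling (a bare-bones, one-dimensional homogeneity analysis, not the book's full algorithm), the following sketch alternates between quantifying categories by mean object scores and re-estimating the scores; the data are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.integers(0, 3, size=(100, 5))      # 100 objects, 5 nominal variables

    x = rng.standard_normal(100)                  # initial object scores
    for _ in range(50):
        # Optimal scaling step: quantify each category by the mean score
        # of the objects that fall into it.
        quantified = np.empty_like(data, dtype=float)
        for j in range(data.shape[1]):
            for c in np.unique(data[:, j]):
                mask = data[:, j] == c
                quantified[mask, j] = x[mask].mean()
        # Model step: object scores are the normalized row means.
        x = quantified.mean(axis=1)
        x = (x - x.mean()) / x.std()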

  17. Analysis of concrete beams using applied element method

    Science.gov (United States)

    Lincy Christy, D.; Madhavan Pillai, T. M.; Nagarajan, Praveen

    2018-03-01

    The Applied Element Method (AEM) is a displacement-based method of structural analysis. Some of its features are similar to those of the Finite Element Method (FEM). As in FEM, the structure is analysed by dividing it into several elements; in AEM, however, the elements are connected by springs rather than at nodes. In this paper, the background to AEM is discussed and the necessary equations are derived. To illustrate the application of AEM, it is used to analyse a plain concrete beam with fixed supports. The analysis is limited to 2-dimensional structures. It was found that the number of springs has little influence on the results. AEM could predict deflections and reactions with a reasonable degree of accuracy.
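
    As a rough, hedged illustration of the element-and-spring idea (not the full AEM formulation, which uses pairs of normal and shear springs and rotational degrees of freedom), the following one-dimensional toy connects rigid elements by normal springs and recovers the exact axial solution; all numbers are hypothetical.

    import numpy as np

    E, A, L, P = 30e9, 0.01, 1.0, 1e4   # concrete-like modulus, area, length, end load
    n = 20                              # number of rigid elements
    d = L / n                           # element size
    k = E * A / d                       # interface (normal) spring stiffness

    # Assemble the spring stiffness matrix over element-center DOFs.
    K = np.zeros((n, n))
    for i in range(n):                  # interface i joins element i-1 (or the wall) to element i
        left, right = i - 1, i
        K[right, right] += k
        if left >= 0:
            K[left, left] += k
            K[left, right] -= k
            K[right, left] -= k

    f = np.zeros(n)
    f[-1] = P                           # axial load on the free end
    u = np.linalg.solve(K, f)
    print(u[-1], P * L / (E * A))       # end-element displacement vs. continuum value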

  18. A review of the reliability analysis of LPRS including the components repairs

    International Nuclear Information System (INIS)

    Oliveira, L.F.S. de; Fleming, P.V.; Frutuoso e Melo, P.F.F.; Tayt-Sohn, L.C.

    1983-01-01

    The reliability analysis of the low pressure recirculation system in its long-term recirculation phase before 24 h is presented. The possibility of repairing components outside the containment is included. A general review of the analysis of the short-term recirculation phase is also given. (author) [pt
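
    As a minimal illustration of how component repair enters such a reliability model, the sketch below evaluates the steady-state availability of a single repairable component with constant failure and repair rates; the rates are hypothetical.

    lam = 1e-4                          # hypothetical failure rate (per hour)
    mu = 1.0 / 24.0                     # repair rate (per hour), i.e. mean repair time 24 h

    availability = mu / (lam + mu)
    unavailability = lam / (lam + mu)   # ~ lam * MTTR when lam * MTTR << 1
    print(availability, unavailability)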

  19. International publication trends in the Journal of Applied Behavior Analysis: 2000-2014.

    Science.gov (United States)

    Martin, Neil T; Nosik, Melissa R; Carr, James E

    2016-06-01

    Dymond, Clarke, Dunlap, and Steiner's (2000) analysis of international publication trends in the Journal of Applied Behavior Analysis (JABA) from 1970 to 1999 revealed low numbers of publications from outside North America, leading the authors to express concern about the lack of international involvement in applied behavior analysis. They suggested that a future review would be necessary to evaluate any changes in international authorship in the journal. As a follow-up, we analyzed non-U.S. publication trends in the most recent 15 years of JABA and found similar results. We discuss potential reasons for the relative paucity of international authors and suggest potential strategies for increasing non-U.S. contributions to the advancement of behavior analysis. © 2015 Society for the Experimental Analysis of Behavior.

  20. Generalized modeling of multi-component vaporization/condensation phenomena for multi-phase-flow analysis

    International Nuclear Information System (INIS)

    Morita, K.; Fukuda, K.; Tobita, Y.; Kondo, Sa.; Suzuki, T.; Maschek, W.

    2003-01-01

    A new multi-component vaporization/condensation (V/C) model was developed to provide a generalized model for safety analysis codes of liquid metal cooled reactors (LMRs). These codes simulate thermal-hydraulic phenomena of multi-phase, multi-component flows, which is essential for investigating core disruptive accidents of LMRs such as fast breeder reactors and accelerator driven systems. The developed model characterizes the V/C processes associated with phase transition by employing heat-transfer and mass-diffusion limited models for analyses of relatively short-time-scale multi-phase, multi-component hydraulic problems, in which vaporization and condensation, or simultaneous heat and mass transfer, play an important role. The heat-transfer limited model describes the non-equilibrium phase transition processes occurring at interfaces, while the mass-diffusion limited model represents the effects of non-condensable gases and multi-component mixtures on V/C processes. The model and method, as implemented in the multi-component V/C model of a multi-phase flow code, were verified successfully against a series of multi-bubble condensation experiments. The applicability of the model to the accident analysis of LMRs is also discussed by comparing steam and metallic-vapor systems. (orig.)
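
    As a hedged illustration of the two rate limits named above, the following sketch evaluates a heat-transfer-limited and a film-theory mass-diffusion-limited condensation flux at an interface; the coefficients and property values are invented and are not the model's actual closure laws.

    from math import log

    h_fg = 2.26e6                  # latent heat (J/kg), water-like
    h = 1.0e4                      # interfacial heat transfer coefficient (W/m^2/K)
    T_sat, T_liq = 373.0, 353.0    # saturation and bulk liquid temperatures (K)

    # Heat-transfer limited: rate at which latent heat can be conducted away.
    m_heat = h * (T_sat - T_liq) / h_fg              # kg/m^2/s

    k_m = 0.05                     # gas-side mass transfer coefficient (kg/m^2/s)
    y_bulk, y_iface = 0.60, 0.05   # vapor mass fraction in bulk gas and at the interface

    # Diffusion limited: film-theory flux through the noncondensable gas layer.
    m_diff = k_m * log((1.0 - y_iface) / (1.0 - y_bulk))

    # The effective condensation rate is governed by the slower mechanism.
    m_eff = min(m_heat, m_diff)
    print(m_heat, m_diff, m_eff)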