WorldWideScience

Sample records for principal component-based radiative

  1. Multiple Scattering Principal Component-based Radiative Transfer Model (PCRTM) from Far IR to UV-Vis

    Science.gov (United States)

    Liu, X.; Wu, W.; Yang, Q.

    2017-12-01

    Modern hyperspectral satellite remote sensors such as AIRS, CrIS, IASI, and CLARREO all require accurate and fast radiative transfer models that can deal with multiple scattering by clouds and aerosols in order to exploit their information content. However, performing full radiative transfer calculations with multiple-stream methods such as discrete ordinate (DISORT), doubling and adding (AD), or successive order of scattering (SOS) is very time consuming. We have developed a principal component-based radiative transfer model (PCRTM) that reduces the computational burden by orders of magnitude while maintaining high accuracy. By exploiting spectral correlations, the PCRTM reduces the number of radiative transfer calculations in the frequency domain. It further uses a hybrid stream method to decrease the number of calls to the computationally expensive multiple-scattering calculations with high stream numbers. Other fast parameterizations have been used in the infrared spectral region to reduce the computational time to milliseconds for an AIRS forward simulation (2378 spectral channels). The PCRTM has been developed to cover the spectral range from the far IR to the UV-Vis. The PCRTM has been used for satellite data inversions, proxy data generation, inter-satellite calibrations, spectral fingerprinting, and climate OSSEs. We will show examples of applying the PCRTM to single-field-of-view cloudy retrievals of atmospheric temperature, moisture, trace gases, clouds, and surface parameters. We will also show how the PCRTM is used for the NASA CLARREO project.
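
    The core of the PCRTM speed-up is that hyperspectral channel radiances are highly correlated, so the radiative transfer only has to supply a handful of principal-component scores from which the full spectrum is reconstructed. The sketch below illustrates that compression step only (not the PCRTM itself), using synthetic spectra and scikit-learn's PCA; all sizes and variable names are illustrative assumptions.

      import numpy as np
      from sklearn.decomposition import PCA

      # Hypothetical training set of simulated radiance spectra: rows are
      # atmospheric states, columns are spectral channels (AIRS-like count).
      rng = np.random.default_rng(0)
      n_states, n_channels = 500, 2378
      smooth = rng.standard_normal((n_states, 20)) @ rng.standard_normal((20, n_channels))
      training_spectra = smooth + 0.01 * rng.standard_normal((n_states, n_channels))

      # A small number of principal components captures most of the
      # channel-to-channel correlation in the spectra.
      pca = PCA(n_components=20).fit(training_spectra)
      print("variance explained:", pca.explained_variance_ratio_.sum())

      # A fast model predicts only the PC scores for a new atmospheric state
      # (from a reduced set of monochromatic RT calculations) and then
      # reconstructs the full spectrum:
      scores = pca.transform(training_spectra[:1])      # stand-in for predicted scores
      full_spectrum = pca.inverse_transform(scores)     # all 2378 channels recovered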

  2. Modeling the variability of solar radiation data among weather stations by means of principal components analysis

    International Nuclear Information System (INIS)

    Zarzo, Manuel; Marti, Pau

    2011-01-01

    Research highlights: → Principal components analysis was applied to Rs data recorded at 30 stations. → Four principal components explain 97% of the data variability. → The latent variables can be fitted according to latitude, longitude and altitude. → The PCA approach is more effective for gap infilling than conventional approaches. → The proposed method allows daily Rs estimations at locations in the area of study. - Abstract: Measurements of global terrestrial solar radiation (Rs) are commonly recorded in meteorological stations. Daily variability of Rs has to be taken into account for the design of photovoltaic systems and energy-efficient buildings. Principal components analysis (PCA) was applied to Rs data recorded at 30 stations on the Mediterranean coast of Spain. Due to equipment failures and site operation problems, time series of Rs often present data gaps or discontinuities. The PCA approach copes with this problem and allows estimation of present and past values by taking advantage of Rs records from nearby stations. The gap-infilling performance of this methodology is compared with neural networks and alternative conventional approaches. Four principal components explain 66% of the data variability with respect to the average trajectory (97% if non-centered values are considered). A new method based on principal components regression was also developed for Rs estimation when previous measurements are not available. By means of multiple linear regression, it was found that the latent variables associated with the four relevant principal components can be fitted according to the latitude, longitude and altitude of the station where the data were recorded. Additional geographical or climatic variables did not increase the predictive goodness-of-fit. The resulting models allow the estimation of daily Rs values at any location in the area under study and present higher accuracy than artificial neural networks and some conventional approaches.
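
    A minimal sketch of the gap-infilling idea described above (not the authors' exact procedure): represent the days-by-stations radiation matrix with a few principal components and iteratively replace missing entries with the low-rank reconstruction, so each gap borrows information from correlated nearby stations. The data layout, component count and iteration count are assumptions.

      import numpy as np
      from sklearn.decomposition import PCA

      def pca_gap_fill(R, n_components=4, n_iter=50):
          # R: days x stations matrix of daily solar radiation with gaps as NaN.
          R = R.copy()
          gaps = np.isnan(R)
          station_means = np.nanmean(R, axis=0)
          R[gaps] = np.take(station_means, np.nonzero(gaps)[1])   # initial guess
          for _ in range(n_iter):
              pca = PCA(n_components=n_components).fit(R)
              R_hat = pca.inverse_transform(pca.transform(R))
              R[gaps] = R_hat[gaps]                               # update gaps only
          return R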

  3. Linearization of the Principal Component Analysis method for radiative transfer acceleration: Application to retrieval algorithms and sensitivity studies

    International Nuclear Information System (INIS)

    Spurr, R.; Natraj, V.; Lerot, C.; Van Roozendael, M.; Loyola, D.

    2013-01-01

    Principal Component Analysis (PCA) is a promising tool for enhancing radiative transfer (RT) performance. When applied to binned optical property data sets, PCA exploits redundancy in the optical data, and restricts the number of full multiple-scatter calculations to those optical states corresponding to the most important principal components, while still maintaining high accuracy in the radiance approximations. We show that the entire PCA RT enhancement process is analytically differentiable with respect to any atmospheric or surface parameter, thus allowing for accurate and fast approximations of Jacobian matrices, in addition to radiances. This linearization greatly extends the power and scope of the PCA method to many remote sensing retrieval applications and sensitivity studies. In the first example, we examine accuracy for PCA-derived UV-backscatter radiance and Jacobian fields over a 290–340 nm window. In a second application, we show that performance for UV-based total ozone column retrieval is considerably improved without compromising the accuracy. -- Highlights: •Principal Component Analysis (PCA) of spectrally-binned atmospheric optical properties. •PCA-based accelerated radiative transfer with 2-stream model for fast multiple-scatter. •Atmospheric and surface property linearization of this PCA performance enhancement. •Accuracy of PCA enhancement for radiances and bulk-property Jacobians, 290–340 nm. •Application of PCA speed enhancement to UV backscatter total ozone retrievals.

  4. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    Science.gov (United States)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems in the static as well as the real-time setting, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach to face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. Face recognition means identifying a person from facial features, and it resembles factor analysis in some sense, i.e. the extraction of the principal components of an image. Principal component analysis suffers from some drawbacks, mainly poor discriminatory power and, in particular, the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images in both the spatial and the frequency domain. The experimental results indicate that this face recognition method achieves a significant improvement in recognition rate as well as better computational efficiency.
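
    As a hedged illustration of the combination described (wavelet decomposition for feature extraction, then PCA for representation and classification), the following sketch uses PyWavelets and scikit-learn with placeholder data; it is not the authors' MATLAB implementation, and the image sizes and component count are assumptions.

      import numpy as np
      import pywt
      from sklearn.decomposition import PCA
      from sklearn.neighbors import KNeighborsClassifier

      def wavelet_features(img):
          # Single-level 2-D Haar decomposition; keep the low-frequency
          # approximation sub-band, which halves each image dimension.
          cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
          return cA.ravel()

      # Hypothetical stand-in for a face database: 40 grayscale images, 10 people.
      rng = np.random.default_rng(0)
      faces = rng.random((40, 64, 64))
      labels = np.repeat(np.arange(10), 4)

      X = np.array([wavelet_features(f) for f in faces])
      pca = PCA(n_components=20).fit(X)                 # eigenface-style projection
      Z = pca.transform(X)
      classifier = KNeighborsClassifier(n_neighbors=1).fit(Z, labels)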

  5. Aeromagnetic Compensation Algorithm Based on Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Peilin Wu

    2018-01-01

    Aeromagnetic exploration is an important exploration method in geophysics. The data are typically measured by an optically pumped magnetometer mounted on an aircraft. However, any aircraft produces significant levels of magnetic interference; therefore, aeromagnetic compensation is important in aeromagnetic exploration. Multicollinearity of the aeromagnetic compensation model degrades the performance of the compensation. To address this issue, a novel aeromagnetic compensation method based on principal component analysis is proposed. Using the algorithm, the correlation in the feature matrix is eliminated and the principal components are used to construct the hyperplane that compensates the platform-generated magnetic fields. The algorithm was tested using a helicopter, and the obtained improvement ratio is 9.86. The compensation quality is almost the same as, or slightly better than, that of ridge regression. The validity of the proposed method was experimentally demonstrated.

  6. Optimal pattern synthesis for speech recognition based on principal component analysis

    Science.gov (United States)

    Korsun, O. N.; Poliyev, A. V.

    2018-02-01

    This work develops and presents an algorithm for building an optimal pattern for automatic speech recognition that increases the probability of correct recognition. The optimal pattern is formed by decomposing an initial pattern into principal components, which reduces the dimension of the multi-parameter optimization problem. In the next step, training samples are introduced and optimal estimates of the principal component decomposition coefficients are obtained by a numerical parameter optimization algorithm. Finally, we consider experimental results that show the improvement in speech recognition achieved by the proposed optimization algorithm.

  7. Fault Diagnosis Method Based on Information Entropy and Relative Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Xiaoming Xu

    2017-01-01

    In traditional principal component analysis (PCA), because the influence of the different variables' dimensions (units of measurement) is neglected, the selected principal components (PCs) often fail to be representative. While relative-transformation PCA is able to solve this problem, it is not easy to calculate the weight for each characteristic variable. To address this, the paper proposes a fault diagnosis method based on information entropy and relative principal component analysis. First, the algorithm calculates the information entropy of each characteristic variable in the original dataset based on the information gain algorithm. Second, it standardizes every variable's dimension in the dataset. Then, according to the information entropy, it allocates a weight to each standardized characteristic variable. Finally, it utilizes the established relative-principal-components model for fault diagnosis. Simulation experiments based on the Tennessee Eastman process and Wine datasets demonstrate the feasibility and effectiveness of the new method.

  8. Incremental Tensor Principal Component Analysis for Handwritten Digit Recognition

    Directory of Open Access Journals (Sweden)

    Chang Liu

    2014-01-01

    To overcome the shortcomings of traditional dimensionality reduction algorithms, an incremental tensor principal component analysis (ITPCA) algorithm based on an updated-SVD technique is proposed in this paper. The paper proves the relationship between PCA, 2DPCA, MPCA, and the graph embedding framework theoretically and derives the incremental learning procedures for adding a single sample and multiple samples in detail. Experiments on handwritten digit recognition have demonstrated that ITPCA achieves better recognition performance than vector-based principal component analysis (PCA), incremental principal component analysis (IPCA), and multilinear principal component analysis (MPCA). At the same time, ITPCA also has lower time and space complexity.
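
    The incremental-update idea can be illustrated, in a much simplified vector-based form, with scikit-learn's IncrementalPCA; the tensor formulation and updated-SVD details of ITPCA are not reproduced here, and the digit data below is a placeholder.

      import numpy as np
      from sklearn.decomposition import IncrementalPCA

      rng = np.random.default_rng(0)
      ipca = IncrementalPCA(n_components=30)

      # Batches of vectorized 28x28 digit images arriving over time.
      for _ in range(10):
          batch = rng.random((200, 28 * 28))
          ipca.partial_fit(batch)     # update the subspace without refitting from scratch

      features = ipca.transform(rng.random((5, 28 * 28)))   # inputs for a classifier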

  9. Use of Sparse Principal Component Analysis (SPCA) for Fault Detection

    DEFF Research Database (Denmark)

    Gajjar, Shriram; Kulahci, Murat; Palazoglu, Ahmet

    2016-01-01

    Principal component analysis (PCA) has been widely used for data dimension reduction and process fault detection. However, interpreting the principal components and the outcomes of PCA-based monitoring techniques is a challenging task since each principal component is a linear combination of the ...

  10. EEG-Based Emotion Recognition Using Deep Learning Network with Principal Component Based Covariate Shift Adaptation

    Directory of Open Access Journals (Sweden)

    Suwicha Jirayucharoensak

    2014-01-01

    Automatic emotion recognition is one of the most challenging tasks. To detect emotion from nonstationary EEG signals, a sophisticated learning algorithm that can represent high-level abstractions is required. This study proposes the use of a deep learning network (DLN) to discover unknown feature correlations between input signals that are crucial for the learning task. The DLN is implemented with a stacked autoencoder (SAE) using a hierarchical feature learning approach. The input features of the network are the power spectral densities of 32-channel EEG signals from 32 subjects. To alleviate the overfitting problem, principal component analysis (PCA) is applied to extract the most important components of the initial input features. Furthermore, covariate shift adaptation of the principal components is implemented to minimize the nonstationary effect of the EEG signals. Experimental results show that the DLN is capable of classifying three different levels of valence and arousal with accuracies of 49.52% and 46.03%, respectively. Principal component based covariate shift adaptation enhances the respective classification accuracies by 5.55% and 6.53%. Moreover, the DLN provides better performance than SVM and naive Bayes classifiers.
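
    The two preprocessing steps mentioned above (PCA of the power-spectral-density features and covariate shift adaptation of the principal-component scores) could be sketched roughly as below; the running-mean correction is only a simple stand-in for the adaptation step, the deep learning network itself is omitted, and the shapes and window length are assumptions.

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      # Hypothetical trials x features matrix of band powers from 32 EEG channels.
      psd_features = rng.random((1200, 32 * 5))

      pca = PCA(n_components=50).fit(psd_features)
      scores = pca.transform(psd_features)

      # Crude covariate shift adaptation: remove a running mean of the PC scores
      # so that slow nonstationary drifts do not reach the classifier.
      window = 100
      running_mean = np.vstack(
          [scores[max(0, t - window):t + 1].mean(axis=0) for t in range(len(scores))]
      )
      adapted_scores = scores - running_mean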

  11. Principal component regression analysis with SPSS.

    Science.gov (United States)

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces the indices used for multicollinearity diagnosis, the basic principle of principal component regression, and the method for determining the 'best' equation. The paper uses an example to describe how to carry out principal component regression analysis with SPSS 10.0, including all calculation steps of the principal component regression and all operations of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity. A simplified, faster and accurate statistical analysis is achieved through principal component regression with SPSS.
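
    The record describes principal component regression carried out in SPSS; a rough Python analogue of the same workflow (standardize the predictors, extract a few components, regress on them) is sketched below with made-up data, purely to show how PCR sidesteps multicollinearity.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(0)
      X = rng.random((100, 6))
      X[:, 5] = X[:, 0] + 0.01 * rng.standard_normal(100)   # deliberately collinear column
      y = X[:, :3].sum(axis=1) + 0.1 * rng.standard_normal(100)

      # Regressing on a few principal components avoids the unstable coefficients
      # that ordinary least squares would produce with collinear predictors.
      pcr = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())
      pcr.fit(X, y)
      print("R^2 on training data:", pcr.score(X, y))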

  12. Principal Component Analysis Based Measure of Structural Holes

    Science.gov (United States)

    Deng, Shiguo; Zhang, Wenqing; Yang, Huijie

    2013-02-01

    Based upon principal component analysis, a new measure called the compressibility coefficient is proposed to evaluate structural holes in networks. This measure incorporates a new effect arising from identical patterns in networks. It is found that the compressibility coefficient for Watts-Strogatz small-world networks increases monotonically with the rewiring probability and saturates to that of the corresponding shuffled networks, whereas the compressibility coefficient for extended Barabasi-Albert scale-free networks decreases monotonically with the preferential effect and is significantly large compared with that of the corresponding shuffled networks. This measure is helpful in diverse research fields for evaluating the global efficiency of networks.

  13. Priority of VHS Development Based in Potential Area using Principal Component Analysis

    Science.gov (United States)

    Meirawan, D.; Ana, A.; Saripudin, S.

    2018-02-01

    The current condition of VHS is still inadequate in quality, quantity and relevance. The purpose of this research is to analyse the development of VHS based on the development of regional potential using principal component analysis (PCA) in Bandung, Indonesia. This study used descriptive qualitative data analysis based on the principle of secondary data reduction into components. The method used is PCA, carried out with the Minitab statistics software. The results indicate that the areas with the lowest scores are the priority for VHS construction, with study programs matched to the development of the regional potential. Based on the PCA scores, the main priorities for VHS development in Bandung are Saguling, which has the lowest PCA value of 416.92 in area 1, Cihampelas, with the lowest PCA value in region 2, and Padalarang, with the lowest PCA value.

  14. Efficient real time OD matrix estimation based on principal component analysis

    NARCIS (Netherlands)

    Djukic, T.; Flötteröd, G.; Van Lint, H.; Hoogendoorn, S.P.

    2012-01-01

    In this paper we explore the idea of dimensionality reduction and approximation of OD demand based on principal component analysis (PCA). First, we show how we can apply PCA to linearly transform the high dimensional OD matrices into the lower dimensional space without significant loss of accuracy.

  15. Principal component analysis of tomato genotypes based on some morphological and biochemical quality indicators

    Directory of Open Access Journals (Sweden)

    Glogovac Svetlana

    2012-01-01

    This study investigates the variability of tomato genotypes based on morphological and biochemical fruit traits. The experimental material is part of the tomato genetic collection of the Institute of Field and Vegetable Crops in Novi Sad, Serbia. Genotypes were analyzed for fruit mass, locule number, fruit shape index, fruit colour, dry matter content, total sugars, total acidity, lycopene and vitamin C. Minimum, maximum and average values and the main indicators of variability (CV and σ) were calculated. Principal component analysis was performed to determine the structure of the sources of variability. Four principal components, which contribute 93.75% of the total variability, were selected for analysis. The first principal component is defined by vitamin C, locule number and fruit shape index. The second component is determined by dry matter content and total acidity, the third by lycopene, fruit mass and fruit colour. Total sugars had the greatest share in the fourth component.

  16. Multiscale principal component analysis

    International Nuclear Information System (INIS)

    Akinduko, A A; Gorban, A N

    2014-01-01

    Principal component analysis (PCA) is an important tool in exploring data. The conventional approach to PCA leads to a solution which favours structures with large variances. This is sensitive to outliers and could obfuscate interesting underlying structures. One of the equivalent definitions of PCA is that it seeks the subspaces that maximize the sum of squared pairwise distances between data projections. This definition opens up more flexibility in the analysis of principal components, which is useful in enhancing PCA. In this paper we introduce scales into PCA by maximizing only the sum of pairwise distances between projections for pairs of datapoints with distances within a chosen interval of values [l,u]. The resulting principal component decompositions in Multiscale PCA depend on the point (l,u) in the plane, and for each point we define projectors onto principal components. Cluster analysis of these projectors reveals the structures in the data at various scales. Each structure is described by the eigenvectors at the medoid point of the cluster that represents the structure. We also use the distortion of projections as a criterion for choosing an appropriate scale, especially for data with outliers. This method was tested on both artificial data distributions and real data. For data with multiscale structures, the method was able to reveal the different structures in the data and also to reduce the effect of outliers in the principal component analysis.

  17. Airborne electromagnetic data levelling using principal component analysis based on flight line difference

    Science.gov (United States)

    Zhang, Qiong; Peng, Cong; Lu, Yiming; Wang, Hao; Zhu, Kaiguang

    2018-04-01

    A novel technique is developed to level airborne geophysical data using principal component analysis based on flight-line differences. In this paper, flight-line differencing is introduced to enhance the features of the levelling error in airborne electromagnetic (AEM) data and to improve the correlation between pseudo tie lines. We therefore apply levelling to the flight-line difference data rather than directly to the original AEM data. Pseudo tie lines are selected in a distributed manner across the profile direction, avoiding anomalous regions. Since the levelling errors of the selected pseudo tie lines show high correlations, principal component analysis is applied to extract the local levelling errors by reconstruction from the low-order principal components. Furthermore, the levelling errors of the original AEM data are obtained through inverse differencing after spatial interpolation. This levelling method does not require flying tie lines or designing a levelling fitting function. The effectiveness of the method is demonstrated by the levelling results on survey data, compared with the results from tie-line levelling and flight-line correlation levelling.

  18. Principal components

    NARCIS (Netherlands)

    Hallin, M.; Hörmann, S.; Piegorsch, W.; El Shaarawi, A.

    2012-01-01

    Principal Components are probably the best known and most widely used of all multivariate analysis techniques. The essential idea consists in performing a linear transformation of the observed k-dimensional variables in such a way that the new variables are vectors of k mutually orthogonal

  19. Power Transformer Differential Protection Based on Neural Network Principal Component Analysis, Harmonic Restraint and Park's Plots

    OpenAIRE

    Tripathy, Manoj

    2012-01-01

    This paper describes a new approach for power transformer differential protection which is based on the wave-shape recognition technique. An algorithm based on neural network principal component analysis (NNPCA) with back-propagation learning is proposed for digital differential protection of power transformer. The principal component analysis is used to preprocess the data from power system in order to eliminate redundant information and enhance hidden pattern of differential current to disc...

  20. Research on Air Quality Evaluation based on Principal Component Analysis

    Science.gov (United States)

    Wang, Xing; Wang, Zilin; Guo, Min; Chen, Wei; Zhang, Huan

    2018-01-01

    Economic growth has led to a decline in environmental capacity and a deterioration of air quality. Air quality evaluation, as a foundation of environmental monitoring and air pollution control, has become increasingly important. Based on principal component analysis (PCA), this paper evaluates the air quality of a large city in the Beijing-Tianjin-Hebei area over the past 10 years and identifies the influencing factors, in order to provide a reference for air quality management and air pollution control.

  1. Iris recognition based on robust principal component analysis

    Science.gov (United States)

    Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong

    2014-11-01

    Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieves competitive performance in both recognition accuracy and computational efficiency.

  2. Principal and secondary luminescence lifetime components in annealed natural quartz

    International Nuclear Information System (INIS)

    Chithambo, M.L.; Ogundare, F.O.; Feathers, J.

    2008-01-01

    Time-resolved luminescence spectra from quartz can be separated into components with distinct principal and secondary lifetimes depending on certain combinations of annealing and measurement temperature. The influence of annealing on properties of the lifetimes related to irradiation dose and temperature of measurement has been investigated in sedimentary quartz annealed at various temperatures up to 900 deg. C. Time-resolved luminescence for use in the analysis was pulse stimulated from samples at 470 nm between 20 and 200 deg. C. Luminescence lifetimes decrease with measurement temperature due to an increasing thermal effect on the associated luminescence, with an activation energy of thermal quenching equal to 0.68 ± 0.01 eV for the secondary lifetime but only qualitatively so for the principal lifetime component. Concerning the influence of annealing temperature, luminescence lifetimes measured at 20 deg. C are constant at about 33 μs for annealing temperatures up to 600 deg. C but decrease to about 29 μs when the annealing temperature is increased to 900 deg. C. In addition, it was found that lifetime components in samples annealed at 800 deg. C are independent of radiation dose in the range 85-1340 Gy investigated. The dependence of lifetimes on both the annealing temperature and magnitude of radiation dose is described as being due to the increasing importance of a particular recombination centre in the luminescence emission process as a result of dynamic hole transfer between non-radiative and radiative luminescence centres

  3. Multistage principal component analysis based method for abdominal ECG decomposition

    International Nuclear Information System (INIS)

    Petrolis, Robertas; Krisciukaitis, Algimantas; Gintautas, Vladas

    2015-01-01

    A reflection of fetal cardiac electrical activity is present in registered abdominal ECG signals. However, this signal component has noticeably less energy than concurrent signals, especially the maternal ECG. Therefore the traditionally recommended independent component analysis fails to separate these two ECG signals. Multistage principal component analysis (PCA) is proposed for step-by-step extraction of the abdominal ECG signal components. Truncated representation and subsequent subtraction of the cardio cycles of the maternal ECG are the first steps. The energy of the fetal ECG component then becomes comparable to or even exceeds the energy of the other components in the remaining signal. The second-stage PCA concentrates the energy of the sought signal in one principal component, assuring its maximal amplitude regardless of the orientation of the fetus in multilead recordings. The third-stage PCA is performed on signal excerpts representing detected fetal heart beats, with the aim of obtaining a truncated representation that reconstructs their shape for further analysis. The algorithm was tested with the PhysioNet Challenge 2013 signals and with signals recorded in the Department of Obstetrics and Gynecology, Lithuanian University of Health Sciences. The results of our method on the PhysioNet Challenge 2013 open data set were: average score 341.503 bpm² and 32.81 ms. (paper)

  4. Nonlinear Process Fault Diagnosis Based on Serial Principal Component Analysis.

    Science.gov (United States)

    Deng, Xiaogang; Tian, Xuemin; Chen, Sheng; Harris, Chris J

    2018-03-01

    Many industrial processes contain both linear and nonlinear parts, and kernel principal component analysis (KPCA), widely used in nonlinear process monitoring, may not offer the most effective means for dealing with these nonlinear processes. This paper proposes a new hybrid linear-nonlinear statistical modeling approach for nonlinear process monitoring by closely integrating linear principal component analysis (PCA) and nonlinear KPCA using a serial model structure, which we refer to as serial PCA (SPCA). Specifically, PCA is first applied to extract PCs as linear features, and to decompose the data into the PC subspace and residual subspace (RS). Then, KPCA is performed in the RS to extract the nonlinear PCs as nonlinear features. Two monitoring statistics are constructed for fault detection, based on both the linear and nonlinear features extracted by the proposed SPCA. To effectively perform fault identification after a fault is detected, an SPCA similarity factor method is built for fault recognition, which fuses both the linear and nonlinear features. Unlike PCA and KPCA, the proposed method takes into account both linear and nonlinear PCs simultaneously, and therefore, it can better exploit the underlying process's structure to enhance fault diagnosis performance. Two case studies involving a simulated nonlinear process and the benchmark Tennessee Eastman process demonstrate that the proposed SPCA approach is more effective than the existing state-of-the-art approach based on KPCA alone, in terms of nonlinear process fault detection and identification.
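
    A hedged sketch of the serial structure described above: linear PCA first, then kernel PCA on the residual subspace, with simple Hotelling-style statistics built from both feature sets. Control limits, the SPE statistic and the similarity-factor identification step are omitted, and all parameters below are assumptions.

      import numpy as np
      from sklearn.decomposition import PCA, KernelPCA

      rng = np.random.default_rng(0)
      X_train = rng.standard_normal((500, 10))

      # Stage 1: linear PCA gives the linear features and the residual subspace.
      pca = PCA(n_components=4).fit(X_train)
      T_lin = pca.transform(X_train)
      residuals = X_train - pca.inverse_transform(T_lin)

      # Stage 2: kernel PCA extracts nonlinear features from the residuals.
      kpca = KernelPCA(n_components=4, kernel="rbf", gamma=0.1).fit(residuals)
      T_nonlin = kpca.transform(residuals)

      # Simple T^2-style monitoring statistics from the two feature sets.
      t2_linear = np.sum(T_lin ** 2 / T_lin.var(axis=0), axis=1)
      t2_nonlinear = np.sum(T_nonlin ** 2 / T_nonlin.var(axis=0), axis=1)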

  5. A Novel Double Cluster and Principal Component Analysis-Based Optimization Method for the Orbit Design of Earth Observation Satellites

    Directory of Open Access Journals (Sweden)

    Yunfeng Dong

    2017-01-01

    The weighted sum and genetic algorithm-based hybrid method (WSGA-based HM), which has been applied to multiobjective orbit optimizations, is negatively influenced by human factors through the artificial choice of the weight coefficients in the weighted sum method and by the slow convergence of GA. To address these two problems, a cluster and principal component analysis-based optimization method (CPC-based OM) is proposed, in which many candidate orbits are gradually randomly generated until the optimal orbit is obtained using a data mining method, that is, cluster analysis based on principal components. Then, a second cluster analysis of the orbital elements is introduced into the CPC-based OM to improve convergence, yielding a novel double cluster and principal component analysis-based optimization method (DCPC-based OM). In the DCPC-based OM, the cluster analysis based on principal components has the advantage of reducing human influences, and the cluster analysis based on the six orbital elements reduces the search space to effectively accelerate convergence. The test results from a multiobjective numerical benchmark function and the orbit design results for an Earth observation satellite show that the DCPC-based OM converges more efficiently than the WSGA-based HM and, to some degree, reduces the influence of the human factors present in the WSGA-based HM.

  6. Interpretable functional principal component analysis.

    Science.gov (United States)

    Lin, Zhenhua; Wang, Liangliang; Cao, Jiguo

    2016-09-01

    Functional principal component analysis (FPCA) is a popular approach to explore major sources of variation in a sample of random curves. These major sources of variation are represented by functional principal components (FPCs). The intervals where the values of FPCs are significant are interpreted as where sample curves have major variations. However, these intervals are often hard for naïve users to identify, because of the vague definition of "significant values". In this article, we develop a novel penalty-based method to derive FPCs that are only nonzero precisely in the intervals where the values of FPCs are significant, whence the derived FPCs possess better interpretability than the FPCs derived from existing methods. To compute the proposed FPCs, we devise an efficient algorithm based on projection deflation techniques. We show that the proposed interpretable FPCs are strongly consistent and asymptotically normal under mild conditions. Simulation studies confirm that with a competitive performance in explaining variations of sample curves, the proposed FPCs are more interpretable than the traditional counterparts. This advantage is demonstrated by analyzing two real datasets, namely, electroencephalography data and Canadian weather data. © 2015, The International Biometric Society.

  7. Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo

    KAUST Repository

    Martinez, Josue G.

    2010-06-01

    The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented.

  8. Facilitating in vivo tumor localization by principal component analysis based on dynamic fluorescence molecular imaging

    Science.gov (United States)

    Gao, Yang; Chen, Maomao; Wu, Junyu; Zhou, Yuan; Cai, Chuangjian; Wang, Daliang; Luo, Jianwen

    2017-09-01

    Fluorescence molecular imaging has been used to target tumors in mice with xenograft tumors. However, tumor imaging is largely distorted by the aggregation of fluorescent probes in the liver. A principal component analysis (PCA)-based strategy was applied to the in vivo dynamic fluorescence imaging results of three mice with xenograft tumors to facilitate tumor imaging, with the help of a tumor-specific fluorescent probe. Tumor-relevant features were extracted from the original images by PCA and represented by the principal component (PC) maps. The second principal component (PC2) map represented the tumor-related features, while the first principal component (PC1) map retained the original pharmacokinetic profiles, especially of the liver. The distribution patterns of the PC2 maps of the tumor-bearing mice were in good agreement with the actual tumor locations. The tumor-to-liver ratio and contrast-to-noise ratio were significantly higher in the PC2 map than in the original images, thus distinguishing the tumor from the nearby fluorescence noise of the liver. The results suggest that the PC2 map could serve as a bioimaging marker to facilitate in vivo tumor localization, and dynamic fluorescence molecular imaging with PCA could be a valuable tool for future studies of in vivo tumor metabolism and progression.
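
    The core operation, as described, is to treat each pixel's fluorescence time course as one observation and let PCA separate the dominant temporal patterns into spatial maps. A minimal sketch with placeholder data (frame counts and image sizes are assumptions):

      import numpy as np
      from sklearn.decomposition import PCA

      # Hypothetical dynamic fluorescence sequence: time x height x width.
      frames = np.random.rand(60, 128, 128)
      n_t, h, w = frames.shape

      X = frames.reshape(n_t, h * w).T          # pixels x time
      pca = PCA(n_components=3).fit(X)
      pixel_scores = pca.transform(X)           # pixels x components

      pc1_map = pixel_scores[:, 0].reshape(h, w)   # dominant (e.g., liver) kinetics
      pc2_map = pixel_scores[:, 1].reshape(h, w)   # secondary (e.g., tumor-related) pattern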

  9. Probabilistic Principal Component Analysis for Metabolomic Data.

    LENUS (Irish Health Repository)

    Nyamundanda, Gift

    2010-11-23

    Background: Data from metabolomic studies are typically complex and high-dimensional. Principal component analysis (PCA) is currently the most widely used statistical technique for analyzing metabolomic data. However, PCA is limited by the fact that it is not based on a statistical model. Results: Here, probabilistic principal component analysis (PPCA), which addresses some of the limitations of PCA, is reviewed and extended. A novel extension of PPCA, called probabilistic principal component and covariates analysis (PPCCA), is introduced which provides a flexible approach to jointly model metabolomic data and additional covariate information. The use of a mixture of PPCA models for discovering the number of inherent groups in metabolomic data is demonstrated. The jackknife technique is employed to construct confidence intervals for estimated model parameters throughout. The optimal number of principal components is determined through the use of the Bayesian Information Criterion model selection tool, which is modified to address the high dimensionality of the data. Conclusions: The methods presented are illustrated through an application to metabolomic data sets. Jointly modeling metabolomic data and covariates was successfully achieved and has the potential to provide deeper insight into the underlying data structure. Examination of confidence intervals for the model parameters, such as loadings, allows for principled and clear interpretation of the underlying data structure. A software package called MetabolAnalyze, freely available through the R statistical software, has been developed to facilitate implementation of the presented methods in the metabolomics field.

  10. Water reuse systems: A review of the principal components

    Science.gov (United States)

    Lucchetti, G.; Gray, G.A.

    1988-01-01

    Principal components of water reuse systems include ammonia removal, disease control, temperature control, aeration, and particulate filtration. Effective ammonia removal techniques include air stripping, ion exchange, and biofiltration. Selection of a particular technique largely depends on site-specific requirements (e.g., space, existing water quality, and fish densities). Disease control, although often overlooked, is a major problem in reuse systems. Pathogens can be controlled most effectively with ultraviolet radiation, ozone, or chlorine. Simple and inexpensive methods are available to increase oxygen concentration and eliminate gas supersaturation; these include commercial aerators, air injectors, and packed columns. Temperature control is a major advantage of reuse systems, but the equipment required can be expensive, particularly if water temperature must be rigidly controlled and ambient air temperature fluctuates. Filtration can be readily accomplished with a hydrocyclone or sand filter that increases overall system efficiency. Based on criteria of adaptability, efficiency, and reasonable cost, we recommend components for a small water reuse system.

  11. Fault Localization for Synchrophasor Data using Kernel Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    CHEN, R.

    2017-11-01

    In this paper, a nonlinear method based on kernel principal component analysis (KPCA) of phasor measurement unit (PMU) data is proposed for fault location in complex power systems. Resorting to the scaling factor, the derivative for a polynomial kernel is obtained. Then, the contribution of each variable to the T² statistic is derived to determine whether a bus is the fault component. Compared to previous principal component analysis (PCA) based methods, the novel version can cope with strong nonlinearity and provide precise identification of the fault location. Computer simulations are conducted to demonstrate the improved performance of the proposed method in recognizing the fault component and evaluating its propagation across the system.

  12. INDIA’S ELECTRICITY DEMAND FORECAST USING REGRESSION ANALYSIS AND ARTIFICIAL NEURAL NETWORKS BASED ON PRINCIPAL COMPONENTS

    Directory of Open Access Journals (Sweden)

    S. Saravanan

    2012-07-01

    Power system planning starts with electric load (demand) forecasting. Accurate electricity load forecasting is one of the most important challenges in managing the supply and demand of electricity, since electricity demand is volatile in nature; it cannot be stored and has to be consumed instantly. This study deals with electricity consumption in India and forecasts the future demand for a period of 19 years, from 2012 to 2030. The eleven input variables used are the amount of CO2 emissions, population, per capita GDP, per capita gross national income, gross domestic savings, industry, consumer price index, wholesale price index, imports, exports and per capita power consumption. A new methodology based on artificial neural networks (ANNs) using principal components is also used. Data from 29 years were used for training and data from 10 years for testing the ANNs. Comparisons were made with multiple linear regression (based on the original data and the principal components) and with ANNs using the original data as input variables. The results show that the use of ANNs with principal components (PCs) is more effective.

  13. A stable systemic risk ranking in China's banking sector: Based on principal component analysis

    Science.gov (United States)

    Fang, Libing; Xiao, Binqing; Yu, Honghai; You, Qixing

    2018-02-01

    In this paper, we compare five popular systemic risk rankings and apply a principal component analysis (PCA) model to provide a stable systemic risk ranking for the Chinese banking sector. Our empirical results indicate that the five methods suggest vastly different systemic risk rankings for the same bank, while the combined systemic risk measure based on PCA provides a reliable ranking. Furthermore, according to the factor loadings of the first component, the PCA combined ranking is mainly based on fundamentals instead of market price data. We clearly find that price-based rankings are not as practical a method as fundamentals-based ones. The PCA combined ranking directly shows the systemic risk contribution of each bank for banking supervision purposes and reminds banks to prevent and cope with financial crises in advance.
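
    A minimal sketch of the combination step described above: standardize the individual systemic risk measures and use the first principal component as the combined score that drives the ranking. The bank count and the measure columns are placeholders, not the paper's data.

      import numpy as np
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      # Hypothetical banks x measures matrix, one column per systemic risk measure.
      risk_measures = rng.random((16, 5))

      Z = StandardScaler().fit_transform(risk_measures)
      combined_score = PCA(n_components=1).fit_transform(Z).ravel()

      # Larger combined score -> larger estimated systemic risk contribution.
      ranking = np.argsort(-combined_score)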

  14. Principal Component Analysis Based Two-Dimensional (PCA-2D) Correlation Spectroscopy: PCA Denoising for 2D Correlation Spectroscopy

    International Nuclear Information System (INIS)

    Jung, Young Mee

    2003-01-01

    Principal component analysis based two-dimensional (PCA-2D) correlation analysis is applied to FTIR spectra of a polystyrene/methyl ethyl ketone/toluene solution mixture during solvent evaporation. A substantial amount of artificial noise was added to the experimental data to demonstrate the practical noise-suppressing benefit of the PCA-2D technique. 2D correlation analysis of the data matrix reconstructed from the PCA loading vectors and scores successfully extracted only the most important features of synchronicity and asynchronicity without interference from noise or insignificant minor components. 2D correlation spectra constructed with only one principal component yield a strictly synchronous response with no discernible asynchronous features, while those involving at least two or more principal components generate meaningful asynchronous 2D correlation spectra. Deliberate manipulation of the rank of the reconstructed data matrix, by choosing the appropriate number and type of PCs, yields potentially more refined 2D correlation spectra.

  15. Multilevel sparse functional principal component analysis.

    Science.gov (United States)

    Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S

    2014-01-29

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations the proposed method is able to discover dominating modes of variations and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.

  16. Fast principal component analysis for stacking seismic data

    Science.gov (United States)

    Wu, Juan; Bai, Min

    2018-04-01

    Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and enhance the principal components to a great extent. Traditional average-based seismic stacking methods cannot obtain optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data that is not sensitive to the noise level. Considering the computational bottleneck of the classic PCA algorithm in processing massive seismic data, we propose an efficient PCA algorithm to make the method readily applicable for industrial applications. Two numerically designed examples and one real seismic data set are used to demonstrate the performance of the presented method.
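
    A minimal example of the stacking idea (not the authors' fast algorithm): the first principal component of a gather of aligned traces, obtained here with a plain SVD, serves as the stacked trace instead of the simple average.

      import numpy as np

      def pca_stack(gather):
          # gather: traces x samples array of aligned seismic traces.
          u, s, vt = np.linalg.svd(gather, full_matrices=False)
          # First eigenimage averaged over traces -> one stacked trace.
          stacked = s[0] * u[:, 0].mean() * vt[0]
          # Resolve the SVD sign ambiguity against the conventional average stack.
          if np.dot(stacked, gather.mean(axis=0)) < 0:
              stacked = -stacked
          return stacked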

  17. Euler principal component analysis

    NARCIS (Netherlands)

    Liwicki, Stephan; Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    Principal Component Analysis (PCA) is perhaps the most prominent learning tool for dimensionality reduction in pattern recognition and computer vision. However, the ℓ2-norm employed by standard PCA is not robust to outliers. In this paper, we propose a kernel PCA method for fast and robust PCA,

  18. Association test based on SNP set: logistic kernel machine based test vs. principal component analysis.

    Directory of Open Access Journals (Sweden)

    Yang Zhao

    GWAS have greatly facilitated the discovery of risk SNPs associated with complex diseases. Traditional methods analyze SNPs individually and are limited by low power and reproducibility, since correction for multiple comparisons is necessary. Several methods have been proposed based on grouping SNPs into SNP sets using biological knowledge and/or genomic features. In this article, we compare the linear kernel machine based test (LKM) and the principal components analysis based approach (PCA) using simulated datasets under scenarios of 0 to 3 causal SNPs, as well as simple and complex linkage disequilibrium (LD) structures of the simulated regions. Our simulation study demonstrates that both LKM and PCA can control the type I error at the significance level of 0.05. If the causal SNP is in strong LD with the genotyped SNPs, both the PCA with a small number of principal components (PCs) and the LKM with a kernel of linear or identical-by-state function are valid tests. However, if the LD structure is complex, such as several LD blocks in the SNP set, or when the causal SNP is not in the LD block in which most of the genotyped SNPs reside, more PCs should be included to capture the information of the causal SNP. Simulation studies also demonstrate the ability of LKM and PCA to combine information from multiple causal SNPs and to provide increased power over individual SNP analysis. We also apply LKM and PCA to analyze two SNP sets extracted from an actual GWAS dataset on non-small cell lung cancer.

  19. A new methodology based on functional principal component analysis to study postural stability post-stroke.

    Science.gov (United States)

    Sánchez-Sánchez, M Luz; Belda-Lois, Juan-Manuel; Mena-Del Horno, Silvia; Viosca-Herrero, Enrique; Igual-Camacho, Celedonia; Gisbert-Morant, Beatriz

    2018-05-05

    A major goal in stroke rehabilitation is the establishment of more effective physical therapy techniques to recover postural stability. Functional Principal Component Analysis provides greater insight into recovery trends. However, when missing values exist, obtaining functional data presents some difficulties. The purpose of this study was to present an alternative technique for obtaining the Functional Principal Components without requiring the conversion to functional data beforehand, and to use this methodology to determine the effect of specific physical therapy techniques on balance recovery trends in elderly subjects with hemiplegia post-stroke. A randomized controlled pilot trial was developed. Thirty inpatients post-stroke were included. Control and target groups were treated with the same conventional physical therapy protocol based on functional criteria, but specific techniques were added to the target group depending on the subjects' functional level. Postural stability during standing was quantified by posturography. The assessments were performed once a month from the moment the participants were able to stand up to six months post-stroke. The target group showed a significant improvement in the postural control recovery trend six months after stroke that was not present in the control group. Some of the assessed parameters revealed significant differences between treatment groups. The proposed approach allowed Functional Principal Component Analysis to be performed when data are scarce. Moreover, it allowed the dynamics of recovery of the two treatment groups to be determined, showing that the techniques added in the target group increased postural stability compared to the base protocol. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Principal components analysis based control of a multi-dof underactuated prosthetic hand

    Directory of Open Access Journals (Sweden)

    Magenes Giovanni

    2010-04-01

    Background: Functionality, controllability and cosmetics are the key issues to be addressed in order to accomplish a successful functional substitution of the human hand by means of a prosthesis. Not only should the prosthesis duplicate the human hand in shape, functionality, sensorization, perception and sense of body-belonging, but it should also be controlled like the natural one, in the most intuitive and undemanding way. At present, prosthetic hands are controlled by means of non-invasive interfaces based on electromyography (EMG). Driving a multi-degrees-of-freedom (DoF) hand to achieve hand dexterity implies selectively modulating many different EMG signals in order to make each joint move independently, and this could require significant cognitive effort from the user. Methods: A Principal Components Analysis (PCA) based algorithm is used to drive a 16-DoF underactuated prosthetic hand prototype (called CyberHand) with a two-dimensional control input, in order to perform the three prehensile forms mostly used in Activities of Daily Living (ADLs). This set of principal components has been derived directly from the artificial hand by collecting its sensory data while performing 50 different grasps, and subsequently used for control. Results: Trials have shown that two independent input signals can be successfully used to control the posture of a real robotic hand and that correct grasps (in terms of involved fingers, stability and posture) may be achieved. Conclusions: This work demonstrates the effectiveness of a bio-inspired system successfully conjugating the advantages of an underactuated, anthropomorphic hand with a PCA-based control strategy, and opens up promising possibilities for the development of an intuitively controllable hand prosthesis.
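
    A hedged sketch of the control mapping described in the Methods: fit two principal components to a set of recorded grasp postures, then map a two-dimensional control input (for example derived from two EMG channels) back to a full joint configuration. The grasp data and joint count below are placeholders, not the CyberHand data.

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      # Hypothetical 50 recorded grasps x 16 joint positions.
      grasp_postures = rng.random((50, 16))

      pca = PCA(n_components=2).fit(grasp_postures)

      def posture_from_control(c1, c2):
          # Map the 2-D control input back to a 16-DoF joint configuration.
          return pca.inverse_transform(np.array([[c1, c2]]))[0]

      target_joints = posture_from_control(0.5, -0.2)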

  1. Extraction of Independent Structural Images for Principal Component Thermography

    Directory of Open Access Journals (Sweden)

    Dmitry Gavrilov

    2018-03-01

    Thermography is a powerful tool for non-destructive testing of a wide range of materials. Thermography comprises a number of approaches that differ in both the experimental setup and the way the collected data are processed. Among such approaches is the Principal Component Thermography (PCT) method, which is based on the statistical processing of raw thermal images collected by a thermal camera. The processed images (principal components, or empirical orthogonal functions) form an orthonormal basis, and often look like a superposition of all possible structural features found in the object under inspection, i.e., surface heating non-uniformity, internal defects and material structure. At the same time, from a practical point of view it is desirable to have images representing independent structural features. The work presented in this paper proposes an approach for the separation of independent image patterns (archetypes) from a set of principal component images. The approach is demonstrated in the inspection of composite materials as well as the non-invasive analysis of works of art.

  2. Development of motion image prediction method using principal component analysis

    International Nuclear Information System (INIS)

    Chhatkuli, Ritu Bhusal; Demachi, Kazuyuki; Kawai, Masaki; Sakakibara, Hiroshi; Kamiaka, Kazuma

    2012-01-01

    Respiratory motion can limit the accuracy of the irradiated area during lung cancer radiation therapy. Many methods have been introduced to minimize the irradiation of healthy tissue due to lung tumor motion. The purpose of this research is to develop an algorithm for improving image guided radiation therapy by predicting motion images. We predict the motion images by using principal component analysis (PCA) and the multi-channel singular spectral analysis (MSSA) method. The images/movies were successfully predicted and verified using the developed algorithm. With the proposed prediction method it is possible to forecast the tumor images over the next breathing period. Implementation of this method in real time is believed to be significant for a higher level of tumor tracking, including the detection of sudden abdominal changes during radiation therapy. (author)
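
    As a rough illustration of the prediction idea (not the authors' PCA + MSSA implementation), the sketch below compresses a sequence of frames into a few principal-component scores, extrapolates the scores one step ahead, and reconstructs the predicted frame; a simple linear extrapolation stands in for the MSSA forecast, and the image sizes are assumptions.

      import numpy as np
      from sklearn.decomposition import PCA

      # Hypothetical sequence of lung images over breathing cycles: time x h x w.
      frames = np.random.rand(30, 64, 64)
      X = frames.reshape(len(frames), -1)

      pca = PCA(n_components=3).fit(X)
      scores = pca.transform(X)                     # low-dimensional motion trajectory

      # Naive one-step forecast of each score (stand-in for the MSSA step).
      next_scores = 2 * scores[-1] - scores[-2]
      predicted_frame = pca.inverse_transform(next_scores[None, :]).reshape(64, 64)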

  3. Group-wise Principal Component Analysis for Exploratory Data Analysis

    NARCIS (Netherlands)

    Camacho, J.; Rodriquez-Gomez, Rafael A.; Saccenti, E.

    2017-01-01

    In this paper, we propose a new framework for matrix factorization based on Principal Component Analysis (PCA) where sparsity is imposed. The structure to impose sparsity is defined in terms of groups of correlated variables found in correlation matrices or maps. The framework is based on three new

  4. Fluvial facies reservoir productivity prediction method based on principal component analysis and artificial neural network

    Directory of Open Access Journals (Sweden)

    Pengyu Gao

    2016-03-01

    It is difficult to forecast well productivity because of the complexity of vertical and horizontal developments in fluvial facies reservoirs. This paper proposes a method based on principal component analysis and an artificial neural network to predict the well productivity of fluvial facies reservoirs. The method summarizes the statistical reservoir factors and engineering factors that affect well productivity, extracts information by applying principal component analysis, and uses the function approximation ability of the neural network to realize an accurate and efficient prediction of fluvial facies reservoir well productivity. This method provides an effective way to forecast the productivity of fluvial facies reservoirs, which is affected by multiple factors and a complex mechanism. The study results show that this method is a practical, effective, accurate and indirect productivity forecasting method suitable for field application.

  5. Trend extraction of rail corrugation measured dynamically based on the relevant low-frequency principal components reconstruction

    International Nuclear Information System (INIS)

    Li, Yanfu; Liu, Hongli; Ma, Ziji

    2016-01-01

    Rail corrugation dynamic measurement techniques are critical to guarantee transport security and to guide rail maintenance. During the inspection process, low-frequency trends caused by rail fluctuation are usually superimposed on the rail corrugation and seriously affect the assessment of rail maintenance quality. In order to extract and remove the nonlinear and non-stationary trends from the original mixed signals, a hybrid model based on ensemble empirical mode decomposition (EEMD) and modified principal component analysis (MPCA) is proposed in this paper. Compared with existing de-trending methods based on EMD, this method first considers the low-frequency intrinsic mode functions (IMFs) thought to be underlying trend components, which may contain some unrelated components such as white noise and the low-frequency part of the signal itself, and proposes to use PCA to accurately extract the pure trends from the IMFs containing multiple components. On the other hand, because the energy contribution ratio between the trends and the mixed signals is unknown a priori, and the principal components (PCs) obtained by PCA are arranged in order of decreasing energy without considering frequency distribution, the proposed method modifies traditional PCA and selects only the relevant low-frequency PCs to reconstruct the trends, based on the zero-crossing number (ZCN) of each PC. Extensive tests are presented to illustrate the effectiveness of the proposed method. The results show that the proposed EEMD-PCA-ZCN approach is an effective tool for trend extraction from rail corrugation measured dynamically. (paper)

  6. Process parameter optimization based on principal components analysis during machining of hardened steel

    Directory of Open Access Journals (Sweden)

    Suryakant B. Chandgude

    2015-09-01

    Full Text Available The optimum selection of process parameters plays an important role in improving surface finish, minimizing tool wear, increasing the material removal rate and reducing the machining time of any machining process. In this paper, the optimum parameters for machining AISI D2 hardened steel using a solid carbide TiAlN-coated end mill have been investigated. For optimization of the process parameters along with multiple quality characteristics, the principal components analysis method has been adopted in this work. The confirmation experiments have revealed that principal components analysis is a useful tool for improving cutting performance.

  7. Selecting the Number of Principal Components in Functional Data

    KAUST Repository

    Li, Yehua

    2013-12-01

    Functional principal component analysis (FPCA) has become the most widely used dimension reduction tool for functional data analysis. We consider functional data measured at random, subject-specific time points, contaminated with measurement error, allowing for both sparse and dense functional data, and propose novel information criteria to select the number of principal components in such data. We propose a Bayesian information criterion based on marginal modeling that can consistently select the number of principal components for both sparse and dense functional data. For dense functional data, we also develop an Akaike information criterion based on the expected Kullback-Leibler information under a Gaussian assumption. Connecting with the time series literature, we also consider a class of information criteria proposed for factor analysis of multivariate time series and show that they are still consistent for dense functional data, if a prescribed undersmoothing scheme is undertaken in the FPCA algorithm. We perform intensive simulation studies and show that the proposed information criteria vastly outperform existing methods for this type of data. Surprisingly, our empirical evidence shows that the information criteria proposed for dense functional data also perform well for sparse functional data. An empirical example using colon carcinogenesis data is also provided to illustrate the results. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  8. Principal components analysis in clinical studies.

    Science.gov (United States)

    Zhang, Zhongheng; Castelló, Adela

    2017-09-01

    In multivariate analysis, independent variables are usually correlated with each other, which can introduce multicollinearity into regression models. One approach to solving this problem is to apply principal components analysis (PCA) to these variables. This method uses an orthogonal transformation to represent sets of potentially correlated variables with principal components (PC) that are linearly uncorrelated. PCs are ordered so that the first PC has the largest possible variance, and only some components are selected to represent the correlated variables. As a result, the dimension of the variable space is reduced. This tutorial illustrates how to perform PCA in the R environment; the example is a simulated dataset in which two PCs are responsible for the majority of the variance in the data. Furthermore, the visualization of PCA is highlighted.
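
    Although the tutorial works in R, the same workflow can be sketched in Python; the simulated data below (two latent factors driving six correlated covariates) is a stand-in for the tutorial's dataset, not a reproduction of it.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# Simulated data: two underlying factors drive six correlated covariates.
rng = np.random.default_rng(1)
factors = rng.normal(size=(300, 2))
X = factors @ rng.normal(size=(2, 6)) + 0.05 * rng.normal(size=(300, 6))
y = factors[:, 0] - 2 * factors[:, 1] + 0.1 * rng.normal(size=300)

# Inspect how much variance the leading PCs explain.
pca = PCA()
pca.fit(StandardScaler().fit_transform(X))
print("explained variance ratio:", pca.explained_variance_ratio_.round(3))

# Principal component regression: regress the outcome on the first two PCs,
# which removes the multicollinearity among the original covariates.
pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
pcr.fit(X, y)
print("R^2 of PCR model:", round(pcr.score(X, y), 3))
```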

  9. Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis

    Science.gov (United States)

    Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.

    2016-01-01

    Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.

  10. Principal component regression for crop yield estimation

    CERN Document Server

    Suryanarayana, T M V

    2016-01-01

    This book highlights the estimation of crop yield in Central Gujarat, especially with regard to the development of Multiple Regression Models and Principal Component Regression (PCR) models using climatological parameters as independent variables and crop yield as a dependent variable. It subsequently compares the multiple linear regression (MLR) and PCR results, and discusses the significance of PCR for crop yield estimation. In this context, the book also covers Principal Component Analysis (PCA), a statistical procedure used to reduce a number of correlated variables into a smaller number of uncorrelated variables called principal components (PC). This book will be helpful to students and researchers starting their work on climate and agriculture, with a main focus on estimation models. The flow of chapters takes readers along a smooth path, from understanding climate and weather and the impact of climate change, and gradually proceeds towards downscaling techniques and then finally towards development of ...

  11. COPD phenotype description using principal components analysis

    DEFF Research Database (Denmark)

    Roy, Kay; Smith, Jacky; Kolsum, Umme

    2009-01-01

    BACKGROUND: Airway inflammation in COPD can be measured using biomarkers such as induced sputum and Fe(NO). This study set out to explore the heterogeneity of COPD using biomarkers of airway and systemic inflammation and pulmonary function by principal components analysis (PCA). SUBJECTS...... AND METHODS: In 127 COPD patients (mean FEV1 61%), pulmonary function, Fe(NO), plasma CRP and TNF-alpha, sputum differential cell counts and sputum IL8 (pg/ml) were measured. Principal components analysis as well as multivariate analysis was performed. RESULTS: PCA identified four main components (% variance...... associations between the variables within components 1 and 2. CONCLUSION: COPD is a multi dimensional disease. Unrelated components of disease were identified, including neutrophilic airway inflammation which was associated with systemic inflammation, and sputum eosinophils which were related to increased Fe...

  12. Principal component approach in variance component estimation for international sire evaluation

    Directory of Open Access Journals (Sweden)

    Jakobsen Jette

    2011-05-01

    Full Text Available Abstract Background The dairy cattle breeding industry is a highly globalized business, which needs internationally comparable and reliable breeding values of sires. The international Bull Evaluation Service, Interbull, was established in 1983 to respond to this need. Currently, Interbull performs multiple-trait across country evaluations (MACE) for several traits and breeds in dairy cattle and provides international breeding values to its member countries. Estimating parameters for MACE is challenging since the structure of datasets and conventional use of multiple-trait models easily result in over-parameterized genetic covariance matrices. The number of parameters to be estimated can be reduced by taking into account only the leading principal components of the traits considered. For MACE, this is readily implemented in a random regression model. Methods This article compares two principal component approaches to estimate variance components for MACE using real datasets. The methods tested were a REML approach that directly estimates the genetic principal components (direct PC) and the so-called bottom-up REML approach (bottom-up PC), in which traits are sequentially added to the analysis and the statistically significant genetic principal components are retained. Furthermore, this article evaluates the utility of the bottom-up PC approach to determine the appropriate rank of the (co)variance matrix. Results Our study demonstrates the usefulness of both approaches and shows that they can be applied to large multi-country models considering all concerned countries simultaneously. These strategies can thus replace the current practice of estimating the covariance components required through a series of analyses involving selected subsets of traits. Our results support the importance of using the appropriate rank in the genetic (co)variance matrix. Using too low a rank resulted in biased parameter estimates, whereas too high a rank did not result in

  13. Sparse logistic principal components analysis for binary data

    KAUST Repository

    Lee, Seokho

    2010-09-01

    We develop a new principal components analysis (PCA) type dimension reduction method for binary data. Different from the standard PCA which is defined on the observed data, the proposed PCA is defined on the logit transform of the success probabilities of the binary observations. Sparsity is introduced to the principal component (PC) loading vectors for enhanced interpretability and more stable extraction of the principal components. Our sparse PCA is formulated as solving an optimization problem with a criterion function motivated from a penalized Bernoulli likelihood. A Majorization-Minimization algorithm is developed to efficiently solve the optimization problem. The effectiveness of the proposed sparse logistic PCA method is illustrated by application to a single nucleotide polymorphism data set and a simulation study. © Institute of Mathematical Statistics, 2010.

  14. A principal components model of soundscape perception.

    Science.gov (United States)

    Axelsson, Östen; Nilsson, Mats E; Berglund, Birgitta

    2010-11-01

    There is a need for a model that identifies underlying dimensions of soundscape perception, and which may guide measurement and improvement of soundscape quality. With the purpose to develop such a model, a listening experiment was conducted. One hundred listeners measured 50 excerpts of binaural recordings of urban outdoor soundscapes on 116 attribute scales. The average attribute scale values were subjected to principal components analysis, resulting in three components: Pleasantness, eventfulness, and familiarity, explaining 50, 18 and 6% of the total variance, respectively. The principal-component scores were correlated with physical soundscape properties, including categories of dominant sounds and acoustic variables. Soundscape excerpts dominated by technological sounds were found to be unpleasant, whereas soundscape excerpts dominated by natural sounds were pleasant, and soundscape excerpts dominated by human sounds were eventful. These relationships remained after controlling for the overall soundscape loudness (Zwicker's N(10)), which shows that 'informational' properties are substantial contributors to the perception of soundscape. The proposed principal components model provides a framework for future soundscape research and practice. In particular, it suggests which basic dimensions are necessary to measure, how to measure them by a defined set of attribute scales, and how to promote high-quality soundscapes.

  15. Constrained principal component analysis and related techniques

    CERN Document Server

    Takane, Yoshio

    2013-01-01

    In multivariate data analysis, regression techniques predict one set of variables from another while principal component analysis (PCA) finds a subspace of minimal dimensionality that captures the largest variability in the data. How can regression analysis and PCA be combined in a beneficial way? Why and when is it a good idea to combine them? What kind of benefits are we getting from them? Addressing these questions, Constrained Principal Component Analysis and Related Techniques shows how constrained PCA (CPCA) offers a unified framework for these approaches.The book begins with four concre

  16. Principal Component Clustering Approach to Teaching Quality Discriminant Analysis

    Science.gov (United States)

    Xian, Sidong; Xia, Haibo; Yin, Yubo; Zhai, Zhansheng; Shang, Yan

    2016-01-01

    Teaching quality is the lifeline of higher education. Many universities have made effective achievements in evaluating teaching quality. In this paper, we establish a Students' Evaluation of Teaching (SET) discriminant analysis model and algorithm based on principal component clustering analysis. Additionally, we classify the SET…

  17. Use of regularized principal component analysis to model anatomical changes during head and neck radiation therapy for treatment adaptation and response assessment

    International Nuclear Information System (INIS)

    Chetvertkov, Mikhail A.; Siddiqui, Farzan; Chetty, Indrin; Kumarasiri, Akila; Liu, Chang; Gordon, J. James; Kim, Jinkoo

    2016-01-01

    Purpose: To develop standard (SPCA) and regularized (RPCA) principal component analysis models of anatomical changes from daily cone beam CTs (CBCTs) of head and neck (H&N) patients and assess their potential use in adaptive radiation therapy, and for extracting quantitative information for treatment response assessment. Methods: Planning CT images of ten H&N patients were artificially deformed to create “digital phantom” images, which modeled systematic anatomical changes during radiation therapy. Artificial deformations closely mirrored patients’ actual deformations and were interpolated to generate 35 synthetic CBCTs, representing evolving anatomy over 35 fractions. Deformation vector fields (DVFs) were acquired between pCT and synthetic CBCTs (i.e., digital phantoms) and between pCT and clinical CBCTs. Patient-specific SPCA and RPCA models were built from these synthetic and clinical DVF sets. EigenDVFs (EDVFs) having the largest eigenvalues were hypothesized to capture the major anatomical deformations during treatment. Results: Principal component analysis (PCA) models achieve variable results, depending on the size and location of anatomical change. Random changes prevent or degrade PCA’s ability to detect underlying systematic change. RPCA is able to detect smaller systematic changes against the background of random fraction-to-fraction changes and is therefore more successful than SPCA at capturing systematic changes early in treatment. SPCA models were less successful at modeling systematic changes in clinical patient images, which contain a wider range of random motion than synthetic CBCTs, while the regularized approach was able to extract major modes of motion. Conclusions: Leading EDVFs from the both PCA approaches have the potential to capture systematic anatomical change during H&N radiotherapy when systematic changes are large enough with respect to random fraction-to-fraction changes. In all cases the RPCA approach appears to be more

  18. Use of regularized principal component analysis to model anatomical changes during head and neck radiation therapy for treatment adaptation and response assessment

    Energy Technology Data Exchange (ETDEWEB)

    Chetvertkov, Mikhail A., E-mail: chetvertkov@wayne.edu [Department of Radiation Oncology, Wayne State University School of Medicine, Detroit, Michigan 48201 and Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan 48202 (United States); Siddiqui, Farzan; Chetty, Indrin; Kumarasiri, Akila; Liu, Chang; Gordon, J. James [Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan 48202 (United States); Kim, Jinkoo [Department of Radiation Oncology, Stony Brook University Hospital, Stony Brook, New York 11794 (United States)

    2016-10-15

    Purpose: To develop standard (SPCA) and regularized (RPCA) principal component analysis models of anatomical changes from daily cone beam CTs (CBCTs) of head and neck (H&N) patients and assess their potential use in adaptive radiation therapy, and for extracting quantitative information for treatment response assessment. Methods: Planning CT images of ten H&N patients were artificially deformed to create “digital phantom” images, which modeled systematic anatomical changes during radiation therapy. Artificial deformations closely mirrored patients’ actual deformations and were interpolated to generate 35 synthetic CBCTs, representing evolving anatomy over 35 fractions. Deformation vector fields (DVFs) were acquired between pCT and synthetic CBCTs (i.e., digital phantoms) and between pCT and clinical CBCTs. Patient-specific SPCA and RPCA models were built from these synthetic and clinical DVF sets. EigenDVFs (EDVFs) having the largest eigenvalues were hypothesized to capture the major anatomical deformations during treatment. Results: Principal component analysis (PCA) models achieve variable results, depending on the size and location of anatomical change. Random changes prevent or degrade PCA’s ability to detect underlying systematic change. RPCA is able to detect smaller systematic changes against the background of random fraction-to-fraction changes and is therefore more successful than SPCA at capturing systematic changes early in treatment. SPCA models were less successful at modeling systematic changes in clinical patient images, which contain a wider range of random motion than synthetic CBCTs, while the regularized approach was able to extract major modes of motion. Conclusions: Leading EDVFs from the both PCA approaches have the potential to capture systematic anatomical change during H&N radiotherapy when systematic changes are large enough with respect to random fraction-to-fraction changes. In all cases the RPCA approach appears to be more

  19. Radar fall detection using principal component analysis

    Science.gov (United States)

    Jokanovic, Branka; Amin, Moeness; Ahmad, Fauzia; Boashash, Boualem

    2016-05-01

    Falls are a major cause of fatal and nonfatal injuries in people aged 65 years and older. Radar has the potential to become one of the leading technologies for fall detection, thereby enabling the elderly to live independently. Existing techniques for fall detection using radar are based on manual feature extraction and require significant parameter tuning in order to provide successful detections. In this paper, we employ principal component analysis for fall detection, wherein eigenimages of observed motions are employed for classification. Using real data, we demonstrate that the PCA-based technique provides a performance improvement over the conventional feature extraction methods.

  20. Characteristic gene selection via weighting principal components by singular values.

    Directory of Open Access Journals (Sweden)

    Jin-Xing Liu

    Full Text Available Conventional gene selection methods based on principal component analysis (PCA) use only the first principal component (PC) of PCA or sparse PCA to select characteristic genes. These methods indeed assume that the first PC plays a dominant role in gene selection. However, in a number of cases this assumption is not satisfied, so the conventional PCA-based methods usually provide poor selection results. In order to improve the performance of the PCA-based gene selection method, we put forward the gene selection method via weighting PCs by singular values (WPCS). Because different PCs have different importance, the singular values are exploited as the weights to represent the influence on gene selection of different PCs. The ROC curves and AUC statistics on artificial data show that our method outperforms the state-of-the-art methods. Moreover, experimental results on real gene expression data sets show that our method can extract more characteristic genes in response to abiotic stresses than conventional gene selection methods.
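
    A compact sketch of the weighting idea under stated assumptions: singular values of the centered expression matrix weight the absolute PC loadings, and genes are ranked by the combined score. The matrix sizes and number of PCs are placeholders, and the exact scoring rule used in the paper may differ.

```python
import numpy as np

def wpcs_gene_scores(X, n_pcs=5):
    """Score genes by weighting PC loadings with the singular values.

    `X` is a samples-by-genes expression matrix; genes with the largest
    weighted loadings across the leading PCs are taken as characteristic.
    """
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    weights = s[:n_pcs] / s[:n_pcs].sum()          # singular values as weights
    # Rows of Vt are PC loading vectors over genes; combine them with weights.
    return weights @ np.abs(Vt[:n_pcs, :])         # one score per gene

# Usage: rank genes by score and keep the top candidates.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 500))                     # 40 samples, 500 genes
top_genes = np.argsort(wpcs_gene_scores(X))[::-1][:20]
print("top-ranked gene indices:", top_genes)
```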

  1. PCA: Principal Component Analysis for spectra modeling

    Science.gov (United States)

    Hurley, Peter D.; Oliver, Seb; Farrah, Duncan; Wang, Lingyu; Efstathiou, Andreas

    2012-07-01

    The mid-infrared spectra of ultraluminous infrared galaxies (ULIRGs) contain a variety of spectral features that can be used as diagnostics to characterize the spectra. However, such diagnostics are biased by our prior prejudices on the origin of the features. Moreover, by using only part of the spectrum they do not utilize the full information content of the spectra. Blind statistical techniques such as principal component analysis (PCA) consider the whole spectrum, find correlated features and separate them out into distinct components. This code, written in IDL, classifies principal components of IRS spectra to define a new classification scheme using 5D Gaussian mixtures modelling. The five PCs and average spectra for the four classifications to classify objects are made available with the code.

  2. Recognition of grasp types through principal components of DWT based EMG features.

    Science.gov (United States)

    Kakoty, Nayan M; Hazarika, Shyamanta M

    2011-01-01

    With the advancement in machine learning and signal processing techniques, electromyogram (EMG) signals have increasingly gained importance in man-machine interaction. Multifingered hand prostheses using surface EMG for control have appeared on the market. However, EMG-based control is still rudimentary, being limited to a few hand postures and relying on a high number of EMG channels. Moreover, control is non-intuitive, in the sense that the user is required to learn to associate remnant muscle actions with unrelated postures of the prosthesis. Herein lies the promise of a low-channel EMG based grasp classification architecture for the development of an embedded intelligent prosthetic controller. This paper reports the classification of six grasp types, used during 70% of daily living activities, based on two-channel forearm EMG. A feature vector is derived through principal component analysis of discrete wavelet transform coefficient based features of the EMG signal. Classification is performed with a radial basis function kernel based support vector machine, following preprocessing and maximum voluntary contraction normalization of the EMG signals; 10-fold cross validation is done. We have achieved an average recognition rate of 97.5%. © 2011 IEEE
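
    A sketch of that processing chain in Python; the wavelet family, decomposition level, sub-band energy features and SVM hyperparameters are assumptions for illustration, not the paper's exact settings, and the EMG windows here are synthetic.

```python
import numpy as np
import pywt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def dwt_features(channel):
    """Energy of each DWT sub-band for one EMG channel (illustrative choice)."""
    coeffs = pywt.wavedec(channel, 'db4', level=4)
    return [np.sum(c ** 2) for c in coeffs]

def feature_vector(emg_window):
    # `emg_window`: (n_samples, 2) two-channel EMG, assumed already
    # preprocessed and normalized to maximum voluntary contraction.
    return np.hstack([dwt_features(emg_window[:, ch]) for ch in range(2)])

# Hypothetical dataset: 120 windows, 20 per grasp type (labels 0-5).
rng = np.random.default_rng(0)
windows = [rng.normal(size=(512, 2)) for _ in range(120)]
labels = np.repeat(np.arange(6), 20)

X = np.array([feature_vector(w) for w in windows])
clf = make_pipeline(StandardScaler(), PCA(n_components=5),
                    SVC(kernel='rbf', C=10.0, gamma='scale'))
print("10-fold CV accuracy:", cross_val_score(clf, X, labels, cv=10).mean())
```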

  3. Principal component analysis-based imaging angle determination for 3D motion monitoring using single-slice on-board imaging.

    Science.gov (United States)

    Chen, Ting; Zhang, Miao; Jabbour, Salma; Wang, Hesheng; Barbee, David; Das, Indra J; Yue, Ning

    2018-04-10

    Through-plane motion introduces uncertainty in three-dimensional (3D) motion monitoring when using single-slice on-board imaging (OBI) modalities such as cine MRI. We propose a principal component analysis (PCA)-based framework to determine the optimal imaging plane to minimize the through-plane motion for single-slice imaging-based motion monitoring. Four-dimensional computed tomography (4DCT) images of eight thoracic cancer patients were retrospectively analyzed. The target volumes were manually delineated at different respiratory phases of 4DCT. We performed automated image registration to establish the 4D respiratory target motion trajectories for all patients. PCA was conducted using the motion information to define the three principal components of the respiratory motion trajectories. Two imaging planes were determined perpendicular to the second and third principal component, respectively, to avoid imaging with the primary principal component of the through-plane motion. Single-slice images were reconstructed from 4DCT in the PCA-derived orthogonal imaging planes and were compared against the traditional AP/Lateral image pairs on through-plane motion, residual error in motion monitoring, absolute motion amplitude error and the similarity between target segmentations at different phases. We evaluated the significance of the proposed motion monitoring improvement using paired t test analysis. The PCA-determined imaging planes had overall less through-plane motion compared against the AP/Lateral image pairs. For all patients, the average through-plane motion was 3.6 mm (range: 1.6-5.6 mm) for the AP view and 1.7 mm (range: 0.6-2.7 mm) for the Lateral view. With PCA optimization, the average through-plane motion was 2.5 mm (range: 1.3-3.9 mm) and 0.6 mm (range: 0.2-1.5 mm) for the two imaging planes, respectively. The absolute residual error of the reconstructed max-exhale-to-inhale motion averaged 0.7 mm (range: 0.4-1.3 mm, 95% CI: 0.4-1.1 mm) using
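
    The plane-selection step can be illustrated with a small sketch: PCA of a hypothetical target-centroid trajectory yields the principal motion directions, and planes normal to the second and third components keep the dominant motion in-plane. The trajectory array and phase count are assumptions, not patient data.

```python
import numpy as np
from sklearn.decomposition import PCA

def imaging_plane_normals(trajectory):
    """Pick single-slice imaging planes that minimize through-plane motion.

    `trajectory` is a (n_phases, 3) array of the target centroid at each
    4DCT respiratory phase. A plane's normal equals its through-plane
    direction, so planes normal to the 2nd and 3rd PCs keep the dominant
    (1st) motion component in-plane.
    """
    pca = PCA(n_components=3)
    scores = pca.fit_transform(trajectory)
    _, pc2, pc3 = pca.components_

    # Peak-to-peak through-plane motion for each candidate plane.
    through_plane_a = np.ptp(scores[:, 1])   # plane with normal pc2
    through_plane_b = np.ptp(scores[:, 2])   # plane with normal pc3
    return pc2, pc3, through_plane_a, through_plane_b

# Example with a synthetic breathing trajectory over 10 phases.
phases = np.linspace(0, 2 * np.pi, 10, endpoint=False)
trajectory = np.c_[8 * np.sin(phases), 2 * np.sin(phases), 1 * np.sin(phases)]
print(imaging_plane_normals(trajectory))
```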

  4. ANOVA-principal component analysis and ANOVA-simultaneous component analysis: a comparison.

    NARCIS (Netherlands)

    Zwanenburg, G.; Hoefsloot, H.C.J.; Westerhuis, J.A.; Jansen, J.J.; Smilde, A.K.

    2011-01-01

    ANOVA-simultaneous component analysis (ASCA) is a recently developed tool to analyze multivariate data. In this paper, we enhance the explorative capability of ASCA by introducing a projection of the observations on the principal component subspace to visualize the variation among the measurements.

  5. A Fast and Sensitive New Satellite SO2 Retrieval Algorithm based on Principal Component Analysis: Application to the Ozone Monitoring Instrument

    Science.gov (United States)

    Li, Can; Joiner, Joanna; Krotkov, A.; Bhartia, Pawan K.

    2013-01-01

    We describe a new algorithm to retrieve SO2 from satellite-measured hyperspectral radiances. We employ the principal component analysis technique in regions with no significant SO2 to capture radiance variability caused by both physical processes (e.g., Rayleigh and Raman scattering and ozone absorption) and measurement artifacts. We use the resulting principal components and SO2 Jacobians calculated with a radiative transfer model to directly estimate SO2 vertical column density in one step. Application to the Ozone Monitoring Instrument (OMI) radiance spectra in 310.5-340 nm demonstrates that this approach can greatly reduce biases in the operational OMI product and decrease the noise by a factor of 2, providing greater sensitivity to anthropogenic emissions. The new algorithm is fast, eliminates the need for instrument-specific radiance correction schemes, and can be easily adapted to other sensors. These attributes make it a promising technique for producing long-term, consistent SO2 records for air quality and climate research.
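
    The one-step fit can be sketched as a linear least-squares problem: the measured spectrum is modeled as a combination of the SO2-free principal components plus the SO2 Jacobian scaled by the column amount. This is a schematic of the idea, not the operational OMI algorithm; the array names and shapes are hypothetical.

```python
import numpy as np

def fit_so2_column(measured, pc_basis, so2_jacobian):
    """One-step PCA-based trace-gas fit (schematic only).

    measured     : (n_wavelengths,) measured radiance quantity (e.g., log I)
    pc_basis     : (n_pcs, n_wavelengths) principal components derived from
                   spectra over regions with no significant SO2
    so2_jacobian : (n_wavelengths,) radiative-transfer sensitivity dI/dSO2
    Returns the SO2 vertical column estimate and the PC coefficients.
    """
    A = np.vstack([pc_basis, so2_jacobian]).T            # design matrix
    coef, *_ = np.linalg.lstsq(A, measured, rcond=None)
    return coef[-1], coef[:-1]

# The principal components themselves can be obtained beforehand, e.g. with
# np.linalg.svd applied to a matrix of radiances from SO2-free locations.
```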

  6. An Introductory Application of Principal Components to Cricket Data

    Science.gov (United States)

    Manage, Ananda B. W.; Scariano, Stephen M.

    2013-01-01

    Principal Component Analysis is widely used in applied multivariate data analysis, and this article shows how to motivate student interest in this topic using cricket sports data. Here, principal component analysis is successfully used to rank the cricket batsmen and bowlers who played in the 2012 Indian Premier League (IPL) competition. In…

  7. Soft Sensor of Vehicle State Estimation Based on the Kernel Principal Component and Improved Neural Network

    Directory of Open Access Journals (Sweden)

    Haorui Liu

    2016-01-01

    Full Text Available In car control systems, it is hard to measure some key vehicle states directly and accurately when running on the road, and the cost of measurement is high as well. To address these problems, a vehicle state estimation method based on kernel principal component analysis and an improved Elman neural network is proposed. Combined with a nonlinear vehicle model of three degrees of freedom (3 DOF; longitudinal, lateral, and yaw motion), this paper applies the method to the soft sensing of the vehicle states. The simulation results of the double lane change tested by Matlab/SIMULINK cosimulation prove the KPCA-IENN algorithm (kernel principal component analysis and improved Elman neural network) to be quick and precise when tracking the vehicle states within the nonlinear area. This algorithm can meet the software performance requirements of vehicle state estimation in precision, tracking speed, noise suppression, and other aspects.

  8. Contact- and distance-based principal component analysis of protein dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Ernst, Matthias; Sittel, Florian; Stock, Gerhard, E-mail: stock@physik.uni-freiburg.de [Biomolecular Dynamics, Institute of Physics, Albert Ludwigs University, 79104 Freiburg (Germany)

    2015-12-28

    To interpret molecular dynamics simulations of complex systems, systematic dimensionality reduction methods such as principal component analysis (PCA) represent a well-established and popular approach. Apart from Cartesian coordinates, internal coordinates, e.g., backbone dihedral angles or various kinds of distances, may be used as input data in a PCA. Adopting two well-known model problems, folding of villin headpiece and the functional dynamics of BPTI, a systematic study of PCA using distance-based measures is presented which employs distances between Cα-atoms as well as distances between inter-residue contacts including side chains. While this approach seems prohibitive for larger systems due to the quadratic scaling of the number of distances with the size of the molecule, it is shown that it is sufficient (and sometimes even better) to include only relatively few selected distances in the analysis. The quality of the PCA is assessed by considering the resolution of the resulting free energy landscape (to identify metastable conformational states and barriers) and the decay behavior of the corresponding autocorrelation functions (to test the time scale separation of the PCA). By comparing results obtained with distance-based, dihedral angle, and Cartesian coordinates, the study shows that the choice of input variables may drastically influence the outcome of a PCA.

  9. An analytics of electricity consumption characteristics based on principal component analysis

    Science.gov (United States)

    Feng, Junshu

    2018-02-01

    More detailed analysis of electricity consumption characteristics can make demand side management (DSM) much more targeted. In this paper, an analysis of electricity consumption characteristics based on principal component analysis (PCA) is given, in which the PCA method is used to extract the main typical characteristics of electricity consumers. Then, an electricity consumption characteristics matrix is designed, which enables a comparison of the typical electricity consumption characteristics of different types of consumers, such as industrial consumers, commercial consumers and residents. In our case study, electricity consumption has been divided into four main characteristics: extreme peak using, peak using, peak-shifting using and others. Moreover, it has been found that industrial consumers often shift their peak load, while commercial and residential consumers have more peak-time consumption. The conclusions can provide decision support for DSM for the government and power providers.

  10. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yihang Yin

    2015-08-01

    Full Text Available Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inner-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission to prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing with a novel similarity measure metric. Next, sensor data in one cluster are aggregated in the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data and retain the definite variance at the same time. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.

  11. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks.

    Science.gov (United States)

    Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong

    2015-08-07

    Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inner-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission to prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing with a novel similarity measure metric. Next, sensor data in one cluster are aggregated in the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data and retain the definite variance at the same time. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
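
    A minimal sketch of the compression step at a cluster head, assuming the clusters have already been formed: the number of retained PCs is chosen from a cumulative-variance bound, which indirectly bounds the reconstruction error. The data sizes and the 95% bound are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA

def compress_cluster(readings, variance_bound=0.95):
    """Compress a cluster's readings at the cluster head (illustrative sketch).

    `readings` is a (n_time_steps, n_sensors) window of data from one cluster.
    The smallest number of PCs whose cumulative explained variance reaches
    `variance_bound` is retained.
    """
    cum = np.cumsum(PCA().fit(readings).explained_variance_ratio_)
    k = int(np.searchsorted(cum, variance_bound) + 1)

    pca = PCA(n_components=k).fit(readings)
    scores = pca.transform(readings)            # transmitted payload
    return pca, scores

def decompress(pca, scores):
    return pca.inverse_transform(scores)        # approximate original readings

# Example: 50 time steps from 10 strongly correlated sensors.
rng = np.random.default_rng(0)
base = rng.normal(size=(50, 2))
readings = base @ rng.normal(size=(2, 10)) + 0.01 * rng.normal(size=(50, 10))
pca, scores = compress_cluster(readings)
err = np.mean((decompress(pca, scores) - readings) ** 2)
print("retained PCs:", scores.shape[1], "reconstruction MSE:", err)
```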

  12. Predicting Insolvency : A comparison between discriminant analysis and logistic regression using principal components

    OpenAIRE

    Geroukis, Asterios; Brorson, Erik

    2014-01-01

    In this study, we compare the two statistical techniques logistic regression and discriminant analysis to see how well they classify companies based on clusters – made from the solvency ratio – using principal components as independent variables. The principal components are made with different financial ratios. We use cluster analysis to find groups with low, medium and high solvency ratios among 1200 different companies found on the NASDAQ stock market and use this as an a priori definition of ...

  13. Impact of Bone Marrow Radiation Dose on Acute Hematologic Toxicity in Cervical Cancer: Principal Component Analysis on High Dimensional Data

    International Nuclear Information System (INIS)

    Yun Liang; Messer, Karen; Rose, Brent S.; Lewis, John H.; Jiang, Steve B.; Yashar, Catheryn M.; Mundt, Arno J.; Mell, Loren K.

    2010-01-01

    Purpose: To study the effects of increasing pelvic bone marrow (BM) radiation dose on acute hematologic toxicity in patients undergoing chemoradiotherapy, using a novel modeling approach to preserve the local spatial dose information. Methods and Materials: The study included 37 cervical cancer patients treated with concurrent weekly cisplatin and pelvic radiation therapy. The white blood cell count nadir during treatment was used as the indicator for acute hematologic toxicity. Pelvic BM radiation dose distributions were standardized across patients by registering the pelvic BM volumes to a common template, followed by dose remapping using deformable image registration, resulting in a dose array. Principal component (PC) analysis was applied to the dose array, and the significant eigenvectors were identified by linear regression on the PCs. The coefficients for PC regression and significant eigenvectors were represented in three dimensions to identify critical BM subregions where dose accumulation is associated with hematologic toxicity. Results: We identified five PCs associated with acute hematologic toxicity. PC analysis regression modeling explained a high proportion of the variation in acute hematologic toxicity (adjusted R^2, 0.49). Three-dimensional rendering of a linear combination of the significant eigenvectors revealed patterns consistent with anatomical distributions of hematopoietically active BM. Conclusions: We have developed a novel approach that preserves spatial dose information to model effects of radiation dose on toxicity, which may be useful in optimizing radiation techniques to avoid critical subregions of normal tissues. Further validation of this approach in a large cohort is ongoing.
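
    The PC-regression step can be sketched as follows, with entirely synthetic stand-ins for the dose array and toxicity endpoint: dose maps are reduced to PC scores, toxicity is regressed on the scores, and the fitted coefficients are mapped back to voxel space to visualize dose-sensitive subregions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# Hypothetical inputs: `dose_array` is (n_patients, n_voxels), the registered
# pelvic bone-marrow dose distributions; `wbc_nadir` is the toxicity indicator.
rng = np.random.default_rng(0)
dose_array = rng.gamma(shape=2.0, scale=10.0, size=(37, 5000))
wbc_nadir = rng.normal(loc=2.5, scale=0.8, size=37)

# Reduce the voxel-wise dose maps to a handful of principal components and
# regress the toxicity endpoint on the PC scores.
pca = PCA(n_components=5)
scores = pca.fit_transform(dose_array)
reg = LinearRegression().fit(scores, wbc_nadir)

# Map the regression back to voxel space: the combination of eigen-dose maps
# weighted by the regression coefficients highlights dose-sensitive subregions.
sensitivity_map = reg.coef_ @ pca.components_        # one value per voxel
print("sensitivity map shape:", sensitivity_map.shape)
```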

  14. INCREMENTAL PRINCIPAL COMPONENT ANALYSIS BASED OUTLIER DETECTION METHODS FOR SPATIOTEMPORAL DATA STREAMS

    Directory of Open Access Journals (Sweden)

    A. Bhushan

    2015-07-01

    Full Text Available In this paper, we address outliers in spatiotemporal data streams obtained from sensors placed across geographically distributed locations. Outliers may appear in such sensor data due to various reasons such as instrumental error and environmental change. Real-time detection of these outliers is essential to prevent propagation of errors in subsequent analyses and results. Incremental Principal Component Analysis (IPCA) is one possible approach for detecting outliers in such type of spatiotemporal data streams. IPCA has been widely used in many real-time applications such as credit card fraud detection, pattern recognition, and image analysis. However, the suitability of applying IPCA for outlier detection in spatiotemporal data streams is unknown and needs to be investigated. To fill this research gap, this paper contributes by presenting two new IPCA-based outlier detection methods and performing a comparative analysis with the existing IPCA-based outlier detection methods to assess their suitability for spatiotemporal sensor data streams.
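
    One common IPCA baseline can be sketched with scikit-learn's IncrementalPCA: each incoming batch updates the model, and samples with a large reconstruction error are flagged. This is an illustrative variant, not the two new methods proposed in the paper; the threshold rule and the data are placeholders.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

class StreamingPCAOutlierDetector:
    """Reconstruction-error outlier detection with incremental PCA."""

    def __init__(self, n_components=3, alpha=3.0):
        self.ipca = IncrementalPCA(n_components=n_components)
        self.alpha = alpha
        self.threshold = None

    def process_batch(self, batch):
        # `batch`: (n_samples, n_sensors) chunk of the spatiotemporal stream.
        self.ipca.partial_fit(batch)                    # update model online
        recon = self.ipca.inverse_transform(self.ipca.transform(batch))
        err = np.sum((batch - recon) ** 2, axis=1)      # residual per sample
        level = err.mean() + self.alpha * err.std()
        if self.threshold is None:
            self.threshold = level
        outliers = np.where(err > self.threshold)[0]
        self.threshold = 0.9 * self.threshold + 0.1 * level  # adapt slowly
        return outliers

rng = np.random.default_rng(0)
detector = StreamingPCAOutlierDetector()
for _ in range(5):
    batch = rng.normal(size=(64, 8))
    batch[0] += 10.0                                    # injected anomaly
    print("flagged rows:", detector.process_batch(batch))
```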

  15. Sparse Principal Component Analysis in Medical Shape Modeling

    DEFF Research Database (Denmark)

    Sjöstrand, Karl; Stegmann, Mikkel Bille; Larsen, Rasmus

    2006-01-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims...... analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of sufficiently small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA...

  16. On combining principal components with Fisher's linear discriminants for supervised learning

    NARCIS (Netherlands)

    Pechenizkiy, M.; Tsymbal, A.; Puuronen, S.

    2006-01-01

    "The curse of dimensionality" is pertinent to many learning algorithms, and it denotes the drastic increase of computational complexity and classification error in high dimensions. In this paper, principal component analysis (PCA), parametric feature extraction (FE) based on Fisher’s linear

  17. Fault diagnosis of main coolant pump in the nuclear power station based on the principal component analysis

    International Nuclear Information System (INIS)

    Feng Junting; Xu Mi; Wang Guizeng

    2003-01-01

    The fault diagnosis method based on principal component analysis is studied. A library of fault feature directions for fifteen abnormal parameters is built in a simulation of the main coolant pump of a nuclear power station. The measured data are analyzed, and the results show that a fault diagnosis system based on principal component analysis is feasible for the main coolant pump in a nuclear power station

  18. On the structure of dynamic principal component analysis used in statistical process monitoring

    DEFF Research Database (Denmark)

    Vanhatalo, Erik; Kulahci, Murat; Bergquist, Bjarne

    2017-01-01

    When principal component analysis (PCA) is used for statistical process monitoring it relies on the assumption that data are time independent. However, industrial data will often exhibit serial correlation. Dynamic PCA (DPCA) has been suggested as a remedy for high-dimensional and time...... for determining the number of principal components to retain. The number of retained principal components is determined by visual inspection of the serial correlation in the squared prediction error statistic, Q (SPE), together with the cumulative explained variance of the model. The methods are illustrated using...... driven method to determine the maximum number of lags in DPCA with a foundation in multivariate time series analysis. The method is based on the behavior of the eigenvalues of the lagged autocorrelation and partial autocorrelation matrices. Given a specific lag structure we also propose a method...

  19. The Mediterranean Oscillation Teleconnection Index: Station-Based versus Principal Component Paradigms

    Directory of Open Access Journals (Sweden)

    Francisco Criado-Aldeanueva

    2013-01-01

    Full Text Available Two different paradigms of the Mediterranean Oscillation (MO) teleconnection index have been compared in this work: station-based definitions obtained by the difference of some climate variable between two selected points in the eastern and western basins (i.e., Algiers and Cairo, Gibraltar and Israel, Marseille and Jerusalem, or south France and the Levantine basin) and the principal component (PC) approach in which the index is obtained as the time series of the first mode of normalised sea level pressure anomalies across the extended Mediterranean region. Interannual to interdecadal precipitation (P), evaporation (E), E-P, and net heat flux have been correlated with the different MO indices to compare their relative importance in the long-term variability of heat and freshwater budgets over the Mediterranean Sea. On an annual basis, the PC paradigm is the most effective tool to assess the effect of the large-scale atmospheric forcing in the Mediterranean Sea because the station-based indices exhibit a very poor correlation with all climatic variables and only influence a reduced fraction of the basin. In winter, the station-based indices highly improve their ability to represent the atmospheric forcing and results are fairly independent of the paradigm used.

  20. Nonlinear Principal Component Analysis Using Strong Tracking Filter

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The paper analyzes the problem of blind source separation (BSS) based on the nonlinear principal component analysis (NPCA) criterion. An adaptive strong tracking filter (STF) based algorithm was developed, which is immune to system model mismatches. Simulations demonstrate that the algorithm converges quickly and has satisfactory steady-state accuracy. The Kalman filtering algorithm and the recursive least-squares type algorithm are shown to be special cases of the STF algorithm. Since the forgetting factor is adaptively updated by adjustment of the Kalman gain, the STF scheme provides more powerful tracking capability than the Kalman filtering algorithm and the recursive least-squares algorithm.

  1. Identifying the Component Structure of Satisfaction Scales by Nonlinear Principal Components Analysis

    NARCIS (Netherlands)

    Manisera, M.; Kooij, A.J. van der; Dusseldorp, E.

    2010-01-01

    The component structure of 14 Likert-type items measuring different aspects of job satisfaction was investigated using nonlinear Principal Components Analysis (NLPCA). NLPCA allows for analyzing these items at an ordinal or interval level. The participants were 2066 workers from five types of social

  2. Principal Components as a Data Reduction and Noise Reduction Technique

    Science.gov (United States)

    Imhoff, M. L.; Campbell, W. J.

    1982-01-01

    The potential of principal components as a pipeline data reduction technique for Thematic Mapper data was assessed, and the principal components transformation was examined as a noise reduction technique. Two primary factors were considered: (1) how might data reduction and noise reduction using the principal components transformation affect the extraction of accurate spectral classifications; and (2) what are the real savings, in terms of computer processing and storage costs, of using reduced data over the full 7-band TM complement. An area in central Pennsylvania was chosen as the study area. The image data for the project were collected using the Earth Resources Laboratory's thematic mapper simulator (TMS) instrument.
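
    The basic reduce-then-reconstruct operation can be sketched as below; the band count, image size and number of retained components are illustrative, and the synthetic scene stands in for TMS data.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_reduce_and_denoise(image_cube, n_keep=3):
    """Reduce a multiband image to a few PCs and reconstruct a denoised cube.

    `image_cube` is a (rows, cols, n_bands) array, e.g. seven TM-like bands.
    Keeping the leading components concentrates the correlated scene signal;
    discarding the trailing ones suppresses noise.
    """
    rows, cols, bands = image_cube.shape
    X = image_cube.reshape(-1, bands).astype(float)

    pca = PCA(n_components=n_keep)
    scores = pca.fit_transform(X)                 # reduced data to store/classify
    denoised = pca.inverse_transform(scores)      # back to band space
    return (scores.reshape(rows, cols, n_keep),
            denoised.reshape(rows, cols, bands))

# Example with a synthetic 7-band scene sharing one underlying signal.
rng = np.random.default_rng(0)
scene = rng.normal(size=(64, 64, 1)) @ np.ones((1, 7))
scene += 0.1 * rng.normal(size=(64, 64, 7))
reduced, denoised = pca_reduce_and_denoise(scene)
print("reduced shape:", reduced.shape, "denoised shape:", denoised.shape)
```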

  3. Principal component analysis for neural electron/jet discrimination in highly segmented calorimeters

    International Nuclear Information System (INIS)

    Vassali, M.R.; Seixas, J.M.

    2001-01-01

    A neural electron/jet discriminator based on calorimetry is developed for the second-level trigger system of the ATLAS detector. As preprocessing of the calorimeter information, a principal component analysis is performed on each segment of the two sections (electromagnetic and hadronic) of the calorimeter system, in order to reduce significantly the dimension of the input data space and fully explore the detailed energy deposition profile, which is provided by the highly-segmented calorimeter system. It is shown that projecting calorimeter data onto 33 segmented principal components, the discrimination efficiency of the neural classifier reaches 98.9% for electrons (with only 1% of false alarm probability). Furthermore, restricting data projection onto only 9 components, an electron efficiency of 99.1% is achieved (with 3% of false alarm), which confirms that a fast triggering system may be designed using few components

  4. Fault detection of flywheel system based on clustering and principal component analysis

    Directory of Open Access Journals (Sweden)

    Wang Rixin

    2015-12-01

    Full Text Available Considering the nonlinear, multifunctional properties of a double flywheel with closed-loop control, a two-step method combining clustering and principal component analysis is proposed to detect the two faults in the multifunctional flywheels. In the first step of the proposed algorithm, clustering is used as feature recognition to check the instructions of the “integrated power and attitude control” system, such as attitude control, energy storage or energy discharge. These commands ask the flywheel system to work in different operation modes, and therefore the relationship of the parameters in different operations defines the cluster structure of the training data. Ordering points to identify the clustering structure (OPTICS) can automatically identify these clusters from the reachability plot, and the K-means algorithm then divides the training data into the corresponding operations according to the reachability plot. Finally, the last step of the proposed model defines the relationship of the parameters in each operation through the principal component analysis (PCA) method. Compared with the PCA model, the proposed approach is capable of identifying new clusters and learning the new behavior of incoming data. The simulation results show that it can effectively detect the faults in the multifunctional flywheel system.

  5. Aerodynamic multi-objective integrated optimization based on principal component analysis

    Directory of Open Access Journals (Sweden)

    Jiangtao HUANG

    2017-08-01

    Full Text Available Based on an improved multi-objective particle swarm optimization (MOPSO) algorithm with principal component analysis (PCA) methodology, an efficient high-dimension multi-objective optimization method is proposed, which aims, as the purpose of this paper, to improve the convergence of the Pareto front in multi-objective optimization design. The mathematical efficiency, the physical reasonableness and the reliability in dealing with redundant objectives of PCA are verified by the typical DTLZ5 test function and a multi-objective correlation analysis of a supercritical airfoil, and the proposed method is integrated into the aircraft multi-disciplinary design (AMDEsign) platform, which contains aerodynamics, stealth and structure weight analysis and optimization modules. Then the proposed method is used for the multi-point integrated aerodynamic optimization of a wide-body passenger aircraft, in which the redundant objectives identified by PCA are transformed to optimization constraints, and several design methods are compared. The design results illustrate that the strategy used in this paper is sufficient and the multi-point design requirements of the passenger aircraft are reached. The visualization level of the non-dominant Pareto set is improved by effectively reducing the dimension without losing the primary features of the problem.

  6. Exploring functional data analysis and wavelet principal component analysis on ecstasy (MDMA) wastewater data

    Directory of Open Access Journals (Sweden)

    Stefania Salvatore

    2016-07-01

    Full Text Available Abstract Background Wastewater-based epidemiology (WBE) is a novel approach in drug use epidemiology which aims to monitor the extent of use of various drugs in a community. In this study, we investigate functional principal component analysis (FPCA) as a tool for analysing WBE data and compare it to traditional principal component analysis (PCA) and to wavelet principal component analysis (WPCA), which is more flexible temporally. Methods We analysed temporal wastewater data from 42 European cities collected daily over one week in March 2013. The main temporal features of ecstasy (MDMA) were extracted using FPCA using both Fourier and B-spline basis functions with three different smoothing parameters, along with PCA and WPCA with different mother wavelets and shrinkage rules. The stability of FPCA was explored through bootstrapping and analysis of sensitivity to missing data. Results The first three principal components (PCs), functional principal components (FPCs) and wavelet principal components (WPCs) explained 87.5-99.6 % of the temporal variation between cities, depending on the choice of basis and smoothing. The extracted temporal features from PCA, FPCA and WPCA were consistent. FPCA using Fourier basis and common-optimal smoothing was the most stable and least sensitive to missing data. Conclusion FPCA is a flexible and analytically tractable method for analysing temporal changes in wastewater data, and is robust to missing data. WPCA did not reveal any rapid temporal changes in the data not captured by FPCA. Overall the results suggest FPCA with Fourier basis functions and common-optimal smoothing parameter as the most accurate approach when analysing WBE data.

  7. Determining the number of components in principal components analysis: A comparison of statistical, crossvalidation and approximated methods

    NARCIS (Netherlands)

    Saccenti, E.; Camacho, J.

    2015-01-01

    Principal component analysis is one of the most commonly used multivariate tools to describe and summarize data. Determining the optimal number of components in a principal component model is a fundamental problem in many fields of application. In this paper we compare the performance of several

  8. Source Signals Separation and Reconstruction Following Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    WANG Cheng

    2014-02-01

    Full Text Available For the problem of separating and reconstructing source signals from observed signals, the physical significance of the blind source separation model and independent component analysis is not very clear, and its solution is not unique. Aiming at these disadvantages, a new linear and instantaneous mixing model and a novel method for separating and reconstructing source signals from observed signals based on principal component analysis (PCA) are put forward. The assumption of this new model is that the source signals are statistically uncorrelated rather than independent, which is different from the traditional blind source separation model. A one-to-one relationship between the linear and instantaneous mixing matrix of the new model and the linear compound matrix of PCA, and a one-to-one relationship between the uncorrelated source signals and the principal components, are demonstrated using the concepts of the linear separation matrix and uncorrelated source signals. Based on this theoretical link, the source signal separation and reconstruction problem is then changed into a PCA of the observed signals. The theoretical derivation and numerical simulation results show that, in spite of Gaussian measurement noise, both the waveform and the amplitude information of an uncorrelated source signal can be separated and reconstructed by PCA when the linear mixing matrix is column orthogonal and normalized; only the waveform information can be separated and reconstructed when the mixing matrix is column orthogonal but not normalized; and an uncorrelated source signal cannot be separated and reconstructed by PCA when the mixing matrix is not column orthogonal or the mixing is not linear.
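
    A small numerical illustration of the orthogonal-mixing case described above (a sketch, not the paper's simulation): two uncorrelated sources with different variances are mixed by an orthonormal matrix, and the PCA scores approximately recover the source waveforms up to sign and ordering.

```python
import numpy as np

def pca_separate(observed):
    """Recover uncorrelated sources from observations mixed by a
    column-orthogonal matrix; the PC scores act as the separated signals.

    `observed` is a (n_samples, n_channels) array.
    """
    Xc = observed - observed.mean(axis=0)
    # Eigen-decomposition of the covariance matrix = PCA of the observations.
    cov = Xc.T @ Xc / (Xc.shape[0] - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]            # sort by decreasing variance
    return Xc @ eigvecs[:, order]                # scores = recovered waveforms

# Example: two uncorrelated sources mixed by an orthonormal (rotation) matrix.
t = np.linspace(0, 1, 1000)
sources = np.c_[np.sin(2 * np.pi * 5 * t), np.sign(np.sin(2 * np.pi * 3 * t))]
theta = 0.6
mixing = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])   # column-orthonormal
observed = sources @ mixing.T
recovered = pca_separate(observed)   # close to `sources` up to sign/order
```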

  9. Integrating Data Transformation in Principal Components Analysis

    KAUST Repository

    Maadooliat, Mehdi; Huang, Jianhua Z.; Hu, Jianhua

    2015-01-01

    Principal component analysis (PCA) is a popular dimension reduction method to reduce the complexity and obtain the informative aspects of high-dimensional datasets. When the data distribution is skewed, data transformation is commonly used prior

  10. Functional principal component analysis of glomerular filtration rate curves after kidney transplant.

    Science.gov (United States)

    Dong, Jianghu J; Wang, Liangliang; Gill, Jagbir; Cao, Jiguo

    2017-01-01

    This article is motivated by some longitudinal clinical data of kidney transplant recipients, where kidney function progression is recorded as the estimated glomerular filtration rates at multiple time points post kidney transplantation. We propose to use the functional principal component analysis method to explore the major source of variations of glomerular filtration rate curves. We find that the estimated functional principal component scores can be used to cluster glomerular filtration rate curves. Ordering functional principal component scores can detect abnormal glomerular filtration rate curves. Finally, functional principal component analysis can effectively estimate missing glomerular filtration rate values and predict future glomerular filtration rate values.

  11. A Biometric Face Recognition System Using an Algorithm Based on the Principal Component Analysis Technique

    Directory of Open Access Journals (Sweden)

    Gheorghe Gîlcă

    2015-06-01

    Full Text Available This article deals with a recognition system using an algorithm based on the Principal Component Analysis (PCA) technique. The recognition system consists only of a PC and an integrated video camera. The algorithm is developed in the MATLAB language and calculates the eigenfaces considered as features of the face. The PCA technique is based on the matching between the facial test image and the training prototype vectors. The matching score between the facial test image and the training prototype vectors is calculated from their coefficient vectors; the higher the matching score, the better the recognition. The results of the algorithm based on the PCA technique are very good, even if the person looks at the video camera from one side.
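
    A compact Python sketch of the eigenface idea the article describes (the original system is written in MATLAB): training faces are projected onto a PCA basis, and a test face is assigned the label of the closest training coefficient vector. Image shapes and the number of components are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

class EigenfaceRecognizer:
    """Minimal eigenface-style recognizer (illustrative, names hypothetical)."""

    def __init__(self, n_components=20):
        self.pca = PCA(n_components=n_components)

    def fit(self, train_images, labels):
        # `train_images`: (n_images, height, width) grayscale training faces.
        n = train_images.shape[0]
        X = train_images.reshape(n, -1).astype(float)
        self.coeffs = self.pca.fit_transform(X)   # training coefficient vectors
        self.labels = np.asarray(labels)
        return self

    def predict(self, test_image):
        v = self.pca.transform(test_image.reshape(1, -1).astype(float))
        # Matching is done between coefficient vectors: the closest training
        # prototype (smallest distance) gives the recognized identity.
        dists = np.linalg.norm(self.coeffs - v, axis=1)
        return self.labels[np.argmin(dists)]
```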

  12. Principal Component Analysis of Body Measurements In Three ...

    African Journals Online (AJOL)

    This study was conducted to explore the relationship among body measurements in 3 strains of broilers chicken (Arbor Acre, Marshal and Ross) using principal component analysis with the view of identifying those components that define body conformation in broilers. A total of 180 birds were used, 60 per strain.

  13. Cloud Masking for Remotely Sensed Data Using Spectral and Principal Components Analysis

    Directory of Open Access Journals (Sweden)

    A. Ahmad

    2012-06-01

    Full Text Available Two methods of cloud masking tuned to tropical conditions have been developed, based on spectral analysis and Principal Components Analysis (PCA) of Moderate Resolution Imaging Spectroradiometer (MODIS) data. In the spectral approach, thresholds were applied to four reflective bands (1, 2, 3, and 4), three thermal bands (29, 31 and 32), the band 2/band 1 ratio, and the difference between band 29 and 31 in order to detect clouds. The PCA approach applied a threshold to the first principal component derived from the seven quantities used for spectral analysis. Cloud detections were compared with the standard MODIS cloud mask, and their accuracy was assessed using reference images and geographical information on the study area.
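
    The PCA masking step can be sketched as below: the per-pixel feature stack is standardized, the first principal component is computed, and a threshold on its score yields the mask. The feature set, sign convention and threshold value are assumptions; the paper's tuned threshold is not reproduced here.

```python
import numpy as np

def pca_cloud_mask(features, threshold):
    """Cloud mask from the first principal component (schematic only).

    `features`: (rows, cols, n_features) stack of the per-pixel quantities
    used in the spectral test (reflective bands, thermal bands, band ratio
    and band difference). `threshold` is an assumed scalar cutoff on PC1.
    """
    rows, cols, nfeat = features.shape
    X = features.reshape(-1, nfeat).astype(float)
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)   # standardize features

    # First principal component from the feature covariance matrix.
    cov = X.T @ X / (X.shape[0] - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    pc1 = X @ eigvecs[:, -1]                  # direction of largest variance

    # Note: the sign of PC1 is arbitrary, so the comparison direction and the
    # threshold must be tuned against reference cloud observations.
    return (pc1 > threshold).reshape(rows, cols)
```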

  14. Detecting Market Transitions and Energy Futures Risk Management Using Principal Components

    NARCIS (Netherlands)

    Borovkova, S.A.

    2006-01-01

    An empirical approach to analysing the forward curve dynamics of energy futures is presented. For non-seasonal commodities-such as crude oil-the forward curve is well described by the first three principal components: the level, slope and curvature. A principal component indicator is described that

  15. Design and validation of a morphing myoelectric hand posture controller based on principal component analysis of human grasping.

    Science.gov (United States)

    Segil, Jacob L; Weir, Richard F ff

    2014-03-01

    An ideal myoelectric prosthetic hand should have the ability to continuously morph between any posture like an anatomical hand. This paper describes the design and validation of a morphing myoelectric hand controller based on principal component analysis of human grasping. The controller commands continuously morphing hand postures including functional grasps using between two and four surface electromyography (EMG) electrode pairs. Four unique maps were developed to transform the EMG control signals in the principal component domain. A preliminary validation experiment was performed by 10 nonamputee subjects to determine the map with the highest performance. The subjects used the myoelectric controller to morph a virtual hand between functional grasps in a series of randomized trials. The number of joints controlled accurately was evaluated to characterize the performance of each map. Additional metrics were studied including completion rate, time to completion, and path efficiency. The highest-performing map controlled over 13 out of 15 joints accurately.

  16. Principal component reconstruction (PCR) for cine CBCT with motion learning from 2D fluoroscopy.

    Science.gov (United States)

    Gao, Hao; Zhang, Yawei; Ren, Lei; Yin, Fang-Fang

    2018-01-01

    This work aims to generate cine CT images (i.e., 4D images with high-temporal resolution) based on a novel principal component reconstruction (PCR) technique with motion learning from 2D fluoroscopic training images. In the proposed PCR method, the matrix factorization is utilized as an explicit low-rank regularization of 4D images that are represented as a product of spatial principal components and temporal motion coefficients. The key hypothesis of PCR is that temporal coefficients from 4D images can be reasonably approximated by temporal coefficients learned from 2D fluoroscopic training projections. For this purpose, we can acquire fluoroscopic training projections for a few breathing periods at fixed gantry angles that are free from geometric distortion due to gantry rotation, that is, fluoroscopy-based motion learning. Such training projections can provide an effective characterization of the breathing motion. The temporal coefficients can be extracted from these training projections and used as priors for PCR, even though principal components from training projections are certainly not the same for these 4D images to be reconstructed. For this purpose, training data are synchronized with reconstruction data using identical real-time breathing position intervals for projection binning. In terms of image reconstruction, with a priori temporal coefficients, the data fidelity for PCR changes from nonlinear to linear, and consequently, the PCR method is robust and can be solved efficiently. PCR is formulated as a convex optimization problem with the sum of linear data fidelity with respect to spatial principal components and spatiotemporal total variation regularization imposed on 4D image phases. The solution algorithm of PCR is developed based on alternating direction method of multipliers. The implementation is fully parallelized on GPU with NVIDIA CUDA toolbox and each reconstruction takes about a few minutes. The proposed PCR method is validated and

  17. Fault feature extraction method based on local mean decomposition Shannon entropy and improved kernel principal component analysis model

    Directory of Open Access Journals (Sweden)

    Jinlu Sheng

    2016-07-01

    Full Text Available To effectively extract the typical features of a bearing, a new method that combines local mean decomposition Shannon entropy with an improved kernel principal component analysis model is proposed. First, features are extracted with a time–frequency domain method, local mean decomposition, and the Shannon entropy of the separated product functions is computed to obtain the original features. However, the extracted features still contain superfluous information, so a nonlinear multi-feature fusion technique, kernel principal component analysis, is introduced to fuse them. The kernel principal component analysis is improved with a weighting factor. The fused features were then input to a Morlet wavelet kernel support vector machine to obtain a bearing running-state classification model, and the bearing running state was thereby identified. Both test cases and actual cases were analyzed.
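
    A rough sketch of the feature-fusion and classification stages only, using scikit-learn's standard KernelPCA and an RBF-kernel SVM as stand-ins; the local mean decomposition step, the weighting factor, and the Morlet wavelet kernel SVM from the paper are not reproduced, and the feature dimensions and labels are hypothetical.

```python
# Fuse entropy-style vibration features with kernel PCA, then classify the
# bearing running state with an SVM. Data and labels are synthetic stand-ins.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.random((120, 12))                       # 12 entropy features per sample (hypothetical)
y = rng.integers(0, 3, size=120)                # three bearing states (hypothetical labels)

model = make_pipeline(
    StandardScaler(),
    KernelPCA(n_components=4, kernel="rbf", gamma=0.5),
    SVC(kernel="rbf"),
)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```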

  18. PEMBUATAN PERANGKAT LUNAK PENGENALAN WAJAH MENGGUNAKAN PRINCIPAL COMPONENTS ANALYSIS

    Directory of Open Access Journals (Sweden)

    Kartika Gunadi

    2001-01-01

    Full Text Available Face recognition is an important research area, and today many applications implement it. Through the development of techniques like Principal Components Analysis (PCA), computers can now outperform humans in many face recognition tasks, particularly those in which a large database of faces must be searched. Principal Components Analysis was used to reduce the facial image dimension into fewer variables, which are easier to observe and handle. Those variables were then fed into artificial neural networks trained with the backpropagation method to recognise the given facial image. The test results show that PCA can provide high face recognition accuracy. For the training faces, a correct identification rate of 100% could be obtained. From the network combinations that were tested, a best average correct identification of 91.11% could be obtained for the test faces, while the worst average result is 46.67% correct identification.

  19. Integrating Data Transformation in Principal Components Analysis

    KAUST Repository

    Maadooliat, Mehdi

    2015-01-02

    Principal component analysis (PCA) is a popular dimension reduction method to reduce the complexity and obtain the informative aspects of high-dimensional datasets. When the data distribution is skewed, data transformation is commonly used prior to applying PCA. Such transformation is usually obtained from previous studies, prior knowledge, or trial-and-error. In this work, we develop a model-based method that integrates data transformation in PCA and finds an appropriate data transformation using the maximum profile likelihood. Extensions of the method to handle functional data and missing values are also developed. Several numerical algorithms are provided for efficient computation. The proposed method is illustrated using simulated and real-world data examples.
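
    As a simplified stand-in for the integrated approach (which estimates the transformation jointly with the PCA model via a profile likelihood), the sketch below applies a per-variable Box-Cox transformation chosen by maximum likelihood and then runs PCA, simply to illustrate why transforming skewed data changes the PCA solution. The data are synthetic.

```python
# Two-step illustration: Box-Cox transform each (positive, skewed) variable
# by maximum likelihood, then compare the PCA variance-explained profile of
# the raw and transformed data.
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
X = rng.lognormal(mean=0.0, sigma=1.0, size=(200, 5))   # skewed synthetic data

X_t = np.column_stack([stats.boxcox(X[:, j])[0] for j in range(X.shape[1])])

for label, data in [("raw", X), ("transformed", X_t)]:
    ratio = PCA(n_components=2).fit(data).explained_variance_ratio_
    print(label, ratio)
```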

  20. Finger crease pattern recognition using Legendre moments and principal component analysis

    Science.gov (United States)

    Luo, Rongfang; Lin, Tusheng

    2007-03-01

    The finger joint lines, defined as finger creases, and their distribution can identify a person. In this paper, we propose a new finger crease pattern recognition method based on Legendre moments and principal component analysis (PCA). After obtaining the region of interest (ROI) for each finger image in the pre-processing stage, Legendre moments under the Radon transform are applied to construct a moment feature matrix from the ROI, which greatly decreases the dimensionality of the ROI and can represent principal components of the finger creases quite well. Then, an approach to finger crease pattern recognition is designed based on the Karhunen-Loeve (K-L) transform. The method applies PCA to a moment feature matrix rather than the original image matrix to obtain the feature vector. The proposed method has been tested on a database of 824 images from 103 individuals using the nearest neighbor classifier. An accuracy of up to 98.584% has been obtained when using 4 samples per class for training. The experimental results demonstrate that our proposed approach is feasible and effective in biometrics.

  1. Experimental and principal component analysis of waste ...

    African Journals Online (AJOL)

    The present study is aimed at determining through principal component analysis the most important variables affecting bacterial degradation in ponds. Data were collected from literature. In addition, samples were also collected from the waste stabilization ponds at the University of Nigeria, Nsukka and analyzed to ...

  2. A Genealogical Interpretation of Principal Components Analysis

    Science.gov (United States)

    McVean, Gil

    2009-01-01

    Principal components analysis, PCA, is a statistical method commonly used in population genetics to identify structure in the distribution of genetic variation across geographical location and ethnic background. However, while the method is often used to inform about historical demographic processes, little is known about the relationship between fundamental demographic parameters and the projection of samples onto the primary axes. Here I show that for SNP data the projection of samples onto the principal components can be obtained directly from considering the average coalescent times between pairs of haploid genomes. The result provides a framework for interpreting PCA projections in terms of underlying processes, including migration, geographical isolation, and admixture. I also demonstrate a link between PCA and Wright's fst and show that SNP ascertainment has a largely simple and predictable effect on the projection of samples. Using examples from human genetics, I discuss the application of these results to empirical data and the implications for inference. PMID:19834557

  3. Assessing prescription drug abuse using functional principal component analysis (FPCA) of wastewater data.

    Science.gov (United States)

    Salvatore, Stefania; Røislien, Jo; Baz-Lomba, Jose A; Bramness, Jørgen G

    2017-03-01

    Wastewater-based epidemiology is an alternative method for estimating the collective drug use in a community. We applied functional data analysis, a statistical framework developed for analysing curve data, to investigate weekly temporal patterns in wastewater measurements of three prescription drugs with known abuse potential: methadone, oxazepam and methylphenidate, comparing them to positive and negative control drugs. Sewage samples were collected in February 2014 from a wastewater treatment plant in Oslo, Norway. The weekly pattern of each drug was extracted by fitting generalized additive models, using trigonometric functions to model the cyclic behaviour. From the weekly component, the main temporal features were then extracted using functional principal component analysis. Results are presented through the functional principal components (FPCs) and corresponding FPC scores. Clinically, the most important weekly feature of the wastewater-based epidemiology data was the second FPC, representing the difference between the average midweek level and a peak during the weekend, which suggests possible recreational use of a drug at the weekend. Estimated scores on this FPC indicated recreational use of methylphenidate, with a high weekend peak, but not for methadone and oxazepam. The functional principal component analysis uncovered clinically important temporal features of the weekly patterns of the use of prescription drugs detected from wastewater analysis. This may be used as a post-marketing surveillance method to monitor prescription drugs with abuse potential. Copyright © 2016 John Wiley & Sons, Ltd.

  4. Teaching Principal Components Using Correlations.

    Science.gov (United States)

    Westfall, Peter H; Arias, Andrea L; Fulton, Lawrence V

    2017-01-01

    Introducing principal components (PCs) to students is difficult. First, the matrix algebra and mathematical maximization lemmas are daunting, especially for students in the social and behavioral sciences. Second, the standard motivation involving variance maximization subject to unit length constraint does not directly connect to the "variance explained" interpretation. Third, the unit length and uncorrelatedness constraints of the standard motivation do not allow re-scaling or oblique rotations, which are common in practice. Instead, we propose to motivate the subject in terms of optimizing (weighted) average proportions of variance explained in the original variables; this approach may be more intuitive, and hence easier to understand because it links directly to the familiar "R-squared" statistic. It also removes the need for unit length and uncorrelatedness constraints, provides a direct interpretation of "variance explained," and provides a direct answer to the question of whether to use covariance-based or correlation-based PCs. Furthermore, the presentation can be made without matrix algebra or optimization proofs. Modern tools from data science, including heat maps and text mining, provide further help in the interpretation and application of PCs; examples are given. Together, these techniques may be used to revise currently used methods for teaching and learning PCs in the behavioral sciences.
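
    A small numerical illustration, not taken from the article, of the "variance explained in the original variables" view it advocates: for standardized data, the squared correlation (R-squared) between each variable and a principal component averages, across variables, to the usual proportion of variance explained by that component.

```python
# For standardized data, the average R-squared of the original variables on
# PC1 matches PC1's explained variance ratio.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
latent = rng.normal(size=(300, 1))
X = latent @ rng.normal(size=(1, 4)) + 0.5 * rng.normal(size=(300, 4))

Z = StandardScaler().fit_transform(X)
pca = PCA().fit(Z)
pc1 = Z @ pca.components_[0]                    # scores on the first component

r2 = np.array([np.corrcoef(Z[:, j], pc1)[0, 1] ** 2 for j in range(Z.shape[1])])
print("R^2 of each variable on PC1:", r2)
print("average R^2:", r2.mean())
print("variance explained by PC1:", pca.explained_variance_ratio_[0])
```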

  5. The use of principal components and univariate charts to control multivariate processes

    Directory of Open Access Journals (Sweden)

    Marcela A. G. Machado

    2008-04-01

    Full Text Available In this article, we evaluate the performance of the T² chart based on the principal components (PC X chart) and of the simultaneous univariate control charts based on the original variables (SU charts) or on the principal components (SUPC charts). The main reason to consider the PC chart lies in the dimensionality reduction. However, depending on the disturbance and on the way the original variables are related, the chart is very slow in signaling, except when all variables are negatively correlated and the principal component is wisely selected. Comparing the SU, the SUPC and the T² charts, we conclude that the SU X charts (SUPC charts) have a better overall performance when the variables are positively (negatively) correlated. We also develop the expression to obtain the power of two S² charts designed for monitoring the covariance matrix. These joint S² charts are, in the majority of cases, more efficient than the generalized variance chart.

  6. Physicochemical properties of different corn varieties by principal components analysis and cluster analysis

    International Nuclear Information System (INIS)

    Zeng, J.; Li, G.; Sun, J.

    2013-01-01

    Principal components analysis and cluster analysis were used to investigate the properties of different corn varieties. The chemical compositions and some properties of corn flour processed by dry milling were determined. The results showed that the chemical compositions and physicochemical properties differed significantly among the twenty-six corn varieties. Corn flour quality was described by five principal components from the principal component analysis, and the contribution of starch pasting properties was the most important, accounting for 48.90%. The twenty-six corn varieties could be classified into four groups by cluster analysis. The consistency between the principal components analysis and the cluster analysis indicated that multivariate analyses are feasible in the study of corn variety properties. (author)

  7. A novel normalization method based on principal component analysis to reduce the effect of peak overlaps in two-dimensional correlation spectroscopy

    Science.gov (United States)

    Wang, Yanwei; Gao, Wenying; Wang, Xiaogong; Yu, Zhiwu

    2008-07-01

    Two-dimensional correlation spectroscopy (2D-COS) has been widely used to separate overlapped spectroscopic bands. However, band overlap may sometimes cause misleading results in the 2D-COS spectra, especially if one peak is embedded within another peak by the overlap. In this work, we propose a new normalization method, based on principal component analysis (PCA). For each spectrum under discussion, the first principal component of PCA is simply taken as the normalization factor of the spectrum. It is demonstrated that the method works well with simulated dynamic spectra. Successful results have also been obtained from the analysis of an overlapped band in the wavenumber range 1440-1486 cm-1 for the evaporation process of a solution containing behenic acid, methanol, and chloroform.

  8. Assessment of drinking water quality using principal component ...

    African Journals Online (AJOL)

    Assessment of drinking water quality using principal component analysis and partial least square discriminant analysis: a case study at water treatment plants, ... water and to detect the source of pollution for the most revealing parameters.

  9. Principal component analysis of psoriasis lesions images

    DEFF Research Database (Denmark)

    Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær

    2003-01-01

    A set of RGB images of psoriasis lesions is used. By visual examination of these images, there seems to be no common pattern that could be used to find and align the lesions within and between sessions. It is expected that the principal components of the original images could be useful during future...

  10. Principal Component Analysis as an Efficient Performance ...

    African Journals Online (AJOL)

    This paper uses the principal component analysis (PCA) to examine the possibility of using few explanatory variables (X's) to explain the variation in Y. It applied PCA to assess the performance of students in Abia State Polytechnic, Aba, Nigeria. This was done by estimating the coefficients of eight explanatory variables in a ...

  11. Empirical research on financial capability evaluation of A-share listed companies in the securities industry based on principal component analysis

    Directory of Open Access Journals (Sweden)

    Xiuping Wang

    2017-11-01

    Full Text Available Based on the relevant financial data indicators of the A-share markets of Shanghai and Shenzhen in 2009, with all 29 listed companies in the securities industry as the research objects, this paper selects 10 variables that fully reflect the financial capability indicators and uses principal component analysis to carry out empirical research on financial capability. The results show that the comprehensive financial capability of listed companies in the A-share securities industry must focus on the following four aspects: investment and income, profit, capital composition and debt repayment, and cash flow indicators. In addition, principal component analysis can effectively evaluate the financial capability of listed companies in the A-share securities industry and solves problems of previous analysis methods, such as excessive indicators and information overlapping.

  12. Learning representative features for facial images based on a modified principal component analysis

    Science.gov (United States)

    Averkin, Anton; Potapov, Alexey

    2013-05-01

    The paper is devoted to facial image analysis and particularly deals with the problem of automatic evaluation of the attractiveness of human faces. We propose a new approach for automatic construction of a feature space based on a modified principal component analysis. Input data sets for the algorithm are learning data sets of facial images, which are rated by one person. The proposed approach allows one to extract features of the individual subjective face beauty perception and to predict attractiveness values for new facial images, which were not included in the learning data set. The Pearson correlation coefficient between values predicted by our method for new facial images and personal attractiveness estimation values equals 0.89. This indicates that the proposed approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.

  13. An improved principal component analysis based region matching method for fringe direction estimation

    Science.gov (United States)

    He, A.; Quan, C.

    2018-04-01

    The principal component analysis (PCA) and region matching combined method is effective for fringe direction estimation. However, its mask construction algorithm for region matching fails in some circumstances, and the algorithm for conversion of orientation to direction in mask areas is computationally-heavy and non-optimized. We propose an improved PCA based region matching method for the fringe direction estimation, which includes an improved and robust mask construction scheme, and a fast and optimized orientation-direction conversion algorithm for the mask areas. Along with the estimated fringe direction map, filtered fringe pattern by automatic selective reconstruction modification and enhanced fast empirical mode decomposition (ASRm-EFEMD) is used for Hilbert spiral transform (HST) to demodulate the phase. Subsequently, windowed Fourier ridge (WFR) method is used for the refinement of the phase. The robustness and effectiveness of proposed method are demonstrated by both simulated and experimental fringe patterns.

  14. Principal components analysis of an evaluation of the hemiplegic subject based on the Bobath approach.

    Science.gov (United States)

    Corriveau, H; Arsenault, A B; Dutil, E; Lepage, Y

    1992-01-01

    An evaluation based on the Bobath approach to treatment has previously been developed and partially validated. The purpose of the present study was to verify the content validity of this evaluation with the use of a statistical approach known as principal components analysis. Thirty-eight hemiplegic subjects participated in the study. Scores on each of six parameters (sensorium, active movements, muscle tone, reflex activity, postural reactions, and pain) were analyzed on three occasions across a 2-month period. Each time this produced three factors that contained 70% of the variation in the data set. The first component mainly reflected variations in mobility, the second mainly variations in muscle tone, and the third mainly variations in sensorium and pain. The results of such exploratory analysis highlight the fact that some of the parameters are not only important but also interrelated. These results seem to partially support the conceptual framework substantiating the Bobath approach to treatment.

  15. Influencing Factors of Catering and Food Service Industry Based on Principal Component Analysis

    OpenAIRE

    Zi Tang

    2014-01-01

    Scientific analysis of influencing factors is of great importance for the healthy development of catering and food service industry. This study attempts to present a set of critical indicators for evaluating the contribution of influencing factors to catering and food service industry in the particular context of Harbin City, Northeast China. Ten indicators that correlate closely with catering and food service industry were identified and performed by the principal component analysis method u...

  16. Identifying apple surface defects using principal components analysis and artificial neural networks

    Science.gov (United States)

    Artificial neural networks and principal components were used to detect surface defects on apples in near-infrared images. Neural networks were trained and tested on sets of principal components derived from columns of pixels from images of apples acquired at two wavelengths (740 nm and 950 nm). I...

  17. The principal phenolic and alcoholic components of wine protect human lymphocytes against hydrogen peroxide- and ionising radiation-induced DNA damage in vitro

    International Nuclear Information System (INIS)

    Fenech, M.; Greenrod, W.

    2003-01-01

    We have tested the hypothesis that the alcoholic and phenolic components of wine are protective against the DNA damaging and cytotoxic effects of hydrogen peroxide and gamma radiation in vitro. The components of wine tested were ethanol, glycerol, a mixture of the phenolic compounds catechin and caffeic acid, and tartaric acid, all at concentrations that were 2.5% or 10.0% of the concentration in a typical Australian white wine (Riesling). These components were tested individually or combined as a mixture and compared to a white wine stripped of polyphenols as well as a Hanks balanced salt solution control, which was the diluent for the wine components. The effect of the components was tested in lymphocytes, using the cytokinesis-block micronucleus assay, after 30 minutes of incubation in plasma or whole blood for the hydrogen peroxide or gamma-radiation challenge, respectively. The results obtained showed that ethanol, glycerol, the catechin-caffeic acid mixture, the mixture of all components, and the stripped white wine significantly reduced the DNA damaging effects of hydrogen peroxide and gamma radiation (ANOVA P = 0.043 - 0.001). The strongest protective effect against DNA damage by gamma irradiation was observed for the catechin-caffeic acid mixture and the mixture of all components (30% and 32% reduction, respectively). These two treatments as well as ethanol produced the strongest protective effects against DNA damage by hydrogen peroxide (24%, 25% and 18%, respectively). The protection provided by the mixture did not account for the expected additive protective effects of the individual components, suggesting that the components may be exerting their effects through similar mechanisms that are saturated at the concentrations tested. Ethanol was the only component that significantly increased the baseline DNA damage rate; however, this effect was negated in the mixture. In conclusion, our results suggest that the main phenolic and alcoholic components of wine can reduce

  18. A multi-dimensional functional principal components analysis of EEG data.

    Science.gov (United States)

    Hasenstab, Kyle; Scheffler, Aaron; Telesca, Donatello; Sugar, Catherine A; Jeste, Shafali; DiStefano, Charlotte; Şentürk, Damla

    2017-09-01

    The electroencephalography (EEG) data created in event-related potential (ERP) experiments have a complex high-dimensional structure. Each stimulus presentation, or trial, generates an ERP waveform which is an instance of functional data. The experiments are made up of sequences of multiple trials, resulting in longitudinal functional data and moreover, responses are recorded at multiple electrodes on the scalp, adding an electrode dimension. Traditional EEG analyses involve multiple simplifications of this structure to increase the signal-to-noise ratio, effectively collapsing the functional and longitudinal components by identifying key features of the ERPs and averaging them across trials. Motivated by an implicit learning paradigm used in autism research in which the functional, longitudinal, and electrode components all have critical interpretations, we propose a multidimensional functional principal components analysis (MD-FPCA) technique which does not collapse any of the dimensions of the ERP data. The proposed decomposition is based on separation of the total variation into subject and subunit level variation which are further decomposed in a two-stage functional principal components analysis. The proposed methodology is shown to be useful for modeling longitudinal trends in the ERP functions, leading to novel insights into the learning patterns of children with Autism Spectrum Disorder (ASD) and their typically developing peers as well as comparisons between the two groups. Finite sample properties of MD-FPCA are further studied via extensive simulations. © 2017, The International Biometric Society.

  19. Principal Component Analysis: Most Favourite Tool in Chemometrics

    Indian Academy of Sciences (India)

    Abstract. Principal component analysis (PCA) is the most commonly used chemometric technique. It is an unsupervised pattern recognition technique. PCA has found applications in chemistry, biology, medicine and economics. The present work attempts to understand how PCA works and how we can interpret its results.

  20. A kernel principal component analysis–based degradation model and remaining useful life estimation for the turbofan engine

    Directory of Open Access Journals (Sweden)

    Delong Feng

    2016-05-01

    Full Text Available Remaining useful life estimation in the prognostics and health management technique is a complicated and difficult research question for maintenance. In this article, we consider the problem of prognostics modeling and estimation of the turbofan engine under complicated circumstances and propose a kernel principal component analysis–based degradation model and remaining useful life estimation method for such an aircraft engine. We first analyze the output data created by the turbofan engine thermodynamic simulation using the kernel principal component analysis method and then distinguish the qualitative and quantitative relationships between the key factors. Next, we build a degradation model for the engine fault based on the following assumptions: the engine has only constant failure (i.e., no sudden failure is included), and the engine has a Wiener process, which is a covariate standing for the engine system drift. To predict the remaining useful life of the turbofan engine, we built a health index based on the degradation model and used the method of maximum likelihood and the data from the thermodynamic simulation model to estimate the parameters of this degradation model. Through the data analysis, we obtained a trend model of the regression curve that fits the actual statistical data. Based on the predicted health index model and the data trend model, we estimate the remaining useful life of the aircraft engine as the point where the index reaches zero. Finally, a case study involving engine simulation data demonstrates the precision and performance advantages of the proposed method; the precision of the method can reach 98.9% and the average precision is 95.8%.

  1. Radiosurgical treatment planning for intracranial AVM based on images generated by principal component analysis. A simulation study

    International Nuclear Information System (INIS)

    Kawaguchi, Osamu; Kunieda, Etsuo; Nyui, Yoshiyuki

    2009-01-01

    One of the most important factors in stereotactic radiosurgery (SRS) for intracranial arteriovenous malformation (AVM) is to determine accurate target delineation of the nidus. However, since intracranial AVMs are complicated in structure, it is often difficult to clearly determine the target delineation. The purpose of this study was to investigate the usefulness of principal component analysis (PCA) on intra-arterial contrast enhanced dynamic CT (IADCT) images as a tool for delineating accurate target volumes for stereotactic radiosurgery of AVMs. IADCT and intravenous contrast-enhanced CT (IVCT) were used to examine 4 randomly selected cases of AVM. PCA images were generated from the IADCT data. The first component images were considered feeding artery predominant, the second component images were considered draining vein predominant, and the third component images were considered background. Target delineations were first carried out from IVCT, and then again while referring to the first and second components of the PCA images. Dose calculation simulations for radiosurgical treatment plans with IVCT and PCA images were performed. Dose volume histograms of the vein areas as well as the target volumes were compared. In all cases, the calculated target volumes based on IVCT images were larger than those based on PCA images, and the irradiation doses for the vein areas were reduced. In this study, we simulated radiosurgical treatment planning for intracranial AVM based on PCA images. By using PCA images, the irradiation doses for the vein areas were substantially reduced. (author)

  2. Principal component analysis of dynamic fluorescence images for diagnosis of diabetic vasculopathy

    Science.gov (United States)

    Seo, Jihye; An, Yuri; Lee, Jungsul; Ku, Taeyun; Kang, Yujung; Ahn, Chulwoo; Choi, Chulhee

    2016-04-01

    Indocyanine green (ICG) fluorescence imaging has been clinically used for noninvasive visualizations of vascular structures. We have previously developed a diagnostic system based on dynamic ICG fluorescence imaging for sensitive detection of vascular disorders. However, because high-dimensional raw data were used, the analysis of the ICG dynamics proved difficult. We used principal component analysis (PCA) in this study to extract important elements without significant loss of information. We examined ICG spatiotemporal profiles and identified critical features related to vascular disorders. PCA time courses of the first three components showed a distinct pattern in diabetic patients. Among the major components, the second principal component (PC2) represented arterial-like features. The explained variance of PC2 in diabetic patients was significantly lower than in normal controls. To visualize the spatial pattern of PCs, pixels were mapped with red, green, and blue channels. The PC2 score showed an inverse pattern between normal controls and diabetic patients. We propose that PC2 can be used as a representative bioimaging marker for the screening of vascular diseases. It may also be useful in simple extractions of arterial-like features.
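
    A toy sketch of the analysis idea, not the authors' pipeline: each pixel's intensity time course is treated as a sample, PCA compresses the time dimension, and the first three component scores are rescaled and mapped to red, green, and blue channels to visualize spatial patterns. Image dimensions and data are synthetic.

```python
# PCA of per-pixel time courses from a dynamic image sequence, with the
# first three component scores mapped to RGB channels for visualization.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
rows, cols, frames = 64, 64, 120
video = rng.random((rows, cols, frames))        # stand-in for an ICG image sequence

X = video.reshape(-1, frames)                   # one time course per pixel
scores = PCA(n_components=3).fit_transform(X)

# scale each component's score map to [0, 1] and stack as an RGB image
lo, hi = scores.min(axis=0), scores.max(axis=0)
rgb = ((scores - lo) / (hi - lo)).reshape(rows, cols, 3)
print(rgb.shape, rgb.min(), rgb.max())
```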

  3. EXAFS and principal component analysis : a new shell game

    International Nuclear Information System (INIS)

    Wasserman, S.

    1998-01-01

    The use of principal component (factor) analysis in the analysis of EXAFS spectra is described. The components derived from EXAFS spectra share mathematical properties with the original spectra. As a result, the abstract components can be analyzed using standard EXAFS methodology to yield the bond distances and other coordination parameters. The number of components that must be analyzed is usually less than the number of original spectra. The method is demonstrated using a series of spectra from aqueous solutions of uranyl ions.

  4. Development of radiation hard components for remote maintenance

    International Nuclear Information System (INIS)

    Oka, Kiyoshi; Obara, Kenjiro; Kakudate, Satoshi; Tominaga, Ryuichiro; Akada, Tamio; Morita, Hirosuke.

    1997-01-01

    In International Thermonuclear Experimental Reactor (ITER), in-vessel remote-handling is inevitably required to assemble and maintain activated in-vessel components due to D-T operation. The components of the in-vessel remote-handling system must have sufficient radiation hardness to allow for operation under an intense gamma-ray radiation of over 30 kGy/h for periods up to more than 1,000 hours. To this end, extensive irradiation tests and quality improvements including the optimization of material composition have been conducted through the ITER R and D program in order to develop radiation hard components which satisfy radiation doses from 10 MGy to 100 MGy at the dose rate of 10 kGy/h. This paper outlines the latest status of the radiation hard component development that has been conducted as the Japan Home Team's contribution to ITER. The remote-handling components tested are categorized into either robotics, viewing systems or common components. The irradiation tests include commercial base products for screening both modified and newly developed products to improve their radiation hardness. (author)

  5. Principal component analysis identifies patterns of cytokine expression in non-small cell lung cancer patients undergoing definitive radiation therapy.

    Directory of Open Access Journals (Sweden)

    Susannah G Ellsworth

    Full Text Available Radiation treatment (RT) stimulates the release of many immunohumoral factors, complicating the identification of clinically significant cytokine expression patterns. This study used principal component analysis (PCA) to analyze cytokines in non-small cell lung cancer (NSCLC) patients undergoing RT and explore differences in changes after hypofractionated stereotactic body radiation therapy (SBRT) and conventionally fractionated RT (CFRT) without or with chemotherapy. The dataset included 141 NSCLC patients treated on prospective clinical protocols; PCA was based on the 128 patients who had complete CK values at baseline and during treatment. Patients underwent SBRT (n = 16), CFRT (n = 18), or CFRT with concurrent chemotherapy (ChRT) (n = 107). Levels of 30 cytokines were measured from prospectively collected platelet-poor plasma samples at baseline, during RT, and after RT. PCA was used to study variations in cytokine levels in patients at each time point. Median patient age was 66, and 22.7% of patients were female. PCA showed that sCD40l, fractalkine/C3, IP10, VEGF, IL-1a, IL-10, and GMCSF were responsible for most variability in baseline cytokine levels. During treatment, sCD40l, IP10, MIP-1b, fractalkine, IFN-γ, and VEGF accounted for most changes in cytokine levels. In SBRT patients, the most important players were sCD40l, IP10, and MIP-1b, whereas fractalkine exhibited greater variability in CFRT-alone patients. ChRT patients exhibited variability in IFN-γ and VEGF in addition to IP10, MIP-1b, and sCD40l. PCA can identify potentially significant patterns of cytokine expression after fractionated RT. Our PCA showed that inflammatory cytokines dominate post-treatment cytokine profiles, and the changes differ after SBRT versus CFRT, with vs. without chemotherapy. Further studies are planned to validate these findings and determine the clinical significance of the cytokine profiles identified by PCA.

  6. Fast noise level estimation algorithm based on principal component analysis transform and nonlinear rectification

    Science.gov (United States)

    Xu, Shaoping; Zeng, Xiaoxia; Jiang, Yinnan; Tang, Yiling

    2018-01-01

    We proposed a noniterative principal component analysis (PCA)-based noise level estimation (NLE) algorithm that addresses the problem of estimating the noise level with a two-step scheme. First, we randomly extracted a number of raw patches from a given noisy image and took the smallest eigenvalue of the covariance matrix of the raw patches as the preliminary estimate of the noise level. Next, the final estimate was directly obtained with a nonlinear mapping (rectification) function that was trained on representative noisy images corrupted with different known noise levels. Compared with state-of-the-art NLE algorithms, the experimental results show that the proposed NLE algorithm can reliably infer the noise level and has robust performance over a wide range of image contents and noise levels, showing a good compromise between speed and accuracy in general.
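
    A minimal sketch of the first (PCA) step described above, assuming Gaussian noise added to a smooth stand-in image; the trained nonlinear rectification stage is omitted, and the patch size and patch count are arbitrary choices.

```python
# Preliminary noise-level estimate: draw random patches, form their covariance
# matrix, and take the square root of its smallest eigenvalue.
import numpy as np

rng = np.random.default_rng(7)
clean = np.zeros((256, 256))                    # smooth stand-in image
sigma = 10.0
noisy = clean + rng.normal(0, sigma, clean.shape)

def estimate_noise_std(img, patch=7, n_patches=5000, rng=rng):
    h, w = img.shape
    ys = rng.integers(0, h - patch, n_patches)
    xs = rng.integers(0, w - patch, n_patches)
    patches = np.stack([img[y:y + patch, x:x + patch].ravel() for y, x in zip(ys, xs)])
    cov = np.cov(patches, rowvar=False)
    smallest_eig = np.linalg.eigvalsh(cov)[0]   # eigvalsh returns ascending eigenvalues
    return np.sqrt(max(smallest_eig, 0.0))

print(estimate_noise_std(noisy))                # should be close to sigma = 10
```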

  7. Principal Component and Cluster Analysis as a Tool in the Assessment of Tomato Hybrids and Cultivars

    Directory of Open Access Journals (Sweden)

    G. Evgenidis

    2011-01-01

    Full Text Available Determination of germplasm diversity and genetic relationships among breeding materials is an invaluable aid in crop improvement strategies. This study assessed the breeding value of tomato source material. Two commercial hybrids along with an experimental hybrid and four cultivars were assessed with cluster and principal component analyses based on morphophysiological data, yield and quality, stability of performance, heterosis, and combining abilities. The assessment of the commercial hybrids revealed a related origin and consequently does not support the identification of promising offspring in their crossing. The assessment of the cultivars discriminated them according to origin and evolutionary and selection effects. On Principal Component 1, the largest group with positive loadings included yield components, heterosis, and general and specific combining ability, whereas the largest negative loadings were obtained by qualitative and descriptive traits. Principal Component 2 revealed two smaller groups, a positive one with phenotypic traits and a negative one with tolerance to inbreeding. Stability of performance was loaded positively and/or negatively. In conclusion, combining ability, yield components, and heterosis provided a mechanism for ensuring continued improvement in plant selection programs.

  8. Combined principal component preprocessing and n-tuple neural networks for improved classification

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar; Linneberg, Christian

    2000-01-01

    We present a combined principal component analysis/neural network scheme for classification. The data used to illustrate the method consist of spectral fluorescence recordings from seven different production facilities, and the task is to relate an unknown sample to one of these seven factories. The data are first preprocessed by performing an individual principal component analysis on each of the seven groups of data. The components found are then used for classifying the data, but instead of making a single multiclass classifier, we follow the ideas of turning a multiclass problem into a number of two-class problems. For each possible pair of classes we further apply a transformation to the calculated principal components in order to increase the separation between the classes. Finally we apply the so-called n-tuple neural network to the transformed data in order to give the classification.

  9. Scalable Robust Principal Component Analysis Using Grassmann Averages

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi

    2016-01-01

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortu...

  10. Efficacy of the Principal Components Analysis Techniques Using ...

    African Journals Online (AJOL)

    Second, the paper reports results of principal components analysis after the artificial data were submitted to three commonly used procedures; scree plot, Kaiser rule, and modified Horn's parallel analysis, and demonstrate the pedagogical utility of using artificial data in teaching advanced quantitative concepts. The results ...

  11. Infrared and visible image fusion based on robust principal component analysis and compressed sensing

    Science.gov (United States)

    Li, Jun; Song, Minghui; Peng, Yuanxi

    2018-03-01

    Current infrared and visible image fusion methods do not achieve adequate information extraction, i.e., they cannot extract the target information from infrared images while retaining the background information from visible images. Moreover, most of them have high complexity and are time-consuming. This paper proposes an efficient image fusion framework for infrared and visible images on the basis of robust principal component analysis (RPCA) and compressed sensing (CS). The novel framework consists of three phases. First, RPCA decomposition is applied to the infrared and visible images to obtain their sparse and low-rank components, which represent the salient features and background information of the images, respectively. Second, the sparse and low-rank coefficients are fused by different strategies. On the one hand, the measurements of the sparse coefficients are obtained by the random Gaussian matrix, and they are then fused by the standard deviation (SD) based fusion rule. Next, the fused sparse component is obtained by reconstructing the result of the fused measurement using the fast continuous linearized augmented Lagrangian algorithm (FCLALM). On the other hand, the low-rank coefficients are fused using the max-absolute rule. Subsequently, the fused image is superposed by the fused sparse and low-rank components. For comparison, several popular fusion algorithms are tested experimentally. By comparing the fused results subjectively and objectively, we find that the proposed framework can extract the infrared targets while retaining the background information in the visible images. Thus, it exhibits state-of-the-art performance in terms of both fusion effects and timeliness.

  12. Fast grasping of unknown objects using principal component analysis

    Science.gov (United States)

    Lei, Qujiang; Chen, Guangming; Wisse, Martijn

    2017-09-01

    Fast grasping of unknown objects has a crucial impact on the efficiency of robot manipulation, especially in unfamiliar environments. In order to accelerate the grasping of unknown objects, principal component analysis is utilized to direct the grasping process. In particular, a single-view partial point cloud is constructed and grasp candidates are allocated along the principal axis. Force balance optimization is employed to analyze possible graspable areas. The graspable area with the minimal resultant force is the best zone for the final grasping execution. It is shown that an unknown object can be grasped more quickly provided that the principal axis from the component analysis is determined using the single-view partial point cloud. To cope with grasp uncertainty, robot motion is employed to obtain a new viewpoint. Virtual exploration and experimental tests are carried out to verify this fast grasping algorithm. Both simulation and experimental tests demonstrated excellent performance in grasping a series of unknown objects. To minimize grasping uncertainty, the robot hardware with two 3D cameras can be utilized to complete the partial point cloud. As a result of utilizing the robot hardware, grasping reliability is highly enhanced. Therefore, this research demonstrates practical significance for increasing grasping speed and thus robot efficiency in unpredictable environments.

  13. Morphological evaluation of common bean diversity in Bosnia and Herzegovina using the discriminant analysis of principal components (DAPC multivariate method

    Directory of Open Access Journals (Sweden)

    Grahić Jasmin

    2013-01-01

    Full Text Available In order to analyze morphological characteristics of locally cultivated common bean landraces from Bosnia and Herzegovina (B&H), thirteen quantitative and qualitative traits of 40 P. vulgaris accessions, collected from four geographical regions (Northwest B&H, Northeast B&H, Central B&H and Sarajevo) and maintained at the Gene bank of the Faculty of Agriculture and Food Sciences in Sarajevo, were examined. Principal component analysis (PCA) showed that the proportion of variance retained in the first two principal components was 54.35%. The first principal component had high contributing factor loadings from seed width, seed height and seed weight, whilst the second principal component had high contributing factor loadings from the traits seeds per pod and pod length. The PCA plot, based on the first two principal components, displayed a high level of variability among the analyzed material. The discriminant analysis of principal components (DAPC) created 3 discriminant functions (DFs), whereby the first two discriminant functions accounted for 90.4% of the variance retained. Based on the retained DFs, DAPC provided group membership probabilities which showed that 70% of the accessions examined were correctly classified between the geographically defined groups. Based on the taxonomic distance, the 40 common bean accessions analyzed in this study formed two major clusters, whereas two accessions, Acc304 and Acc307, did not group into either of those. Acc360 and Acc362, as well as Acc324 and Acc371, displayed a high level of similarity and are probably the same landrace. The present diversity of Bosnia and Herzegovina's common bean landraces could be useful in future breeding programs.

  14. Characterization of reflectance variability in the industrial paint application of automotive metallic coatings by using principal component analysis

    Science.gov (United States)

    Medina, José M.; Díaz, José A.

    2013-05-01

    We have applied principal component analysis to examine trial-to-trial variability of reflectances of automotive coatings that contain effect pigments. Reflectance databases were measured from different color batch productions using a multi-angle spectrophotometer. A method to classify the principal components was used based on the eigenvalue spectra. It was found that the eigenvalue spectra follow distinct power laws and depend on the detection angle. The scaling exponent provided an estimation of the correlation between reflectances and it was higher near specular reflection, suggesting a contribution from the deposition of effect pigments. Our findings indicate that principal component analysis can be a useful tool to classify different sources of spectral variability in color engineering.
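
    A brief sketch of the classification idea, with synthetic spectra standing in for the multi-angle reflectance measurements: compute the PCA eigenvalue spectrum and estimate a scaling exponent from a log-log fit over the leading eigenvalues. The number of samples, wavelengths, and retained eigenvalues are assumptions.

```python
# Fit a power-law exponent to the PCA eigenvalue spectrum of a set of spectra.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
n_samples, n_wavelengths = 80, 31
# smooth-ish synthetic "reflectance" curves (random walks along wavelength)
reflectances = np.cumsum(rng.normal(size=(n_samples, n_wavelengths)), axis=1)

eigvals = PCA().fit(reflectances).explained_variance_
k = np.arange(1, 21)                            # fit over the leading eigenvalues only
slope, _ = np.polyfit(np.log(k), np.log(eigvals[:20]), 1)
print("estimated scaling exponent:", slope)
```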

  15. PRINCIPAL COMPONENT ANALYSIS STUDIES OF TURBULENCE IN OPTICALLY THICK GAS

    International Nuclear Information System (INIS)

    Correia, C.; Medeiros, J. R. De; Lazarian, A.; Burkhart, B.; Pogosyan, D.

    2016-01-01

    In this work we investigate the sensitivity of principal component analysis (PCA) to the velocity power spectrum in high-opacity regimes of the interstellar medium (ISM). For our analysis we use synthetic position–position–velocity (PPV) cubes of fractional Brownian motion and magnetohydrodynamics (MHD) simulations, post-processed to include radiative transfer effects from CO. We find that PCA analysis is very different from the tools based on the traditional power spectrum of PPV data cubes. Our major finding is that PCA is also sensitive to the phase information of PPV cubes and this allows PCA to detect the changes of the underlying velocity and density spectra at high opacities, where the spectral analysis of the maps provides the universal −3 spectrum in accordance with the predictions of the Lazarian and Pogosyan theory. This makes PCA a potentially valuable tool for studies of turbulence at high opacities, provided that proper gauging of the PCA index is made. However, we found the latter to not be easy, as the PCA results change in an irregular way for data with high sonic Mach numbers. This is in contrast to synthetic Brownian noise data used for velocity and density fields that show monotonic PCA behavior. We attribute this difference to the PCA's sensitivity to Fourier phase information

  16. PRINCIPAL COMPONENT ANALYSIS STUDIES OF TURBULENCE IN OPTICALLY THICK GAS

    Energy Technology Data Exchange (ETDEWEB)

    Correia, C.; Medeiros, J. R. De [Departamento de Física Teórica e Experimental, Universidade Federal do Rio Grande do Norte, 59072-970, Natal (Brazil); Lazarian, A. [Astronomy Department, University of Wisconsin, Madison, 475 N. Charter St., WI 53711 (United States); Burkhart, B. [Harvard-Smithsonian Center for Astrophysics, 60 Garden St, MS-20, Cambridge, MA 02138 (United States); Pogosyan, D., E-mail: caioftc@dfte.ufrn.br [Canadian Institute for Theoretical Astrophysics, University of Toronto, Toronto, ON (Canada)

    2016-02-20

    In this work we investigate the sensitivity of principal component analysis (PCA) to the velocity power spectrum in high-opacity regimes of the interstellar medium (ISM). For our analysis we use synthetic position–position–velocity (PPV) cubes of fractional Brownian motion and magnetohydrodynamics (MHD) simulations, post-processed to include radiative transfer effects from CO. We find that PCA analysis is very different from the tools based on the traditional power spectrum of PPV data cubes. Our major finding is that PCA is also sensitive to the phase information of PPV cubes and this allows PCA to detect the changes of the underlying velocity and density spectra at high opacities, where the spectral analysis of the maps provides the universal −3 spectrum in accordance with the predictions of the Lazarian and Pogosyan theory. This makes PCA a potentially valuable tool for studies of turbulence at high opacities, provided that proper gauging of the PCA index is made. However, we found the latter to not be easy, as the PCA results change in an irregular way for data with high sonic Mach numbers. This is in contrast to synthetic Brownian noise data used for velocity and density fields that show monotonic PCA behavior. We attribute this difference to the PCA's sensitivity to Fourier phase information.

  17. [Content of mineral elements of Gastrodia elata by principal components analysis].

    Science.gov (United States)

    Li, Jin-ling; Zhao, Zhi; Liu, Hong-chang; Luo, Chun-li; Huang, Ming-jin; Luo, Fu-lai; Wang, Hua-lei

    2015-03-01

    To study the content of mineral elements and the principal components in Gastrodia elata, mineral elements were determined by ICP and the data were analyzed with SPSS. K had the highest content, with an average of 15.31 g x kg(-1). The average content of N was 8.99 g x kg(-1), second only to K. The coefficients of variation of K and N were small, but that of Mn was the largest, at 51.39%. A highly significant positive correlation was found among N, P and K. Three principal components were selected by principal components analysis to evaluate the quality of G. elata. P, B, N, K, Cu, Mn, Fe and Mg were the characteristic elements of G. elata. The contents of K and N were higher and relatively stable. The variation of Mn content was the largest. The quality of G. elata from Guizhou and Yunnan was better from the perspective of mineral elements.

  18. The influence of iliotibial band syndrome history on running biomechanics examined via principal components analysis.

    Science.gov (United States)

    Foch, Eric; Milner, Clare E

    2014-01-03

    Iliotibial band syndrome (ITBS) is a common knee overuse injury among female runners. Atypical discrete trunk and lower extremity biomechanics during running may be associated with the etiology of ITBS. Examining discrete data points limits the interpretation of a waveform to a single value. Characterizing entire kinematic and kinetic waveforms may provide additional insight into biomechanical factors associated with ITBS. Therefore, the purpose of this cross-sectional investigation was to determine whether female runners with previous ITBS exhibited differences in kinematics and kinetics compared to controls using a principal components analysis (PCA) approach. Forty participants comprised two groups: previous ITBS and controls. Principal component scores were retained for the first three principal components and were analyzed using independent t-tests. The retained principal components accounted for 93-99% of the total variance within each waveform. Runners with previous ITBS exhibited low principal component one scores for frontal plane hip angle. Principal component one accounted for the overall magnitude in hip adduction which indicated that runners with previous ITBS assumed less hip adduction throughout stance. No differences in the remaining retained principal component scores for the waveforms were detected among groups. A smaller hip adduction angle throughout the stance phase of running may be a compensatory strategy to limit iliotibial band strain. This running strategy may have persisted after ITBS symptoms subsided. © 2013 Published by Elsevier Ltd.
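
    A schematic of the waveform-PCA comparison, using synthetic stance-phase curves rather than the study's hip adduction data: curves are time-normalized to a common grid, PCA extracts the dominant modes of variation, and retained principal component scores are compared between groups with independent t-tests. Group sizes, curve shapes, and the number of retained components are assumptions.

```python
# PCA of time-normalized gait waveforms, followed by a group comparison of
# the first principal component scores.
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

rng = np.random.default_rng(11)
t = np.linspace(0, 1, 101)                      # % of stance phase
controls = 10 * np.sin(np.pi * t) + rng.normal(0, 1, (20, t.size))
prev_itbs = 8 * np.sin(np.pi * t) + rng.normal(0, 1, (20, t.size))   # smaller overall magnitude

curves = np.vstack([controls, prev_itbs])
scores = PCA(n_components=3).fit_transform(curves)

t_stat, p_val = stats.ttest_ind(scores[:20, 0], scores[20:, 0])
print("PC1 group comparison: t = %.2f, p = %.4f" % (t_stat, p_val))
```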

  19. Principal component analysis and neurocomputing-based models for total ozone concentration over different urban regions of India

    Science.gov (United States)

    Chattopadhyay, Goutami; Chattopadhyay, Surajit; Chakraborthy, Parthasarathi

    2012-07-01

    The present study deals with daily total ozone concentration time series over four metro cities of India, namely Kolkata, Mumbai, Chennai, and New Delhi, in a multivariate environment. Using the Kaiser-Meyer-Olkin measure, it is established that the data set under consideration is suitable for principal component analysis. Subsequently, by introducing the rotated component matrix for the principal components, the predictors suitable for generating an artificial neural network (ANN) for daily total ozone prediction are identified; the multicollinearity is removed in this way. ANN models in the form of multilayer perceptrons trained through backpropagation learning are generated for all of the study zones, and the model outcomes are assessed statistically. Measuring various statistics such as Pearson correlation coefficients, Willmott's indices, percentage errors of prediction, and mean absolute errors, it is observed that the proposed ANN model generates very good predictions for Mumbai and Kolkata. The results are supported by the linearly distributed coordinates in the scatterplots.

  20. Incremental principal component pursuit for video background modeling

    Science.gov (United States)

    Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt

    2017-03-14

    An incremental Principal Component Pursuit (PCP) algorithm for video background modeling is presented. It processes one frame at a time while adapting to changes in the background, has a computational complexity that allows real-time processing, has a low memory footprint, and is robust to translational and rotational jitter.
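
    The sketch below illustrates the low-rank (background) plus sparse (foreground) decomposition that PCP targets, using a batch truncated SVD on synthetic frames; the actual incremental algorithm instead updates the low-rank factorization one frame at a time. Frame sizes, the rank, and the residual threshold are arbitrary choices made for the illustration.

    import numpy as np

    # Minimal sketch of the low-rank (background) + sparse (foreground) idea
    # behind Principal Component Pursuit, on synthetic "frames".
    rng = np.random.default_rng(1)
    h, w, n_frames = 24, 32, 50
    background = np.outer(np.ones(n_frames), rng.random(h * w))   # static scene
    foreground = np.zeros_like(background)
    foreground[20:30, 100:140] = 1.0                               # a moving-object blob
    frames = background + foreground + 0.01 * rng.normal(size=background.shape)

    # Rank-1 background estimate from the frame matrix (frames x pixels).
    U, s, Vt = np.linalg.svd(frames, full_matrices=False)
    low_rank = s[0] * np.outer(U[:, 0], Vt[0])

    # Foreground = residual above a threshold (the "sparse" part).
    residual = frames - low_rank
    mask = np.abs(residual) > 0.5
    print("estimated foreground pixels per frame:", mask.sum(axis=1)[18:32])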

  1. Principal component and spatial correlation analysis of spectroscopic-imaging data in scanning probe microscopy

    International Nuclear Information System (INIS)

    Jesse, Stephen; Kalinin, Sergei V

    2009-01-01

    An approach for the analysis of multi-dimensional, spectroscopic-imaging data based on principal component analysis (PCA) is explored. PCA selects and ranks relevant response components based on variance within the data. It is shown that for examples with small relative variations between spectra, the first few PCA components closely coincide with results obtained using model fitting, and this is achieved at rates approximately four orders of magnitude faster. For cases with strong response variations, PCA allows an effective approach to rapidly process, de-noise, and compress data. The prospects for PCA combined with correlation function analysis of component maps as a universal tool for data analysis and representation in microscopy are discussed.

  2. Quantification and recognition of parkinsonian gait from monocular video imaging using kernel-based principal component analysis

    Directory of Open Access Journals (Sweden)

    Chen Shih-Wei

    2011-11-01

    Full Text Available Abstract Background The computer-aided identification of specific gait patterns is an important issue in the assessment of Parkinson's disease (PD). In this study, a computer vision-based gait analysis approach is developed to assist the clinical assessments of PD with kernel-based principal component analysis (KPCA). Method Twelve PD patients and twelve healthy adults with no neurological history or motor disorders within the past six months were recruited and separated according to their "Non-PD", "Drug-On", and "Drug-Off" states. The participants were asked to wear light-colored clothing and perform three walking trials through a corridor decorated with a navy curtain at their natural pace. The participants' gait performance during the steady-state walking period was captured by a digital camera for gait analysis. The collected walking image frames were then transformed into binary silhouettes for noise reduction and compression. Using the developed KPCA-based method, the features within the binary silhouettes can be extracted to quantitatively determine the gait cycle time, stride length, walking velocity, and cadence. Results and Discussion The KPCA-based method uses a feature-extraction approach, which was verified to be more effective than traditional image area and principal component analysis (PCA) approaches in classifying "Non-PD" controls and "Drug-Off/On" PD patients. Encouragingly, this method has a high accuracy rate, 80.51%, for recognizing different gaits. Quantitative gait parameters are obtained, and the power spectra of the patients' gaits are analyzed. We show that the slow and irregular actions of PD patients during walking tend to transfer some of the power from the main lobe frequency to a lower frequency band. Our results indicate the feasibility of using gait performance to evaluate the motor function of patients with PD. Conclusion This KPCA-based method requires only a digital camera and a decorated corridor setup

  3. Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.

    Science.gov (United States)

    Gupta, Rajarshi

    2016-05-01

    Electrocardiogram (ECG) compression finds wide application in various patient monitoring purposes. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality aware compression method of single lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, two independent quality criteria, namely, bit rate control (BRC) or error control (EC) criteria were set to select optimal principal components, eigenvectors and their quantization level to achieve desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT Arrhythmia data and 60 normal and 30 sets of diagnostic ECG data from PTB Diagnostic ECG data ptbdb, all at 1 kHz sampling. For BRC with a CR threshold of 40, an average Compression Ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV respectively were obtained. For EC with an upper limit of 5 % PRDN and 0.1 mV MAE, the average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV respectively were obtained. For mitdb data 117, the reconstruction quality could be preserved up to CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality controlled ECG compression.
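
    A minimal sketch of the error-control idea is given below: ECG beats are stacked into a matrix, PCA is applied, and components are added until a PRDN target is met. The beats are synthetic and the pipeline omits the quantization, delta coding, and Huffman stages of the published method.

    import numpy as np

    rng = np.random.default_rng(2)
    n_beats, beat_len = 200, 300
    t = np.linspace(0, 1, beat_len)
    template = np.sin(2 * np.pi * 3 * t) * np.exp(-30 * (t - 0.5) ** 2)

    # Synthetic beats: amplitude variation + baseline wander + sensor noise.
    amp = rng.uniform(0.8, 1.2, size=(n_beats, 1))
    wander = rng.uniform(0.1, 0.3, size=(n_beats, 1)) * np.sin(2 * np.pi * t)
    X = amp * template + wander + 0.01 * rng.normal(size=(n_beats, beat_len))

    mean_beat = X.mean(axis=0)
    Xc = X - mean_beat
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

    def prdn(orig, recon):
        """Percentage root-mean-square difference, normalized."""
        return 100 * np.linalg.norm(orig - recon) / np.linalg.norm(orig - orig.mean())

    # Error-control criterion: add principal components until PRDN <= 5 %.
    for k in range(1, n_beats + 1):
        recon = mean_beat + (Xc @ Vt[:k].T) @ Vt[:k]
        if prdn(X, recon) <= 5.0:
            break
    print(f"components kept: {k}, PRDN: {prdn(X, recon):.2f}%")
    # Stored size ~ k scores per beat plus k basis vectors (quantization omitted).
    print(f"crude compression ratio: {X.size / (k * (n_beats + beat_len)):.1f}")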

  4. Factors affecting medication adherence in community-managed patients with hypertension based on the principal component analysis: evidence from Xinjiang, China

    Directory of Open Access Journals (Sweden)

    Zhang YJ

    2018-05-01

    Full Text Available Yuji Zhang,* Xiaoju Li,* Lu Mao, Mei Zhang, Ke Li, Yinxia Zheng, Wangfei Cui, Hongpo Yin, Yanli He, Mingxia Jing Department of Public Health, Shihezi University School of Medicine, Shihezi, Xinjiang, China *These authors contributed equally to this work Purpose: The analysis of factors affecting the nonadherence to antihypertensive medications is important in the control of blood pressure among patients with hypertension. The purpose of this study was to assess the relationship between factors and medication adherence in Xinjiang community-managed patients with hypertension based on the principal component analysis. Patients and methods: A total of 1,916 community-managed patients with hypertension, selected randomly through a multi-stage sampling, participated in the survey. Self-designed questionnaires were used to classify the participants as either adherent or nonadherent to their medication regimen. A principal component analysis was used in order to eliminate the correlation between factors. Factors related to nonadherence were analyzed by using a χ2-test and a binary logistic regression model. Results: This study extracted nine common factors, with a cumulative variance contribution rate of 63.6%. Further analysis revealed that the following variables were significantly related to nonadherence: severity of disease, community management, diabetes, and taking traditional medications. Conclusion: Community management plays an important role in improving the patients’ medication-taking behavior. Regular medication regimen instruction and better community management services delivered at the community level have the potential to reduce nonadherence. Mild hypertensive patients should be monitored by community health care providers. Keywords: hypertension, medication adherence, factors, principal component analysis, community management, China

  5. Principal component analysis of NEXAFS spectra for molybdenum speciation in hydrotreating catalysts

    International Nuclear Information System (INIS)

    Faro Junior, Arnaldo da C.; Rodrigues, Victor de O.; Eon, Jean-G.; Rocha, Angela S.

    2010-01-01

    Bulk and supported molybdenum based catalysts, modified by nickel, phosphorous or tungsten were studied by NEXAFS spectroscopy at the Mo L III and L II edges. The techniques of principal component analysis (PCA) together with a linear combination analysis (LCA) allowed the detection and quantification of molybdenum atoms in two different coordination states in the oxide form of the catalysts, namely tetrahedral and octahedral coordination. (author)
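
    A hedged sketch of the PCA-plus-linear-combination workflow is shown below on synthetic spectra: PCA indicates how many independent species are present, and a non-negative least-squares fit against two reference spectra quantifies their fractions. The "tetrahedral" and "octahedral" references are made-up Gaussians, not Mo L-edge data.

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(3)
    energy = np.linspace(0, 1, 200)
    ref_tet = np.exp(-((energy - 0.4) / 0.05) ** 2)      # "tetrahedral" reference
    ref_oct = np.exp(-((energy - 0.6) / 0.05) ** 2)      # "octahedral" reference

    # Ten synthetic mixtures with independent amounts of each species.
    w_tet = rng.uniform(0.2, 1.0, 10)
    w_oct = rng.uniform(0.2, 1.0, 10)
    spectra = (np.outer(w_tet, ref_tet) + np.outer(w_oct, ref_oct)
               + 0.01 * rng.normal(size=(10, 200)))

    # PCA step: the number of significant singular values indicates how many
    # independent species are present in the spectrum set.
    s = np.linalg.svd(spectra - spectra.mean(axis=0), compute_uv=False)
    print("leading singular values:", np.round(s[:4], 2))

    # Linear combination analysis: non-negative fit against the references.
    refs = np.vstack([ref_tet, ref_oct]).T               # (energy points, references)
    for i in range(3):
        coef, _ = nnls(refs, spectra[i])
        print(f"sample {i}: tetrahedral/octahedral fractions {np.round(coef / coef.sum(), 2)}")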

  6. Principal semantic components of language and the measurement of meaning.

    Science.gov (United States)

    Samsonovich, Alexei V; Samsonovic, Alexei V; Ascoli, Giorgio A

    2010-06-11

    Metric systems for semantics, or semantic cognitive maps, are allocations of words or other representations in a metric space based on their meaning. Existing methods for semantic mapping, such as Latent Semantic Analysis and Latent Dirichlet Allocation, are based on paradigms involving dissimilarity metrics. They typically do not take into account relations of antonymy and yield a large number of domain-specific semantic dimensions. Here, using a novel self-organization approach, we construct a low-dimensional, context-independent semantic map of natural language that represents simultaneously synonymy and antonymy. Emergent semantics of the map principal components are clearly identifiable: the first three correspond to the meanings of "good/bad" (valence), "calm/excited" (arousal), and "open/closed" (freedom), respectively. The semantic map is sufficiently robust to allow the automated extraction of synonyms and antonyms not originally in the dictionaries used to construct the map and to predict connotation from their coordinates. The map geometric characteristics include a limited number ( approximately 4) of statistically significant dimensions, a bimodal distribution of the first component, increasing kurtosis of subsequent (unimodal) components, and a U-shaped maximum-spread planar projection. Both the semantic content and the main geometric features of the map are consistent between dictionaries (Microsoft Word and Princeton's WordNet), among Western languages (English, French, German, and Spanish), and with previously established psychometric measures. By defining the semantics of its dimensions, the constructed map provides a foundational metric system for the quantitative analysis of word meaning. Language can be viewed as a cumulative product of human experiences. Therefore, the extracted principal semantic dimensions may be useful to characterize the general semantic dimensions of the content of mental states. This is a fundamental step toward a

  7. Principal semantic components of language and the measurement of meaning.

    Directory of Open Access Journals (Sweden)

    Alexei V Samsonovich

    Full Text Available Metric systems for semantics, or semantic cognitive maps, are allocations of words or other representations in a metric space based on their meaning. Existing methods for semantic mapping, such as Latent Semantic Analysis and Latent Dirichlet Allocation, are based on paradigms involving dissimilarity metrics. They typically do not take into account relations of antonymy and yield a large number of domain-specific semantic dimensions. Here, using a novel self-organization approach, we construct a low-dimensional, context-independent semantic map of natural language that represents simultaneously synonymy and antonymy. Emergent semantics of the map principal components are clearly identifiable: the first three correspond to the meanings of "good/bad" (valence), "calm/excited" (arousal), and "open/closed" (freedom), respectively. The semantic map is sufficiently robust to allow the automated extraction of synonyms and antonyms not originally in the dictionaries used to construct the map and to predict connotation from their coordinates. The map geometric characteristics include a limited number (approximately 4) of statistically significant dimensions, a bimodal distribution of the first component, increasing kurtosis of subsequent (unimodal) components, and a U-shaped maximum-spread planar projection. Both the semantic content and the main geometric features of the map are consistent between dictionaries (Microsoft Word and Princeton's WordNet), among Western languages (English, French, German, and Spanish), and with previously established psychometric measures. By defining the semantics of its dimensions, the constructed map provides a foundational metric system for the quantitative analysis of word meaning. Language can be viewed as a cumulative product of human experiences. Therefore, the extracted principal semantic dimensions may be useful to characterize the general semantic dimensions of the content of mental states. This is a fundamental step

  8. On Bayesian Principal Component Analysis

    Czech Academy of Sciences Publication Activity Database

    Šmídl, Václav; Quinn, A.

    2007-01-01

    Roč. 51, č. 9 (2007), s. 4101-4123 ISSN 0167-9473 R&D Projects: GA MŠk(CZ) 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : Principal component analysis ( PCA ) * Variational bayes (VB) * von-Mises–Fisher distribution Subject RIV: BC - Control Systems Theory Impact factor: 1.029, year: 2007 http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V8V-4MYD60N-6&_user=10&_coverDate=05%2F15%2F2007&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=b8ea629d48df926fe18f9e5724c9003a

  9. Chemical fingerprinting of terpanes and steranes by chromatographic alignment and principal component analysis

    International Nuclear Information System (INIS)

    Christensen, J.H.; Hansen, A.B.; Andersen, O.

    2005-01-01

    Biomarkers such as steranes and terpanes are abundant in crude oils, particularly in heavy distillate petroleum products. They are useful for matching highly weathered oil samples when other groups of petroleum hydrocarbons fail to distinguish oil samples. In this study, time warping and principal component analysis (PCA) were applied for oil hydrocarbon fingerprinting based on relative amounts of terpane and sterane isomers analyzed by gas chromatography and mass spectrometry. The 4 principal components were boiling point range, clay content, marine or organic terrestrial matter, and maturity based on differences in the terpane and sterane isomer patterns. This study is an extension of a previous fingerprinting study for identifying the sources of oil spill samples based only on the profiles of sterane isomers. Spill samples from the Baltic Carrier oil spill were correctly identified by inspection of score plots. The interpretation of the loading and score plots offered further chemical information about correlations between changes in the amounts of sterane and terpane isomers. It was concluded that this method is an objective procedure for analyzing chromatograms with more comprehensive data usage compared to other fingerprinting methods. 20 refs., 4 figs

  10. Chemical fingerprinting of terpanes and steranes by chromatographic alignment and principal component analysis

    Energy Technology Data Exchange (ETDEWEB)

    Christensen, J.H. [Royal Veterinary and Agricultural Univ., Thorvaldsensvej (Denmark). Dept. of Natural Sciences; Hansen, A.B. [National Environmental Research Inst., Roskilde (Denmark). Dept. of Environmental Chemistry and Microbiology; Andersen, O. [Roskilde Univ., Roskilde (Denmark). Dept. of Life Sciences and Chemistry

    2005-07-01

    Biomarkers such as steranes and terpanes are abundant in crude oils, particularly in heavy distillate petroleum products. They are useful for matching highly weathered oil samples when other groups of petroleum hydrocarbons fail to distinguish oil samples. In this study, time warping and principal component analysis (PCA) were applied for oil hydrocarbon fingerprinting based on relative amounts of terpane and sterane isomers analyzed by gas chromatography and mass spectrometry. The 4 principal components were boiling point range, clay content, marine or organic terrestrial matter, and maturity based on differences in the terpane and sterane isomer patterns. This study is an extension of a previous fingerprinting study for identifying the sources of oil spill samples based only on the profiles of sterane isomers. Spill samples from the Baltic Carrier oil spill were correctly identified by inspection of score plots. The interpretation of the loading and score plots offered further chemical information about correlations between changes in the amounts of sterane and terpane isomers. It was concluded that this method is an objective procedure for analyzing chromatograms with more comprehensive data usage compared to other fingerprinting methods. 20 refs., 4 figs.

  11. The Purification Method of Matching Points Based on Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    DONG Yang

    2017-02-01

    Full Text Available Traditional methods for purifying matching points usually use a small number of points as the initial input. Although this can satisfy most point-constraint requirements, the iterative purification solution easily falls into local extrema, which results in correct matching points being discarded. To solve this problem, we introduce a principal component analysis method that uses the whole point set as the initial input. Through stepwise elimination of mismatched points and robust solving, a more accurate global optimal solution can be obtained, which reduces the omission rate of correct matching points and thus achieves a better purification effect. Experimental results show that this method can obtain the global optimal solution under a certain initial false matching rate, and can decrease or avoid the omission of correct matching points.

  12. 2L-PCA: a two-level principal component analyzer for quantitative drug design and its applications.

    Science.gov (United States)

    Du, Qi-Shi; Wang, Shu-Qing; Xie, Neng-Zhong; Wang, Qing-Yan; Huang, Ri-Bo; Chou, Kuo-Chen

    2017-09-19

    A two-level principal component predictor (2L-PCA) was proposed based on the principal component analysis (PCA) approach. It can be used to quantitatively analyze various compounds and peptides with respect to their functions or their potential to become useful drugs. One level deals with the physicochemical properties of drug molecules, while the other deals with their structural fragments. The predictor has self-learning and feedback features to automatically improve its accuracy. It is anticipated that 2L-PCA will become a very useful tool for timely providing various useful clues during the process of drug development.

  13. Principal Component Analysis - A Powerful Tool in Computing Marketing Information

    Directory of Open Access Journals (Sweden)

    Constantin C.

    2014-12-01

    Full Text Available This paper describes an instrumental study of a powerful multivariate data analysis method that researchers can use to obtain valuable information for decision makers who need to solve the marketing problems a company faces. The literature stresses the need to avoid the multicollinearity phenomenon in multivariate analysis and the ability of Principal Component Analysis (PCA) to reduce a number of variables that may be correlated with each other to a small number of uncorrelated principal components. In this respect, the paper presents, step by step, the process of applying PCA in marketing research when a large number of naturally collinear variables is used.
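
    For readers unfamiliar with the workflow, a minimal sketch of the generic PCA steps described here (standardization, component extraction, Kaiser criterion, loadings inspection) is given below on simulated survey-style data; the variable counts and the data themselves are invented for the illustration.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Made-up survey data: columns = correlated rating items, rows = respondents.
    rng = np.random.default_rng(4)
    latent = rng.normal(size=(300, 2))                     # two underlying attitudes
    loadings = rng.normal(size=(2, 8))
    X = latent @ loadings + 0.3 * rng.normal(size=(300, 8))

    Xs = StandardScaler().fit_transform(X)                 # standardize the items
    pca = PCA().fit(Xs)

    # How many uncorrelated components summarize the collinear items?
    print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
    print("components with eigenvalue > 1 (Kaiser rule):",
          int(np.sum(pca.explained_variance_ > 1)))
    # Loadings show which original items drive each retained component.
    print("loadings of PC1:", np.round(pca.components_[0], 2))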

  14. Local Prediction Models on Mid-Atlantic Ridge MORB by Principal Component Regression

    Science.gov (United States)

    Ling, X.; Snow, J. E.; Chin, W.

    2017-12-01

    The isotopic compositions of the daughter isotopes of long-lived radioactive systems (Sr, Nd, Hf and Pb) can be used to map the scale and history of mantle heterogeneities beneath mid-ocean ridges. Our goal is to relate the multidimensional structure in the existing isotopic dataset to an underlying physical reality of mantle sources. The numerical technique of Principal Component Analysis is useful for reducing the linear dependence of the data to a minimum set of orthogonal eigenvectors encapsulating the information contained (cf. Agranier et al. 2005). The dataset used for this study covers almost all the MORBs along the Mid-Atlantic Ridge (MAR), from 54°S to 77°N and 8.8°W to −46.7°W, replicating the published dataset of Agranier et al. (2005) plus 53 basalt samples dredged and analyzed since then (data from PetDB). The principal components PC1 and PC2 account for 61.56% and 29.21%, respectively, of the total isotope ratio variability. Samples with compositions similar to HIMU, EM and DM are identified to better understand the PCs. PC1 and PC2 account for HIMU and EM, whereas PC2 has limited control over the DM source. PC3 is more strongly controlled by the depleted mantle source than PC2. This means that all three principal components have a high degree of significance relative to the established mantle sources. We also tested the relationship between mantle heterogeneity and sample locality. The k-means clustering algorithm is a type of unsupervised learning that finds groups in the data based on feature similarity. The PC factor scores of each sample are clustered into three groups. Clusters one and three alternate along the northern and southern MAR. Cluster two appears from 45.18°N to 0.79°N and −27.9°W to −30.40°W, alternating with cluster one. The ridge has been preliminarily divided into 16 sections considering both the clusters and the ridge segments. The principal component regression models each section based on 6 isotope ratios and PCs. The
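
    The sketch below strings together the same tools (principal component extraction, regression on the component scores, and k-means grouping of the scores) on simulated data; the "isotope ratios", sample counts, and target variable are placeholders rather than the MAR dataset.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(5)
    n_samples, n_ratios = 120, 6
    sources = rng.normal(size=(n_samples, 3))              # hidden mantle end-members
    X = sources @ rng.normal(size=(3, n_ratios)) + 0.1 * rng.normal(size=(n_samples, n_ratios))
    y = sources[:, 0] * 2.0 + 0.1 * rng.normal(size=n_samples)   # property to predict

    pca = PCA(n_components=3)
    scores = pca.fit_transform(X)
    print("variance explained by PC1-PC3:", np.round(pca.explained_variance_ratio_, 3))

    # Principal component regression: regress on the PC scores instead of raw ratios.
    pcr = LinearRegression().fit(scores, y)
    print("PCR R^2:", round(pcr.score(scores, y), 3))

    # Group samples by similarity of their PC scores (three clusters, as above).
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
    print("samples per cluster:", np.bincount(labels))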

  15. Vibrational spectroscopy and principal component analysis for conformational study of virus nucleic acids

    Science.gov (United States)

    Dovbeshko, G. I.; Repnytska, O. P.; Pererva, T.; Miruta, A.; Kosenkov, D.

    2004-07-01

    A conformational analysis of mutated DNA bacteriophages (PLys-23, P23-2, P47; the numbers were assigned by T. Pererva) induced by the MS2 virus incorporated in E. coli AB 259 Hfr 3000 has been performed. Surface enhanced infrared absorption (SEIRA) spectroscopy and principal component analysis were applied to this problem. The nucleic acids isolated from the mutated phages had the form of double-stranded DNA with different modifications. The nucleic acid from phage P47 underwent the greatest structural rearrangement. The shape and position of the fine structure of the asymmetric phosphate band at 1071 cm-1, as well as the OH stretching vibration at 3370-3390 cm-1, indicated the appearance of additional OH groups. A Z-form feature was found in the base vibration region (1694 cm-1) and the sugar region (932 cm-1). A modification of the DNA structure by Z-fragments is proposed for the P47 phage. The P23-2 and PLys-23 phages also showed numerous minor structural changes. On the basis of the SEIRA spectra, we determined the characteristic parameters of the nucleic acid marker bands used for construction of the principal components. The contributions of different spectral parameters of the nucleic acids to the principal components were estimated.

  16. A Note on McDonald's Generalization of Principal Components Analysis

    Science.gov (United States)

    Shine, Lester C., II

    1972-01-01

    It is shown that McDonald's generalization of Classical Principal Components Analysis to groups of variables maximally channels the total variance of the original variables through the groups of variables acting as groups. An equation is obtained for determining the vectors of correlations of the L2 components with the original variables.…

  17. Detecting Genomic Signatures of Natural Selection with Principal Component Analysis: Application to the 1000 Genomes Data.

    Science.gov (United States)

    Duforet-Frebourg, Nicolas; Luu, Keurcien; Laval, Guillaume; Bazin, Eric; Blum, Michael G B

    2016-04-01

    To characterize natural selection, various analytical methods for detecting candidate genomic regions have been developed. We propose to perform genome-wide scans of natural selection using principal component analysis (PCA). We show that the common FST index of genetic differentiation between populations can be viewed as the proportion of variance explained by the principal components. Considering the correlations between genetic variants and each principal component provides a conceptual framework to detect genetic variants involved in local adaptation without any prior definition of populations. To validate the PCA-based approach, we consider the 1000 Genomes data (phase 1), comprising 850 individuals from Africa, Asia, and Europe. The number of genetic variants is of the order of 36 million, obtained with a low-coverage sequencing depth (3×). The correlations between genetic variation and each principal component provide well-known targets for positive selection (EDAR, SLC24A5, SLC45A2, DARC), and also new candidate genes (APPBPP2, TP1A1, RTTN, KCNMA, MYO5C) and noncoding RNAs. In addition to identifying genes involved in biological adaptation, we identify two biological pathways involved in polygenic adaptation that are related to the innate immune system (beta defensins) and to lipid metabolism (fatty acid omega oxidation). An additional analysis of European data shows that a genome scan based on PCA retrieves classical examples of local adaptation even when there are no well-defined populations. PCA-based statistics, implemented in the PCAdapt R package and the PCAdapt fast open-source software, retrieve well-known signals of human adaptation, which is encouraging for future whole-genome sequencing projects, especially when defining populations is difficult. © The Author(s) 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
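
    A minimal sketch of a PCA-based scan is given below: principal components are computed from a simulated genotype matrix and variants are ranked by their squared correlation with the leading component. Population sizes, SNP counts, and the outlier cut are illustrative only and do not reproduce the PCAdapt implementation.

    import numpy as np

    rng = np.random.default_rng(6)
    n_ind, n_snp = 200, 1000
    pop = np.repeat([0, 1], n_ind // 2)                    # two populations
    freqs = rng.uniform(0.1, 0.9, size=(2, n_snp))
    freqs[:, :20] = [[0.05], [0.95]]                       # 20 strongly differentiated SNPs
    geno = rng.binomial(2, freqs[pop])                     # genotypes in {0, 1, 2}

    # PCA of the centered/scaled genotype matrix.
    G = (geno - geno.mean(axis=0)) / (geno.std(axis=0) + 1e-9)
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    pc1 = U[:, 0] * s[0]                                   # individual scores on PC1

    # Squared correlation of each SNP with PC1; high values are selection candidates.
    r2 = np.array([np.corrcoef(G[:, j], pc1)[0, 1] ** 2 for j in range(n_snp)])
    top = np.argsort(r2)[::-1][:10]
    print("top candidate SNP indices:", top)
    print("truly differentiated SNPs among the top 10:", int(np.sum(top < 20)))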

  18. Principal Component Surface (2011) for Fish Bay, St. John

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This image represents a 0.3x0.3 meter principal component analysis (PCA) surface for areas inside Fish Bay, St. John in the U.S. Virgin Islands (USVI). It was...

  19. Principal Component Surface (2011) for Coral Bay, St. John

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This image represents a 0.3x0.3 meter principal component analysis (PCA) surface for areas inside Coral Bay, St. John in the U.S. Virgin Islands (USVI). It was...

  20. Assessment of oil weathering by gas chromatography-mass spectrometry, time warping and principal component analysis

    DEFF Research Database (Denmark)

    Malmquist, Linus M.V.; Olsen, Rasmus R.; Hansen, Asger B.

    2007-01-01

    weathering state and to distinguish between various weathering processes is investigated and discussed. The method is based on comprehensive and objective chromatographic data processing followed by principal component analysis (PCA) of concatenated sections of gas chromatography–mass spectrometry...

  1. Functional Principal Components Analysis of Shanghai Stock Exchange 50 Index

    Directory of Open Access Journals (Sweden)

    Zhiliang Wang

    2014-01-01

    Full Text Available The main purpose of this paper is to explore the principal components of the Shanghai stock exchange 50 index by means of functional principal component analysis (FPCA). Functional data analysis (FDA) deals with random variables (or processes) with realizations in a smooth functional space. One of the most popular FDA techniques is functional principal component analysis, which was introduced for the statistical analysis of a set of financial time series from an explorative point of view. FPCA is the functional analogue of the well-known dimension reduction technique in multivariate statistical analysis, searching for linear transformations of the random vector with maximal variance. In this paper, we studied the monthly return volatility of the Shanghai stock exchange 50 index (SSE50). Using FPCA to reduce the dimension to a finite level, we extracted the most significant components of the data and some relevant statistical features of the related datasets. The results show that treating the samples as random functions is reasonable. Compared with ordinary principal component analysis, FPCA can handle the problem of different dimensions in the samples, and it is a convenient approach for extracting the main variance factors.

  2. Recursive Principal Components Analysis Using Eigenvector Matrix Perturbation

    Directory of Open Access Journals (Sweden)

    Deniz Erdogmus

    2004-10-01

    Full Text Available Principal components analysis is an important and well-studied subject in statistics and signal processing. The literature has an abundance of algorithms for solving this problem, where most of these algorithms could be grouped into one of the following three approaches: adaptation based on Hebbian updates and deflation, optimization of a second-order statistical criterion (like reconstruction error or output variance), and fixed point update rules with deflation. In this paper, we take a completely different approach that avoids deflation and the optimization of a cost function using gradients. The proposed method updates the eigenvector and eigenvalue matrices simultaneously with every new sample such that the estimates approximately track their true values as would be calculated from the current sample estimate of the data covariance matrix. The performance of this algorithm is compared with that of traditional methods like Sanger's rule and APEX, as well as a structurally similar matrix perturbation-based method.
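
    As a point of comparison, the sketch below shows one simple recursive scheme: a running estimate of the mean and covariance is updated with every new sample and re-diagonalized periodically. This baseline is not the eigenvector-perturbation update proposed in the record above; it only illustrates the tracking behavior such algorithms aim for.

    import numpy as np

    rng = np.random.default_rng(7)
    dim = 5
    true_cov = np.diag([5.0, 3.0, 1.0, 0.5, 0.1])
    samples = rng.multivariate_normal(np.zeros(dim), true_cov, size=2000)

    mean = np.zeros(dim)
    cov = np.eye(dim)
    for t, x in enumerate(samples, start=1):
        # Recursive (online) updates of the mean and covariance estimates.
        delta = x - mean
        mean += delta / t
        cov += (np.outer(delta, x - mean) - cov) / t
        if t % 500 == 0:
            eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
            print(f"after {t} samples, leading eigenvalues: {np.round(eigvals[:3], 2)}")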

  3. Collagen-based proteinaceous binder-pigment interaction study under UV ageing conditions by MALDI-TOF-MS and principal component analysis.

    Science.gov (United States)

    Romero-Pastor, Julia; Navas, Natalia; Kuckova, Stepanka; Rodríguez-Navarro, Alejandro; Cardell, Carolina

    2012-03-01

    This study focuses on acquiring information on the degradation process of proteinaceous binders due to ultraviolet (UV) radiation and possible interactions owing to the presence of historical mineral pigments. With this aim, three different paint model samples were prepared according to medieval recipes, using rabbit glue as the proteinaceous binder. One of these model samples contained only the binder, and the other two were prepared by mixing each of the pigments (cinnabar or azurite) with the binder (glue tempera model samples). The model samples were studied by applying Principal Component Analysis (PCA) to their mass spectra obtained with Matrix-Assisted Laser Desorption/Ionization-Time of Flight Mass Spectrometry (MALDI-TOF-MS). The complementary use of Fourier Transform Infrared Spectroscopy to study conformational changes in the secondary structure of the proteinaceous binder is also proposed. Ageing effects on the model samples after up to 3000 h of UV irradiation were periodically analyzed by the proposed approach. PCA on MS data proved capable of identifying significant changes in the model samples, and the results suggested different aging behavior based on the pigment present. This research represents the first attempt to use this approach (PCA on MALDI-TOF-MS data) in the field of Cultural Heritage and demonstrates the potential benefits in the study of proteinaceous artistic materials for purposes of conservation and restoration. Copyright © 2012 John Wiley & Sons, Ltd.

  4. Identifying sources of emerging organic contaminants in a mixed use watershed using principal components analysis.

    Science.gov (United States)

    Karpuzcu, M Ekrem; Fairbairn, David; Arnold, William A; Barber, Brian L; Kaufenberg, Elizabeth; Koskinen, William C; Novak, Paige J; Rice, Pamela J; Swackhamer, Deborah L

    2014-01-01

    Principal components analysis (PCA) was used to identify sources of emerging organic contaminants in the Zumbro River watershed in Southeastern Minnesota. Two main principal components (PCs) were identified, which together explained more than 50% of the variance in the data. Principal Component 1 (PC1) was attributed to urban wastewater-derived sources, including municipal wastewater and residential septic tank effluents, while Principal Component 2 (PC2) was attributed to agricultural sources. The variances of the concentrations of cotinine, DEET and the prescription drugs carbamazepine, erythromycin and sulfamethoxazole were best explained by PC1, while the variances of the concentrations of the agricultural pesticides atrazine, metolachlor and acetochlor were best explained by PC2. Mixed use compounds carbaryl, iprodione and daidzein did not specifically group with either PC1 or PC2. Furthermore, despite the fact that caffeine and acetaminophen have been historically associated with human use, they could not be attributed to a single dominant land use category (e.g., urban/residential or agricultural). Contributions from septic systems did not clarify the source for these two compounds, suggesting that additional sources, such as runoff from biosolid-amended soils, may exist. Based on these results, PCA may be a useful way to broadly categorize the sources of new and previously uncharacterized emerging contaminants or may help to clarify transport pathways in a given area. Acetaminophen and caffeine were not ideal markers for urban/residential contamination sources in the study area and may need to be reconsidered as such in other areas as well.

  5. IMPROVED SEARCH OF PRINCIPAL COMPONENT ANALYSIS DATABASES FOR SPECTRO-POLARIMETRIC INVERSION

    International Nuclear Information System (INIS)

    Casini, R.; Lites, B. W.; Ramos, A. Asensio; Ariste, A. López

    2013-01-01

    We describe a simple technique for the acceleration of spectro-polarimetric inversions based on principal component analysis (PCA) of Stokes profiles. This technique involves the indexing of the database models based on the sign of the projections (PCA coefficients) of the first few relevant orders of principal components of the four Stokes parameters. In this way, each model in the database can be attributed a distinctive binary number of 2^(4n) bits, where n is the number of PCA orders used for the indexing. Each of these binary numbers (indices) identifies a group of "compatible" models for the inversion of a given set of observed Stokes profiles sharing the same index. The complete set of the binary numbers so constructed evidently determines a partition of the database. The search of the database for the PCA inversion of spectro-polarimetric data can profit greatly from this indexing. In practical cases it becomes possible to approach the ideal acceleration factor of 2^(4n) as compared to the systematic search of a non-indexed database for a traditional PCA inversion. This indexing method relies on the existence of a physical meaning in the sign of the PCA coefficients of a model. For this reason, the presence of model ambiguities and of spectro-polarimetric noise in the observations limits in practice the number n of relevant PCA orders that can be used for the indexing
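
    The sketch below illustrates the sign-based indexing idea on synthetic profiles: each database model receives a binary index built from the signs of its leading PCA coefficients, and an observation is only compared against models sharing its index. A single profile is used here instead of the four Stokes parameters, and the database size and number of orders are arbitrary.

    import numpy as np

    rng = np.random.default_rng(8)
    n_models, n_points, n_orders = 5000, 64, 3
    database = rng.normal(size=(n_models, n_points))        # model Stokes-like profiles

    # PCA basis from the database itself.
    mean = database.mean(axis=0)
    U, s, Vt = np.linalg.svd(database - mean, full_matrices=False)
    basis = Vt[:n_orders]                                    # leading principal components

    def index_of(profile):
        """Binary index built from the signs of the first n_orders PCA coefficients."""
        coeffs = basis @ (profile - mean)
        bits = (coeffs > 0).astype(int)
        return int("".join(map(str, bits)), 2)

    # Partition the database by index, then search only the matching group.
    groups = {}
    for i, model in enumerate(database):
        groups.setdefault(index_of(model), []).append(i)

    observation = database[123] + 0.01 * rng.normal(size=n_points)
    candidates = groups[index_of(observation)]
    best = min(candidates, key=lambda i: np.sum((database[i] - observation) ** 2))
    print(f"searched {len(candidates)} of {n_models} models; best match: model {best}")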

  6. Development of a noise reduction program of a prompt gamma spectrum based on principal component analysis for an explosive detection

    International Nuclear Information System (INIS)

    Lee, Yun Hee; Im, Hee Jung; Song, Byung ChoI; Park, Yong Joon; Kim, Won Ho; Cho, Jung Hwan

    2005-01-01

    This work demonstrates a program developed to reduce the noise of a prompt gamma-ray spectrum measured by irradiating baggage with neutrons. The noise refers to random variations caused mainly by electrical fluctuations and by the measurement time. In particular, since a short measurement time yields a spectrum so noisy that its characteristic peaks cannot be observed, it is necessary to extract the characteristic signals from the spectrum to identify an explosive hidden in luggage. Principal component analysis (PCA), a multivariate statistical technique, is closely related to singular value decomposition (SVD). The SVD-based PCA decreases the noise by reconstructing the spectrum after determining the number of principal components corresponding to important signals, based on history data that sufficiently describe the population. In this study, we present a visualized program of the above procedure using the MATLAB 7.04 programming language. When started, the program requires as input files an arbitrary measured spectrum to be denoised and the history spectra. Once the user selects the files from a menu, the program automatically carries out the PCA procedure and displays the noise-reduced spectrum plot together with the original spectrum plot in an output window. In addition, the user can obtain the signal-to-noise ratio of a peak of interest by defining the peak and noise ranges from the menu.
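
    A minimal sketch of the SVD/PCA denoising step is given below: the leading principal components are learned from a set of "history" spectra and a short-measurement spectrum is reconstructed from its projection onto them. The spectra are synthetic Gaussian peaks rather than prompt gamma-ray data, and the number of retained components is chosen by hand.

    import numpy as np

    rng = np.random.default_rng(9)
    channels = np.arange(512)

    def clean_spectrum(amplitudes):
        peaks = [100, 230, 400]
        return sum(a * np.exp(-0.5 * ((channels - p) / 4.0) ** 2)
                   for a, p in zip(amplitudes, peaks))

    # History spectra measured long enough to describe the population.
    history = np.array([clean_spectrum(rng.uniform(50, 150, 3)) +
                        rng.normal(0, 1, channels.size) for _ in range(100)])

    mean = history.mean(axis=0)
    U, s, Vt = np.linalg.svd(history - mean, full_matrices=False)
    n_pc = 3                                         # components carrying the signal
    basis = Vt[:n_pc]

    # A short-measurement (very noisy) spectrum, then its PCA reconstruction.
    noisy = clean_spectrum([80, 120, 60]) + rng.normal(0, 10, channels.size)
    denoised = mean + basis.T @ (basis @ (noisy - mean))
    print("rms noise before:", round(np.std(noisy - clean_spectrum([80, 120, 60])), 2))
    print("rms noise after: ", round(np.std(denoised - clean_spectrum([80, 120, 60])), 2))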

  7. Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo

    KAUST Repository

    Martinez, Josue G.; Liang, Faming; Zhou, Lan; Carroll, Raymond J.

    2010-01-01

    model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order

  8. Principal component structure and sport-specific differences in the running one-leg vertical jump.

    Science.gov (United States)

    Laffaye, G; Bardy, B G; Durey, A

    2007-05-01

    The aim of this study is to identify the kinetic principal components involved in one-leg running vertical jumps, as well as the potential differences between specialists from different sports. The sample was composed of 25 regionally skilled athletes from different jumping sports (volleyball players, handball players, basketball players, high jumpers and novices), who performed a running one-leg jump. A principal component analysis was performed on the data obtained from the 200 tested jumps in order to identify the principal components summarizing the six variables extracted from the force-time curve. Two principal components including six variables accounted for 78% of the variance in jump height. Running one-leg vertical jump performance was predicted by a temporal component (which brings together impulse time, eccentric time and vertical displacement of the center of mass) and a force component (which brings together the relative peaks of force and power, and the rate of force development). A comparison made among athletes revealed a temporal-prevailing profile for volleyball players, and a force-dominant profile for Fosbury high jumpers. Novices showed an ineffective utilization of the force component, while handball and basketball players showed heterogeneous and neutral component profiles. Participants will use a jumping strategy in which variables related to either the magnitude or timing of force production will be closely coupled; athletes from different sporting backgrounds will use a jumping strategy that reflects the inherent demands of their chosen sport.

  9. Water quality of the Chhoti Gandak River using principal component ...

    Indian Academy of Sciences (India)

    ; therefore water samples were collected to analyse its quality along the entire length of the Chhoti Gandak River. The principal components of water quality are controlled by lithology, gentle slope gradient, poor drainage, long residence of water, ...

  10. Selective principal component regression analysis of fluorescence hyperspectral image to assess aflatoxin contamination in corn

    Science.gov (United States)

    Selective principal component regression analysis (SPCR) uses a subset of the original image bands for principal component transformation and regression. For optimal band selection before the transformation, this paper used genetic algorithms (GA). In this case, the GA process used the regression co...

  11. The analysis of multivariate group differences using common principal components

    NARCIS (Netherlands)

    Bechger, T.M.; Blanca, M.J.; Maris, G.

    2014-01-01

    Although it is simple to determine whether multivariate group differences are statistically significant or not, such differences are often difficult to interpret. This article is about common principal components analysis as a tool for the exploratory investigation of multivariate group differences

  12. Nonparametric inference in nonlinear principal components analysis : exploration and beyond

    NARCIS (Netherlands)

    Linting, Mariëlle

    2007-01-01

    In the social and behavioral sciences, data sets often do not meet the assumptions of traditional analysis methods. Therefore, nonlinear alternatives to traditional methods have been developed. This thesis starts with a didactic discussion of nonlinear principal components analysis (NLPCA),

  13. Geochemical differentiation processes for arc magma of the Sengan volcanic cluster, Northeastern Japan, constrained from principal component analysis

    Science.gov (United States)

    Ueki, Kenta; Iwamori, Hikaru

    2017-10-01

    In this study, with a view to understanding the structure of high-dimensional geochemical data and discussing the chemical processes at work in the evolution of arc magmas, we employed principal component analysis (PCA) to evaluate the compositional variations of volcanic rocks from the Sengan volcanic cluster of the Northeastern Japan Arc. We analyzed the trace element compositions of various arc volcanic rocks, sampled from 17 different volcanoes in a volcanic cluster. The PCA results demonstrated that the first three principal components accounted for 86% of the geochemical variation in the magma of the Sengan region. Based on the relationships between the principal components and the major elements, the mass-balance relationships with respect to the contributions of minerals, the composition of plagioclase phenocrysts, the geothermal gradient, and the seismic velocity structure in the crust, the first, second, and third principal components appear to represent magma mixing, crystallization of olivine/pyroxene, and crystallization of plagioclase, respectively. These components represented 59%, 20%, and 6%, respectively, of the variance in the entire compositional range, indicating that magma mixing accounted for the largest share of the geochemical variation of the arc magma. Our results indicate that crustal processes dominate the geochemical variation of magma in the Sengan volcanic cluster.

  14. Statistical intercomparison of global climate models: A common principal component approach with application to GCM data

    International Nuclear Information System (INIS)

    Sengupta, S.K.; Boyle, J.S.

    1993-05-01

    Variables describing atmospheric circulation and other climate parameters derived from various GCMs and obtained from observations can be represented on a spatio-temporal grid (lattice) structure. The primary objective of this paper is to explore existing as well as some new statistical methods to analyze such data structures for the purpose of model diagnostics and intercomparison from a statistical perspective. Among the several statistical methods considered here, a new method based on common principal components appears most promising for the purpose of intercomparison of spatio-temporal data structures arising in the task of model/model and model/data intercomparison. A complete strategy for such an intercomparison is outlined. The strategy includes two steps. First, the commonality of spatial structures in two (or more) fields is captured in the common principal vectors. Second, the corresponding principal components obtained as time series are then compared on the basis of similarities in their temporal evolution

  15. Analysis and Classification of Acoustic Emission Signals During Wood Drying Using the Principal Component Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Ho Yang [Korea Research Institute of Standards and Science, Daejeon (Korea, Republic of); Kim, Ki Bok [Chungnam National University, Daejeon (Korea, Republic of)

    2003-06-15

    In this study, acoustic emission (AE) signals due to surface cracking and moisture movement in flat-sawn boards of oak (Quercus variabilis) during drying under ambient conditions were analyzed and classified using principal component analysis. The AE signals corresponding to surface cracking were higher in peak amplitude and peak frequency, and shorter in rise time, than those corresponding to moisture movement. To reduce the multicollinearity among AE features and to extract the significant AE parameters, correlation analysis was performed. Over 99% of the variance of the AE parameters could be accounted for by the first to fourth principal components. The classification feasibility and success rate were investigated in terms of two statistical classifiers, one having six independent variables (AE parameters) and the other six principal components. The statistical classifier based on the AE parameters showed a success rate of 70.0%, whereas the statistical classifier based on the principal components showed a success rate of 87.5%, considerably higher than that of the classifier based on the AE parameters.

  16. Analysis and Classification of Acoustic Emission Signals During Wood Drying Using the Principal Component Analysis

    International Nuclear Information System (INIS)

    Kang, Ho Yang; Kim, Ki Bok

    2003-01-01

    In this study, acoustic emission (AE) signals due to surface cracking and moisture movement in flat-sawn boards of oak (Quercus variabilis) during drying under ambient conditions were analyzed and classified using principal component analysis. The AE signals corresponding to surface cracking were higher in peak amplitude and peak frequency, and shorter in rise time, than those corresponding to moisture movement. To reduce the multicollinearity among AE features and to extract the significant AE parameters, correlation analysis was performed. Over 99% of the variance of the AE parameters could be accounted for by the first to fourth principal components. The classification feasibility and success rate were investigated in terms of two statistical classifiers, one having six independent variables (AE parameters) and the other six principal components. The statistical classifier based on the AE parameters showed a success rate of 70.0%, whereas the statistical classifier based on the principal components showed a success rate of 87.5%, considerably higher than that of the classifier based on the AE parameters.
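
    The sketch below mirrors this workflow on simulated AE features: six correlated features are generated for two source classes, and a linear discriminant classifier trained on the raw features is compared with one trained on principal component scores. The feature values and the resulting accuracies are illustrative only and will not match the 70.0% and 87.5% figures reported above.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(11)
    n_per_class = 200
    # "Surface cracking": higher amplitude/frequency, shorter rise time.
    crack = rng.normal([8.0, 300.0, 2.0, 5.0, 1.0, 0.5], 1.0, size=(n_per_class, 6))
    # "Moisture movement": lower amplitude/frequency, longer rise time.
    moist = rng.normal([5.0, 150.0, 6.0, 3.0, 0.5, 0.3], 1.0, size=(n_per_class, 6))
    X = np.vstack([crack, moist])
    y = np.array([0] * n_per_class + [1] * n_per_class)

    clf_raw = LinearDiscriminantAnalysis().fit(X, y)
    print("accuracy on raw AE features:", round(clf_raw.score(X, y), 3))

    scores = PCA(n_components=4).fit_transform(X)           # decorrelated components
    clf_pca = LinearDiscriminantAnalysis().fit(scores, y)
    print("accuracy on principal components:", round(clf_pca.score(scores, y), 3))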

  17. Principal Component Analysis of Some Quantitative and Qualitative Traits in Iranian Spinach Landraces

    Directory of Open Access Journals (Sweden)

    Mohebodini Mehdi

    2017-08-01

    Full Text Available Landraces of spinach in Iran have not been sufficiently characterised for their morpho-agronomic traits. Such characterisation would be helpful in the development of new genetically improved cultivars. In this study, 54 spinach accessions collected from the major spinach-growing areas of Iran were evaluated to determine the phenotypic diversity profile of spinach genotypes on the basis of 10 quantitative and 9 qualitative morpho-agronomic traits. High coefficients of variation were recorded for some quantitative traits (dry yield and leaf area) and all of the qualitative traits. Using principal component analysis, the first four principal components with eigenvalues greater than 1 contributed 87% of the variability among accessions for quantitative traits, whereas the first four principal components with eigenvalues greater than 0.8 contributed 79% of the variability among accessions for qualitative traits. The most important relations observed on the first two principal components were a strong positive association between leaf width and petiole length, between leaf length and leaf number at flowering, and among fresh yield, dry yield and petiole diameter, and a near-zero correlation of days to flowering with leaf width and petiole length. Prickly seeds, a high percentage of female plants, smooth leaf texture, a high number of leaves at flowering, grey-green leaves, erect petiole attitude and long petiole length are important characters for spinach breeding programmes.

  18. Revealing the equivalence of two clonal survival models by principal component analysis

    International Nuclear Information System (INIS)

    Lachet, Bernard; Dufour, Jacques

    1976-01-01

    The principal component analysis of 21 Chlorella cell survival curves, fitted by one-hit and two-hit target models, leads to quite similar projections on the principal plane: the homologous parameters of the two models are linearly correlated; the reason for the statistical equivalence of these two models, in the present state of experimental inaccuracy, is revealed [fr

  19. Characterization of Type Ia Supernova Light Curves Using Principal Component Analysis of Sparse Functional Data

    Science.gov (United States)

    He, Shiyuan; Wang, Lifan; Huang, Jianhua Z.

    2018-04-01

    With growing data from ongoing and future supernova surveys, it is possible to empirically quantify the shapes of SNIa light curves in more detail, and to quantitatively relate the shape parameters with the intrinsic properties of SNIa. Building such relationships is critical in controlling systematic errors associated with supernova cosmology. Based on a collection of well-observed SNIa samples accumulated in the past years, we construct an empirical SNIa light curve model using a statistical method called the functional principal component analysis (FPCA) for sparse and irregularly sampled functional data. Using this method, the entire light curve of an SNIa is represented by a linear combination of principal component functions, and the SNIa is represented by a few numbers called “principal component scores.” These scores are used to establish relations between light curve shapes and physical quantities such as intrinsic color, interstellar dust reddening, spectral line strength, and spectral classes. These relations allow for descriptions of some critical physical quantities based purely on light curve shape parameters. Our study shows that some important spectral feature information is being encoded in the broad band light curves; for instance, we find that the light curve shapes are correlated with the velocity and velocity gradient of the Si II λ6355 line. This is important for supernova surveys (e.g., LSST and WFIRST). Moreover, the FPCA light curve model is used to construct the entire light curve shape, which in turn is used in a functional linear form to adjust intrinsic luminosity when fitting distance models.

  20. Radiation induced defect flux behaviors at zirconium based component

    International Nuclear Information System (INIS)

    Choi, Sang Il; Kim, Ji Hyun; Kwon, Jun Hyun; Lee, Gyeong Geun

    2013-01-01

    In a commercial reactor core, structural materials are exposed to a high-temperature, high-pressure environment. The main concerns for structural materials are therefore corrosion and changes in mechanical properties rather than radiation effects. However, radiation effects on materials become more important in research reactors, whose conditions differ from those of commercial reactors: the temperature is lower than 100 deg. C and the radiation dose is much higher than in a commercial reactor. Among the radiation effects on zirconium-based metals, radiation-induced growth (RIG), a volume-conserving distortion, is one of the most important phenomena. Recently, theoretical RIG models based on radiation damage theory (RDT) and balance equations have been developed. However, these growth models are limited to the framework of single crystals at high temperature. To model RIG theoretically in a research reactor, a qualitative mechanism must first be established. The aim of this paper is therefore to establish the defect flux mechanism of zirconium-based metal in a research reactor for RIG modeling. Theoretical RIG work will then be extended to research reactor conditions.

  1. Principal component analysis of image gradient orientations for face recognition

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    We introduce the notion of Principal Component Analysis (PCA) of image gradient orientations. As image data is typically noisy, but noise is substantially different from Gaussian, traditional PCA of pixel intensities very often fails to estimate reliably the low-dimensional subspace of a given data

  2. Combined approach based on principal component analysis and canonical discriminant analysis for investigating hyperspectral plant response

    Directory of Open Access Journals (Sweden)

    Anna Maria Stellacci

    2012-07-01

    Full Text Available Hyperspectral (HS) data represent an extremely powerful means for rapidly detecting crop stress and thus aiding the rational management of natural resources in agriculture. However, the large volume of data poses a challenge for data processing and for extracting crucial information. Multivariate statistical techniques can play a key role in the analysis of HS data, as they may allow both the elimination of redundant information and the identification of synthetic indices which maximize differences among levels of stress. In this paper we propose an integrated approach, based on the combined use of Principal Component Analysis (PCA) and Canonical Discriminant Analysis (CDA), to investigate HS plant response and discriminate plant status. The approach was preliminarily evaluated on a data set collected on durum wheat plants grown under different nitrogen (N) stress levels. Hyperspectral measurements were performed at anthesis through a high-resolution field spectroradiometer, ASD FieldSpec HandHeld, covering the 325-1075 nm region. Reflectance data were first restricted to the interval 510-1000 nm and then divided into five bands of the electromagnetic spectrum [green: 510-580 nm; yellow: 581-630 nm; red: 631-690 nm; red-edge: 705-770 nm; near-infrared (NIR): 771-1000 nm]. PCA was applied to each spectral interval. CDA was performed on the extracted components to identify the factors maximizing the differences among plants fertilised with increasing N rates. Within the intervals of green, yellow and red, only the first principal component (PC) had an eigenvalue greater than 1 and explained more than 95% of the total variance; within the red-edge and NIR ranges, the first two PCs had an eigenvalue higher than 1. Two canonical variables cumulatively explained more than 81% of the total variance, and the first was able to discriminate wheat plants fertilised differently, as confirmed also by the significant correlation with aboveground biomass and grain yield parameters. The combined

  3. Characterization of soil chemical properties of strawberry fields using principal component analysis

    Directory of Open Access Journals (Sweden)

    Gláucia Oliveira Islabão

    2013-02-01

    Full Text Available One of the largest strawberry-producing municipalities of Rio Grande do Sul (RS) is Turuçu, in the south of the state. The strawberry production system adopted by farmers is similar to that used in other regions in Brazil and in the world. The main difference is related to the soil management, which can change the soil chemical properties during the strawberry cycle. This study had the objective of assessing the spatial and temporal distribution of soil fertility parameters using principal component analysis (PCA). Soil sampling was based on topography, dividing the field into three thirds: upper, middle and lower. From each of these thirds, five soil samples were randomly collected in the 0-0.20 m layer to form a composite sample for each third. Four samples were taken during the strawberry cycle and the following properties were determined: soil organic matter (OM), soil total nitrogen (N), available phosphorus (P) and potassium (K), exchangeable calcium (Ca) and magnesium (Mg), soil pH, cation exchange capacity (CEC) at pH 7.0, soil base saturation (V%) and soil aluminum saturation (m%). No spatial variation was observed for any of the studied soil fertility parameters in the strawberry fields, and temporal variation was only detected for available K. Phosphorus and K contents were always high or very high from the beginning of the strawberry cycle, while pH values ranged from very low to very high. Principal component analysis allowed the clustering of all strawberry fields based on variables related to soil acidity and organic matter content.

  4. Factors affecting medication adherence in community-managed patients with hypertension based on the principal component analysis: evidence from Xinjiang, China.

    Science.gov (United States)

    Zhang, Yuji; Li, Xiaoju; Mao, Lu; Zhang, Mei; Li, Ke; Zheng, Yinxia; Cui, Wangfei; Yin, Hongpo; He, Yanli; Jing, Mingxia

    2018-01-01

    The analysis of factors affecting nonadherence to antihypertensive medications is important in the control of blood pressure among patients with hypertension. The purpose of this study was to assess the relationship between these factors and medication adherence in Xinjiang community-managed patients with hypertension, based on principal component analysis. A total of 1,916 community-managed patients with hypertension, selected randomly through multi-stage sampling, participated in the survey. Self-designed questionnaires were used to classify the participants as either adherent or nonadherent to their medication regimen. A principal component analysis was used in order to eliminate the correlation between factors. Factors related to nonadherence were analyzed by using a χ²-test and a binary logistic regression model. This study extracted nine common factors, with a cumulative variance contribution rate of 63.6%. Further analysis revealed that the following variables were significantly related to nonadherence: severity of disease, community management, diabetes, and taking traditional medications. Community management plays an important role in improving patients' medication-taking behavior. Regular medication regimen instruction and better community management services delivered at the community level have the potential to reduce nonadherence. Mild hypertensive patients should be monitored by community health care providers.
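
    As a rough illustration of the workflow just described (principal components extracted from correlated survey factors, then a binary logistic regression for nonadherence), the following sketch uses synthetic data rather than the Xinjiang survey; all variable names and effect sizes are assumptions.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    n = 500
    severity = rng.normal(size=n)                               # hypothetical factor scores
    community_mgmt = 0.6 * severity + rng.normal(scale=0.8, size=n)
    diabetes = (rng.random(n) < 0.2).astype(float)
    traditional_med = (rng.random(n) < 0.3).astype(float)
    X = np.column_stack([severity, community_mgmt, diabetes, traditional_med])
    logit = 0.8 * severity - 0.7 * community_mgmt + 0.5 * diabetes + 0.4 * traditional_med
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # 1 = nonadherent

    # PCA on standardized factors removes their mutual correlation before regression
    Z = StandardScaler().fit_transform(X)
    pca = PCA(n_components=0.9)                                 # keep ~90% of the variance
    scores = pca.fit_transform(Z)

    model = LogisticRegression().fit(scores, y)
    print("components kept:", pca.n_components_, "| coefficients:", model.coef_.round(2))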

  5. Multi-Trait analysis of growth traits: fitting reduced rank models using principal components for Simmental beef cattle

    Directory of Open Access Journals (Sweden)

    Rodrigo Reis Mota

    2016-09-01

    Full Text Available ABSTRACT: The aim of this research was to evaluate the dimensional reduction of additive direct genetic covariance matrices in genetic evaluations of growth traits (range 100-730 days) in Simmental cattle using principal components, as well as to estimate (co)variance components and genetic parameters. Principal component analyses were conducted for five different models: one full and four reduced-rank models. Models were compared using the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Variance components and genetic parameters were estimated by restricted maximum likelihood (REML). The AIC and BIC values were similar among models. This indicated that parsimonious models could be used in genetic evaluations in Simmental cattle. The first principal component explained more than 96% of total variance in both models. Heritability estimates were higher for advanced ages and varied from 0.05 (100 days) to 0.30 (730 days). Genetic correlation estimates were similar in both models regardless of the magnitude and number of principal components. The first principal component was sufficient to explain almost all genetic variance. Furthermore, genetic parameter similarities and lower computational requirements allowed for parsimonious models in genetic evaluations of growth traits in Simmental cattle.

  6. The selection of radiation tolerant electrical/electronic components for gamma radiation environments in the nuclear power industry

    International Nuclear Information System (INIS)

    Garlick, D.R.

    1984-09-01

    This report briefly describes the mechanisms, units and effects of 1 MeV range gamma radiation on electrical/electronic components and materials. Information is tabulated on the gamma radiation tolerance of a wide range of components and materials. A radiation testing service, based at Harwell, is described. Lists of interested manufacturers and organisations are given. (author)

  7. A Cure for Variance Inflation in High Dimensional Kernel Principal Component Analysis

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    Small sample high-dimensional principal component analysis (PCA) suffers from variance inflation and lack of generalizability. It has earlier been pointed out that a simple leave-one-out variance renormalization scheme can cure the problem. In this paper we generalize the cure in two directions: first, we propose a computationally less intensive approximate leave-one-out estimator; second, we show that variance inflation is also present in kernel principal component analysis (kPCA) and we provide a non-parametric renormalization scheme which can quite efficiently restore generalizability in kPCA. As for PCA, our analysis also suggests a simplified approximate expression. © 2011 Trine J. Abrahamsen and Lars K. Hansen.

  8. Sparse principal component analysis in medical shape modeling

    Science.gov (United States)

    Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus

    2006-03-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.

  9. Feature extraction through parallel Probabilistic Principal Component Analysis for heart disease diagnosis

    Science.gov (United States)

    Shah, Syed Muhammad Saqlain; Batool, Safeera; Khan, Imran; Ashraf, Muhammad Usman; Abbas, Syed Hussnain; Hussain, Syed Adnan

    2017-09-01

    Automatic diagnosis of human diseases is mostly achieved through decision support systems. The performance of these systems mainly depends on the selection of the most relevant features. This becomes harder when the dataset contains missing values for some features. Probabilistic Principal Component Analysis (PPCA) has a reputation for dealing with missing attribute values. This research presents a methodology which uses the results of medical tests as input, extracts a reduced-dimensional feature subset and provides a diagnosis of heart disease. The proposed methodology extracts high-impact features in a new projection by using Probabilistic Principal Component Analysis (PPCA). PPCA extracts the projection vectors which contribute the highest covariance, and these projection vectors are used to reduce the feature dimension. The selection of projection vectors is done through Parallel Analysis (PA). The feature subset with the reduced dimension is provided to a radial basis function (RBF) kernel based Support Vector Machine (SVM). The RBF-based SVM performs the classification into two categories, i.e., Heart Patient (HP) and Normal Subject (NS). The proposed methodology is evaluated through accuracy, specificity and sensitivity over three UCI datasets, i.e., Cleveland, Switzerland and Hungarian. The statistical results achieved through the proposed technique are presented in comparison to the existing research, showing its impact. The proposed technique achieved an accuracy of 82.18%, 85.82% and 91.30% for the Cleveland, Hungarian and Switzerland datasets, respectively.
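
    The pipeline sketched below mirrors the shape of the method above (dimension reduction followed by an RBF-kernel SVM) but is not the authors' implementation: scikit-learn has no PPCA with native missing-value handling, so mean imputation plus ordinary PCA is used as a stand-in, the number of components is fixed instead of chosen by parallel analysis, and the data are synthetic rather than the UCI heart datasets.

    import numpy as np
    from sklearn.impute import SimpleImputer
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    X = rng.normal(size=(300, 13))                      # 13 attributes, as in the Cleveland data
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=300) > 0).astype(int)
    X[rng.random(X.shape) < 0.05] = np.nan              # 5% missing values

    clf = make_pipeline(
        SimpleImputer(strategy="mean"),                 # stand-in for PPCA's missing-data handling
        StandardScaler(),
        PCA(n_components=5),                            # in the paper this count comes from parallel analysis
        SVC(kernel="rbf", C=1.0, gamma="scale"),        # RBF-kernel SVM: HP vs NS
    )
    print("cross-validated accuracy: %.3f" % cross_val_score(clf, X, y, cv=5).mean())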

  10. Ecological Safety Evaluation of Land Use in Ji’an City Based on the Principal Component Analysis

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    According to the ecological safety evaluation index data of land-use change in Ji’an City from 1999 to 2008, the selected reverse indices are converted to positive form by the Reciprocal Method. Meanwhile, the Index Method is used to standardize the selected indices, and Principal Component Analysis is applied using the year as the unit. FB is obtained, which is related to the ecological safety of land-use change from 1999 to 2008. According to the scientific, integrative, hierarchical, practical and dynamic principles, an ecological safety evaluation index system of land-use change in Ji’an City is established. Principal Component Analysis and the evaluation model are used to calculate four parameters, including the natural resources safety index of land use, the socio-economic safety index of land use, the eco-environmental safety index of land use, and the ecological safety degree of land use in Ji’an City. The result indicates that the ecological safety degree of land use in Ji’an City shows a slow upward trend as a whole. At the same time, the ecological safety degree of land-use change in Ji’an City is relatively low, with a safety value of 0.645, which lies in a weak safety zone and needs further monitoring and maintenance.

  11. Adaptive operational modal identification for slow linear time-varying structures based on frozen-in coefficient method and limited memory recursive principal component analysis

    Science.gov (United States)

    Wang, Cheng; Guan, Wei; Wang, J. Y.; Zhong, Bineng; Lai, Xiongming; Chen, Yewang; Xiang, Liang

    2018-02-01

    To adaptively identify the transient modal parameters of linear, weakly damped structures with slowly time-varying characteristics under unmeasured stationary random ambient loads, this paper proposes a novel operational modal analysis (OMA) method based on the frozen-in coefficient method and limited memory recursive principal component analysis (LMRPCA). In modal coordinates, the random vibration response signals of weakly damped mechanical structures can be decomposed into the inner product of the modal shapes and the modal responses, from which the natural frequencies and damping ratios can be acquired by a single-degree-of-freedom (SDOF) identification approach such as FFT. Hence, for an OMA method based on principal component analysis (PCA), it becomes crucial to examine the relation between the transformation matrix and the modal shape matrix, to find the association between the principal components (PCs) matrix and the modal responses matrix, and thus to turn the operational modal parameter identification problem into a PCA of the stationary random vibration response signals of weakly damped mechanical structures. Based on the theory of "time-freezing", the frozen-in coefficient method, and the assumptions of "short-time invariance" and "quasi-stationarity", the non-stationary random response signals of weakly damped, slowly linear time-varying (LTV) structures can approximately be treated, over a short interval, as the stationary random response time series of weakly damped linear time-invariant (LTI) structures. Thus, the adaptive identification of time-varying operational modal parameters is turned into decomposing the PCs of successive segments of the stationary random vibration response signals of weakly damped mechanical structures, after choosing an appropriate limited memory window. Finally, a three-degree-of-freedom (DOF) structure with weakly damped and slowly time-varying mass is presented to illustrate this identification method. Results show that the LMRPCA

  12. Principal Component Analysis for Normal-Distribution-Valued Symbolic Data.

    Science.gov (United States)

    Wang, Huiwen; Chen, Meiling; Shi, Xiaojun; Li, Nan

    2016-02-01

    This paper puts forward a new approach to principal component analysis (PCA) for normal-distribution-valued symbolic data, which has a vast potential of applications in the economic and management fields. We derive a full set of numerical characteristics and the variance-covariance structure for such data, which forms the foundation of our analytical PCA approach. Our approach is able to use all of the variance information in the original data, rather than only the centers, vertices, etc. used by the prevailing representative-type approaches in the literature. The paper also provides an accurate approach to constructing the observations in a PC space based on the linear additivity property of the normal distribution. The effectiveness of the proposed method is illustrated by simulated numerical experiments. Finally, our method is applied to explain the puzzle of the risk-return tradeoff in China's stock market.

  13. Boundary layer noise subtraction in hydrodynamic tunnel using robust principal component analysis.

    Science.gov (United States)

    Amailland, Sylvain; Thomas, Jean-Hugh; Pézerat, Charles; Boucheron, Romuald

    2018-04-01

    The acoustic study of propellers in a hydrodynamic tunnel is of paramount importance during the design process, but can involve significant difficulties due to the boundary layer noise (BLN). Indeed, advanced denoising methods are needed to recover the acoustic signal in case of poor signal-to-noise ratio. The technique proposed in this paper is based on the decomposition of the wall-pressure cross-spectral matrix (CSM) by taking advantage of both the low-rank property of the acoustic CSM and the sparse property of the BLN CSM. Thus, the algorithm belongs to the class of robust principal component analysis (RPCA), which derives from the widely used principal component analysis. If the BLN is spatially decorrelated, the proposed RPCA algorithm can blindly recover the acoustical signals even for negative signal-to-noise ratio. Unfortunately, in a realistic case, acoustic signals recorded in a hydrodynamic tunnel show that the noise may be partially correlated. A prewhitening strategy is then considered in order to take into account the spatially coherent background noise. Numerical simulations and experimental results show an improvement in terms of BLN reduction in the large hydrodynamic tunnel. The effectiveness of the denoising method is also investigated in the context of acoustic source localization.
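
    To make the low-rank-plus-sparse idea concrete, the sketch below implements a generic principal component pursuit iteration (singular value thresholding for the low-rank part, soft thresholding for the sparse part) on a synthetic matrix; it is a textbook RPCA sketch, not the paper's algorithm, and the prewhitening step discussed above is omitted.

    import numpy as np

    def rpca(M, lam=None, mu=None, n_iter=200, tol=1e-7):
        """Split M into low-rank L and sparse S via a basic ALM/ADMM iteration."""
        m, n = M.shape
        lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
        mu = mu if mu is not None else m * n / (4.0 * np.abs(M).sum())
        shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
        S = np.zeros_like(M); Y = np.zeros_like(M)
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
            L = U @ np.diag(shrink(s, 1.0 / mu)) @ Vt      # singular value thresholding
            S = shrink(M - L + Y / mu, lam / mu)           # entrywise soft thresholding
            R = M - L - S
            Y += mu * R                                    # dual update
            if np.linalg.norm(R) <= tol * np.linalg.norm(M):
                break
        return L, S

    rng = np.random.default_rng(3)
    low_rank = rng.normal(size=(64, 4)) @ rng.normal(size=(4, 64))   # rank-4 "acoustic-like" part
    sparse = np.zeros((64, 64))
    mask = rng.random((64, 64)) < 0.05
    sparse[mask] = rng.normal(scale=10, size=mask.sum())             # sparse "noise-like" part
    L, S = rpca(low_rank + sparse)
    print("recovered rank:", np.linalg.matrix_rank(L, tol=1e-6),
          "| relative error:", round(np.linalg.norm(L - low_rank) / np.linalg.norm(low_rank), 4))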

  14. Dynamic of consumer groups and response of commodity markets by principal component analysis

    Science.gov (United States)

    Nobi, Ashadun; Alam, Shafiqul; Lee, Jae Woo

    2017-09-01

    This study investigates financial states and group dynamics by applying principal component analysis to the cross-correlation coefficients of the daily returns of commodity futures. The eigenvalues of the cross-correlation matrix in the 6-month timeframe display similar values during 2010-2011, but decline after 2012. A sharp drop in the eigenvalue implies a significant change of the market state. Three commodity sectors, energy, metals and agriculture, are projected into a two-dimensional space consisting of two principal components (PCs). We observe that they form three distinct clusters corresponding to the sectors. However, commodities with distinct features intermingled with one another and scattered during severe crises, such as the European sovereign debt crisis. We observe a notable change in the positions of the groups in the two-dimensional space during financial crises. By considering the first principal component (PC1) within the 6-month moving timeframe, we observe that commodities of the same group change states in a similar pattern, and the change of state of one group can be used as a warning for other groups.
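
    A bare-bones version of the analysis described above (eigenvalues and first principal component of the cross-correlation matrix of daily returns in a moving window) can be written as follows; the returns are synthetic and the window length is an assumption.

    import numpy as np

    rng = np.random.default_rng(4)
    n_days, n_assets, window = 750, 15, 125            # roughly six months of trading days
    common = rng.normal(size=(n_days, 1))
    returns = 0.4 * common + rng.normal(size=(n_days, n_assets))   # one shared "market" mode

    largest_eig, pc1_loadings = [], []
    for t in range(0, n_days - window + 1, window):
        R = returns[t:t + window]
        C = np.corrcoef(R, rowvar=False)               # cross-correlation matrix of daily returns
        w, V = np.linalg.eigh(C)                       # eigenvalues in ascending order
        largest_eig.append(w[-1])
        pc1_loadings.append(V[:, -1])                  # loadings of the first principal component
    print("largest eigenvalue per window:", np.round(largest_eig, 2))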

  15. A novel principal component analysis for spatially misaligned multivariate air pollution data.

    Science.gov (United States)

    Jandarov, Roman A; Sheppard, Lianne A; Sampson, Paul D; Szpiro, Adam A

    2017-01-01

    We propose novel methods for predictive (sparse) PCA with spatially misaligned data. These methods identify principal component loading vectors that explain as much variability in the observed data as possible, while also ensuring the corresponding principal component scores can be predicted accurately by means of spatial statistics at locations where air pollution measurements are not available. This will make it possible to identify important mixtures of air pollutants and to quantify their health effects in cohort studies, where currently available methods cannot be used. We demonstrate the utility of predictive (sparse) PCA in simulated data and apply the approach to annual averages of particulate matter speciation data from national Environmental Protection Agency (EPA) regulatory monitors.

  16. Evaluation of functional scintigraphy of gastric emptying by the principal component method

    Energy Technology Data Exchange (ETDEWEB)

    Haeussler, M.; Eilles, C.; Reiners, C.; Moll, E.; Boerner, W.

    1980-10-01

    Gastric emptying of a standard semifluid test meal, labeled with 99mTc-DTPA, was studied by functional scintigraphy in 88 subjects (normals, patients with duodenal and gastric ulcer before and after selective proximal vagotomy with and without pyloroplasty). Gastric emptying curves were analysed by the method of principal components. Patients after selective proximal vagotomy with pyloroplasty showed a rapid initial emptying, whereas this was a rare finding in patients after selective proximal vagotomy without pyloroplasty. The method of principal components is well suited for the mathematical analysis of gastric emptying; nevertheless, the results are difficult to interpret. The method has advantages when looking at larger collectives and allows a separation into groups with different gastric emptying.

  17. Obesity, metabolic syndrome, impaired fasting glucose, and microvascular dysfunction: a principal component analysis approach.

    Science.gov (United States)

    Panazzolo, Diogo G; Sicuro, Fernando L; Clapauch, Ruth; Maranhão, Priscila A; Bouskela, Eliete; Kraemer-Aguiar, Luiz G

    2012-11-13

    We aimed to evaluate the multivariate association between functional microvascular variables and clinical-laboratorial-anthropometrical measurements. Data from 189 female subjects (34.0 ± 15.5 years, 30.5 ± 7.1 kg/m2), who were non-smokers, non-regular drug users, without a history of diabetes and/or hypertension, were analyzed by principal component analysis (PCA). PCA is a classical multivariate exploratory tool because it highlights common variation between variables allowing inferences about possible biological meaning of associations between them, without pre-establishing cause-effect relationships. In total, 15 variables were used for PCA: body mass index (BMI), waist circumference, systolic and diastolic blood pressure (BP), fasting plasma glucose, levels of total cholesterol, high-density lipoprotein cholesterol (HDL-c), low-density lipoprotein cholesterol (LDL-c), triglycerides (TG), insulin, C-reactive protein (CRP), and functional microvascular variables measured by nailfold videocapillaroscopy. Nailfold videocapillaroscopy was used for direct visualization of nutritive capillaries, assessing functional capillary density, red blood cell velocity (RBCV) at rest and peak after 1 min of arterial occlusion (RBCV(max)), and the time taken to reach RBCV(max) (TRBCV(max)). A total of 35% of subjects had metabolic syndrome, 77% were overweight/obese, and 9.5% had impaired fasting glucose. PCA was able to recognize that functional microvascular variables and clinical-laboratorial-anthropometrical measurements had a similar variation. The first five principal components explained most of the intrinsic variation of the data. For example, principal component 1 was associated with BMI, waist circumference, systolic BP, diastolic BP, insulin, TG, CRP, and TRBCV(max) varying in the same way. Principal component 1 also showed a strong association among HDL-c, RBCV, and RBCV(max), but in the opposite way. Principal component 3 was associated only with microvascular

  18. Obesity, metabolic syndrome, impaired fasting glucose, and microvascular dysfunction: a principal component analysis approach

    Directory of Open Access Journals (Sweden)

    Panazzolo Diogo G

    2012-11-01

    Full Text Available Abstract Background We aimed to evaluate the multivariate association between functional microvascular variables and clinical-laboratorial-anthropometrical measurements. Methods Data from 189 female subjects (34.0±15.5 years, 30.5±7.1 kg/m2), who were non-smokers, non-regular drug users, without a history of diabetes and/or hypertension, were analyzed by principal component analysis (PCA). PCA is a classical multivariate exploratory tool because it highlights common variation between variables allowing inferences about possible biological meaning of associations between them, without pre-establishing cause-effect relationships. In total, 15 variables were used for PCA: body mass index (BMI), waist circumference, systolic and diastolic blood pressure (BP), fasting plasma glucose, levels of total cholesterol, high-density lipoprotein cholesterol (HDL-c), low-density lipoprotein cholesterol (LDL-c), triglycerides (TG), insulin, C-reactive protein (CRP), and functional microvascular variables measured by nailfold videocapillaroscopy. Nailfold videocapillaroscopy was used for direct visualization of nutritive capillaries, assessing functional capillary density, red blood cell velocity (RBCV) at rest and peak after 1 min of arterial occlusion (RBCVmax), and the time taken to reach RBCVmax (TRBCVmax). Results A total of 35% of subjects had metabolic syndrome, 77% were overweight/obese, and 9.5% had impaired fasting glucose. PCA was able to recognize that functional microvascular variables and clinical-laboratorial-anthropometrical measurements had a similar variation. The first five principal components explained most of the intrinsic variation of the data. For example, principal component 1 was associated with BMI, waist circumference, systolic BP, diastolic BP, insulin, TG, CRP, and TRBCVmax varying in the same way. Principal component 1 also showed a strong association among HDL-c, RBCV, and RBCVmax, but in the opposite way. Principal component 3 was

  19. A Blind Adaptive Color Image Watermarking Scheme Based on Principal Component Analysis, Singular Value Decomposition and Human Visual System

    Directory of Open Access Journals (Sweden)

    M. Imran

    2017-09-01

    Full Text Available A blind adaptive color image watermarking scheme based on principal component analysis, singular value decomposition, and the human visual system is proposed. The use of principal component analysis to decorrelate the three color channels of the host image improves the perceptual quality of the watermarked image, whereas the human visual system and a fuzzy inference system help to improve both imperceptibility and robustness by selecting an adaptive scaling factor, so that areas more prone to noise can carry more information than less prone areas. To achieve security, the location of watermark embedding is kept secret and used as a key at the time of watermark extraction, whereas, for capacity, both singular values and vectors are involved in the watermark embedding process. As a result, four contradictory requirements (imperceptibility, robustness, security and capacity) are achieved, as suggested by the results. Both subjective and objective methods are used to examine the performance of the proposed schemes. For subjective analysis, the watermarked images and the watermarks extracted from attacked watermarked images are shown. For objective analysis of the proposed scheme in terms of imperceptibility, peak signal-to-noise ratio, the structural similarity index, visual information fidelity and normalized color difference are used, whereas for objective analysis in terms of robustness, normalized correlation, bit error rate, normalized Hamming distance and global authentication rate are used. Security is checked by using different keys to extract the watermark. The proposed schemes are compared with state-of-the-art watermarking techniques and show better performance, as suggested by the results.

  20. Application of principal component analysis to ecodiversity assessment of postglacial landscape (on the example of Debnica Kaszubska commune, Middle Pomerania)

    Science.gov (United States)

    Wojciechowski, Adam

    2017-04-01

    In order to assess ecodiversity, understood as a comprehensive natural landscape factor (Jedicke 2001), it is necessary to apply research methods which treat the environment in a holistic way. Principal component analysis may be considered one such method, as it allows the main factors determining landscape diversity to be distinguished on the one hand, and regularities shaping the relationships between various elements of the environment under study to be discovered on the other. The procedure adopted to assess ecodiversity with the use of principal component analysis involves: a) determining and selecting appropriate factors of the assessed environment qualities (hypsometric, geological, hydrographic, plant, and others); b) calculating the absolute value of individual qualities for the basic areas under analysis (e.g. river length, forest area, altitude differences, etc.); c) principal component analysis and obtaining factor maps (maps of selected components); d) generating a resultant, detailed map and isolating several classes of ecodiversity. An assessment of ecodiversity with the use of principal component analysis was conducted in a test area of 299.67 km2 in Debnica Kaszubska commune. The whole commune is situated in the Weichselian glaciation area of high hypsometric and morphological diversity as well as high geo- and biodiversity. The analysis was based on topographical maps of the commune area at a scale of 1:25000 and maps of forest habitats. Consequently, nine factors reflecting basic environment elements were calculated: maximum height (m), minimum height (m), average height (m), the length of watercourses (km), the area of water reservoirs (m2), total forest area (ha), coniferous forest habitat area (ha), deciduous forest habitat area (ha), and alder habitat area (ha). The values of individual factors were analysed for 358 grid cells of 1 km2. Based on the principal component analysis, four major factors affecting commune ecodiversity

  1. Measuring yield performance of upland cotton varieties using adaptability, stability and principal component analyses

    International Nuclear Information System (INIS)

    Baloch, M.J.

    2003-01-01

    Nine upland cotton varieties/strains were tested over 36 environments in Pakistan so as to determine the stability of their yield performance. The regression coefficient (b) was used as a measure of adaptability, whereas parameters such as the coefficient of determination (r²) and the sum of squared deviations from regression (s²d) were used as measures of stability. Although the regression coefficients (b) of all varieties did not deviate significantly from the unit slope, the varieties CRIS-5A, BII-89, DNH-40 and Rehmani gave b values closer to unity, implying their better adaptation. The lower s²d and higher r² of CRIS-121 and DNH-40 suggest that both of these are fairly stable. The results indicate that, generally, the adaptability and stability parameters are independent of each other, inasmuch as not all of the parameters simultaneously favoured one variety over another, excepting the variety DNH-40, which was stable according to the majority of the parameters. Principal component analysis revealed that the first two components (latent roots) account for about 91.4% of the total variation. The latent vectors of the first principal component (PCA1) were small and positive, which also suggests that most of the varieties were quite adaptive to all of the test environments. (author)
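
    For readers unfamiliar with these parameters, the sketch below computes adaptability (b) and stability (s2d, r2) in the Eberhart-Russell style from a synthetic variety-by-environment table of yields; the numbers are illustrative only, not the trial data.

    import numpy as np

    rng = np.random.default_rng(5)
    n_var, n_env = 9, 36
    env_effect = rng.normal(scale=1.5, size=n_env)
    b_true = rng.uniform(0.8, 1.2, size=n_var)
    yields = 10 + b_true[:, None] * env_effect + rng.normal(scale=0.3, size=(n_var, n_env))

    env_index = yields.mean(axis=0)                    # environmental index (mean over varieties)
    for v in range(n_var):
        y = yields[v]
        b, a = np.polyfit(env_index, y, 1)             # regression of variety yield on the index
        resid = y - (a + b * env_index)
        s2d = (resid ** 2).sum() / (n_env - 2)         # squared deviations from regression
        r2 = 1 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()
        print(f"variety {v + 1}: b={b:.2f}  s2d={s2d:.3f}  r2={r2:.3f}")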

  2. [Determination of the Plant Origin of Licorice Oil Extract, a Natural Food Additive, by Principal Component Analysis Based on Chemical Components].

    Science.gov (United States)

    Tada, Atsuko; Ishizuki, Kyoko; Sugimoto, Naoki; Yoshimatsu, Kayo; Kawahara, Nobuo; Suematsu, Takako; Arifuku, Kazunori; Fukai, Toshio; Tamura, Yukiyoshi; Ohtsuki, Takashi; Tahara, Maiko; Yamazaki, Takeshi; Akiyama, Hiroshi

    2015-01-01

    "Licorice oil extract" (LOE) (antioxidant agent) is described in the notice of Japanese food additive regulations as a material obtained from the roots and/or rhizomes of Glycyrrhiza uralensis, G. inflata or G. glabra. In this study, we aimed to identify the original Glycyrrhiza species of eight food additive products using LC/MS. Glabridin, a characteristic compound in G. glabra, was specifically detected in seven products, and licochalcone A, a characteristic compound in G. inflata, was detected in one product. In addition, Principal Component Analysis (PCA) (a kind of multivariate analysis) using the data of LC/MS or (1)H-NMR analysis was performed. The data of thirty-one samples, including LOE products used as food additives, ethanol extracts of various Glycyrrhiza species and commercially available Glycyrrhiza species-derived products were assessed. Based on the PCA results, the majority of LOE products was confirmed to be derived from G. glabra. This study suggests that PCA using (1)H-NMR analysis data is a simple and useful method to identify the plant species of origin of natural food additive products.

  3. Radiation effects in optical components

    International Nuclear Information System (INIS)

    Friebele, E.J.

    1987-01-01

    This report discusses how components of high-performance optical devices may be exposed to high-energy radiation environments during their lifetime. The effect of these adverse environments depends upon a large number of parameters associated with the radiation (nature, energy, dose, dose rate, etc.) or the system (temperature, optical performance requirements, optical wavelength, optical power, path length, etc.), as well as the intrinsic susceptibility of the optical component itself to degradation.

  4. Residual lifetime prediction for lithium-ion battery based on functional principal component analysis and Bayesian approach

    International Nuclear Information System (INIS)

    Cheng, Yujie; Lu, Chen; Li, Tieying; Tao, Laifa

    2015-01-01

    Existing methods for predicting lithium-ion (Li-ion) battery residual lifetime mostly depend on a priori knowledge on aging mechanism, the use of chemical or physical formulation and analytical battery models. This dependence is usually difficult to determine in practice, which restricts the application of these methods. In this study, we propose a new prediction method for Li-ion battery residual lifetime evaluation based on FPCA (functional principal component analysis) and Bayesian approach. The proposed method utilizes FPCA to construct a nonparametric degradation model for Li-ion battery, based on which the residual lifetime and the corresponding confidence interval can be evaluated. Furthermore, an empirical Bayes approach is utilized to achieve real-time updating of the degradation model and concurrently determine residual lifetime distribution. Based on Bayesian updating, a more accurate prediction result and a more precise confidence interval are obtained. Experiments are implemented based on data provided by the NASA Ames Prognostics Center of Excellence. Results confirm that the proposed prediction method performs well in real-time battery residual lifetime prediction. - Highlights: • Capacity is considered functional and FPCA is utilized to extract more information. • No features required which avoids drawbacks induced by feature extraction. • A good combination of both population and individual information. • Avoiding complex aging mechanism and accurate analytical models of batteries. • Easily applicable to different batteries for life prediction and RLD calculation.

  5. Quality analysis of commercial samples of Ziziphi spinosae semen (suanzaoren) by means of chromatographic fingerprinting assisted by principal component analysis

    Directory of Open Access Journals (Sweden)

    Shuai Sun

    2014-06-01

    Full Text Available Due to the scarcity of resources of Ziziphi spinosae semen (ZSS), many inferior goods and even adulterants are generally found in medicine markets. To strengthen the quality control, the HPLC fingerprint common pattern established in this paper showed three main bioactive compounds in one chromatogram simultaneously. Principal component analysis based on DAD signals could discriminate adulterants and inferiorities. Principal component analysis indicated that all samples could be mainly regrouped into two main clusters according to the first principal component (PC1), redefined as Vicenin II, and the second principal component (PC2), redefined as zizyphusine. PC1 and PC2 could explain 91.42% of the variance. The content of zizyphusine fluctuated more greatly than that of spinosin, and this result was also confirmed by the HPTLC result. Samples with low content of jujubosides and two common adulterants could not be used equivalently with authenticated ones in clinic, while one reference standard extract could substitute the crude drug in pharmaceutical production. Giving special consideration to the well-known bioactive saponins, which show only low response by end absorption, a fast and cheap HPTLC method for quality control of ZSS was developed, and the result obtained corresponded well with that of the HPLC analysis. Samples having fingerprints similar to the HPTLC common pattern targeting saponins could be regarded as authenticated ones. This work provided a faster and cheaper way for quality control of ZSS and laid the foundation for establishing a more effective quality control method for ZSS. Keywords: Adulterant, Common pattern, Principal component analysis, Quality control, Ziziphi spinosae semen

  6. Machine learning of frustrated classical spin models. I. Principal component analysis

    Science.gov (United States)

    Wang, Ce; Zhai, Hui

    2017-10-01

    This work aims at determining whether artificial intelligence can recognize a phase transition without prior human knowledge. If this were successful, it could be applied to, for instance, analyzing data from the quantum simulation of unsolved physical models. Toward this goal, we first need to apply the machine learning algorithm to well-understood models and see whether the outputs are consistent with our prior knowledge, which serves as the benchmark for this approach. In this work, we feed the computer data generated by the classical Monte Carlo simulation for the XY model in frustrated triangular and union jack lattices, which has two order parameters and exhibits two phase transitions. We show that the outputs of the principal component analysis agree very well with our understanding of different orders in different phases, and the temperature dependences of the major components detect the nature and the locations of the phase transitions. Our work offers promise for using machine learning techniques to study sophisticated statistical models, and our results can be further improved by using principal component analysis with kernel tricks and the neural network method.
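
    The essence of the approach (PCA applied directly to raw spin configurations, with the leading component tracking the order parameter) can be illustrated with crude synthetic snapshots; the sketch below uses Ising-like configurations with a random global sign rather than the frustrated XY Monte Carlo data of the paper, so the temperatures and system size are assumptions.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(6)
    n_spins, n_samples_per_T = 16 * 16, 100
    temps = np.linspace(0.5, 4.0, 8)

    configs, labels = [], []
    for T in temps:
        p_flip = min(0.5, 0.5 * T / 4.0)                 # crude stand-in for thermal disorder
        for _ in range(n_samples_per_T):
            s = np.where(rng.random(n_spins) < p_flip, -1.0, 1.0)
            configs.append(s * rng.choice([-1.0, 1.0]))  # random global sign (Z2 symmetry)
            labels.append(T)
    configs, labels = np.array(configs), np.array(labels)

    pca = PCA(n_components=2).fit(configs)
    p1 = np.abs(pca.transform(configs)[:, 0])            # |PC1| behaves like a magnetisation
    for T in temps:
        print(f"T={T:.2f}  mean |PC1| = {p1[labels == T].mean():.2f}")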

  7. Mapping brain activity in gradient-echo functional MRI using principal component analysis

    Science.gov (United States)

    Khosla, Deepak; Singh, Manbir; Don, Manuel

    1997-05-01

    The detection of sites of brain activation in functional MRI has been a topic of immense research interest and many techniques have been proposed to this end. Recently, principal component analysis (PCA) has been applied to extract the activated regions and their time course of activation. This method is based on the assumption that the activation is orthogonal to other signal variations such as brain motion, physiological oscillations and other uncorrelated noises. A distinct advantage of this method is that it does not require any knowledge of the time course of the true stimulus paradigm. This technique is well suited to EPI image sequences where the sampling rate is high enough to capture the effects of physiological oscillations. In this work, we propose and apply two methods that are based on PCA to conventional gradient-echo images and investigate their usefulness as tools to extract reliable information on brain activation. The first method is a conventional technique where a single image sequence with alternating on and off stages is subjected to a principal component analysis. The second method is a PCA-based approach called the common spatial factor analysis technique (CSF). As the name suggests, this method relies on common spatial factors between the above fMRI image sequence and a background fMRI. We have applied these methods to identify active brain areas during visual stimulation and motor tasks. The results from these methods are compared to those obtained by using the standard cross-correlation technique. We found good agreement in the areas identified as active across all three techniques. The results suggest that the PCA and CSF methods have good potential in detecting the true stimulus-correlated changes in the presence of other interfering signals.

  8. Fluoride in the Serra Geral Aquifer System: Source Evaluation Using Stable Isotopes and Principal Component Analysis

    OpenAIRE

    Nanni, Arthur Schmidt; Roisenberg, Ari; de Hollanda, Maria Helena Bezerra Maia; Marimon, Maria Paula Casagrande; Viero, Antonio Pedro; Scheibe, Luiz Fernando

    2013-01-01

    Groundwater with anomalous fluoride content and water mixture patterns were studied in the fractured Serra Geral Aquifer System, a basaltic to rhyolitic geological unit, using a principal component analysis interpretation of groundwater chemical data from 309 deep wells distributed in the Rio Grande do Sul State, Southern Brazil. A four-component model that explains 81% of the total variance in the Principal Component Analysis is suggested. Six hydrochemical groups were identified. δ18O and δ...

  9. Resource Loss and Depressive Symptoms Following Hurricane Katrina: A Principal Component Regression Study

    OpenAIRE

    Liang L; Hayashi K; Bennett P; Johnson T. J; Aten J. D

    2015-01-01

    To understand the relationship between the structure of resource loss and depression after disaster exposure, the components of resource loss and the impact of these components on depression were examined among college students (N=654) at two universities who were affected by Hurricane Katrina. The components of resource loss were first analyzed by principal component analysis. Gender, social relationship loss, and financial loss were then examined with the regression model on depr...

  10. Equity in health care in Namibia: developing a needs-based resource allocation formula using principal components analysis

    Directory of Open Access Journals (Sweden)

    Mutirua Kauto

    2007-03-01

    Full Text Available Abstract Background The pace of redressing inequities in the distribution of scarce health care resources in Namibia has been slow. This is due primarily to adherence to the historical incrementalist type of budgeting that has been used to allocate resources. Those regions with high levels of deprivation and relatively greater need for health care resources have been getting less than their fair share. To rectify this situation, which was inherited from the apartheid system, there is a need to develop a needs-based resource allocation mechanism. Methods Principal components analysis was employed to compute asset indices from asset-based and health-related variables, using data from the Namibia demographic and health survey of 2000. The asset indices then formed the basis of proposals for regional weights for establishing a needs-based resource allocation formula. Results Comparing the current allocations of public sector health care resources with estimates using a needs-based formula showed that regions with higher levels of need currently receive fewer resources than do regions with lower need. Conclusion To address the prevailing inequities in resource allocation, the Ministry of Health and Social Services should abandon the historical incrementalist method of budgeting/resource allocation and adopt a more appropriate allocation mechanism that incorporates measures of need for health care.

  11. Equity in health care in Namibia: developing a needs-based resource allocation formula using principal components analysis.

    Science.gov (United States)

    Zere, Eyob; Mandlhate, Custodia; Mbeeli, Thomas; Shangula, Kalumbi; Mutirua, Kauto; Kapenambili, William

    2007-03-29

    The pace of redressing inequities in the distribution of scarce health care resources in Namibia has been slow. This is due primarily to adherence to the historical incrementalist type of budgeting that has been used to allocate resources. Those regions with high levels of deprivation and relatively greater need for health care resources have been getting less than their fair share. To rectify this situation, which was inherited from the apartheid system, there is a need to develop a needs-based resource allocation mechanism. Principal components analysis was employed to compute asset indices from asset-based and health-related variables, using data from the Namibia demographic and health survey of 2000. The asset indices then formed the basis of proposals for regional weights for establishing a needs-based resource allocation formula. Comparing the current allocations of public sector health care resources with estimates using a needs-based formula showed that regions with higher levels of need currently receive fewer resources than do regions with lower need. To address the prevailing inequities in resource allocation, the Ministry of Health and Social Services should abandon the historical incrementalist method of budgeting/resource allocation and adopt a more appropriate allocation mechanism that incorporates measures of need for health care.
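
    The core construction shared by this record and the previous one (the first principal component of asset and health-related indicators used as an index of relative need) can be sketched as follows; the household indicators, regions and weighting rule are all hypothetical, not the Namibia DHS data.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(11)
    n_households, n_regions = 2000, 13
    region = rng.integers(0, n_regions, size=n_households)
    wealth = rng.normal(size=n_households) + 0.3 * (region - n_regions / 2) / n_regions
    assets = np.column_stack([
        (wealth + rng.normal(scale=0.8, size=n_households) > 0).astype(float),   # e.g. electricity
        (wealth + rng.normal(scale=0.9, size=n_households) > 0.3).astype(float), # e.g. piped water
        (wealth + rng.normal(scale=1.0, size=n_households) > 0.6).astype(float), # e.g. vehicle
        wealth + rng.normal(scale=0.5, size=n_households),                       # e.g. rooms per person
    ])

    Z = StandardScaler().fit_transform(assets)
    index = PCA(n_components=1).fit_transform(Z).ravel()      # first PC used as an asset index
    if np.corrcoef(index, assets.sum(axis=1))[0, 1] < 0:      # fix the arbitrary sign of the PC
        index = -index
    region_score = np.array([index[region == r].mean() for r in range(n_regions)])
    weights = region_score.max() - region_score               # lower asset index -> greater need
    weights /= weights.sum()
    print("illustrative regional allocation weights:", weights.round(3))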

  12. Principal component analysis for the forensic discrimination of black inkjet inks based on the Vis-NIR fibre optics reflection spectra.

    Science.gov (United States)

    Gál, Lukáš; Oravec, Michal; Gemeiner, Pavol; Čeppan, Michal

    2015-12-01

    Nineteen black inkjet inks of six different brands were examined by fibre optics reflection spectroscopy in the visible and near-infrared region (Vis-NIR FORS) directly on paper, with a view to achieving good resolution between them. These inks were tested on nineteen different inkjet printers from three brands. Samples were obtained from prints with a reflection probe. Processed reflection spectra in the range 500-1000 nm were used as samples in principal component analysis. Variability between spectra of the same ink obtained from different prints, as well as between spectra of square areas and lines, was examined. Reference Principal Component Analysis (PCA) models were created for spectra obtained from both square areas and lines. According to these models, the inkjet inks were divided into clusters. The PCA method is able to separate inks containing carbon black as the main colorant from the other inks using other colorants. Some spectra were recorded from prints made on a different printer unit and used as validation samples. Spectra of the validation samples were projected onto the reference PCA models. According to the position of the validation samples in the score plots, it can be concluded that PCA based on Vis-NIR FORS can reliably differentiate inkjet inks which are included in the reference database. The presented method appears to be a suitable tool for the forensic examination of questioned documents containing inkjet inks. Inkjet ink spectra were obtained without extraction or cutting of the sample, with the possibility of measuring outside the laboratory. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
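
    The reference-model-plus-projection step described above is the part most easily shown in code. The sketch below builds a PCA model from synthetic "known ink" spectra and projects a validation spectrum onto it; the spectral shapes, brand labels and distance rule are assumptions, not the published casework procedure.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(7)
    wl = np.linspace(500, 1000, 251)                    # 500-1000 nm grid

    def ink_spectrum(center, depth):
        # crude synthetic reflectance curve with one absorption dip plus noise
        return 0.6 - depth * np.exp(-((wl - center) / 60.0) ** 2) + 0.01 * rng.standard_normal(wl.size)

    brands = {"carbon-black-like": (700, 0.45), "dye-based A": (620, 0.30), "dye-based B": (560, 0.25)}
    X_ref, y_ref = [], []
    for name, (c, d) in brands.items():
        for _ in range(10):
            X_ref.append(ink_spectrum(c, d)); y_ref.append(name)
    X_ref, y_ref = np.array(X_ref), np.array(y_ref)

    scaler = StandardScaler().fit(X_ref)
    pca = PCA(n_components=2).fit(scaler.transform(X_ref))    # reference PCA model
    scores_ref = pca.transform(scaler.transform(X_ref))

    x_val = ink_spectrum(620, 0.30)                           # spectrum from a "new" print
    score_val = pca.transform(scaler.transform(x_val[None, :]))
    dists = [np.linalg.norm(score_val - scores_ref[y_ref == b].mean(axis=0)) for b in brands]
    print("closest reference cluster:", list(brands)[int(np.argmin(dists))])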

  13. Detecting 3-D rotational motion and extracting target information from the principal component analysis of scatterer range histories

    CSIR Research Space (South Africa)

    Nel, W

    2009-10-01

    Full Text Available to estimate the 3-D position of scatterers as a by-product of the analysis. The technique is based on principal component analysis of accurate scatterer range histories and is shown only in simulation. Future research should focus on practical application.

  14. Artificial neural network combined with principal component analysis for resolution of complex pharmaceutical formulations.

    Science.gov (United States)

    Ioele, Giuseppina; De Luca, Michele; Dinç, Erdal; Oliverio, Filomena; Ragno, Gaetano

    2011-01-01

    A chemometric approach based on the combined use of the principal component analysis (PCA) and artificial neural network (ANN) was developed for the multicomponent determination of caffeine (CAF), mepyramine (MEP), phenylpropanolamine (PPA) and pheniramine (PNA) in their pharmaceutical preparations without any chemical separation. The predictive ability of the ANN method was compared with the classical linear regression method Partial Least Squares 2 (PLS2). The UV spectral data between 220 and 300 nm of a training set of sixteen quaternary mixtures were processed by PCA to reduce the dimensions of input data and eliminate the noise coming from instrumentation. Several spectral ranges and different numbers of principal components (PCs) were tested to find the PCA-ANN and PLS2 models reaching the best determination results. A two layer ANN, using the first four PCs, was used with log-sigmoid transfer function in first hidden layer and linear transfer function in output layer. Standard error of prediction (SEP) was adopted to assess the predictive accuracy of the models when subjected to external validation. PCA-ANN showed better prediction ability in the determination of PPA and PNA in synthetic samples with added excipients and pharmaceutical formulations. Since both components are characterized by low absorptivity, the better performance of PCA-ANN was ascribed to the ability in considering all non-linear information from noise or interfering excipients.
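
    A compact stand-in for the calibration scheme above (PCA scores of mixture spectra fed to a small neural network) is sketched below with synthetic UV spectra; scikit-learn's MLPRegressor replaces the original two-layer ANN, and the band positions, noise level and concentration ranges are assumptions.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(8)
    wl = np.linspace(220, 300, 81)
    # hypothetical pure-component absorptivity profiles (Gaussian bands)
    pure = np.stack([np.exp(-((wl - c) / 12.0) ** 2) for c in (235, 250, 265, 280)])

    conc = rng.uniform(0.1, 1.0, size=(80, 4))           # 80 quaternary mixtures
    spectra = conc @ pure + 0.005 * rng.standard_normal((80, wl.size))

    scores = PCA(n_components=4).fit_transform(spectra)  # first four PCs, as in the abstract
    X_tr, X_te, y_tr, y_te = train_test_split(scores, conc, random_state=0)

    ann = MLPRegressor(hidden_layer_sizes=(8,), activation="logistic",
                       max_iter=5000, random_state=0).fit(X_tr, y_tr)   # linear output layer
    sep = np.sqrt(((ann.predict(X_te) - y_te) ** 2).mean(axis=0))
    print("standard error of prediction per component:", sep.round(3))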

  15. [Discrimination of varieties of borneol using terahertz spectra based on principal component analysis and support vector machine].

    Science.gov (United States)

    Li, Wu; Hu, Bing; Wang, Ming-wei

    2014-12-01

    In the present paper, a terahertz time-domain spectroscopy (THz-TDS) identification model for borneol based on principal component analysis (PCA) and support vector machine (SVM) was established. As a common agent in Chinese medicine, borneol, which comes from different sources and is easily confused in pharmaceutical and trade channels, needs a rapid, simple and accurate detection and identification method. In order to assure the quality of borneol products and protect consumers' rights, identifying borneol quickly, efficiently and correctly is of significant importance to the production and trade of borneol. Terahertz time-domain spectroscopy is a new spectroscopic approach to characterizing materials using terahertz pulses. The terahertz absorption spectra of blumea camphor, borneol camphor and synthetic borneol were measured in the range of 0.2 to 2 THz with transmission THz-TDS. The PCA score 2D plots (PC1 x PC2) and 3D plots (PC1 x PC2 x PC3) of the three kinds of borneol samples were obtained through PCA, and both show a good clustering effect for the three different kinds of borneol. The value matrix of the first 10 principal components (PCs) was used to replace the original spectral data; the 60 samples of the three kinds of borneol were used for training and the 60 unknown samples were then identified. Four support vector machine models with different kernel functions were set up in this way. Results show that the identification and classification accuracy of the SVM with the RBF kernel function for the three kinds of borneol is 100%, so the SVM with the radial basis kernel function was selected to establish the borneol identification model. In addition, in the noisy case, the classification accuracy rates of the four SVM kernel functions are all above 85%, which indicates that SVM has strong generalization ability. This study shows that PCA with SVM applied to borneol terahertz spectroscopy gives good classification and identification results, and provides a new method for species

  16. Fall detection in walking robots by multi-way principal component analysis

    NARCIS (Netherlands)

    Karssen, J.G.; Wisse, M.

    2008-01-01

    Large disturbances can cause a biped to fall. If an upcoming fall can be detected, damage can be minimized or the fall can be prevented. We introduce the multi-way principal component analysis (MPCA) method for the detection of upcoming falls. We study the detection capability of the MPCA method in

  17. Demixed principal component analysis of neural population data.

    Science.gov (United States)

    Kobak, Dmitry; Brendel, Wieland; Constantinidis, Christos; Feierstein, Claudia E; Kepecs, Adam; Mainen, Zachary F; Qi, Xue-Lian; Romo, Ranulfo; Uchida, Naoshige; Machens, Christian K

    2016-04-12

    Neurons in higher cortical areas, such as the prefrontal cortex, are often tuned to a variety of sensory and motor variables, and are therefore said to display mixed selectivity. This complexity of single neuron responses can obscure what information these areas represent and how it is represented. Here we demonstrate the advantages of a new dimensionality reduction technique, demixed principal component analysis (dPCA), that decomposes population activity into a few components. In addition to systematically capturing the majority of the variance of the data, dPCA also exposes the dependence of the neural representation on task parameters such as stimuli, decisions, or rewards. To illustrate our method we reanalyze population data from four datasets comprising different species, different cortical areas and different experimental tasks. In each case, dPCA provides a concise way of visualizing the data that summarizes the task-dependent features of the population response in a single figure.

  18. QIM blind video watermarking scheme based on Wavelet transform and principal component analysis

    Directory of Open Access Journals (Sweden)

    Nisreen I. Yassin

    2014-12-01

    Full Text Available In this paper, a blind scheme for digital video watermarking is proposed. The security of the scheme is established by using one secret key in the retrieval of the watermark. The Discrete Wavelet Transform (DWT) is applied to each video frame, decomposing it into a number of sub-bands. Maximum-entropy blocks are selected and transformed using Principal Component Analysis (PCA). Quantization Index Modulation (QIM) is used to quantize the maximum coefficient of the PCA blocks of each sub-band. Then, the watermark is embedded into the selected suitable quantizer values. The proposed scheme is tested using a number of video sequences. Experimental results show high imperceptibility. The computed average PSNR exceeds 45 dB. Finally, the scheme is applied to two medical videos. The proposed scheme shows high robustness against several attacks such as JPEG coding, Gaussian noise addition, histogram equalization, gamma correction, and contrast adjustment for both regular videos and medical videos.
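
    The quantization step is the easiest part of the scheme to show in isolation. The sketch below embeds and extracts single bits with binary QIM on stand-in coefficients (the DWT, block selection and PCA stages above are omitted); the step size delta and noise level are assumptions.

    import numpy as np

    def qim_embed(coeff, bit, delta=8.0):
        # two uniform quantizers offset by delta/2 encode one watermark bit
        offset = bit * delta / 2.0
        return delta * np.round((coeff - offset) / delta) + offset

    def qim_extract(coeff, delta=8.0):
        # pick the quantizer whose lattice point is nearer to the received coefficient
        d0 = abs(coeff - qim_embed(coeff, 0, delta))
        d1 = abs(coeff - qim_embed(coeff, 1, delta))
        return 0 if d0 <= d1 else 1

    rng = np.random.default_rng(9)
    coeffs = rng.normal(scale=50, size=16)               # stand-ins for maximum PCA coefficients
    bits = rng.integers(0, 2, size=16)
    marked = np.array([qim_embed(c, b) for c, b in zip(coeffs, bits)])
    noisy = marked + rng.normal(scale=1.0, size=16)      # mild attack
    recovered = np.array([qim_extract(c) for c in noisy])
    print("bit errors:", int((recovered != bits).sum()), "of", bits.size)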

  19. Radiation effects on eye components

    Science.gov (United States)

    Durchschlag, H.; Fochler, C.; Abraham, K.; Kulawik, B.

    1999-08-01

    The most important water-soluble components of the vertebrate eye (lens proteins, aqueous humor, vitreous, hyaluronic acid, ascorbic acid) have been investigated in aqueous solution, after preceding X- or UV-irradiation. Spectroscopic, chromatographic, electrophoretic, hydrodynamic and analytic techniques have been applied, to monitor several radiation damages such as destruction of aromatic and sulfur-containing amino acids, aggregation, crosslinking, dissociation, fragmentation, and partial unfolding. Various substances were found which were able to protect eye components effectively against radiation, some of them being also of medical relevance.

  20. Development of radiation hardness components for ITER remote maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Obara, Kenjiro; Kakudate, Satoshi; Oka, Kiyoshi; Ito, Akira [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Yagi, Toshiaki; Morita, Yousuke

    1998-04-01

    In the ITER, in-vessel remote handling is required to assemble and maintain in-vessel components in DT operations. Since in-vessel remote handling systems must operate under intense gamma ray radiation exceeding 30 kGy/h, their components must have sufficiently high radiation hardness to allow maintenance long enough in ITER in-vessel environments. Thus, extensive radiation tests and quality improvement, including optimization of material compositions, have been conducted through the ITER R and D program to develop radiation hardness components that meet radiation doses from 10 to 100 MGy at 10 kGy/h. This paper presents the latest on radiation hardness component development conducted by the Japan Home Team as a contribution to the ITER. The remote handling components tested are categorized for use in robotic or viewing systems, or as common components. Radiation tests have been conducted on commercially available products for screening, on modified products, and on new products to improve the radiation hardness. (author)

  1. Development of radiation hardness components for ITER remote maintenance

    International Nuclear Information System (INIS)

    Obara, Kenjiro; Kakudate, Satoshi; Oka, Kiyoshi; Ito, Akira; Yagi, Toshiaki; Morita, Yousuke

    1998-01-01

    In the ITER, in-vessel remote handling is required to assemble and maintain in-vessel components in DT operations. Since in-vessel remote handling systems must operate under intense gamma ray radiation exceeding 30 kGy/h, their components must have sufficiently high radiation hardness to allow maintenance long enough in ITER in-vessel environments. Thus, extensive radiation tests and quality improvement, including optimization of material compositions, have been conducted through the ITER R and D program to develop radiation hardness components that meet radiation doses from 10 to 100 MGy at 10 kGy/h. This paper presents the latest on radiation hardness component development conducted by the Japan Home Team as a contribution to the ITER. The remote handling components tested are categorized for use in robotic or viewing systems, or as common components. Radiation tests have been conducted on commercially available products for screening, on modified products, and on new products to improve the radiation hardness. (author)

  2. Masked volume wise principal component analysis of small adrenocortical tumours in dynamic [11C]-metomidate positron emission tomography

    International Nuclear Information System (INIS)

    Razifar, Pasha; Hennings, Joakim; Monazzam, Azita; Hellman, Per; Långström, Bengt; Sundin, Anders

    2009-01-01

    In previous clinical Positron Emission Tomography (PET) studies novel approaches for application of Principal Component Analysis (PCA) on dynamic PET images such as Masked Volume Wise PCA (MVW-PCA) have been introduced. MVW-PCA was shown to be a feasible multivariate analysis technique, which, without modeling assumptions, could extract and separate organs and tissues with different kinetic behaviors into different principal components (MVW-PCs) and improve the image quality. In this study, MVW-PCA was applied to 14 dynamic 11C-metomidate-PET (MTO-PET) examinations of 7 patients with small adrenocortical tumours. MTO-PET was performed before and 3 days after starting per oral cortisone treatment. The whole dataset, reconstructed by filtered back projection (FBP) 0–45 minutes after the tracer injection, was used to study the tracer pharmacokinetics. Early, intermediate and late pharmacokinetic phases could be isolated in this manner. The MVW-PC1 images correlated well to the conventionally summed image data (15–45 minutes) but the image noise in the former was considerably lower. PET measurements performed by defining 'hot spot' regions of interest (ROIs) comprising 4 contiguous pixels with the highest radioactivity concentration showed a trend towards higher SUVs when the ROIs were outlined in the MVW-PC1 component than in the summed images. Time activity curves derived from '50% cut-off' ROIs based on an isocontour function whereby the pixels with SUVs between 50 to 100% of the highest radioactivity concentration were delineated, showed a significant decrease of the SUVs in normal adrenal glands and in adrenocortical adenomas after cortisone treatment. In addition to the clear decrease in image noise and the improved contrast between different structures with MVW-PCA, the results indicate that the definition of ROIs may be more accurate and precise in MVW-PC1 images than in conventional summed images. This might improve the precision of PET

  3. Principal Component Analysis: Resources for an Essential Application of Linear Algebra

    Science.gov (United States)

    Pankavich, Stephen; Swanson, Rebecca

    2015-01-01

    Principal Component Analysis (PCA) is a highly useful topic within an introductory Linear Algebra course, especially since it can be used to incorporate a number of applied projects. This method represents an essential application and extension of the Spectral Theorem and is commonly used within a variety of fields, including statistics,…

  4. Application of principal component analysis (PCA) as a sensory assessment tool for fermented food products.

    Science.gov (United States)

    Ghosh, Debasree; Chattopadhyay, Parimal

    2012-06-01

    The objective of the work was to use the method of quantitative descriptive analysis (QDA) to describe the sensory attributes of fermented food products prepared with the incorporation of lactic cultures. Panellists were selected and trained to evaluate various attributes, especially color and appearance, body texture, flavor, overall acceptability and acidity, of fermented food products such as cow milk curd and soymilk curd, idli, sauerkraut and probiotic ice cream. Principal component analysis (PCA) identified six significant principal components that accounted for more than 90% of the variance in the sensory attribute data. Overall product quality was modelled as a function of the principal components using multiple least squares regression (R² = 0.8). The results from PCA were statistically analyzed by analysis of variance (ANOVA). These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring the fermented food product attributes that are important for consumer acceptability.
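
    A minimal sketch of the analysis pattern described above, not the authors' code: PCA of sensory attribute scores followed by least-squares regression of overall quality on the retained principal component scores. The panel size, number of attributes and data are invented for illustration.

```python
# Sketch: PCA of sensory attributes + regression of overall quality on PC scores
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))        # 60 samples x 10 sensory attributes (hypothetical)
y = X[:, :3] @ np.array([0.6, 0.3, 0.1]) + 0.1 * rng.normal(size=60)  # "overall quality"

# Centre the attributes and extract principal components via SVD
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)
k = np.searchsorted(np.cumsum(explained), 0.90) + 1   # keep enough PCs for ~90% variance
scores = Xc @ Vt[:k].T                                # PC scores for each sample

# Model overall quality as a linear function of the PC scores
design = np.column_stack([np.ones(len(y)), scores])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
pred = design @ coef
r2 = 1 - np.sum((y - pred)**2) / np.sum((y - y.mean())**2)
print(f"retained PCs: {k}, R^2 = {r2:.2f}")
```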

  5. Principal Component Analysis-Based Pattern Analysis of Dose-Volume Histograms and Influence on Rectal Toxicity

    International Nuclear Information System (INIS)

    Soehn, Matthias; Alber, Markus; Yan Di

    2007-01-01

    Purpose: The variability of dose-volume histogram (DVH) shapes in a patient population can be quantified using principal component analysis (PCA). We applied this to rectal DVHs of prostate cancer patients and investigated the correlation of the PCA parameters with late bleeding. Methods and Materials: PCA was applied to the rectal wall DVHs of 262 patients, who had been treated with a four-field box, conformal adaptive radiotherapy technique. The correlated changes in the DVH pattern were revealed as 'eigenmodes,' which were ordered by their importance to represent data set variability. Each DVH is uniquely characterized by its principal components (PCs). The correlation of the first three PCs and chronic rectal bleeding of Grade 2 or greater was investigated with uni- and multivariate logistic regression analyses. Results: Rectal wall DVHs in four-field conformal RT can primarily be represented by the first two or three PCs, which describe ∼94% or 96% of the DVH shape variability, respectively. The first eigenmode models the total irradiated rectal volume; thus, PC1 correlates to the mean dose. Mode 2 describes the interpatient differences of the relative rectal volume in the two- or four-field overlap region. Mode 3 reveals correlations of volumes with intermediate doses (∼40-45 Gy) and volumes with doses >70 Gy; thus, PC3 is associated with the maximal dose. According to univariate logistic regression analysis, only PC2 correlated significantly with toxicity. However, multivariate logistic regression analysis with the first two or three PCs revealed an increased probability of bleeding for DVHs with more than one large PC. Conclusions: PCA can reveal the correlation structure of DVHs for a patient population as imposed by the treatment technique and provide information about its relationship to toxicity. It proves useful for augmenting normal tissue complication probability modeling approaches
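
    An illustrative sketch only, on synthetic DVH curves rather than the study data: PCA of cumulative dose-volume histograms and a univariate logistic regression of a toxicity label on each of the first principal components, mirroring the workflow described above.

```python
# Sketch: PCA of DVH shapes and logistic regression of toxicity on PC scores
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
dose_bins = np.linspace(0, 80, 81)                  # dose axis in Gy
# synthetic cumulative DVHs: fraction of rectal-wall volume receiving >= dose
dvh = np.array([1.0 / (1.0 + np.exp((dose_bins - d50) / w))
                for d50, w in zip(rng.uniform(35, 55, 200), rng.uniform(4, 10, 200))])
toxicity = rng.binomial(1, 0.2, size=200)           # hypothetical late-bleeding labels

pca = PCA(n_components=3).fit(dvh)
pcs = pca.transform(dvh)                            # each DVH -> (PC1, PC2, PC3)
print("variance explained:", pca.explained_variance_ratio_.round(3))

for i in range(3):                                  # univariate logistic regressions
    model = LogisticRegression().fit(pcs[:, [i]], toxicity)
    print(f"PC{i+1} coefficient: {model.coef_[0][0]:.3f}")
```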

  6. Quantitative profiling of polar metabolites in herbal medicine injections for multivariate statistical evaluation based on independence principal component analysis.

    Directory of Open Access Journals (Sweden)

    Miaomiao Jiang

    Full Text Available Botanical primary metabolites are widely present in herbal medicine injections (HMIs), but they have often been ignored in quality control. Owing to the limited suitability of the routinely applied reversed-phase chromatographic fingerprint technology for strongly hydrophilic substances, primary metabolites with strong polarity, such as saccharides, amino acids and organic acids, are usually difficult to detect. In this study, a proton nuclear magnetic resonance (¹H NMR) profiling method was developed for efficient identification and quantification of small polar molecules, mostly primary metabolites, in HMIs. A commonly used medicine, Danhong injection (DHI), was employed as a model. With the developed method, 23 primary metabolites together with 7 polyphenolic acids were simultaneously identified, of which 13 metabolites with fully separated proton signals were quantified and employed for further multivariate quality control assay. The quantitative ¹H NMR method was validated with good linearity, precision, repeatability, stability and accuracy. Based on independence principal component analysis (IPCA), the contents of the 13 metabolites were characterized and dimensionally reduced into the first two independence principal components (IPCs). IPC1 and IPC2 were then used to calculate the upper control limits (with 99% confidence ellipsoids) of χ² and Hotelling T² control charts. Through the constructed upper control limits, the proposed method was successfully applied to 36 batches of DHI to examine out-of-control samples with perturbed levels of succinate, malonate, glucose, fructose, salvianic acid and protocatechuic aldehyde. The integrated strategy has provided a reliable approach to identify and quantify multiple polar metabolites of DHI in one fingerprinting spectrum, and it has also assisted in the establishment of IPCA models for the multivariate statistical evaluation of HMIs.

  7. Principal Component Surface (2011) for St. Thomas East End Reserve, St. Thomas

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This image represents a 0.3x0.3 meter principal component analysis (PCA) surface for areas of the St. Thomas East End Reserve (STEER) in the U.S. Virgin Islands (USVI)....

  8. A stock market forecasting model combining two-directional two-dimensional principal component analysis and radial basis function neural network.

    Science.gov (United States)

    Guo, Zhiqiang; Wang, Huaiqing; Yang, Jie; Miller, David J

    2015-01-01

    In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a Radial Basis Function Neural Network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is used on the Shanghai stock market index, and the experiments show that the model achieves a good level of fit. The proposed model is then compared with models that use the traditional dimension reduction methods of principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as alternative models based on ICA and on the multilayer perceptron.

  9. APPLICATION OF PRINCIPAL COMPONENT ANALYSIS TO RELAXOGRAPHIC IMAGES

    International Nuclear Information System (INIS)

    STOYANOVA, R.S.; OCHS, M.F.; BROWN, T.R.; ROONEY, W.D.; LI, X.; LEE, J.H.; SPRINGER, C.S.

    1999-01-01

    Standard analysis methods for processing inversion recovery MR images traditionally have used single pixel techniques. In these techniques each pixel is independently fit to an exponential recovery, and spatial correlations in the data set are ignored. By analyzing the image as a complete dataset, improved error analysis and automatic segmentation can be achieved. Here, the authors apply principal component analysis (PCA) to a series of relaxographic images. This procedure decomposes the 3-dimensional data set into three separate images and corresponding recovery times. They attribute the 3 images to be spatial representations of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) content

  10. PRINCIPAL COMPONENT ANALYSIS (PCA) AND ITS APPLICATION WITH SPSS

    Directory of Open Access Journals (Sweden)

    Hermita Bus Umar

    2009-03-01

    Full Text Available PCA (Principal Component Analysis) comprises statistical techniques applied to a single set of variables when the researcher is interested in discovering which variables in the set form coherent subsets that are relatively independent of one another. Variables that are correlated with one another but largely independent of other subsets of variables are combined into factors. The goal of PCA is to determine the extent to which each variable is explained by each dimension. Steps in PCA include selecting and measuring a set of variables, preparing the correlation matrix, extracting a set of factors from the correlation matrix, rotating the factors to increase interpretability, and interpreting the results.

  11. Principal component analysis of FDG PET in amnestic MCI

    International Nuclear Information System (INIS)

    Nobili, Flavio; Girtler, Nicola; Brugnolo, Andrea; Dessi, Barbara; Rodriguez, Guido; Salmaso, Dario; Morbelli, Silvia; Piccardo, Arnoldo; Larsson, Stig A.; Pagani, Marco

    2008-01-01

    The purpose of the study is to evaluate the combined accuracy of episodic memory performance and ¹⁸F-FDG PET in identifying patients with amnestic mild cognitive impairment (aMCI) converting to Alzheimer's disease (AD), aMCI non-converters, and controls. Thirty-three patients with aMCI and 15 controls (CTR) were followed up for a mean of 21 months. Eleven patients developed AD (MCI/AD) and 22 remained with aMCI (MCI/MCI). ¹⁸F-FDG PET volumetric regions of interest underwent principal component analysis (PCA) that identified 12 principal components (PC), expressed by coarse component scores (CCS). Discriminant analysis was performed using the significant PCs and episodic memory scores. PCA highlighted relative hypometabolism in PC5, including bilateral posterior cingulate and left temporal pole, and in PC7, including the bilateral orbitofrontal cortex, both in MCI/MCI and MCI/AD vs CTR. PC5 itself plus PC12, including the left lateral frontal cortex (LFC: BAs 44, 45, 46, 47), were significantly different between MCI/AD and MCI/MCI. By a three-group discriminant analysis, CTR were more accurately identified by PET-CCS + delayed recall score (100%), MCI/MCI by PET-CCS + either immediate or delayed recall scores (91%), while MCI/AD was identified by PET-CCS alone (82%). PET increased by 25% the correct allocations achieved by memory scores, while memory scores increased by 15% the correct allocations achieved by PET. Combining memory performance and ¹⁸F-FDG PET yielded a higher accuracy than each single tool in identifying CTR and MCI/MCI. The PC containing bilateral posterior cingulate and left temporal pole was the hallmark of MCI/MCI patients, while the PC including the left LFC was the hallmark of conversion to AD. (orig.)

  12. Principal component analysis of FDG PET in amnestic MCI

    Energy Technology Data Exchange (ETDEWEB)

    Nobili, Flavio; Girtler, Nicola; Brugnolo, Andrea; Dessi, Barbara; Rodriguez, Guido [University of Genoa, Clinical Neurophysiology, Department of Endocrinological and Medical Sciences, Genoa (Italy); S. Martino Hospital, Alzheimer Evaluation Unit, Genoa (Italy); S. Martino Hospital, Head-Neck Department, Genoa (Italy); Salmaso, Dario [CNR, Institute of Cognitive Sciences and Technologies, Rome (Italy); CNR, Institute of Cognitive Sciences and Technologies, Padua (Italy); Morbelli, Silvia [University of Genoa, Nuclear Medicine Unit, Department of Internal Medicine, Genoa (Italy); Piccardo, Arnoldo [Galliera Hospital, Nuclear Medicine Unit, Department of Imaging Diagnostics, Genoa (Italy); Larsson, Stig A. [Karolinska Hospital, Department of Nuclear Medicine, Stockholm (Sweden); Pagani, Marco [CNR, Institute of Cognitive Sciences and Technologies, Rome (Italy); CNR, Institute of Cognitive Sciences and Technologies, Padua (Italy); Karolinska Hospital, Department of Nuclear Medicine, Stockholm (Sweden)

    2008-12-15

    The purpose of the study is to evaluate the combined accuracy of episodic memory performance and ¹⁸F-FDG PET in identifying patients with amnestic mild cognitive impairment (aMCI) converting to Alzheimer's disease (AD), aMCI non-converters, and controls. Thirty-three patients with aMCI and 15 controls (CTR) were followed up for a mean of 21 months. Eleven patients developed AD (MCI/AD) and 22 remained with aMCI (MCI/MCI). ¹⁸F-FDG PET volumetric regions of interest underwent principal component analysis (PCA) that identified 12 principal components (PC), expressed by coarse component scores (CCS). Discriminant analysis was performed using the significant PCs and episodic memory scores. PCA highlighted relative hypometabolism in PC5, including bilateral posterior cingulate and left temporal pole, and in PC7, including the bilateral orbitofrontal cortex, both in MCI/MCI and MCI/AD vs CTR. PC5 itself plus PC12, including the left lateral frontal cortex (LFC: BAs 44, 45, 46, 47), were significantly different between MCI/AD and MCI/MCI. By a three-group discriminant analysis, CTR were more accurately identified by PET-CCS + delayed recall score (100%), MCI/MCI by PET-CCS + either immediate or delayed recall scores (91%), while MCI/AD was identified by PET-CCS alone (82%). PET increased by 25% the correct allocations achieved by memory scores, while memory scores increased by 15% the correct allocations achieved by PET. Combining memory performance and ¹⁸F-FDG PET yielded a higher accuracy than each single tool in identifying CTR and MCI/MCI. The PC containing bilateral posterior cingulate and left temporal pole was the hallmark of MCI/MCI patients, while the PC including the left LFC was the hallmark of conversion to AD. (orig.)

  13. The Use of Principal Components in Long-Range Forecasting

    Science.gov (United States)

    Chern, Jonq-Gong

    Large-scale modes of the global sea surface temperatures and the Northern Hemisphere tropospheric circulation are described by principal component analysis. The first and second SST components describe the El Nino episodes well, and the El Nino index (ENI) suggested in this study is consistent with the winter Southern Oscillation index (SOI), where this ENI is a composite of the weighted first and second SST components. The large-scale interactive modes of the coupled ocean-atmosphere system are identified by cross-correlation analysis. The results show that the first SST component is strongly correlated with the first component of geopotential height at a lead time of 6 months. In the El Nino-Southern Oscillation (ENSO) evolution, the El Nino mode strongly influences the winter tropospheric circulation in the mid-latitudes for up to three leading seasons. The regional long-range variation of climate is investigated with these major components of the SST and the tropospheric circulation. In the mid-latitudes, the climate of the central United States shows a weak linkage with these large-scale circulations, while the climate of the western United States appears to be consistently associated with the ENSO modes. These El Nino modes also show a dominant influence on Eastern Asia, as evidenced in Taiwan Mei-Yu patterns. Possible regional long-range forecasting schemes, utilizing the complementary characteristics of the winter El Nino mode and SST anomalies, are examined with the Taiwan Mei-Yu.

  14. Signal-to-noise contribution of principal component loads in reconstructed near-infrared Raman tissue spectra.

    Science.gov (United States)

    Grimbergen, M C M; van Swol, C F P; Kendall, C; Verdaasdonk, R M; Stone, N; Bosch, J L H R

    2010-01-01

    The overall quality of Raman spectra in the near-infrared region, where biological samples are often studied, has benefited from various improvements to optical instrumentation over the past decade. However, obtaining ample spectral quality for analysis is still challenging due to device requirements and short integration times required for (in vivo) clinical applications of Raman spectroscopy. Multivariate analytical methods, such as principal component analysis (PCA) and linear discriminant analysis (LDA), are routinely applied to Raman spectral datasets to develop classification models. Data compression is necessary prior to discriminant analysis to prevent or decrease the degree of over-fitting. The logical threshold for the selection of principal components (PCs) to be used in discriminant analysis is likely to be at a point before the PCs begin to introduce equivalent signal and noise and, hence, include no additional value. Assessment of the signal-to-noise ratio (SNR) at a certain peak or over a specific spectral region will depend on the sample measured. Therefore, the mean SNR over the whole spectral region (SNR(msr)) is determined in the original spectrum as well as for spectra reconstructed from an increasing number of principal components. This paper introduces a method of assessing the influence of signal and noise from individual PC loads and indicates a method of selection of PCs for LDA. To evaluate this method, two data sets with different SNRs were used. The sets were obtained with the same Raman system and the same measurement parameters on bladder tissue collected during white light cystoscopy (set A) and fluorescence-guided cystoscopy (set B). This method shows that the mean SNR over the spectral range in the original Raman spectra of these two data sets is related to the signal and noise contribution of principal component loads. The difference in mean SNR over the spectral range can also be appreciated since fewer principal components can
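
    A sketch of the general idea described above, on synthetic spectra and with a deliberately simple SNR estimate (not the paper's definition): reconstruct each spectrum from an increasing number of principal components and track the mean signal-to-noise ratio over the whole spectral range.

```python
# Sketch: mean SNR of spectra reconstructed from an increasing number of PCs
import numpy as np

rng = np.random.default_rng(2)
wn = np.linspace(400, 1800, 700)                       # wavenumber axis (cm^-1)
clean = np.exp(-0.5 * ((wn[None, :] - rng.uniform(800, 1400, (50, 1))) / 30) ** 2)
spectra = clean + 0.05 * rng.normal(size=clean.shape)  # 50 noisy Raman-like spectra

mean_spec = spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(spectra - mean_spec, full_matrices=False)

for k in (1, 3, 5, 10, 20):
    recon = mean_spec + U[:, :k] @ np.diag(s[:k]) @ Vt[:k]   # reconstruction from k PCs
    residual = spectra - recon
    snr_msr = np.mean(np.abs(recon)) / np.std(residual)      # crude mean-SNR proxy
    print(f"{k:2d} PCs -> mean SNR over spectral range ~ {snr_msr:.1f}")
```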

  15. Automatic scatter detection in fluorescence landscapes by means of spherical principal component analysis

    DEFF Research Database (Denmark)

    Kotwa, Ewelina Katarzyna; Jørgensen, Bo Munk; Brockhoff, Per B.

    2013-01-01

    In this paper, we introduce a new method, based on spherical principal component analysis (S‐PCA), for the identification of Rayleigh and Raman scatters in fluorescence excitation–emission data. These scatters should be found and eliminated as a prestep before fitting parallel factor analysis...... models to the data, in order to avoid model degeneracies. The work is inspired and based on a previous research, where scatter removal was automatic (based on a robust version of PCA called ROBPCA) and required no visual data inspection but appeared to be computationally intensive. To overcome...... this drawback, we implement the fast S‐PCA in the scatter identification routine. Moreover, an additional pattern interpolation step that complements the method, based on robust regression, will be applied. In this way, substantial time savings are gained, and the user's engagement is restricted to a minimum...

  16. APPLYING PRINCIPAL COMPONENT ANALYSIS, MULTILAYER PERCEPTRON AND SELF-ORGANIZING MAPS FOR OPTICAL CHARACTER RECOGNITION

    Directory of Open Access Journals (Sweden)

    Khuat Thanh Tung

    2016-11-01

    Full Text Available Optical Character Recognition plays an important role in data storage and data mining as the number of documents stored as images increases. Effective ways of converting images of typewritten or printed text into machine-encoded text are needed to support information handling. In this paper, therefore, techniques for converting images into editable text, namely principal component analysis, multilayer perceptron networks, self-organizing maps, and an improved multilayer neural network using principal component analysis, are evaluated experimentally. The obtained results indicate the effectiveness and feasibility of the proposed methods.

  17. Development of radiation hard components for ITER blanket remote handling system

    Energy Technology Data Exchange (ETDEWEB)

    Saito, Makiko, E-mail: saito.makiko@jaea.go.jp; Anzai, Katsunori; Maruyama, Takahito; Noguchi, Yuto; Ueno, Kenichi; Takeda, Nobukazu; Kakudate, Satoshi

    2016-11-01

    Highlights: • Identify the components that degrade under gamma-ray irradiation. • Perform irradiation tests on BRHS components. • Optimize materials to increase radiation hardness. - Abstract: The ITER blanket remote handling system (BRHS) will be operated in a high radiation environment (250 Gy/h max.) and must stably handle the blanket modules, which weigh 4.5 t and are more than 1.5 m in length, with a high degree of position and posture accuracy. The reliability of the system can be improved by reviewing the failure events of the system caused by high radiation. A failure mode and effects analysis (FMEA) identified failure modes and determined that lubricants, O-rings, and electric insulation cables were the dominant components affecting radiation hardness. Accordingly, we tried to optimize the lubricants and cables of the AC servo motors by using polyphenyl ether (PPE)-based grease and polyether ether ketone (PEEK), respectively. Materials containing radiation protective agents were also selected for the cable sheaths and O-rings to improve radiation hardness. Gamma-ray irradiation tests were performed on these components and, as a result, a radiation hardness of 8 MGy was achieved for the AC servo motors. To further improve radiation hardness and compatibility with the BRHS, improved cable and O-ring materials were also developed.

  18. Application of Principal Component Analysis in Prompt Gamma Spectra for Material Sorting

    Energy Technology Data Exchange (ETDEWEB)

    Im, Hee Jung; Lee, Yun Hee; Song, Byoung Chul; Park, Yong Joon; Kim, Won Ho

    2006-11-15

    For the detection of illicit materials in a very short time by comparing unknown samples' gamma spectra to pre-programmed material signatures, we first selected a method to reduce the noise in the acquired gamma spectra. After noise reduction, a pattern recognition technique was applied to discriminate illicit materials from innocuous materials in the noise-reduced data. Principal component analysis was applied for both noise reduction and pattern recognition in prompt gamma spectra. A computer program for the detection of illicit materials based on the PCA method was developed in our laboratory and can be applied to a PGNAA system for baggage checking at ports of entry in a very short time.
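
    A hedged sketch of the noise-reduction step, using synthetic spectra with invented peak positions: gamma spectra are projected onto the leading principal components and reconstructed, which suppresses channel-by-channel counting noise before any pattern-recognition step.

```python
# Sketch: PCA denoising of gamma spectra by low-rank reconstruction
import numpy as np

rng = np.random.default_rng(3)
channels = np.arange(1024)

def peak(center, height):                    # Gaussian photopeak shape
    return height * np.exp(-0.5 * ((channels - center) / 4.0) ** 2)

library = [peak(200, 50) + peak(610, 30),    # hypothetical material signatures
           peak(350, 40) + peak(800, 20)]
spectra = np.array([library[i % 2] + rng.poisson(5, size=1024) for i in range(100)])

mean_spec = spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(spectra - mean_spec, full_matrices=False)
k = 2                                        # keep the components carrying the peaks
denoised = mean_spec + (spectra - mean_spec) @ Vt[:k].T @ Vt[:k]
print("residual RMS after denoising:", np.sqrt(np.mean((denoised - spectra) ** 2)))
```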

  19. Sparse supervised principal component analysis (SSPCA) for dimension reduction and variable selection

    DEFF Research Database (Denmark)

    Sharifzadeh, Sara; Ghodsi, Ali; Clemmensen, Line H.

    2017-01-01

    Principal component analysis (PCA) is one of the main unsupervised pre-processing methods for dimension reduction. When the training labels are available, it is worth using a supervised PCA strategy. In cases that both dimension reduction and variable selection are required, sparse PCA (SPCA...

  20. Principal modes of rupture encountered in expertise of advanced components

    International Nuclear Information System (INIS)

    Tavassoli, A.A.; Bougault, A.

    1986-10-01

    Failures of many of the metallic components investigated can be classified into two categories, intergranular or transgranular, according to their principal mode of rupture. Intergranular ruptures are often provoked by segregation of impurities at the grain boundaries. Three examples are cited where this phenomenon occurred; one of them concerns a steel (A 508 cl 3) used for PWR vessels. Transgranular failures are in general induced by fatigue in advanced components operating under thermal or load transients. One example concerning a sodium mixer which was subjected to thermal loadings is presented. Examples of stress corrosion and intergranular sensitization failures are cited. These examples show the importance of fractography for the determination of rupture causes [fr]

  1. Principal Component Analysis of Process Datasets with Missing Values

    Directory of Open Access Journals (Sweden)

    Kristen A. Severson

    2017-07-01

    Full Text Available Datasets with missing values arising from causes such as sensor failure, inconsistent sampling rates, and merging data from different systems are common in the process industry. Methods for handling missing data typically operate during data pre-processing, but missing values can also be handled during model building. This article considers missing data within the context of principal component analysis (PCA), which is a method originally developed for complete data that has widespread industrial application in multivariate statistical process control. Due to the prevalence of missing data and the success of PCA for handling complete data, several PCA algorithms that can act on incomplete data have been proposed. Here, algorithms for applying PCA to datasets with missing values are reviewed. A case study is presented to demonstrate the performance of the algorithms, and suggestions are made with respect to choosing which algorithm is most appropriate for particular settings. An alternating algorithm based on the singular value decomposition achieved the best results in the majority of test cases involving process datasets.
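
    A minimal sketch of one alternating SVD-based approach of the kind mentioned above (not the article's exact algorithm): missing entries are repeatedly filled from a low-rank PCA reconstruction until the imputed values stop changing. The rank, tolerance and synthetic data are arbitrary choices.

```python
# Sketch: alternating SVD imputation for PCA on data with missing values
import numpy as np

def pca_impute(X, rank=2, tol=1e-6, max_iter=500):
    missing = np.isnan(X)
    filled = np.where(missing, np.nanmean(X, axis=0), X)   # start from column means
    for _ in range(max_iter):
        mu = filled.mean(axis=0)
        U, s, Vt = np.linalg.svd(filled - mu, full_matrices=False)
        recon = mu + U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]
        new = np.where(missing, recon, X)                   # keep observed values fixed
        if np.max(np.abs(new - filled)) < tol:
            break
        filled = new
    return filled

rng = np.random.default_rng(4)
true = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 8))  # rank-2 "process" data
X = true + 0.01 * rng.normal(size=true.shape)
X[rng.random(X.shape) < 0.1] = np.nan                       # 10% missing values
print("imputation RMSE:", np.sqrt(np.nanmean((pca_impute(X) - true) ** 2)))
```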

  2. The application of principal component analysis to quantify technique in sports.

    Science.gov (United States)

    Federolf, P; Reid, R; Gilgien, M; Haugen, P; Smith, G

    2014-06-01

    Analyzing an athlete's "technique," sport scientists often focus on preselected variables that quantify important aspects of movement. In contrast, coaches and practitioners typically describe movements in terms of basic postures and movement components using subjective and qualitative features. A challenge for sport scientists is finding an appropriate quantitative methodology that incorporates the holistic perspective of human observers. Using alpine ski racing as an example, this study explores principal component analysis (PCA) as a mathematical method to decompose a complex movement pattern into its main movement components. Ski racing movements were recorded by determining the three-dimensional coordinates of 26 points on each skier which were subsequently interpreted as a 78-dimensional posture vector at each time point. PCA was then used to determine the mean posture and principal movements (PMk ) carried out by the athletes. The first four PMk contained 95.5 ± 0.5% of the variance in the posture vectors which quantified changes in body inclination, vertical or fore-aft movement of the trunk, and distance between skis. In summary, calculating PMk offered a data-driven, quantitative, and objective method of analyzing human movement that is similar to how human observers such as coaches or ski instructors would describe the movement. © 2012 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
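
    A rough sketch under assumed data (random marker trajectories rather than ski-racing recordings): 3-D marker coordinates are stacked into posture vectors, PCA is applied, and the leading eigenvectors play the role of the "principal movements" described above.

```python
# Sketch: principal movements from posture vectors
import numpy as np

rng = np.random.default_rng(5)
n_frames, n_markers = 1000, 26
# posture vector per frame: 26 markers x 3 coordinates = 78 dimensions
postures = rng.normal(size=(n_frames, n_markers * 3)).cumsum(axis=0) * 0.001

mean_posture = postures.mean(axis=0)
U, s, Vt = np.linalg.svd(postures - mean_posture, full_matrices=False)
var_ratio = s**2 / np.sum(s**2)
print("variance captured by first 4 principal movements:",
      round(var_ratio[:4].sum() * 100, 1), "%")

# time course ("score") of the first principal movement over the whole trial
pm1_score = (postures - mean_posture) @ Vt[0]
```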

  3. Application of principal component regression and partial least squares regression in ultraviolet spectrum water quality detection

    Science.gov (United States)

    Li, Jiangtong; Luo, Yongdao; Dai, Honglin

    2018-01-01

    Water is the source and essential foundation of all life. With industrialization, water pollution has become more and more frequent, directly affecting human survival and development. Water quality detection is one of the necessary measures to protect water resources. Ultraviolet (UV) spectral analysis is an important method in the field of water quality detection, in which partial least squares regression (PLSR) has become the predominant technique; however, in some special cases PLSR produces considerable errors. To address this problem, the traditional principal component regression (PCR) method is improved in this paper by using the principle of PLSR. The experimental results show that, for some data sets, the improved PCR performs better than PLSR. PCR and PLSR are the focus of this paper. First, principal component analysis (PCA) is performed in MATLAB to reduce the dimensionality of the spectral data; on the basis of a large number of experiments, optimized principal components that carry most of the original data information are extracted using the principle of PLSR. Second, linear regression analysis on the principal components is carried out with the Statistical Package for the Social Sciences (SPSS), from which the coefficients and relations of the principal components are obtained. Finally, the same water spectral data set is analysed by PLSR and by the improved PCR: the two methods give similar results for most data, but the improved PCR is better than PLSR for data near the detection limit. Both PLSR and the improved PCR can be used in UV spectral analysis of water, but for data near the detection limit the improved PCR gives better results than PLSR.
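
    A hedged sketch of plain principal component regression (the baseline PCR step, not the paper's improved variant): the wavelength grid, analyte and noise level are invented for illustration.

```python
# Sketch: principal component regression (PCR) on UV absorbance spectra
import numpy as np

rng = np.random.default_rng(6)
wavelengths = np.linspace(200, 400, 120)            # nm, UV range
conc = rng.uniform(0, 10, size=80)                  # analyte concentration
pure = np.exp(-0.5 * ((wavelengths - 270) / 25) ** 2)
A = conc[:, None] * pure + 0.02 * rng.normal(size=(80, 120))   # absorbance spectra

# 1) PCA of the (centred) spectra
A_mean = A.mean(axis=0)
U, s, Vt = np.linalg.svd(A - A_mean, full_matrices=False)
k = 3
T = (A - A_mean) @ Vt[:k].T                         # scores on the first k PCs

# 2) ordinary least squares of concentration on the PC scores
design = np.column_stack([np.ones(len(conc)), T])
beta, *_ = np.linalg.lstsq(design, conc, rcond=None)
pred = design @ beta
print("calibration RMSE:", np.sqrt(np.mean((pred - conc) ** 2)))
```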

  4. Principal component analysis and the locus of the Fréchet mean in the space of phylogenetic trees.

    Science.gov (United States)

    Nye, Tom M W; Tang, Xiaoxian; Weyenberg, Grady; Yoshida, Ruriko

    2017-12-01

    Evolutionary relationships are represented by phylogenetic trees, and a phylogenetic analysis of gene sequences typically produces a collection of these trees, one for each gene in the analysis. Analysis of samples of trees is difficult due to the multi-dimensionality of the space of possible trees. In Euclidean spaces, principal component analysis is a popular method of reducing high-dimensional data to a low-dimensional representation that preserves much of the sample's structure. However, the space of all phylogenetic trees on a fixed set of species does not form a Euclidean vector space, and methods adapted to tree space are needed. Previous work introduced the notion of a principal geodesic in this space, analogous to the first principal component. Here we propose a geometric object for tree space similar to the kth principal component in Euclidean space: the locus of the weighted Fréchet mean of k+1 vertex trees when the weights vary over the k-simplex. We establish some basic properties of these objects, in particular showing that they have dimension k, and propose algorithms for projection onto these surfaces and for finding the principal locus associated with a sample of trees. Simulation studies demonstrate that these algorithms perform well, and analyses of two datasets, containing Apicomplexa and African coelacanth genomes respectively, reveal important structure from the second principal components.

  5. Tritium in fusion reactor components

    International Nuclear Information System (INIS)

    Watson, J.S.; Fisher, P.W.; Talbot, J.B.

    1980-01-01

    When tritium is used in a fusion energy experiment or reactor, several implications affect and usually restrict the design and operation of the system and involve questions of containment, inventory, and radiation damage. Containment is expected to be particularly important both for high-temperature components and for those components that are prone to require frequent maintenance. Inventory is currently of major significance in cases where safety and environmental considerations limit the experiments to very low levels of tritium. Fewer inventory restrictions are expected as fusion experiments are placed in more-remote locations and as the fusion community gains experience with the use of tritium. However, the advent of power-producing experiments with high-duty cycle will again lead to serious difficulties based principally on tritium availability; cyclic operations with significant regeneration times are the principal problems

  6. Real-time detection of organic contamination events in water distribution systems by principal components analysis of ultraviolet spectral data.

    Science.gov (United States)

    Zhang, Jian; Hou, Dibo; Wang, Ke; Huang, Pingjie; Zhang, Guangxin; Loáiciga, Hugo

    2017-05-01

    The detection of organic contaminants in water distribution systems is essential to protect public health from potentially harmful compounds resulting from accidental spills or intentional releases. Existing methods for detecting organic contaminants are based on quantitative analyses such as chemical testing and gas/liquid chromatography, which are time- and reagent-consuming and involve costly maintenance. This study proposes a novel procedure based on the discrete wavelet transform and principal component analysis for detecting organic contamination events from ultraviolet spectral data. Firstly, the spectrum of each observation is transformed using a discrete wavelet transform with a coiflet mother wavelet to capture abrupt changes along the wavelength axis. Principal component analysis is then employed to approximate the spectra based on capture and fusion features. The significance of Hotelling's T² statistic is calculated and used to detect outliers. An alarm of a contamination event is triggered by sequential Bayesian analysis when outliers appear continuously in several observations. The effectiveness of the proposed procedure is tested on-line using a pilot-scale setup and experimental data.
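
    A loose sketch of the detection chain described above (wavelet features, PCA, Hotelling's T² threshold), assuming PyWavelets and SciPy are available. The data are synthetic UV spectra, and the alarm logic here is a plain threshold rather than the sequential Bayesian step used in the paper.

```python
# Sketch: DWT features -> PCA -> Hotelling's T^2 outlier test for UV spectra
import numpy as np
import pywt                      # PyWavelets, assumed available
from scipy.stats import f as f_dist

rng = np.random.default_rng(7)
wl = np.linspace(200, 360, 256)
baseline = np.exp(-(wl - 220) / 60.0)
normal = baseline + 0.01 * rng.normal(size=(300, 256))          # clean water spectra
event = (baseline + 0.08 * np.exp(-0.5 * ((wl - 270) / 8) ** 2)
         + 0.01 * rng.normal(size=256))                         # contaminated sample

def dwt_features(spec):
    return np.concatenate(pywt.wavedec(spec, "coif1", level=4))

F = np.array([dwt_features(s) for s in normal])
mu, sd = F.mean(axis=0), F.std(axis=0) + 1e-12
Z = (F - mu) / sd
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
k = 5
scores = Z @ Vt[:k].T
lam = scores.var(axis=0, ddof=1)

def t2(spec):
    t = ((dwt_features(spec) - mu) / sd) @ Vt[:k].T
    return np.sum(t**2 / lam)

n = len(normal)                                   # 99% control limit for new samples
limit = k * (n - 1) * (n + 1) / (n * (n - k)) * f_dist.ppf(0.99, k, n - k)
print("T2 of a normal sample:", round(t2(normal[0]), 2), " limit:", round(limit, 2))
print("T2 of the event sample:", round(t2(event), 2))
```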

  7. CMB constraints on principal components of the inflaton potential

    International Nuclear Information System (INIS)

    Dvorkin, Cora; Hu, Wayne

    2010-01-01

    We place functional constraints on the shape of the inflaton potential from the cosmic microwave background through a variant of the generalized slow-roll approximation that allows large amplitude, rapidly changing deviations from scale-free conditions. Employing a principal component decomposition of the source function G′ ≅ 3(V′/V)² − 2V″/V and keeping only those measured to better than 10% results in 5 nearly independent Gaussian constraints that may be used to test any single-field inflationary model where such deviations are expected. The first component implies <3% variations at the 100 Mpc scale. One component shows a 95% CL preference for deviations around the 300 Mpc scale at the ∼10% level but the global significance is reduced considering the 5 components examined. This deviation also requires a change in the cold dark matter density which in a flat ΛCDM model is disfavored by current supernova and Hubble constant data and can be tested with future polarization or high multipole temperature data. Its impact resembles a local running of the tilt from multipoles 30-800 but is only marginally consistent with a constant running beyond this range. For this analysis, we have implemented a ∼40x faster WMAP7 likelihood method which we have made publicly available.

  8. Principal components based support vector regression model for on-line instrument calibration monitoring in NPPs

    International Nuclear Information System (INIS)

    Seo, In Yong; Ha, Bok Nam; Lee, Sung Woo; Shin, Chang Hoon; Kim, Seong Jun

    2010-01-01

    In nuclear power plants (NPPs), periodic sensor calibrations are required to assure that sensors are operating correctly. Because the sensors' operating status is checked only at each fuel outage, faulty sensors may remain undetected for periods of up to 24 months. Moreover, typically only a few of the sensors that are calibrated are actually found to be faulty. For the safe operation of NPPs and the reduction of unnecessary calibration, on-line instrument calibration monitoring is needed. In this study, principal component based auto-associative support vector regression (PCSVR) using response surface methodology (RSM) is proposed for the sensor signal validation of NPPs. This paper describes the design of a PCSVR-based sensor validation system for a power generation system. RSM is employed to determine the optimal values of the SVR hyperparameters and is compared to the genetic algorithm (GA). The proposed PCSVR model is confirmed with actual plant data from Kori Nuclear Power Plant Unit 3 and is compared with the auto-associative support vector regression (AASVR) and auto-associative neural network (AANN) models. The auto-sensitivity of AASVR is improved by around six times by using PCA, resulting in good detection of sensor drift. Compared to AANN, accuracy and cross-sensitivity are better while the auto-sensitivity is almost the same. Meanwhile, the proposed RSM for the optimization of the PCSVR algorithm performs even better in terms of accuracy, auto-sensitivity, and averaged maximum error, except in averaged RMS error, and this method is much more time efficient compared to the conventional GA method.
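
    A simplified sketch in the spirit of the PCA-plus-SVR idea above, not the authors' PCSVR implementation: correlated sensor signals are compressed with PCA, one support vector regressor per sensor reconstructs it from the principal component scores, and drift is flagged when a sensor's reconstruction residual grows. The sensor signals and drift are synthetic.

```python
# Sketch: PCA-compressed auto-associative SVR for sensor drift detection
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVR

rng = np.random.default_rng(8)
t = np.linspace(0, 10, 600)
base = np.sin(t)[:, None] + 0.01 * rng.normal(size=(600, 5))   # 5 redundant sensors
train, test = base[:400], base[400:].copy()
test[:, 2] += np.linspace(0, 0.2, 200)                         # slow drift on sensor 3

pca = PCA(n_components=2).fit(train)
models = [SVR(C=10.0, epsilon=0.005).fit(pca.transform(train), train[:, j])
          for j in range(train.shape[1])]

scores = pca.transform(test)
recon = np.column_stack([m.predict(scores) for m in models])
residual = np.abs(test - recon).mean(axis=0)
print("mean residual per sensor:", residual.round(3))   # the drifted sensor should stand out
```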

  9. Principal Component Analysis of Working Memory Variables during Child and Adolescent Development.

    Science.gov (United States)

    Barriga-Paulino, Catarina I; Rodríguez-Martínez, Elena I; Rojas-Benjumea, María Ángeles; Gómez, Carlos M

    2016-10-03

    Correlation and Principal Component Analysis (PCA) of behavioral measures from two experimental tasks (Delayed Match-to-Sample and Oddball), and of standard scores from a neuropsychological test battery (Working Memory Test Battery for Children), was performed on data from participants between 6-18 years old. In the correlation and PCA results, the scores of the first extracted component were significantly correlated (p < .05) with most behavioral measures, suggesting some commonality in the processes underlying age-related changes in the measured variables. The results suggest that this first component is related to age but also to individual differences during the cognitive maturation process across the childhood and adolescence stages. The fourth component appears to represent the speed-accuracy trade-off phenomenon, as it presents loadings with different signs for reaction times and errors.

  10. Blood hyperviscosity identification with reflective spectroscopy of tongue tip based on principal component analysis combining artificial neural network.

    Science.gov (United States)

    Liu, Ming; Zhao, Jing; Lu, XiaoZuo; Li, Gang; Wu, Taixia; Zhang, LiFu

    2018-05-10

    Spectral methods offer a promising and clinically meaningful route to noninvasive in vivo determination of blood hyperviscosity. In this study, 67 male subjects participated (41 healthy and 26 with hyperviscosity according to blood sample analysis). Reflectance spectra of the subjects' tongue tips were measured, and a classification method based on principal component analysis combined with an artificial neural network model was built to identify hyperviscosity. Hold-out and leave-one-out methods, which are widely accepted for model validation, were used to avoid significant bias and lessen overfitting. To measure classification performance, sensitivity, specificity, accuracy and F-measure were calculated. The accuracies with the 100-repetition hold-out method and the 67-fold leave-one-out method were 88.05% and 97.01%, respectively. Experimental results indicate that the classification model has practical value and demonstrate the feasibility of using spectroscopy to identify hyperviscosity noninvasively.
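
    A hedged illustration of the pipeline described above, on synthetic reflectance spectra with an invented spectral difference between the two groups: PCA for feature reduction, a small neural network for classification, and leave-one-out validation.

```python
# Sketch: PCA + neural-network classification with leave-one-out validation
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(9)
wl = np.linspace(400, 1000, 150)                 # nm
healthy = 0.4 + 0.1 * np.exp(-0.5 * ((wl - 560) / 40) ** 2)
hyper = healthy + 0.02 * np.exp(-0.5 * ((wl - 760) / 30) ** 2)   # assumed difference
X = np.vstack([healthy + 0.01 * rng.normal(size=(41, 150)),
               hyper + 0.01 * rng.normal(size=(26, 150))])
y = np.array([0] * 41 + [1] * 26)

correct = 0
for tr, te in LeaveOneOut().split(X):
    pca = PCA(n_components=5).fit(X[tr])         # fit PCA on the training fold only
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                        random_state=0).fit(pca.transform(X[tr]), y[tr])
    correct += int(clf.predict(pca.transform(X[te]))[0] == y[te][0])
print("leave-one-out accuracy:", round(correct / len(y), 3))
```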

  11. Efficient training of multilayer perceptrons using principal component analysis

    International Nuclear Information System (INIS)

    Bunzmann, Christoph; Urbanczik, Robert; Biehl, Michael

    2005-01-01

    A training algorithm for multilayer perceptrons is discussed and studied in detail, which relates to the technique of principal component analysis. The latter is performed with respect to a correlation matrix computed from the example inputs and their target outputs. Typical properties of the training procedure are investigated by means of a statistical physics analysis in models of learning regression and classification tasks. We demonstrate that the procedure requires by far fewer examples for good generalization than traditional online training. For networks with a large number of hidden units we derive the training prescription which achieves, within our model, the optimal generalization behavior

  12. QUANTITATIVE ELECTRONIC STRUCTURE - ACTIVITY RELATIONSHIP OF ANTIMALARIAL COMPOUND OF ARTEMISININ DERIVATIVES USING PRINCIPAL COMPONENT REGRESSION APPROACH

    Directory of Open Access Journals (Sweden)

    Paul Robert Martin Werfette

    2010-06-01

    Full Text Available An analysis of the quantitative structure-activity relationship (QSAR) for a series of antimalarial artemisinin derivatives has been carried out using principal component regression. The descriptors for the QSAR study were representations of electronic structure, i.e. atomic net charges of the artemisinin skeleton calculated by the AM1 semi-empirical method. The antimalarial activity of the compounds was expressed as log 1/IC50, which is experimental data. The main purpose of the principal component analysis approach is to transform the large data set of atomic net charges into a simplified data set known as latent variables. The best QSAR equation for log 1/IC50 can be obtained from the regression method as a linear function of several latent variables, i.e. x1, x2, x3, x4 and x5. Keywords: QSAR, antimalarial, artemisinin, principal component regression

  13. Efficient three-dimensional resist profile-driven source mask optimization optical proximity correction based on Abbe-principal component analysis and Sylvester equation

    Science.gov (United States)

    Lin, Pei-Chun; Yu, Chun-Chang; Chen, Charlie Chung-Ping

    2015-01-01

    As one of the critical stages of a very large scale integration fabrication process, postexposure bake (PEB) plays a crucial role in determining the final three-dimensional (3-D) profiles and lessening the standing wave effects. However, the full 3-D chemically amplified resist simulation is not widely adopted during the postlayout optimization due to the long run-time and huge memory usage. An efficient simulation method is proposed to simulate the PEB while considering standing wave effects and resolution enhancement techniques, such as source mask optimization and subresolution assist features based on the Sylvester equation and Abbe-principal component analysis method. Simulation results show that our algorithm is 20× faster than the conventional Gaussian convolution method.

  14. Ionizing radiations simulation on bipolar components

    International Nuclear Information System (INIS)

    Montagner, X.

    1999-01-01

    This thesis presents the effects of ionizing radiation on bipolar components, and more specifically their behaviour under total dose. The first part is devoted to radiation environments, with special attention to space environments and new emerging environments. The specific features of bipolar components and their behaviour with respect to radiation interactions are then presented. The physical mechanisms related to the dose rate are also discussed. The second part presents a physical analysis of the degradation induced in bipolar components by the cumulated dose, together with simulations using the ATLAS code. The third part describes an empirical electrical simulation of cumulated-dose effects under static conditions. (A.L.B.)

  15. Improving Cross-Day EEG-Based Emotion Classification Using Robust Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Yuan-Pin Lin

    2017-07-01

    Full Text Available Constructing a robust emotion-aware analytical framework using non-invasively recorded electroencephalogram (EEG) signals has gained intensive attention nowadays. However, when deploying a laboratory-oriented proof-of-concept study toward real-world applications, researchers face an ecological challenge: the EEG patterns recorded in real life substantially change across days (i.e., day-to-day variability), arguably making a pre-defined predictive model vulnerable to the EEG signals of a separate day. The present work addressed how to mitigate the inter-day EEG variability of emotional responses with an attempt to facilitate cross-day emotion classification, which has been less studied in the literature. This study proposed a robust principal component analysis (RPCA)-based signal filtering strategy and validated its neurophysiological validity and machine-learning practicability on a binary emotion classification task (happiness vs. sadness) using a five-day EEG dataset of 12 subjects who participated in a music-listening task. The empirical results showed that the RPCA-decomposed sparse signals (RPCA-S) enabled filtering out the background EEG activity that contributed more to the inter-day variability, and predominantly captured the EEG oscillations of emotional responses that behaved relatively consistently across days. Through applying a realistic add-day-in classification validation scheme, the RPCA-S progressively exploited more informative features (from 12.67 ± 5.99 to 20.83 ± 7.18) and improved the cross-day binary emotion-classification accuracy (from 58.31 ± 12.33% to 64.03 ± 8.40%) as the EEG signals from one to four recording days were used for training and tested against one unseen subsequent day. The original EEG features (prior to RPCA processing) neither achieved cross-day classification (the accuracy was around chance level) nor replicated the encouraging improvement, owing to the inter-day EEG variability. This result

  16. Impact of Autocorrelation on Principal Components and Their Use in Statistical Process Control

    DEFF Research Database (Denmark)

    Vanhatalo, Erik; Kulahci, Murat

    2015-01-01

    A basic assumption when using principal component analysis (PCA) for inferential purposes, such as in statistical process control (SPC), is that the data are independent in time. In many industrial processes, frequent sampling and process dynamics make this assumption unrealistic rendering sampled...

  17. Personality disorders in substance abusers: Validation of the DIP-Q through principal components factor analysis and canonical correlation analysis

    Directory of Open Access Journals (Sweden)

    Hesse Morten

    2005-05-01

    Full Text Available Abstract Background Personality disorders are common in substance abusers. Self-report questionnaires that can aid in the assessment of personality disorders are commonly used in assessment, but are rarely validated. Methods The Danish DIP-Q as a measure of co-morbid personality disorders in substance abusers was validated through principal components factor analysis and canonical correlation analysis. A 4-component structure representing antagonism, neuroticism, introversion and conscientiousness was constructed based on 238 protocols. The structure was compared with (a) a 4-factor solution from the DIP-Q in a sample of Swedish drug and alcohol abusers (N = 133), and (b) a consensus 4-component solution based on a meta-analysis of published correlation matrices of dimensional personality disorder scales. Results It was found that the 4-factor model of personality was congruent across the Danish and Swedish samples, and showed good congruence with the consensus model. A canonical correlation analysis was conducted on a subset of the Danish sample with staff ratings of pathology. Three factors that correlated highly between the two variable sets were found. These were highly similar to the first three factors from the principal components analysis: antagonism, neuroticism and introversion. Conclusion The findings support the validity of the DIP-Q as a measure of DSM-IV personality disorders in substance abusers.

  18. Nonlinear principal component analysis and its applications

    CERN Document Server

    Mori, Yuichi; Makino, Naomichi

    2016-01-01

    This book expounds the principle and related applications of nonlinear principal component analysis (PCA), which is useful method to analyze mixed measurement levels data. In the part dealing with the principle, after a brief introduction of ordinary PCA, a PCA for categorical data (nominal and ordinal) is introduced as nonlinear PCA, in which an optimal scaling technique is used to quantify the categorical variables. The alternating least squares (ALS) is the main algorithm in the method. Multiple correspondence analysis (MCA), a special case of nonlinear PCA, is also introduced. All formulations in these methods are integrated in the same manner as matrix operations. Because any measurement levels data can be treated consistently as numerical data and ALS is a very powerful tool for estimations, the methods can be utilized in a variety of fields such as biometrics, econometrics, psychometrics, and sociology. In the applications part of the book, four applications are introduced: variable selection for mixed...

  19. Nonlinear fitness-space-structure adaptation and principal component analysis in genetic algorithms: an application to x-ray reflectivity analysis

    International Nuclear Information System (INIS)

    Tiilikainen, J; Tilli, J-M; Bosund, V; Mattila, M; Hakkarainen, T; Airaksinen, V-M; Lipsanen, H

    2007-01-01

    Two novel genetic algorithms implementing principal component analysis and an adaptive nonlinear fitness-space-structure technique are presented and compared with conventional algorithms in x-ray reflectivity analysis. Principal component analysis based on Hessian or interparameter covariance matrices is used to rotate a coordinate frame. The nonlinear adaptation applies nonlinear estimates to reshape the probability distribution of the trial parameters. The simulated x-ray reflectivity of a realistic model of a periodic nanolaminate structure was used as a test case for the fitting algorithms. The novel methods had significantly faster convergence and less stagnation than conventional non-adaptive genetic algorithms. The covariance approach needs no additional curve calculations compared with conventional methods, and it had better convergence properties than the computationally expensive Hessian approach. These new algorithms can also be applied to other fitting problems where tight interparameter dependence is present

  20. An application of principal component analysis to the clavicle and clavicle fixation devices.

    Science.gov (United States)

    Daruwalla, Zubin J; Courtis, Patrick; Fitzpatrick, Clare; Fitzpatrick, David; Mullett, Hannan

    2010-03-26

    Principal component analysis (PCA) enables the building of statistical shape models of bones and joints. This has been used in conjunction with computer assisted surgery in the past. However, PCA of the clavicle has not been performed. Using PCA, we present a novel method that examines the major modes of size and three-dimensional shape variation in male and female clavicles and suggests a method of grouping the clavicle into size and shape categories. Twenty-one high-resolution computerized tomography scans of the clavicle were reconstructed and analyzed using a specifically developed statistical software package. After performing statistical shape analysis, PCA was applied to study the factors that account for anatomical variation. The first principal component, representing size, accounted for 70.5 percent of anatomical variation. The addition of a further three principal components accounted for almost 87 percent. Using statistical shape analysis, clavicles in males have a greater lateral depth and are longer, wider and thicker than in females. However, the sternal angle in females is larger than in males. PCA confirmed these differences between genders but also noted that men exhibit greater variance and classified clavicles into five morphological groups. This unique approach is the first that standardizes a clavicular orientation. It provides information that is useful to both the biomedical engineer and the clinician. Other applications include implant design with regard to modifying current or designing future clavicle fixation devices. Our findings support the need for further development of clavicle fixation devices and the questioning of whether gender-specific devices are necessary.
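
    Not the study's software, but a generic statistical-shape-model sketch on made-up landmark data: PCA summarises size and shape variation in flattened landmark coordinates, and new plausible shapes can be generated from the mean plus a few modes.

```python
# Sketch: PCA-based statistical shape model on landmark coordinates
import numpy as np

rng = np.random.default_rng(11)
n_specimens, n_landmarks = 21, 40
# flattened 3-D landmark coordinates per specimen (hypothetical clavicle data)
shapes = rng.normal(size=(n_specimens, n_landmarks * 3))
shapes += rng.normal(size=(n_specimens, 1)) * np.linspace(1, 2, n_landmarks * 3)  # dominant "size" mode

mean_shape = shapes.mean(axis=0)
U, s, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
var_ratio = s**2 / np.sum(s**2)
print("variance explained by PC1:", round(100 * var_ratio[0], 1), "%")

# synthesise a new plausible shape: mean + b1 * mode1, with b1 within typical range
b1 = 2.0 * s[0] / np.sqrt(n_specimens - 1)
new_shape = mean_shape + b1 * Vt[0]
```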

  1. THE STUDY OF THE CHARACTERIZATION INDICES OF FABRICS BY PRINCIPAL COMPONENT ANALYSIS METHOD

    Directory of Open Access Journals (Sweden)

    HRISTIAN Liliana

    2017-05-01

    Full Text Available The aim of the paper was to rank worsted fabric types for the manufacture of outerwear products according to fabric characterization indices, using the mathematical model of Principal Component Analysis (PCA). A number of variables influence fabric quality, but some are more important than others, so it is useful to identify them in order to better understand the factors that can improve fabric quality. One solution to this problem is the application of a factorial analysis method, the so-called Principal Component Analysis, with the final goal of establishing and analyzing the variables that significantly influence the internal structure of combed wool fabrics according to weave type. By applying PCA, a small number of linear combinations (principal components) is obtained from the set of variables describing the internal structure of the fabrics, retaining as much information as possible from the original variables. Data analysis is an important initial step in decision making, allowing identification of the causes that lead to decision-making situations; it is the action of transforming the initial data in order to extract useful information and to facilitate reaching conclusions. The process of data analysis can be defined as a sequence of steps: formulating hypotheses, collecting and validating primary information, constructing the mathematical model describing the phenomenon, and reaching conclusions about the behavior of this model.

  2. Visualizing solvent mediated phase transformation behavior of carbamazepine polymorphs by principal component analysis

    DEFF Research Database (Denmark)

    Tian, Fang; Rades, Thomas; Sandler, Niklas

    2008-01-01

    The purpose of this research is to gain a greater insight into the hydrate formation processes of different carbamazepine (CBZ) anhydrate forms in aqueous suspension, where principal component analysis (PCA) was applied for data analysis. The capability of PCA to visualize and to reveal simplified...

  3. Radiation damage to electronic components

    International Nuclear Information System (INIS)

    Battisti, S.; Bossart, R.; Schoenbacher, H.; Van de Voorde, M.

    1975-01-01

    Characteristic properties are presented of some 40 different electronic components (resistors, capacitors, diodes, transistors, and integrated circuits) which were irradiated in a nuclear reactor up to 10¹⁵ n/cm² (E > 1 MeV). Complete circuits (e.g. RF amplifiers and detectors, mixers, differential amplifiers, voltage-to-frequency converters, oscillators, power supplies) were irradiated near the CERN Intersecting Storage Rings up to 10⁶ rad(RPL) (dose measured with radiophotoluminescent dosimeters) under simulated operational conditions. Representative measured parameters, such as resistance, capacitance, forward voltage, reverse current, toggle frequencies, are given in graphs as a function of radiation dose. The results are discussed in detail and lead to the over-all conclusion that the operation of electronic components and circuits is seriously affected by radiation environments with doses in the order of 10¹³ n/cm² or 10⁴ rad(RPL); some components and circuits fail completely at doses of 10¹⁴ n/cm² or 10⁵ rad(RPL). (Author)

  4. Improved Principal Component Analysis for Anomaly Detection: Application to an Emergency Department

    KAUST Repository

    Harrou, Fouzi

    2015-07-03

    Monitoring of production systems, such as those in hospitals, is essential for ensuring the best management and maintaining the desired product quality. Detection of emergent abnormalities allows preemptive actions that can prevent more serious consequences. The principal component analysis (PCA)-based anomaly-detection approach has been used successfully for monitoring systems with highly correlated variables. However, conventional PCA-based detection indices, such as Hotelling's T² and the Q statistics, are ill suited to detect small abnormalities because they use only information from the most recent observations. Other multivariate statistical metrics, such as the multivariate cumulative sum (MCUSUM) control scheme, are more suitable for detecting small anomalies. In this paper, a generic anomaly detection scheme based on PCA is proposed to monitor demands to an emergency department. In such a framework, the MCUSUM control chart is applied to the uncorrelated residuals obtained from the PCA model. The proposed PCA-based MCUSUM anomaly detection strategy is successfully applied to practical data collected from the database of the pediatric emergency department in the Lille Regional Hospital Centre, France. The detection results show that the proposed method is more effective than the conventional PCA-based anomaly-detection methods.

  5. Improved Principal Component Analysis for Anomaly Detection: Application to an Emergency Department

    KAUST Repository

    Harrou, Fouzi; Kadri, Farid; Chaabane, Sondès; Tahon, Christian; Sun, Ying

    2015-01-01

    Monitoring of production systems, such as those in hospitals, is essential for ensuring proper management and maintaining the desired product quality. Detection of emergent abnormalities allows preemptive actions that can prevent more serious consequences. The principal component analysis (PCA)-based anomaly-detection approach has been used successfully for monitoring systems with highly correlated variables. However, conventional PCA-based detection indices, such as the Hotelling's T² and the Q statistics, are ill suited to detect small abnormalities because they use only information from the most recent observations. Other multivariate statistical metrics, such as the multivariate cumulative sum (MCUSUM) control scheme, are more suitable for detecting small anomalies. In this paper, a generic anomaly detection scheme based on PCA is proposed to monitor demands to an emergency department. In such a framework, the MCUSUM control chart is applied to the uncorrelated residuals obtained from the PCA model. The proposed PCA-based MCUSUM anomaly detection strategy is successfully applied to practical data collected from the database of the pediatric emergency department in the Lille Regional Hospital Centre, France. The detection results show that the proposed method is more effective than conventional PCA-based anomaly-detection methods.
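
    To make the scheme above concrete, the sketch below combines a PCA model of correlated demand indicators with a CUSUM-type chart on the model residuals. It is a minimal illustration, not the authors' implementation: the data matrix, the one-sided CUSUM on the residual norm (a simplification of a full MCUSUM), and the tuning constants k and h are all illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def pca_cusum_alarms(X, n_components=3, k=0.5, h=5.0):
        """Flag time indices where PCA residuals drift, CUSUM-style.

        X : (n_samples, n_features) array of correlated indicators, rows ordered in time.
        The first half of X is treated as in-control training data; k (reference value)
        and h (decision threshold) are illustrative tuning constants.
        """
        n_train = len(X) // 2
        mu, sigma = X[:n_train].mean(axis=0), X[:n_train].std(axis=0)
        Z = (X - mu) / sigma

        # PCA model of the in-control data; residual = part not explained by retained PCs
        pca = PCA(n_components=n_components).fit(Z[:n_train])
        recon = pca.inverse_transform(pca.transform(Z))
        resid = np.linalg.norm(Z - recon, axis=1)

        # One-sided CUSUM on the residual norm (a simplified stand-in for a full MCUSUM)
        target = resid[:n_train].mean()
        s, alarms = 0.0, []
        for t, r in enumerate(resid):
            s = max(0.0, s + (r - target) - k)
            if s > h:
                alarms.append(t)
        return alarms
    ```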

  6. Principal component analysis of the nonlinear coupling of harmonic modes in heavy-ion collisions

    Science.gov (United States)

    Bożek, Piotr

    2018-03-01

    The principal component analysis of flow correlations in heavy-ion collisions is studied. The correlation matrix of harmonic flow is generalized to correlations involving several different flow vectors. The method can be applied to study the nonlinear coupling between different harmonic modes in a double differential way in transverse momentum or pseudorapidity. The procedure is illustrated with results from the hydrodynamic model applied to Pb + Pb collisions at √s_NN = 2760 GeV. Three examples of generalized correlation matrices in transverse momentum are constructed, corresponding to the coupling of v₂² and v₄, of v₂v₃ and v₅, or of v₂³, v₃², and v₆. The principal component decomposition is applied to the correlation matrices and the dominant modes are calculated.
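
    A generic numerical illustration of the decomposition step is sketched below: build the covariance matrix of event-by-event flow vectors across transverse-momentum bins and extract its leading eigenmodes. The random flow vectors and bin layout are placeholders, not output of the hydrodynamic model.

    ```python
    import numpy as np

    # q[e, b]: complex flow vector of a chosen harmonic in pT bin b for event e (synthetic)
    rng = np.random.default_rng(0)
    n_events, n_bins = 5000, 10
    q = rng.normal(size=(n_events, n_bins)) + 1j * rng.normal(size=(n_events, n_bins))

    # Covariance matrix of flow fluctuations between pT bins
    dq = q - q.mean(axis=0)
    cov = (dq.conj().T @ dq).real / n_events

    # Principal component decomposition: the leading eigenmodes dominate the correlations
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]
    leading_mode = eigvec[:, order[0]]
    print("fraction of variance in the leading mode:", eigval[order][0] / eigval.sum())
    ```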

  7. Principal component analysis reveals gender-specific predictors of cardiometabolic risk in 6th graders

    Directory of Open Access Journals (Sweden)

    Peterson Mark D

    2012-11-01

    Full Text Available Abstract Background The purpose of this study was to determine the sex-specific pattern of pediatric cardiometabolic risk with principal component analysis, using several biological, behavioral and parental variables in a large cohort (n = 2866) of 6th grade students. Methods Cardiometabolic risk components included waist circumference, fasting glucose, blood pressure, plasma triglyceride levels and HDL-cholesterol. Principal components analysis was used to determine the pattern of risk clustering and to derive a continuous aggregate score (MetScore). Stratified risk components and MetScore were analyzed for association with age, body mass index (BMI), cardiorespiratory fitness (CRF), physical activity (PA), and parental factors. Results In both boys and girls, BMI and CRF were associated with multiple risk components, and overall MetScore. Maternal smoking was associated with multiple risk components in girls and boys, as well as with MetScore in boys, even after controlling for children's BMI. Paternal family history of early cardiovascular disease (CVD) and parental age were associated with increased blood pressure and MetScore for girls. Children's PA levels, maternal history of early CVD, and paternal BMI were also indicative of various risk components, but not of MetScore. Conclusions Several biological and behavioral factors were independently associated with children's cardiometabolic disease risk, and thus represent a unique gender-specific risk profile. These data serve to bolster the independent contribution of CRF, PA, and family-oriented healthy lifestyles for improving children's health.

  8. Oil classification using X-ray scattering and principal component analysis

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Danielle S.; Souza, Amanda S.; Lopes, Ricardo T., E-mail: dani.almeida84@gmail.com, E-mail: ricardo@lin.ufrj.br, E-mail: amandass@bioqmed.ufrj.br [Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, RJ (Brazil); Oliveira, Davi F.; Anjos, Marcelino J., E-mail: davi.oliveira@uerj.br, E-mail: marcelin@uerj.br [Universidade do Estado do Rio de Janeiro (UERJ), Rio de Janeiro, RJ (Brazil). Inst. de Fisica Armando Dias Tavares

    2015-07-01

    X-ray scattering techniques have been considered promising for the classification and characterization of many types of samples. This study employed this technique combined with chemical analysis and multivariate analysis to characterize 54 vegetable oil samples (25 of them olive oils) with different properties obtained in commercial establishments in Rio de Janeiro city. The samples were chemically analyzed using the following indexes: iodine, acidity, saponification and peroxide. In order to obtain the X-ray scattering spectrum, an X-ray tube with a silver anode operating at 40 kV and 50 μA was used. The results showed that the oils can be divided into two large groups: olive oils and non-olive oils. Additionally, in a multivariate analysis (Principal Component Analysis - PCA), two components were obtained and accounted for more than 80% of the variance. One component was associated with the chemical parameters and the other with the scattering profiles of each sample. Results showed that the use of X-ray scattering spectra combined with chemical analysis and PCA can be a fast, cheap and efficient method for vegetable oil characterization. (author)

  9. Oil classification using X-ray scattering and principal component analysis

    International Nuclear Information System (INIS)

    Almeida, Danielle S.; Souza, Amanda S.; Lopes, Ricardo T.; Oliveira, Davi F.; Anjos, Marcelino J.

    2015-01-01

    X-ray scattering techniques have been considered promising for the classification and characterization of many types of samples. This study employed this technique combined with chemical analysis and multivariate analysis to characterize 54 vegetable oil samples (25 of them olive oils) with different properties obtained in commercial establishments in Rio de Janeiro city. The samples were chemically analyzed using the following indexes: iodine, acidity, saponification and peroxide. In order to obtain the X-ray scattering spectrum, an X-ray tube with a silver anode operating at 40 kV and 50 μA was used. The results showed that the oils can be divided into two large groups: olive oils and non-olive oils. Additionally, in a multivariate analysis (Principal Component Analysis - PCA), two components were obtained and accounted for more than 80% of the variance. One component was associated with the chemical parameters and the other with the scattering profiles of each sample. Results showed that the use of X-ray scattering spectra combined with chemical analysis and PCA can be a fast, cheap and efficient method for vegetable oil characterization. (author)

  10. Identification of Counterfeit Alcoholic Beverages Using Cluster Analysis in Principal-Component Space

    Science.gov (United States)

    Khodasevich, M. A.; Sinitsyn, G. V.; Gres'ko, M. A.; Dolya, V. M.; Rogovaya, M. V.; Kazberuk, A. V.

    2017-07-01

    A study of 153 brands of commercial vodka products showed that counterfeit samples could be identified by introducing a unified additive at the minimum concentration acceptable for instrumental detection and multivariate analysis of UV-Vis transmission spectra. Counterfeit products were detected with 100% probability by using hierarchical cluster analysis or the C-means method in two-dimensional principal-component space.

  11. An application of principal component analysis to the clavicle and clavicle fixation devices

    Directory of Open Access Journals (Sweden)

    Fitzpatrick David

    2010-03-01

    Full Text Available Abstract Background Principal component analysis (PCA) enables the building of statistical shape models of bones and joints. This has been used in conjunction with computer assisted surgery in the past. However, PCA of the clavicle has not been performed. Using PCA, we present a novel method that examines the major modes of size and three-dimensional shape variation in male and female clavicles and suggests a method of grouping the clavicle into size and shape categories. Materials and methods Twenty-one high-resolution computerized tomography scans of the clavicle were reconstructed and analyzed using a specifically developed statistical software package. After performing statistical shape analysis, PCA was applied to study the factors that account for anatomical variation. Results The first principal component, representing size, accounted for 70.5 percent of anatomical variation. The addition of a further three principal components accounted for almost 87 percent. Using statistical shape analysis, clavicles in males have a greater lateral depth and are longer, wider and thicker than in females. However, the sternal angle in females is larger than in males. PCA confirmed these differences between genders but also noted that men exhibit greater variance and classified clavicles into five morphological groups. Discussion and Conclusions This unique approach is the first that standardizes a clavicular orientation. It provides information that is useful to both the biomedical engineer and the clinician. Other applications include implant design with regard to modifying current or designing future clavicle fixation devices. Our findings support the need for further development of clavicle fixation devices and the questioning of whether gender-specific devices are necessary.

  12. Modelling the Load Curve of Aggregate Electricity Consumption Using Principal Components

    OpenAIRE

    Matteo Manera; Angelo Marzullo

    2003-01-01

    Since oil is a non-renewable resource with a high environmental impact, and its most common use is to produce fuels for electricity generation, reliable methods for modelling electricity consumption can contribute to a more rational employment of this hydrocarbon fuel. In this paper we apply the Principal Components (PC) method to modelling the load curves of Italy, France and Greece on hourly data of aggregate electricity consumption. The empirical results obtained with the PC approach are compa...

  13. Application of mass spectrometry based electronic nose and chemometrics for fingerprinting radiation treatment

    Science.gov (United States)

    Gupta, Sumit; Variyar, Prasad S.; Sharma, Arun

    2015-01-01

    Volatile compounds were isolated from apples and grapes employing solid phase micro extraction (SPME) and subsequently analyzed by GC/MS equipped with a transfer line without stationary phase. The single peak obtained was integrated to obtain the total mass spectrum of the volatile fraction of the samples. A data matrix containing the relative abundance of all mass-to-charge ratios was subjected to principal component analysis (PCA) and linear discriminant analysis (LDA) to identify radiation treatment. PCA results suggested that there is sufficient variability between control and irradiated samples to build classification models based on supervised techniques. LDA successfully aided in segregating control from irradiated samples at all doses (0.1, 0.25, 0.5, 1.0, 1.5, 2.0 kGy). SPME-MS with chemometrics was successfully demonstrated as a simple screening method for radiation treatment.

  14. Radiation damage in CTR magnet components

    International Nuclear Information System (INIS)

    Ullmaier, H.

    1976-01-01

    Data are reviewed (already existing or to be acquired) which should allow prediction of the behavior of large superconducting coils in the radiation field of a future fusion reactor. The electrical and mechanical stability of such magnets is determined by the irradiation induced deterioration of the magnet components, i.e., (a) changes in critical current, field and temperature of the superconductor (NbTi, A-15 phases), (b) resistivity increase in the stabilizer (Cu, Al), and (c) changes in mechanical and dielectric properties of insulators and spacers. Recent low temperature simulation experiments (with fission neutrons and heavy ions) show that the superconductor will not be the critical component of a fusion magnet--at least as far as radiation damage is concerned. Much more severe is the loss of stability due to the resistivity increase of the stabilizing material. It seems, however, that the magnitude of this effect can be predicted rather reliably and therefore taken into account in the coil design. Almost no data exist about the low temperature behavior of insulator and spacer materials in a radiation field. Furthermore, very little is known about the nature of the radiation damage in non-metals, which makes extrapolations of the few existing data to other materials or to other doses highly speculative. Only future experiments can decide if the insulators will be the limiting component of a CTR magnet or not

  15. Principal component analysis of solar flares in the soft X-ray flux

    International Nuclear Information System (INIS)

    Teuber, D.L.; Reichmann, E.J.; Wilson, R.M.; National Aeronautics and Space Administration, Huntsville, AL

    1979-01-01

    Principal component analysis is a technique for extracting the salient features from a mass of data. It applies, in particular, to the analysis of nonstationary ensembles. Computational schemes for this task require the evaluation of eigenvalues of matrices. We have used EISPACK Matrix Eigen System Routines on an IBM 360-75 to analyze full-disk proportional-counter data from the X-ray event analyzer (X-REA) which was part of the Skylab ATM/S-056 experiment. Empirical orthogonal functions have been derived for events in the soft X-ray spectrum between 2.5 and 20 Å during different time frames between June 1973 and January 1974. Results indicate that approximately 90% of the cumulative power of each analyzed flare is contained in the largest eigenvector. The two largest eigenvectors are sufficient for an empirical curve-fit through the raw data and a characterization of solar flares in the soft X-ray flux. Power spectra of the two largest eigenvectors reveal a previously reported periodicity of approximately 5 min. Similar signatures were also obtained from flares that are synchronized on maximum pulse-height when subjected to a principal component analysis. (orig.)
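
    The eigenvector analysis described above can be reproduced in outline as follows. The synthetic flare profiles stand in for the X-REA light curves, and all variable names are illustrative.

    ```python
    import numpy as np

    # X[i, t]: soft X-ray flux of flare i sampled at time t (synthetic stand-in data)
    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 1.0, 200)
    X = np.array([np.exp(-((t - 0.3) / (0.10 + 0.05 * rng.random())) ** 2)
                  + 0.05 * rng.normal(size=t.size) for _ in range(40)])

    # Empirical orthogonal functions from the eigen-decomposition of the covariance
    C = np.cov(X, rowvar=False)
    w, V = np.linalg.eigh(C)
    w, V = w[::-1], V[:, ::-1]          # sort eigenvalues/eigenvectors, largest first

    print("power in the largest eigenvector: %.1f%%" % (100 * w[0] / w.sum()))

    # Two-eigenvector empirical curve fit through the raw flare profiles
    scores = (X - X.mean(axis=0)) @ V[:, :2]
    fit = X.mean(axis=0) + scores @ V[:, :2].T
    ```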

  16. Quantitative descriptive analysis and principal component analysis for sensory characterization of Indian milk product cham-cham.

    Science.gov (United States)

    Puri, Ritika; Khamrui, Kaushik; Khetra, Yogesh; Malhotra, Ravinder; Devraja, H C

    2016-02-01

    Promising development and expansion in the market of cham-cham, a traditional Indian dairy product, is expected in the coming years with the organized production of this milk product by some large dairies. The objective of this study was to document the extent of variation in sensory properties of market samples of cham-cham collected from four different locations known for their excellence in cham-cham production and to find out the attributes that govern much of the variation in sensory scores of this product using quantitative descriptive analysis (QDA) and principal component analysis (PCA). QDA revealed significant differences in sensory attributes of cham-cham among the market samples. PCA identified four significant principal components that accounted for 72.4% of the variation in the sensory data. Factor scores of each of the four principal components, which primarily correspond to sweetness/shape/dryness of interior, surface appearance/surface dryness, rancid and firmness attributes, specify the location of each market sample along each of the axes in 3-D graphs. These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring attributes of cham-cham that contribute most to its sensory acceptability.

  17. Using principal component analysis to understand the variability of PDS 456

    Science.gov (United States)

    Parker, M. L.; Reeves, J. N.; Matzeu, G. A.; Buisson, D. J. K.; Fabian, A. C.

    2018-02-01

    We present a spectral-variability analysis of the low-redshift quasar PDS 456 using principal component analysis. In the XMM-Newton data, we find a strong peak in the first principal component at the energy of the Fe absorption line from the highly blueshifted outflow. This indicates that the absorption feature is more variable than the continuum, and that it is responding to the continuum. We find qualitatively different behaviour in the Suzaku data, which is dominated by changes in the column density of neutral absorption. In this case, we find no evidence of the absorption produced by the highly ionized gas being correlated with this variability. Additionally, we perform simulations of the source variability, and demonstrate that PCA can trivially distinguish between outflow variability correlated, anticorrelated and un-correlated with the continuum flux. Here, the observed anticorrelation between the absorption line equivalent width and the continuum flux may be due to the ionization of the wind responding to the continuum. Finally, we compare our results with those found in the narrow-line Seyfert 1 IRAS 13224-3809. We find that the Fe K UFO feature is sharper and more prominent in PDS 456, but that it lacks the lower energy features from lighter elements found in IRAS 13224-3809, presumably due to differences in ionization.

  18. Topographical characteristics and principal component structure of the hypnagogic EEG.

    Science.gov (United States)

    Tanaka, H; Hayashi, M; Hori, T

    1997-07-01

    The purpose of the present study was to identify the dominant topographic components of the electroencephalogram (EEG) and their behavior during the waking-sleeping transition period. Somnography of nocturnal sleep was recorded in 10 male subjects. Each recording, from "lights-off" to 5 minutes after the appearance of the first sleep spindle, was analyzed. The typical EEG patterns during the hypnagogic period were classified into nine EEG stages. Topographic maps demonstrated that the dominant areas of alpha-band activity moved from the posterior areas to anterior areas along the midline of the scalp. In delta-, theta-, and sigma-band activities, the differences of EEG amplitude between the focus areas (the dominant areas) and the surrounding areas increased as a function of EEG stage. To identify the dominant topographic components, a principal component analysis was carried out on a 12-channel EEG data set for each of six frequency bands. The dominant areas of alpha 2- (9.6-11.4 Hz) and alpha 3- (11.6-13.4 Hz) band activities moved from the posterior to anterior areas, respectively. The distribution of alpha 2-band activity on the scalp clearly changed just after EEG stage 3 (alpha intermittent, < 50%). On the other hand, alpha 3-band activity became dominant in anterior areas after the appearance of vertex sharp-wave bursts (EEG stage 7). For the sigma band, the amplitude of extensive areas from the frontal pole to the parietal region showed a rapid rise after the onset of stage 7 (the appearance of vertex sharp-wave bursts). Based on the results, the sleep onset process probably starts before the onset of sleep stage 1 in the standard criteria. On the other hand, the basic sleep process may start before the onset of sleep stage 2 or the manually scored spindles.

  19. Application of principal component analysis to time series of daily air pollution and mortality

    NARCIS (Netherlands)

    Quant C; Fischer P; Buringh E; Ameling C; Houthuijs D; Cassee F; MGO

    2004-01-01

    We investigated whether cause-specific daily mortality can be attributed to specific sources of air pollution. To construct indicators of source-specific air pollution, we applied a principal component analysis (PCA) on routinely collected air pollution data in the Netherlands during the period

  20. Progress Towards Improved Analysis of TES X-ray Data Using Principal Component Analysis

    Science.gov (United States)

    Busch, S. E.; Adams, J. S.; Bandler, S. R.; Chervenak, J. A.; Eckart, M. E.; Finkbeiner, F. M.; Fixsen, D. J.; Kelley, R. L.; Kilbourne, C. A.; Lee, S.-J.; hide

    2015-01-01

    The traditional method of applying a digital optimal filter to measure X-ray pulses from transition-edge sensor (TES) devices does not achieve the best energy resolution when the signals have a highly non-linear response to energy, or the noise is non-stationary during the pulse. We present an implementation of a method to analyze X-ray data from TESs, which is based upon principal component analysis (PCA). Our method separates the X-ray signal pulse into orthogonal components that have the largest variance. We typically recover pulse height, arrival time, differences in pulse shape, and the variation of pulse height with detector temperature. These components can then be combined to form a representation of pulse energy. An added value of this method is that by reporting information on more descriptive parameters (as opposed to a single number representing energy), we generate a much more complete picture of the pulse received. Here we report on progress in developing this technique for future implementation on X-ray telescopes. We used an 55Fe source to characterize Mo/Au TESs. On the same dataset, the PCA method recovers a spectral resolution that is better by a factor of two than achievable with digital optimal filters.
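
    The core of the PCA approach can be sketched as follows. The exponential pulse template, the noise level and the simple linear map from the first score to energy are stand-ins for illustration; the actual analysis calibrates against known line energies and combines several components.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # pulses[i, t]: digitized, baseline-subtracted TES record i (synthetic stand-in data)
    rng = np.random.default_rng(2)
    t = np.arange(1024)
    template = np.exp(-t / 200.0) - np.exp(-t / 20.0)
    energies = 5.9 + 0.01 * rng.normal(size=500)        # keV, around an Fe-55-like line
    pulses = energies[:, None] * template + 0.02 * rng.normal(size=(500, t.size))

    # Decompose pulses into orthogonal components ordered by explained variance
    pca = PCA(n_components=4).fit(pulses)
    scores = pca.transform(pulses)   # pulse height, arrival-time/shape variations, ...

    # The leading component tracks pulse height; a smooth map from scores to energy
    # (here a simple linear fit in the first score) gives the energy estimate.
    coeff = np.polyfit(scores[:, 0], energies, 1)
    energy_hat = np.polyval(coeff, scores[:, 0])
    ```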

  1. Assessing the effect of oil price on world food prices: Application of principal component analysis

    International Nuclear Information System (INIS)

    Esmaeili, Abdoulkarim; Shokoohi, Zainab

    2011-01-01

    The objective of this paper is to investigate the co-movement of food prices and the macroeconomic index, especially the oil price, by principal component analysis to further understand the influence of the macroeconomic index on food prices. We examined the food prices of seven major products: eggs, meat, milk, oilseeds, rice, sugar and wheat. The macroeconomic variables studied were crude oil prices, consumer price indexes, food production indexes and GDP around the world between 1961 and 2005. We use the Scree test and the proportion of variance method for determining the optimal number of common factors. The correlation coefficient between the extracted principal component and the macroeconomic index varies between 0.87 for the world GDP and 0.36 for the consumer price index. We find that the food production index has the greatest influence on the macroeconomic index and that the oil price index has an influence on the food production index. Consequently, crude oil prices have an indirect effect on food prices. - Research Highlights: →We investigate the co-movement of food prices and the macroeconomic index. →The crude oil price has an indirect effect on the world GDP via its impact on the food production index. →The food production index is the source of causation for the CPI, and GDP is affected by the CPI. →The results confirm an indirect effect of the oil price on the food price principal component.
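
    The two component-selection rules mentioned above (the scree test and the proportion-of-variance method) are easy to state in code. The sketch below uses a random placeholder matrix rather than the paper's price series, and the 80% cutoff is an illustrative choice.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # X: yearly observations of food prices and macro indicators (columns), hypothetical data
    rng = np.random.default_rng(3)
    X = rng.normal(size=(45, 8))

    pca = PCA().fit(StandardScaler().fit_transform(X))
    explained = pca.explained_variance_ratio_

    # Proportion-of-variance rule: keep components up to a cumulative threshold.
    # The scree test instead looks for the "elbow" in the sorted eigenvalue plot.
    n_keep = int(np.searchsorted(np.cumsum(explained), 0.80)) + 1
    print("components kept:", n_keep,
          "cumulative variance:", np.cumsum(explained)[:n_keep])
    ```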

  2. Variability search in M 31 using principal component analysis and the Hubble Source Catalogue

    Science.gov (United States)

    Moretti, M. I.; Hatzidimitriou, D.; Karampelas, A.; Sokolovsky, K. V.; Bonanos, A. Z.; Gavras, P.; Yang, M.

    2018-06-01

    Principal component analysis (PCA) is being extensively used in Astronomy but not yet exhaustively exploited for variability search. The aim of this work is to investigate the effectiveness of using the PCA as a method to search for variable stars in large photometric data sets. We apply PCA to variability indices computed for light curves of 18 152 stars in three fields in M 31 extracted from the Hubble Source Catalogue. The projection of the data into the principal components is used as a stellar variability detection and classification tool, capable of distinguishing between RR Lyrae stars, long-period variables (LPVs) and non-variables. This projection recovered more than 90 per cent of the known variables and revealed 38 previously unknown variable stars (about 30 per cent more), all LPVs except for one object of uncertain variability type. We conclude that this methodology can indeed successfully identify candidate variable stars.

  3. k-t PCA: temporally constrained k-t BLAST reconstruction using principal component analysis

    DEFF Research Database (Denmark)

    Pedersen, Henrik; Kozerke, Sebastian; Ringgaard, Steffen

    2009-01-01

    in applications exhibiting a broad range of temporal frequencies such as free-breathing myocardial perfusion imaging. We show that temporal basis functions calculated by subjecting the training data to principal component analysis (PCA) can be used to constrain the reconstruction such that the temporal resolution...... is improved. The presented method is called k-t PCA....

  4. Effects of physiotherapy treatment on knee osteoarthritis gait data using principal component analysis.

    Science.gov (United States)

    Gaudreault, Nathaly; Mezghani, Neila; Turcot, Katia; Hagemeister, Nicola; Boivin, Karine; de Guise, Jacques A

    2011-03-01

    Interpreting gait data is challenging due to intersubject variability observed in the gait pattern of both normal and pathological populations. The objective of this study was to investigate the impact of using principal component analysis for grouping knee osteoarthritis (OA) patients' gait data in more homogeneous groups when studying the effect of a physiotherapy treatment. Three-dimensional (3D) knee kinematic and kinetic data were recorded during the gait of 29 participants diagnosed with knee OA before and after they received 12 weeks of physiotherapy treatment. Principal component analysis was applied to extract groups of knee flexion/extension, adduction/abduction and internal/external rotation angle and moment data. The treatment's effect on parameters of interest was assessed using paired t-tests performed before and after grouping the knee kinematic data. Increased quadriceps and hamstring strength was observed following treatment. The effect of physiotherapy on gait mechanics of knee osteoarthritis patients may be masked or underestimated if kinematic data are not separated into more homogeneous groups when performing pre- and post-treatment comparisons. Copyright © 2010 Elsevier Ltd. All rights reserved.

  5. Lost-in-Space Star Identification Using Planar Triangle Principal Component Analysis Algorithm

    Directory of Open Access Journals (Sweden)

    Fuqiang Zhou

    2015-01-01

    Full Text Available It is a challenging task for a star sensor to implement star identification and determine the attitude of a spacecraft in the lost-in-space mode. Several algorithms based on the triangle method have been proposed for star identification in this mode. However, these methods suffer from long computation times and large guide star catalog memory requirements, and their star identification performance requires improvement. To address these problems, a star identification algorithm using planar triangle principal component analysis is presented here. A star pattern is generated based on the planar triangle created by stars within the field of view of a star sensor and the projection of the triangle. Since a projection can determine an index for a unique triangle in the catalog, the adoption of the k-vector range search technique makes this algorithm very fast. In addition, a sharing star validation method is constructed to verify the identification results. Simulation results show that the proposed algorithm is more robust than the planar triangle and P-vector algorithms under the same conditions.

  6. Evaluation of skin melanoma in spectral range 450-950 nm using principal component analysis

    Science.gov (United States)

    Jakovels, D.; Lihacova, I.; Kuzmina, I.; Spigulis, J.

    2013-06-01

    Diagnostic potential of principal component analysis (PCA) of multi-spectral imaging data in the wavelength range 450- 950 nm for distant skin melanoma recognition is discussed. Processing of the measured clinical data by means of PCA resulted in clear separation between malignant melanomas and pigmented nevi.

  7. Registration of dynamic dopamine D2 receptor images using principal component analysis

    International Nuclear Information System (INIS)

    Acton, P.D.; Ell, P.J.; Pilowsky, L.S.; Brammer, M.J.; Suckling, J.

    1997-01-01

    This paper describes a novel technique for registering a dynamic sequence of single-photon emission tomography (SPET) dopamine D2 receptor images, using principal component analysis (PCA). Conventional methods for registering images, such as count difference and correlation coefficient algorithms, fail to take into account the dynamic nature of the data, resulting in large systematic errors when registering time-varying images. However, by using principal component analysis to extract the temporal structure of the image sequence, misregistration can be quantified by examining the distribution of eigenvalues. The registration procedures were tested using a computer-generated dynamic phantom derived from a high-resolution magnetic resonance image of a realistic brain phantom. Each method was also applied to clinical SPET images of dopamine D2 receptors, using the ligands iodine-123 iodobenzamide and iodine-123 epidepride, to investigate the influence of misregistration on kinetic modelling parameters and the binding potential. The PCA technique gave highly significant improvements for the 123I-epidepride scans. The PCA method produced data of much greater quality for subsequent kinetic modelling, with an improvement of nearly 50% in the χ² of the fit to the compartmental model, and provided superior quality registration of particularly difficult dynamic sequences. (orig.)

  8. Portable XRF and principal component analysis for bill characterization in forensic science

    International Nuclear Information System (INIS)

    Appoloni, C.R.; Melquiades, F.L.

    2014-01-01

    Several modern techniques have been applied to prevent counterfeiting of money bills. The objective of this study was to demonstrate the potential of the Portable X-ray Fluorescence (PXRF) technique and the multivariate analysis method of Principal Component Analysis (PCA) for classification of bills in order to use it in forensic science. Bills of Dollar, Euro and Real (Brazilian currency) were measured directly at different colored regions, without any previous preparation. Spectra interpretation allowed the identification of Ca, Ti, Fe, Cu, Sr, Y, Zr and Pb. PCA separated the bills into three groups and subgroups among the Brazilian currency. In conclusion, the samples were classified according to their origin, identifying the elements responsible for differentiation and the basic pigment composition. PXRF allied to multivariate discriminant methods is a promising technique for rapid and non-destructive identification of false bills in forensic science. - Highlights: • The paper is about a direct method for bill discrimination by EDXRF and principal component analysis. • The bills are analyzed directly, without sample preparation and non-destructively. • The results demonstrate that the methodology is feasible and could be applied in forensic science for identification of origin and false banknotes. • The novelty is that portable EDXRF is very fast and efficient for bill characterization

  9. Tribological Performance Optimization of Electroless Ni-P-W Coating Using Weighted Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    S. Roy

    2013-12-01

    Full Text Available The present investigation is an experimental approach to deposit an electroless Ni-P-W coating on a mild steel substrate and find the optimum combination of various tribological performances, on the basis of minimum friction and wear, using weighted principal component analysis (WPCA). In this study three main tribological parameters are chosen, viz. load (A), speed (B) and time (C). The responses are the coefficient of friction and the wear depth. Here the Weighted Principal Component Analysis (WPCA) method is adopted to convert the multiple responses into a single performance index called the multiple performance index (MPI), and a Taguchi L27 orthogonal array is used to design the experiment and to find the optimum combination of tribological parameters for minimum coefficient of friction and wear depth. ANOVA is performed to find the significance of each tribological process parameter and their interactions. EDX analysis, SEM and XRD are performed to study the composition and structural aspects.
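
    A schematic of the WPCA aggregation described above is given below: the two responses are normalized (smaller-is-better), their principal components are computed, and the component scores are combined with weights equal to the explained-variance proportions to give a single MPI. The response values and the specific normalization are illustrative assumptions, not the paper's measurements, and this is only one common weighting scheme.

    ```python
    import numpy as np

    # responses[i] = [coefficient of friction, wear depth] for run i of an L27 design (illustrative)
    rng = np.random.default_rng(4)
    responses = np.abs(rng.normal(loc=[0.4, 12.0], scale=[0.05, 2.0], size=(27, 2)))

    # Smaller-is-better normalization: the best (minimum) value of each response maps to 1
    Z = responses.min(axis=0) / responses

    # Principal components of the normalized responses
    C = np.corrcoef(Z, rowvar=False)
    eigval, eigvec = np.linalg.eigh(C)
    order = np.argsort(eigval)[::-1]
    weights = eigval[order] / eigval.sum()          # explained-variance proportions

    # Weighted sum of principal component scores -> multiple performance index (MPI)
    scores = (Z - Z.mean(axis=0)) @ eigvec[:, order]
    mpi = scores @ weights
    best_run = int(np.argmax(mpi))                  # run with the best overall performance
    ```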

  10. Application of principal component and factor analyses in electron spectroscopy

    International Nuclear Information System (INIS)

    Siuda, R.; Balcerowska, G.

    1998-01-01

    Fundamentals of two methods, taken from multivariate analysis and known as principal component analysis (PCA) and factor analysis (FA), are presented. Both methods are well known in chemometrics. Since 1979, when application of the methods to electron spectroscopy was reported for the first time, they have become more and more popular in different branches of electron spectroscopy. The paper presents examples of standard applications of the methods in Auger electron spectroscopy (AES), X-ray photoelectron spectroscopy (XPS), and electron energy loss spectroscopy (EELS). The advantages to be gained from applying the methods, their potential as well as their limitations, are pointed out. (author)

  11. Radiation effects on eye components

    International Nuclear Information System (INIS)

    Durchschlag, H.; Fochler, C.; Abraham, K.; Kulawik, B.

    1998-01-01

    The radiation damage (X-ray, UV light) of the most important components of the vertebrate eye (crystallins and other proteins, hyaluronic acid, vitreous, aqueous humour, ascorbic acid) has been investigated by various methods of physical chemistry. UV absorption and fluorescence spectroscopy as well as circular dichroism unveiled changes of the chromophores/fluorophores of the constituent biopolymers and low-molecular components, together with alterations of helix content and the occurrence of aggregation. Size-exclusion chromatography, analytical ultracentrifugation, densimetry, viscometry and light scattering experiments monitored changes of the global structure of proteins and polysaccharides involved. Electrophoreses allowed conclusions on fragmentation, unfolding and crosslinking. Analytical methods provided information regarding the integrity of groups of special concern (SH, SS) and revealed the existence of stable noxious species (H₂O₂). By means of various measures and additives, manifold modifications of the impact of both ionizing and nonionizing radiation may be achieved. Caused by differences in the primary reactions, eye polymers are protected efficaciously by typical OH radical scavengers against X-irradiation, whereas compounds which exhibit absorption behavior in the UV range turn out to act as potent protectives ('chemical filters') against UV light. A few substances, such as ascorbate, are able to provide protection against both sorts of radiation and are even able to exhibit a slight chemical repair of already damaged particles. The results obtained are of importance for understanding pathological alterations of the eye (loss of transparency, cataractogenesis) and for developing new strategies for protection and repair of eye components. (author)

  12. Measuring farm sustainability using data envelope analysis with principal components: the case of Wisconsin cranberry.

    Science.gov (United States)

    Dong, Fengxia; Mitchell, Paul D; Colquhoun, Jed

    2015-01-01

    Measuring farm sustainability performance is a crucial component for improving agricultural sustainability. While extensive assessments and indicators exist that reflect the different facets of agricultural sustainability, because of the relatively large number of measures and interactions among them, a composite indicator that integrates and aggregates over all variables is particularly useful. This paper describes and empirically evaluates a method for constructing a composite sustainability indicator that individually scores and ranks farm sustainability performance. The method first uses non-negative polychoric principal component analysis to reduce the number of variables, to remove correlation among variables and to transform categorical variables to continuous variables. Next the method applies common-weight data envelope analysis to these principal components to individually score each farm. The method solves weights endogenously and allows identifying important practices in sustainability evaluation. An empirical application to Wisconsin cranberry farms finds heterogeneity in sustainability practice adoption, implying that some farms could adopt relevant practices to improve the overall sustainability performance of the industry. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Evaluation of in-core measurements by means of principal components method

    International Nuclear Information System (INIS)

    Makai, M.; Temesvari, E.

    1996-01-01

    Surveillance of a nuclear reactor core includes determination of the assemblies' three-dimensional (3D) power distribution. The power of a non-measured assembly is calculated for every assembly from the measured values of other assemblies with the help of the principal components method (PCM), which is also presented. The measured values are interpolated for different geometrical coverings of the WWER-440 core. Different procedures have been elaborated and investigated, and the most successful methods are discussed. Each method offers self-consistent means to determine the numerical errors of the interpolated values. (author). 13 refs, 7 figs, 2 tabs

  14. Combining multiple regression and principal component analysis for accurate predictions for column ozone in Peninsular Malaysia

    Science.gov (United States)

    Rajab, Jasim M.; MatJafri, M. Z.; Lim, H. S.

    2013-06-01

    This study encompasses columnar ozone modelling in Peninsular Malaysia. Data for eight atmospheric parameters [air surface temperature (AST), carbon monoxide (CO), methane (CH4), water vapour (H2Ovapour), skin surface temperature (SSKT), atmosphere temperature (AT), relative humidity (RH), and mean surface pressure (MSP)], retrieved from NASA's Atmospheric Infrared Sounder (AIRS) for the entire period (2003-2008), were employed to develop models to predict the value of columnar ozone (O3) in the study area. The combined method, based on multiple regression combined with principal component analysis (PCA) modelling, was used to predict columnar ozone. This combined approach was utilized to improve the prediction accuracy of columnar ozone. Separate analyses were carried out for the north east monsoon (NEM) and south west monsoon (SWM) seasons. The O3 was negatively correlated with CH4, H2Ovapour, RH, and MSP, whereas it was positively correlated with CO, AST, SSKT, and AT during both the NEM and SWM season periods. Multiple regression analysis was used to fit the columnar ozone data using the atmospheric parameter variables as predictors. A variable selection method based on high loadings of varimax rotated principal components was used to acquire subsets of the predictor variables to be included in the linear regression model of the atmospheric parameter variables. It was found that an increase in the columnar O3 value is associated with an increase in the values of AST, SSKT, AT, and CO and with a drop in the levels of CH4, H2Ovapour, RH, and MSP. The result of fitting the best models for the columnar O3 value using eight of the independent variables gave about the same values of R (≈0.93) and R² (≈0.86) for both the NEM and SWM seasons. The common variables that appeared in both regression equations were SSKT, CH4 and RH, and the principal precursor of the columnar O3 value in both the NEM and SWM seasons was SSKT.
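
    The combination of PCA with multiple regression described above is essentially principal component regression. The sketch below shows the generic pipeline with synthetic stand-ins for the AIRS predictors and the ozone column; the paper's varimax rotation and loading-based variable selection are not reproduced here.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # X: the eight atmospheric predictors per day, y: columnar ozone (synthetic stand-ins)
    rng = np.random.default_rng(5)
    X = rng.normal(size=(400, 8))
    y = X @ rng.normal(size=8) + 0.5 * rng.normal(size=400)

    # Principal component regression: standardize, project onto a few PCs, then regress
    pcr = make_pipeline(StandardScaler(), PCA(n_components=4), LinearRegression())
    pcr.fit(X, y)
    print("R^2 on the training data:", pcr.score(X, y))
    ```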

  15. Edge Principal Components and Squash Clustering: Using the Special Structure of Phylogenetic Placement Data for Sample Comparison

    Science.gov (United States)

    Matsen IV, Frederick A.; Evans, Steven N.

    2013-01-01

    Principal components analysis (PCA) and hierarchical clustering are two of the most heavily used techniques for analyzing the differences between nucleic acid sequence samples taken from a given environment. They have led to many insights regarding the structure of microbial communities. We have developed two new complementary methods that leverage how this microbial community data sits on a phylogenetic tree. Edge principal components analysis enables the detection of important differences between samples that contain closely related taxa. Each principal component axis is a collection of signed weights on the edges of the phylogenetic tree, and these weights are easily visualized by a suitable thickening and coloring of the edges. Squash clustering outputs a (rooted) clustering tree in which each internal node corresponds to an appropriate “average” of the original samples at the leaves below the node. Moreover, the length of an edge is a suitably defined distance between the averaged samples associated with the two incident nodes, rather than the less interpretable average of distances produced by UPGMA, the most widely used hierarchical clustering method in this context. We present these methods and illustrate their use with data from the human microbiome. PMID:23505415

  16. Online Monitoring of Water-Quality Anomaly in Water Distribution Systems Based on Probabilistic Principal Component Analysis by UV-Vis Absorption Spectroscopy

    Directory of Open Access Journals (Sweden)

    Dibo Hou

    2014-01-01

    Full Text Available This study proposes a probabilistic principal component analysis (PPCA)-based method for online monitoring of water-quality contamination events by UV-Vis (ultraviolet-visible) spectroscopy. The purpose of this method is to achieve fast and sound protection against accidental and intentional contaminant injection into the water distribution system. The method is achieved first by properly imposing a sliding window onto simultaneously updated online monitoring data collected by the automated spectrometer. The PPCA algorithm is then executed to simplify the large amount of spectrum data while maintaining the necessary spectral information to the largest extent. Finally, a monitoring chart extensively employed in the fault diagnosis field is used here to search for potential anomaly events and to determine whether the current water quality is normal or abnormal. A small-scale water-pipe distribution network is tested to detect water contamination events. The tests demonstrate that the PPCA-based online monitoring model can achieve satisfactory results in terms of the ROC curve, i.e., a low false alarm rate and a high probability of detecting water contamination events.
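
    A minimal version of the sliding-window PPCA monitoring idea is sketched below, assuming a matrix of absorbance spectra ordered in time. It leans on scikit-learn's PCA, whose score_samples method evaluates the log-likelihood of each sample under the fitted probabilistic PCA model; the window length, number of components and quantile threshold are illustrative, and a real deployment would use a proper monitoring chart as in the paper.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def ppca_monitor(spectra, window=200, n_components=5, quantile=0.01):
        """Flag spectra whose probabilistic-PCA log-likelihood falls below a
        baseline quantile estimated from a sliding window of recent data.

        spectra : (n_times, n_wavelengths) array of online UV-Vis measurements.
        """
        alarms = []
        for t in range(window, len(spectra)):
            # Fit the probabilistic PCA model on the most recent window of data
            ppca = PCA(n_components=n_components).fit(spectra[t - window:t])
            ll_window = ppca.score_samples(spectra[t - window:t])
            threshold = np.quantile(ll_window, quantile)
            # Alarm if the new spectrum is unlikely under the fitted model
            if ppca.score_samples(spectra[t:t + 1])[0] < threshold:
                alarms.append(t)
        return alarms
    ```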

  17. A Filtering of Incomplete GNSS Position Time Series with Probabilistic Principal Component Analysis

    Science.gov (United States)

    Gruszczynski, Maciej; Klos, Anna; Bogusz, Janusz

    2018-04-01

    For the first time, we introduced probabilistic principal component analysis (pPCA) for the spatio-temporal filtering of Global Navigation Satellite System (GNSS) position time series to estimate and remove Common Mode Error (CME) without interpolation of missing values. We used data from the International GNSS Service (IGS) stations which contributed to the latest International Terrestrial Reference Frame (ITRF2014). The efficiency of the proposed algorithm was tested on simulated incomplete time series, and then CME was estimated for a set of 25 stations located in Central Europe. The newly applied pPCA was compared with previously used algorithms, which showed that this method is capable of resolving the problem of proper spatio-temporal filtering of GNSS time series characterized by different observation time spans. We showed that filtering can be carried out with the pPCA method even when two time series in the dataset have fewer than 100 common epochs of observations. The 1st Principal Component (PC) explained more than 36% of the total variance represented by the time series residuals (series with the deterministic model removed), which, compared to the variances of the other PCs (less than 8%), means that common signals are significant in GNSS residuals. A clear improvement in the spectral indices of the power-law noise was noticed for the Up component, reflected by an average shift towards white noise from -0.98 to -0.67 (30%). We observed a significant average reduction in the uncertainty of station velocities estimated from the filtered residuals, by 35, 28 and 69% for the North, East, and Up components, respectively. The CME series were also analyzed in the context of environmental mass loading influences on the filtering results. Subtraction of the environmental loading models from the GNSS residuals leads to a reduction of the estimated CME variance by 20 and 65% for the horizontal and vertical components, respectively.

  18. A principal components approach to parent-to-newborn body composition associations in South India

    Directory of Open Access Journals (Sweden)

    Hill Jacqueline C

    2009-02-01

    Full Text Available Abstract Background Size at birth is influenced by environmental factors, like maternal nutrition and parity, and by genes. Birth weight is a composite measure, encompassing bone, fat and lean mass. These may have different determinants. The main purpose of this paper was to use anthropometry and principal components analysis (PCA) to describe maternal and newborn body composition, and associations between them, in an Indian population. We also compared maternal and paternal measurements (body mass index (BMI) and height) as predictors of newborn body composition. Methods Weight, height, head and mid-arm circumferences, skinfold thicknesses and external pelvic diameters were measured at 30 ± 2 weeks gestation in 571 pregnant women attending the antenatal clinic of the Holdsworth Memorial Hospital, Mysore, India. Paternal height and weight were also measured. At birth, detailed neonatal anthropometry was performed. Unrotated and varimax rotated PCA was applied to the maternal and neonatal measurements. Results Rotated PCA reduced maternal measurements to 4 independent components (fat, pelvis, height and muscle) and neonatal measurements to 3 components (trunk+head, fat, and leg length). An SD increase in maternal fat was associated with a 0.16 SD increase (β) in neonatal fat. Conclusion Principal components analysis is a useful method to describe neonatal body composition and its determinants. Newborn adiposity is related to maternal nutritional status and parity, while newborn length is genetically determined. Further research is needed to understand mechanisms linking maternal pelvic size to fetal growth and the determinants and implications of the components (trunk vs. leg length) of fetal skeletal growth.

  19. Retrieving the correlation matrix from a truncated PCA solution : The inverse principal component problem

    NARCIS (Netherlands)

    ten Berge, Jos M.F.; Kiers, Henk A.L.

    When r Principal Components are available for k variables, the correlation matrix is approximated in the least squares sense by the loading matrix times its transpose. The approximation is generally not perfect unless r = k. In the present paper it is shown that, when r is at or above the Ledermann

  20. Single shot fringe pattern phase demodulation using Hilbert-Huang transform aided by the principal component analysis.

    Science.gov (United States)

    Trusiak, Maciej; Służewski, Łukasz; Patorski, Krzysztof

    2016-02-22

    A hybrid single-shot algorithm for accurate phase demodulation of complex fringe patterns is proposed. It employs empirical mode decomposition based adaptive fringe pattern enhancement (i.e., denoising, background removal and amplitude normalization) and subsequent boosted phase demodulation using the 2D Hilbert spiral transform, aided by the Principal Component Analysis method for novel, correct and accurate local fringe direction map calculation. Robustness to significant fringe pattern noise, uneven background and amplitude modulation, as well as to local fringe period and shape variations, is corroborated by numerical simulations and experiments. The proposed automatic, adaptive, fast and comprehensive fringe analysis solution compares favorably with other previously reported techniques.

  1. Simulations of cloudy hyperspectral infrared radiances using the HT-FRTC, a fast PC-based multipurpose radiative transfer code

    Science.gov (United States)

    Havemann, S.; Aumann, H. H.; Desouza-Machado, S. G.

    2017-12-01

    The HT-FRTC uses principal components which cover the spectrum at a very high spectral resolution allowing very fast line-by-line-like, hyperspectral and broadband simulations for satellite-based, airborne and ground-based sensors. Using data from IASI and from the Airborne Research Interferometer Evaluation System (ARIES) on board the FAAM BAE 146 aircraft, variational retrievals in principal component space with HT-FRTC as forward model have demonstrated that valuable information on temperature and humidity profiles and on the cirrus cloud properties can be obtained simultaneously. The NASA/JPL/UMBC cloudy RTM inter-comparison project has been working on a global dataset consisting of 7377 AIRS spectra. Initial simulations with HT-FRTC for this dataset have been promising. A next step taken here is to investigate how sensitive the results are with respect to different assumptions in the cloud modelling. One aspect of this is to study how assumptions about the microphysical and related optical properties of liquid/ice clouds impact the statistics of the agreement between model and observations. The other aspect is about the cloud overlap scheme. Different schemes have been tested (maximum, random, maximum random). As the computational cost increases linearly with the number of cloud columns, it will be investigated if there is an optimal number of columns beyond which there is little additional benefit to be gained. During daytime the high wave number channels of AIRS are affected by solar radiation. With full scattering calculations using a monochromatic version of the Edwards-Slingo radiation code the HT-FRTC can model solar radiation reasonably well, but full scattering calculations are relatively expensive. Pure Chou scaling on the other hand can not properly describe scattering of solar radiation by clouds and requires additional refinements.
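
    The sketch below illustrates the generic principal-component trick that fast codes of this kind exploit: compress training spectra onto a few PCs, learn a cheap mapping from the atmospheric state to the PC scores, and reconstruct full spectra from the predicted scores. It is not the HT-FRTC algorithm; the synthetic training set, the ridge regression step and the component count are assumptions for illustration only.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import Ridge

    # Training set: atmospheric state vectors X and line-by-line spectra R (synthetic stand-ins)
    rng = np.random.default_rng(6)
    n_train, n_state, n_channels = 2000, 20, 5000
    X = rng.normal(size=(n_train, n_state))
    R = X @ rng.normal(size=(n_state, n_channels)) + 0.01 * rng.normal(size=(n_train, n_channels))

    # 1) Compress the spectra: a few principal components capture the spectral correlation
    pca = PCA(n_components=30).fit(R)
    scores = pca.transform(R)

    # 2) Learn a cheap map from the atmospheric state to the PC scores
    emulator = Ridge(alpha=1e-3).fit(X, scores)

    # 3) Fast forward simulation: predict scores for a new state, reconstruct the full spectrum
    x_new = rng.normal(size=(1, n_state))
    spectrum = pca.inverse_transform(emulator.predict(x_new))
    ```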

  2. A Unified Approach to Functional Principal Component Analysis and Functional Multiple-Set Canonical Correlation.

    Science.gov (United States)

    Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S

    2017-06-01

    Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.

  3. Post annealing performance evaluation of printable interdigital capacitive sensors by principal component analysis

    KAUST Repository

    Zia, Asif Iqbal

    2015-06-01

    The surface roughness of thin-film gold electrodes induces instability in impedance spectroscopy measurements of capacitive interdigital printable sensors. Post-fabrication thermodynamic annealing was carried out at temperatures ranging from 30 °C to 210 °C in a vacuum oven and the variation in surface morphology of thin-film gold electrodes was observed by scanning electron microscopy. Impedance spectra obtained at different temperatures were translated into equivalent circuit models by applying complex nonlinear least square curve-fitting algorithm. Principal component analysis was applied to deduce the classification of the parameters affected due to the annealing process and to evaluate the performance stability using mathematical model. Physics of the thermodynamic annealing was discussed based on the surface activation energies. The post anneal testing of the sensors validated the achieved stability in impedance measurement. © 2001-2012 IEEE.

  4. Post annealing performance evaluation of printable interdigital capacitive sensors by principal component analysis

    KAUST Repository

    Zia, Asif Iqbal; Mukhopadhyay, Subhas Chandra; Yu, Paklam; Al-Bahadly, Ibrahim H.; Gooneratne, Chinthaka Pasan; Kosel, Jürgen

    2015-01-01

    The surface roughness of thin-film gold electrodes induces instability in impedance spectroscopy measurements of capacitive interdigital printable sensors. Post-fabrication thermodynamic annealing was carried out at temperatures ranging from 30 °C to 210 °C in a vacuum oven and the variation in surface morphology of thin-film gold electrodes was observed by scanning electron microscopy. Impedance spectra obtained at different temperatures were translated into equivalent circuit models by applying complex nonlinear least square curve-fitting algorithm. Principal component analysis was applied to deduce the classification of the parameters affected due to the annealing process and to evaluate the performance stability using mathematical model. Physics of the thermodynamic annealing was discussed based on the surface activation energies. The post anneal testing of the sensors validated the achieved stability in impedance measurement. © 2001-2012 IEEE.

  5. Analysis of Moisture Content in Beetroot using Fourier Transform Infrared Spectroscopy and by Principal Component Analysis.

    Science.gov (United States)

    Nesakumar, Noel; Baskar, Chanthini; Kesavan, Srinivasan; Rayappan, John Bosco Balaguru; Alwarappan, Subbiah

    2018-05-22

    The moisture content of beetroot varies during long-term cold storage. In this work, we propose a strategy to identify the moisture content and age of beetroot using principal component analysis coupled with Fourier transform infrared spectroscopy (FTIR). Frequent FTIR measurements were recorded directly from the beetroot sample surface over a period of 34 days for analysing its moisture content, employing attenuated total reflectance in the spectral ranges of 2614-4000 and 1465-1853 cm⁻¹ with a spectral resolution of 8 cm⁻¹. In order to estimate the transmittance peak height (Tp) and the area under the transmittance curve [Formula: see text] over the spectral ranges of 2614-4000 and 1465-1853 cm⁻¹, a Gaussian curve fitting algorithm was applied to the FTIR data. Principal component and nonlinear regression analyses were utilized for FTIR data analysis. The score plot over the ranges of 2614-4000 and 1465-1853 cm⁻¹ allowed beetroot quality discrimination. Beetroot quality predictive models were developed by employing a biphasic dose response function. Validation experiment results confirmed that the accuracy of the beetroot quality predictive model reached 97.5%. This research work proves that FTIR spectroscopy in combination with principal component analysis and beetroot quality predictive models could serve as an effective tool for discriminating moisture content in fresh, half spoiled and completely spoiled stages of beetroot samples and for providing status alerts.
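
    The feature-extraction step described above (peak height and band area from a Gaussian fit) can be sketched with SciPy as follows; the synthetic transmittance band and the initial guesses are placeholders, not the beetroot spectra.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, a, mu, sigma, c):
        # Gaussian band on a constant baseline c
        return a * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) + c

    # Wavenumber axis and a synthetic transmittance band in the 2614-4000 cm^-1 window
    x = np.linspace(2614, 4000, 400)
    rng = np.random.default_rng(9)
    y = gaussian(x, -0.35, 3300.0, 180.0, 0.95) + 0.01 * rng.normal(size=x.size)

    popt, _ = curve_fit(gaussian, x, y, p0=[-0.3, 3300.0, 150.0, 1.0])
    a, mu, sigma, c = popt
    peak_height = abs(a)                              # transmittance peak height T_p
    band_area = abs(a) * sigma * np.sqrt(2 * np.pi)   # area under the fitted band
    ```

    Features of this kind, extracted per spectrum, would then feed the PCA and the regression models described in the abstract.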

  6. Classification of soil samples according to their geographic origin using gamma-ray spectrometry and principal component analysis

    International Nuclear Information System (INIS)

    Dragovic, Snezana; Onjia, Antonije

    2006-01-01

    A principal component analysis (PCA) was used for classification of soil samples from different locations in Serbia and Montenegro. Based on the activities of radionuclides (²²⁶Ra, ²³⁸U, ²³⁵U, ⁴⁰K, ¹³⁴Cs, ¹³⁷Cs, ²³²Th and ⁷Be) detected by gamma-ray spectrometry, the classification of soils according to their geographical origin was performed. Application of PCA to our experimental data resulted in a satisfactory classification rate (86.0% of samples correctly classified). The obtained results indicate that gamma-ray spectrometry in conjunction with PCA is a viable tool for soil classification
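
    A minimal sketch of the kind of PCA-based grouping this record describes. The radionuclide activities and the two geographic regions below are fabricated placeholders, and the nearest-centroid rule is only one simple way to turn PC scores into a classification rate:

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)
        # rows = soil samples, columns = activities of 226Ra, 238U, 235U, 40K, 134Cs, 137Cs, 232Th, 7Be
        activities = np.vstack([
            rng.normal([30, 40, 2, 500, 1, 20, 35, 3], 5, size=(10, 8)),   # hypothetical region A
            rng.normal([55, 70, 4, 650, 2, 60, 50, 4], 5, size=(10, 8)),   # hypothetical region B
        ])
        region = np.array(["A"] * 10 + ["B"] * 10)

        scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(activities))

        # Nearest-centroid rule on the PC scores gives a simple classification rate
        centroids = {g: scores[region == g].mean(axis=0) for g in ("A", "B")}
        predicted = np.array([min(centroids, key=lambda g: np.linalg.norm(s - centroids[g]))
                              for s in scores])
        print("classification rate:", (predicted == region).mean())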

  7. Critical Factors Explaining the Leadership Performance of High-Performing Principals

    Science.gov (United States)

    Hutton, Disraeli M.

    2018-01-01

    The study explored critical factors that explain leadership performance of high-performing principals and examined the relationship between these factors based on the ratings of school constituents in the public school system. The principal component analysis with the use of Varimax Rotation revealed that four components explain 51.1% of the…

  8. Use of Principal Components Analysis and Kriging to Predict Groundwater-Sourced Rural Drinking Water Quality in Saskatchewan.

    Science.gov (United States)

    McLeod, Lianne; Bharadwaj, Lalita; Epp, Tasha; Waldner, Cheryl L

    2017-09-15

    Groundwater drinking water supply surveillance data were accessed to summarize water quality delivered as public and private water supplies in southern Saskatchewan as part of an exposure assessment for epidemiologic analyses of associations between water quality and type 2 diabetes or cardiovascular disease. Arsenic in drinking water has been linked to a variety of chronic diseases and previous studies have identified multiple wells with arsenic above the drinking water standard of 0.01 mg/L; therefore, arsenic concentrations were of specific interest. Principal components analysis was applied to obtain principal component (PC) scores to summarize mixtures of correlated parameters identified as health standards and those identified as aesthetic objectives in the Saskatchewan Drinking Water Quality Standards and Objective. Ordinary, universal, and empirical Bayesian kriging were used to interpolate arsenic concentrations and PC scores in southern Saskatchewan, and the results were compared. Empirical Bayesian kriging performed best across all analyses, based on having the greatest number of variables for which the root mean square error was lowest. While all of the kriging methods appeared to underestimate high values of arsenic and PC scores, empirical Bayesian kriging was chosen to summarize large scale geographic trends in groundwater-sourced drinking water quality and assess exposure to mixtures of trace metals and ions.

  9. The Principal's Role in Site-Based Management.

    Science.gov (United States)

    Drury, William R.

    1993-01-01

    In existing school-based management models, the principal's role ranges from chairing the local council to being a coach/facilitator. With teachers and parents assuming greater control over governance, curriculum, and budgeting, paranoid principals may establish more formal bargaining relationships with district boards. Caution is advised, because…

  10. Pipeline monitoring using acoustic principal component analysis recognition with the Mel scale

    International Nuclear Information System (INIS)

    Wan, Chunfeng; Mita, Akira

    2009-01-01

    In modern cities, many important pipelines are laid underground. In order to prevent these lifeline infrastructures from accidental damage, monitoring systems are becoming indispensable. Recent reports show third-party activities to be a major cause of pipeline damage. Potential damage threats to a pipeline can be identified by detecting dangerous construction equipment operating nearby from the surrounding noise. Sound recognition technologies are used to identify such equipment by its sound, which can easily be captured by small sensors deployed along the pipelines. Pattern classification methods based on principal component analysis (PCA) were used to recognize the sounds of road cutters. In this paper, a Mel residual, i.e. the PCA residual in the Mel scale, is proposed as the recognition feature. Determining whether a captured sound belongs to a road cutter only requires checking how large its Mel residual is. Experiments were conducted and the results showed that the proposed Mel-residual-based PCA recognition worked very well. The proposed Mel PCA residual recognition method will be very useful for pipeline monitoring systems to prevent accidental breakage and to ensure the safety of underground lifeline infrastructures
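
    A minimal sketch of the PCA-residual idea used here as a recognition feature. For brevity, plain log-power spectra of synthetic signals stand in for the Mel-scale features of the paper, and all signals, frequencies and the subspace size are assumptions:

        import numpy as np

        rng = np.random.default_rng(2)

        def log_spectrum(signal):
            return np.log(np.abs(np.fft.rfft(signal)) ** 2 + 1e-12)

        # Training set: narrow-band "road cutter" signals around a characteristic frequency
        t = np.arange(2048) / 8000.0
        cutter_train = np.array([log_spectrum(np.sin(2 * np.pi * (950 + rng.normal(0, 20)) * t)
                                              + 0.1 * rng.normal(size=t.size)) for _ in range(40)])

        mean = cutter_train.mean(axis=0)
        _, _, Vt = np.linalg.svd(cutter_train - mean, full_matrices=False)
        basis = Vt[:5]                     # principal subspace of cutter sounds

        def residual(spectrum):
            centered = spectrum - mean
            return np.linalg.norm(centered - basis.T @ (basis @ centered))

        cutter_test = log_spectrum(np.sin(2 * np.pi * 960 * t) + 0.1 * rng.normal(size=t.size))
        background = log_spectrum(rng.normal(size=t.size))          # broadband street noise
        print("cutter residual:", residual(cutter_test), "background residual:", residual(background))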

  11. Competition analysis on the operating system market using principal component analysis

    Directory of Open Access Journals (Sweden)

    Brătucu, G.

    2011-01-01

    The operating system market has evolved greatly. The largest software producer in the world, Microsoft, dominates the operating system segment: with three operating systems, Windows XP, Windows Vista and Windows 7, the company held a market share of 87.54% in January 2011. Over time, open-source operating systems have begun to penetrate the market, strongly affecting other manufacturers. Companies such as Apple Inc. and Google Inc. have also entered the operating system market. This paper aims to compare the best-selling operating systems on the market in terms of their defining characteristics. For this purpose the principal components analysis method was used.

  12. Classification of calcium supplements through application of principal component analysis: a study by INAA and AAS

    International Nuclear Information System (INIS)

    Waheed, S.; Rahman, S.; Siddique, N.

    2013-01-01

    Different types of Ca supplements are available in the local markets of Pakistan. It is sometimes difficult to classify these with respect to their composition. In the present work the principal component analysis (PCA) technique was applied to classify different Ca supplements on the basis of their elemental data obtained using instrumental neutron activation analysis (INAA) and atomic absorption spectrometry (AAS). The graphical representation of the PCA scores derived from these analytical data successfully separated the supplements into four different groups, with compatible samples grouped together: supplements with CaCO₃ as the Ca source along with vitamin C, supplements with CaCO₃ as the Ca source along with vitamin D, supplements with Ca from bone meal, and supplements with chelated calcium. (author)

  13. Principal components and iterative regression analysis of geophysical series: Application to Sunspot number (1750-2004)

    Science.gov (United States)

    Nordemann, D. J. R.; Rigozo, N. R.; de Souza Echer, M. P.; Echer, E.

    2008-11-01

    We present here an implementation of a least squares iterative regression method applied to the sine functions embedded in the principal components extracted from geophysical time series. This method appears to be a useful improvement for the quantitative analysis of periodicities in non-stationary time series. The principal components determination followed by the least squares iterative regression method was implemented in an algorithm written in the Scilab (2006) language. The main result of the method is the set of sine functions embedded in the analyzed series, in decreasing order of significance, from the most important ones, likely to represent the physical processes involved in the generation of the series, to the less important ones that represent noise components. Taking into account the need for a deeper knowledge of the Sun's past history and its implications for global climate change, the method was applied to the Sunspot Number series (1750-2004). With the threshold and parameter values used here, the application of the method leads to a total of 441 explicit sine functions, among which 65 were considered significant and were used for a reconstruction that gave a normalized mean squared error of 0.146.
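
    A rough Python sketch of the underlying idea: extracting sine functions one at a time by least-squares fitting and subtracting them, in decreasing order of amplitude. The sunspot-like series, the number of iterations and the FFT-based starting guesses are all assumptions; the original implementation (in Scilab, applied to principal components) differs in detail:

        import numpy as np
        from scipy.optimize import curve_fit

        def sine(t, amp, freq, phase):
            return amp * np.sin(2 * np.pi * freq * t + phase)

        t = np.arange(0, 255, 1.0)                      # yearly samples, 1750-2004-style span
        series = (50 * np.sin(2 * np.pi * t / 11.0) +   # ~11-year cycle
                  15 * np.sin(2 * np.pi * t / 90.0) +   # ~90-year modulation
                  np.random.default_rng(3).normal(0, 5, t.size))

        residual = series - series.mean()
        extracted = []
        for _ in range(3):                              # pull out the 3 strongest sines
            spectrum = np.abs(np.fft.rfft(residual))
            freqs = np.fft.rfftfreq(t.size, d=1.0)
            f0 = freqs[np.argmax(spectrum[1:]) + 1]     # dominant frequency as starting guess
            popt, _ = curve_fit(sine, t, residual, p0=(residual.std(), f0, 0.0))
            extracted.append(popt)
            residual = residual - sine(t, *popt)

        for amp, freq, phase in extracted:
            print(f"amplitude {abs(amp):6.1f}  period {1.0 / freq:6.1f} yr")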

  14. Zero drift and solid Earth tide extracted from relative gravimetric data with principal component analysis

    OpenAIRE

    Hongjuan Yu; Jinyun Guo; Jiulong Li; Dapeng Mu; Qiaoli Kong

    2015-01-01

    Zero drift and solid Earth tide corrections to static relative gravimetric data cannot be ignored. In this paper, a new principal component analysis (PCA) algorithm is presented to extract the zero drift and the solid Earth tide, as signals, from static relative gravimetric data assuming that the components contained in the relative gravimetric data are uncorrelated. Static relative gravity observations from Aug. 15 to Aug. 23, 2014 are used as statistical variables to separate the signal and...

  15. Sustainability Assessment of the Natural Gas Industry in China Using Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Xiucheng Dong

    2015-05-01

    Under pressure toward carbon emission reduction and air protection, China has accelerated energy restructuring by greatly improving the supply and consumption of natural gas in recent years. However, several issues concerning the sustainable development of the natural gas industry in China still need in-depth discussion. Therefore, based on the fundamental ideas of sustainable development, industrial development theories and the features of the natural gas industry, a sustainable development theory is proposed. The theory consists of five parts: resource, market, enterprise, technology and policy. The five parts, which unite through mutual connection and promotion, together push the gas industry's development forward. Furthermore, based on this theoretical structure, the Natural Gas Industry Sustainability Index for China is established and evaluated via the Principal Component Analysis (PCA) method. Finally, a conclusion is reached: the sustainability of the natural gas industry in China kept rising from 2008 to 2013, mainly benefiting from increasing supply and demand, the enhancement of enterprise profits, technological innovation, policy support, and the optimization and reform of the gas market.

  16. Real-time myoelectric control of a multi-fingered hand prosthesis using principal components analysis

    Directory of Open Access Journals (Sweden)

    Matrone Giulia C

    2012-06-01

    Background: In spite of the advances made in the design of dexterous anthropomorphic hand prostheses, these sophisticated devices still lack adequate control interfaces which could allow amputees to operate them in an intuitive and close-to-natural way. In this study, an anthropomorphic five-fingered robotic hand, actuated by six motors, was used as a prosthetic hand emulator to assess the feasibility of a control approach based on Principal Components Analysis (PCA), specifically conceived to address this problem. Since it was demonstrated elsewhere that the first two principal components (PCs) can describe the whole hand configuration space sufficiently well, the controller employed here reverted the PCA algorithm and allowed a multi-DoF hand to be driven by combining a two-differential-channel EMG input with these two PCs. Hence, the novelty of this approach lies in the application of PCA to the challenging problem of best mapping the EMG inputs onto the degrees of freedom (DoFs) of the prosthesis. Methods: A clinically viable two-DoF myoelectric controller, exploiting two differential channels, was developed, and twelve able-bodied participants, divided into two groups, volunteered to control the hand in simple grasp trials using forearm myoelectric signals. Task completion rates and times were measured. The first objective (assessed through one group of subjects) was to understand the effectiveness of the approach, i.e., whether it is possible to drive the hand in real time, with reasonable performance, in different grasps, also taking advantage of the direct visual feedback of the moving hand. The second objective (assessed through a different group) was to investigate the intuitiveness, and therefore to assess statistical differences in the performance throughout three consecutive days. Results: Subjects performed several grasp, transport and release trials with differently shaped objects, by operating the hand with the myoelectric
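
    A minimal sketch of the "reverted PCA" mapping described above: reconstructing a full joint configuration from two control inputs and the first two principal components. The 6-motor posture data set and the already-scaled control values are fabricated placeholders; in the study the two inputs came from differential EMG channels:

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(4)
        # Hypothetical recorded grasp postures: rows = samples, columns = 6 motor angles
        postures = (rng.uniform(0.0, 1.0, size=(200, 2)) @ rng.normal(size=(2, 6))
                    + 0.05 * rng.normal(size=(200, 6)))

        pca = PCA(n_components=2).fit(postures)

        def controls_to_posture(c1, c2):
            """Map two (already scaled) control inputs onto the hand's 6 DoFs."""
            return pca.mean_ + c1 * pca.components_[0] + c2 * pca.components_[1]

        print(np.round(controls_to_posture(0.8, -0.2), 3))   # one commanded posture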

  17. Radiation dose effects, hardening of electronic components

    International Nuclear Information System (INIS)

    Dupont-Nivet, E.

    1991-01-01

    This course reviews the mechanisms of interaction between ionizing radiation and a silicon oxide type dielectric, in particular the effect of electron-hole pair creation in the material. The effects of cumulative dose on electronic components, especially in MOS technology, are then examined. Finally, methods for hardening these components are presented. 93 refs

  18. Application of time series analysis on molecular dynamics simulations of proteins: a study of different conformational spaces by principal component analysis.

    Science.gov (United States)

    Alakent, Burak; Doruker, Pemra; Camurdan, Mehmet C

    2004-09-08

    Time series analysis is applied to the collective coordinates obtained from principal component analysis of independent molecular dynamics simulations of the alpha-amylase inhibitor tendamistat and the immunity protein of colicin E7, based on the Cα coordinate history. Even though the principal component directions obtained for each run are considerably different, the dynamics information obtained from these runs is surprisingly similar in terms of time series models and parameters. There are two main differences in the dynamics of the two proteins: the higher density of low frequencies and the larger step sizes for the interminima motions of colicin E7 compared with those of the alpha-amylase inhibitor, which may be attributed to the higher number of residues of colicin E7 and/or the structural differences between the two proteins. The cumulative density function of the low frequencies in each run conforms to the expectations from normal mode analysis. When different runs of the alpha-amylase inhibitor are projected onto the same set of eigenvectors, it is found that principal components obtained from a certain conformational region of a protein have moderate explanatory power in other conformational regions, and the local minima are similar to a certain extent, while the heights of the energy barriers between the minima change significantly. As a final remark, time series analysis tools are further exploited in this study with the motive of explaining the equilibrium fluctuations of proteins. Copyright 2004 American Institute of Physics

  19. In vivo quantitative evaluation of vascular parameters for angiogenesis based on sparse principal component analysis and aggregated boosted trees

    International Nuclear Information System (INIS)

    Zhao, Fengjun; Liu, Junting; Qu, Xiaochao; Xu, Xianhui; Chen, Xueli; Yang, Xiang; Liang, Jimin; Tian, Jie; Cao, Feng

    2014-01-01

    To address the multicollinearity issue and the unequal contribution of vascular parameters to the quantification of angiogenesis, we developed a quantitative evaluation method of vascular parameters for angiogenesis based on in vivo micro-CT imaging of hindlimb ischemia model mice. Taking vascular volume as the ground-truth parameter, nine vascular parameters were first assembled into sparse principal components (PCs) to reduce the multicollinearity issue. Aggregated boosted trees (ABTs) were then employed to analyze the importance of the vascular parameters for the quantification of angiogenesis via the loadings of the sparse PCs. The results demonstrated that vascular volume was mainly characterized by vascular area, vascular junctions, connectivity density, segment number and vascular length, which indicates that these are the key vascular parameters for the quantification of angiogenesis. The proposed quantitative evaluation method was compared with both the ABTs applied directly to the nine vascular parameters and Pearson correlation, and the results were consistent. In contrast to the ABTs applied directly to the vascular parameters, the proposed method can select all the key vascular parameters simultaneously, because all the key vascular parameters were assembled into the sparse PCs with the highest relative importance. (paper)
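
    A rough sketch of the two-stage idea, with sklearn's SparsePCA and plain gradient boosting standing in for the aggregated boosted trees (ABTs) of the paper; all data, parameter counts and hyperparameters are synthetic assumptions:

        import numpy as np
        from sklearn.decomposition import SparsePCA
        from sklearn.ensemble import GradientBoostingRegressor

        rng = np.random.default_rng(5)
        n = 120
        latent = rng.normal(size=(n, 3))
        # Nine correlated "vascular parameters" derived from three latent factors
        X = latent @ rng.normal(size=(3, 9)) + 0.2 * rng.normal(size=(n, 9))
        vascular_volume = latent[:, 0] * 2.0 + latent[:, 1] + 0.1 * rng.normal(size=n)

        spca = SparsePCA(n_components=3, alpha=0.5, random_state=0)
        scores = spca.fit_transform(X)                 # sparse PC scores, less collinear than X

        # Rank component importance for predicting the ground-truth parameter
        model = GradientBoostingRegressor(random_state=0).fit(scores, vascular_volume)
        print("component importances:", np.round(model.feature_importances_, 3))
        print("non-zero loadings per component:", (np.abs(spca.components_) > 1e-6).sum(axis=1))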

  20. Assessment of genetic divergence in tomato through agglomerative hierarchical clustering and principal component analysis

    International Nuclear Information System (INIS)

    Iqbal, Q.; Saleem, M.Y.; Hameed, A.; Asghar, M.

    2014-01-01

    For the improvement of qualitative and quantitative traits, the existence of variability is of prime importance in plant breeding. Data on different morphological and reproductive traits of 47 tomato genotypes were analyzed for correlation, agglomerative hierarchical clustering and principal component analysis (PCA) to select genotypes and traits for a future breeding program. Correlation analysis revealed a significant positive association between yield and yield components like fruit diameter, single fruit weight and number of fruits per plant. Principal component (PC) analysis depicted the first three PCs with eigenvalues higher than 1, contributing 81.72% of the total variability for the different traits. PC-1 showed positive factor loadings for all traits except number of fruits per plant; the contribution of single fruit weight and fruit diameter was highest in PC-1. Cluster analysis grouped all genotypes into five divergent clusters. The genotypes in cluster-II and cluster-V exhibited uniform maturity and higher yield. The D² statistics confirmed the highest distance between cluster-III and cluster-V, while maximum similarity was observed between cluster-II and cluster-III. It is therefore suggested that crosses between genotypes of cluster-II and cluster-V with those of cluster-I and cluster-III may exhibit heterosis in F1 for hybrid breeding and for the selection of superior genotypes in succeeding generations in a cross-breeding programme. (author)

  1. Impact of different conditions on accuracy of five rules for principal components retention

    Directory of Open Access Journals (Sweden)

    Zorić Aleksandar

    2013-01-01

    Polemics about criteria for nontrivial principal components are still present in the literature. A finding of many papers is that the most frequently used Guttman-Kaiser criterion has very poor performance. In the last three years some new criteria have been proposed. In this Monte Carlo experiment we aimed to investigate the impact that sample size, number of analyzed variables, number of supposed factors and proportion of error variance have on the accuracy of criteria for principal components retention. We compared the following criteria: Bartlett's χ² test, Horn's parallel analysis, the Guttman-Kaiser eigenvalue-over-one rule, Velicer's MAP and CHull, originally proposed by Ceulemans & Kiers. Factors were systematically combined, resulting in 690 different combinations, and a total of 138,000 simulations were performed. A novelty in this research is the systematic variation of the error variance. The simulations showed that, in favorable research conditions, all analyzed criteria work properly. Bartlett's and Horn's criteria were robust in most of the analyzed situations. Velicer's MAP had the best accuracy in situations with a small number of subjects and a large number of variables. The results confirm earlier findings that the Guttman-Kaiser criterion has the worst performance.
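
    For reference, a minimal sketch of one of the compared criteria, Horn's parallel analysis, in a common variant that retains components whose eigenvalues exceed the mean eigenvalues of random data of the same size; the data, the number of simulations and the use of the mean (rather than a percentile) are illustrative assumptions:

        import numpy as np

        def parallel_analysis(X, n_sims=100, seed=0):
            rng = np.random.default_rng(seed)
            n, p = X.shape
            observed = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
            random_eigs = np.zeros((n_sims, p))
            for i in range(n_sims):
                R = rng.normal(size=(n, p))
                random_eigs[i] = np.linalg.eigvalsh(np.corrcoef(R, rowvar=False))[::-1]
            threshold = random_eigs.mean(axis=0)
            return int(np.sum(observed > threshold)), observed, threshold

        # Two genuine factors buried in ten variables plus noise
        rng = np.random.default_rng(6)
        factors = rng.normal(size=(300, 2))
        X = factors @ rng.normal(size=(2, 10)) + rng.normal(size=(300, 10))

        k, observed, threshold = parallel_analysis(X)
        print("components retained by parallel analysis:", k)
        print("first eigenvalues:", np.round(observed[:4], 2), "vs random:", np.round(threshold[:4], 2))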

  2. Understanding deformation mechanisms during powder compaction using principal component analysis of compression data.

    Science.gov (United States)

    Roopwani, Rahul; Buckner, Ira S

    2011-10-14

    Principal component analysis (PCA) was applied to pharmaceutical powder compaction. A solid fraction parameter (SF(c/d)) and a mechanical work parameter (W(c/d)) representing irreversible compression behavior were determined as functions of applied load. Multivariate analysis of the compression data was carried out using PCA. The first principal component (PC1) showed loadings for the solid fraction and work values that agreed with changes in the relative significance of plastic deformation to consolidation at different pressures. The PC1 scores showed the same rank order as the relative plasticity ranking derived from the literature for common pharmaceutical materials. The utility of PC1 in understanding deformation was extended to binary mixtures using a subset of the original materials. Combinations of brittle and plastic materials were characterized using the PCA method. The relationships between PC1 scores and the weight fractions of the mixtures were typically linear showing ideal mixing in their deformation behaviors. The mixture consisting of two plastic materials was the only combination to show a consistent positive deviation from ideality. The application of PCA to solid fraction and mechanical work data appears to be an effective means of predicting deformation behavior during compaction of simple powder mixtures. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. A Principal Component Analysis of Project Management Competencies for the Ghanaian Construction Industry

    Directory of Open Access Journals (Sweden)

    Rockson Dobgegah

    2011-03-01

    The study adopts a data reduction technique to examine the presence of any complex structure among a set of project management competency variables. A structured survey questionnaire was administered to 100 project managers to elicit relevant data, and this achieved a relatively high response rate of 54%. After satisfying all the necessary tests of reliability of the survey instrument, sample size adequacy and the population matrix, the data were subjected to principal component analysis, resulting in the identification of six new thematic project management competency areas, explained in terms of: human resource management and project control; construction innovation and communication; project financial resources management; project risk and quality management; business ethics; and physical resources and procurement management. These knowledge areas now form the basis for lateral project management training requirements in the context of the Ghanaian construction industry. The key contribution of the paper is the use of principal component analysis, which has rigorously provided insight into the complex structure of, and the relationships between, the various knowledge areas. The originality and value of the paper lie in the use of contextual-task conceptual knowledge to expound the empirical utility of the six uncorrelated project management competencies.

  4. Multi-point accelerometric detection and principal component analysis of heart sounds

    International Nuclear Information System (INIS)

    De Panfilis, S; Peccianti, M; Chiru, O M; Moroni, C; Vashkevich, V; Parisi, G; Cassone, R

    2013-01-01

    Heart sounds are a fundamental physiological variable that provides unique insight into cardiac semiotics. However, a deterministic and unambiguous association between noises and cardiac dynamics is far from being accomplished, owing to the many different overlapping events that contribute to the acoustic emission. Current computer-based capabilities in signal detection and processing allow one to move from standard cardiac auscultation, even in its improved forms such as electronic stethoscopes or hi-tech phonocardiography, to the extraction of previously unexplored information on cardiac activity. In this report, we present new equipment for the detection of heart sounds, based on a set of accelerometric sensors placed in contact with the chest skin over the precordial area, which simultaneously measure the vibration induced on the chest surface by the heart's mechanical activity. By utilizing advanced algorithms for data treatment, such as wavelet decomposition and principal component analysis, we are able to condense the spatially extended acoustic information and to provide a synthetic representation of the heart activity. We applied our approach to 30 adults of mixed gender, age and health status, and correlated our results with standard echocardiographic examinations. We obtained a 93% concordance rate with echocardiography in distinguishing healthy from unhealthy hearts, including minor abnormalities such as mitral valve prolapse. (fast track communication)

  5. Relationships between Association of Research Libraries (ARL) Statistics and Bibliometric Indicators: A Principal Components Analysis

    Science.gov (United States)

    Hendrix, Dean

    2010-01-01

    This study analyzed 2005-2006 Web of Science bibliometric data from institutions belonging to the Association of Research Libraries (ARL) and corresponding ARL statistics to find any associations between indicators from the two data sets. Principal components analysis on 36 variables from 103 universities revealed obvious associations between…

  6. Comparison of common components analysis with principal components analysis and independent components analysis: Application to SPME-GC-MS volatolomic signatures.

    Science.gov (United States)

    Bouhlel, Jihéne; Jouan-Rimbaud Bouveresse, Delphine; Abouelkaram, Said; Baéza, Elisabeth; Jondreville, Catherine; Travel, Angélique; Ratel, Jérémy; Engel, Erwan; Rutledge, Douglas N

    2018-02-01

    The aim of this work is to compare a novel exploratory chemometrics method, Common Components Analysis (CCA), with Principal Components Analysis (PCA) and Independent Components Analysis (ICA). CCA consists in adapting the multi-block statistical method known as Common Components and Specific Weights Analysis (CCSWA or ComDim) by applying it to a single data matrix, with one variable per block. As an application, the three methods were applied to SPME-GC-MS volatolomic signatures of livers in an attempt to reveal volatile organic compounds (VOCs) markers of chicken exposure to different types of micropollutants. An application of CCA to the initial SPME-GC-MS data revealed a drift in the sample Scores along CC2, as a function of injection order, probably resulting from time-related evolution in the instrument. This drift was eliminated by orthogonalization of the data set with respect to CC2, and the resulting data are used as the orthogonalized data input into each of the three methods. Since the first step in CCA is to norm-scale all the variables, preliminary data scaling has no effect on the results, so that CCA was applied only to orthogonalized SPME-GC-MS data, while, PCA and ICA were applied to the "orthogonalized", "orthogonalized and Pareto-scaled", and "orthogonalized and autoscaled" data. The comparison showed that PCA results were highly dependent on the scaling of variables, contrary to ICA where the data scaling did not have a strong influence. Nevertheless, for both PCA and ICA the clearest separations of exposed groups were obtained after autoscaling of variables. The main part of this work was to compare the CCA results using the orthogonalized data with those obtained with PCA and ICA applied to orthogonalized and autoscaled variables. The clearest separations of exposed chicken groups were obtained by CCA. CCA Loadings also clearly identified the variables contributing most to the Common Components giving separations. The PCA Loadings did not

  7. An application of principal component analysis to the clavicle and clavicle fixation devices.

    LENUS (Irish Health Repository)

    Daruwalla, Zubin J

    2010-01-01

    Principal component analysis (PCA) enables the building of statistical shape models of bones and joints. This has been used in conjunction with computer assisted surgery in the past. However, PCA of the clavicle has not been performed. Using PCA, we present a novel method that examines the major modes of size and three-dimensional shape variation in male and female clavicles and suggests a method of grouping the clavicle into size and shape categories.

  8. The impact of microwave stray radiation to in-vessel diagnostic components

    Energy Technology Data Exchange (ETDEWEB)

    Hirsch, M.; Laqua, H. P.; Hathiramani, D.; Baldzuhn, J.; Biedermann, C.; Cardella, A.; Erckmann, V.; König, R.; Köppen, M.; Zhang, D. [Max-Planck-Institut für Plasmaphysik, Teilinstitut Greifswald, EURATOM Association, D-17489 Greifswald (Germany); Oosterbeek, J.; Brand, H. von der; Parquay, S. [Technische Universiteit Eindhoven, department Technische Natuurkunde, working group for Plasma Physics and Radiation Technology, Den Dolech 2, 5612 AZ Eindhoven (Netherlands); Jimenez, R. [Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, Association EURATOM/CIEMAT, Avenida Complutense 22, Madrid 28040 (Spain); Collaboration: W7-X Team

    2014-08-21

    Microwave stray radiation resulting from unabsorbed, multiply reflected ECRH/ECCD beams may cause severe heating of microwave-absorbing in-vessel components such as gaskets, bellows, windows, ceramics and cable insulation. In view of the long-pulse operation of WENDELSTEIN 7-X, the MIcrowave STray RAdiation Launch facility, MISTRAL, allows in-vessel components to be tested in an environment of isotropic 140 GHz microwave radiation at power loads of up to 50 kW/m² over 30 min. The results show that both sufficient microwave shielding measures and cooling of all components are mandatory. If the shielding/cooling measures of in-vessel diagnostic components are not efficient enough, the level of stray radiation may be (locally) reduced by dedicated absorbing ceramic coatings on cooled structures.

  9. Clustering of metabolic and cardiovascular risk factors in the polycystic ovary syndrome: a principal component analysis.

    Science.gov (United States)

    Stuckey, Bronwyn G A; Opie, Nicole; Cussons, Andrea J; Watts, Gerald F; Burke, Valerie

    2014-08-01

    Polycystic ovary syndrome (PCOS) is a prevalent condition with heterogeneity of clinical features and cardiovascular risk factors that implies multiple aetiological factors and possible outcomes. To reduce a set of correlated variables to a smaller number of uncorrelated and interpretable factors that may delineate subgroups within PCOS or suggest pathogenetic mechanisms. We used principal component analysis (PCA) to examine the endocrine and cardiometabolic variables associated with PCOS defined by the National Institutes of Health (NIH) criteria. Data were retrieved from the database of a single clinical endocrinologist. We included women with PCOS (N = 378) who were not taking the oral contraceptive pill or other sex hormones, lipid lowering medication, metformin or other medication that could influence the variables of interest. PCA was performed retaining those factors with eigenvalues of at least 1.0. Varimax rotation was used to produce interpretable factors. We identified three principal components. In component 1, the dominant variables were homeostatic model assessment (HOMA) index, body mass index (BMI), high density lipoprotein (HDL) cholesterol and sex hormone binding globulin (SHBG); in component 2, systolic blood pressure, low density lipoprotein (LDL) cholesterol and triglycerides; in component 3, total testosterone and LH/FSH ratio. These components explained 37%, 13% and 11% of the variance in the PCOS cohort respectively. Multiple correlated variables from patients with PCOS can be reduced to three uncorrelated components characterised by insulin resistance, dyslipidaemia/hypertension or hyperandrogenaemia. Clustering of risk factors is consistent with different pathogenetic pathways within PCOS and/or differing cardiometabolic outcomes. Copyright © 2014 Elsevier Inc. All rights reserved.
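
    A minimal sketch of PCA followed by a varimax rotation of the retained loadings, the procedure used in this record to obtain interpretable components. The clinical variables are random stand-ins (378 rows matching the cohort size is the only nod to the study); the varimax routine is a standard textbook implementation, not the authors' code:

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA

        def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
            """Orthogonal varimax rotation of a (variables x components) loading matrix."""
            p, k = loadings.shape
            rotation = np.eye(k)
            var = 0.0
            for _ in range(max_iter):
                rotated = loadings @ rotation
                u, s, vt = np.linalg.svd(
                    loadings.T @ (rotated ** 3
                                  - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0))))
                if s.sum() < var * (1 + tol):
                    break
                rotation = u @ vt
                var = s.sum()
            return loadings @ rotation

        rng = np.random.default_rng(7)
        X = StandardScaler().fit_transform(rng.normal(size=(378, 9)))   # 378 patients, 9 variables
        pca = PCA(n_components=3).fit(X)
        loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
        print(np.round(varimax(loadings), 2))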

  10. An application of principal component analysis to the clavicle and clavicle fixation devices

    OpenAIRE

    Daruwalla, Zubin J; Courtis, Patrick; Fitzpatrick, Clare; Fitzpatrick, David; Mullett, Hannan

    2010-01-01

    Background: Principal component analysis (PCA) enables the building of statistical shape models of bones and joints. This has been used in conjunction with computer assisted surgery in the past. However, PCA of the clavicle has not been performed. Using PCA, we present a novel method that examines the major modes of size and three-dimensional shape variation in male and female clavicles and suggests a method of grouping the clavicle into size and shape categories. Materials and method...

  11. Classification of peacock feather reflectance using principal component analysis similarity factors from multispectral imaging data.

    Science.gov (United States)

    Medina, José M; Díaz, José A; Vukusic, Pete

    2015-04-20

    Iridescent structural colors in biology exhibit sophisticated spatially-varying reflectance properties that depend on both the illumination and viewing angles. The classification of such spectral and spatial information in iridescent structurally colored surfaces is important to elucidate the functional role of irregularity and to improve understanding of color pattern formation at different length scales. In this study, we propose a non-invasive method for the spectral classification of spatial reflectance patterns at the micron scale based on the multispectral imaging technique and the principal component analysis similarity factor (PCASF). We demonstrate the effectiveness of this approach and its component methods by detailing its use in the study of the angle-dependent reflectance properties of Pavo cristatus (the common peacock) feathers, a species of peafowl very well known to exhibit bright and saturated iridescent colors. We show that multispectral reflectance imaging and PCASF approaches can be used as effective tools for spectral recognition of iridescent patterns in the visible spectrum and provide meaningful information for spectral classification of the irregularity of the microstructure in iridescent plumage.

  12. Using robust principal component analysis to alleviate day-to-day variability in EEG based emotion classification.

    Science.gov (United States)

    Ping-Keng Jao; Yuan-Pin Lin; Yi-Hsuan Yang; Tzyy-Ping Jung

    2015-08-01

    An emerging challenge for emotion classification using electroencephalography (EEG) is how to effectively alleviate day-to-day variability in raw data. This study employed robust principal component analysis (RPCA) to address the problem, with the hypothesis that background or emotion-irrelevant EEG perturbations lead to certain variability across days and somehow submerge emotion-related EEG dynamics. The empirical results of this study clearly validated our hypothesis and demonstrated the RPCA's feasibility through the analysis of a five-day dataset of 12 subjects. The RPCA allowed separating the sparse emotion-relevant EEG dynamics from the accompanying background perturbations across days. Sequentially, leveraging the RPCA-purified EEG trials from more days appeared to improve the emotion-classification performance steadily, which was not found in the case using the raw EEG features. Therefore, incorporating the RPCA with existing emotion-aware machine-learning frameworks on a longitudinal dataset of each individual may shed light on the development of a robust affective brain-computer interface (ABCI) that can alleviate ecological inter-day variability.
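
    A minimal sketch of robust PCA by principal component pursuit, decomposing a data matrix into a low-rank part plus a sparse part, in the spirit of the RPCA step above. This is a bare-bones augmented-Lagrangian iteration on synthetic data, not the authors' pipeline or parameters:

        import numpy as np

        def soft_threshold(M, tau):
            return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

        def rpca(D, n_iter=200):
            m, n = D.shape
            lam = 1.0 / np.sqrt(max(m, n))
            mu = 0.25 * m * n / (np.abs(D).sum() + 1e-12)
            L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
            for _ in range(n_iter):
                # low-rank update: singular value thresholding
                U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
                L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
                # sparse update: elementwise shrinkage
                S = soft_threshold(D - L + Y / mu, lam / mu)
                Y = Y + mu * (D - L - S)
            return L, S

        rng = np.random.default_rng(8)
        low_rank = rng.normal(size=(60, 5)) @ rng.normal(size=(5, 100))
        sparse = (rng.random((60, 100)) < 0.05) * rng.normal(0, 10, size=(60, 100))
        L, S = rpca(low_rank + sparse)
        print("rank of recovered L:", np.linalg.matrix_rank(L, tol=1e-3))
        print("fraction of non-zeros in S:", (np.abs(S) > 1e-6).mean())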

  13. Principal component analysis of 1/fα noise

    International Nuclear Information System (INIS)

    Gao, J.B.; Cao Yinhe; Lee, J.-M.

    2003-01-01

    Principal component analysis (PCA) is a popular data analysis method. One of the motivations for using PCA in practice is to reduce the dimension of the original data by projecting the raw data onto a few dominant eigenvectors with large variance (energy). Due to the ubiquity of 1/fα noise in science and engineering, in this Letter we study the prototypical stochastic model for 1/fα processes, the fractional Brownian motion (fBm) processes, using PCA, and find that the eigenvalues from PCA of fBm processes follow a power law, with the exponent being the key parameter defining the fBm processes. We also study random-walk-type processes constructed from DNA sequences, and find that the eigenvalue spectra from PCA of those random-walk processes also follow power-law relations, with the exponent characterizing the correlation structure of the DNA sequence. In fact, it is observed that PCA can automatically remove linear trends induced by patchiness in the DNA sequence; hence, PCA has a capability similar to detrended fluctuation analysis. Implications of the power-law distributed eigenvalue spectrum are discussed
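
    A quick numerical check of the power-law eigenvalue decay for an ensemble of ordinary Brownian-motion paths (the H = 0.5 special case of fBm); a proper fBm generator, ensemble size and the number of eigenvalues fitted are omitted or assumed for brevity:

        import numpy as np

        rng = np.random.default_rng(9)
        n_paths, n_steps = 500, 1024
        paths = np.cumsum(rng.normal(size=(n_paths, n_steps)), axis=1)   # Brownian motion

        centered = paths - paths.mean(axis=0)
        eigvals = np.linalg.svd(centered, compute_uv=False) ** 2 / (n_paths - 1)

        k = np.arange(1, 21)                           # first 20 eigenvalues
        slope, _ = np.polyfit(np.log(k), np.log(eigvals[:20]), 1)
        print("eigenvalue power-law exponent ~", round(slope, 2))   # expected to be close to -2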

  14. Modelling Monthly Mental Sickness Cases Using Principal ...

    African Journals Online (AJOL)

    The methodology was principal component analysis (PCA), using data obtained from the hospital to estimate regression coefficients and parameters. It was found that the derived principal component regression model was a good predictive tool. The principal component regression model obtained was okay and this ...

  15. Human Face Identification for a Lecture Attendance Monitoring System using Principal Component Analysis (PCA) Feature Extraction

    Directory of Open Access Journals (Sweden)

    Cucu Suhery

    2017-04-01

    The various existing attendance monitoring systems each have their own strengths and weaknesses and need continued development to simplify data processing. In this study, an attendance monitoring system based on human face detection was developed and integrated with a database, using the Python programming language and the OpenCV library. Image data were acquired with an Android phone; the images were then detected and cropped so that only the face region remained. Face detection used the Haar Cascade Classifier method, and feature extraction was performed with Principal Component Analysis (PCA). The PCA results were labelled according to the person records in the database. All images with stored PCA values in the database were matched against the test face images using the Euclidean distance method. The database used in this study was MySQL. Face detection in the training phase achieved a 100% success rate, and face identification in the testing phase achieved a 90% success rate. Keywords: android, haar-cascade classifier, principal component analysis, euclidian distance, MySQL, attendance monitoring system, face detection
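
    A minimal sketch of the PCA ("eigenface") matching step with Euclidean distance used to find the closest enrolled face. A real system would first detect and crop faces (e.g. with an OpenCV Haar cascade, as in the record); the flattened grayscale images and student names below are random placeholders:

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(10)
        enrolled = rng.random((25, 64 * 64))            # 25 enrolled students, 64x64 face crops
        names = [f"student_{i:02d}" for i in range(25)]

        pca = PCA(n_components=12).fit(enrolled)
        enrolled_features = pca.transform(enrolled)

        def identify(face_vector):
            feature = pca.transform(face_vector.reshape(1, -1))
            distances = np.linalg.norm(enrolled_features - feature, axis=1)
            return names[int(np.argmin(distances))], float(distances.min())

        probe = enrolled[7] + 0.05 * rng.random(64 * 64)   # a noisy re-capture of student 7
        print(identify(probe))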

  16. Principal Component Analysis to Explore Climatic Variability and Dengue Outbreak in Lahore

    Directory of Open Access Journals (Sweden)

    Syed Afrozuddin Ahmed

    2014-08-01

    Various studies have reported that global warming causes an unstable climate and many serious impacts on the physical environment and public health. The increasing incidence of dengue is now a priority health issue and has become a health burden for Pakistan. In this study it was investigated whether spatial patterns of the environment contribute to the emergence or increasing rate of dengue fever incidence affecting the population and its health. Principal component analysis was performed to find whether there is any general environmental factor or structure that could be involved in the emergence of dengue fever cases in the Pakistani climate. Principal components were extracted for the periods 1980-2012, 1980-1995 and 1996-2012. The first three PCs for these periods are almost the same and represent hot and windy weather. The PC1s of the dengue periods differ from each other. PC2 is the same for all periods and represents wetness in the weather. The PC3s differ and represent a combination of wet and windy weather. PC4 for all periods shows humid weather without rain. Among the climatic variables, only minimum temperature and maximum temperature are significantly correlated with daily dengue cases. PC1, PC3 and PC4 are highly significantly correlated with daily dengue cases

  17. THE STUDY OF THE CHARACTERIZATION INDICES OF FABRICS BY PRINCIPAL COMPONENT ANALYSIS METHOD

    OpenAIRE

    HRISTIAN Liliana; OSTAFE Maria Magdalena; BORDEIANU Demetra Lacramioara; APOSTOL Laura Liliana

    2017-01-01

    The paper seeks to rank worsted fabric types for the manufacture of outerwear products by means of fabric characterization indices, using the mathematical model of Principal Component Analysis (PCA). There are a number of variables with a certain influence on the quality of fabrics, but some of these variables are more important than others, so it is useful to identify those variables in order to better understand the factors which can lead to improving fabric quality. A s...

  18. A National Radiation Oncology Medical Student Clerkship Survey: Didactic Curricular Components Increase Confidence in Clinical Competency

    Energy Technology Data Exchange (ETDEWEB)

    Jagadeesan, Vikrant S. [Department of Radiation and Cellular Oncology, Pritzker School of Medicine, University of Chicago, Chicago, Illinois (United States); Raleigh, David R. [Department of Radiation Oncology, School of Medicine, University of California–San Francisco, San Francisco, California (United States); Koshy, Matthew; Howard, Andrew R.; Chmura, Steven J. [Department of Radiation and Cellular Oncology, Pritzker School of Medicine, University of Chicago, Chicago, Illinois (United States); Golden, Daniel W., E-mail: dgolden@radonc.uchicago.edu [Department of Radiation and Cellular Oncology, Pritzker School of Medicine, University of Chicago, Chicago, Illinois (United States)

    2014-01-01

    Purpose: Students applying to radiation oncology residency programs complete 1 or more radiation oncology clerkships. This study assesses student experiences and perspectives during radiation oncology clerkships. The impact of didactic components and number of clerkship experiences in relation to confidence in clinical competency and preparation to function as a first-year radiation oncology resident are evaluated. Methods and Materials: An anonymous, Internet-based survey was sent via direct e-mail to all applicants to a single radiation oncology residency program during the 2012-2013 academic year. The survey was composed of 3 main sections including questions regarding baseline demographic information and prior radiation oncology experience, rotation experiences, and ideal clerkship curriculum content. Results: The survey response rate was 37% (70 of 188). Respondents reported 191 unique clerkship experiences. Of the respondents, 27% (19 of 70) completed at least 1 clerkship with a didactic component geared towards their level of training. Completing a clerkship with a didactic component was significantly associated with a respondent's confidence to function as a first-year radiation oncology resident (Wilcoxon rank–sum P=.03). However, the total number of clerkships completed did not correlate with confidence to pursue radiation oncology as a specialty (Spearman ρ P=.48) or confidence to function as a first year resident (Spearman ρ P=.43). Conclusions: Based on responses to this survey, rotating students perceive that the majority of radiation oncology clerkships do not have formal didactic curricula. Survey respondents who completed a clerkship with a didactic curriculum reported feeling more prepared to function as a radiation oncology resident. However, completing an increasing number of clerkships does not appear to improve confidence in the decision to pursue radiation oncology as a career or to function as a radiation oncology resident. These

  19. A national radiation oncology medical student clerkship survey: didactic curricular components increase confidence in clinical competency.

    Science.gov (United States)

    Jagadeesan, Vikrant S; Raleigh, David R; Koshy, Matthew; Howard, Andrew R; Chmura, Steven J; Golden, Daniel W

    2014-01-01

    Students applying to radiation oncology residency programs complete 1 or more radiation oncology clerkships. This study assesses student experiences and perspectives during radiation oncology clerkships. The impact of didactic components and number of clerkship experiences in relation to confidence in clinical competency and preparation to function as a first-year radiation oncology resident are evaluated. An anonymous, Internet-based survey was sent via direct e-mail to all applicants to a single radiation oncology residency program during the 2012-2013 academic year. The survey was composed of 3 main sections including questions regarding baseline demographic information and prior radiation oncology experience, rotation experiences, and ideal clerkship curriculum content. The survey response rate was 37% (70 of 188). Respondents reported 191 unique clerkship experiences. Of the respondents, 27% (19 of 70) completed at least 1 clerkship with a didactic component geared towards their level of training. Completing a clerkship with a didactic component was significantly associated with a respondent's confidence to function as a first-year radiation oncology resident (Wilcoxon rank-sum P=.03). However, the total number of clerkships completed did not correlate with confidence to pursue radiation oncology as a specialty (Spearman ρ P=.48) or confidence to function as a first year resident (Spearman ρ P=.43). Based on responses to this survey, rotating students perceive that the majority of radiation oncology clerkships do not have formal didactic curricula. Survey respondents who completed a clerkship with a didactic curriculum reported feeling more prepared to function as a radiation oncology resident. However, completing an increasing number of clerkships does not appear to improve confidence in the decision to pursue radiation oncology as a career or to function as a radiation oncology resident. These results support further development of structured didactic

  20. A National Radiation Oncology Medical Student Clerkship Survey: Didactic Curricular Components Increase Confidence in Clinical Competency

    International Nuclear Information System (INIS)

    Jagadeesan, Vikrant S.; Raleigh, David R.; Koshy, Matthew; Howard, Andrew R.; Chmura, Steven J.; Golden, Daniel W.

    2014-01-01

    Purpose: Students applying to radiation oncology residency programs complete 1 or more radiation oncology clerkships. This study assesses student experiences and perspectives during radiation oncology clerkships. The impact of didactic components and number of clerkship experiences in relation to confidence in clinical competency and preparation to function as a first-year radiation oncology resident are evaluated. Methods and Materials: An anonymous, Internet-based survey was sent via direct e-mail to all applicants to a single radiation oncology residency program during the 2012-2013 academic year. The survey was composed of 3 main sections including questions regarding baseline demographic information and prior radiation oncology experience, rotation experiences, and ideal clerkship curriculum content. Results: The survey response rate was 37% (70 of 188). Respondents reported 191 unique clerkship experiences. Of the respondents, 27% (19 of 70) completed at least 1 clerkship with a didactic component geared towards their level of training. Completing a clerkship with a didactic component was significantly associated with a respondent's confidence to function as a first-year radiation oncology resident (Wilcoxon rank–sum P=.03). However, the total number of clerkships completed did not correlate with confidence to pursue radiation oncology as a specialty (Spearman ρ P=.48) or confidence to function as a first year resident (Spearman ρ P=.43). Conclusions: Based on responses to this survey, rotating students perceive that the majority of radiation oncology clerkships do not have formal didactic curricula. Survey respondents who completed a clerkship with a didactic curriculum reported feeling more prepared to function as a radiation oncology resident. However, completing an increasing number of clerkships does not appear to improve confidence in the decision to pursue radiation oncology as a career or to function as a radiation oncology resident. These results

  1. Radiation-resistant beamline components at LAMPF

    International Nuclear Information System (INIS)

    Macek, R.J.; Grisham, D.L.; Lambert, J.E.; Werbeck, R.

    1983-01-01

    A variety of highly radiation-resistant beamline components have been successfully developed at LAMPF primarily for use in the target cells and beam stop area of the intense proton beamline. Design features and operating experience are reviewed for magnets, instrumentation, targets, vacuum seals, vacuum windows, collimators, and beam stops

  2. Test and evaluation of semiconductor components in mixed field radiation monitoring

    International Nuclear Information System (INIS)

    Cardenas, Jose Patricio N.; Madi Filho, Tufic; Rodrigues, Leticia L.C.

    2009-01-01

    Silicon components have found extensive use in nuclear spectroscopy and counting, as described in many articles over the last three decades. These devices have found utility in radiation dosimetry because a diode, for instance, produces a current approximately 18000 times higher than an ionization chamber of equal sensitive volume. This relaxes the stringent requirements on the electronics used to amplify or integrate the current and/or allows the ideal point-detector behaviour needed for mapping radiation fields to be approached. For better performance, diodes are normally operated with high inverse polarity to obtain a deeper barrier, less noise and a shorter transit time. The aim of this work was the evaluation of these semiconductor components for application in ionizing radiation field monitoring, in nuclear research reactors and radiotherapy facilities, for radiation protection and health physics purposes. Experimental configurations to analyze the performance of commercial semiconductors, such as silicon PIN photodiodes and silicon surface barrier detectors, were developed, and the performance of three different configurations of charge preamplifier with silicon components was also studied. The components were evaluated for application as neutron detectors, using several types of radiators (converters). The radiation response of these silicon components to neutron fields from the nuclear research reactors IEA-R1 and IPEN-MB1 (thermal, epithermal and fast neutrons), from beam holes, experimental halls and AmBe neutron sources in the laboratory was investigated. (author)

  3. Dynamic Principal Component Analysis with Nonoverlapping Moving Window and Its Applications to Epileptic EEG Classification

    Directory of Open Access Journals (Sweden)

    Shengkun Xie

    2014-01-01

    Classification of electroencephalography (EEG) is the most useful diagnostic and monitoring procedure in epilepsy studies. A reliable algorithm that can be easily implemented is the key to this procedure. In this paper a novel signal feature extraction method based on dynamic principal component analysis and a nonoverlapping moving window is proposed. Along with this new technique, two detection methods based on the extracted sparse features are applied to deal with signal classification. The obtained results demonstrate that the proposed methodologies are able to differentiate EEGs of controls from interictal EEGs for epilepsy diagnosis and to separate interictal from ictal EEGs for seizure detection. Our approach yields high classification accuracy for both single-channel short-term EEGs and multichannel long-term EEGs. The classification performance of the method is also compared with other state-of-the-art techniques on the same datasets, and the effect of signal variability on the presented methods is also studied.
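
    A minimal sketch of the windowing mechanics: PCA applied to nonoverlapping windows of a multichannel signal, with per-window variances used as simple features. The simulated EEG, window length, number of components and feature definition are illustrative assumptions and differ from the paper's actual feature extraction:

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(11)
        n_channels, n_samples, window = 8, 4096, 256
        eeg = rng.normal(size=(n_channels, n_samples))
        eeg[:, 2000:2400] += 3.0 * np.sin(2 * np.pi * 5 * np.arange(400) / 256.0)  # "ictal" burst

        features = []
        for start in range(0, n_samples - window + 1, window):      # nonoverlapping windows
            segment = eeg[:, start:start + window].T                # samples x channels
            pca = PCA(n_components=3).fit(segment)
            features.append(pca.explained_variance_)                # per-window variance features
        features = np.array(features)
        print("window with largest leading variance:", int(np.argmax(features[:, 0])))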

  4. Characterization and Discrimination of Gram-Positive Bacteria Using Raman Spectroscopy with the Aid of Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Alia Colniță

    2017-09-01

    Raman scattering and its particular effect, surface-enhanced Raman scattering (SERS), are whole-organism fingerprinting spectroscopic techniques that are gaining more and more popularity in bacterial detection. In this work, two relevant Gram-positive bacteria species, Lactobacillus casei (L. casei) and Listeria monocytogenes (L. monocytogenes), were characterized based on their Raman and SERS spectral fingerprints. The SERS spectra were used to identify the biochemical structures of the bacterial cell wall. Two synthesis methods for the SERS-active nanomaterials were used and the recorded spectra were analyzed. L. casei and L. monocytogenes were successfully discriminated by applying Principal Component Analysis (PCA) to their specific spectral data.

  5. Principals Of Radiation Toxicology: Important Aspects.

    Science.gov (United States)

    Popov, Dmitri; Maliev, Slava; Jones, Jeffrey

    “All things are poison, and nothing is without poison; only the dose permits something not to be poisonous.” Paracelsus. Key words: Radiation Toxins (RT), Radiation Toxicants (RTc), Radiation Poisons (RP), Radiation Exposure (RE). Radiation toxicology is the science of radiation poisons [D. Popov et al. 2012, J. Zhou et al. 2007]. Radiation toxins are specific proteins with high enzymatic activity produced by living irradiated mammals [D. Popov et al. 2012]. Radiation toxicants are substances that produce radiomimetic effects, adverse biological effects specific to radiation [D. Popov et al. 2012]. A radiation toxic agent is a specific protein that can produce pathological biological effects specific to the physical form of radiation [D. Popov et al. 1990, 2012, V. Maliev 2007]. Different toxic substances have been isolated from cells or from the blood or lymph circulation [Kudriashov I. et al. 1970, D. Popov et al. 1990, 2012, V. Maliev et al. 2007]. Radiation toxins may affect many organs, or a specific organ, tissue or group of cells [Kudriashov I. et al. 1970, D. Popov et al. 1990, 2012, V. Maliev et al. 2007]. For example, radiation toxins can induce collective toxic clinical states including systemic inflammatory response syndrome (SIRS), toxic multiple organ injury (TMOI), toxic multiple organ dysfunction syndromes (TMODS) and, finally, toxic multiple organ failure (TMOF) [T. Azizova et al. 2005, Konchalovsky et al. 2005, D. Popov et al. 2012]. However, radiation toxins can also induce specific injury of organs or tissues and give rise to acute radiation syndromes such as the acute radiation cerebrovascular, cardiovascular, hematopoietic and gastrointestinal syndromes [D. Popov et al. 1990, 2012, V. Maliev et al. 2007]. Radiation toxins correlate with radiation exposure, and the dose-response relationship is a fundamental and essential concept in classic toxicology and radiation toxicology. [D. Popov et al

  6. [Principal component analysis and cluster analysis of inorganic elements in sea cucumber Apostichopus japonicus].

    Science.gov (United States)

    Liu, Xiao-Fang; Xue, Chang-Hu; Wang, Yu-Ming; Li, Zhao-Jie; Xue, Yong; Xu, Jie

    2011-11-01

    The present study investigates the feasibility of multi-element analysis for determining the geographical origin of the sea cucumber Apostichopus japonicus, and selects effective tracers for assessing its geographical origin. The contents of the elements Al, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, As, Se, Mo, Cd, Hg and Pb in Apostichopus japonicus samples from seven places of geographical origin were determined by means of ICP-MS. The results were used to develop an element database. Cluster analysis (CA) and principal component analysis (PCA) were applied to differentiate the geographical origin of the sea cucumber samples. Three principal components, which accounted for over 89% of the total variance, were extracted from the standardized data. The results of Q-type cluster analysis showed that the 26 samples could be clustered reasonably into five groups, and the classification results were significantly associated with the marine distribution of the samples. CA and PCA were effective methods for the element analysis of Apostichopus japonicus samples. The contents of the mineral elements in the samples were good chemical descriptors for differentiating their geographical origins.

  7. Radiation effects on superconducting fusion magnet components

    International Nuclear Information System (INIS)

    Weber, H.W.

    2011-01-01

    Nuclear fusion devices based on the magnetic confinement principle rely heavily on the existence and performance of superconducting magnets and have always contributed significantly to advancing superconductor and magnet technology to their limits. In view of the presently ongoing construction of the tokamak device ITER and the stellarator device Wendelstein 7-X, and their record-breaking parameters concerning size, complexity of design, stored energy, amperage, mechanical and magnetic forces, critical current densities and stability requirements, it is deemed timely to review another critical parameter that is practically unique to these devices, namely the radiation response of all magnet components to the lifetime fluence of fast neutrons and gamma rays produced by the fusion reactions of deuterium and tritium. I will review these radiation effects in turn for the currently employed standard "technical" low-temperature superconductors NbTi and Nb3Sn, the stabilizing material (Cu), as well as the magnet insulation materials, and conclude by discussing the potential of high-temperature superconducting materials for future generations of fusion devices, such as DEMO. (author)

  8. Geographic distribution of suicide and railway suicide in Belgium, 2008-2013: a principal component analysis.

    Science.gov (United States)

    Strale, Mathieu; Krysinska, Karolina; Overmeiren, Gaëtan Van; Andriessen, Karl

    2017-06-01

    This study investigated the geographic distribution of suicide and railway suicide in Belgium over 2008-2013 at the local (i.e., district or arrondissement) level. There were differences in the regional distribution of suicide and railway suicides in Belgium over the study period. Principal component analysis identified three groups of correlations among population variables and socio-economic indicators, such as population density, unemployment, and age group distribution, on two components that helped explain the variance of railway suicide at the local (arrondissement) level. This information is of particular importance for preventing suicides in high-risk areas on the Belgian railway network.

  9. Empirical evaluation of grouping of lower urinary tract symptoms: principal component analysis of Tampere Ageing Male Urological Study data.

    Science.gov (United States)

    Pöyhönen, Antti; Häkkinen, Jukka T; Koskimäki, Juha; Hakama, Matti; Tammela, Teuvo L J; Auvinen, Anssi

    2013-03-01

    WHAT'S KNOWN ON THE SUBJECT? AND WHAT DOES THE STUDY ADD?: The ICS has divided LUTS into three groups: storage, voiding and post-micturition symptoms. The classification is based on anatomical, physiological and urodynamic considerations of a theoretical nature. We used principal component analysis (PCA) to determine the inter-correlations of various LUTS, which is a novel approach to research and can strengthen existing knowledge of the phenomenology of LUTS. After we had completed our analyses, another study was published that used a similar approach and results were very similar to those of the present study. We evaluated the constellation of LUTS using PCA of the data from a population-based study that included >4000 men. In our analysis, three components emerged from the 12 LUTS: voiding, storage and incontinence components. Our results indicated that incontinence may be separate from the other storage symptoms and post-micturition symptoms should perhaps be regarded as voiding symptoms. To determine how lower urinary tract symptoms (LUTS) relate to each other and assess if the classification proposed by the International Continence Society (ICS) is consistent with empirical findings. The information on urinary symptoms for this population-based study was collected using a self-administered postal questionnaire in 2004. The questionnaire was sent to 7470 men, aged 30-80 years, from Pirkanmaa County (Finland), of whom 4384 (58.7%) returned the questionnaire. The Danish Prostatic Symptom Score-1 questionnaire was used to evaluate urinary symptoms. Principal component analysis (PCA) was used to evaluate the inter-correlations among various urinary symptoms. The PCA produced a grouping of 12 LUTS into three categories consisting of voiding, storage and incontinence symptoms. Post-micturition symptoms were related to voiding symptoms, but incontinence symptoms were separate from storage symptoms. In the analyses by age group, similar categorization was found at

  10. Kernel principal component analysis residual diagnosis (KPCARD): An automated method for cosmic ray artifact removal in Raman spectra

    International Nuclear Information System (INIS)

    Li, Boyan; Calvet, Amandine; Casamayou-Boucau, Yannick; Ryder, Alan G.

    2016-01-01

    A new, fully automated, rapid method, referred to as kernel principal component analysis residual diagnosis (KPCARD), is proposed for removing cosmic ray artifacts (CRAs) in Raman spectra, and in particular for large Raman imaging datasets. KPCARD identifies CRAs via a statistical analysis of the residuals obtained at each wavenumber in the spectra. The method utilizes the stochastic nature of CRAs; therefore, the most significant components in principal component analysis (PCA) of large numbers of Raman spectra should not contain any CRAs. The process works by first implementing kernel PCA (kPCA) on all the Raman mapping data and then accurately estimating the inter- and intra-spectrum noise to generate two threshold values. CRA identification was then achieved by using the threshold values to evaluate the residuals for each spectrum and assess whether a CRA was present. CRA correction was achieved by spectral replacement, in which the nearest-neighbor (NN) spectrum (the one most spectroscopically similar to the CRA-contaminated spectrum) and the principal components (PCs) obtained by kPCA were both used to generate a robust best-fit curve to the CRA-contaminated spectrum. This best-fit spectrum then replaced the CRA-contaminated spectrum in the dataset. KPCARD efficacy was demonstrated by using simulated data and real Raman spectra collected from solid-state materials. The results showed that KPCARD was fast, even for very large (>1 million spectra) Raman datasets. - Highlights: • New rapid, automatable method for cosmic ray artifact correction of Raman spectra. • Uses combination of kernel PCA and noise estimation for artifact identification. • Implements a best fit spectrum replacement correction approach.
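    The core residual-diagnosis idea (not the authors' published implementation) can be sketched as follows: fit kernel PCA to many spectra, reconstruct each spectrum from the leading components, and flag points whose residual exceeds a noise-derived threshold; the spectra, kernel parameters and thresholds below are all illustrative assumptions.

```python
# Rough sketch of kernel-PCA residual diagnosis for cosmic ray artifacts;
# not the published KPCARD algorithm, and all data are simulated.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(1)
spectra = rng.normal(1.0, 0.05, size=(200, 500))    # synthetic Raman map
spectra[17, 240] += 8.0                              # inject one sharp artifact

kpca = KernelPCA(n_components=10, kernel="rbf", gamma=1e-3,
                 fit_inverse_transform=True)
recon = kpca.inverse_transform(kpca.fit_transform(spectra))
residual = spectra - recon

# robust per-wavenumber noise estimate and a simple threshold
sigma = 1.4826 * np.median(np.abs(residual - np.median(residual, axis=0)), axis=0)
hits = np.argwhere(np.abs(residual) > 6.0 * sigma)
print(hits)  # should typically flag the injected spike at (17, 240)

# naive replacement: copy the flagged value from the most similar other spectrum
for i, j in hits:
    others = np.delete(np.arange(len(spectra)), i)
    nn = others[np.argmin(np.linalg.norm(spectra[others] - spectra[i], axis=1))]
    spectra[i, j] = spectra[nn, j]
```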

  11. A Principal Component Analysis of Skills and Competencies Required of Quantity Surveyors: Nigerian Perspective

    OpenAIRE

    Oluwasuji Dada, Joshua

    2014-01-01

    The purpose of this paper is to examine the intrinsic relationships among sets of quantity surveyors’ skill and competence variables with a view to reducing them into principal components. The research adopts a data reduction approach using the factor analysis statistical technique. A structured questionnaire was administered among major stakeholders in the Nigerian construction industry. The respondents were asked to give ratings, on a 5-point Likert scale, on skills and competencies re...

  12. A New Model for Birth Weight Prediction Using 2- and 3-Dimensional Ultrasonography by Principal Component Analysis: A Chinese Population Study.

    Science.gov (United States)

    Liao, Shuxin; Wang, Yunfang; Xiao, Shufang; Deng, Xujie; Fang, Bimei; Yang, Fang

    2018-03-30

    To establish a new model for birth weight prediction using 2- and 3-dimensional ultrasonography (US) by principal component analysis (PCA). Two- and 3-dimensional US was prospectively performed in women with normal singleton pregnancies within 7 days before delivery (37-41 weeks' gestation). The participants were divided into a development group (n = 600) and a validation group (n = 597). Principal component analysis and stepwise linear regression analysis were used to develop a new prediction model. The new model's accuracy in predicting fetal birth weight was confirmed in the validation group through comparisons with previously published formulas. A total of 1197 cases were recruited in this study. All interclass and intraclass correlation coefficients of US measurements were greater than 0.75. Two principal components (PCs), derived from 9 US measurements, were considered primary in determining estimated fetal birth weight. Stepwise linear regression analysis showed a positive association between birth weight and PC1 and PC2. In the development group, our model had a small mean percentage error (mean ± SD, 3.661% ± 3.161%). At least a 47.558% decrease in the mean percentage error and a 57.421% decrease in the standard deviation were noted for the new model compared with previously published formulas. The results were similar in the validation group, and the new model covered 100% of birth weights within 10% of actual birth weights. The birth weight prediction model based on 2- and 3-dimensional US by PCA could help improve the precision of estimated fetal birth weight. © 2018 by the American Institute of Ultrasound in Medicine.
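    The modelling idea (PCA on the ultrasound measurements, followed by linear regression of birth weight on the leading PC scores) amounts to principal component regression; the following hedged sketch uses simulated measurements and coefficients, not the study's data.

```python
# Illustrative principal-component-regression sketch; all numbers simulated.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 600                                   # development-group size in the abstract
us = rng.normal(size=(n, 9))              # stand-in for nine 2-D/3-D US measurements
weight = 3300 + 250 * us[:, 0] + 180 * us[:, 1] + rng.normal(0, 150, n)  # grams

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(us))
model = LinearRegression().fit(scores, weight)
pred = model.predict(scores)
mpe = 100 * np.mean(np.abs(pred - weight) / weight)
print(f"mean absolute percentage error: {mpe:.2f}%")
```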

  13. NMR-based Metabolomics Analysis of Liver from C57BL/6 Mouse Exposed to Ionizing Radiation

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, Xiongjie [Pacific Northwest National Laboratory, Richland, Washington 99352; State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, National Center for Magnetic Resonance in Wuhan, Wuhan Institute of Physics and Mathematics, the Chinese Academy of Sciences, Wuhan, 430071, PR China; University of Chinese Academy of Sciences, Beijing 100049, China; Hu, Mary [Pacific Northwest National Laboratory, Richland, Washington 99352; Zhang, Xu [State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, National Center for Magnetic Resonance in Wuhan, Wuhan Institute of Physics and Mathematics, the Chinese Academy of Sciences, Wuhan, 430071, PR China; Hu, Jian Zhi [Pacific Northwest National Laboratory, Richland, Washington 99352

    2017-07-01

    The health effects of exposure to ionizing radiation are attracting great interest in the space exploration community and among patients considering radiotherapy. However, the impact on liver metabolism after exposure to high-dose radiation has not yet been clearly defined. In the present study, 1H nuclear magnetic resonance (NMR) based metabolomics combined with multivariate data analysis is applied to study the metabolic changes in the liver of C57BL/6 mice after whole-body exposure to either gamma (3.0 and 7.8 Gy) or proton (3.0 Gy) radiation. Principal component analysis (PCA) and orthogonal projection to latent structures analysis (OPLS) are employed for classification and identification of potential biomarkers associated with gamma and proton irradiation. The results show that the radiation-exposed groups can be well separated from the control group. At the same radiation dosage, the group exposed to proton radiation is well separated from the group exposed to gamma radiation, indicating that different radiation sources induce different alterations in the metabolic profile. For both gamma and proton radiation at the high doses studied in this work, the concentrations of choline, O-phosphocholine and trimethylamine N-oxide are statistically decreased relative to the control groups, while those of glutamine, glutathione, malate, creatinine, phosphate, betaine and 4-hydroxyphenylacetate are statistically significantly elevated after exposure to radiation. Since these altered metabolites are associated with multiple biological pathways, the changes suggest that radiation exposure induces abnormalities in multiple biological pathways. In particular, metabolites such as 4-hydroxyphenylacetate, betaine, glutamine, choline and trimethylamine N-oxide may be good candidate pre-diagnostic biomarkers of ionizing radiation exposure in the liver.
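    The unsupervised classification step (PCA on binned spectral intensities to check whether exposed and control groups separate in score space) can be sketched roughly as below; the group sizes, bin count and simulated shift are placeholders, and the supervised OPLS step is not reproduced here.

```python
# Minimal PCA-separation sketch on simulated 1H NMR bin intensities;
# not the study's data or its OPLS model.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
control = rng.normal(loc=1.0, scale=0.1, size=(8, 120))   # 8 mice x 120 spectral bins
exposed = rng.normal(loc=1.0, scale=0.1, size=(8, 120))
exposed[:, :10] += 0.5          # pretend a handful of metabolite bins shift

X = np.vstack([control, exposed])
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
print("control  PC1 mean:", scores[:8, 0].mean())
print("exposed  PC1 mean:", scores[8:, 0].mean())
```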

  14. Evidence for age-associated disinhibition of the wake drive provided by scoring principal components of the resting EEG spectrum in sleep-provoking conditions.

    Science.gov (United States)

    Putilov, Arcady A; Donskaya, Olga G

    2016-01-01

    Age-associated changes in different bandwidths of the human electroencephalographic (EEG) spectrum are well documented, but their functional significance is poorly understood. This spectrum seems to represent the summation of simultaneous influences of several sleep-wake regulatory processes. Scoring of its orthogonal (uncorrelated) principal components can help to separate the brain signatures of these processes. In particular, opposite age-associated changes were documented for scores on the two largest (1st and 2nd) principal components of the sleep EEG spectrum. A decrease of the first score and an increase of the second score can reflect, respectively, the weakening of the sleep drive and disinhibition of the opposing wake drive with age. In order to support the suggestion of age-associated disinhibition of the wake drive from the antagonistic influence of the sleep drive, we analyzed principal component scores of the resting EEG spectra obtained in sleep deprivation experiments with 81 healthy young adults aged between 19 and 26 years and 40 healthy older adults aged between 45 and 66 years. On the second day of the sleep deprivation experiments, frontal scores on the 1st principal component of the EEG spectrum demonstrated an age-associated reduction of the response to eyes-closed relaxation. Scores on the 2nd principal component were either initially increased during wakefulness or less responsive to such sleep-provoking conditions (frontal and occipital scores, respectively). These results are in line with the suggestion of disinhibition of the wake drive with age. They provide an explanation of why older adults are less vulnerable to sleep deprivation than young adults.

  15. A Principal Component Analysis of 39 Scientific Impact Measures

    Science.gov (United States)

    Bollen, Johan; Van de Sompel, Herbert

    2009-01-01

    Background The impact of scientific publications has traditionally been expressed in terms of citation counts. However, scientific activity has moved online over the past decade. To better capture scientific impact in the digital era, a variety of new impact measures has been proposed on the basis of social network analysis and usage log data. Here we investigate how these new measures relate to each other, and how accurately and completely they express scientific impact. Methodology We performed a principal component analysis of the rankings produced by 39 existing and proposed measures of scholarly impact that were calculated on the basis of both citation and usage log data. Conclusions Our results indicate that the notion of scientific impact is a multi-dimensional construct that cannot be adequately measured by any single indicator, although some measures are more suitable than others. The commonly used citation Impact Factor is not positioned at the core of this construct, but at its periphery, and should thus be used with caution. PMID:19562078

  16. A principal component analysis of 39 scientific impact measures.

    Directory of Open Access Journals (Sweden)

    Johan Bollen

    Full Text Available BACKGROUND: The impact of scientific publications has traditionally been expressed in terms of citation counts. However, scientific activity has moved online over the past decade. To better capture scientific impact in the digital era, a variety of new impact measures has been proposed on the basis of social network analysis and usage log data. Here we investigate how these new measures relate to each other, and how accurately and completely they express scientific impact. METHODOLOGY: We performed a principal component analysis of the rankings produced by 39 existing and proposed measures of scholarly impact that were calculated on the basis of both citation and usage log data. CONCLUSIONS: Our results indicate that the notion of scientific impact is a multi-dimensional construct that cannot be adequately measured by any single indicator, although some measures are more suitable than others. The commonly used citation Impact Factor is not positioned at the core of this construct, but at its periphery, and should thus be used with caution.

  17. A Principal Component Analysis/Fuzzy Comprehensive Evaluation for Rockburst Potential in Kimberlite

    Science.gov (United States)

    Pu, Yuanyuan; Apel, Derek; Xu, Huawei

    2018-02-01

    Kimberlite is an igneous rock which sometimes bears diamonds. Most of the diamonds mined in the world today are found in kimberlite ores. Burst potential in kimberlite has not been investigated, because kimberlite is mostly mined using open-pit methods, which pose very little threat of rock bursting. However, as the mining depth keeps increasing, mines convert to underground mining methods, which can pose a threat of rock bursting in kimberlite. This paper focuses on the burst potential of kimberlite at a diamond mine in northern Canada. A combined model using the methods of principal component analysis (PCA) and fuzzy comprehensive evaluation (FCE) is developed to process data from 12 different locations in kimberlite pipes. Based on the 12 calculated fuzzy evaluation vectors, 8 locations show a moderate burst potential, 2 locations show no burst potential, and 2 locations show strong and violent burst potential, respectively. Using statistical principles, a Mahalanobis distance is adopted to build a comprehensive fuzzy evaluation vector for the whole mine; the final evaluation of burst potential is moderate, which is verified by the actual rockbursting situation at the mine site.

  18. Derivation of the reduced reaction mechanisms of ozone depletion events in the Arctic spring by using concentration sensitivity analysis and principal component analysis

    Directory of Open Access Journals (Sweden)

    L. Cao

    2016-12-01

    Full Text Available The ozone depletion events (ODEs) in the springtime Arctic have been investigated since the 1980s. It is found that the depletion of ozone is highly associated with an auto-catalytic reaction cycle, which mostly involves bromine-containing compounds. Moreover, bromide stored in various substrates in the Arctic, such as the underlying surface covered by ice and snow, can also be activated by absorbed HOBr. Subsequently, this leads to an explosive increase of the bromine amount in the troposphere, which is called the “bromine explosion mechanism”. In the present study, a reaction scheme representing the chemistry of ozone depletion and halogen release is processed with two different mechanism reduction approaches, namely concentration sensitivity analysis and principal component analysis. In the concentration sensitivity analysis, the dependence of the mixing ratios of ozone and the principal bromine species on the rate of each reaction in the ODE mechanism is identified. Furthermore, the most influential reactions in different time periods of ODEs are also revealed. By removing 11 reactions with maximum absolute sensitivities lower than 10 %, a reduced reaction mechanism of ODEs is derived. The onsets of each time period of ODEs in simulations using the original reaction mechanism and the reduced reaction mechanism are identical, while the maximum deviation of the mixing ratio of the principal bromine species between the mechanisms is found to be less than 1 %. By performing the principal component analysis on an array of the sensitivity matrices, the dependence of a particular species concentration on a combination of the reaction rates in the mechanism is revealed. Redundant reactions are indicated by principal components corresponding to small eigenvalues and insignificant elements in principal components with large eigenvalues. Through this investigation, aside from the 11 reactions identified as
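    As a hedged illustration of the principal-component step, the sketch below eigen-decomposes S^T S for a normalized sensitivity matrix S (target concentrations x reactions) and flags reactions that load only on small-eigenvalue components as removal candidates; the matrix and thresholds are arbitrary stand-ins, not the ODE mechanism.

```python
# Hedged sketch of PCA on a sensitivity matrix to flag redundant reactions;
# S is a random stand-in, not the Arctic ODE mechanism.
import numpy as np

rng = np.random.default_rng(4)
S = rng.normal(size=(20, 40))      # 20 target concentrations x 40 reactions
S[:, 30:] *= 0.01                  # pretend the last 10 reactions barely matter

eigval, eigvec = np.linalg.eigh(S.T @ S)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

major = eigval > 1e-2 * eigval[0]  # keep only the "large" eigenvalue components
importance = np.sqrt((eigvec[:, major] ** 2 * eigval[major]).sum(axis=1))
redundant = np.where(importance < 0.1 * importance.max())[0]
print("candidate redundant reactions:", redundant)
```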

  19. Radiation-induced structural transitions in composite materials with strong interaction of polymer components

    International Nuclear Information System (INIS)

    Zaikin, Yu.A.; Koztaeva, U.P.

    2002-01-01

    In earlier papers the internal friction (IF) method was applied to studies of structural relaxation in different types of polymer-based composite materials (glass-cloth, paper-based and foiled laminates impregnated with epoxy and phenolic resins) irradiated by 2 MeV electrons in the dose range of 0.1-50.0 MGy. The selectivity and high sensitivity of the internal friction method made it possible to distinguish glassy transitions in different structural components of the composites. The relaxation processes observed were identified and attributed to structural alterations in the polymer filler, the binder and the boundary layers. It was shown that changes in the parameters of the relaxation maxima during irradiation can be considered quantitative characteristics of the degree of radiation-induced degradation or cross-linking of polymer molecules. This paper deals with specific features of IF spectra in paper-based laminates where both the filler fibers and the binder are strongly interacting polymers. The anisotropy of viscous and elastic properties is very weak for this kind of material, so that IF measurements give nearly the same result independently of the filler fiber orientation in the sample. The main reasons for this are the rigid chain structure of the fillers (polyethylene terephthalate and cellulose) and the good adhesion, strengthened by diffusion of the epoxy or phenolic binder into defect regions of the filler. The IF temperature dependence observed in paper-based laminates is represented by a superposition of two very broad relaxation maxima associated with transitions from the glassy to the high-elastic state in structural components, each based on one of the polymers. The inflection points characteristic of the IF temperature dependence in paper-based laminates give a reason to treat them as a superposition of α-peaks associated with transitions from the glassy to the high-elastic state in structural components of a composite based on the binder and the filler, respectively. Another

  20. Study of Seasonal Variation in Groundwater Quality of Sagar City (India) by Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Hemant Pathak

    2011-01-01

    Full Text Available Groundwater is one of the major sources of drinking water in Sagar city (India). In this study 15 sampling stations were selected for investigation of 14 chemical parameters. The work was carried out during different months of the pre-monsoon, monsoon and post-monsoon seasons from June 2009 to June 2010. Multivariate statistical techniques, namely principal component and cluster analysis, were applied to the datasets to investigate seasonal variations in groundwater quality. Principal axis factoring has been used to observe the mode of association of parameters and their interrelationships, for evaluating water quality. The average values of BOD, COD, ammonia and iron were high during the entire study period. Elevated values of BOD and ammonia in the monsoon, a slightly higher value of BOD in the post-monsoon, and elevated BOD, ammonia and iron in the pre-monsoon period reflected a temporal effect on groundwater. Results of the principal component analysis showed that all the parameters contribute equally and significantly to groundwater quality variations. Factor 1 and factor 2 revealed that the DO value deteriorates due to organic load (BOD/ammonia) in different seasons. Hierarchical cluster analysis grouped the 15 stations into four clusters in the monsoon, five clusters in the post-monsoon and five clusters in the pre-monsoon with similar water quality features. The clustered groups in the monsoon, post-monsoon and pre-monsoon each contained one station exhibiting significant spatial variation in physicochemical composition. Anthropogenic nitrogenous species appear as fallout from modernization activities. The study indicated that the groundwater was sufficiently well oxygenated and nutrient-rich at the study sites.

  1. Time-Frequency Data Reduction for Event Related Potentials: Combining Principal Component Analysis and Matching Pursuit

    Directory of Open Access Journals (Sweden)

    Selin Aviyente

    2010-01-01

    Full Text Available Joint time-frequency representations offer a rich representation of event related potentials (ERPs that cannot be obtained through individual time or frequency domain analysis. This representation, however, comes at the expense of increased data volume and the difficulty of interpreting the resulting representations. Therefore, methods that can reduce the large amount of time-frequency data to experimentally relevant components are essential. In this paper, we present a method that reduces the large volume of ERP time-frequency data into a few significant time-frequency parameters. The proposed method is based on applying the widely used matching pursuit (MP approach, with a Gabor dictionary, to principal components extracted from the time-frequency domain. The proposed PCA-Gabor decomposition is compared with other time-frequency data reduction methods such as the time-frequency PCA approach alone and standard matching pursuit methods using a Gabor dictionary for both simulated and biological data. The results show that the proposed PCA-Gabor approach performs better than either the PCA alone or the standard MP data reduction methods, by using the smallest amount of ERP data variance to produce the strongest statistical separation between experimental conditions.
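    A much-simplified version of the PCA-Gabor idea (PCA on the vectorized time-frequency surfaces, then a greedy matching pursuit with a small Gabor dictionary run on a principal component) is sketched below; the surfaces, dictionary grid and number of iterations are all illustrative assumptions.

```python
# Hedged PCA + matching-pursuit sketch with a tiny Gabor dictionary;
# the "time-frequency surfaces" here are simulated vectors.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
n_trials, n_samples = 60, 256
tf_surfaces = rng.normal(size=(n_trials, n_samples))   # stand-in vectorized TF maps

comp = PCA(n_components=1).fit(tf_surfaces).components_[0]

# small Gabor dictionary: Gaussian-windowed cosines on a coarse grid
t = np.arange(n_samples)
atoms = []
for center in range(0, n_samples, 32):
    for width in (8, 16, 32):
        for freq in (0.05, 0.1, 0.2):
            g = np.exp(-0.5 * ((t - center) / width) ** 2) * np.cos(2 * np.pi * freq * t)
            atoms.append(g / np.linalg.norm(g))
D = np.array(atoms)

# greedy matching pursuit: repeatedly subtract the best-matching atom
residual, picked = comp.copy(), []
for _ in range(5):
    corr = D @ residual
    k = int(np.argmax(np.abs(corr)))
    picked.append((k, float(corr[k])))
    residual = residual - corr[k] * D[k]
print(picked)
```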

  2. Enhancement of Jahani (Firouzabad) salt dome lithological units, using principal components analysis

    Directory of Open Access Journals (Sweden)

    Houshang Pourcaseb

    2016-04-01

    Full Text Available In this study, principal components analysis was used to investigate the lithological characteristics of the Jahani salt dome, Firouzabad. The spectral curves of rocks in the study area show that the evaporite rocks have the highest reflectance at specified wavelengths. The highest reflection is seen in gypsum and white salt, while minimal reflection is observed in the igneous rocks of the region. The results show that most of the PCs carry little information: PC1, with the highest variance, carries the most information, while PC2 carries less. The regional geological map and field controls show compatibility between the enhanced zones and the outcrops in the field.

  3. Determinants of Return on Assets in Romania: A Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Sorana Vatavu

    2015-03-01

    Full Text Available This paper examines the impact of capital structure, as well as its determinants, on the financial performance of Romanian companies listed on the Bucharest Stock Exchange. The analysis is based on cross-sectional regressions and factor analysis, and it refers to a ten-year period (2003-2012). Return on assets (ROA) is the performance proxy, while the capital structure indicator is the debt ratio. Regression results indicate that Romanian companies register higher returns when they operate with limited borrowings. Among the capital structure determinants, tangibility and business risk have a negative impact on ROA, but the level of taxation has a positive effect, showing that companies manage their assets more efficiently during times of higher fiscal pressure. Performance is sustained by sales turnover, but not significantly influenced by high levels of liquidity. Periods of unstable economic conditions, reflected by high inflation rates and the current financial crisis, have a strong negative impact on corporate performance. Based on the regression results, three factors were considered through the method of iterated principal component factors: the first one incorporates debt and size, as an indicator of consumption; the second one integrates the influence of tangibility and liquidity, marking the investment potential; and the third one is an indicator of assessed risk, integrating the volatility of earnings with the level of taxation. ROA is significantly influenced by these three factors, regardless of the regression method used. The consumption factor has a negative impact on performance, while the investment and risk variables positively influence ROA.

  4. BANK CAPITAL AND MACROECONOMIC SHOCKS: A PRINCIPAL COMPONENTS ANALYSIS AND VECTOR ERROR CORRECTION MODEL

    Directory of Open Access Journals (Sweden)

    Christian NZENGUE PEGNET

    2011-07-01

    Full Text Available The recent financial turmoil has clearly highlighted the potential role of financial factors in the amplification of macroeconomic developments and stressed the importance of analyzing the relationship between banks’ balance sheets and economic activity. This paper assesses the impact of the bank capital channel in the transmission of shocks in Europe on the basis of banks’ balance sheet data. The empirical analysis is carried out through a Principal Component Analysis and a Vector Error Correction Model.

  5. Mining gene expression data by interpreting principal components

    Directory of Open Access Journals (Sweden)

    Mortazavi Ali

    2006-04-01

    Full Text Available Abstract Background There are many methods for analyzing microarray data that group together genes having similar patterns of expression over all conditions tested. However, in many instances the biologically important goal is to identify relatively small sets of genes that share coherent expression across only some conditions, rather than all or most conditions as required in traditional clustering; e.g. genes that are highly up-regulated and/or down-regulated similarly across only a subset of conditions. Equally important is the need to learn which conditions are the decisive ones in forming such gene sets of interest, and how they relate to diverse conditional covariates, such as disease diagnosis or prognosis. Results We present a method for automatically identifying such candidate sets of biologically relevant genes using a combination of principal components analysis and information theoretic metrics. To enable easy use of our methods, we have developed a data analysis package that facilitates visualization and subsequent data mining of the independent sources of significant variation present in gene microarray expression datasets (or in any other similarly structured high-dimensional dataset). We applied these tools to two public datasets, and highlight sets of genes most affected by specific subsets of conditions (e.g. tissues, treatments, samples, etc.). Statistically significant associations for highlighted gene sets were shown via global analysis for Gene Ontology term enrichment. Together with covariate associations, the tool provides a basis for building testable hypotheses about the biological or experimental causes of observed variation. Conclusion We provide an unsupervised data mining technique for diverse microarray expression datasets that is distinct from major methods now in routine use. In test uses, this method, based on publicly available gene annotations, appears to identify numerous sets of biologically relevant genes. It

  6. Application of principal component analysis and information fusion technique to detect hotspots in NOAA/AVHRR images of Jharia coalfield, India - article no. 013523

    Energy Technology Data Exchange (ETDEWEB)

    Gautam, R.S.; Singh, D.; Mittal, A. [Indian Institute of Technology Roorkee, Roorkee (India)

    2007-07-01

    The present paper proposes an algorithm for hotspot (sub-surface fire) detection in NOAA/AVHRR images of the Jharia region of India, employing Principal Component Analysis (PCA) and an information fusion technique. The proposed technique is very simple to implement and is more adaptive in comparison to thresholding, multi-thresholding and contextual algorithms. The algorithm takes into account the information of AVHRR channels 1, 2, 3 and 4 and the vegetation indices NDVI and MSAVI. The proposed technique consists of three steps: (1) detection and removal of cloud and water pixels from the preprocessed AVHRR image and screening out the noise of channel 3, (2) application of PCA to the multi-channel information along with the vegetation index information of the NOAA/AVHRR image to obtain principal components, and (3) fusion of the information obtained from principal components 1 and 2 to classify image pixels as hotspots. Image processing techniques are applied to fuse the information in the first two principal component images, and no absolute threshold is incorporated to specify whether a particular pixel belongs to the hotspot class or not; hence, the proposed method is adaptive in nature and works successfully for most of the AVHRR images, with an average 87.27% detection accuracy and a 0.201% false alarm rate when compared with ground truth points in the Jharia region of India.
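    The three-step pipeline above (mask clouds and water, run PCA on the per-pixel channel and vegetation-index features, then fuse the first two principal component images) might be sketched as follows; the image array, mask and fusion rule are simplified assumptions, not the paper's algorithm.

```python
# Illustrative mask -> PCA -> PC1/PC2 fusion sketch on simulated image data;
# the fusion rule here is a simplification, not the published method.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
h, w = 100, 100
features = rng.normal(size=(h * w, 6))      # channels 1-4 plus NDVI and MSAVI per pixel
valid = rng.random(h * w) > 0.2             # stand-in cloud/water mask

pcs = PCA(n_components=2).fit_transform(features[valid])
pc1 = np.zeros(h * w)
pc2 = np.zeros(h * w)
pc1[valid], pc2[valid] = pcs[:, 0], pcs[:, 1]

# simple fusion: flag pixels that are jointly extreme in both components
score = np.abs(pc1) * np.abs(pc2)
hotspots = (score > np.percentile(score[valid], 99)).reshape(h, w)
print(hotspots.sum(), "candidate hotspot pixels")
```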

  7. Application of mass spectrometry based electronic nose and chemometrics for fingerprinting radiation treatment

    International Nuclear Information System (INIS)

    Gupta, Sumit; Variyar, Prasad S.; Sharma, Arun

    2015-01-01

    Volatile compounds were isolated from apples and grapes employing solid phase micro extraction (SPME) and subsequently analyzed by GC/MS equipped with a transfer line without a stationary phase. The single peak obtained was integrated to obtain the total mass spectrum of the volatile fraction of the samples. A data matrix containing the relative abundance of all mass-to-charge ratios was subjected to principal component analysis (PCA) and linear discriminant analysis (LDA) to identify radiation treatment. The PCA results suggested that there is sufficient variability between control and irradiated samples to build classification models based on supervised techniques. LDA successfully aided in segregating control from irradiated samples at all doses (0.1, 0.25, 0.5, 1.0, 1.5, 2.0 kGy). SPME-MS with chemometrics was successfully demonstrated as a simple screening method for radiation treatment. - Highlights: • Total mass spectra obtained from HS-MS for control and irradiated fruits. • Grapes and apples are chosen for present study. • Total mass spectrum was analyzed by two chemometric techniques (PCA and LDA). • Successful segregation of control and irradiated samples achieved using chemometrics
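    The chemometric step (PCA on the total-mass-spectrum fingerprints followed by a linear discriminant classifier separating control from irradiated samples) can be illustrated with the hedged sketch below; the spectra, sample counts and component number are simulated assumptions.

```python
# Hedged PCA + LDA classification sketch on simulated mass-spectral fingerprints.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
control = rng.normal(size=(20, 150))            # 20 samples x 150 m/z abundances
irradiated = rng.normal(size=(20, 150)) + 0.4   # pretend irradiation shifts the profile
X = np.vstack([control, irradiated])
y = np.array([0] * 20 + [1] * 20)

clf = make_pipeline(StandardScaler(), PCA(n_components=5),
                    LinearDiscriminantAnalysis())
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```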

  8. SU-E-I-58: Objective Models of Breast Shape Undergoing Mammography and Tomosynthesis Using Principal Component Analysis.

    Science.gov (United States)

    Feng, Ssj; Sechopoulos, I

    2012-06-01

    To develop an objective model of the shape of the compressed breast undergoing mammographic or tomosynthesis acquisition. Automated thresholding and edge detection were performed on 984 anonymized digital mammograms (492 craniocaudal (CC) view mammograms and 492 mediolateral oblique (MLO) view mammograms) to extract the edge of each breast. Principal Component Analysis (PCA) was performed on these edge vectors to identify a limited set of parameters and eigenvectors. These parameters and eigenvectors comprise a model that can be used to describe the breast shapes present in acquired mammograms and to generate realistic models of breasts undergoing acquisition. Sample breast shapes were then generated from this model and evaluated. The mammograms in the database were previously acquired for a separate study and authorized for use in further research. The PCA successfully identified two principal components and their corresponding eigenvectors, forming the basis for the breast shape model. The simulated breast shapes generated from the model are reasonable approximations of clinically acquired mammograms. Using PCA, we have obtained models of the compressed breast undergoing mammographic or tomosynthesis acquisition based on objective analysis of a large image database. Up to now, the breast in the CC view has been approximated as a semi-circular tube, while there has been no objectively obtained model for the MLO view breast shape. Such models can be used for various breast imaging research applications, such as x-ray scatter estimation and correction, dosimetry estimates, and computer-aided detection and diagnosis. © 2012 American Association of Physicists in Medicine.
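    A statistical shape model of this kind (PCA on stacked edge-coordinate vectors, with new shapes generated by perturbing the leading scores around the mean shape) can be sketched as follows; the synthetic contours and the choice of two components are illustrative assumptions.

```python
# Hedged PCA shape-model sketch: mean edge + perturbed principal scores
# generate new plausible contours.  The contours below are synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
n_images, n_points = 492, 100
angles = np.linspace(0, np.pi, n_points)
radii = 1.0 + 0.2 * rng.normal(size=(n_images, 1))      # random size variation
edges = np.hstack([radii * np.cos(angles), radii * np.sin(angles)])  # (492, 200)

pca = PCA(n_components=2).fit(edges)
mean_shape = pca.mean_

# sample the two principal scores to generate a new plausible edge
b = rng.normal(scale=np.sqrt(pca.explained_variance_))
new_edge = mean_shape + pca.components_.T @ b
print(new_edge.shape)   # (200,) -> 100 (x, y) pairs
```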

  9. Preliminary study of soil permeability properties using principal component analysis

    Science.gov (United States)

    Yulianti, M.; Sudriani, Y.; Rustini, H. A.

    2018-02-01

    Soil permeability measurement is undoubtedly important in carrying out soil-water research such as rainfall-runoff modelling, irrigation water distribution systems, etc. It is also known that acquiring reliable soil permeability data is rather laborious, time-consuming, and costly. Therefore, it is desirable to develop a prediction model. Several studies of empirical equations for predicting permeability have been undertaken by many researchers. These studies derived their models from areas whose soil characteristics differ from Indonesian soils, suggesting that these permeability models may be site-specific. The purpose of this study is to identify which soil parameters correspond strongly to soil permeability and to propose a preliminary model for permeability prediction. Principal component analysis (PCA) was applied to 16 parameters analysed from 37 sites, consisting of 91 samples, obtained from the Batanghari Watershed. The findings indicated five variables that correlate strongly with soil permeability, and we recommend a preliminary permeability model with potential for further development.

  10. Kernel Principal Component Analysis and its Applications in Face Recognition and Active Shape Models

    OpenAIRE

    Wang, Quan

    2012-01-01

    Principal component analysis (PCA) is a popular tool for linear dimensionality reduction and feature extraction. Kernel PCA is the nonlinear form of PCA, which better exploits the complicated spatial structure of high-dimensional features. In this paper, we first review the basic ideas of PCA and kernel PCA. Then we focus on the reconstruction of pre-images for kernel PCA. We also give an introduction on how PCA is used in active shape models (ASMs), and discuss how kernel PCA can be applied ...

  11. Multiobjective Optimization of ELID Grinding Process Using Grey Relational Analysis Coupled with Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    S. Prabhu

    2014-06-01

    Full Text Available A carbon nanotube (CNT) mixed grinding wheel has been used in the electrolytic in-process dressing (ELID) grinding process to analyze the surface characteristics of AISI D2 tool steel. The CNT grinding wheel has excellent thermal conductivity and good mechanical properties, which are used to improve the surface finish of the workpiece. Multiobjective optimization by grey relational analysis coupled with principal component analysis has been used to optimize the process parameters of the ELID grinding process. Based on the Taguchi design of experiments, an L9 orthogonal array table was chosen for the experiments. The confirmation experiment verifies that the proposed grey-based Taguchi method is able to find the optimal process parameters for the multiple quality characteristics of surface roughness and metal removal rate. Analysis of variance (ANOVA) has been used to verify and validate the model. An empirical model for the prediction of the output parameters has been developed using regression analysis, and the results were compared with and without the CNT grinding wheel in the ELID grinding process.

  12. Registration of dynamic dopamine D2 receptor images using principal component analysis

    Energy Technology Data Exchange (ETDEWEB)

    Acton, P.D.; Ell, P.J. [Institute of Nuclear Medicine, University College London Medical School, London (United Kingdom); Pilowsky, L.S.; Brammer, M.J. [Institute of Psychiatry, De Crespigny Park, London (United Kingdom); Suckling, J. [Clinical Age Research Unit, Kings College School of Medicine and Dentistry, London (United Kingdom)

    1997-11-01

    This paper describes a novel technique for registering a dynamic sequence of single-photon emission tomography (SPET) dopamine D2 receptor images, using principal component analysis (PCA). Conventional methods for registering images, such as count difference and correlation coefficient algorithms, fail to take into account the dynamic nature of the data, resulting in large systematic errors when registering time-varying images. However, by using principal component analysis to extract the temporal structure of the image sequence, misregistration can be quantified by examining the distribution of eigenvalues. The registration procedures were tested using a computer-generated dynamic phantom derived from a high-resolution magnetic resonance image of a realistic brain phantom. Each method was also applied to clinical SPET images of dopamine D2 receptors, using the ligands iodine-123 iodobenzamide and iodine-123 epidepride, to investigate the influence of misregistration on kinetic modelling parameters and the binding potential. The PCA technique gave highly significant (P <0.001) improvements in image registration, leading to alignment errors in x and y of about 25% of the alternative methods, with reductions in autocorrelations over time. It could also be applied to align image sequences which the other methods failed completely to register, particularly 123I-epidepride scans. The PCA method produced data of much greater quality for subsequent kinetic modelling, with an improvement of nearly 50% in the χ² of the fit to the compartmental model, and provided superior quality registration of particularly difficult dynamic sequences. (orig.) With 4 figs., 2 tabs., 26 refs.

  13. Application of empirical orthogonal functions or principal component analysis to environmental variability data

    International Nuclear Information System (INIS)

    Carvajal Escobar, Yesid; Marco Segura, Juan B

    2005-01-01

    An EOF or principal component (PC) analysis was performed for monthly precipitation (1972-1998) at 50 stations, and for monthly flow rate (1951-2000) at 8 stations, in Valle del Cauca state, Colombia. Beforehand, we applied five measures in order to verify the suitability of the analysis. These measures were: (i) evaluation of the significance level of the correlations between variables; (ii) the Kaiser-Meyer-Olkin (KMO) test; (iii) the Bartlett sphericity test; (iv) the measure of sampling adequacy (MSA); and (v) the percentage of non-redundant residuals with absolute values > 0.05. For the selection of the significant PCs in each set of variables we applied seven criteria: the graphical (scree) method, the explained variance percentage, the mean-root criterion, the Velicer, Bartlett and broken-stick tests, and the cross-validation test. We chose the latter as the best one; it is robust and quantitative. The precipitation stations were divided into three homogeneous groups by applying a hierarchical cluster analysis, which was verified through the geographic method and discriminant analysis of the first four EOFs of precipitation. There are many advantages to the EOF method: reduction of the dimensionality of multivariate data, calculation of missing data, evaluation and reduction of multicollinearity, building of homogeneous groups, and detection of outliers. With the first four principal components we can explain 60.34% of the total variance of monthly precipitation for Valle del Cauca state, and 94% of the total variance for the selected flow rate records.

  14. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis

    Science.gov (United States)

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Due to advances in sensor technology, the growing volume of large medical image data makes it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. At the same time, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing the irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which are difficult to determine in practice. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both the liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383

  15. New approach based on fuzzy logic and principal component analysis for the classification of two-dimensional maps in health and disease. Application to lymphomas.

    Science.gov (United States)

    Marengo, Emilio; Robotti, Elisa; Righetti, Pier Giorgio; Antonucci, Francesca

    2003-07-04

    Two-dimensional (2D) electrophoresis is the most widespread technique for the separation of proteins in biological systems. This technique produces 2D maps of high complexity, which creates difficulties in the comparison of different samples. The method proposed in this paper for the comparison of different 2D maps can be summarised in four steps: (a) digitalisation of the image; (b) fuzzification of the digitalised map in order to account for the variability of the two-dimensional electrophoretic separation; (c) decoding by principal component analysis of the previously obtained fuzzy maps, in order to reduce the system dimensionality; (d) classification analysis (linear discriminant analysis), in order to separate the samples contained in the dataset according to the classes present in said dataset. This method was applied to a dataset of eight samples: four belonging to healthy human lymph nodes and four deriving from non-Hodgkin lymphomas. The amount of fuzzification of the original map is governed by the sigma parameter. The larger the value, the fuzzier the resulting transformed map. The effect of the fuzzification parameter was investigated, the optimal results being obtained for sigma = 1.75 and 2.25. Principal component analysis and linear discriminant analysis allowed the separation of the two classes of samples without any misclassification.

  16. Source apportionment of soil heavy metals using robust absolute principal component scores-robust geographically weighted regression (RAPCS-RGWR) receptor model.

    Science.gov (United States)

    Qu, Mingkai; Wang, Yan; Huang, Biao; Zhao, Yongcun

    2018-06-01

    The traditional source apportionment models, such as absolute principal component scores-multiple linear regression (APCS-MLR), are usually susceptible to outliers, which may be widely present in regional geochemical datasets. Furthermore, the models are built merely on variable space instead of geographical space and thus cannot effectively capture the local spatial characteristics of each source's contributions. To overcome these limitations, a new receptor model, robust absolute principal component scores-robust geographically weighted regression (RAPCS-RGWR), was proposed based on the traditional APCS-MLR model. The new method was then applied to the source apportionment of soil metal elements in a region of Wuhan City, China, as a case study. Evaluations revealed that: (i) the RAPCS-RGWR model had better performance than the APCS-MLR model in identifying the major sources of soil metal elements, and (ii) the source contributions estimated by the RAPCS-RGWR model were closer to the true soil metal concentrations than those estimated by the APCS-MLR model. It is shown that the proposed RAPCS-RGWR model is a more effective source apportionment method than APCS-MLR (i.e., the non-robust, global model) in dealing with regional geochemical datasets. Copyright © 2018 Elsevier B.V. All rights reserved.
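    For context, the classical APCS-MLR baseline that RAPCS-RGWR builds on can be sketched roughly as below (PCA on standardized concentrations, absolute scores obtained by subtracting the scores of an artificial zero-concentration sample, then multiple linear regression of each metal on those scores); the robust and geographically weighted parts of the new model are not reproduced, and all data are simulated.

```python
# Hedged sketch of the classical APCS-MLR receptor model (not RAPCS-RGWR);
# soil metal concentrations are simulated placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(13)
C = np.abs(rng.normal(30, 10, size=(150, 8)))          # 150 sites x 8 metals
mean, std = C.mean(axis=0), C.std(axis=0)
Z = (C - mean) / std

pca = PCA(n_components=3)
scores = pca.fit_transform(Z)
zero_sample = ((np.zeros(8) - mean) / std).reshape(1, -1)
apcs = scores - pca.transform(zero_sample)             # absolute PC scores

# MLR of each metal on the APCS gives per-source mean contributions
for j in range(C.shape[1]):
    reg = LinearRegression().fit(apcs, C[:, j])
    contrib = reg.coef_ * apcs.mean(axis=0)
    print(f"metal {j}: mean source contributions {np.round(contrib, 2)}")
```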

  17. 14 CFR 119.47 - Maintaining a principal base of operations, main operations base, and main maintenance base...

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Maintaining a principal base of operations, main operations base, and main maintenance base; change of address. 119.47 Section 119.47 Aeronautics... Under Part 121 or Part 135 of This Chapter § 119.47 Maintaining a principal base of operations, main...

  18. An assessment of ultraviolet radiation components of light emitted ...

    African Journals Online (AJOL)

    An assessment of ultraviolet radiation components of light emitted from electric arcs and their possible exposure risks. ... The study of ultraviolet radiation has recently become of interest because of the health hazards it poses to humans. Apart from its intensity reaching the earth from the sun, other man-made sources have ...

  19. Cherenkov radiation-based three-dimensional position-sensitive PET detector: A Monte Carlo study.

    Science.gov (United States)

    Ota, Ryosuke; Yamada, Ryoko; Moriya, Takahiro; Hasegawa, Tomoyuki

    2018-05-01

    Cherenkov radiation has recently received attention due to its prompt emission, which has the potential to improve the timing performance of radiation detectors dedicated to positron emission tomography (PET). In this study, a Cherenkov-based three-dimensional (3D) position-sensitive radiation detector was proposed, composed of a monolithic lead fluoride (PbF2) crystal and a photodetector array whose signals can be read out independently. Monte Carlo simulations were performed to estimate the performance of the proposed detector. The position and time resolutions were evaluated under various practical conditions. The radiator size and various properties of the photodetector, e.g., readout pitch and single photon timing resolution (SPTR), were parameterized. The single photon time response of the photodetector was assumed to be a single Gaussian for simplicity. The photon detection efficiency of the photodetector was assumed to be an ideal 100% for all wavelengths. Compton scattering was included in the simulations, but only partly analyzed. To estimate the position at which a γ-ray interacted in the Cherenkov radiator, the center-of-gravity (COG) method was employed. In addition, to estimate the depth of interaction (DOI), principal component analysis (PCA), a multivariate analysis method that has been used to identify patterns in data, was employed. The time-space distribution of Cherenkov photons was quantified to perform PCA. To evaluate the coincidence time resolution (CTR), the time difference of two independent γ-ray events was calculated. The detection time was defined as the first photon time after the SPTR of the photodetector was taken into account. The position resolution on the photodetector plane could be estimated with high accuracy by using a small number of Cherenkov photons. Moreover, PCA showed an ability to estimate the DOI. The position resolution heavily depends on the pitch of the photodetector array and the radiator

  20. Counterbalanced radiation detection device

    International Nuclear Information System (INIS)

    Platz, W.

    1986-01-01

    A counterbalanced radiation detection device is described which consists of: (a) a base; (b) a radiation detector having a known weight; (c) means connected with the radiation detector and the base for positioning the radiation detector in different heights with respect to the base; (d) electronic component means movably mounted on the base for counterbalancing the weight of the radiation detector; (e) means connected with the electronic component means and the radiation detector positioning means for positioning the electronic component means in different heights with respect to the base opposite to the heights of the radiation detector; (f) means connected with the radiation detector and the base for shifting the radiation detector horizontally with respect to the base; and (g) means connected with the electronic component means and the radiation detector shifting means for shifting the electronic component means horizontally with respect to the base in opposite direction to shifting of the radiation detector

  1. Integrative sparse principal component analysis of gene expression data.

    Science.gov (United States)

    Liu, Mengque; Fan, Xinyan; Fang, Kuangnan; Zhang, Qingzhao; Ma, Shuangge

    2017-12-01

    In the analysis of gene expression data, dimension reduction techniques have been extensively adopted. The most popular one is perhaps PCA (principal component analysis). To generate more reliable and more interpretable results, the SPCA (sparse PCA) technique has been developed. With the "small sample size, high dimensionality" characteristic of gene expression data, the analysis results generated from a single dataset are often unsatisfactory. Under contexts other than dimension reduction, integrative analysis techniques, which jointly analyze the raw data of multiple independent datasets, have been developed and shown to outperform "classic" meta-analysis, other multi-dataset techniques, and single-dataset analysis. In this study, we conduct integrative analysis by developing the iSPCA (integrative SPCA) method. iSPCA achieves the selection and estimation of sparse loadings using a group penalty. To take advantage of the similarity across datasets and generate more accurate results, we further impose contrasted penalties. Different penalties are proposed to accommodate different data conditions. Extensive simulations show that iSPCA outperforms the alternatives under a wide spectrum of settings. The analysis of breast cancer and pancreatic cancer data further shows iSPCA's satisfactory performance. © 2017 WILEY PERIODICALS, INC.
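    The group and contrasted penalties of iSPCA are not reproduced here, but the kind of sparse loadings it estimates can be illustrated with an ordinary single-dataset sparse PCA; the expression matrix and penalty value below are simulated assumptions.

```python
# Single-dataset sparse-PCA illustration (not the iSPCA algorithm);
# the expression data are simulated.
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(9)
X = rng.normal(size=(50, 200))                    # 50 samples x 200 genes
X[:, :5] += 2.0 * rng.normal(size=(50, 1))        # a small co-expressed gene block

spca = SparsePCA(n_components=2, alpha=2.0, random_state=0).fit(X)
loadings = spca.components_
print("non-zero loadings per component:", (loadings != 0).sum(axis=1))
```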

  2. Health status monitoring for ICU patients based on locally weighted principal component analysis.

    Science.gov (United States)

    Ding, Yangyang; Ma, Xin; Wang, Youqing

    2018-03-01

    Intelligent status monitoring for critically ill patients can help medical staff quickly discover and assess changes in disease and then make an appropriate treatment strategy. However, the general-purpose monitoring models now widely used have difficulty adapting to changes in intensive care unit (ICU) patients' status due to their fixed patterns, and a more robust, efficient and fast monitoring model should be developed for the individual. A data-driven learning approach combining locally weighted projection regression (LWPR) and principal component analysis (PCA) is first proposed and applied to monitor the nonlinear process of patients' health status in the ICU. LWPR is used to approximate the complex nonlinear process with local linear models, in which PCA can be further applied to status monitoring, and finally a global weighted statistic is obtained for detecting possible abnormalities. Moreover, some improved versions are developed, such as LWPR-MPCA and LWPR-JPCA, which also have superior performance. Eighteen subjects were selected from PhysioBank's Multi-parameter Intelligent Monitoring in Intensive Care II (MIMIC II) database, and two vital signs of each subject were chosen for online monitoring. The proposed method was compared with several existing methods including traditional PCA, partial least squares (PLS), just-in-time learning combined with modified PCA (L-PCA), and kernel PCA (KPCA). The experimental results demonstrated that the mean fault detection rate (FDR) of PCA can be improved by 41.7% after adding LWPR. The mean FDR of LWPR-MPCA was increased by 8.3% compared with the latest reported method, L-PCA. Meanwhile, LWPR required less training time than the others, especially KPCA. LWPR is introduced into ICU patient monitoring for the first time and achieves the best monitoring performance, including adaptability to changes in patient status and sensitivity for abnormality detection, as well as fast learning speed and low computational complexity. The algorithm
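    A plain (global) PCA monitoring statistic conveys the basic idea behind such schemes: fit PCA to a healthy baseline period, then raise an alarm when the squared prediction error of new vital-sign samples exceeds an empirical control limit. The sketch below is this simplified global version, not the locally weighted LWPR-PCA model of the paper, and all vital-sign values are simulated.

```python
# Simplified PCA squared-prediction-error (SPE) monitoring sketch;
# not the paper's LWPR-PCA method, and the vital signs are simulated.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(12)
hr = rng.normal(80, 5, 500)                              # heart rate, baseline period
spo2 = 98 - 0.1 * (hr - 80) + rng.normal(0, 0.3, 500)    # correlated second vital sign
baseline = np.column_stack([hr, spo2])

scaler = StandardScaler().fit(baseline)
pca = PCA(n_components=1).fit(scaler.transform(baseline))

def spe(samples):
    z = scaler.transform(samples)
    recon = pca.inverse_transform(pca.transform(z))
    return ((z - recon) ** 2).sum(axis=1)

limit = np.quantile(spe(baseline), 0.99)                 # empirical 99% control limit

new_hr = rng.normal(80, 5, 20)
new_spo2 = 98 - 0.1 * (new_hr - 80) + rng.normal(0, 0.3, 20)
new_spo2[-5:] -= 6.0                                     # pretend the status deteriorates
alarms = spe(np.column_stack([new_hr, new_spo2])) > limit
print("alarms:", alarms)
```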

  3. Informing principal policy reforms in South Africa through data-based evidence

    Directory of Open Access Journals (Sweden)

    Gabrielle Wills

    2015-12-01

    Full Text Available In the past decade there has been a notable shift in South African education policy that raises the value of school leadership as a lever for learning improvements. Despite a growing discourse on school leadership, there has been a lack of empirically based evidence on principals to inform, validate or debate the efficacy of proposed policies in raising the calibre of school principals. Drawing on findings from a larger study of the labour market for school principals in South Africa, this paper highlights four overarching characteristics of this market with implications for principal policy reforms. The paper notes that improving the design and implementation of policies guiding the appointment process for principals is a matter of urgency. A substantial and increasing number of principal replacements are taking place across South African schools given the rising age profile of school principals. In a context of low principal mobility and long tenure, the leadership trajectory of the average school is set for nearly a decade with each principal replacement. Evidence-based policy making has a strong role to play in getting this right.

  4. Neural Network for Principal Component Analysis with Applications in Image Compression

    Directory of Open Access Journals (Sweden)

    Luminita State

    2007-04-01

    Full Text Available Classical feature extraction and data projection methods have been extensively investigated in the pattern recognition and exploratory data analysis literature. Feature extraction and multivariate data projection help avoid the "curse of dimensionality", improve the generalization ability of classifiers and significantly reduce the computational requirements of pattern classifiers. During the past decade a large number of artificial neural networks and learning algorithms have been proposed for solving feature extraction problems, most of them being adaptive in nature and well suited to the many real environments where an adaptive approach is required. Principal Component Analysis, also called the Karhunen-Loeve transform, is a well-known statistical method for feature extraction, data compression and multivariate data projection, and it has been broadly used in a large range of signal and image processing, pattern recognition and data analysis applications.

  5. Improving the use of principal component analysis to reduce physiological noise and motion artifacts to increase the sensitivity of task-based fMRI.

    Science.gov (United States)

    Soltysik, David A; Thomasson, David; Rajan, Sunder; Biassou, Nadia

    2015-02-15

    Functional magnetic resonance imaging (fMRI) time series are subject to corruption by many noise sources, especially physiological noise and motion. Researchers have developed many methods to reduce physiological noise, including RETROICOR, which retroactively removes cardiac and respiratory waveforms collected during the scan, and CompCor, which applies principal components analysis (PCA) to remove physiological noise components without any physiological monitoring during the scan. We developed four variants of the CompCor method. The optimized CompCor method applies PCA to time series in a noise mask, but orthogonalizes each component to the BOLD response waveform and uses an algorithm to determine a favorable number of components to use as "nuisance regressors." Whole brain component correction (WCompCor) is similar, except that it applies PCA to time-series throughout the whole brain. Low-pass component correction (LCompCor) identifies low-pass filtered components throughout the brain, while high-pass component correction (HCompCor) identifies high-pass filtered components. We compared the new methods with the original CompCor method by examining the resulting functional contrast-to-noise ratio (CNR), sensitivity, and specificity. (1) The optimized CompCor method increased the CNR and sensitivity compared to the original CompCor method and (2) the application of WCompCor yielded the best improvement in the CNR and sensitivity. The sensitivity of the optimized CompCor, WCompCor, and LCompCor methods exceeded that of the original CompCor method. However, regressing noise signals showed a paradoxical consequence of reducing specificity for all noise reduction methods attempted. Published by Elsevier B.V.
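
    The CompCor family of methods can be pictured with a small sketch: principal components extracted from a noise region are regressed out of every voxel time series. The code below uses synthetic arrays and omits the BOLD orthogonalization and automatic component selection of the optimized variant described above.

```python
# Minimal CompCor-style sketch on synthetic data (not the authors' pipeline).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_timepoints, n_noise_voxels, n_brain_voxels = 200, 300, 1000
noise_ts = rng.normal(size=(n_timepoints, n_noise_voxels))  # e.g. white-matter/CSF mask
brain_ts = rng.normal(size=(n_timepoints, n_brain_voxels))

# 1. PCA on the noise-mask time series -> nuisance regressors
nuisance = PCA(n_components=5).fit_transform(noise_ts)

# 2. Regress the nuisance components out of every brain voxel (ordinary least squares)
design = np.column_stack([np.ones(n_timepoints), nuisance])
beta, *_ = np.linalg.lstsq(design, brain_ts, rcond=None)
cleaned = brain_ts - design @ beta
print(cleaned.shape)  # (200, 1000) noise-reduced time series
```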

  6. Independent principal component analysis for simulation of soil water content and bulk density in a Canadian Watershed

    Directory of Open Access Journals (Sweden)

    Alaba Boluwade

    2016-09-01

    Full Text Available Accurate characterization of soil properties such as soil water content (SWC) and bulk density (BD) is vital for hydrologic processes, and it is therefore important to estimate θ (water content) and ρ (soil bulk density), among other soil surface parameters involved in water retention and infiltration, runoff generation, water erosion, etc. The spatial estimation of these soil properties is important in guiding agricultural management decisions. These soil properties vary both in space and time and are correlated. Therefore, it is important to find an efficient and robust technique to simulate spatially correlated variables. Methods such as principal component analysis (PCA) and independent component analysis (ICA) can be used for the joint simulation of spatially correlated variables, but they are not without their flaws. This study applied a variant of PCA called independent principal component analysis (IPCA), which combines the strengths of both PCA and ICA, for the spatial simulation of SWC and BD using a soil data set from the 11 km2 Castor watershed in southern Quebec, Canada. Diagnostic checks using histograms and cumulative distribution functions (cdf) of both raw and back-transformed simulations show good agreement. The results from this study therefore have potential for characterizing water content and bulk density variability for precision agriculture.

  7. Structured Sparse Principal Components Analysis With the TV-Elastic Net Penalty.

    Science.gov (United States)

    de Pierrefeu, Amicie; Lofstedt, Tommy; Hadj-Selem, Fouad; Dubois, Mathieu; Jardri, Renaud; Fovet, Thomas; Ciuciu, Philippe; Frouin, Vincent; Duchesnay, Edouard

    2018-02-01

    Principal component analysis (PCA) is an exploratory tool widely used in data analysis to uncover the dominant patterns of variability within a population. Despite its ability to represent a data set in a low-dimensional space, PCA's interpretability remains limited. Indeed, the components produced by PCA are often noisy or exhibit no visually meaningful patterns. Furthermore, the fact that the components are usually non-sparse may also impede interpretation, unless arbitrary thresholding is applied. However, in neuroimaging, it is essential to uncover clinically interpretable phenotypic markers that would account for the main variability in the brain images of a population. Recently, some alternatives to the standard PCA approach, such as sparse PCA (SPCA), have been proposed, their aim being to limit the density of the components. Nonetheless, sparsity alone does not entirely solve the interpretability problem in neuroimaging, since it may yield scattered and unstable components. We hypothesized that the incorporation of prior information regarding the structure of the data may lead to improved relevance and interpretability of brain patterns. We therefore present a simple extension of the popular PCA framework that adds structured sparsity penalties on the loading vectors in order to identify the few stable regions in the brain images that capture most of the variability. Such structured sparsity can be obtained by combining, e.g., sparsity-inducing and total variation (TV) penalties, where the TV regularization encodes information on the underlying structure of the data. This paper presents the structured SPCA (denoted SPCA-TV) optimization framework and its resolution. We demonstrate SPCA-TV's effectiveness and versatility on three different data sets. It can be applied to any kind of structured data, such as multi-dimensional array images or meshes of cortical surfaces. The gains of SPCA-TV over unstructured approaches (such as SPCA and ElasticNet PCA) or structured approach

  8. Experimental Investigation of Principal Residual Stress and Fatigue Performance for Turned Nickel-Based Superalloy Inconel 718.

    Science.gov (United States)

    Hua, Yang; Liu, Zhanqiang

    2018-05-24

    Residual stresses in a turned Inconel 718 surface along its axial and circumferential directions affect the fatigue performance of machined components. However, it has not been established that the axial and circumferential directions coincide with the principal residual stress directions. The direction of the maximum principal residual stress is crucial for the machined component's service life. The present work focuses on determining the direction and magnitude of the principal residual stress and investigating its influence on the fatigue performance of turned Inconel 718. The turning experiments show that the principal residual stress magnitude is much higher than the surface residual stress. In addition, both the principal residual stress and the surface residual stress increase significantly as the feed rate increases. The fatigue test results show that the direction of the maximum principal residual stress increased by 7.4% while the fatigue life decreased by 39.4%; the maximum principal residual stress magnitude diminished by 17.9%, whereas the fatigue life increased by 83.6%. The maximum principal residual stress has a preponderant influence on fatigue performance compared with the surface residual stress. The maximum principal residual stress can therefore be considered a prime indicator for evaluating the influence of residual stress on the fatigue performance of turned Inconel 718.

  9. Experimental Investigation of Principal Residual Stress and Fatigue Performance for Turned Nickel-Based Superalloy Inconel 718

    Directory of Open Access Journals (Sweden)

    Yang Hua

    2018-05-01

    Full Text Available Residual stresses in a turned Inconel 718 surface along its axial and circumferential directions affect the fatigue performance of machined components. However, it has not been established that the axial and circumferential directions coincide with the principal residual stress directions. The direction of the maximum principal residual stress is crucial for the machined component's service life. The present work focuses on determining the direction and magnitude of the principal residual stress and investigating its influence on the fatigue performance of turned Inconel 718. The turning experiments show that the principal residual stress magnitude is much higher than the surface residual stress. In addition, both the principal residual stress and the surface residual stress increase significantly as the feed rate increases. The fatigue test results show that the direction of the maximum principal residual stress increased by 7.4% while the fatigue life decreased by 39.4%; the maximum principal residual stress magnitude diminished by 17.9%, whereas the fatigue life increased by 83.6%. The maximum principal residual stress has a preponderant influence on fatigue performance compared with the surface residual stress. The maximum principal residual stress can therefore be considered a prime indicator for evaluating the influence of residual stress on the fatigue performance of turned Inconel 718.

  10. Principal Component Analysis Coupled with Artificial Neural Networks—A Combined Technique Classifying Small Molecular Structures Using a Concatenated Spectral Database

    Directory of Open Access Journals (Sweden)

    Mihail Lucian Birsa

    2011-10-01

    Full Text Available In this paper we present several expert systems that predict the class identity of modeled compounds based on a preprocessed spectral database. The expert systems were built using Artificial Neural Networks (ANN) and are designed to predict whether an unknown compound has the toxicological activity of amphetamines (stimulant and hallucinogen) or is a nonamphetamine. In attempts to circumvent the laws controlling drugs of abuse, new chemical structures are very frequently introduced on the black market. They are obtained by slightly modifying the controlled molecular structures by adding or changing substituents at various positions on the banned molecules. As a result, no substance similar to those forming a prohibited class may be used nowadays, even if it has not been specifically listed. Therefore, reliable, fast and accessible systems capable of modeling and then identifying similarities at the molecular level are highly needed for epidemiological, clinical, and forensic purposes. In order to obtain the expert systems, we preprocessed a concatenated spectral database representing the GC-FTIR (gas chromatography-Fourier transform infrared spectrometry) and GC-MS (gas chromatography-mass spectrometry) spectra of 103 forensic compounds. The database was used as input for a Principal Component Analysis (PCA). The scores of the forensic compounds on the main principal components (PCs) were then used as inputs for the ANN systems. We built eight PC-ANN systems (principal component analysis coupled with artificial neural network) with different numbers of input variables: 15, 16, 17, 18, 19, 20, 21 and 22 PCs. The best expert system was found to be the ANN network built with 18 PCs, which accounts for an explained variance of 77%. This expert system has the best sensitivity (a rate of classification C = 100% and a rate of true positives TP = 100%), as well as a good selectivity (a rate of true negatives TN
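
    The PC-ANN idea (PCA scores used as inputs to a neural-network classifier) can be outlined with a short pipeline. Everything below is a stand-in: the data are random, the binary labels are hypothetical, and a scikit-learn MLP replaces the authors' trained expert systems.

```python
# Hedged PC-ANN sketch: PCA scores feeding a small neural-network classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_compounds, n_channels = 103, 2000          # concatenated spectra as feature vectors
X = rng.normal(size=(n_compounds, n_channels))
y = rng.integers(0, 2, size=n_compounds)     # 1 = amphetamine-like, 0 = non-amphetamine (synthetic)

model = make_pipeline(
    PCA(n_components=18),                                              # 18 PCs, as in the best system above
    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
)
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```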

  11. Principal component analysis in an experimental cold flow model of a fluid catalytic cracking unit by gammametry

    International Nuclear Information System (INIS)

    Araujo, Janeo Severino C. de; Dantas, Carlos Costa; Santos, Valdemir A. dos; Souza, Jose Edson G. de; Luna-Finkler, Christine L.

    2009-01-01

    The fluid dynamic behavior of the riser of a cold flow model of a Fluid Catalytic Cracking Unit (FCCU) was investigated. The experimental data were obtained by the nuclear technique of gamma transmission. A gamma source was placed diametrically opposite a detector at any straight section of the riser. The gas-solid flow through the riser was monitored with an Americium-241 source, which allowed information on the axial solid concentration to be obtained without flow disturbance and the dependence of this concentration profile on several independent variables to be identified. The MatLab® and Statistica® software packages were used. The Statistica tool employed was Principal Components Analysis (PCA), which consisted of organizing the data into two-dimensional matrices in order to extract relevant information about the influence of the independent variables on the axial solid concentration in a cold flow riser. The variables investigated were the mass flow rate of solid, the mass flow rate of gas, the pressure at the riser base and the relative height in the riser. The first two components accounted for about 98% of the accumulated explained variance. (author)

  12. A new approach for heparin standardization: combination of scanning UV spectroscopy, nuclear magnetic resonance and principal component analysis.

    Directory of Open Access Journals (Sweden)

    Marcelo A Lima

    Full Text Available The year 2007 was marked by widespread adverse clinical responses to heparin use, leading to a global recall of potentially affected heparin batches in 2008. Several analytical methods have since been developed to detect impurities in heparin preparations; however, many are costly and dependent on instrumentation with only limited accessibility. A method based on a simple UV-scanning assay, combined with principal component analysis (PCA, was developed to detect impurities, such as glycosaminoglycans, other complex polysaccharides and aromatic compounds, in heparin preparations. Results were confirmed by NMR spectroscopy. This approach provides an additional, sensitive tool to determine heparin purity and safety, even when NMR spectroscopy failed, requiring only standard laboratory equipment and computing facilities.

  13. A flexible framework for sparse simultaneous component based data integration

    Directory of Open Access Journals (Sweden)

    Van Deun Katrijn

    2011-11-01

    Full Text Available Abstract Background: High throughput data are complex and methods that reveal structure underlying the data are most useful. Principal component analysis, frequently implemented as a singular value decomposition, is a popular technique in this respect. Nowadays the challenge is often to reveal structure in several sources of information (e.g., transcriptomics, proteomics) that are available for the same biological entities under study. Simultaneous component methods are most promising in this respect. However, the interpretation of the principal and simultaneous components is often daunting because contributions of each of the biomolecules (transcripts, proteins) have to be taken into account. Results: We propose a sparse simultaneous component method that makes many of the parameters redundant by shrinking them to zero. It includes principal component analysis, sparse principal component analysis, and ordinary simultaneous component analysis as special cases. Several penalties can be tuned that account in different ways for the block structure present in the integrated data. This yields known sparse approaches such as the lasso, the ridge penalty, the elastic net, the group lasso, the sparse group lasso, and the elitist lasso. In addition, the algorithmic results can easily be transposed to the context of regression. Metabolomics data obtained with two measurement platforms for the same set of Escherichia coli samples are used to illustrate the proposed methodology and the properties of different penalties with respect to sparseness across and within data blocks. Conclusion: Sparse simultaneous component analysis is a useful method for data integration: first, simultaneous analyses of multiple blocks offer advantages over sequential and separate analyses, and second, interpretation of the results is greatly facilitated by their sparseness. The approach offered is flexible and allows the block structure to be taken into account in different ways. As such

  14. A flexible framework for sparse simultaneous component based data integration.

    Science.gov (United States)

    Van Deun, Katrijn; Wilderjans, Tom F; van den Berg, Robert A; Antoniadis, Anestis; Van Mechelen, Iven

    2011-11-15

    High throughput data are complex and methods that reveal structure underlying the data are most useful. Principal component analysis, frequently implemented as a singular value decomposition, is a popular technique in this respect. Nowadays the challenge is often to reveal structure in several sources of information (e.g., transcriptomics, proteomics) that are available for the same biological entities under study. Simultaneous component methods are most promising in this respect. However, the interpretation of the principal and simultaneous components is often daunting because contributions of each of the biomolecules (transcripts, proteins) have to be taken into account. We propose a sparse simultaneous component method that makes many of the parameters redundant by shrinking them to zero. It includes principal component analysis, sparse principal component analysis, and ordinary simultaneous component analysis as special cases. Several penalties can be tuned that account in different ways for the block structure present in the integrated data. This yields known sparse approaches such as the lasso, the ridge penalty, the elastic net, the group lasso, the sparse group lasso, and the elitist lasso. In addition, the algorithmic results can easily be transposed to the context of regression. Metabolomics data obtained with two measurement platforms for the same set of Escherichia coli samples are used to illustrate the proposed methodology and the properties of different penalties with respect to sparseness across and within data blocks. Sparse simultaneous component analysis is a useful method for data integration: first, simultaneous analyses of multiple blocks offer advantages over sequential and separate analyses, and second, interpretation of the results is greatly facilitated by their sparseness. The approach offered is flexible and allows the block structure to be taken into account in different ways. As such, structures can be found that are exclusively tied to one data platform

  15. Characterization of deep aquifer dynamics using principal component analysis of sequential multilevel data

    Directory of Open Access Journals (Sweden)

    D. Kurtzman

    2012-03-01

    Full Text Available Two sequential multilevel profiles were obtained in an observation well opened to a 130-m thick, unconfined, contaminated aquifer in Tel Aviv, Israel. While the general profile characteristics of major ions, trace elements, and volatile organic compounds were maintained in the two sampling campaigns conducted 295 days apart, the vertical locations of high concentration gradients were shifted between the two profiles. Principal component analysis (PCA) of the chemical variables resulted in a first principal component which was responsible for ∼60% of the variability and was highly correlated with depth. PCA revealed three distinct depth-dependent water bodies in both multilevel profiles, which were found to have shifted vertically between the sampling events. This shift cut across a clayey bed which separated the top and intermediate water bodies in the first profile, and was located entirely within the intermediate water body in the second profile. Continuous electrical conductivity monitoring in a packed-off section of the observation well revealed an event in which a distinct water body flowed through the monitored section (v ∼ 150 m yr−1). It was concluded that the observed changes in the profiles result from dominantly lateral flow of water bodies in the aquifer rather than vertical flow. The significance of this study is twofold: (a) it demonstrates the utility of sequential multilevel observations from deep wells and the efficacy of PCA for evaluating the data; (b) the fact that distinct water bodies of 10 to 100 m vertical and horizontal dimensions flow under contaminated sites has implications for monitoring and remediation.

  16. Principal Component Analysis: Most Favourite Tool in Chemometrics

    Indian Academy of Sciences (India)

    GENERAL ARTICLE. Principal ... Chemometrics is a discipline that combines mathematics, statis- ... workers have used PCA for air quality monitoring [8]. ..... J S Verbeke, Handbook of Chemometrics and Qualimetrics, Elsevier, New York,.

  17. Global and system-specific resting-state fMRI fluctuations are uncorrelated: principal component analysis reveals anti-correlated networks.

    Science.gov (United States)

    Carbonell, Felix; Bellec, Pierre; Shmuel, Amir

    2011-01-01

    The influence of the global average signal (GAS) on functional magnetic resonance imaging (fMRI)-based resting-state functional connectivity is a matter of ongoing debate. The global average fluctuations increase the correlation between functional systems beyond the correlation that reflects their specific functional connectivity. Hence, removal of the GAS is a common practice for facilitating the observation of network-specific functional connectivity. This strategy relies on the implicit assumption of a linear-additive model according to which global fluctuations, irrespective of their origin, and network-specific fluctuations are superposed. However, removal of the GAS introduces spurious negative correlations between functional systems, bringing into question the validity of previous findings of negative correlations between fluctuations in the default-mode and the task-positive networks. Here we present an alternative method for estimating global fluctuations, immune to the complications associated with the GAS. Principal components analysis was applied to resting-state fMRI time series. A global-signal effect estimator was defined as the principal component (PC) that correlated best with the GAS. The mean correlation coefficient between our proposed PC-based global effect estimator and the GAS was 0.97±0.05, demonstrating that our estimator successfully approximated the GAS. In 66 out of 68 runs, the PC that showed the highest correlation with the GAS was the first PC. Since PCs are orthogonal, our method provides an estimator of the global fluctuations which is uncorrelated with the remaining, network-specific fluctuations. Moreover, unlike the regression of the GAS, the regression of the PC-based global effect estimator does not introduce spurious anti-correlations beyond the decrease in seed-based correlation values allowed by the assumed additive model. After regressing this PC-based estimator out of the original time series, we observed robust anti-correlations.
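
    A compact sketch of the estimator described above: compute the global average signal, pick the principal component that correlates best with it, and regress that component out of the data. Synthetic arrays stand in for resting-state fMRI time series.

```python
# Sketch of a PC-based global-effect estimator (illustrative, synthetic data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n_timepoints, n_voxels = 300, 2000
global_fluct = rng.normal(size=(n_timepoints, 1))
ts = 0.5 * global_fluct + rng.normal(size=(n_timepoints, n_voxels))

gas = ts.mean(axis=1)                                   # global average signal
pcs = PCA(n_components=10).fit_transform(ts)
corr = [abs(np.corrcoef(gas, pcs[:, k])[0, 1]) for k in range(pcs.shape[1])]
global_pc = pcs[:, int(np.argmax(corr))]                # PC most correlated with the GAS

# Regress the PC-based estimator (not the GAS itself) out of every voxel
design = np.column_stack([np.ones(n_timepoints), global_pc])
beta, *_ = np.linalg.lstsq(design, ts, rcond=None)
ts_clean = ts - design @ beta
print("best |correlation| with GAS:", round(max(corr), 3))
```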

  18. Structure and Principal Components Analyses Reveal an Intervarietal Fusion in Malaysian Mistletoe Fig (Ficus deltoidea Jack) Populations

    Directory of Open Access Journals (Sweden)

    Birifdzi Zimisuhara

    2015-06-01

    Full Text Available The genetic structure and biodiversity of the medicinal plant Ficus deltoidea have rarely been scrutinized. To fill these lacunae, five varieties comprising 30 F. deltoidea accessions were collected across the country and studied on the basis of molecular and morphological data. Molecular analysis of the accessions was performed using nine Inter Simple Sequence Repeat (ISSR) markers, seven of which were detected as polymorphic. ISSR-based clustering generated four clusters, supporting the geographical distribution of the accessions to some extent. The Jaccard similarity coefficient implied the existence of low diversity (0.50–0.75) in the studied population. STRUCTURE analysis showed low differentiation among the sampling sites, while a moderate varietal differentiation was unveiled with two main populations of F. deltoidea. Our observations confirmed the occurrence of gene flow among the accessions; however, the highest degree of this genetic interference was related to the three accessions FDDJ10, FDTT16 and FDKT25. These three accessions may be the genetic intervarietal fusion points of the plant's population. Principal Components Analysis (PCA) relying on quantitative morphological characteristics resulted in two principal components with eigenvalues >1, which made up 89.96% of the total variation. The cluster analysis performed on the eight quantitative characteristics led to grouping the accessions into four clusters with Euclidean distances ranging between 0.06 and 1.10. Similarly, a four-cluster dendrogram was generated using qualitative traits. The qualitative characteristics were found to be more discriminating in the cluster and PCA analyses, while the ISSRs were more informative on the evolution and genetic structure of the population.

  19. Principal component analysis of sensory properties of chicken breast muscle supplemented with different feed additives

    Directory of Open Access Journals (Sweden)

    Peter Haščík

    2017-01-01

    Full Text Available The objective of the present study was to examine the effect of different dietary supplements (bee pollen, propolis, and probiotic) on the sensory quality of chicken breast muscle. The experiment was performed with 180 one-day-old Ross 308 broiler chicks of mixed sex. The dietary treatments were as follows: 1. basal diet with no supplementation as control (C); 2. basal diet plus 400 mg bee pollen extract per 1 kg of feed mixture (E1); 3. basal diet plus 400 mg propolis extract per 1 kg of feed mixture (E2); 4. basal diet plus 3.3 g probiotic preparation based on Lactobacillus fermentum added to drinking water (E3). Sensory properties of chicken breast muscle were assessed by a five-member panel that rated the meat for aroma, taste, juiciness, tenderness and overall acceptability. The ANOVA results for each attribute showed that at least one mean group score differed significantly (p ≤ 0.05). Subsequent Tukey's HSD revealed that only the C group had a significantly higher mean score (p ≤ 0.05) for each attribute compared with the E2 group. As regards the E1 and E3 groups, there were no significant differences (p > 0.05) in aroma, taste and tenderness compared with the C group, with the significantly lowest juiciness value (p ≤ 0.05) found in the E3 group and significantly lower overall acceptability values in both groups (p ≤ 0.05). In addition, it is noteworthy that the control group received the highest ranking scores for each sensory attribute, i.e. the supplements did not positively influence the sensory quality of chicken breast meat. Principal component analysis (PCA) of the sensory data showed that the first 3 principal components (PCs) explained 69.82% of the total variation in the 5 variables. Visualisation of the extracted PCs showed that the groups were very well represented, with the E2 group clearly distinguished from the others.

  20. Spectral backward radiation profile

    International Nuclear Information System (INIS)

    Kwon, Sung Duck; Lee, Keun Hyun; Kim, Bo Ra; Yoon, Suk Soo

    2004-01-01

    The ultrasonic backward radiation profile is frequency-dependent when the incident region has a depth gradient of acoustical properties or multiple layers. Until now, we have measured the profiles only at the principal frequencies of the transducers used, so it was not easy to follow changes in the frequency content and spectrum of the backward radiation from the profile. We therefore measured spectral backward radiation profiles using the DFP (digital filter package) of a LeCroy DSO. The large changes in the shape and pattern of the spectral backward radiation profiles lead to the conclusion that this new approach could be a very effective tool for evaluating frequency-dependent surface regions.

  1. Application of principal component analysis to multispectral imaging data for evaluation of pigmented skin lesions

    Science.gov (United States)

    Jakovels, Dainis; Lihacova, Ilze; Kuzmina, Ilona; Spigulis, Janis

    2013-11-01

    Non-invasive and fast primary diagnostics of pigmented skin lesions is required due to the high incidence of skin cancer, in particular melanoma. The diagnostic potential of principal component analysis (PCA) for distant skin melanoma recognition is discussed. Processing of the measured clinical multi-spectral images (31 melanomas and 94 nonmalignant pigmented lesions) in the wavelength range of 450-950 nm by means of PCA resulted in 87% sensitivity and 78% specificity for separation between malignant melanomas and pigmented nevi.

  2. Performance analysis of a Principal Component Analysis ensemble classifier for Emotiv headset P300 spellers.

    Science.gov (United States)

    Elsawy, Amr S; Eldawlatly, Seif; Taher, Mohamed; Aly, Gamal M

    2014-01-01

    The current trend to use Brain-Computer Interfaces (BCIs) with mobile devices mandates the development of efficient EEG data processing methods. In this paper, we demonstrate the performance of a Principal Component Analysis (PCA) ensemble classifier for P300-based spellers. We recorded EEG data from multiple subjects using the Emotiv neuroheadset in the context of a classical oddball P300 speller paradigm. We compare the performance of the proposed ensemble classifier to the performance of traditional feature extraction and classifier methods. Our results demonstrate the capability of the PCA ensemble classifier to classify P300 data recorded using the Emotiv neuroheadset with an average accuracy of 86.29% on cross-validation data. In addition, offline testing of the recorded data reveals an average classification accuracy of 73.3% that is significantly higher than that achieved using traditional methods. Finally, we demonstrate the effect of the parameters of the P300 speller paradigm on the performance of the method.

  3. An analysis of workers' morale in the coal mining industry using principal component analysis

    Energy Technology Data Exchange (ETDEWEB)

    Armstrong, J; La Court, C; Pearson, J M

    1987-03-01

    This paper looks at labour morale in the coal mining industry from 1967 to 1984. In particular it examines absenteeism, turnover and accidents over that period, as well as constructing an index of morale based on these variables. The data are taken from the North Nottinghamshire and South Yorkshire coal areas, and a comparison is made between these areas in the period leading up to the industrial action in 1984/85. The indices constructed indicate that morale, as measured by the first principal component, increased considerably during the years before the 1984 industrial dispute and that low morale was an unlikely reason for the dispute, although morale in South Yorkshire, a strike area, was lower than in North Nottinghamshire, largely a non-strike area. The steep rise in morale in both North Nottinghamshire and South Yorkshire follows closely the rise in unemployment nationally and may simply be an indication of conventional industrial relations assumptions that manifestations of negative worker attitudes are greatest when jobs are relatively plentiful, and considerably less so when jobs are scarce.

  4. Classification of fault diagnosis in a gear wheel by used probabilistic neural network, fast Fourier transform and principal component analysis

    Directory of Open Access Journals (Sweden)

    Piotr CZECH

    2007-01-01

    Full Text Available This paper presents the results of an experimental application of an artificial neural network as a classifier of the degree of cracking of a tooth root in a gear wheel. The neural classifier was based on an artificial neural network of the Probabilistic Neural Network (PNN) type. The input data for the classifier took the form of a matrix composed of statistical measures obtained from the fast Fourier transform (FFT) and principal component analysis (PCA). The identified model of the toothed gear transmission, operating in a circulating power system, served to generate the training and testing sets used in the experiment.
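
    The feature path described above (FFT of vibration signals, PCA of the resulting spectra, then a probabilistic classifier) can be sketched as follows. The data are synthetic and a Gaussian naive Bayes classifier is used as a simple stand-in for the PNN.

```python
# Illustrative FFT + PCA feature pipeline with a probabilistic stand-in classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(4)
n_signals, n_samples = 120, 1024
labels = rng.integers(0, 4, size=n_signals)               # e.g. 4 degrees of tooth cracking (synthetic)
signals = rng.normal(size=(n_signals, n_samples)) + 0.05 * labels[:, None]

spectra = np.abs(np.fft.rfft(signals, axis=1))            # FFT magnitude spectra
features = PCA(n_components=10).fit_transform(spectra)    # PCA-compressed spectral features

clf = GaussianNB().fit(features, labels)                  # stand-in for the PNN classifier
print("training accuracy:", clf.score(features, labels))
```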

  5. Natural radiation doses for cosmic and terrestrial components in Costa Rica

    International Nuclear Information System (INIS)

    Mora, Patricia; Picado, Esteban; Minato, Susumu

    2007-01-01

    A study of external natural radiation, cosmic and terrestrial components, was carried out with in situ measurements using NaI scintillation counters while driving along the roads in Costa Rica for the period July 2003-July 2005. The geographical distribution of the terrestrial air-absorbed dose rates and the total effective dose rates (including cosmic) are represented on contour maps. Information on the population density of the country permitted the calculation of the per capita doses. The average effective dose rate for the total cosmic component was 46.88 ± 18.06 nSv h−1 and the average air-absorbed dose rate for the terrestrial component was 29.52 ± 14.46 nGy h−1. The average total effective dose rate (cosmic plus terrestrial components) was 0.60 ± 0.18 mSv per year. The effective dose rate per capita was found to be 83.97 nSv h−1, which gives an annual dose of 0.74 mSv. Assuming the world average for the internal radiation component, the natural radiation dose for Costa Rica will be 2.29 mSv annually

  6. Sensor Failure Detection of FASSIP System using Principal Component Analysis

    Science.gov (United States)

    Sudarno; Juarsa, Mulya; Santosa, Kussigit; Deswandri; Sunaryo, Geni Rina

    2018-02-01

    In the Fukushima Daiichi nuclear reactor accident in Japan, the damage to the core and pressure vessel was caused by the failure of the active cooling system (the diesel generators were inundated by the tsunami). Research on passive cooling systems for nuclear power plants is therefore being performed to improve the safety aspects of nuclear reactors. The FASSIP system (Passive System Simulation Facility) is an installation used to study the characteristics of passive cooling systems in nuclear power plants. The accuracy of the sensor measurements of the FASSIP system is essential, because they are the basis for determining the characteristics of a passive cooling system. In this research, a sensor failure detection method for the FASSIP system is developed, so that indications of sensor failures can be detected early. The method used is Principal Component Analysis (PCA) to reduce the dimension of the sensor data, with the Squared Prediction Error (SPE) and Hotelling's statistic as criteria for detecting indications of sensor failure. The results show that the PCA method is capable of detecting the occurrence of a failure at any sensor.
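
    A minimal sketch of PCA-based sensor fault detection with Hotelling's T² and SPE (Q) statistics, in the spirit of the method above. The sensor data, injected fault and percentile thresholds are illustrative assumptions, not the FASSIP implementation.

```python
# PCA fault detection with Hotelling's T^2 and SPE, on synthetic sensor data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
X_train = rng.normal(size=(500, 8))           # 8 sensors under normal operation
X_test = X_train[-50:].copy()
X_test[25:, 3] += 4.0                         # inject a drift fault on sensor 4

pca = PCA(n_components=3).fit(X_train)

def t2_spe(X):
    """Hotelling's T^2 (retained subspace) and SPE (residual subspace) per sample."""
    scores = pca.transform(X)
    t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)
    spe = np.sum((X - pca.inverse_transform(scores))**2, axis=1)
    return t2, spe

# Control limits from normal-operation data (empirical percentiles here; analytical
# chi-square / F-distribution limits are common in practice).
t2_lim, spe_lim = (np.percentile(s, 99) for s in t2_spe(X_train))
t2, spe = t2_spe(X_test)
print("flagged samples:", np.where((t2 > t2_lim) | (spe > spe_lim))[0])
```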

  7. Principal components analysis of protein structure ensembles calculated using NMR data

    International Nuclear Information System (INIS)

    Howe, Peter W.A.

    2001-01-01

    One important problem when calculating structures of biomolecules from NMR data is distinguishing converged structures from outlier structures. This paper describes how Principal Components Analysis (PCA) has the potential to classify calculated structures automatically, according to correlated structural variation across the population. PCA analysis has the additional advantage that it highlights regions of proteins which are varying across the population. To apply PCA, protein structures have to be reduced in complexity and this paper describes two different representations of protein structures which achieve this. The calculated structures of a 28 amino acid peptide are used to demonstrate the methods. The two different representations of protein structure are shown to give equivalent results, and correct results are obtained even though the ensemble of structures used as an example contains two different protein conformations. The PCA analysis also correctly identifies the structural differences between the two conformations

  8. Feature selection for neural network based defect classification of ceramic components using high frequency ultrasound.

    Science.gov (United States)

    Kesharaju, Manasa; Nagarajah, Romesh

    2015-09-01

    The motivation for this research stems from a need to provide a non-destructive testing method capable of detecting and locating any defects and microstructural variations within armour ceramic components before issuing them to the soldiers who rely on them for their survival. The development of an automated ultrasonic inspection based classification system would make possible the checking of each ceramic component and would immediately alert the operator to the presence of defects. Generally, in many classification problems the choice of features or dimensionality reduction is significant and simultaneously very difficult, as a substantial computational effort is required to evaluate possible feature subsets. In this research, a combination of artificial neural networks and genetic algorithms is used to optimize the feature subset used in the classification of various defects in reaction-sintered silicon carbide ceramic components. Initially, wavelet based feature extraction is implemented on the region of interest. An Artificial Neural Network classifier is employed to evaluate the performance of these features. Genetic Algorithm based feature selection is performed. Principal Component Analysis is a popular technique used for feature selection and is compared with the genetic algorithm based technique in terms of classification accuracy and selection of the optimal number of features. The experimental results confirm that features identified by Principal Component Analysis lead to improved classification performance (96%) compared with the genetic algorithm (94%). Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Effects of radiations on electronic components - Course IN2P3, release 6

    International Nuclear Information System (INIS)

    2007-01-01

    As many off-the-shelf electronic components are now present onboard satellites, launchers and planes, this course proposes an overview of the effects radiation can have on these components, notably in space applications. The first part gives an overview of radiative environments, and more particularly presents the space radiative environment (solar wind, solar flares, cosmic radiation, radiation belts). It also presents the atmospheric and terrestrial radiative environment due to cosmic radiation, alpha radiation (origin of particles, particle flux), and the radiative environment within an accelerator. The second part addresses the effects of these radiative environments on electronic components, and the associated standards and tests. It addresses cumulative effects and proposes a detailed analysis of the effects of ionizing dose on a MOS transistor, an analysis of the effects of ionizing dose rate on a bipolar NPN or PNP vertical or lateral transistor, an analysis of the effects of atomic displacements, and a discussion of structure modifications. The next part describes various single-event effects: the Single Event Upset (SEU) and the Multiple Bit Upset (MBU) in the case of an SRAM, the SEL (Single Event Latch-up) phenomenon, the SEGR (Single Event Gate Rupture) phenomenon in the case of a power MOSFET, and the SEB (Single Event Burnout) phenomenon in the case of a power MOSFET

  10. New Role of Thermal Mapping in Winter Maintenance with Principal Components Analysis

    Directory of Open Access Journals (Sweden)

    Mario Marchetti

    2014-01-01

    Full Text Available Thermal mapping uses IR thermometry to measure road pavement temperature at a high resolution to identify and to map sections of the road network prone to ice occurrence. However, measurements are time-consuming and ultimately only provide a snapshot of road conditions at the time of the survey. As such, there is a need for surveys to be restricted to a series of specific climatic conditions during winter. Typically, five to six surveys are used, but it is questionable whether the full range of atmospheric conditions is adequately covered. This work investigates the role of statistics in adding value to thermal mapping data. Principal components analysis is used to interpolate between individual thermal mapping surveys to build a thermal map (or even a road surface temperature forecast, for a wider range of climatic conditions than that permitted by traditional surveys. The results indicate that when this approach is used, fewer thermal mapping surveys are actually required. Furthermore, comparisons with numerical models indicate that this approach could yield a suitable verification method for the spatial component of road weather forecasts—a key issue currently in winter road maintenance.

  11. Diagnosing basal cell carcinoma in vivo by near-infrared Raman spectroscopy: a Principal Components Analysis discrimination algorithm

    Science.gov (United States)

    Silveira, Landulfo, Jr.; Silveira, Fabrício L.; Bodanese, Benito; Pacheco, Marcos Tadeu T.; Zângaro, Renato A.

    2012-02-01

    This work demonstrates the discrimination between basal cell carcinoma (BCC) and normal human skin in vivo using near-infrared Raman spectroscopy. Spectra were obtained from the suspected lesion prior to resectional surgery. After tissue withdrawal, biopsy fragments were submitted for histopathology. Spectra were also obtained from the adjacent, clinically normal skin. Raman spectra were measured using a Raman spectrometer (830 nm) with a fiber Raman probe. By comparing the mean spectra of BCC with those of normal skin, important differences were found in the 800-1000 cm-1 and 1250-1350 cm-1 regions (vibrations of C-C and amide III, respectively, from lipids and proteins). A discrimination algorithm based on Principal Components Analysis and Mahalanobis distance (PCA/MD) could discriminate the spectra of both tissues with high sensitivity and specificity.
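
    The PCA/Mahalanobis-distance discrimination scheme described above can be sketched as follows; the "spectra" are synthetic Gaussian vectors, so this is only an outline of the classification logic, not the clinical model.

```python
# PCA + Mahalanobis-distance discrimination sketch (synthetic spectra).
import numpy as np
from sklearn.decomposition import PCA
from scipy.spatial.distance import mahalanobis

rng = np.random.default_rng(6)
normal_skin = rng.normal(0.0, 1.0, size=(40, 600))     # stand-in "normal skin" spectra
bcc = rng.normal(0.3, 1.0, size=(40, 600))             # stand-in "BCC" spectra

pca = PCA(n_components=5).fit(np.vstack([normal_skin, bcc]))
groups = {"normal": pca.transform(normal_skin), "BCC": pca.transform(bcc)}

def classify(spectrum):
    """Assign a spectrum to the class with the smaller Mahalanobis distance in PC space."""
    s = pca.transform(spectrum[None, :])[0]
    dists = {
        name: mahalanobis(s, g.mean(axis=0), np.linalg.inv(np.cov(g, rowvar=False)))
        for name, g in groups.items()
    }
    return min(dists, key=dists.get)

print(classify(bcc[0]))
```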

  12. Information fusion via constrained principal component regression for robust quantification with incomplete calibrations

    International Nuclear Information System (INIS)

    Vogt, Frank

    2013-01-01

    Graphical abstract - Analysis task: determine the albumin (= protein) concentration in microalgae cells as a function of the cells' nutrient availability. Left panel: the albumin concentrations predicted by conventional principal component regression feature low reproducibility and are partially higher than the concentrations of the algae in which the albumin is contained. Right panel: augmenting an incomplete PCR calibration with additional expert information yields reasonable albumin concentrations, which now reveal a significant dependency on the algae's nutrient situation. Highlights: •Make quantitative analyses of compounds embedded in largely unknown chemical matrices robust. •Improved concentration prediction with originally insufficient calibration models. •Chemometric approach for incorporating expertise from other fields and/or researchers. •Ensure chemical, biological, or medicinal meaningfulness of quantitative analyses. Abstract: Incomplete calibrations are encountered in many applications and hamper chemometric data analyses. Such situations arise when target analytes are embedded in a chemically complex matrix from which calibration concentrations cannot be determined with reasonable effort. In other cases, the samples' chemical composition may fluctuate in an unpredictable way and thus cannot be comprehensively covered by calibration samples. The reason for a calibration model to fail is the regression principle itself, which seeks to explain measured data optimally in terms of the (potentially incomplete) calibration model but does not consider chemical meaningfulness. This study presents a novel chemometric approach which is based on experimentally feasible calibrations, i.e. concentration series of the target analytes outside the chemical matrix ('ex situ calibration'). The inherent lack of information is then compensated by incorporating additional knowledge in the form of regression constraints. Any outside knowledge can be
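
    As background for the constrained approach described above, the following is an ordinary (unconstrained) principal component regression baseline on simulated spectra. The constraint machinery that injects outside expert knowledge is not reproduced here.

```python
# Plain principal component regression (PCR) baseline on simulated spectra.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
n_samples, n_wavelengths = 60, 400
spectra = rng.normal(size=(n_samples, n_wavelengths))
concentration = spectra[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=n_samples)

pcr = make_pipeline(PCA(n_components=8), LinearRegression()).fit(spectra, concentration)
print("R^2 on the calibration data:", round(pcr.score(spectra, concentration), 3))
```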

  13. 3D face recognition with asymptotic cones based principal curvatures

    KAUST Repository

    Tang, Yinhang

    2015-05-01

    The classical curvatures of smooth surfaces (Gaussian, mean and principal curvatures) have been widely used in 3D face recognition (FR). However, facial surfaces resulting from 3D sensors are discrete meshes. In this paper, we present a general framework and define three principal curvatures on discrete surfaces for the purpose of 3D FR. These principal curvatures are derived from the construction of asymptotic cones associated with any Borel subset of the discrete surface. They describe the local geometry of the underlying mesh. The first two of them correspond to the classical principal curvatures in the smooth case. We isolate the third principal curvature, which carries meaningful geometric shape information. The three principal curvatures at different Borel subset scales give multi-scale local facial surface descriptors. We combine the proposed principal curvatures with the LNP-based facial descriptor and SRC for recognition. The identification and verification experiments demonstrate the practicability and accuracy of the third principal curvature and of the fusion of multi-scale Borel subset descriptors on 3D faces from FRGC v2.0.

  14. 3D face recognition with asymptotic cones based principal curvatures

    KAUST Repository

    Tang, Yinhang; Sun, Xiang; Huang, Di; Morvan, Jean-Marie; Wang, Yunhong; Chen, Liming

    2015-01-01

    The classical curvatures of smooth surfaces (Gaussian, mean and principal curvatures) have been widely used in 3D face recognition (FR). However, facial surfaces resulting from 3D sensors are discrete meshes. In this paper, we present a general framework and define three principal curvatures on discrete surfaces for the purpose of 3D FR. These principal curvatures are derived from the construction of asymptotic cones associated with any Borel subset of the discrete surface. They describe the local geometry of the underlying mesh. The first two of them correspond to the classical principal curvatures in the smooth case. We isolate the third principal curvature, which carries meaningful geometric shape information. The three principal curvatures at different Borel subset scales give multi-scale local facial surface descriptors. We combine the proposed principal curvatures with the LNP-based facial descriptor and SRC for recognition. The identification and verification experiments demonstrate the practicability and accuracy of the third principal curvature and of the fusion of multi-scale Borel subset descriptors on 3D faces from FRGC v2.0.

  15. On the development of radiation tolerant surveillance camera from consumer-grade components

    Directory of Open Access Journals (Sweden)

    Klemen Ambrožič

    2017-01-01

    Full Text Available In this paper an overview of the process of designing a radiation tolerant surveillance camera from consumer-grade components and commercially available particle shielding materials is given. This involves the use of the Monte Carlo particle transport code MCNP6 and ENDF/B-VII.0 nuclear data libraries, as well as testing the physical electrical systems against γ radiation, utilizing JSI TRIGA mk. II fuel elements as γ-ray sources. A new aluminum 20 cm × 20 cm × 30 cm irradiation facility, with electrical power and a signal wire guide-tube to the reactor platform, was designed, constructed and used for the irradiation of large electronic and optical component assemblies with activated fuel elements. Electronic components to be used in the camera were tested against γ radiation in an independent manner to determine their radiation tolerance. Several camera designs were proposed and simulated using MCNP to determine incident particle and dose attenuation factors. Data obtained from the measurements and MCNP simulations will be used to finalize the design of 3 surveillance camera models with different radiation tolerances.

  16. On the development of radiation tolerant surveillance camera from consumer-grade components

    Science.gov (United States)

    Ambrožič, Klemen; Snoj, Luka; Öhlin, Lars; Gunnarsson, Jan; Barringer, Niklas

    2017-09-01

    In this paper an overview of the process of designing a radiation tolerant surveillance camera from consumer-grade components and commercially available particle shielding materials is given. This involves the use of the Monte Carlo particle transport code MCNP6 and ENDF/B-VII.0 nuclear data libraries, as well as testing the physical electrical systems against γ radiation, utilizing JSI TRIGA mk. II fuel elements as γ-ray sources. A new aluminum 20 cm × 20 cm × 30 cm irradiation facility, with electrical power and a signal wire guide-tube to the reactor platform, was designed, constructed and used for the irradiation of large electronic and optical component assemblies with activated fuel elements. Electronic components to be used in the camera were tested against γ radiation in an independent manner to determine their radiation tolerance. Several camera designs were proposed and simulated using MCNP to determine incident particle and dose attenuation factors. Data obtained from the measurements and MCNP simulations will be used to finalize the design of 3 surveillance camera models with different radiation tolerances.

  17. Using principal component analysis and annual seasonal trend analysis to assess karst rocky desertification in southwestern China.

    Science.gov (United States)

    Zhang, Zhiming; Ouyang, Zhiyun; Xiao, Yi; Xiao, Yang; Xu, Weihua

    2017-06-01

    Increasing exploitation of karst resources is causing severe environmental degradation because of the fragility and vulnerability of karst areas. By integrating principal component analysis (PCA) with annual seasonal trend analysis (ASTA), this study assessed karst rocky desertification (KRD) within a spatial context. We first produced fractional vegetation cover (FVC) data from a moderate-resolution imaging spectroradiometer normalized difference vegetation index using a dimidiate pixel model. Then, we generated three main components of the annual FVC data using PCA. Subsequently, we generated the slope image of the annual seasonal trends of FVC using median trend analysis. Finally, we combined the three PCA components and annual seasonal trends of FVC with the incidence of KRD for each type of carbonate rock to classify KRD into one of four categories based on K-means cluster analysis: high, moderate, low, and none. The results of accuracy assessments indicated that this combination approach produced greater accuracy and more reasonable KRD mapping than the average FVC based on the vegetation coverage standard. The KRD map for 2010 indicated that the total area of KRD was 78.76 × 10³ km², which constitutes about 4.06% of the eight southwest provinces of China. The largest KRD areas were found in Yunnan province. The combined PCA and ASTA approach was demonstrated to be an easily implemented, robust, and flexible method for the mapping and assessment of KRD, which can be used to enhance regional KRD management schemes or to address the assessment of other environmental issues.

  18. Cluster and principal component analysis based on SSR markers of Amomum tsao-ko in Jinping County of Yunnan Province

    Science.gov (United States)

    Ma, Mengli; Lei, En; Meng, Hengling; Wang, Tiantao; Xie, Linyan; Shen, Dong; Xianwang, Zhou; Lu, Bingyue

    2017-08-01

    Amomum tsao-ko is a commercial plant that is used for various purposes in the medicinal and food industries. For the present investigation, 44 germplasm samples were collected from Jinping County of Yunnan Province. Cluster analysis and two-dimensional principal component analysis (PCA) were used to represent the genetic relations among Amomum tsao-ko accessions using simple sequence repeat (SSR) markers. Clustering analysis clearly distinguished the sample groups. Two major clusters were formed; the first (Cluster I) consisted of 34 individuals and the second (Cluster II) of 10 individuals, with Cluster I, as the main group, containing multiple sub-clusters. PCA also showed 2 groups: PCA Group 1 included 29 individuals and PCA Group 2 included 12 individuals, consistent with the results of the cluster analysis. The purpose of the present investigation was to provide information on the genetic relationships of Amomum tsao-ko germplasm resources in the main producing areas and to provide a theoretical basis for the protection and utilization of Amomum tsao-ko resources.

  19. Use of a Principal Components Analysis for the Generation of Daily Time Series.

    Science.gov (United States)

    Dreveton, Christine; Guillou, Yann

    2004-07-01

    A new approach for generating daily time series is considered in response to the weather-derivatives market. This approach consists of performing a principal components analysis to create independent variables, the values of which are then generated separately with a random process. Weather derivatives are financial or insurance products that give companies the opportunity to cover themselves against adverse climate conditions. The aim of a generator is to provide a wider range of feasible situations to be used in an assessment of risk. Generation of a temperature time series is required by insurers or bankers for pricing weather options. The provision of conditional probabilities and a good representation of the interannual variance are the main challenges for a generator used for weather derivatives. The generator was developed according to this new approach using a principal components analysis and was applied to the daily average temperature time series of the Paris-Montsouris station in France. The observed dataset was homogenized and the trend was removed to represent the present climate correctly. The results obtained with the generator show that it correctly represents the interannual variance of the observed climate; this is the main result of the work, because one of the main shortcomings of other generators is their inability to represent accurately the observed interannual climate variance, a shortcoming that is not acceptable for an application to weather derivatives. The generator was also tested to calculate conditional probabilities: for example, knowledge of the aggregated value of heating degree-days in the middle of the heating season allows one to estimate the probability of reaching a threshold at the end of the heating season. This represents the main application of a climate generator for use with weather derivatives.
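
    The generation idea described above can be illustrated with a toy sketch: fit PCA to years of daily temperatures, then draw new, independent component scores at random and transform them back into synthetic daily series. The seasonal model and score distribution below are simplifying assumptions.

```python
# Toy PCA-based generator of synthetic daily temperature series (illustrative only).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
n_years, n_days = 30, 365
doy = np.arange(n_days)
seasonal = 12.0 + 8.0 * np.sin(2 * np.pi * (doy - 100) / 365.0)
observed = seasonal + rng.normal(scale=3.0, size=(n_years, n_days))   # one row per year

pca = PCA(n_components=10).fit(observed)
scores = pca.transform(observed)

# Draw new years: sample each component independently, matching the observed score spread.
new_scores = rng.normal(scale=scores.std(axis=0), size=(5, pca.n_components_))
synthetic_years = pca.inverse_transform(new_scores)
print(synthetic_years.shape)   # (5, 365) synthetic daily series
```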

  20. Principal component analysis of MSBAS DInSAR time series from Campi Flegrei, Italy

    Science.gov (United States)

    Tiampo, Kristy F.; González, Pablo J.; Samsonov, Sergey; Fernández, Jose; Camacho, Antonio

    2017-09-01

    Because of its proximity to the city of Naples and with a population of nearly 1 million people within its caldera, Campi Flegrei is one of the highest risk volcanic areas in the world. Since the last major eruption in 1538, the caldera has undergone frequent episodes of ground subsidence and uplift accompanied by seismic activity that has been interpreted as the result of a stationary, deeper source below the caldera that feeds shallower eruptions. However, the location and depth of the deeper source is not well-characterized and its relationship to current activity is poorly understood. Recently, a significant increase in the uplift rate has occurred, resulting in almost 13 cm of uplift by 2013 (De Martino et al., 2014; Samsonov et al., 2014b; Di Vito et al., 2016). Here we apply a principal component decomposition to high resolution time series from the region produced by the advanced Multidimensional SBAS DInSAR technique in order to better delineate both the deeper source and the recent shallow activity. We analyzed both a period of substantial subsidence (1993-1999) and a second of significant uplift (2007-2013) and inverted the associated vertical surface displacement for the most likely source models. Results suggest that the underlying dynamics of the caldera changed in the late 1990s, from one in which the primary signal arises from a shallow deflating source above a deeper, expanding source to one dominated by a shallow inflating source. In general, the shallow source lies between 2700 and 3400 m below the caldera while the deeper source lies at 7600 m or more in depth. The combination of principal component analysis with high resolution MSBAS time series data allows for these new insights and confirms the applicability of both to areas at risk from dynamic natural hazards.

  1. Principal component analysis of global warming with respect to CO₂ emission in Nigeria: an exploratory study

    Energy Technology Data Exchange (ETDEWEB)

    Igwenagu, C.M. [Enugu State University of Science and Technology (Nigeria). Dept. of Industrial Mathematics, Applied Statistics and Demography

    2011-07-01

    This study has examined the position of Nigeria in relation to carbon dioxide (CO₂) emission in readiness for emission trading as proposed in the Kyoto protocol as a measure for reducing global warming. It was found that Nigeria emits only 0.4% of the world's total CO₂ emission, indicating that it would be a possible seller of emission credits as contained in the Kyoto protocol. Fifty countries were selected for the analysis and some possible correlates of CO₂ were considered. Correlation analysis and principal component analysis revealed that gross domestic product and industrial output accounted for 93% of the total variation. This reflects the very low level of economic activity being experienced in the country.

  2. InterFace: A software package for face image warping, averaging, and principal components analysis.

    Science.gov (United States)

    Kramer, Robin S S; Jenkins, Rob; Burton, A Mike

    2017-12-01

    We describe InterFace, a software package for research in face recognition. The package supports image warping, reshaping, averaging of multiple face images, and morphing between faces. It also supports principal components analysis (PCA) of face images, along with tools for exploring the "face space" produced by PCA. The package uses a simple graphical user interface, allowing users to perform these sophisticated image manipulations without any need for programming knowledge. The program is available for download in the form of an app, which requires that users also have access to the (freely available) MATLAB Runtime environment.
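
    For readers unfamiliar with PCA of face images, the snippet below is a generic, minimal eigenface-style sketch (flatten aligned images, fit PCA, obtain "face space" coordinates and a reconstruction). It is not InterFace's code and does not use its MATLAB-based interface; the image stack and component count are invented.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Hypothetical stack of aligned, grey-scale face images (n_faces x height x width).
    rng = np.random.default_rng(11)
    faces = rng.random(size=(100, 64, 48))

    X = faces.reshape(len(faces), -1)            # one row per face image
    pca = PCA(n_components=20).fit(X)

    # Coordinates of each face in the 20-dimensional "face space", and a reconstruction.
    coords = pca.transform(X)
    reconstructed = pca.inverse_transform(coords[:1]).reshape(64, 48)
    print("variance captured:", round(pca.explained_variance_ratio_.sum(), 2))
    ```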

  3. Radiation chemistry of the base components of DNA and related substances

    International Nuclear Information System (INIS)

    Teoule, R.

    1979-01-01

    The loss of UV absorption may be considered as a useful index to evaluate the extent of base destruction. The variations observed reflect the sum of different phenomena: the modification of base stacking, hydrogen bond rupture between DNA bases and the saturation of conjugated double bonds of heterocycles. Another way to measure the base degradation is by formic acid hydrolysis. Radiation products are very sensitive to the formic acid hydrolysis performed at 180 deg C. In aerated solutions, an important event responsible for the degradation of pyrimidine bases is the formation of hydroperoxide. This review consists of the following subheadings: identification of the DNA base damages; thymine fragments in aerated solutions and in deaerated solutions; adenine fragments; and cytosine fragments. The review concludes with the remark that one has to be very cautious in extrapolating the results obtained by the gamma irradiation of free bases in solution to DNA. Free bases, but no nucleosides, are liberated during irradiation. (Yamashita, S.)

  4. Irradiation tests of critical components for remote handling in gamma radiation environment

    International Nuclear Information System (INIS)

    Obara, Henjiro; Kakudate, Satoshi; Oka, Kiyoshi

    1994-08-01

    Since the fusion power core of a D-T fusion reactor will be highly activated once it starts operation, personnel access will be prohibited, so assembly and maintenance of the components in the reactor core will have to be conducted entirely by remote handling technology. Fusion experimental reactors such as ITER require unprecedented remote handling equipment that can tolerate gamma radiation of more than 10⁶ R/h. For this purpose, the Japan Atomic Energy Research Institute (JAERI) has been developing radiation-hard components for remote handling, and a number of key components have been tested to over 10⁹ rad at a radiation dose rate of around 10⁶ R/h, using the Gamma Ray Radiation Test Facility at the JAERI-Takasaki Establishment. This report summarizes the irradiation test results and the latest status of the AC servo motor, potentiometer, optical elements, lubricant, sensors and cables, which are key elements of the remote handling system. (author)

  5. A hybrid symplectic principal component analysis and central tendency measure method for detection of determinism in noisy time series with application to mechanomyography.

    Science.gov (United States)

    Xie, Hong-Bo; Dokos, Socrates

    2013-06-01

    We present a hybrid symplectic geometry and central tendency measure (CTM) method for detection of determinism in noisy time series. CTM is effective for detecting determinism in short time series and has been applied in many areas of nonlinear analysis. However, its performance significantly degrades in the presence of strong noise. In order to circumvent this difficulty, we propose to use symplectic principal component analysis (SPCA), a new chaotic signal de-noising method, as the first step to recover the system dynamics. CTM is then applied to determine whether the time series arises from a stochastic process or has a deterministic component. Results from numerical experiments, ranging from six benchmark deterministic models to 1/f noise, suggest that the hybrid method can significantly improve detection of determinism in noisy time series by about 20 dB when the data are contaminated by Gaussian noise. Furthermore, we apply our algorithm to study the mechanomyographic (MMG) signals arising from contraction of human skeletal muscle. Results obtained from the hybrid symplectic principal component analysis and central tendency measure demonstrate that the skeletal muscle motor unit dynamics can indeed be deterministic, in agreement with previous studies. However, the conventional CTM method was not able to definitely detect the underlying deterministic dynamics. This result on MMG signal analysis is helpful in understanding neuromuscular control mechanisms and developing MMG-based engineering control applications.
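
    The central tendency measure itself is simple to compute: it is the fraction of points of the second-order difference plot that fall inside a circle of a chosen radius around the origin. The sketch below implements only that CTM step (not the symplectic PCA de-noising) on two synthetic series; the radius and the test signals are arbitrary choices made for illustration.

    ```python
    import numpy as np

    def central_tendency_measure(x, radius):
        """CTM: fraction of second-order difference-plot points inside a circle of given radius."""
        d1 = x[1:-1] - x[:-2]          # x[n+1] - x[n]
        d2 = x[2:] - x[1:-1]           # x[n+2] - x[n+1]
        return np.mean(np.hypot(d1, d2) < radius)

    rng = np.random.default_rng(3)
    noise = rng.normal(size=5000)

    # Deterministic (logistic map) vs stochastic series, both scaled to unit variance.
    logistic = np.empty(5000)
    logistic[0] = 0.4
    for n in range(4999):
        logistic[n + 1] = 4.0 * logistic[n] * (1.0 - logistic[n])

    for name, series in [("logistic map", logistic), ("white noise", noise)]:
        z = (series - series.mean()) / series.std()
        print(name, round(central_tendency_measure(z, radius=1.0), 3))
    ```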

  6. A Principal Component Analysis (PCA) Approach to Seasonal and Zooplankton Diversity Relationships in Fishing Grounds of Mannar Gulf, India

    Directory of Open Access Journals (Sweden)

    Selvin J. PITCHAIKANI

    2017-06-01

    Full Text Available Principal component analysis (PCA) is a technique used to emphasize variation and bring out strong patterns in a dataset. It is often used to make data easy to explore and visualize. The primary objective of the present study was to record information on zooplankton diversity in a systematic way and to study the variability and relationships among the seasons prevailing in the Gulf of Mannar. The PCA for the zooplankton seasonal diversity was investigated using the four seasonal datasets to understand the statistical significance among the four seasons. Two different principal components (PCs) were segregated homogeneously in all the seasons. The PCA revealed that Temora turbinata is an opportunistic species, that zooplankton diversity was significantly different from season to season, and that zooplankton abundance and its dynamics in the Gulf of Mannar are principally structured by seasonal current patterns. The factor loadings of zooplankton for the different seasons in Tiruchendur coastal water (Gulf of Mannar) differ from those of the southwest coast of India; in particular, routine and opportunistic species were found within the positive and negative factors. The copepods Acrocalanus gracilis and Acartia erythrea were dominant in summer and the southwest monsoon owing to the rainfall and freshwater discharge during the summer season; however, these species were replaced by Temora turbinata during the northeast monsoon season.

  7. Optimisation of a machine learning algorithm in human locomotion using principal component and discriminant function analyses

    OpenAIRE

    Bisele, M; Bencsik, M; Lewis, MGC; Barnett, CT

    2017-01-01

    Assessment methods in human locomotion often involve the description of normalised graphical profiles and/or the extraction of discrete variables. Whilst useful, these approaches may not represent the full complexity of gait data. Multivariate statistical methods, such as Principal Component Analysis (PCA) and Discriminant Function Analysis (DFA), have been adopted since they have the potential to overcome these data handling issues. The aim of the current study was to develop and optimise a ...

  8. Simulation of an industrial wastewater treatment plant using artificial neural networks and principal components analysis

    Directory of Open Access Journals (Sweden)

    Oliveira-Esquerre K.P.

    2002-01-01

    Full Text Available This work presents a way to predict the biochemical oxygen demand (BOD of the output stream of the biological wastewater treatment plant at RIPASA S/A Celulose e Papel, one of the major pulp and paper plants in Brazil. The best prediction performance is achieved when the data are preprocessed using principal components analysis (PCA before they are fed to a backpropagated neural network. The influence of input variables is analyzed and satisfactory prediction results are obtained for an optimized situation.
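
    A minimal version of that preprocessing idea (standardize the inputs, project onto a few principal components, then train a backpropagation network on the scores) can be written with scikit-learn as below. The synthetic data, the number of components and the network size are assumptions for illustration only, not the plant data or architecture used in the study.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import train_test_split

    # Hypothetical plant data: influent/process variables (flow, COD, pH, ...) and effluent BOD.
    rng = np.random.default_rng(4)
    X = rng.normal(size=(600, 10))
    y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=600)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    # Standardize, project onto the leading principal components, then fit a backprop network.
    model = make_pipeline(StandardScaler(),
                          PCA(n_components=5),
                          MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
    model.fit(X_tr, y_tr)
    print("R2 on held-out data:", round(model.score(X_te, y_te), 3))
    ```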

  9. Transverse components of the radiation force on nonspherical particles in the T-matrix formalism

    International Nuclear Information System (INIS)

    Saija, Rosalba; Antonia Iati, Maria; Giusto, Arianna; Denti, Paolo; Borghese, Ferdinando

    2005-01-01

    In the framework of the transition matrix approach, we calculate the force exerted by a plane wave (radiation force) on a dispersion of nonspherical particles modeled as aggregates of spheres. Beyond the customary radiation pressure we also consider the components of the radiation force in a plane orthogonal to the direction of incidence of the incoming wave (transverse components). Our calculations show that, although the latter are generally smaller than the radiation pressure, they are in no way negligible and may be important for some applications, e.g. when studying the dynamics of cosmic dust grains. We also calculate the ensemble average of the components of the radiation force over the orientation of the particles in two physically significant cases: the case of random distribution and the case in which the orientations are randomly distributed around an axis fixed in space (axial average). As expected, we find that, unlike the case of random orientation, the transverse components do not vanish for axial average

  10. Real time damage detection using recursive principal components and time varying auto-regressive modeling

    Science.gov (United States)

    Krishnan, M.; Bhowmik, B.; Hazra, B.; Pakrashi, V.

    2018-02-01

    In this paper, a novel baseline-free approach for continuous online damage detection of multi-degree-of-freedom vibrating structures using Recursive Principal Component Analysis (RPCA) in conjunction with Time Varying Auto-Regressive Modeling (TVAR) is proposed. In this method, the acceleration data are used to obtain recursive proper orthogonal components online using the rank-one perturbation method, followed by TVAR modeling of the first transformed response, to detect the change in the dynamic behavior of the vibrating system from its pristine state to contiguous linear/non-linear states that indicate damage. Most of the works available in the literature deal with algorithms that require windowing of the gathered data owing to their data-driven nature, which renders them ineffective for online implementation. Algorithms focused on mathematically consistent recursive techniques within a rigorous theoretical framework for structural damage detection are missing, which motivates the development of the present framework; it is amenable to online implementation and can be utilized alongside a suite of experimental and numerical investigations. The RPCA algorithm iterates the eigenvector and eigenvalue estimates of the sample covariance matrix with each new data point at successive time instants, using the rank-one perturbation method. TVAR modeling of the principal component explaining maximum variance is utilized, and damage is identified by tracking the TVAR coefficients. This eliminates the need for offline post-processing and facilitates online damage detection, especially when applied to streaming data, without requiring any baseline data. Numerical simulations performed on a 5-dof nonlinear system under white noise excitation and El Centro (also known as 1940 Imperial Valley earthquake) excitation, for different damage scenarios, demonstrate the robustness of the proposed algorithm. The method is further validated on results obtained from case studies involving
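
    To make the recursive idea concrete, the sketch below tracks the leading principal component of streaming multichannel data by applying a rank-one update to an exponentially weighted covariance estimate and re-extracting its leading eigenvector at every step. This is a simplified stand-in for the paper's first-order perturbation updates (the TVAR stage is only hinted at in a comment); the forgetting factor and the data are invented.

    ```python
    import numpy as np

    def recursive_first_pc(data, forgetting=0.99):
        """Track the first principal component of streaming multichannel data.

        Each new sample applies a rank-one update to an exponentially weighted
        covariance estimate; the leading eigenvector is re-extracted at every step.
        """
        n_ch = data.shape[1]
        mean = np.zeros(n_ch)
        cov = np.eye(n_ch) * 1e-6
        first_pc_response = np.empty(len(data))
        for t, x in enumerate(data):
            mean = forgetting * mean + (1.0 - forgetting) * x
            d = x - mean
            cov = forgetting * cov + (1.0 - forgetting) * np.outer(d, d)   # rank-one update
            _, eigvecs = np.linalg.eigh(cov)
            first_pc_response[t] = d @ eigvecs[:, -1]       # projection onto the leading eigenvector
        return first_pc_response

    rng = np.random.default_rng(5)
    acc = rng.normal(size=(2000, 5))          # hypothetical 5-channel acceleration record
    pc1 = recursive_first_pc(acc)
    # pc1 would then be fed to a time-varying AR model whose coefficients are tracked for damage.
    ```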

  11. Principal component analysis of the CT density histogram to generate parametric response maps of COPD

    Science.gov (United States)

    Zha, N.; Capaldi, D. P. I.; Pike, D.; McCormack, D. G.; Cunningham, I. A.; Parraga, G.

    2015-03-01

    Pulmonary x-ray computed tomography (CT) may be used to characterize emphysema and airways disease in patients with chronic obstructive pulmonary disease (COPD). One analysis approach, parametric response mapping (PRM), utilizes registered inspiratory and expiratory CT image volumes and CT-density-histogram thresholds, but there is no consensus regarding the threshold values used, or their clinical meaning. Principal-component-analysis (PCA) of the CT density histogram can be exploited to quantify emphysema using data-driven CT-density-histogram thresholds. Thus, the objective of this proof-of-concept demonstration was to develop a PRM approach using PCA-derived thresholds in COPD patients and ex-smokers without airflow limitation. Methods: Fifteen COPD ex-smokers and 5 normal ex-smokers were evaluated. Thoracic CT images were acquired at full inspiration and full expiration and these images were non-rigidly co-registered. PCA was performed on the CT density histograms, from which the components with eigenvalues greater than one were summed. Since the values of the principal component curve correlate directly with the variability in the sample, the maximum and minimum points on the curve were used as threshold values for the PCA-adjusted PRM technique. Results: A significant correlation was determined between conventional and PCA-adjusted PRM with ³He MRI apparent diffusion coefficient (p<0.001), with CT RA950 (p<0.0001), as well as with ³He MRI ventilation defect percent, a measurement of both small airways disease (p=0.049 and p=0.06, respectively) and emphysema (p=0.02). Conclusions: PRM generated using PCA thresholds of the CT density histogram showed significant correlations with CT and ³He MRI measurements of emphysema, but not airways disease.

  12. Principal axis-based correspondence between multiple cameras for people tracking.

    Science.gov (United States)

    Hu, Weiming; Hu, Min; Zhou, Xue; Tan, Tieniu; Lou, Jianguang; Maybank, Steve

    2006-04-01

    Visual surveillance using multiple cameras has attracted increasing interest in recent years. Correspondence between multiple cameras is one of the most important and basic problems which visual surveillance using multiple cameras brings. In this paper, we propose a simple and robust method, based on principal axes of people, to match people across multiple cameras. The correspondence likelihood reflecting the similarity of pairs of principal axes of people is constructed according to the relationship between "ground-points" of people detected in each camera view and the intersections of principal axes detected in different camera views and transformed to the same view. Our method has the following desirable properties: 1) Camera calibration is not needed. 2) Accurate motion detection and segmentation are less critical due to the robustness of the principal axis-based feature to noise. 3) Based on the fused data derived from correspondence results, positions of people in each camera view can be accurately located even when the people are partially occluded in all views. The experimental results on several real video sequences from outdoor environments have demonstrated the effectiveness, efficiency, and robustness of our method.

  13. Partitioning of radiation and energy balance components in an inhomogeneous desert valley

    International Nuclear Information System (INIS)

    Malek, E.; Bingham, G.E.

    1997-01-01

    radiation and energy balance components are also reported. Using just one pyranometer and these relationships, one can estimate the daily, monthly, and annual values of R_so, R_n, R_ln, LE, H, and G_surf. We have made a major step toward being able to estimate the net radiation of a heterogeneous desert landscape based on vegetation cover and irradiance levels. The results can be applied to any similar closed desert basin in Australia, the Great Basin in the U.S.A., the Middle East, and North Africa. (author)

  14. Quantifying Individual Brain Connectivity with Functional Principal Component Analysis for Networks.

    Science.gov (United States)

    Petersen, Alexander; Zhao, Jianyang; Carmichael, Owen; Müller, Hans-Georg

    2016-09-01

    In typical functional connectivity studies, connections between voxels or regions in the brain are represented as edges in a network. Networks for different subjects are constructed at a given graph density and are summarized by some network measure such as path length. Examining these summary measures for many density values yields samples of connectivity curves, one for each individual. This has led to the adoption of basic tools of functional data analysis, most commonly to compare control and disease groups through the average curves in each group. Such group differences, however, neglect the variability in the sample of connectivity curves. In this article, the use of functional principal component analysis (FPCA) is demonstrated to enrich functional connectivity studies by providing increased power and flexibility for statistical inference. Specifically, individual connectivity curves are related to individual characteristics such as age and measures of cognitive function, thus providing a tool to relate brain connectivity with these variables at the individual level. This individual level analysis opens a new perspective that goes beyond previous group level comparisons. Using a large data set of resting-state functional magnetic resonance imaging scans, relationships between connectivity and two measures of cognitive function (episodic memory and executive function) were investigated. The group-based approach was implemented by dichotomizing the continuous cognitive variable and testing for group differences, resulting in no statistically significant findings. To demonstrate the new approach, FPCA was implemented, followed by linear regression models with cognitive scores as responses, identifying significant associations of connectivity in the right middle temporal region with both cognitive scores.

  15. Spatiotemporal Variability of Earth's Radiation Balance Components from Russian Radiometer IKOR-M

    Science.gov (United States)

    Cherviakov, M.

    2016-12-01

    The radiometer IKOR-M was created at National Research Saratov State University for satellite monitoring of outgoing reflected short-wave radiation, one of the components of the Earth's radiation budget. Such information can be used in different models of long-term weather forecasts, in research on climate change trends, and in the calculation of absorbed solar radiation values and the albedo of the Earth-atmosphere system. The IKOR-M product archive is available online at all times. A searchable catalogue of data products is continually updated, and users may search and download data products via the Earth radiation balance components research laboratory website as soon as they become available. Two series of measurements from two different IKOR-M instruments are available. The first radiometer operated from October 2009 to August 2014 and the second from August 2014 to the present; therefore, there is a period when both radiometers worked at the same time. Top-of-atmosphere fluxes deduced from the "Meteor-M" No 1 measurements in August 2014 show very good agreement with the fluxes determined from "Meteor-M" No 2. The scale relationship between the IKOR-M radiometers on the "Meteor-M" No 1 and No 2 satellites was found by comparing the global distribution maps of monthly averaged albedo values. The seasonal and interannual variations of OSR, albedo and ASR were discussed. The variations between SW radiation budget components seem to be within observational uncertainty and the natural variability governed by cloudiness, water vapor and aerosol variations. Spatial and temporal variations of albedo and absorbed solar radiation were assessed over different regions. Latitudinal distributions of albedo and ASR were estimated in more detail; meridional cross sections over oceans and land were used separately for this estimation. It was shown that the albedo and ASR data received from the radiometer IKOR-M can be used to detect El Nino in the Pacific Ocean. The reported study was funded by

  16. Further climatic zoning of building energy efficiency based on principal component analysis and cluster analysis

    Institute of Scientific and Technical Information of China (English)

    张慧玲; 付祥钊

    2012-01-01

    Based on an analysis of the climate indexes that influence building cooling and heating energy consumption, eight indexes were taken as division indexes: HDD18, CDD26, the average temperature of the coldest month, the average temperature of the hottest month, solar radiation in winter, solar radiation in summer, humidity ratio in winter and humidity ratio in summer. Principal component analysis and cluster analysis were applied to the building energy efficiency climatic zoning of 270 cities in China, which were divided into the severe-cold dry-heat high-solar-radiation zone, severe-cold cool high-solar-radiation zone, severe-cold summer-free zone, cold slight-hot zone, temperate hot-humid zone, temperate cool zone and raw hot-wet zone. The main climatic characteristics and geographic distribution of the seven climate zones are described, and the key points of building energy efficiency and the appropriate technical strategies are clarified for each zone.

  17. Mapping ash properties using principal components analysis

    Science.gov (United States)

    Pereira, Paulo; Brevik, Eric; Cerda, Artemi; Ubeda, Xavier; Novara, Agata; Francos, Marcos; Rodrigo-Comino, Jesus; Bogunovic, Igor; Khaledian, Yones

    2017-04-01

    In post-fire environments ash has important benefits for soils, such as protection and a source of nutrients, which are crucial for vegetation recuperation (Jordan et al., 2016; Pereira et al., 2015a; 2016a,b). The thickness and distribution of ash are fundamental aspects of soil protection (Cerdà and Doerr, 2008; Pereira et al., 2015b), and the severity at which it was produced is important for the type and amount of elements released into the soil solution (Bodi et al., 2014). Ash is a very mobile material, and where it will be deposited matters: until the first rainfalls it is very mobile; afterwards, it binds to the soil surface and is harder to erode. Mapping ash properties in the immediate period after a fire is complex, since the ash is constantly moving (Pereira et al., 2015b). However, it is an important task, since from the amount and type of ash produced we can identify the degree of soil protection and the nutrients that will be dissolved. The objective of this work is to map ash properties (CaCO3, pH, and selected extractable elements) using a principal component analysis (PCA) in the immediate period after the fire. Four days after the fire we established a grid in a 9x27 m area and took ash samples every 3 meters, for a total of 40 sampling points (Pereira et al., 2017). The PCA identified 5 different factors. Factor 1 had high positive loadings in electrical conductivity, calcium, and magnesium and negative loadings in aluminum and iron, while Factor 2 had high positive loadings in total phosphorous and silica. Factor 3 showed high positive loadings in sodium and potassium, Factor 4 high negative loadings in CaCO3 and pH, and Factor 5 high loadings in sodium and potassium. The experimental variograms of the extracted factors showed that the Gaussian model was the most precise for modelling Factor 1, the linear model for Factor 2, and the wave (hole-effect) model for Factors 3, 4 and 5. The maps produced confirm the patterns observed in the experimental variograms. Factor 1 and 2

  18. Component design challenges for the ground-based SP-100 nuclear assembly test

    International Nuclear Information System (INIS)

    Markley, R.A.; Disney, R.K.; Brown, G.B.

    1989-01-01

    The SP-100 ground engineering system (GES) program involves a ground test of the nuclear subsystems to demonstrate their design. The GES nuclear assembly test (NAT) will be performed in a simulated space environment within a vessel maintained at ultrahigh vacuum. The NAT employs a radiation shielding system that is comprised of both prototypical and nonprototypical shield subsystems to attenuate the reactor radiation leakage and also nonprototypical heat transport subsystems to remove the heat generated by the reactor. The reactor is cooled by liquid lithium, which will operate at temperatures prototypical of the flight system. In designing the components for these systems, a number of design challenges were encountered in meeting the operational requirements of the simulated space environment (and where necessary, prototypical requirements) while also accommodating the restrictions of a ground-based test facility with its limited available space. This paper presents a discussion of the design challenges associated with the radiation shield subsystem components and key components of the heat transport systems

  19. Rationalization of paclitaxel insensitivity of yeast β-tubulin and human βIII-tubulin isotype using principal component analysis

    Directory of Open Access Journals (Sweden)

    Das Lalita

    2012-08-01

    Full Text Available Abstract. Background: The chemotherapeutic agent paclitaxel arrests cell division by binding to the hetero-dimeric protein tubulin. Subtle differences in tubulin sequences, across eukaryotes and among β-tubulin isotypes, can have a profound impact on paclitaxel-tubulin binding. To capture the experimentally observed paclitaxel-resistance of the human βIII tubulin isotype and yeast β-tubulin within a common theoretical framework, we have performed structural principal component analyses of β-tubulin sequences across eukaryotes. Results: The paclitaxel-resistance of the human βIII tubulin isotype and yeast β-tubulin mapped uniquely onto the lowest two principal components, defining the paclitaxel-binding-site residues of β-tubulin. The molecular mechanisms behind paclitaxel-resistance, mediated through key residues, were identified from the structural consequences of characteristic mutations that confer paclitaxel-resistance. Specifically, Ala277 in the βIII isotype was shown to be crucial for paclitaxel-resistance. Conclusions: The present analysis captures the origin of two apparently unrelated events, the paclitaxel-insensitivity of yeast tubulin and of the human βIII tubulin isotype, through two common collective sequence vectors.

  20. Ancestry inference using principal component analysis and spatial analysis: a distance-based analysis to account for population substructure.

    Science.gov (United States)

    Byun, Jinyoung; Han, Younghun; Gorlov, Ivan P; Busam, Jonathan A; Seldin, Michael F; Amos, Christopher I

    2017-10-16

    Accurate inference of genetic ancestry is of fundamental interest to many biomedical, forensic, and anthropological research areas. Genetic ancestry memberships may relate to genetic disease risks. In a genome association study, failing to account for differences in genetic ancestry between cases and controls may also lead to false-positive results. Although a number of strategies for inferring and taking into account the confounding effects of genetic ancestry are available, applying them to large studies (tens of thousands of samples) is challenging. The goal of this study is to develop an approach for inferring the genetic ancestry of samples with unknown ancestry among closely related populations and to provide accurate estimates of ancestry for application to large-scale studies. In this study we developed a novel distance-based approach, Ancestry Inference using Principal component analysis and Spatial analysis (AIPS), which incorporates an Inverse Distance Weighted (IDW) interpolation method from spatial analysis to assign individuals to population memberships. We demonstrate the benefits of AIPS in analyzing population substructure, specifically in comparison with the four most commonly used tools EIGENSTRAT, STRUCTURE, fastSTRUCTURE, and ADMIXTURE, using genotype data from various intra-European panels and European-Americans. While the aforementioned commonly used tools performed poorly in inferring ancestry from a large number of subpopulations, AIPS accurately distinguished variations between and within subpopulations. Our results show that AIPS can be applied to large-scale data sets to discriminate the modest variability among intra-continental populations as well as to characterize inter-continental variation. The method we developed will protect against spurious associations when mapping the genetic basis of a disease. Our approach is a more accurate and computationally efficient method for inferring genetic ancestry in large-scale genetic studies.
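
    The core of the AIPS idea, projecting samples into PC space and assigning membership by inverse-distance-weighted votes from a labelled reference panel, can be sketched as follows. The genotype matrices, the number of components and the IDW power are all invented for illustration; this is not the released AIPS implementation.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Hypothetical genotype dosage matrices (individuals x SNPs): a labelled reference panel
    # plus samples of unknown ancestry.
    rng = np.random.default_rng(6)
    ref_geno = rng.binomial(2, 0.3, size=(300, 500)).astype(float)
    ref_pop = rng.integers(0, 3, size=300)                  # three reference subpopulations
    unk_geno = rng.binomial(2, 0.3, size=(20, 500)).astype(float)

    pca = PCA(n_components=4).fit(ref_geno)
    ref_pc = pca.transform(ref_geno)
    unk_pc = pca.transform(unk_geno)

    def idw_membership(point, ref_points, ref_labels, power=2, eps=1e-9):
        """Inverse-distance-weighted vote of reference individuals in PC space."""
        w = 1.0 / (np.linalg.norm(ref_points - point, axis=1) ** power + eps)
        return np.array([w[ref_labels == k].sum() for k in np.unique(ref_labels)]) / w.sum()

    for sample in unk_pc[:3]:
        print(np.round(idw_membership(sample, ref_pc, ref_pop), 3))
    ```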

  1. Kernel Principal Component Analysis for dimensionality reduction in fMRI-based diagnosis of ADHD

    Directory of Open Access Journals (Sweden)

    Gagan S Sidhu

    2012-11-01

    Full Text Available This article explores various preprocessing tools that select/create features to help a learner produce a classifier that can use fMRI data to effectively discriminate Attention-Deficit Hyperactivity Disorder (ADHD) patients from healthy controls. We consider four different learning tasks: predicting either two (ADHD vs control) or three classes (ADHD-1 vs ADHD-3 vs control), where each uses either the imaging data only, or the phenotypic and imaging data. After averaging, BOLD-signal normalization, and masking of the fMRI images, we considered applying the Fast Fourier Transform (FFT), possibly followed by some Principal Component Analysis (PCA) variant (over time: PCA-t; over space and time: PCA-st) or the kernelized variant, kPCA-st, to produce inputs to a learner, to determine which learned classifier performs the best – or at least better than the baseline of 64.2%, which is the proportion of the majority class (here, controls). In the two-class setting, PCA-t and PCA-st did not perform statistically better than baseline, whereas FFT and kPCA-st did (FFT, 68.4%; kPCA-st, 70.3%); when combined with the phenotypic data, which by itself produces 72.9% accuracy, all methods performed statistically better than the baseline, but none did better than using the phenotypic data alone. In the three-class setting, neither the PCA variants nor the phenotypic data classifiers performed statistically better than the baseline. We next used the FFT output as input to the PCA variants. In the two-class setting, the PCA variants performed statistically better than the baseline using either the FFTed waveforms only (FFT+PCA-t, 69.6%; FFT+PCA-st, 69.3%; FFT+kPCA-st, 68.7%) or combining them with the phenotypic data (FFT+PCA-t, 70.6%; FFT+PCA-st, 70.6%; FFT+kPCA-st, 76%). In both settings, combining FFT+kPCA-st's features with the phenotypic data was better than using only the phenotypic data, with the result in the two-class setting being statistically better.
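
    A stripped-down version of the FFT-then-kernel-PCA feature pipeline is sketched below with scikit-learn: amplitude spectra of per-region BOLD time series are flattened, passed through an RBF kernel PCA, and fed to a simple linear classifier under cross-validation. The synthetic data, kernel parameters and classifier are placeholders; the study's kPCA-st operates jointly over space and time and uses different learners.

    ```python
    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    # Hypothetical per-region BOLD time series: subjects x time points x regions.
    rng = np.random.default_rng(7)
    bold = rng.normal(size=(80, 120, 30))
    labels = rng.integers(0, 2, size=80)                     # ADHD vs control

    # FFT amplitude spectra, flattened per subject, as in the FFT preprocessing step.
    spectra = np.abs(np.fft.rfft(bold, axis=1)).reshape(80, -1)

    # Kernel PCA over the spectral features, followed by a simple linear classifier.
    clf = make_pipeline(StandardScaler(),
                        KernelPCA(n_components=20, kernel="rbf", gamma=1e-4),
                        LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, spectra, labels, cv=5)
    print("mean CV accuracy:", round(scores.mean(), 3))
    ```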

  2. Effects of different components of serum after radiation, burn and combined radiation-burn injury on inward rectifier potassium channel of myocardial cells

    International Nuclear Information System (INIS)

    Ye Benlan; Cheng Tianmin; Xiao Jiasi

    1997-01-01

    Objective: To study the effects of different components of serum from rats inflicted with radiation, burn and combined radiation-burn injury on the inward rectifier potassium channel of cultured myocardial cells. Method: The patch clamp technique was used to study single ion channel activity. Results: The low-molecular-weight and lipid components of serum from the different injury models could all activate the inward rectifier potassium channel in cultured myocardial cells. The components of serum after combined radiation-burn injury showed the most significant effect, and the mode of this effect differed from that of either single injury. Conclusion: The serum components after injury altered the electrical characteristics of myocardial cells, which may play a role in the combined effect of depressed cardiac function after combined radiation-burn injury

  3. Palm-vein classification based on principal orientation features.

    Directory of Open Access Journals (Sweden)

    Yujia Zhou

    Full Text Available Personal recognition using palm-vein patterns has emerged as a promising alternative for human recognition because of its uniqueness, stability, live body identification, flexibility, and difficulty to cheat. With the expanding application of palm-vein pattern recognition, the corresponding growth of the database has resulted in a long response time. To shorten the response time of identification, this paper proposes a simple and useful classification for palm-vein identification based on principal direction features. In the registration process, the Gaussian-Radon transform is adopted to extract the orientation matrix and then compute the principal direction of a palm-vein image based on the orientation matrix. The database can be classified into six bins based on the value of the principal direction. In the identification process, the principal direction of the test sample is first extracted to ascertain the corresponding bin. One-by-one matching with the training samples is then performed in the bin. To improve recognition efficiency while maintaining better recognition accuracy, two neighborhood bins of the corresponding bin are continuously searched to identify the input palm-vein image. Evaluation experiments are conducted on three different databases, namely, PolyU, CASIA, and the database of this study. Experimental results show that the searching range of one test sample in PolyU, CASIA and our database by the proposed method for palm-vein identification can be reduced to 14.29%, 14.50%, and 14.28%, with retrieval accuracy of 96.67%, 96.00%, and 97.71%, respectively. With 10,000 training samples in the database, the execution time of the identification process by the traditional method is 18.56 s, while that by the proposed approach is 3.16 s. The experimental results confirm that the proposed approach is more efficient than the traditional method, especially for a large database.

  4. Ceramic component with reinforced protection against radiations

    International Nuclear Information System (INIS)

    Dubuisson, J.; Laville, H.; Le Gal, P.

    1986-01-01

    Ceramic components hardened against radiation are claimed (for example, capacitors or ceramic substrates for semiconductors). They are prepared from a sintered ceramic containing a high proportion of heavy atoms (for instance, barium titanate and a bismuth salt), provided with a glass layer containing a high proportion of light atoms. The two materials are joined by vitrification, producing a diffusion zone at the interface [fr]

  5. Nonlinear Denoising and Analysis of Neuroimages With Kernel Principal Component Analysis and Pre-Image Estimation

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Abrahamsen, Trine Julie; Madsen, Kristoffer Hougaard

    2012-01-01

    We investigate the use of kernel principal component analysis (PCA) and the inverse problem known as pre-image estimation in neuroimaging: i) We explore kernel PCA and pre-image estimation as a means for image denoising as part of the image preprocessing pipeline. Evaluation of the denoising procedure is performed within a data-driven split-half evaluation framework. ii) We introduce manifold navigation for exploration of a nonlinear data manifold, and illustrate how pre-image estimation can be used to generate brain maps in the continuum between experimentally defined brain states/classes. We...

  6. Improvement of User's Accuracy Through Classification of Principal Component Images and Stacked Temporal Images

    Institute of Scientific and Technical Information of China (English)

    Nilanchal Patel; Brijesh Kumar Kaushal

    2010-01-01

    The classification accuracy of the various categories on classified remotely sensed images is usually evaluated by two different measures of accuracy, namely, producer's accuracy (PA) and user's accuracy (UA). The PA of a category indicates to what extent the reference pixels of the category are correctly classified, whereas the UA of a category represents to what extent the other categories are less misclassified into the category in question. Therefore, the UA of the various categories determines the reliability of their interpretation on the classified image and is more important to the analyst than the PA. The present investigation was performed in order to determine whether there is an improvement in the UA of the various categories on the classified image of the principal components of the original bands and on the classified image of the stacked image of two different years. We performed the analyses using the IRS LISS III images of two different years, i.e., 1996 and 2009, which represent different magnitudes of urbanization, and the stacked image of these two years pertaining to the Ranchi area, Jharkhand, India, with a view to assessing the impacts of urbanization on the UA of the different categories. The results of the investigation demonstrated that there is a significant improvement in the UA of the impervious categories in the classified image of the stacked image, which is attributable to the aggregation of spectral information from twice the number of bands from two different years. On the other hand, the classified image of the principal components did not show any improvement in the UA compared to the original images.

  7. Evaluation of Service Life of Polystyrene in Tropical Marine Environment by Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Dongdong Song

    2015-01-01

    Full Text Available To predict the service life of polystyrene (PS) in an aggressive environment, a nondimensional expression Z was established from a data set of multiple properties of PS by principal component analysis (PCA). In this study, PS specimens were exposed to the tropical environment of the Xisha Islands in China for two years. Chromatic aberration, gloss, tensile strength, elongation at break, flexural strength, and impact strength were tested to evaluate the aging behavior of PS. Depending on the needs of different industries, any one of the multiple properties could be used to evaluate the service life of PS. However, selecting a single performance variation will inevitably hide some information about the entire aging process. Therefore, finding a comprehensive measure representing the overall aging performance of PS is highly significant. Herein, PCA was applied to obtain a single property (Z) that represents all measured properties of PS. The Z value of PS degradation showed a slight decrease during the initial two months of exposure, after which it increased rapidly over the next eight months; subsequently, a slower increase of the Z value was observed. From these three distinct stages in the increase of the Z value, three stages have been identified in the service life of PS.
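
    Constructing such a composite index is a standard PCA use case: standardize the measured properties and take the scores on the first principal component as a single non-dimensional indicator. The sketch below shows that step on a small made-up table of property measurements; the numbers are invented and the sign/scaling conventions of the published Z are not reproduced.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Hypothetical property measurements at successive exposure times (rows):
    # colour difference, gloss, tensile strength, elongation, flexural strength, impact strength.
    props = np.array([
        [0.5, 95, 42, 2.1, 70, 12],
        [1.8, 90, 41, 1.9, 69, 11],
        [6.2, 72, 37, 1.2, 64,  8],
        [9.8, 60, 33, 0.8, 60,  6],
        [11.5, 55, 31, 0.7, 58,  5],
    ])

    # The first principal component of the standardized properties serves as a single
    # non-dimensional ageing index Z summarizing all measured properties.
    z_scores = StandardScaler().fit_transform(props)
    pca = PCA(n_components=1).fit(z_scores)
    Z = pca.transform(z_scores).ravel()
    print("explained variance:", round(pca.explained_variance_ratio_[0], 2))
    print("Z over exposure time:", np.round(Z, 2))
    ```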

  8. Pulse generator circuit triggerable by nuclear radiation

    International Nuclear Information System (INIS)

    Fredrickson, P.B.

    1980-01-01

    A pulse generator circuit triggerable by a pulse of nuclear radiation is described. The pulse generator circuit includes a pair of transistors arranged, together with other electrical components, in the topology of a standard monostable multivibrator circuit. The circuit differs most significantly from a standard monostable multivibrator circuit in that the circuit is adapted to be triggered by a pulse of nuclear radiation rather than electrically and the transistors have substantially different sensitivities to radiation, due to different physical and electrical characteristics and parameters. One of the transistors is employed principally as a radiation detector and is in a normally non-conducting state and the other transistor is normally in a conducting state. When the circuit is exposed to a pulse of nuclear radiation, currents are induced in the collector-base junctions of both transistors but, due to the different radiation sensitivities of the transistors, the current induced in the collector-base junction of the radiation-detecting transistor is substantially greater than that induced in the collector-base junction of the other transistor. The pulse of radiation causes the radiation-detecting transistor to operate in its conducting state, causing the other transistor to operate in its non-conducting state. As the radiation-detecting transistor operates in its conducting state, an output signal is produced at an output terminal connected to the radiation-detecting transistor indicating the presence of a predetermined intensity of nuclear radiation

  9. Efficient mining of myxobacterial metabolite profiles enabled by liquid chromatography-electrospray ionisation-time-of-flight mass spectrometry and compound-based principal component analysis

    International Nuclear Information System (INIS)

    Krug, Daniel; Zurek, Gabriela; Schneider, Birgit; Garcia, Ronald; Mueller, Rolf

    2008-01-01

    Bacteria producing secondary metabolites are an important source of natural products with highly diverse structures and biological activities. Developing methods to efficiently mine procaryotic secondary metabolomes for the presence of potentially novel natural products is therefore of considerable interest. Modern mass spectrometry-coupled liquid chromatography can effectively capture microbial metabolic diversity with ever improving sensitivity and accuracy. In addition, computational and statistical tools increasingly enable the targeted analysis and exploration of information-rich LC-MS datasets. In this article, we describe the use of such techniques for the characterization of myxobacterial secondary metabolomes. Using accurate mass data from high-resolution ESI-TOF measurements, target screening has facilitated the rapid identification of known myxobacterial metabolites in extracts from nine Myxococcus species. Furthermore, principal component analysis (PCA), implementing an advanced compound-based bucketing approach, readily revealed the presence of further compounds which contribute to variation among the metabolite profiles under investigation. The generation of molecular formulae for putative novel compounds with high confidence due to evaluation of both exact mass position and isotopic pattern, is exemplified as an important key for de-replication and prioritization of candidates for further characterization

  10. PM10 and gaseous pollutants trends from air quality monitoring networks in Bari province: principal component analysis and absolute principal component scores on a two years and half data set

    Science.gov (United States)

    2014-01-01

    Background: The chemical composition of aerosols and particle size distributions are the most significant factors affecting air quality. In particular, exposure to finer particles can cause short- and long-term effects on human health. In the present paper PM10 (particulate matter with aerodynamic diameter lower than 10 μm), CO, NOx (NO and NO2), benzene and toluene trends monitored in six monitoring stations of Bari province are shown. The data set used was composed of bi-hourly means for all parameters (12 bi-hourly means per day for each parameter) and refers to the period from January 2005 to May 2007. The main aim of the paper is to provide a clear illustration of how large data sets from monitoring stations can give information about the number and nature of the pollutant sources, and mainly to assess the contribution of the traffic source to the PM10 concentration level by using multivariate statistical techniques such as Principal Component Analysis (PCA) and Absolute Principal Component Scores (APCS). Results: Comparing the night and day mean concentrations (per day) for each parameter, it was found that some parameters, such as CO, benzene and toluene, show a different night and day behavior than PM10. This suggests that CO, benzene and toluene concentrations are mainly connected with transport systems, whereas PM10 is mostly influenced by other factors. The statistical techniques identified three recurrent sources, associated with vehicular traffic and particulate transport, covering over 90% of the variance. The contemporaneous analysis of gases and PM10 has allowed the differences between the sources of these pollutants to be underlined. Conclusions: The analysis of pollutant trends from large data sets and the application of multivariate statistical techniques such as PCA and APCS can give useful information about air quality and pollutant sources. This knowledge can provide useful advice for environmental policies in
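
    The PCA/APCS receptor-modelling step can be illustrated compactly: standardize the concentration matrix, extract principal component scores, convert them to absolute scores by subtracting the score of an artificial all-zero-concentration sample, and regress each pollutant on those absolute scores to estimate source contributions. The sketch below does this on synthetic two-source data; the source structure and all numbers are fabricated for illustration.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    # Hypothetical bi-hourly concentrations: rows are observations, columns are
    # PM10, CO, NO, NO2, benzene, toluene.
    rng = np.random.default_rng(8)
    traffic = rng.gamma(2.0, 1.0, size=2000)
    dust = rng.gamma(2.0, 1.0, size=2000)
    conc = np.column_stack([
        10 * traffic + 25 * dust, 0.8 * traffic, 30 * traffic, 20 * traffic,
        1.5 * traffic, 4.0 * traffic]) + rng.normal(0, 0.5, size=(2000, 6))

    mean, std = conc.mean(axis=0), conc.std(axis=0)
    z = (conc - mean) / std
    pca = PCA(n_components=2).fit(z)
    scores = pca.transform(z)

    # Absolute principal component scores: subtract the score of an artificial
    # sample whose concentrations are all zero.
    z0 = (np.zeros(conc.shape[1]) - mean) / std
    apcs = scores - pca.transform(z0.reshape(1, -1))

    # Regressing PM10 on the APCS estimates the contribution of each source.
    reg = LinearRegression().fit(apcs, conc[:, 0])
    contributions = reg.coef_ * apcs.mean(axis=0)
    print("mean PM10 apportioned to each source:", np.round(contributions, 2))
    print("unapportioned mass (intercept):", round(reg.intercept_, 2))
    ```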

  11. An analysis of workers' morale in the coal mining industry using principal component analysis

    Energy Technology Data Exchange (ETDEWEB)

    Armstrong, J.; La Court, C.; Pearson, J.M.

    1987-03-01

    This paper looks at labour morale in the coal mining industry from 1967 to 1984. In particular it examines absenteeism, turnover and accidents over that period, as well as constructing an index of morale based on these variables. The data are taken from the North Nottinghamshire and South Yorkshire coal areas, and a comparison is made between these areas in the period leading up to the industrial action in 1984/85. The indices constructed indicate that morale, as measured by the first principal component, increased considerably during the years before the 1984 industrial dispute and that low morale was an unlikely reason for the dispute, although morale in South Yorkshire, a strike area, was lower than in North Nottinghamshire, largely a non-strike area. The steep rise in morale in both North Nottinghamshire and South Yorkshire follows closely the rise in unemployment nationally and may simply be an indication of conventional industrial relations assumptions that manifestations of negative worker attitudes are greatest when jobs are relatively plentiful, and considerably less so when jobs are scarce.

  12. Statistical techniques applied to aerial radiometric surveys (STAARS): principal components analysis user's manual

    International Nuclear Information System (INIS)

    Koch, C.D.; Pirkle, F.L.; Schmidt, J.S.

    1981-01-01

    A Principal Components Analysis (PCA) has been written to aid in the interpretation of multivariate aerial radiometric data collected by the US Department of Energy (DOE) under the National Uranium Resource Evaluation (NURE) program. The variations exhibited by these data have been reduced and classified into a number of linear combinations by using the PCA program. The PCA program then generates histograms and outlier maps of the individual variates. Black and white plots can be made on a Calcomp plotter by the application of follow-up programs. All programs referred to in this guide were written for a DEC-10. From this analysis a geologist may begin to interpret the data structure. Insight into geological processes underlying the data may be obtained

  13. Radiation mitigating properties of the lignan component in flaxseed

    International Nuclear Information System (INIS)

    Pietrofesa, Ralph; Christofidou-Solomidou, Melpo; Turowski, Jason; Tyagi, Sonia; Dukes, Floyd; Arguiri, Evguenia; Busch, Theresa M; Gallagher-Colombo, Shannon M; Solomides, Charalambos C; Cengel, Keith A

    2013-01-01

    Wholegrain flaxseed (FS), and its lignan component (FLC) consisting mainly of secoisolariciresinol diglucoside (SDG), have potent lung radioprotective properties while not abrogating the efficacy of radiotherapy. However, while the whole grain was recently shown to also have potent mitigating properties in a thoracic radiation pneumonopathy model, the bioactive component in the grain responsible for the mitigation of lung damage was never identified. Lungs may be exposed to radiation therapeutically for thoracic malignancies or incidentally following detonation of a radiological dispersion device. This could potentially lead to pulmonary inflammation, oxidative tissue injury, and fibrosis. This study aimed to evaluate the radiation mitigating effects of FLC in a mouse model of radiation pneumonopathy. We evaluated FLC-supplemented diets containing SDG lignan levels comparable to those in 10% and 20% whole grain diets. 10% or 20% FLC diets as compared to an isocaloric control diet (0% FLC) were given to mice (C57/BL6) (n=15-30 mice/group) at 24, 48, or 72-hours after single-dose (13.5 Gy) thoracic x-ray treatment (XRT). Mice were evaluated 4 months post-XRT for blood oxygenation, lung inflammation, fibrosis, cytokine and oxidative damage levels, and survival. FLC significantly mitigated radiation-related animal death. Specifically, mice fed 0% FLC demonstrated 36.7% survival 4 months post-XRT compared to 60–73.3% survival in mice fed 10%-20% FLC initiated 24–72 hours post-XRT. FLC also mitigated radiation-induced lung fibrosis whereby 10% FLC initiated 24-hours post-XRT significantly decreased fibrosis as compared to mice fed control diet while the corresponding TGF-beta1 levels detected immunohistochemically were also decreased. Additionally, 10-20% FLC initiated at any time point post radiation exposure, mitigated radiation-induced lung injury evidenced by decreased bronchoalveolar lavage (BAL) protein and inflammatory cytokine/chemokine release at 16 weeks

  14. MUSIC-CONTENT-ADAPTIVE ROBUST PRINCIPAL COMPONENT ANALYSIS FOR A SEMANTICALLY CONSISTENT SEPARATION OF FOREGROUND AND BACKGROUND IN MUSIC AUDIO SIGNALS

    OpenAIRE

    Papadopoulos , Hélène; Ellis , Daniel P.W.

    2014-01-01

    International audience; Robust Principal Component Analysis (RPCA) is a technique to decompose signals into sparse and low rank components, and has recently drawn the attention of the MIR field for the problem of separating leading vocals from accompaniment, with appealing results obtained on small excerpts of music. However, the performance of the method drops when processing entire music tracks. We present an adaptive formulation of RPCA that incorporates music content information to guid...

  15. Portable XRF and principal component analysis for bill characterization in forensic science.

    Science.gov (United States)

    Appoloni, C R; Melquiades, F L

    2014-02-01

    Several modern techniques have been applied to prevent the counterfeiting of money bills. The objective of this study was to demonstrate the potential of the Portable X-ray Fluorescence (PXRF) technique and the multivariate analysis method of Principal Component Analysis (PCA) for the classification of bills in order to use it in forensic science. Bills of Dollar, Euro and Real (Brazilian currency) were measured directly at different colored regions, without any previous preparation. Spectra interpretation allowed the identification of Ca, Ti, Fe, Cu, Sr, Y, Zr and Pb. PCA separated the bills into three groups and subgroups among the Brazilian currency. In conclusion, the samples were classified according to their origin, identifying the elements responsible for differentiation and the basic pigment composition. PXRF allied to multivariate discriminant methods is a promising technique for rapid and non-destructive identification of false bills in forensic science. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Driven Factors Analysis of China’s Irrigation Water Use Efficiency by Stepwise Regression and Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Renfu Jia

    2016-01-01

    Full Text Available This paper introduces an integrated approach to find out the major factors influencing the efficiency of irrigation water use in China. It combines multiple stepwise regression (MSR) and principal component analysis (PCA) to obtain more realistic results. In real-world case studies, a classical linear regression model often involves too many explanatory variables, and the linear correlation among variables cannot be eliminated; linearly correlated variables invalidate the results of the factor analysis. To overcome this issue and reduce the number of variables, the PCA technique has been used in combination with MSR. The irrigation water use status in China was then analyzed to find out the five major factors that have significant impacts on irrigation water use efficiency. To illustrate the performance of the proposed approach, calculations based on real data were conducted and the results are shown in this paper.
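
    One way to read the MSR + PCA combination is: decorrelate the candidate drivers with PCA first, then run a forward stepwise regression on the component scores. The sketch below implements that reading with a simple R^2-gain stopping rule; the data, the 0.01 threshold and the forward-only selection are assumptions, not the paper's exact procedure.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import StandardScaler

    # Hypothetical panel: candidate drivers (per-capita water, crop area, rainfall, fertiliser use,
    # machinery power, GDP, ...) and irrigation water use efficiency (IWUE).
    rng = np.random.default_rng(9)
    X = rng.normal(size=(150, 8))
    X[:, 3] = 0.9 * X[:, 2] + 0.1 * rng.normal(size=150)      # deliberately collinear pair
    iwue = 0.5 * X[:, 0] - 0.3 * X[:, 2] + 0.1 * rng.normal(size=150)

    # PCA removes the linear correlation among the candidate drivers.
    pcs = PCA().fit_transform(StandardScaler().fit_transform(X))

    # Simple forward stepwise selection of principal components by R^2 improvement.
    selected, remaining, best_r2 = [], list(range(pcs.shape[1])), 0.0
    while remaining:
        r2 = {}
        for j in remaining:
            cols = selected + [j]
            r2[j] = LinearRegression().fit(pcs[:, cols], iwue).score(pcs[:, cols], iwue)
        j_best = max(r2, key=r2.get)
        if r2[j_best] - best_r2 < 0.01:          # stop when the R^2 gain is negligible
            break
        selected.append(j_best)
        remaining.remove(j_best)
        best_r2 = r2[j_best]
    print("selected components:", selected, "R^2 =", round(best_r2, 3))
    ```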

  17. Location and characterisation of pollution sites by principal ...

    African Journals Online (AJOL)

    Location and characterisation of pollution sites by principal component analysis of trace contaminants in a slightly polluted seasonal river: a case study of the Arenales River (Salta, Argentina) ... Keywords: trace element contamination, water quality, principal component analysis, Arenales River, Salta, Argentina ...

  18. Proteome comparison for discrimination between honeydew and floral honeys from botanical species Mimosa scabrella Bentham by principal component analysis.

    Science.gov (United States)

    Azevedo, Mônia Stremel; Valentim-Neto, Pedro Alexandre; Seraglio, Siluana Katia Tischer; da Luz, Cynthia Fernandes Pinto; Arisi, Ana Carolina Maisonnave; Costa, Ana Carolina Oliveira

    2017-10-01

    Due to the increasing valuation and appreciation of honeydew honey in many European countries, and also to existing contamination among different types of honeys, authentication is an important aspect of quality control with regard to guaranteeing the origin in terms of source (honeydew or floral) and needs to be determined. Furthermore, proteins are minor components of honey, and despite the importance of their physiological effects they can differ according to the source of the honey. In this context, the aims of this study were to carry out protein extraction from honeydew and floral honeys and to discriminate these honeys from the same botanical species, Mimosa scabrella Bentham, through proteome comparison using two-dimensional gel electrophoresis and principal component analysis. The results showed that the proteome profile and principal component analysis can be a useful tool for discrimination between these types of honey using matched proteins (45 matched spots). Also, the proteome profile showed 160 protein spots in the honeydew honey and 84 spots in the floral honey. The protein profile can be a differential characteristic of this type of honey, in view of the importance of proteins as bioactive compounds in honey. © 2017 Society of Chemical Industry.

  19. Crude Oil Price Forecasting Based on Hybridizing Wavelet Multiple Linear Regression Model, Particle Swarm Optimization Techniques, and Principal Component Analysis

    Science.gov (United States)

    Shabri, Ani; Samsudin, Ruhaidah

    2014-01-01

    Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first selected to decompose an original time series into several subseries with different scales. Then, principal component analysis (PCA) is used to process the subseries data in the MLR for crude oil price forecasting. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily crude oil market, West Texas Intermediate (WTI), has been used as the case study. The time series prediction performance of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series. PMID:24895666

  20. Crude oil price forecasting based on hybridizing wavelet multiple linear regression model, particle swarm optimization techniques, and principal component analysis.

    Science.gov (United States)

    Shabri, Ani; Samsudin, Ruhaidah

    2014-01-01

    Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first selected to decompose an original time series into several subseries with different scales. Then, principal component analysis (PCA) is used to process the subseries data in the MLR for crude oil price forecasting. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily crude oil market, West Texas Intermediate (WTI), has been used as the case study. The time series prediction performance of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series.
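
    An illustrative sketch of the hybrid pipeline on synthetic data (the PSO parameter search is omitted, and the lag order and component count are arbitrary choices): the series is split into wavelet sub-series, lagged sub-series values are reduced with PCA, and a multiple linear regression forecasts the next value. Requires numpy, PyWavelets and scikit-learn.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
price = np.cumsum(rng.normal(size=512)) + 50.0             # stand-in for a WTI price series

# Mallat-style decomposition: reconstruct one band per set of wavelet coefficients
coeffs = pywt.wavedec(price, "db4", level=3)
bands = []
for k in range(len(coeffs)):
    keep = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
    bands.append(pywt.waverec(keep, "db4")[: len(price)])

# Lagged band values as regressors for the next-day price
p = 4                                                       # lag order (assumed)
rows = [np.concatenate([b[t - p:t] for b in bands]) for t in range(p, len(price))]
X, y = np.asarray(rows), price[p:]

Z = PCA(n_components=5).fit_transform(X)                    # decorrelate and reduce features
mlr = LinearRegression().fit(Z[:-50], y[:-50])              # hold out the last 50 days
print("held-out R^2:", round(mlr.score(Z[-50:], y[-50:]), 3))
```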

  1. Trajectory modeling of gestational weight: A functional principal component analysis approach.

    Directory of Open Access Journals (Sweden)

    Menglu Che

    Full Text Available Suboptimal gestational weight gain (GWG), which is linked to increased risk of adverse outcomes for a pregnant woman and her infant, is prevalent. In the study of a large cohort of Canadian pregnant women, our goals are to estimate the individual weight growth trajectory using sparsely collected bodyweight data, and to identify the factors affecting the weight change during pregnancy, such as prepregnancy body mass index (BMI), dietary intakes and physical activity. The first goal was achieved through functional principal component analysis (FPCA) by conditional expectation. For the second goal, we used linear regression with the total weight gain as the response variable. The trajectory modeling through FPCA had a significantly smaller root mean square error (RMSE) and better adaptability than the classic nonlinear mixed-effect models, demonstrating a novel tool that can be used to facilitate real-time monitoring and interventions of GWG. Our regression analysis showed that prepregnancy BMI had a high predictive value for the weight changes during pregnancy, which agrees with the published weight gain guideline.

  2. Fiber optic components compatibility with the PWR containment radiation field

    International Nuclear Information System (INIS)

    Breuze, G.; Serre, J.

    1990-01-01

    Present and future applications of fiber optic transmission in the nuclear industrial field are emphasized. Nuclear acceptance criteria for the relevant electronic equipment, in terms of radiation dose rate, integrated dose and required reliability, are given. The ambient conditions of the PWR containment are especially considered in the present paper. Experimental results for optical fibers and end-components exposed to 60Co gamma rays are successively shown. The main radiation response characteristics up to 10⁴ Gy (with dose rates of about 100 Gy·h⁻¹) of both multimode fiber families (step-index and graded-index fibers) are compared. Predominant features of pure silica core fibers are: * an efficient photobleaching with near-IR light from the LEDs and LDs commonly used in data transmission links, * a radiation hardening that, in fibers incorporating the latest developments, reduces induced losses down to 10 dB·km⁻¹. The dose rate effect on induced losses is also outlined for these fibers. The radiation response of optoelectronic fiber-end components is good only for special LEDs (AsGa) and PDs (Si). The radiation behavior of complex pigtailed LDMs (laser diode + photodiode + Peltier element + thermistor) is not fully acceptable and technological improvements were made. Preliminary results are given. Two applications of fiber links transmitting data in a PWR containment and a hot cell are described. The hardening levels obtained and the means required are given

  3. Latitude-Time Total Electron Content Anomalies as Precursors to Japan's Large Earthquakes Associated with Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Jyh-Woei Lin

    2011-01-01

    Full Text Available The goal of this study is to determine whether principal component analysis (PCA) can be used to process latitude-time ionospheric TEC data on a monthly basis to identify earthquake-associated TEC anomalies. PCA is applied to latitude-time (mean-of-a-month) ionospheric total electron content (TEC) records collected from the Japan GEONET network to detect TEC anomalies associated with 18 earthquakes in Japan (M≥6.0) from 2000 to 2005. According to the results, PCA was able to discriminate clear TEC anomalies in the months when all 18 earthquakes occurred. After reviewing months when no M≥6.0 earthquakes occurred but geomagnetic storm activity was present, it is possible that the maximal principal eigenvalues PCA returned for these 18 earthquakes indicate earthquake-associated TEC anomalies. PCA has previously been used to discriminate earthquake-associated TEC anomalies recognized by other researchers, who found that a statistical association between large earthquakes and TEC anomalies could be established in the 5 days before earthquake nucleation; however, since PCA uses the characteristics of the principal eigenvalues to determine earthquake-related TEC anomalies, it is possible to show that such anomalies existed earlier than this 5-day statistical window.

  4. Investigation of inversion polymorphisms in the human genome using principal components analysis.

    Science.gov (United States)

    Ma, Jianzhong; Amos, Christopher I

    2012-01-01

    Despite the significant advances made over the last few years in mapping inversions with the advent of paired-end sequencing approaches, our understanding of the prevalence and spectrum of inversions in the human genome has lagged behind that of other types of structural variants, mainly due to the lack of a cost-efficient method applicable to large-scale samples. We propose a novel method based on principal components analysis (PCA) to characterize inversion polymorphisms using high-density SNP genotype data. Our method applies to non-recurrent inversions for which recombination between the inverted and non-inverted segments in inversion heterozygotes is suppressed due to the loss of unbalanced gametes. Inside such an inversion region, an effect similar to population substructure is thus created: two distinct "populations" of inversion homozygotes of different orientations and their 1:1 admixture, namely the inversion heterozygotes. This kind of substructure can be readily detected by performing PCA locally in the inversion regions. Using simulations, we demonstrated that the proposed method can be used to detect and genotype inversion polymorphisms using unphased genotype data. We applied our method to the phase III HapMap data and inferred the inversion genotypes of known inversion polymorphisms at 8p23.1 and 17q21.31. These inversion genotypes were validated by comparing with literature results and by checking Mendelian consistency using the family data whenever available. Based on the PCA approach, we also performed a preliminary genome-wide scan for inversions using the HapMap data, which resulted in 2040 candidate inversions, 169 of which overlapped with previously reported inversions. Our method can be readily applied to the abundant SNP data, and is expected to play an important role in developing human genome maps of inversions and exploring associations between inversions and susceptibility to diseases.
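
    A hedged sketch of the core idea (not the authors' pipeline): inside a non-recurrent inversion, unphased genotypes drawn from two diverged haplotype backgrounds separate into three bands along the first local principal component, corresponding to 0, 1 or 2 inverted copies. The simulation parameters below are arbitrary.

```python
# Local PCA over an "inverted" region: three PC1 bands mimic the substructure effect.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_ind, n_snp = 300, 200
# Two diverged "orientation" haplotype pools over the inverted segment
freq_std = rng.uniform(0.05, 0.95, n_snp)
freq_inv = np.clip(freq_std + rng.normal(scale=0.3, size=n_snp), 0.01, 0.99)
inv_allele_count = rng.binomial(2, 0.4, n_ind)              # 0, 1 or 2 inverted copies
geno = np.empty((n_ind, n_snp))
for i, k in enumerate(inv_allele_count):
    p = (k * freq_inv + (2 - k) * freq_std) / 2.0
    geno[i] = rng.binomial(1, p) + rng.binomial(1, p)       # unphased 0/1/2 genotypes

pc1 = PCA(n_components=1).fit_transform(geno).ravel()
# Individuals should fall into three bands on PC1, matching 0/1/2 inverted copies
for k in range(3):
    print(k, "inverted copies -> mean PC1:", round(float(pc1[inv_allele_count == k].mean()), 2))
```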

  5. Laboratory spectroscopy of meteorite samples at UV-vis-NIR wavelengths: Analysis and discrimination by principal components analysis

    Science.gov (United States)

    Penttilä, Antti; Martikainen, Julia; Gritsevich, Maria; Muinonen, Karri

    2018-02-01

    Meteorite samples are measured with the University of Helsinki integrating-sphere UV-vis-NIR spectrometer. The resulting spectra of 30 meteorites are compared with selected spectra from the NASA Planetary Data System meteorite spectra database. The spectral measurements are transformed with the principal component analysis, and it is shown that different meteorite types can be distinguished from the transformed data. The motivation is to improve the link between asteroid spectral observations and meteorite spectral measurements.

  6. Current state of methodological and technical decisions for radiation treatment of blood, its components and products

    Directory of Open Access Journals (Sweden)

    Gordeev A.V.

    2014-12-01

    Full Text Available This article presents the currently used blood transfusion media (blood components and blood products), their therapeutic effects, the reactions and complications of blood transfusion, and the use of radiation treatment for transfusion fluids. The practice of radiation processing of blood components for the prevention of the "graft versus host" reaction, and studies of plasma radiation treatment for its infectious safety, are discussed in detail. The current state of techniques and technical solutions for the radiation treatment of transfusion media is presented, and alternatives to radiation treatment of blood are also considered.

  7. Nonlinear Ultrasonic Techniques to Monitor Radiation Damage in RPV and Internal Components

    Energy Technology Data Exchange (ETDEWEB)

    Jacobs, Laurence [Georgia Inst. of Technology, Atlanta, GA (United States); Kim, Jin-Yeon [Georgia Inst. of Technology, Atlanta, GA (United States); Qu, Jianmin [Northwestern Univ., Evanston, IL (United States); Ramuhalli, Pradeep [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wall, Joe [Electric Power Research Inst. (EPRI), Knoxville, TN (United States)

    2015-11-02

    The objective of this research is to demonstrate that nonlinear ultrasonics (NLU) can be used to directly and quantitatively measure the remaining life in radiation damaged reactor pressure vessel (RPV) and internal components. Specific damage types to be monitored are irradiation embrittlement and irradiation assisted stress corrosion cracking (IASCC). Our vision is to develop a technique that allows operators to assess damage by making a limited number of NLU measurements in strategically selected critical reactor components during regularly scheduled outages. This measured data can then be used to determine the current condition of these key components, from which remaining useful life can be predicted. Methods to unambiguously characterize radiation related damage in reactor internals and RPVs remain elusive. NLU technology has demonstrated great potential to be used as a material sensor – a sensor that can continuously monitor a material’s damage state. The physical effect being monitored by NLU is the generation of higher harmonic frequencies in an initially monochromatic ultrasonic wave. The degree of nonlinearity is quantified with the acoustic nonlinearity parameter, β, which is an absolute, measurable material constant. Recent research has demonstrated that nonlinear ultrasound can be used to characterize material state and changes in microscale characteristics such as internal stress states, precipitate formation and dislocation densities. Radiation damage reduces the fracture toughness of RPV steels and internals, and can leave them susceptible to IASCC, which may in turn limit the lifetimes of some operating reactors. The ability to characterize radiation damage in the RPV and internals will enable nuclear operators to set operation time thresholds for vessels and prescribe and schedule replacement activities for core internals. Such a capability will allow a more clear definition of reactor safety margins. The research consists of three tasks: (1

  8. Nonlinear Ultrasonic Techniques to Monitor Radiation Damage in RPV and Internal Components

    International Nuclear Information System (INIS)

    Jacobs, Laurence; Kim, Jin-Yeon; Qu, Jianmin; Ramuhalli, Pradeep; Wall, Joe

    2015-01-01

    The objective of this research is to demonstrate that nonlinear ultrasonics (NLU) can be used to directly and quantitatively measure the remaining life in radiation damaged reactor pressure vessel (RPV) and internal components. Specific damage types to be monitored are irradiation embrittlement and irradiation assisted stress corrosion cracking (IASCC). Our vision is to develop a technique that allows operators to assess damage by making a limited number of NLU measurements in strategically selected critical reactor components during regularly scheduled outages. This measured data can then be used to determine the current condition of these key components, from which remaining useful life can be predicted. Methods to unambiguously characterize radiation related damage in reactor internals and RPVs remain elusive. NLU technology has demonstrated great potential to be used as a material sensor - a sensor that can continuously monitor a material's damage state. The physical effect being monitored by NLU is the generation of higher harmonic frequencies in an initially monochromatic ultrasonic wave. The degree of nonlinearity is quantified with the acoustic nonlinearity parameter, β, which is an absolute, measurable material constant. Recent research has demonstrated that nonlinear ultrasound can be used to characterize material state and changes in microscale characteristics such as internal stress states, precipitate formation and dislocation densities. Radiation damage reduces the fracture toughness of RPV steels and internals, and can leave them susceptible to IASCC, which may in turn limit the lifetimes of some operating reactors. The ability to characterize radiation damage in the RPV and internals will enable nuclear operators to set operation time thresholds for vessels and prescribe and schedule replacement activities for core internals. Such a capability will allow a more clear definition of reactor safety margins. The research consists of three tasks

  9. Using synthetic data to evaluate multiple regression and principal component analyses for statistical modeling of daily building energy consumption

    Energy Technology Data Exchange (ETDEWEB)

    Reddy, T.A. (Energy Systems Lab., Texas A and M Univ., College Station, TX (United States)); Claridge, D.E. (Energy Systems Lab., Texas A and M Univ., College Station, TX (United States))

    1994-01-01

    Multiple regression modeling of monitored building energy use data is often questioned as a reliable means of predicting energy use on the grounds that multicollinearity between the regressor variables can lead both to improper interpretation of the relative importance of the various physical regressor parameters and to a model with unstable regressor coefficients. Principal component analysis (PCA) has the potential to overcome such drawbacks. While a few case studies have already attempted to apply this technique to building energy data, the objectives of this study were to make a broader evaluation of PCA and multiple regression analysis (MRA) and to establish guidelines under which one approach is preferable to the other. Four geographic locations in the US with different climatic conditions were selected, and synthetic data sequences representative of daily energy use in large institutional buildings were generated for each location using a linear model with outdoor temperature, outdoor specific humidity and solar radiation as the three regression variables. The MRA and PCA approaches were then applied to these data sets and their relative performances were compared. Conditions under which PCA seems to perform better than MRA were identified and preliminary recommendations on the use of either modeling approach were formulated. (orig.)
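
    A toy version of the comparison, with synthetic weather drivers and daily energy use; variable names and coefficients are invented for illustration and are not the study's values.

```python
# Synthetic daily energy data from a linear model in three correlated weather drivers,
# fitted by ordinary multiple regression and by regression on leading principal components.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
days = 365
temp = 15 + 10 * np.sin(2 * np.pi * np.arange(days) / 365) + rng.normal(0, 2, days)
humidity = 0.005 + 0.0004 * temp + rng.normal(0, 0.0005, days)   # correlated with temperature
solar = 200 + 80 * np.sin(2 * np.pi * np.arange(days) / 365) + rng.normal(0, 30, days)
energy = 50 + 2.0 * temp + 4000 * humidity + 0.05 * solar + rng.normal(0, 5, days)

X = np.column_stack([temp, humidity, solar])
mra = LinearRegression().fit(X, energy)                     # ordinary multiple regression

Xs = (X - X.mean(0)) / X.std(0)
pca = PCA().fit(Xs)
scores = pca.transform(Xs)
pcr = LinearRegression().fit(scores[:, :2], energy)         # regress on leading PCs only
print("MRA coefficients:", np.round(mra.coef_, 3))
print("PCA variance explained:", np.round(pca.explained_variance_ratio_, 3))
print("PCR R^2:", round(pcr.score(scores[:, :2], energy), 3))
```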

  10. Principal Component Analysis of Chinese Porcelains from the Five Dynasties to the Qing Dynasty

    Science.gov (United States)

    Yap, C. T.; Hua, Younan

    1992-10-01

    This is a study of the possibility of identifying antique Chinese porcelains according to the period or dynasty, using major and minor chemical components (SiO2, Al2O3, Fe2O3, K2O, Na2O, CaO and MgO) from the body of the porcelain. Principal component analysis is applied to published data on 66 pieces of Chinese porcelains made in Jingdezhen during the Five Dynasties and the Song, Yuan, Ming and Qing Dynasties. It is shown that porcelains made during the Five Dynasties and the Yuan (or Ming) and Qing Dynasties can be segregated completely without any overlap. However, there is appreciable overlap between the Five Dynasties and the Song Dynasty, some overlap between the Song and Ming Dynasties and also between the Yuan and Ming Dynasties. Interestingly, Qing porcelains are well separated from all the others. The percentage of silica in the porcelain body decreases, and that of alumina increases, with more recent dynasties, with the exception of the Yuan and Ming Dynasties, where this trend is reversed.

  11. Validation of the academic management evaluation instrument based on principal component analysis for engineering and technological courses

    Directory of Open Access Journals (Sweden)

    Albano Oliveira Nunes

    2015-05-01

    Full Text Available In recent years, the expansion of higher education in Brazil has led to a series of demands related to aspects concerning training at the college level. These processes relate to academics, professionals, and entry into the labor market, among others. In this context, an important aspect is the quality of the courses. Thus, the evaluation becomes a critical diagnostic process of reality and a starting point for possible interventions to be put into practice by the coordinators of the programs. This article presents the results of a questionnaire administered at the Federal University of Ceará (UFC), especially to Systems & Digital Media and Engineering Programs professors. This research aims to identify how the professors from each department see the administrative procedures developed by the departments, and also to investigate the possibility of using Principal Components Analysis (PCA) as a support for the management of higher education training. The methodology included the implementation of a Likert-scale questionnaire and subsequent mathematical treatment with PCA. The results indicate the potential application of PCA to support the management of higher education; it was possible to extract preliminary inferences related to management methods and their characteristics. This suggests the possibility of developing the Educametrics field.

  12. Use of Geochemistry Data Collected by the Mars Exploration Rover Spirit in Gusev Crater to Teach Geomorphic Zonation through Principal Components Analysis

    Science.gov (United States)

    Rodrigue, Christine M.

    2011-01-01

    This paper presents a laboratory exercise used to teach principal components analysis (PCA) as a means of surface zonation. The lab was built around abundance data for 16 oxides and elements collected by the Mars Exploration Rover Spirit in Gusev Crater between Sol 14 and Sol 470. Students used PCA to reduce 15 of these into 3 components, which,…

  13. Characteristics of outage radiation fields around various reactor components

    International Nuclear Information System (INIS)

    Verzilov, Y.; Husain, A.; Corbin, G.

    2008-01-01

    Full text: Activity monitoring surveys, consisting of gamma spectroscopy and dose rate measurements, of various CANDU station components such as the reactor face, feeder cabinet, steam generators and moderator heat exchangers are often performed during shutdown in order to trend the transport of activity around the primary heat transport and moderator systems. Recently, the increased dose expenditure for work such as feeder inspection and replacement in the reactor vault has also spurred interest in improved characterization of the reactor face fields to facilitate better ALARA decision making and hence a reduction in future dose expenditures. At present, planning for reactor face work is hampered by insufficient understanding of the relative contribution of the various components to the overall dose. In addition to the increased dose expenditure for work at the reactor face, maintenance work associated with horizontal flux detectors and liquid injection systems has also resulted in elevated dose expenditures. For instance at Darlington, radiation fields in the vicinity of horizontal flux detectors (HFD) and Liquid Injection Shutdown System (LISS) nozzle bellows are trending upwards with present contact fields being in the range 16-70 rem/h and working distance fields being in the range 100-500 mrem/h. This paper presents findings based on work currently being funded by the CANDU Owners Group. Measurements were performed at Ontario Power Generation's Pickering and Darlington nuclear stations. Specifically, the following are addressed: Characteristics of Reactor Vault Fields; Characteristics of Steam Generator Fields; Characteristics of Moderator Heat Exchanger Fields. Measurements in the reactor vault were performed at the reactor face, along the length of end fittings, along the length of feeders, at the bleed condenser and at the HFD and LISS nozzle bellows. Steam generator fields were characterized at various elevations above the tube sheet, with and without the

  14. State and group dynamics of world stock market by principal component analysis

    Science.gov (United States)

    Nobi, Ashadun; Lee, Jae Woo

    2016-05-01

    We study the dynamic interactions and structural changes by a principal component analysis (PCA) to cross-correlation coefficients of global financial indices in the years 1998-2012. The variances explained by the first PC increase with time and show a drastic change during the crisis. A sharp change in PC coefficient implies a transition of market state, a situation which occurs frequently in the American and Asian indices. However, the European indices remain stable over time. Using the first two PC coefficients, we identify indices that are similar and more strongly correlated than the others. We observe that the European indices form a robust group over the observation period. The dynamics of the individual indices within the group increase in similarity with time, and the dynamics of indices are more similar during the crises. Furthermore, the group formation of indices changes position in two-dimensional spaces due to crises. Finally, after a financial crisis, the difference of PCs between the European and American indices narrows.
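
    The kind of analysis described can be sketched as follows, using synthetic index returns: the cross-correlation matrix is computed in rolling windows and the variance explained by the first principal component is tracked, rising when co-movement (as in a crisis) strengthens. Window length and index count are arbitrary choices.

```python
# Rolling-window eigen-decomposition of the cross-correlation matrix of index returns.
import numpy as np

rng = np.random.default_rng(4)
n_days, n_idx = 1500, 10
market = rng.normal(size=n_days)
# Make co-movement stronger in the middle third to mimic a crisis period
t = np.arange(n_days)
weight = np.where((t > n_days // 3) & (t < 2 * n_days // 3), 0.8, 0.3)
returns = weight[:, None] * market[:, None] + rng.normal(scale=1.0, size=(n_days, n_idx))

window = 250
for start in range(0, n_days - window + 1, window):
    corr = np.corrcoef(returns[start:start + window].T)
    eigvals = np.linalg.eigvalsh(corr)[::-1]                # eigenvalues, largest first
    print(f"window {start:4d}: first PC explains {eigvals[0] / n_idx:.2f} of total variance")
```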

  15. Application of Principal Component Analysis in Assessment of Relation Between the Parameters of Technological Quality of Wheat Grains Treated with Inert Dusts Against Rice Weevil (Sitophilus oryzae L.)

    Directory of Open Access Journals (Sweden)

    Marija Bodroža-Solarov

    2011-01-01

    Full Text Available Quality parameters of several wheat grain lots (low-vitreous and high-vitreous grains, non-infested and infested with rice weevils (Sitophilus oryzae L.)) treated with inert dusts (natural zeolite, two diatomaceous earths originating from Serbia and a commercial product, Protect-It®) were investigated. Principal component analysis (PCA) was used to investigate the classification of the treated grain lots and to assess how attributes of technological quality contribute to this classification. This research showed that vitreousness (0.95) and test weight (0.93) contributed most to the first principal component, whereas extensigraph area (-0.76) contributed to the second component. The first component accounted for around 55% of the total variability and the second for 18%, which means that those two dimensions together account for around 70% of the total variability of the observed set of variables. Principal component analysis (PCA) of the data set was able to distinguish among the various treatments of wheat lots. It was revealed that inert dust treatments produce different effects depending on the degree of endosperm vitreousness.

  16. Informing principal policy reforms in South Africa through data-based evidence

    OpenAIRE

    Gabrielle Wills

    2015-01-01

    In the past decade there has been a notable shift in South African education policy that raises the value of school leadership as a lever for learning improvements. Despite a growing discourse on school leadership, there has been a lack of empirically based evidence on principals to inform, validate or debate the efficacy of proposed policies in raising the calibre of school principals. Drawing on findings from a larger study to understand the labour market for school principals in South Africa...

  17. Radiation Effects and Component Hardening testing program at the Oak Ridge National Laboratory

    International Nuclear Information System (INIS)

    Draper, J.V.; Weil, B.S.; Chesser, J.B.

    1993-01-01

    This paper describes Phase II of the Radiation Effects and Component Hardening (REACH) testing program, performed as part of the joint collaborative agreement between the United States Department of Energy (USDOE) and the Power Reactor and Nuclear Fuel Development Corporation (PNC) of Japan. Components and materials were subjected to 10⁵ R/hr gamma radiation fields for 10,000 hr, producing accumulated doses of 10⁹ R; most performed as expected

  18. Diagnose Test-Taker's Profile in Terms of Core Profile Patterns: Principal Component (PC) vs. Profile Analysis via MDS (PAMS) Approaches.

    Science.gov (United States)

    Kim, Se-Kang; Davison, Mark L.

    A study was conducted to examine how principal components analysis (PCA) and Profile Analysis via Multidimensional Scaling (PAMS) can be used to diagnose individuals' observed score profiles in terms of core profile patterns identified by each method. The standardization sample from the Wechsler Intelligence Scale for Children, Third Edition…

  19. CLASSIFICATION OF LIDAR DATA OVER BUILDING ROOFS USING K-MEANS AND PRINCIPAL COMPONENT ANALYSIS

    Directory of Open Access Journals (Sweden)

    Renato César dos Santos

    Full Text Available Abstract: Classification is an important step in the extraction of geometric primitives from LiDAR data. Normally, it is applied to identify points sampled on geometric primitives of interest. In the literature there are several studies that have explored the use of eigenvalues to classify LiDAR points into different classes or structures, such as corner, edge, and plane. However, in some works the classes are defined considering an ideal geometry, which can be affected by inadequate sampling and/or by the presence of noise when using real data. To overcome this limitation, this paper proposes the use of metrics based on eigenvalues, together with the k-means method, to carry out the classification. The concept of principal component analysis is used to obtain the eigenvalues and the derived metrics, while k-means is applied to cluster the roof points into two classes: edge and non-edge. To evaluate the proposed method, four test areas with different levels of complexity were selected. From the qualitative and quantitative analyses, it could be concluded that the proposed classification procedure gave satisfactory results, with completeness and correctness above 92% for the non-edge class and between 61% and 98% for the edge class.
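
    A rough illustration of the approach on synthetic points (simplified metrics, not the paper's exact features): eigenvalues of the local covariance of each point's neighborhood are turned into metrics, and k-means splits the points into edge and non-edge clusters.

```python
# Eigenvalue-based neighborhood metrics plus k-means for edge / non-edge labelling.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
# Two synthetic roof planes meeting along the line x = 0 (the "edge")
x = rng.uniform(-5, 5, 2000)
y = rng.uniform(0, 10, 2000)
z = np.where(x < 0, 0.5 * x, -0.5 * x) + rng.normal(scale=0.02, size=2000)
pts = np.column_stack([x, y, z])

nn = NearestNeighbors(n_neighbors=20).fit(pts)
_, idx = nn.kneighbors(pts)
feats = []
for neigh in idx:
    cov = np.cov(pts[neigh].T)
    lam = np.linalg.eigvalsh(cov)[::-1]                     # lambda1 >= lambda2 >= lambda3
    lam = lam / lam.sum()
    feats.append([lam[2],                                   # surface variation (high near edges)
                  (lam[1] - lam[2]) / lam[0]])              # planarity (low near edges)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(np.asarray(feats))
edge_cluster = int(np.argmin([np.abs(x[labels == k]).mean() for k in range(2)]))
print("points labelled as edge:", int(np.sum(labels == edge_cluster)))
```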

  20. Performance assessment of air quality monitoring networks using principal component analysis and cluster analysis

    International Nuclear Information System (INIS)

    Lu, Wei-Zhen; He, Hong-Di; Dong, Li-yun

    2011-01-01

    This study aims to evaluate the performance of two statistical methods, principal component analysis and cluster analysis, for the management of the air quality monitoring network of Hong Kong and the reduction of associated expenses. The specific objectives are: (i) to identify city areas with similar air pollution behavior; and (ii) to locate emission sources. The statistical methods were applied to the mass concentrations of sulphur dioxide (SO2), respirable suspended particulates (RSP) and nitrogen dioxide (NO2) collected in the monitoring network of Hong Kong from January 2001 to December 2007. The results demonstrate that, for each pollutant, the monitoring stations are grouped into different classes based on their air pollution behaviors. Monitoring stations located in nearby areas are characterized by the same specific air pollution characteristics, which suggests a more effective management of the air quality monitoring system: redundant equipment could be transferred to other monitoring stations, allowing further enlargement of the monitored area. Additionally, the existence of different air pollution behaviors in the monitoring network is explained by the variability of wind directions across the region. The results imply that the air quality problem in Hong Kong is not only a local problem arising mainly from street-level pollution, but also a regional problem originating from the Pearl River Delta region. (author)
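
    A simplified sketch of the two methods on synthetic concentration series (the station names are only illustrative): PCA summarizes each station's behaviour and hierarchical clustering groups stations with similar pollution patterns, the basis for identifying redundant monitors.

```python
# PCA of the station-by-time matrix followed by hierarchical clustering of stations.
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(6)
days = np.arange(365)
urban = 60 + 15 * np.sin(2 * np.pi * days / 365)
coastal = 35 + 8 * np.sin(2 * np.pi * days / 365 + 1.0)
stations = {
    "StationA_urban": urban + rng.normal(0, 4, days.size),
    "StationB_urban": urban + rng.normal(0, 4, days.size),
    "StationC_coastal": coastal + rng.normal(0, 4, days.size),
    "StationD_coastal": coastal + rng.normal(0, 4, days.size),
}
names = list(stations)
X = np.array([stations[n] for n in names])                  # stations x days

scores = PCA(n_components=2).fit_transform(X - X.mean(axis=1, keepdims=True))
groups = fcluster(linkage(scores, method="ward"), t=2, criterion="maxclust")
for name, g in zip(names, groups):
    print(name, "-> group", g)
```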

  1. Automotive Exterior Noise Optimization Using Grey Relational Analysis Coupled with Principal Component Analysis

    Science.gov (United States)

    Chen, Shuming; Wang, Dengfeng; Liu, Bo

    This paper investigates the design optimization of the sound package thickness for a passenger automobile. The major performance indexes selected to evaluate the process are the SPL of the exterior noise and the weight of the sound package; the corresponding parameters of the sound package are the thickness of the glass wool with aluminum foil for the first layer, the thickness of the glass fiber for the second layer, and the thickness of the PE foam for the third layer. Because the process fundamentally involves multiple performance characteristics, grey relational analysis, which uses the grey relational grade as the performance index, is employed to determine the optimal combination of thicknesses of the different layers for the designed sound package. Additionally, in order to evaluate the weighting values corresponding to the various performance characteristics, principal component analysis is used to show their relative importance properly and objectively. The results of the confirmation experiments reveal that grey relational analysis coupled with principal component analysis can successfully be applied to find the optimal combination of thicknesses for each layer of the sound package material. Therefore, the presented method can be an effective tool to reduce the vehicle exterior noise and lower the weight of the sound package. In addition, it will also be helpful for other applications in the automotive industry, such as at the First Automobile Works in China, Changan Automobile in China, etc.

  2. Damage detection methodology under variable load conditions based on strain field pattern recognition using FBGs, nonlinear principal component analysis, and clustering techniques

    Science.gov (United States)

    Sierra-Pérez, Julián; Torres-Arredondo, M.-A.; Alvarez-Montoya, Joham

    2018-01-01

    Structural health monitoring consists of using sensors integrated within structures together with algorithms to perform load monitoring, damage detection, damage location, damage size and severity, and prognosis. One possibility is to use strain sensors to infer structural integrity by comparing patterns in the strain field between the pristine and damaged conditions. In previous works, the authors have demonstrated that it is possible to detect small defects based on strain field pattern recognition by using robust machine learning techniques. They have focused on methodologies based on principal component analysis (PCA) and on the development of several unfolding and standardization techniques, which allow dealing with multiple load conditions. However, before a real implementation of this approach in engineering structures, changes in the strain field due to conditions different from damage occurrence need to be isolated. Since load conditions may vary in most engineering structures and promote significant changes in the strain field, it is necessary to implement novel techniques for uncoupling such changes from those produced by damage occurrence. A damage detection methodology based on optimal baseline selection (OBS) by means of clustering techniques is presented. The methodology includes the use of hierarchical nonlinear PCA as a nonlinear modeling technique in conjunction with Q and nonlinear T² damage indices. The methodology is experimentally validated using strain measurements obtained by 32 fiber Bragg grating sensors bonded to an aluminum beam under dynamic bending loads and simultaneously submitted to variations in its pitch angle. The results demonstrated the capability of the methodology for clustering data according to 13 different load conditions (pitch angles), performing the OBS and detecting six different damages induced in a cumulative way. The proposed methodology showed a true positive rate of 100% and a false positive rate of 1.28% for a
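
    The Q and T² indices at the heart of the methodology can be sketched with ordinary linear PCA (the paper uses hierarchical nonlinear PCA and optimal baseline selection, which are omitted here); the sensor count, data and induced "damage" below are synthetic stand-ins.

```python
# Linear-PCA baseline model of a strain field with Hotelling T^2 and Q (SPE) damage indices.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
n_sensors, n_base = 32, 400
modes = rng.normal(size=(3, n_sensors))                     # three dominant strain patterns
baseline = rng.normal(size=(n_base, 3)) @ modes + rng.normal(scale=0.05, size=(n_base, n_sensors))

mu, sd = baseline.mean(0), baseline.std(0)
pca = PCA(n_components=3).fit((baseline - mu) / sd)

def damage_indices(x):
    xs = (x - mu) / sd
    t = pca.transform(xs.reshape(1, -1))[0]
    t2 = np.sum(t ** 2 / pca.explained_variance_)           # Hotelling T^2
    residual = xs - pca.inverse_transform(t.reshape(1, -1))[0]
    q = np.sum(residual ** 2)                               # Q, squared prediction error
    return t2, q

healthy = baseline[0]
damaged = baseline[0].copy()
damaged[10] += 0.5                                          # local strain change at one sensor
print("healthy  T2, Q:", [round(float(v), 3) for v in damage_indices(healthy)])
print("damaged  T2, Q:", [round(float(v), 3) for v in damage_indices(damaged)])
```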

  3. Point and interval forecasts of mortality rates and life expectancy: A comparison of ten principal component methods

    Directory of Open Access Journals (Sweden)

    Han Lin Shang

    2011-07-01

    Full Text Available Using the age- and sex-specific data of 14 developed countries, we compare the point and interval forecast accuracy and bias of ten principal component methods for forecasting mortality rates and life expectancy. The ten methods are variants and extensions of the Lee-Carter method. Based on one-step forecast errors, the weighted Hyndman-Ullah method provides the most accurate point forecasts of mortality rates and the Lee-Miller method is the least biased. For the accuracy and bias of life expectancy, the weighted Hyndman-Ullah method performs the best for female mortality and the Lee-Miller method for male mortality. While all methods underestimate variability in mortality rates, the more complex Hyndman-Ullah methods are more accurate than the simpler methods. The weighted Hyndman-Ullah method provides the most accurate interval forecasts for mortality rates, while the robust Hyndman-Ullah method provides the best interval forecast accuracy for life expectancy.
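
    As a point of reference, the basic Lee-Carter model underlying all ten variants can be fitted with a single singular value decomposition; the sketch below uses synthetic mortality rates and a naive random-walk-with-drift forecast of the period index, so the numbers carry no demographic meaning.

```python
# Bare-bones Lee-Carter fit: log m_{x,t} = a_x + b_x * k_t via the first SVD component.
import numpy as np

rng = np.random.default_rng(8)
ages, years = np.arange(0, 100), np.arange(1960, 2010)
a_true = -8 + 0.08 * ages                                   # log mortality age profile
k_true = np.linspace(5, -5, years.size)                     # declining mortality index
b_true = np.full(ages.size, 1.0 / ages.size)
log_m = a_true[:, None] + np.outer(b_true, k_true) + rng.normal(scale=0.02, size=(ages.size, years.size))

a = log_m.mean(axis=1)                                      # a_x: average log rate per age
U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
b = U[:, 0] / U[:, 0].sum()                                 # b_x normalised to sum to 1
k = s[0] * Vt[0] * U[:, 0].sum()                            # k_t, the first principal component
drift = (k[-1] - k[0]) / (years.size - 1)                   # random walk with drift
k_future = k[-1] + drift * np.arange(1, 21)
forecast_log_m = a[:, None] + np.outer(b, k_future)
print("forecast log mortality at age 65 in 20 years:", round(float(forecast_log_m[65, -1]), 3))
```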

  4. Three-Dimensional X-Ray Photoelectron Tomography on the Nanoscale: Limits of Data Processing by Principal Component Analysis

    DEFF Research Database (Denmark)

    Hajati, S.; Walton, J.; Tougaard, S.

    2013-01-01

    In a previous article, we studied the influence of spectral noise on a new method for three-dimensional X-ray photoelectron spectroscopy (3D XPS) imaging, which is based on analysis of the XPS peak shape [Hajati, S., Tougaard, S., Walton, J. & Fairley, N. (2008). Surf Sci 602, 3064-3070]. Here, we...... study in more detail the influence of noise reduction by principal component analysis (PCA) on 3D XPS images of carbon contamination of a patterned oxidized silicon sample and on 3D XPS images of Ag covered by a nanoscale patterned octadiene layer. PCA is very efficient for noise reduction, and using...... acquisition time. A small additional amount of information is obtained by using up to five PCA factors, but due to the increased noise level, this information can only be extracted if the intensity of the start and end points for each spectrum are obtained as averages over several energy points....

  5. Principal component analysis acceleration of rovibrational coarse-grain models for internal energy excitation and dissociation

    Science.gov (United States)

    Bellemans, Aurélie; Parente, Alessandro; Magin, Thierry

    2018-04-01

    The present work introduces a novel approach for obtaining reduced chemistry representations of large kinetic mechanisms in strong non-equilibrium conditions. The need for accurate reduced-order models arises from compression of large ab initio quantum chemistry databases for their use in fluid codes. The method presented in this paper builds on existing physics-based strategies and proposes a new approach based on the combination of a simple coarse grain model with Principal Component Analysis (PCA). The internal energy levels of the chemical species are regrouped in distinct energy groups with a uniform lumping technique. Following the philosophy of machine learning, PCA is applied on the training data provided by the coarse grain model to find an optimally reduced representation of the full kinetic mechanism. Compared to recently published complex lumping strategies, no expert judgment is required before the application of PCA. In this work, we will demonstrate the benefits of the combined approach, stressing its simplicity, reliability, and accuracy. The technique is demonstrated by reducing the complex quantum N2(1Σg+)-N(4Su) database for studying molecular dissociation and excitation in strong non-equilibrium. Starting from detailed kinetics, an accurate reduced model is developed and used to study non-equilibrium properties of the N2(1Σg+)-N(4Su) system in shock relaxation simulations.

  6. Sensor and method for measurement of select components of a material based on detection of radiation after interaction with the material

    International Nuclear Information System (INIS)

    Chase, L.M.; Anderson, L.M.; Norton, M.K.

    1993-01-01

    A sensor is described for measuring one or more select components of a sheet, comprising: a radiation source for emitting radiation toward the sheet; a plurality of detecting means, wherein at least one detecting means is offset from the source, for detecting radiation after interaction with the sheet; means for directing the radiation so that the radiation makes multiple interactions with the sheet in moving from the source to the detecting means, wherein the directing means includes a first reflector and second reflector defining a sheet space for the sheet to occupy; means for computing a ratio of the intensity of the detected radiation when the sheet is absent from the sheet space and the intensity of the detected radiation when the sheet occupies the sheet space; and means for computing the absorption power of the sheet from the intensity of the detected radiation

  7. Effect of solar radiation on the functional components of mulberry (Morus alba L.) leaves.

    Science.gov (United States)

    Sugiyama, Mari; Katsube, Takuya; Koyama, Akio; Itamura, Hiroyuki

    2016-08-01

    The functional components of mulberry leaves have attracted the attention of the health food industry, and increasing their concentrations is an industry goal. This study investigated the effects of solar radiation, which may influence the production of the flavonol and 1-deoxynojirimycin (DNJ) functional components in mulberry leaves, by comparing greenhouse (poor solar radiation) and outdoor (rich solar radiation) settings. The level of flavonol in leaves cultivated in the greenhouse was markedly decreased when compared with those cultivated outdoors. In contrast, the DNJ content in greenhouse-cultivated plants was increased only slightly when compared with those cultivated outdoors. Interestingly, the flavonol content was markedly increased in the upper leaves of mulberry trees that were transferred from a greenhouse to the outdoors compared with those cultivated only in the outdoors. Solar radiation conditions influence the synthesis of flavonol and DNJ, the functional components of mulberry leaves. Under high solar radiation, the flavonol level becomes very high but the DNJ level becomes slightly lower, suggesting that the impact of solar radiation is great on flavonol but small on DNJ synthesis. © 2016 Society of Chemical Industry.

  8. Using the Cluster Analysis and the Principal Component Analysis in Evaluating the Quality of a Destination

    Directory of Open Access Journals (Sweden)

    Ida Vajčnerová

    2016-01-01

    Full Text Available The objective of the paper is to explore possibilities of evaluating the quality of a tourist destination by means of principal component analysis (PCA) and cluster analysis. In the paper both types of analysis are compared on the basis of the results they provide. The aim is to identify the advantages and limits of both methods and to provide methodological suggestions for their further use in tourism research. The analysis is based on primary data from a customer satisfaction survey covering the key quality factors of a destination. The output of the two statistical methods is the creation of groups or clusters of quality factors that are similar in terms of respondents' evaluations, in order to facilitate the evaluation of the quality of tourist destinations. The results show that both tested methods can be used. The paper is elaborated in the frame of a wider research project aimed at developing a methodology for the quality evaluation of tourist destinations, especially in the context of customer satisfaction and loyalty.

  9. Application of Principal Component Analysis (PCA) to Reduce Multicollinearity Exchange Rate Currency of Some Countries in Asia Period 2004-2014

    Science.gov (United States)

    Rahayu, Sri; Sugiarto, Teguh; Madu, Ludiro; Holiawati; Subagyo, Ahmad

    2017-01-01

    This study aims to apply the principal component analysis model to reduce multicollinearity among the currency exchange rates of eight Asian countries against the US Dollar, including the Yen (Japan), Won (South Korea), Dollar (Hong Kong), Yuan (China), Baht (Thailand), Rupiah (Indonesia), Ringgit (Malaysia), and Dollar (Singapore). It looks at yield…

  10. Prediction of sensitivity to gefitinib/erlotinib for EGFR mutations in NSCLC based on structural interaction fingerprints and multilinear principal component analysis.

    Science.gov (United States)

    Zou, Bin; Lee, Victor H F; Yan, Hong

    2018-03-07

    Non-small cell lung cancer (NSCLC) with activating EGFR mutations, especially exon 19 deletions and the L858R point mutation, is particularly responsive to gefitinib and erlotinib. However, the sensitivity varies for less common and rare EGFR mutations. There are various explanations for the low sensitivity of EGFR exon 20 insertions and the exon 20 T790M point mutation to gefitinib/erlotinib. However, few studies discuss, from a structural perspective, why less common mutations, like G719X and L861Q, have moderate sensitivity to gefitinib/erlotinib. To decode the drug sensitivity/selectivity of EGFR mutants, it is important to analyze the interaction between EGFR mutants and EGFR inhibitors. In this paper, the 30 most common EGFR mutants were selected and the technique of protein-ligand interaction fingerprint (IFP) was applied to analyze and compare the binding modes of EGFR mutant-gefitinib/erlotinib complexes. Molecular dynamics simulations were employed to obtain the dynamic trajectory and a matrix of IFPs for each EGFR mutant-inhibitor complex. Multilinear Principal Component Analysis (MPCA) was applied for dimensionality reduction and feature selection. The selected features were further analyzed for use as a drug sensitivity predictor. The results showed that the accuracy of prediction of drug sensitivity was very high for both gefitinib and erlotinib. Targeted Projection Pursuit (TPP) was used to show that the data points can be easily separated based on their sensitivities to gefitinib/erlotinib. We can conclude that the IFP features of EGFR mutant-TKI complexes and the MPCA-based tensor object feature extraction are useful to predict the drug sensitivity of EGFR mutants. The findings provide new insights for studying and predicting drug resistance/sensitivity of EGFR mutations in NSCLC and can be beneficial to the design of future targeted therapies and innovative drug discovery.

  11. In-service inspection of electronics components, circuits and nuclear radiation detectors

    International Nuclear Information System (INIS)

    Darbhe, M.D.

    2002-01-01

    A nuclear reactor is a complex process plant. Like nuclear power plants, research reactors also employ various nuclear and process systems, the scope and number of such systems being plant-specific. In-service inspection of these systems is an important requirement and is applied at various levels of their constituent units, such as detectors, electronic components, circuits and integrated systems. The sensors used cover a wide range, such as neutronic, radiation and process (pressure, temperature, flow, level) sensors, among many others. The present discussion is limited to neutronic and radiation detectors. The electronic components used normally include passive components such as resistors and capacitors, semiconductor components such as diodes, transistors, analog integrated circuits and digital integrated circuits, and electromagnetic relays, to name a few. In order to have a comprehensive surveillance and ISI plan over the entire plant life, it is necessary to understand the various mechanisms which degrade the performance of these systems. These are discussed first, and then the various ISI methods used at the component, circuit or system level to ensure optimum system performance are described. Computerised systems, because of hardware and software considerations, have to be given special attention, and these are discussed briefly

  12. Improving power output of inertial energy harvesters by employing principal component analysis of input acceleration

    Science.gov (United States)

    Smilek, Jan; Hadas, Zdenek

    2017-02-01

    In this paper we propose the use of principal component analysis to process the measured acceleration data in order to determine the direction of acceleration with the highest variance at a given frequency of interest. This method can be used for improving the power generated by inertial energy harvesters. Their power output is highly dependent on the excitation acceleration magnitude and frequency, but the axes of the acceleration measurements might not always be perfectly aligned with the directions of movement, and therefore the generated power output might be severely underestimated in simulations, possibly leading to false conclusions about the feasibility of using the inertial energy harvester for the examined application.
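
    A minimal sketch of the proposed pre-processing on synthetic 3-axis data (the excitation direction, frequency and noise level are arbitrary): the eigenvector of the acceleration covariance matrix with the largest eigenvalue gives the direction of maximal variance, along which a harvester model would then be excited.

```python
# PCA of measured 3-axis acceleration to recover the dominant excitation direction.
import numpy as np

rng = np.random.default_rng(9)
fs, f0 = 1000.0, 20.0                                       # sample rate and excitation frequency
t = np.arange(0, 10, 1 / fs)
direction = np.array([0.6, 0.7, 0.4])
direction /= np.linalg.norm(direction)                      # true excitation direction
signal = 2.0 * np.sin(2 * np.pi * f0 * t)
acc = np.outer(signal, direction) + rng.normal(scale=0.2, size=(t.size, 3))

centered = acc - acc.mean(axis=0)
cov = centered.T @ centered / centered.shape[0]
eigvals, eigvecs = np.linalg.eigh(cov)
principal_dir = eigvecs[:, -1]                               # direction of maximal variance
print("recovered direction:", np.round(principal_dir, 2))
print("variance along PC1 vs along the x-axis:",
      round(float(eigvals[-1] / cov[0, 0]), 2), "times larger")
```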

  13. Likelihood-based methods for evaluating principal surrogacy in augmented vaccine trials.

    Science.gov (United States)

    Liu, Wei; Zhang, Bo; Zhang, Hui; Zhang, Zhiwei

    2017-04-01

    There is growing interest in assessing immune biomarkers, which are quick to measure and potentially predictive of long-term efficacy, as surrogate endpoints in randomized, placebo-controlled vaccine trials. This can be done under a principal stratification approach, with principal strata defined using a subject's potential immune responses to vaccine and placebo (the latter may be assumed to be zero). In this context, principal surrogacy refers to the extent to which vaccine efficacy varies across principal strata. Because a placebo recipient's potential immune response to vaccine is unobserved in a standard vaccine trial, augmented vaccine trials have been proposed to produce the information needed to evaluate principal surrogacy. This article reviews existing methods based on an estimated likelihood and a pseudo-score (PS) and proposes two new methods based on a semiparametric likelihood (SL) and a pseudo-likelihood (PL), for analyzing augmented vaccine trials. Unlike the PS method, the SL method does not require a model for missingness, which can be advantageous when immune response data are missing by happenstance. The SL method is shown to be asymptotically efficient, and it performs similarly to the PS and PL methods in simulation experiments. The PL method appears to have a computational advantage over the PS and SL methods.

  14. Surface erosion of fusion reactor components due to radiation blistering and neutron sputtering

    International Nuclear Information System (INIS)

    Das, S.K.; Kaminsky, M.

    1975-01-01

    Radiation blistering and neutron sputtering can lead to the surface erosion of fusion reactor components exposed to plasma radiations. Recent studies of methods to reduce the surface erosion caused by these processes are discussed

  15. Assessment of Radio-Frequency Radiation Exposure Level from Selected Mobile Base Stations (MBS) in Lokoja, Kogi State, Nigeria

    OpenAIRE

    Nwankwo, U. J. Victor; Jibiri, N. N.; Dada, S. S.; Onugba, A. A.; Ushie, P.

    2012-01-01

    The acquisition and use of mobile phone is tremendously increasing especially in developing countries, but not without a concern. The greater concern among the public is principally over the proximity of mobile base stations (MBS) to residential areas rather than the use of handsets. In this paper, we present an assessment of Radio-Frequency (RF) radiation exposure level measurements and analysis of radiation power density (in W/sq m) from mobile base stations relative to radial distance (in ...

  16. Principal component analysis of air particulate data from the industrial area of islamabad, pakistan

    International Nuclear Information System (INIS)

    Waheed, S.; Siddique, N.; Daud, M.

    2008-01-01

    A Gent air sampler was used to collect 72 pairs of size-fractionated coarse and fine (PM10 and PM2.5) particulate mass samples from the industrial zone (sector I-9) of Islamabad. These samples were analyzed for their elemental composition using Instrumental Neutron Activation Analysis (INAA). Principal component analysis (PCA), which can be used for source apportionment of quantified elemental data, was used to interpret the data. Graphical representations of loadings were used to explain the data through grouping of the elements from the same source. The present work shows well-defined elemental fingerprints of suspended soil and road dust, industry, motor vehicle exhaust and tyres, and coal and refuse combustion for the studied locality of Islamabad. (author)

  17. Support vector machine and principal component analysis for microarray data classification

    Science.gov (United States)

    Astuti, Widi; Adiwijaya

    2018-03-01

    Cancer is a leading cause of death worldwide, although a significant proportion of cases can be cured if detected early. In recent decades, microarray technology has taken an important role in the diagnosis of cancer. By using data mining techniques, microarray data classification can be performed to improve the accuracy of cancer diagnosis compared to traditional techniques. The characteristic of microarray data is a small sample size with a huge dimension. Hence, there is a challenge for researchers to provide solutions for microarray data classification with high performance in both accuracy and running time. This research proposes the use of Principal Component Analysis (PCA) as a dimension reduction method along with a Support Vector Machine (SVM) optimized by kernel functions as a classifier for microarray data classification. The proposed scheme was applied to seven data sets using 5-fold cross validation, and then evaluation and analysis were conducted in terms of both accuracy and running time. The results showed that the scheme obtained 100% accuracy for the Ovarian and Lung Cancer data when Linear and Cubic kernel functions were used. In terms of running time, PCA greatly reduced the running time for every data set.
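
    A sketch in the spirit of the proposed scheme, using random stand-in data (so the printed accuracies are meaningless): PCA reduces the gene dimension and an SVM with a cubic kernel is evaluated with 5-fold cross-validation.

```python
# PCA for dimension reduction followed by a polynomial (cubic) kernel SVM classifier.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(10)
n_samples, n_genes = 60, 5000                               # few samples, huge dimension
X = rng.normal(size=(n_samples, n_genes))
y = rng.integers(0, 2, n_samples)
X[y == 1, :20] += 1.5                                       # a small block of informative genes

clf = make_pipeline(StandardScaler(),
                    PCA(n_components=30),                   # n_components < number of samples
                    SVC(kernel="poly", degree=3))           # "cubic" kernel
scores = cross_val_score(clf, X, y, cv=5)
print("5-fold accuracies:", np.round(scores, 2))
```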

  18. The Psychometric Assessment of Children with Learning Disabilities: An Index Derived from a Principal Components Analysis of the WISC-R.

    Science.gov (United States)

    Lawson, J. S.; Inglis, James

    1984-01-01

    A learning disability index (LDI) for the assessment of intellectual deficits on the Wechsler Intelligence Scale for Children-Revised (WISC-R) is described. The Factor II score coefficients derived from an unrotated principal components analysis of the WISC-R normative data, in combination with the individual's scaled scores, are used for this…

  19. Crude Oil Price Forecasting Based on Hybridizing Wavelet Multiple Linear Regression Model, Particle Swarm Optimization Techniques, and Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Ani Shabri

    2014-01-01

    Full Text Available Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first selected to decompose an original time series into several subseries with different scales. Then, principal component analysis (PCA) is used to process the subseries data in the MLR for crude oil price forecasting. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily crude oil market, West Texas Intermediate (WTI), has been used as the case study. The time series prediction performance of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series.

  20. The Effects of Radiation on Cellular Components of the Immune System

    International Nuclear Information System (INIS)

    Zubaidah-Alatas

    2001-01-01

    The immune system describes the body's ability to defend itself against various foreign intruders, named antigens, by calling on an immune mechanism. The penetration of antigens into the body activates the immune system, which may produce a humoral response, a cellular response, or both. The immune response is primarily mediated by two cell types, lymphocytes and macrophages. This paper discusses the cellular components of the immune system and the effects of radiation on the various cells involved in the system. Moreover, the effects of radiation on humoral and cellular responses and the relation among immunity, cancer and radiotherapy are also described. (author)