WorldWideScience

Sample records for coordinative component analysis

  1. Knowledge-guided gene ranking by coordinative component analysis.

    Science.gov (United States)

    Wang, Chen; Xuan, Jianhua; Li, Huai; Wang, Yue; Zhan, Ming; Hoffman, Eric P; Clarke, Robert

    2010-03-30

    In cancer, gene networks and pathways often exhibit dynamic behavior, particularly during the process of carcinogenesis. Thus, it is important to prioritize those genes that are strongly associated with the functionality of a network. Traditional statistical methods are often unable to identify biologically relevant member genes, motivating researchers to incorporate biological knowledge into gene ranking methods. However, current integration strategies are often heuristic and fail to fully incorporate the true interplay between biological knowledge and gene expression data. To improve knowledge-guided gene ranking, we propose a novel method called coordinative component analysis (COCA) in this paper. COCA explicitly captures those genes within a specific biological context that are likely to be expressed in a coordinative manner. Formulated as an optimization problem to maximize the coordinative effort, COCA is designed to first extract the coordinative components based on partial guidance from knowledge genes and then rank the genes according to their participation strengths. An embedded bootstrapping procedure is implemented to improve the statistical robustness of the solutions. COCA was initially tested on simulation data and then on published gene expression microarray data to demonstrate its improved performance as compared to traditional statistical methods. Finally, the COCA approach has been applied to stem cell data to identify biologically relevant genes in signaling pathways. As a result, the COCA approach uncovers novel pathway members that may shed light on pathway deregulation in cancers. We have developed a new integrative strategy to combine biological knowledge and microarray data for gene ranking. The method uses knowledge genes as guidance to first extract coordinative components, and then ranks the genes according to their contribution to a network or pathway. The experimental results show that such a knowledge-guided strategy…
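
    The record above does not include the authors' COCA solver, so the following is a minimal Python stand-in for the idea, not the published algorithm: seed a component from the first principal component of the knowledge genes, then rank every gene by its bootstrap-averaged loading on that component. The matrix shapes, bootstrap count, and knowledge-gene indices are illustrative assumptions.

        # Stand-in for knowledge-guided gene ranking (not the authors' COCA solver).
        import numpy as np

        def knowledge_guided_ranking(X, knowledge_idx, n_boot=200, seed=0):
            """X: genes x samples expression matrix; knowledge_idx: known member genes."""
            rng = np.random.default_rng(seed)
            n_genes, n_samples = X.shape
            scores = np.zeros(n_genes)
            for _ in range(n_boot):
                cols = rng.integers(0, n_samples, n_samples)   # bootstrap resample of samples
                Xb = X[:, cols]
                K = Xb[knowledge_idx] - Xb[knowledge_idx].mean(axis=1, keepdims=True)
                _, _, vt = np.linalg.svd(K, full_matrices=False)
                comp = vt[0]                                   # seed component from knowledge genes
                Xc = Xb - Xb.mean(axis=1, keepdims=True)
                loadings = Xc @ comp / (np.linalg.norm(Xc, axis=1) + 1e-12)
                scores += np.abs(loadings)                     # participation strength per gene
            return np.argsort(-scores / n_boot)                # strongest first

        X = np.random.randn(500, 40)                           # toy expression data
        ranking = knowledge_guided_ranking(X, knowledge_idx=np.arange(10))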

  2. Identifying coordinative structure using principal component analysis based on coherence derived from linear systems analysis.

    Science.gov (United States)

    Wang, Xinguang; O'Dwyer, Nicholas; Halaki, Mark; Smith, Richard

    2013-01-01

    Principal component analysis is a powerful and popular technique for capturing redundancy in muscle activity and kinematic patterns. A primary limitation of the correlations or covariances between signals on which this analysis is based is that they do not account for dynamic relations between signals, yet such relations, such as that between neural drive and muscle tension, are widespread in the sensorimotor system. Low correlations may thus be obtained and signals may appear independent despite a dynamic linear relation between them. To address this limitation, linear systems analysis can be used to calculate the matrix of overall coherences between signals, which measures the strength of the relation between signals taking dynamic relations into account. Using ankle, knee, and hip sagittal-plane angles from 6 healthy subjects during walking, the first principal component derived from conventional correlation matrices accounted for ~50% of total variance in the data set, while with overall coherence matrices the first component accounted for > 95% of total variance. The results demonstrate that the dimensionality of the coordinative structure can be overestimated using conventional correlation, whereas a more parsimonious structure is identified with overall coherence.
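
    A hedged sketch of the comparison described above, using band-averaged magnitude-squared coherence from scipy as a stand-in for the paper's overall coherence derived from linear systems analysis; the simulated "joint angle" signals and filter constants are assumptions, chosen only to show how dynamic relations change the apparent dimensionality.

        import numpy as np
        from scipy.signal import coherence, lfilter

        fs, n = 100.0, 5000
        drive = np.random.randn(n)
        # three "joint angles": the same neural drive passed through different linear filters
        signals = np.vstack([lfilter([1.0], [1.0, -a], drive) for a in (0.3, 0.6, 0.9)])

        m = len(signals)
        corr = np.corrcoef(signals)
        coh = np.eye(m)
        for i in range(m):
            for j in range(i + 1, m):
                f, cxy = coherence(signals[i], signals[j], fs=fs, nperseg=256)
                coh[i, j] = coh[j, i] = cxy.mean()      # band-averaged coherence

        for name, mat in (("correlation", corr), ("coherence", coh)):
            w = np.sort(np.linalg.eigvalsh(mat))[::-1]
            print(name, "first-component share:", w[0] / w.sum())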

  3. A review on the coordinative structure of human walking and the application of principal component analysis

    Institute of Scientific and Technical Information of China (English)

    Xinguang Wang; Nicholas O'Dwyer; Mark Halaki

    2013-01-01

    Walking is a complex task in which hundreds of muscles, bones and joints work together to deliver smooth movement. Because of this complexity, walking has been widely investigated in order to identify the pattern of multi-segment movement and reveal the underlying control mechanism. The degrees of freedom and dimensional properties provide a view of the coordinative structure during walking, which has been extensively studied using dimension reduction techniques. In this paper, studies related to coordinative structure, dimension detection and pattern recognition during walking are reviewed. Principal component analysis, as a popular technique, is widely used in the processing of human movement data. Both the principle and the outcomes of principal component analysis are introduced in this paper. This technique has been reported to successfully reduce the redundancy within the original data, identify the physical meaning represented by the extracted principal components and discriminate different patterns. The coordinative structure during walking assessed by this technique could provide further information on the body's control mechanism and correlate walking patterns with injury.

  4. GNSS Vertical Coordinate Time Series Analysis Using Single-Channel Independent Component Analysis Method

    Science.gov (United States)

    Peng, Wei; Dai, Wujiao; Santerre, Rock; Cai, Changsheng; Kuang, Cuilin

    2017-02-01

    Daily vertical coordinate time series of Global Navigation Satellite System (GNSS) stations usually contain tectonic and non-tectonic deformation signals, residual atmospheric delay signals, measurement noise, etc. In geophysical studies, it is very important to separate the various geophysical signals in the GNSS time series so as to truthfully reflect the effect of mass loadings on crustal deformation. Based on the independence of mass loadings, we combine Ensemble Empirical Mode Decomposition (EEMD) with the Phase Space Reconstruction-based Independent Component Analysis (PSR-ICA) method to analyze the vertical time series of GNSS reference stations. In a simulation experiment, the seasonal non-tectonic signal is simulated as the sum of the corrections for atmospheric mass loading and soil moisture mass loading. The simulated seasonal non-tectonic signal can be separated into two independent signals using the PSR-ICA method, which correlate strongly with atmospheric mass loading and soil moisture mass loading, respectively. Likewise, in the analysis of the vertical time series of GNSS reference stations of the Crustal Movement Observation Network of China (CMONOC), similar results have been obtained using the combined EEMD and PSR-ICA method. All these results indicate that the EEMD and PSR-ICA method can effectively separate the independent atmospheric and soil moisture mass loading signals and clarify the main cause of the seasonal variation of GNSS vertical time series in mainland China.
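
    A rough sketch of the PSR-ICA step, assuming that phase-space reconstruction means time-delay embedding of the single vertical channel; the embedding dimension, lag, and synthetic loading signals are illustrative assumptions, and the EEMD pre-processing stage is omitted.

        import numpy as np
        from sklearn.decomposition import FastICA

        def delay_embed(x, dim=10, lag=30):
            # phase-space reconstruction: one series -> multichannel matrix
            n = len(x) - (dim - 1) * lag
            return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

        t = np.arange(2000)                                  # days
        annual = 3.0 * np.sin(2 * np.pi * t / 365.25)        # toy atmospheric loading
        soil = (1.5 * np.sin(2 * np.pi * t / 365.25 + 1.2)   # toy soil moisture loading
                + 0.8 * np.sin(4 * np.pi * t / 365.25))
        series = annual + soil + 0.5 * np.random.randn(len(t))  # toy GNSS "up" series

        X = delay_embed(series)
        ica = FastICA(n_components=2, random_state=0)
        sources = ica.fit_transform(X)    # columns: recovered independent signals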

  5. A coordination language for mobile components

    NARCIS (Netherlands)

    Arbab, F.; Bonsangue, M.M.; Boer, F.S. de

    1999-01-01

    In this paper we present the σπ coordination language, a core language for specifying dynamic networks of components. The language is inspired by the Manifold coordination language and by the π-calculus. The main concepts of the language are components, classes, objects…

  6. Harmonic Vibrational Analysis in Delocalized Internal Coordinates.

    Science.gov (United States)

    Jensen, Frank; Palmer, David S

    2011-01-11

    It is shown that a principal component analysis of a large set of internal coordinates can be used to define a nonredundant set of delocalized internal coordinates suitable for the calculation of harmonic vibrational normal modes. The selection of internal coordinates and the principal component analysis provide large degrees of freedom in extracting a nonredundant set of coordinates, and thus influence how the vibrational normal modes are described. It is shown that long-range coordinates may be especially suitable for describing low-frequency global deformation modes in proteins.
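
    A toy numpy illustration of the construction, assuming a random Wilson B-matrix in place of one computed from a real molecule: the eigenvectors of B Bᵀ with nonzero eigenvalues define the nonredundant delocalized internal coordinates.

        import numpy as np

        n_internal, n_cart = 40, 3 * 10      # e.g. 10 atoms, 40 redundant internals
        B = np.random.randn(n_internal, n_cart)   # stand-in B-matrix, illustrative only

        G = B @ B.T                          # redundancy ("G") matrix over internals
        w, U = np.linalg.eigh(G)
        keep = w > 1e-8 * w.max()            # nonzero eigenvalues only
        delocalized = U[:, keep].T @ B       # rows: nonredundant delocalized coordinates
        print(delocalized.shape)             # at most 3N-6 for a real molecule; here rank(B)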

  7. Analysis of Heme Iron Coordination in DGCR8: The Heme-Binding Component of the Microprocessor Complex.

    Science.gov (United States)

    Girvan, Hazel M; Bradley, Justin M; Cheesman, Myles R; Kincaid, James R; Liu, Yilin; Czarnecki, Kazimierz; Fisher, Karl; Leys, David; Rigby, Stephen E J; Munro, Andrew W

    2016-09-13

    DGCR8 is the RNA-binding partner of the nuclease Drosha. Their complex (the "Microprocessor") is essential for processing of long, primary microRNAs (pri-miRNAs) in the nucleus. Binding of heme to DGCR8 is essential for pri-miRNA processing. On the basis of the split Soret ultraviolet-visible (UV-vis) spectrum of ferric DGCR8, bis-thiolate sulfur (cysteinate, Cys(-)) heme iron coordination of DGCR8 heme iron was proposed. We have characterized DGCR8 heme ligation using the Δ276 DGCR8 variant and combined electron paramagnetic resonance (EPR), magnetic circular dichroism (MCD), electron nuclear double resonance, resonance Raman, and electronic absorption spectroscopy. These studies indicate DGCR8 bis-Cys heme iron ligation, with conversion from bis-thiolate (Cys(-)/Cys(-)) axial coordination in ferric DGCR8 to bis-thiol (CysH/CysH) coordination in ferrous DGCR8. Pri-miRNA binding does not perturb ferric DGCR8's optical spectrum, consistent with the axial ligand environment being separated from the substrate-binding site. UV-vis absorption spectra of the Fe(II) and Fe(II)-CO forms indicate discrete species exhibiting peaks with absorption coefficients substantially larger than those for ferric DGCR8 and that previously reported for a ferrous form of DGCR8. Electron-nuclear double resonance spectroscopy data exclude histidine or water as axial ligands for ferric DGCR8 and favor bis-thiolate coordination in this form. UV-vis MCD and near-infrared MCD provide data consistent with this conclusion. UV-vis MCD data for ferrous DGCR8 reveal features consistent with bis-thiol heme iron coordination, and resonance Raman data for the ferrous-CO form are consistent with a thiol ligand trans to the CO. These studies support retention of DGCR8 cysteine coordination upon reduction, a conclusion distinct from those of previous studies of a different ferrous DGCR8 isoform.

  8. High-dimensional Data Visualization Based on Principal Component Analysis and Parallel Coordinates

    Institute of Scientific and Technical Information of China (English)

    雷君虎; 杨家红; 钟坚成; 王苏卫

    2011-01-01

    Parallel coordinates can be used for high-dimensional data visualization, but when too many data dimensions are displayed, visual clutter may occur. This paper proposes a data visualization method named PPCP, which combines Principal Component Analysis (PCA) and parallel coordinates. PCA is used for effective dimension reduction on high-dimensional data, and the processed data are displayed as a parallel coordinate visualization. Experimental results show that the method effectively reveals the relationships among high-dimensional data.
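
    A small sketch of the PPCP pipeline with scikit-learn and pandas, using the iris data as a stand-in for the paper's high-dimensional test data; the number of retained components is a user choice.

        import pandas as pd
        import matplotlib.pyplot as plt
        from sklearn.datasets import load_iris
        from sklearn.decomposition import PCA
        from pandas.plotting import parallel_coordinates

        iris = load_iris()
        scores = PCA(n_components=3).fit_transform(iris.data)   # dimension reduction
        df = pd.DataFrame(scores, columns=["PC1", "PC2", "PC3"])
        df["class"] = [iris.target_names[i] for i in iris.target]

        parallel_coordinates(df, "class")    # one polyline per observation
        plt.show()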

  9. Investigation on Tidal Components in GPS Coordinates

    Science.gov (United States)

    Araszkiewicz, Andrzej; Bogusz, Janusz; Figurski, Mariusz

    2009-01-01

    This paper presents analyses of GPS coordinates from sub-diurnal solutions of EPN data provided by the Warsaw Military University of Technology. The aim of this research is to investigate how well the tidal models used in the Bernese software (both solid Earth and ocean tides) fit the individual conditions of EPN stations. The 1-hour solution technique of GPS data processing was utilized to obtain coordinates of over 70 EPN stations. Additionally, several Polish permanent sites with clearly visible oscillations were examined. This processing technique allowed us to recognize diurnal and sub-diurnal residual oscillations which could subsequently be used for validation of the tidal models.

  10. Reo: A Channel-based Coordination Model for Component Composition

    NARCIS (Netherlands)

    Arbab, F.

    2004-01-01

    In this paper, we present Reo, which forms a paradigm for composition of software components based on the notion of mobile channels. Reo is a channel-based exogenous coordination model in which complex coordinators, called connectors, are compositionally built out of simpler ones. The simplest connectors…

  11. A channel-based coordination model for component composition

    NARCIS (Netherlands)

    Arbab, F.

    2002-01-01

    In this paper, we present Reo, a paradigm for composition of software components based on the notion of mobile channels. Reo is a channel-based exogenous coordination model wherein complex coordinators, called connectors, are compositionally built out of simpler ones.

  12. Ellipsoidal analysis of coordination polyhedra

    Science.gov (United States)

    Cumby, James; Attfield, J. Paul

    2017-02-01

    The idea of the coordination polyhedron is essential to understanding chemical structure. Simple polyhedra in crystalline compounds are often deformed due to structural complexity or electronic instabilities, so distortion analysis methods are useful. Here we demonstrate that analysis of the minimum bounding ellipsoid of a coordination polyhedron provides a general method for studying distortion, yielding parameters that are sensitive to various orders in metal oxide examples. Ellipsoidal analysis leads to discovery of a general switching of polyhedral distortions at symmetry-disallowed transitions in perovskites that may evidence underlying coordination bistability, and reveals a weak off-centre 'd5 effect' for Fe3+ ions that could be exploited in multiferroics. Separating electronic distortions from intrinsic deformations within the low-temperature superstructure of magnetite provides new insights into the charge and trimeron orders. Ellipsoidal analysis can be useful for exploring local structure in many materials such as coordination complexes and frameworks, organometallics and organic molecules.
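
    A minimal Khachiyan-style iteration for the minimum bounding (minimum-volume enclosing) ellipsoid of a polyhedron's vertices, whose semi-axes can serve as distortion parameters; the slightly distorted octahedron is an illustrative input, not data from the paper.

        import numpy as np

        def min_bounding_ellipsoid(P, tol=1e-7):
            """P: n_points x d array. Returns center c and matrix A with
            (x - c)^T A (x - c) <= 1 for all vertices."""
            n, d = P.shape
            Q = np.column_stack([P, np.ones(n)]).T       # lifted (d+1) x n points
            u = np.ones(n) / n
            while True:
                X = Q @ np.diag(u) @ Q.T
                M = np.einsum("in,ij,jn->n", Q, np.linalg.inv(X), Q)
                j = np.argmax(M)
                step = (M[j] - d - 1.0) / ((d + 1.0) * (M[j] - 1.0))
                new_u = (1 - step) * u
                new_u[j] += step
                if np.linalg.norm(new_u - u) < tol:
                    u = new_u
                    break
                u = new_u
            c = P.T @ u
            A = np.linalg.inv(P.T @ np.diag(u) @ P - np.outer(c, c)) / d
            return c, A

        octa = np.array([[1.0, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0],
                         [0, 0, 1.1], [0, 0, -1.1]])     # axially distorted octahedron
        c, A = min_bounding_ellipsoid(octa)
        axes = 1.0 / np.sqrt(np.linalg.eigvalsh(A))      # ellipsoid semi-axes
        print(axes)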

  13. Principal component analysis

    NARCIS (Netherlands)

    Bro, R.; Smilde, A.K.

    2014-01-01

    Principal component analysis is one of the most important and powerful methods in chemometrics as well as in a wealth of other areas. This paper provides a description of how to understand, use, and interpret principal component analysis. The paper focuses on the use of principal component analysis…

  14. Multilevel component analysis

    NARCIS (Netherlands)

    Timmerman, M.E.

    2006-01-01

    A general framework for the exploratory component analysis of multilevel data (MLCA) is proposed. In this framework, a separate component model is specified for each group of objects at a certain level. The similarities between the groups of objects at a given level can be expressed by imposing constraints…

  15. Discriminant Incoherent Component Analysis.

    Science.gov (United States)

    Georgakis, Christos; Panagakis, Yannis; Pantic, Maja

    2016-05-01

    Face images convey rich information which can be perceived as a superposition of low-complexity components associated with attributes, such as facial identity, expressions, and activation of facial action units (AUs). For instance, low-rank components characterizing neutral facial images are associated with identity, while sparse components capturing non-rigid deformations occurring in certain face regions reveal expressions and AU activations. In this paper, the discriminant incoherent component analysis (DICA) is proposed in order to extract low-complexity components, corresponding to facial attributes, which are mutually incoherent among different classes (e.g., identity, expression, and AU activation) from training data, even in the presence of gross sparse errors. To this end, a suitable optimization problem, involving the minimization of the nuclear- and l1-norms, is solved. Having found an ensemble of class-specific incoherent components by the DICA, an unseen (test) image is expressed as a group-sparse linear combination of these components, where the non-zero coefficients reveal the class(es) of the respective facial attribute(s) that it belongs to. The performance of the DICA is experimentally assessed on both synthetic and real-world data. Emphasis is placed on face analysis tasks, namely, joint face and expression recognition, face recognition under varying percentages of training data corruption, subject-independent expression recognition, and AU detection by conducting experiments on four data sets. The proposed method outperforms all compared methods across all tasks and experimental settings.

  16. Towards Cognitive Component Analysis

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Ahrendt, Peter; Larsen, Jan

    2005-01-01

    Cognitive component analysis (COCA) is here defined as the process of unsupervised grouping of data such that the ensuing group structure is well-aligned with that resulting from human cognitive activity. We have earlier demonstrated that independent component analysis is relevant for representing semantics, not only in text, but also in dynamic text (chat), images, and combinations of text and images. Here we further expand on the relevance of the ICA model for representing context, including two new analyses of abstract data: social networks and musical features.

  17. Robust Principal Component Analysis?

    CERN Document Server

    Candes, Emmanuel J; Ma, Yi; Wright, John

    2009-01-01

    This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the L1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for th...
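
    A hedged numpy sketch of Principal Component Pursuit via the inexact augmented Lagrangian method, alternating singular-value thresholding (for the nuclear norm) with soft thresholding (for the L1 norm); the step-size and stopping choices below are common defaults, not necessarily the authors' exact algorithm settings.

        import numpy as np

        def shrink(X, tau):
            # elementwise soft thresholding
            return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

        def rpca(M, max_iter=500, tol=1e-7):
            m, n = M.shape
            lam = 1.0 / np.sqrt(max(m, n))          # weight on the L1 term
            mu = 0.25 * m * n / np.abs(M).sum()     # a common penalty initialization
            L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
            for _ in range(max_iter):
                U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
                L = U @ np.diag(shrink(sig, 1.0 / mu)) @ Vt   # singular-value thresholding
                S = shrink(M - L + Y / mu, lam / mu)          # sparse component update
                resid = M - L - S
                Y += mu * resid
                if np.linalg.norm(resid) < tol * np.linalg.norm(M):
                    break
            return L, S

        # low-rank + sparse toy example
        rank1 = np.outer(np.random.randn(60), np.random.randn(40))
        spikes = np.zeros((60, 40)); spikes[np.random.rand(60, 40) < 0.05] = 5.0
        L, S = rpca(rank1 + spikes)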

  18. Cognitive Component Analysis

    DEFF Research Database (Denmark)

    Feng, Ling

    2008-01-01

    This dissertation concerns the investigation of the consistency of statistical regularities in a signaling ecology and human cognition, while inferring appropriate actions for a speech-based perceptual task. It is based on unsupervised Independent Component Analysis, which provides a rich spectrum of audio contexts, along with pattern recognition methods to map components to known contexts. It also involves looking for the right representations for auditory inputs, i.e. the data analytic processing pipelines invoked by human brains. The main ideas refer to Cognitive Component Analysis, defined as the process of unsupervised grouping of generic data such that the ensuing group structure is well-aligned with that resulting from human cognitive activity. Its hypothesis runs ecologically: features which are essentially independent in a context-defined ensemble can be efficiently coded as sparse…

  19. Bayesian Independent Component Analysis

    DEFF Research Database (Denmark)

    Winther, Ole; Petersen, Kaare Brandt

    2007-01-01

    In this paper we present an empirical Bayesian framework for independent component analysis. The framework provides estimates of the sources, the mixing matrix and the noise parameters, and is flexible with respect to choice of source prior and the number of sources and sensors. Inside the engine … The framework, implemented in a Matlab toolbox, is demonstrated for non-negative decompositions and compared with non-negative matrix factorization.

  20. Similar component analysis

    Institute of Scientific and Technical Information of China (English)

    ZHANG Hong; WANG Xin; LI Junwei; CAO Xianguang

    2006-01-01

    A new unsupervised feature extraction method called similar component analysis (SCA) is proposed in this paper. The SCA method has a self-aggregation property: the data objects move towards each other to form clusters through SCA, which can theoretically reveal the inherent pattern of similarity hidden in the dataset. The inputs of SCA are just the pairwise similarities of the dataset, which makes it easier to use for time series analysis given the variable lengths of time series. Our experimental results on many problems have verified the effectiveness of SCA in several engineering applications.

  1. Recursive principal components analysis.

    Science.gov (United States)

    Voegtlin, Thomas

    2005-10-01

    A recurrent linear network can be trained with Oja's constrained Hebbian learning rule. As a result, the network learns to represent the temporal context associated with its input sequence. The operation performed by the network is a generalization of Principal Components Analysis (PCA) to time series, called Recursive PCA. The representations learned by the network are adapted to the temporal statistics of the input. Moreover, sequences stored in the network may be retrieved explicitly, in the reverse order of presentation, thus providing a straightforward neural implementation of a logical stack.
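
    For reference, the static building block that the paper generalizes: a minimal Oja's-rule iteration that extracts the first principal component from a data stream. The learning rate and toy data are assumptions; the recursive, temporal-context extension itself is not shown.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.standard_normal((5000, 3)) @ np.diag([3.0, 1.0, 0.3])  # anisotropic data

        w = rng.standard_normal(3)
        eta = 1e-3
        for x in X:
            y = w @ x
            w += eta * y * (x - y * w)    # Oja's rule: Hebbian term with weight decay
        print(w / np.linalg.norm(w))      # converges toward the leading eigenvector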

  2. Examining the Efficiency of Models Using Tangent Coordinates or Principal Component Scores in Allometry Studies.

    Science.gov (United States)

    Sigirli, Deniz; Ercan, Ilker

    2015-09-01

    Most studies in the medical and biological sciences involve the examination of the geometrical properties of an organ or organism. Growth and allometry studies are important for investigating the effects of diseases and environmental factors on the structure of an organ or organism. Thus, statistical shape analysis has recently become more important in the medical and biological sciences. Shape is all the geometrical information that remains when location, scale and rotational effects are removed from an object. Allometry, which is the relationship between size and shape, plays an important role in the development of statistical shape analysis. The aim of the present study was to compare two different models for allometry, which include tangent coordinates or principal component scores of tangent coordinates as dependent variables in multivariate regression analysis. The results of the simulation study showed that the model constructed by taking tangent coordinates as dependent variables is more appropriate than the model constructed by taking principal component scores of tangent coordinates as dependent variables, for all sample sizes.

  3. Local Component Analysis

    CERN Document Server

    Roux, Nicolas Le

    2011-01-01

    Kernel density estimation, a.k.a. Parzen windows, is a popular density estimation method, which can be used for outlier detection or clustering. With multivariate data, its performance is heavily reliant on the metric used within the kernel. Most earlier work has focused on learning only the bandwidth of the kernel (i.e., a scalar multiplicative factor). In this paper, we propose to learn a full Euclidean metric through an expectation-minimization (EM) procedure, which can be seen as an unsupervised counterpart to neighbourhood component analysis (NCA). In order to avoid overfitting with a fully nonparametric density estimator in high dimensions, we also consider a semi-parametric Gaussian-Parzen density model, where some of the variables are modelled through a jointly Gaussian density, while others are modelled through Parzen windows. For these two models, EM leads to simple closed-form updates based on matrix inversions and eigenvalue decompositions. We show empirically that our method leads to density esti...
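
    A hedged sketch of the metric idea: a Gaussian Parzen density with a full covariance kernel, scored by leave-one-out log-likelihood. For brevity the metric is scanned over scalar multiples of the sample covariance rather than learned by the paper's EM updates.

        import numpy as np
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(1)
        X = rng.multivariate_normal([0.0, 0.0], [[2.0, 1.2], [1.2, 1.0]], size=200)

        def loo_loglik(X, Sigma):
            # mean leave-one-out log-density under a Gaussian Parzen estimate
            total = 0.0
            for i in range(len(X)):
                diffs = np.delete(X, i, axis=0) - X[i]
                dens = multivariate_normal(mean=np.zeros(2), cov=Sigma).pdf(diffs)
                total += np.log(dens.mean())
            return total / len(X)

        S = np.cov(X.T)                                  # full-metric candidate
        scales = [0.05, 0.1, 0.2, 0.4, 0.8]
        best = max(scales, key=lambda s: loo_loglik(X, s * S))
        print("selected kernel metric: %.2f * sample covariance" % best)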

  4. Analysis of the hominoid os coxae by Cartesian coordinates.

    Science.gov (United States)

    McHenry, H M; Corruccini, R S

    1978-02-01

    This study is based upon 48 3-dimensional coordinates taken on 4 fossil hominid and 127 extant hominoid coxal bones. The fossils include Sts 14, SK 3155, MLD 7, and MLD 25. The comparative sample consists of 42 Homo sapiens, 27 Pan troglodytes, 29 Gorilla gorilla and 29 Pongo pygmaeus. The coordinates improve the metrical representation of the bone beyond what can be done with linear measurements because the shape complexity of the os coxae is so great. The coordinates are rotated and translated so that all bones are in a standard position. The coordinates are then standardized for each specimen by dividing all coordinates by the pooled standard deviation of the X, Y, and Z coordinates. These data are subjected to standard statistical analyses including analysis of variance, Penrose size and shape statistics, principal coordinates and components, and canonical variates analysis. The data are then further altered by using one specimen as a standard and rotating each specimen until the total squared distance between its coordinates and those of the standard is minimized. The same statistics are applied to these "best fit" data. The results show a high degree of agreement between the methods. The hominid os coxae are fundamentally different from those of the other hominoids, and the fossil hominids share the basic hominid configuration but with some unique differences.
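
    The "best fit" step corresponds to orthogonal Procrustes alignment. A sketch with scipy, using random landmark arrays as stand-ins for the 48 coordinates per bone:

        import numpy as np
        from scipy.linalg import orthogonal_procrustes

        def standardize(P):
            P = P - P.mean(axis=0)        # translate to centroid
            return P / P.std()            # divide by pooled s.d. of X, Y, Z

        standard = standardize(np.random.randn(48, 3))
        specimen = standardize(np.random.randn(48, 3))

        # rotation minimizing the total squared distance to the standard
        R, _ = orthogonal_procrustes(specimen, standard)
        fitted = specimen @ R
        print(np.sum((fitted - standard) ** 2))   # residual shape distance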

  5. Analysis Components Investigation Report

    Science.gov (United States)

    2014-10-01


  6. Managing Coordinator, Educational or Entrepreneurial Coordinator: Course Coordinator Profile Analysis at Private HEIS

    Directory of Open Access Journals (Sweden)

    Mariana Augusta de Araújo Silva

    2014-12-01

    Higher education dynamics are impacted by political, economic and financial interference. In parallel, the Ministry of Education and Culture (MEC) is strict in its reviews to ensure Brazilian higher education is valued and promoted. The purpose of this study is to identify the profile of course coordinators and the factors that might improve, at the surveyed HEIs, this professional's relationship with students, teaching staff and directors. Literature was searched and reviewed to collect material on the subject. A quantitative research approach was employed, with objectives of an exploratory, descriptive nature, since this technique ensures extended comprehension of the investigated phenomenon; data were gathered via personal interviews. The object of investigation comprised all course coordinators of the Estácio/Natal Group in its four units in Brazil. The survey tool comprises: 13 closed questions to identify the coordinator's profile; 17 questions with a 5-point Likert scale to identify the entrepreneurial profile; 42 questions, also on a 5-point Likert scale, to measure the dimensions of the coordinator's activities; and 4 open, optional questions to measure difficulties and possibilities that impact the development of an entrepreneurial course management approach. The study employed statistical methods (data analysis and descriptive statistics). Findings lead to the conclusion that the information and knowledge gathered support the researched HEIs in overcoming their challenges, among which are encouraging strategic course management and innovation, focused on implementing a new vision of the course coordinator as a professional who knows how to balance management and pedagogical skills while innovating through entrepreneurial competencies.

  7. Shifted Independent Component Analysis

    DEFF Research Database (Denmark)

    Mørup, Morten; Madsen, Kristoffer Hougaard; Hansen, Lars Kai

    2007-01-01

    Delayed mixing is a problem of theoretical interest and practical importance, e.g., in speech processing, bio-medical signal analysis and financial data modelling. Most previous analyses have been based on models with integer shifts, i.e., shifts by a number of samples, and have often been carried…

  8. Multiview Bayesian Correlated Component Analysis

    DEFF Research Database (Denmark)

    Kamronn, Simon Due; Poulsen, Andreas Trier; Hansen, Lars Kai

    2015-01-01

    Correlated components are identified under the assumption that the involved spatial networks are identical. Here we propose a hierarchical probabilistic model that can infer the level of universality in such multiview data, from completely unrelated representations, corresponding to canonical correlation analysis, to identical representations as in correlated component analysis. This new model, which we denote Bayesian correlated component analysis, evaluates favorably against three relevant algorithms in simulated data. A well-established benchmark EEG data set is used to further validate the new model and infer the variability of spatial representations across multiple subjects.

  9. A dynamic human motion: coordination analysis.

    Science.gov (United States)

    Pchelkin, Stepan; Shiriaev, Anton S; Freidovich, Leonid B; Mettin, Uwe; Gusev, Sergei V; Kwon, Woong; Paramonov, Leonid

    2015-02-01

    This article is concerned with the generic structure of the motion coordination system resulting from the application of the method of virtual holonomic constraints (VHCs) to the problem of the generation and robust execution of a dynamic humanlike motion by a humanoid robot. The motion coordination developed using VHCs is based on a motion generator equation, which is a scalar nonlinear differential equation of second order. It can be considered equivalent in function to a central pattern generator in living organisms. The relative time evolution of the degrees of freedom of a humanoid robot during a typical motion is specified by a set of coordination functions that uniquely define the overall pattern of the motion. This is comparable to a hypothesis on the existence of motion patterns in biomechanics. A robust control is derived based on a transverse linearization along the configuration manifold defined by the coordination functions. It is shown that the derived coordination and control architecture possesses excellent robustness properties. The analysis is performed on an example of a real human motion recorded in test experiments.

  10. Regularized Generalized Structured Component Analysis

    Science.gov (United States)

    Hwang, Heungsun

    2009-01-01

    Generalized structured component analysis (GSCA) has been proposed as a component-based approach to structural equation modeling. In practice, GSCA may suffer from multicollinearity, i.e., high correlations among exogenous variables. GSCA as yet has no remedy for this problem. Thus, a regularized extension of GSCA is proposed that integrates a ridge…

  11. Psychophysiological Mechanisms of Coordination Component of Psychomotor Abilities of the Musicians

    Directory of Open Access Journals (Sweden)

    Korlyakova S.G.

    2017-08-01

    Psychomotor abilities of the musician are implemented in performing technique and include muscle strength, endurance, speed of movements, coordination, and motor memory. The article presents the materials of a theoretical study aimed at identifying the level-based character of the formation of the coordination component of musicians' psychomotor abilities, and at defining the psychophysiological mechanisms that contribute to the effective development of musical performing technique. The formation of the coordination component of musicians' psychomotor abilities is reviewed in the light of N. A. Bernstein's theory of the construction of movements, which most fully represents the interrelation of the physiological and psychological mechanisms of human motor activity. Using the example of the musical performing activity of trained pianists, the formation of intermuscular, spatial and sensory-motor (visual-motor, auditory-motor, tactile-motor) coordination is reviewed, along with the psychomotor coordination processes involved in the development of musicians' performing technique in general.

  12. Fast Steerable Principal Component Analysis

    OpenAIRE

    Zhao, Zhizhen; Shkolnisky, Yoel; Singer, Amit

    2016-01-01

    Cryo-electron microscopy nowadays often requires the analysis of hundreds of thousands of 2D images as large as a few hundred pixels in each direction. Here we introduce an algorithm that efficiently and accurately performs principal component analysis (PCA) for a large set of two-dimensional images, and, for each image, the set of its uniform rotations in the plane and their reflections. For a dataset consisting of n images of size L × L pixels, the computational complexity of our algorithm is O(nL^3 + L^4), while existing algorithms take O(nL^4).

  13. Functional data analysis of joint coordination in the development of vertical jump performance.

    Science.gov (United States)

    Harrison, A J; Ryan, W; Hayes, K

    2007-05-01

    Mastery of complex motor skills requires effective development of inter-segment coordination patterns. These coordination patterns can be described and quantified using various methods, including descriptive angle-angle diagrams, conjugate cross-correlations, vector coding, normalized root mean squared error techniques and, as in this study, functional data analysis procedures. Lower limb kinematic data were obtained for 49 children performing the vertical jump. Participants were assigned to developmental stages using the criteria of Gallahue and Ozmun. Inter-segment joint coordination data consisting of pairs of joint angle-time data were smoothed using B-splines and the resulting bivariate functions were analysed using functional principal component analysis and stepwise discriminant analysis. The results of the analysis showed that the knee-hip joint coordination pattern was most effective at discriminating between developmental stages. The results provide support for the application of functional data analysis techniques in the analysis of joint coordination or time-series-type data.
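
    A rough sketch of the functional PCA step under stated assumptions: each synthetic joint-angle curve is smoothed with a B-spline, resampled on a common grid, and the principal components of the resampled curves are extracted. The curve model and sample sizes are illustrative, not the study's data.

        import numpy as np
        from scipy.interpolate import make_interp_spline

        rng = np.random.default_rng(0)
        grid = np.linspace(0, 1, 101)
        curves = []
        for _ in range(49):                       # one curve per child
            raw_t = np.linspace(0, 1, 25)
            raw = (np.sin(2 * np.pi * raw_t) * (1 + 0.2 * rng.standard_normal())
                   + 0.1 * rng.standard_normal(25))
            curves.append(make_interp_spline(raw_t, raw, k=3)(grid))  # B-spline smoothing
        curves = np.array(curves)

        centered = curves - curves.mean(axis=0)
        U, s, Vt = np.linalg.svd(centered, full_matrices=False)
        fpc_scores = centered @ Vt[:2].T          # scores on the first two FPCs
        print("variance explained:", s[:2] ** 2 / (s ** 2).sum())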

  14. Fast Steerable Principal Component Analysis.

    Science.gov (United States)

    Zhao, Zhizhen; Shkolnisky, Yoel; Singer, Amit

    2016-03-01

    Cryo-electron microscopy nowadays often requires the analysis of hundreds of thousands of 2-D images as large as a few hundred pixels in each direction. Here, we introduce an algorithm that efficiently and accurately performs principal component analysis (PCA) for a large set of 2-D images, and, for each image, the set of its uniform rotations in the plane and their reflections. For a dataset consisting of n images of size L × L pixels, the computational complexity of our algorithm is O(nL^3 + L^4), while existing algorithms take O(nL^4). The new algorithm computes the expansion coefficients of the images in a Fourier-Bessel basis efficiently using the nonuniform fast Fourier transform. We compare the accuracy and efficiency of the new algorithm with traditional PCA and existing algorithms for steerable PCA.

  15. Parametric functional principal component analysis.

    Science.gov (United States)

    Sang, Peijun; Wang, Liangliang; Cao, Jiguo

    2017-03-10

    Functional principal component analysis (FPCA) is a popular approach in functional data analysis to explore major sources of variation in a sample of random curves. These major sources of variation are represented by functional principal components (FPCs). Most existing FPCA approaches use a set of flexible basis functions such as B-spline basis to represent the FPCs, and control the smoothness of the FPCs by adding roughness penalties. However, the flexible representations pose difficulties for users to understand and interpret the FPCs. In this article, we consider a variety of applications of FPCA and find that, in many situations, the shapes of top FPCs are simple enough to be approximated using simple parametric functions. We propose a parametric approach to estimate the top FPCs to enhance their interpretability for users. Our parametric approach can also circumvent the smoothing parameter selecting process in conventional nonparametric FPCA methods. In addition, our simulation study shows that the proposed parametric FPCA is more robust when outlier curves exist. The parametric FPCA method is demonstrated by analyzing several datasets from a variety of applications. © 2017, The International Biometric Society.

  16. Coordination between veterinary services and other relevant authorities: a key component of good public governance.

    Science.gov (United States)

    Bellemain, V

    2012-08-01

    Coordination between Veterinary Services and other relevant authorities is a key component of good public governance, especially for effective action and optimal management of available resources. The importance of good coordination is reflected in the World Organisation for Animal Health's 'Tool for the Evaluation of Performance of Veterinary Services', which includes a critical competency on coordination. Many partners from technical, administrative and legal fields are involved. The degree of formalisation of coordination tends to depend on a country's level of organisation and development. Contingency plans against avian influenza led to breakthroughs in many countries in the mid-2000s. While interpersonal relationships remain vital, not everything should hinge on them. Organisation and management are critical to operational efficiency. The distribution of responsibilities needs to be defined clearly, avoiding duplication and areas of conflict. Lead authorities should be designated according to subject (Veterinary Services in animal health areas) and endowed with the necessary legitimacy. Lead authorities will be responsible for coordinating the drafting and updating of the relevant documents: agreements between authorities, contingency plans, standard operating procedures, etc.

  17. Sensitivity Analysis of Component Reliability

    Institute of Scientific and Technical Information of China (English)

    Zhenhua Ge

    2004-01-01

    In a system, every component has its unique position within the system and its unique failure characteristics. When a component's reliability changes, the effect on system reliability is not the same for every component. Component reliability sensitivity is a measure of the effect on system reliability when a component's reliability is changed. In this paper, the definition and the associated matrix of component reliability sensitivity are proposed, and some of their characteristics are analyzed. All of this helps us to analyse and improve system reliability.

  18. Interpretable functional principal component analysis.

    Science.gov (United States)

    Lin, Zhenhua; Wang, Liangliang; Cao, Jiguo

    2016-09-01

    Functional principal component analysis (FPCA) is a popular approach to explore major sources of variation in a sample of random curves. These major sources of variation are represented by functional principal components (FPCs). The intervals where the values of FPCs are significant are interpreted as where sample curves have major variations. However, these intervals are often hard for naïve users to identify, because of the vague definition of "significant values". In this article, we develop a novel penalty-based method to derive FPCs that are only nonzero precisely in the intervals where the values of FPCs are significant, whence the derived FPCs possess better interpretability than the FPCs derived from existing methods. To compute the proposed FPCs, we devise an efficient algorithm based on projection deflation techniques. We show that the proposed interpretable FPCs are strongly consistent and asymptotically normal under mild conditions. Simulation studies confirm that with a competitive performance in explaining variations of sample curves, the proposed FPCs are more interpretable than the traditional counterparts. This advantage is demonstrated by analyzing two real datasets, namely, electroencephalography data and Canadian weather data.

  19. Anatomic Breast Coordinate System for Mammogram Analysis

    DEFF Research Database (Denmark)

    Karemore, Gopal Raghunath; Brandt, S; Karssemeijer, N;

    2011-01-01

    … inside the breast. Most of the risk assessment and CAD modules use a breast region in an image-centered Cartesian (x,y) coordinate system. Nevertheless, anatomical structure follows curvilinear trajectories. We examined an anatomical breast coordinate system that preserves the anatomical correspondence between the mammograms and allows extracting not only the aligned position but also the orientation aligned with the anatomy of the breast tissue structure. Materials and Methods: The coordinate system used the nipple location as the point A and the border of the pectoral muscle as a line BC. The skin-air … was represented by geodesic distance (s) from the nipple and a parametric angle. The scoring technique called MTR (mammographic texture resemblance marker) used this breast coordinate system to extract Gaussian derivative features. The features extracted using the (x,y) and the curve…

  20. Appendage modal coordinate truncation criteria in hybrid coordinate dynamic analysis. [for spacecraft attitude control

    Science.gov (United States)

    Likins, P.; Ohkami, Y.; Wong, C.

    1976-01-01

    The paper examines the validity of the assumption that certain appendage-distributed (modal) coordinates can be truncated from a system model without unacceptable degradation of fidelity in hybrid coordinate dynamic analysis for attitude control of spacecraft with flexible appendages. Alternative truncation criteria are proposed and their interrelationships defined. Particular attention is given to truncation criteria based on eigenvalues, eigenvectors, and controllability and observability. No definitive resolution of the problem is advanced, and exhaustive study is required to obtain ultimate truncation criteria.

  1. Nearest-neighbor coordination and chemical ordering in multi-component bulk metallic glasses

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Dong [ORNL; Stoica, Alexandru Dan [ORNL; Yang, Ling [ORNL; Wang, Xun-Li [ORNL; Lu, Zhao Ping [ORNL; Neuefeind, Joerg C [ORNL; Kramer, Matthew J [ORNL; Richardson, James W [Argonne National Laboratory (ANL); Proffen, Thomas E [ORNL

    2007-01-01

    We report complementary use of high-energy x-ray and neutron diffraction to probe the local atomic structure in a Zr-based multi-component bulk metallic glass. By analyzing the partial coordination numbers, we demonstrate the presence of multiple types of solute-centered clusters (or the lack of solute-solute bonding) and efficient packing of the amorphous structure at the atomic scale. Our findings provide a basis for understanding how the local structures change during phase transformation and mechanical deformation.

  2. Nonlinear reaction coordinate analysis in the reweighted path ensemble

    NARCIS (Netherlands)

    Lechner, W.; Rogal, J.; Juraszek, J.; Ensing, B.; Bolhuis, P.G.

    2010-01-01

    We present a flexible nonlinear reaction coordinate analysis method for the transition path ensemble based on the likelihood maximization approach developed by Peters and Trout [J. Chem. Phys. 125, 054108 (2006)]. By parametrizing the reaction coordinate by a string of images in a collective variable space…

  3. COMPONENTS OF TOTAL ELECTRIC ENERGY LOSSES POWER IN PQR SPATIAL COORDINATES

    Directory of Open Access Journals (Sweden)

    G.G. Zhemerov

    2016-05-01

    Purpose. To obtain relations determining the components of the total losses power within the p-q-r power theory for three-phase four-wire energy supply systems, uniquely linking four components: the lowest possible losses power, the losses power caused by reactive power, the losses power caused by pulsations of the instantaneous active power, and the losses power caused by current flowing in the neutral wire. Methodology. We applied the concepts of the p-q-r power theory, the theory of electrical circuits, and mathematical simulation in the Matlab package. Results. We obtained an exact relation that allows calculating the total losses power in a three-phase four-wire energy supply system from three components corresponding to the projections of the generalized voltage and current vectors on the p, q, r coordinate axes. Originality. For the first time, we established a mathematical relationship between the spatial representation of the instantaneous vector components and the total losses power in three-phase four-wire energy supply systems. Practical value. The proposed methodology could be used to create a measuring device for determining the current value of the components of the total losses power in three-phase systems, operating on measured instantaneous values of currents and voltages.

  4. Principal component regression analysis with SPSS.

    Science.gov (United States)

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces the indices of multicollinearity diagnostics, the basic principle of principal component regression, and the method for determining the 'best' equation. An example is used to describe how to perform principal component regression analysis with SPSS 10.0, including the calculation process of principal component regression and the operation of the linear regression, factor analysis, descriptives, compute variable, and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity; a simplified, faster, and accurate statistical analysis is achieved through principal component regression with SPSS.
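
    For readers without SPSS, an equivalent principal component regression can be assembled in Python with scikit-learn; the number of retained components and the toy collinear data below are assumptions, chosen only to show the pipeline.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        X = rng.standard_normal((100, 5))
        X[:, 4] = X[:, 0] + 0.01 * rng.standard_normal(100)  # built-in collinearity
        y = X[:, 0] - 2 * X[:, 2] + rng.standard_normal(100)

        # standardize, extract components, regress on the component scores
        pcr = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())
        pcr.fit(X, y)
        print("R^2 on training data:", pcr.score(X, y))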

  5. Gene set analysis using variance component tests

    Science.gov (United States)

    2013-01-01

    Background: Gene set analyses have become increasingly important in genomic research, as many complex diseases are jointly influenced by alterations of numerous genes. Genes often coordinate together as a functional repertoire, e.g., a biological pathway/network, and are highly correlated. However, most of the existing gene set analysis methods do not fully account for the correlation among the genes. Here we propose to tackle this important feature of a gene set to improve statistical power in gene set analyses. Results: We propose to model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects by assuming a common distribution for regression coefficients in multivariate linear regression models, and calculate the p-values using permutation and a scaled chi-square approximation. We show using simulations that type I error is protected under different choices of working covariance matrices and power is improved as the working covariance approaches the true covariance. The global test is a special case of TEGS when correlation among genes in a gene set is ignored. Using both simulation data and a published diabetes dataset, we show that our test outperforms the commonly used approaches, the global test and gene set enrichment analysis (GSEA). Conclusion: We develop a gene set analysis method (TEGS) under the multivariate regression framework, which directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and the global test, in both simulations and a diabetes microarray dataset. PMID:23806107

  6. [A theoretical analysis of coordination in the field of health care: application to coordinated care systems].

    Science.gov (United States)

    Sebai, Jihane

    2016-01-01

    Various organizational, functional or structural issues have led to a review of the foundations of the former health care system, which was based on a traditional market segmentation between general practice and hospital medicine, and between the health and social sectors, and marked by competition between the private and public sectors. The current reconfiguration of the health care system has resulted in "new" levers, explained by the development of a new organizational reconfiguration of the primary health care model. Coordinated care structures (SSC) were developed in this context, making coordination the cornerstone of relations between professionals in order to ensure global, continuous, quality health care. This article highlights the contributions of various theoretical approaches to understanding the concept of coordination in analyzing the current specificity of health care.

  7. Permutation Tests in Principal Component Analysis.

    Science.gov (United States)

    Pohlmann, John T.; Perkins, Kyle; Brutten, Shelia

    Structural changes in an English as a Second Language (ESL) 30-item reading comprehension test were examined through principal components analysis on a small sample (n=31) of students. Tests were administered on three occasions during intensive ESL training. Principal components analysis of the items was performed for each test occasion.…
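
    A sketch of one common permutation scheme for PCA, assuming items (columns) are permuted independently to destroy inter-item structure and the observed eigenvalues are compared with the permutation distribution; the simulated data below stand in for the 30-item test.

        import numpy as np

        rng = np.random.default_rng(0)
        latent = rng.standard_normal((31, 1))          # 31 students, one ability factor
        items = latent @ rng.standard_normal((1, 30)) + rng.standard_normal((31, 30))

        def eigvals(X):
            # eigenvalues of the item correlation matrix, largest first
            return np.sort(np.linalg.eigvalsh(np.corrcoef(X.T)))[::-1]

        obs = eigvals(items)
        n_perm, count = 500, np.zeros_like(obs)
        for _ in range(n_perm):
            perm = np.column_stack([rng.permutation(col) for col in items.T])
            count += eigvals(perm) >= obs              # how often chance beats observed
        p_values = (count + 1) / (n_perm + 1)
        print("p-value of first eigenvalue:", p_values[0])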

  8. Structured Functional Principal Component Analysis

    Science.gov (United States)

    Shou, Haochang; Zipunnikov, Vadim; Crainiceanu, Ciprian M.; Greven, Sonja

    2015-01-01

    Motivated by modern observational studies, we introduce a class of functional models that expand nested and crossed designs. These models account for the natural inheritance of the correlation structures from sampling designs in studies where the fundamental unit is a function or image. Inference is based on functional quadratics and their relationship with the underlying covariance structure of the latent processes. A computationally fast and scalable estimation procedure is developed for high-dimensional data. Methods are used in applications including high-frequency accelerometer data for daily activity, pitch linguistic data for phonetic analysis, and EEG data for studying electrical brain activity during sleep. PMID:25327216

  9. COPD phenotype description using principal components analysis

    DEFF Research Database (Denmark)

    Roy, Kay; Smith, Jacky; Kolsum, Umme

    2009-01-01

    BACKGROUND: Airway inflammation in COPD can be measured using biomarkers such as induced sputum and Fe(NO). This study set out to explore the heterogeneity of COPD using biomarkers of airway and systemic inflammation and pulmonary function by principal components analysis (PCA). SUBJECTS AND METHODS: In 127 COPD patients (mean FEV1 61%), pulmonary function, Fe(NO), plasma CRP and TNF-alpha, sputum differential cell counts and sputum IL8 (pg/ml) were measured. Principal components analysis as well as multivariate analysis was performed. RESULTS: PCA identified four main components (% variance …) … associations between the variables within components 1 and 2. CONCLUSION: COPD is a multi-dimensional disease. Unrelated components of disease were identified, including neutrophilic airway inflammation, which was associated with systemic inflammation, and sputum eosinophils, which were related to increased Fe…

  10. Sensitivity analysis approach to multibody systems described by natural coordinates

    Science.gov (United States)

    Li, Xiufeng; Wang, Yabin

    2014-03-01

    The classical natural coordinate modeling method, which removes the Euler angles and Euler parameters from the governing equations, is particularly suitable for the sensitivity analysis and optimization of multibody systems. However, the formulation has so many rules for choosing the generalized coordinates that it hinders the implementation of modeling automation. A first-order direct sensitivity analysis approach to multibody systems formulated with novel natural coordinates is presented. Firstly, a new selection method for natural coordinates is developed. The method introduces 12 coordinates to describe the position and orientation of a spatial object. On the basis of the proposed natural coordinates, rigid constraint conditions, the basic constraint elements as well as the initial conditions for the governing equations are derived. Considering the characteristics of the governing equations, the newly proposed generalized-α integration method is used and the corresponding algorithm flowchart is discussed. The objective function, the detailed process of first-order direct sensitivity analysis and the related solving strategy are provided on the basis of the foregoing modeling system. Finally, in order to verify the validity and accuracy of the method presented, sensitivity analyses of a planar spinner-slider mechanism and a spatial crank-slider mechanism are conducted. The test results agree well with those of the finite difference method, and the maximum absolute deviation of the results is less than 3%. The proposed approach is not only convenient for automatic modeling, but also helpful in reducing the complexity of sensitivity analysis, which provides a practical and effective way to obtain sensitivities for the optimization of multibody systems.

  11. Generalized Structured Component Analysis with Latent Interactions

    Science.gov (United States)

    Hwang, Heungsun; Ho, Moon-Ho Ringo; Lee, Jonathan

    2010-01-01

    Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling. In practice, researchers may often be interested in examining the interaction effects of latent variables. However, GSCA has been geared only for the specification and testing of the main effects of variables. Thus, an extension of GSCA…

  12. NEPR Principle Component Analysis - NOAA TIFF Image

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This GeoTIFF is a representation of seafloor topography in Northeast Puerto Rico derived from a bathymetry model with a principal component analysis (PCA). The area...

  13. Sparse Principal Component Analysis with missing observations

    CERN Document Server

    Lounici, Karim

    2012-01-01

    In this paper, we study the problem of sparse Principal Component Analysis (PCA) in the high-dimensional setting with missing observations. Our goal is to estimate the first principal component when we only have access to partial observations. Existing estimation techniques are usually derived for fully observed data sets and require a prior knowledge of the sparsity of the first principal component in order to achieve good statistical guarantees. Our contribution is threefold. First, we establish the first information-theoretic lower bound for the sparse PCA problem with missing observations. Second, we propose a simple procedure that does not require any prior knowledge on the sparsity of the unknown first principal component or any imputation of the missing observations, adapts to the unknown sparsity of the first principal component and achieves the optimal rate of estimation up to a logarithmic factor. Third, if the covariance matrix of interest admits a sparse first principal component and is in additi...

  14. Principal component analysis of symmetric fuzzy data

    NARCIS (Netherlands)

    Giordani, Paolo; Kiers, Henk A.L.

    2004-01-01

    Principal Component Analysis (PCA) is a well-known tool often used for the exploratory analysis of a numerical data set. Here an extension of classical PCA is proposed, which deals with fuzzy data (in short PCAF), where the elementary datum cannot be recognized exactly by a specific number but by a

  15. Principal Component Analysis in ECG Signal Processing

    Directory of Open Access Journals (Sweden)

    Andreas Bollmann

    2007-01-01

    Full Text Available This paper reviews the current status of principal component analysis in the area of ECG signal processing. The fundamentals of PCA are briefly described and the relationship between PCA and the Karhunen-Loève transform is explained. Aspects of PCA related to data with temporal and spatial correlations are considered, as is adaptive estimation of principal components. Several ECG applications are reviewed where PCA techniques have been successfully employed, including data compression, ST-T segment analysis for the detection of myocardial ischemia and abnormalities in ventricular repolarization, extraction of atrial fibrillatory waves for detailed characterization of atrial fibrillation, and analysis of body surface potential maps.
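
    Of the applications listed above, data compression is the simplest to illustrate: aligned heartbeats are stacked as rows and reconstructed from a few principal components. A minimal sketch with synthetic beats standing in for real ECG data; the template and noise level are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
# Synthetic "beats": a common template plus noise (placeholder for real ECG).
template = np.exp(-((t - 0.5) ** 2) / 0.002)          # crude QRS-like bump
beats = template + 0.05 * rng.standard_normal((300, t.size))

mean = beats.mean(axis=0)
U, S, Vt = np.linalg.svd(beats - mean, full_matrices=False)

k = 3                                                  # components kept
scores = (beats - mean) @ Vt[:k].T                     # compressed representation
reconstructed = scores @ Vt[:k] + mean

err = np.linalg.norm(beats - reconstructed) / np.linalg.norm(beats)
print(f"relative reconstruction error with {k} PCs: {err:.3f}")
```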

  16. Analysis of Geographic Coordinates of the Meteorological Post at Zrinjevac

    Directory of Open Access Journals (Sweden)

    Drago Špoljarić

    2016-06-01

    Full Text Available The Meteorological Post at Zrinjevac, built in 1884, is a public meteorological station where citizens and visitors can obtain information about temperature, humidity and air pressure in the centre of the town. Based on the available documentation, the paper presents an analysis of the geographic coordinates of the post, their reliability (accuracy, i.e., whether they determine the real position of the post), and who determined them and when. The coordinates analysed are those established by Ivan Stožir in 1884, the coordinates read by Guro Pila in 1890 from the new special map of the Austro-Hungarian Monarchy, the coordinates converted in 1941 by Nikolaj Abakumov from rectangular coordinates on the cadastral plan, and finally the coordinates determined by means of modern GNSS measuring systems. The changes to the form and contents of the post's show window, made during the two great restorations in 1959 and 1993 in line with its modernisation, are also described, as is the clock with the 24-hour dial. The times of sunrise and sunset in Zagreb have been checked and recalculated.

  17. Component evaluation testing and analysis algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    Hart, Darren M.; Merchant, Bion John

    2011-10-01

    The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.

  18. Independent component analysis for understanding multimedia content

    DEFF Research Database (Denmark)

    Kolenda, Thomas; Hansen, Lars Kai; Larsen, Jan

    2002-01-01

    Independent component analysis of combined text and image data from Web pages has potential for search and retrieval applications by providing more meaningful and context dependent content. It is demonstrated that ICA of combined text and image features has a synergistic effect, i.e., the retrieval...... classification rates increase if based on multimedia components relative to single media analysis. For this purpose a simple probabilistic supervised classifier which works from unsupervised ICA features is invoked. In addition, we demonstrate the suggested framework for automatic annotation of descriptive key...

  19. Look Together: Analyzing Gaze Coordination with Epistemic Network Analysis

    Directory of Open Access Journals (Sweden)

    Sean eAndrist

    2015-07-01

    Full Text Available When conversing and collaborating in everyday situations, people naturally and interactively align their behaviors with each other across various communication channels, including speech, gesture, posture, and gaze. Having access to a partner's referential gaze behavior has been shown to be particularly important in achieving collaborative outcomes, but the process in which people's gaze behaviors unfold over the course of an interaction and become tightly coordinated is not well understood. In this paper, we present work to develop a deeper and more nuanced understanding of coordinated referential gaze in collaborating dyads. We recruited 13 dyads to participate in a collaborative sandwich-making task and used dual mobile eye tracking to synchronously record each participant's gaze behavior. We used a relatively new analysis technique—epistemic network analysis—to jointly model the gaze behaviors of both conversational participants. In this analysis, network nodes represent gaze targets for each participant, and edge strengths convey the likelihood of simultaneous gaze to the connected target nodes during a given time-slice. We divided collaborative task sequences into discrete phases to examine how the networks of shared gaze evolved over longer time windows. We conducted three separate analyses of the data to reveal (1) properties and patterns of how gaze coordination unfolds throughout an interaction sequence, (2) optimal time lags of gaze alignment within a dyad at different phases of the interaction, and (3) differences in gaze coordination patterns for interaction sequences that lead to breakdowns and repairs. In addition to contributing to the growing body of knowledge on the coordination of gaze behaviors in joint activities, this work has implications for the design of future technologies that engage in situated interactions with human users.
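
    The edge definition used in this analysis, the likelihood of simultaneous gaze to a pair of targets within a time slice, is easy to compute once each participant's gaze stream is discretized. A small sketch with invented target labels loosely inspired by the sandwich-making task; the slice length and labels are illustrative.

```python
from collections import Counter

# One gaze target per time slice for each participant (illustrative labels).
gaze_a = ["bread", "bread", "knife", "partner", "bread", "plate"]
gaze_b = ["bread", "knife", "knife", "partner", "plate", "plate"]

# Edge strength: fraction of time slices in which participant A looks at
# target u while participant B simultaneously looks at target v.
pairs = Counter(zip(gaze_a, gaze_b))
n = len(gaze_a)
for (u, v), c in sorted(pairs.items(), key=lambda kv: -kv[1]):
    print(f"A:{u:8s} -- B:{v:8s}  weight {c / n:.2f}")
```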

  20. Common Data Format (CDF) and Coordinated Data Analysis Web (CDAWeb)

    Science.gov (United States)

    Candey, Robert M.

    2010-01-01

    The Coordinated Data Analysis Web (CDAWeb) data browsing system provides plotting, listing and open access via FTP, HTTP, and web services (REST, SOAP, OPeNDAP) for data from most NASA Heliophysics missions and is heavily used by the community. Combining data from many instruments and missions enables broad research analysis and correlation and coordination with other experiments and missions. Crucial to its effectiveness is the use of a standard self-describing data format, in this case, the Common Data Format (CDF), also developed at the Space Physics Data Facility, and the use of metadata standards (easily edited with SKTeditor). CDAWeb is based on a set of IDL routines, CDAWlib. The CDF project also maintains software and services for translating between many standard formats (CDF, netCDF, HDF, FITS, XML).

  2. Stochastic convex sparse principal component analysis.

    Science.gov (United States)

    Baytas, Inci M; Lin, Kaixiang; Wang, Fei; Jain, Anil K; Zhou, Jiayu

    2016-12-01

    Principal component analysis (PCA) is a dimensionality reduction and data analysis tool commonly used in many areas. The main idea of PCA is to represent high-dimensional data with a few representative components that capture most of the variance present in the data. However, traditional PCA has an obvious disadvantage when applied to data where interpretability is important. In applications where the features have physical meaning, we lose the ability to interpret the principal components extracted by conventional PCA because each principal component is a linear combination of all the original features. For this reason, sparse PCA has been proposed to improve the interpretability of traditional PCA by introducing sparsity to the loading vectors of principal components. Sparse PCA can be formulated as an ℓ1-regularized optimization problem, which can be solved by proximal gradient methods. However, these methods do not scale well because computation of the exact gradient is generally required at each iteration. The stochastic gradient framework addresses this challenge by computing an expected gradient at each iteration. Nevertheless, stochastic approaches typically have low convergence rates due to high variance. In this paper, we propose a convex sparse principal component analysis (Cvx-SPCA), which leverages a proximal variance-reduced stochastic scheme to achieve a geometric convergence rate. We further show that the convergence analysis can be significantly simplified by using a weak condition that allows a broader class of objectives. The efficiency and effectiveness of the proposed method are demonstrated on a large-scale electronic medical record cohort.
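
    In its simplest deterministic form, the proximal machinery mentioned above reduces to alternating a gradient step on the variance objective with soft-thresholding. The sketch below is that plain variant (not stochastic, not variance-reduced, and not the Cvx-SPCA algorithm itself); the step size, penalty, and data are illustrative.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_pc(X, lam=2.0, step=0.01, iters=500):
    """Leading sparse principal component via proximal gradient ascent on
    v' S v - lam * ||v||_1 with renormalization (a simple heuristic)."""
    S = np.cov(X, rowvar=False)
    v = np.linalg.eigh(S)[1][:, -1]          # warm start: dense leading PC
    for _ in range(iters):
        v = soft_threshold(v + step * (S @ v), step * lam)
        nrm = np.linalg.norm(v)
        if nrm == 0:
            break
        v /= nrm
    return v

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 20))
X[:, :3] += 3 * rng.standard_normal((500, 1))   # shared variance on 3 features
print(np.round(sparse_pc(X), 2))                 # mass concentrates on first 3
```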

  3. Principal component analysis implementation in Java

    Science.gov (United States)

    Wójtowicz, Sebastian; Belka, Radosław; Sławiński, Tomasz; Parian, Mahnaz

    2015-09-01

    In this paper we show how the PCA (Principal Component Analysis) method can be implemented using the Java programming language. We consider using the PCA algorithm primarily for analysing data obtained from Raman spectroscopy measurements, though other applications of the developed software should also be possible. Our goal is to create a general-purpose PCA application, ready to run on every platform supported by Java.

  4. Independent multiresolution component analysis and matching pursuit

    NARCIS (Netherlands)

    E. Capobianco (Enrico)

    2001-01-01

    textabstractWe show that decomposing a class of signals with overcomplete dictionaries of functions and combining multiresolution and independent component analysis allow for feature detection in complex non-stationary high frequency time series. Computational learning techniques are then designed

  5. Principal component analysis of phenolic acid spectra

    Science.gov (United States)

    Phenolic acids are common plant metabolites that exhibit bioactive properties and have applications in functional food and animal feed formulations. The ultraviolet (UV) and infrared (IR) spectra of four closely related phenolic acid structures were evaluated by principal component analysis (PCA) to...

  6. Advanced Placement: Model Policy Components. Policy Analysis

    Science.gov (United States)

    Zinth, Jennifer

    2016-01-01

    Advanced Placement (AP), launched in 1955 by the College Board as a program to offer gifted high school students the opportunity to complete entry-level college coursework, has since expanded to encourage a broader array of students to tackle challenging content. This Education Commission of the State's Policy Analysis identifies key components of…

  7. Boosting Principal Component Analysis by Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Divya Somvanshi

    2010-07-01

    Full Text Available This paper presents a new method of feature extraction that combines principal component analysis and a genetic algorithm. The use of multiple pre-processors in combination with principal component analysis generates alternate feature spaces for data representation. The present method fuses these multiple spaces to create feature vectors of higher dimensionality. The fused feature vectors are given a chromosome representation by taking feature components to be genes, and these feature vectors are then allowed to undergo genetic evolution individually. For the genetic algorithm, an initial population is created by calculating a probability distance matrix and applying a probability distance metric such that all genes lying farther than a defined threshold are set to zero. The genetic evolution of the fused feature vector brings out the most significant feature components (genes) as survivors. A measure of significance is adopted on the basis of the frequency of occurrence of the surviving genes in the current population. Finally, the feature vector is obtained by weighting the original feature components in proportion to their significance. The present algorithm is validated in combination with a neural network classifier based on the error backpropagation algorithm, and by analysing a number of benchmark datasets available from open sources. Defence Science Journal, 2010, 60(4), pp. 392-398, DOI: http://dx.doi.org/10.14429/dsj.60.495

  8. Probabilistic Principal Component Analysis for Metabolomic Data.

    LENUS (Irish Health Repository)

    Nyamundanda, Gift

    2010-11-23

    Abstract. Background: Data from metabolomic studies are typically complex and high-dimensional. Principal component analysis (PCA) is currently the most widely used statistical technique for analyzing metabolomic data. However, PCA is limited by the fact that it is not based on a statistical model. Results: Here, probabilistic principal component analysis (PPCA), which addresses some of the limitations of PCA, is reviewed and extended. A novel extension of PPCA, called probabilistic principal component and covariates analysis (PPCCA), is introduced, which provides a flexible approach to jointly model metabolomic data and additional covariate information. The use of a mixture of PPCA models for discovering the number of inherent groups in metabolomic data is demonstrated. The jackknife technique is employed to construct confidence intervals for estimated model parameters throughout. The optimal number of principal components is determined through the use of the Bayesian Information Criterion model selection tool, which is modified to address the high dimensionality of the data. Conclusions: The methods presented are illustrated through an application to metabolomic data sets. Jointly modeling metabolomic data and covariates was successfully achieved and has the potential to provide deeper insight into the underlying data structure. Examination of confidence intervals for the model parameters, such as loadings, allows for principled and clear interpretation of the underlying data structure. A software package called MetabolAnalyze, freely available through the R statistical software, has been developed to facilitate implementation of the presented methods in the metabolomics field.
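
    For the base PPCA model discussed above (before the covariate extension), the maximum-likelihood solution is available in closed form via Tipping and Bishop's classic result. A minimal numpy sketch of that baseline, not of the paper's PPCCA extension or its MetabolAnalyze package; the simulated data are placeholders.

```python
import numpy as np

def ppca(X, q):
    """Closed-form maximum-likelihood PPCA (Tipping & Bishop):
    x = W z + mu + noise, with isotropic noise N(0, sigma2 * I)."""
    n, d = X.shape
    mu = X.mean(axis=0)
    S = np.cov(X - mu, rowvar=False)
    evals, evecs = np.linalg.eigh(S)
    evals, evecs = evals[::-1], evecs[:, ::-1]       # descending order
    sigma2 = evals[q:].mean()                        # ML noise variance
    W = evecs[:, :q] * np.sqrt(evals[:q] - sigma2)   # ML loading matrix
    return W, mu, sigma2

rng = np.random.default_rng(2)
Z = rng.standard_normal((400, 2))                    # latent scores
X = Z @ rng.standard_normal((2, 10)) + 0.1 * rng.standard_normal((400, 10))
W, mu, sigma2 = ppca(X, q=2)
print("estimated noise variance:", round(sigma2, 4))  # should be near 0.01
```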

  9. Impact of Inter- and Intra-Regional Coordination in Markets With a Large Renewable Component

    DEFF Research Database (Denmark)

    Delikaraoglou, Stefanos; Morales González, Juan Miguel; Pinson, Pierre

    2016-01-01

    The establishment of the single European day-ahead market has accomplished a crucial step towards the spatial integration of the European power system. However, this new arrangement does not consider any intra-regional coordination of day-ahead and balancing markets and thus may become counterproductive or inefficient under uncertain supply, e.g., from weather-driven renewable power generation. In the absence of a specific target model for the common balancing market in Europe, we introduce a framework to compare different coordination schemes and market organizations. The proposed models are formulated as stochastic equilibrium problems and compared against an optimal market setup. The simulation results reveal significant efficiency loss in case of partial coordination and diversity of market structure among regional power systems.

  10. Analysis of Coordinated Motions of Humanoid Robot Fingers Using Interphalangeal Joint Coordination

    Directory of Open Access Journals (Sweden)

    Byoung-Ho Kim

    2014-04-01

    Full Text Available In this study, we analyse the coordinated motions of humanoid robot fingers using interphalangeal joint coordination. For this purpose, four humanoid robot fingers of different sizes have been considered. A biomimetic interphalangeal joint coordination (IJC) formulation based on the grasp configuration of human fingers is presented for humanoid robot fingers. The usefulness of the specified IJC formulation for human-like finger motion has been verified through comparative demonstrations. As a result, proper coordination of humanoid robot fingertips can be achieved by applying our IJC formulation. The IJC formulation can also be used in the design of humanoid robot fingers.
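
    The paper's own IJC formulation is not reproduced here. As a stand-in, the sketch below uses the frequently cited approximation in which the distal interphalangeal joint flexes at roughly two-thirds of the proximal joint angle, with arbitrary link lengths, to show how a single commanded angle can drive a coordinated fingertip trajectory.

```python
import numpy as np

def fingertip(theta_mcp, links=(0.05, 0.03, 0.02), ratio=2 / 3):
    """Planar fingertip position when the PIP joint follows the MCP angle
    one-to-one and the DIP joint is coupled at a fixed ratio.
    The 1:1 and 2/3 couplings and the link lengths are illustrative only."""
    theta_pip = theta_mcp            # assumed MCP-PIP coordination
    theta_dip = ratio * theta_pip    # interphalangeal coupling
    angles = np.cumsum([theta_mcp, theta_pip, theta_dip])  # absolute link angles
    x = sum(l * np.cos(a) for l, a in zip(links, angles))
    y = sum(l * np.sin(a) for l, a in zip(links, angles))
    return x, y

for deg in (10, 30, 60):
    x, y = fingertip(np.radians(deg))
    print(f"MCP {deg:2d} deg -> fingertip ({x:.4f}, {y:.4f}) m")
```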

  11. Principal components analysis of Jupiter VIMS spectra

    Science.gov (United States)

    Bellucci, G.; Formisano, V.; D'Aversa, E.; Brown, R.H.; Baines, K.H.; Bibring, J.-P.; Buratti, B.J.; Capaccioni, F.; Cerroni, P.; Clark, R.N.; Coradini, A.; Cruikshank, D.P.; Drossart, P.; Jaumann, R.; Langevin, Y.; Matson, D.L.; McCord, T.B.; Mennella, V.; Nelson, R.M.; Nicholson, P.D.; Sicardy, B.; Sotin, Christophe; Chamberlain, M.C.; Hansen, G.; Hibbits, K.; Showalter, M.; Filacchione, G.

    2004-01-01

    During the Cassini Jupiter flyby in December 2000, the Visual and Infrared Mapping Spectrometer (VIMS) instrument took several image cubes of Jupiter at different phase angles and distances. We have analysed the spectral images acquired by the VIMS visual channel by means of a principal component analysis (PCA) technique. The original data set consists of 96 spectral images in the 0.35-1.05 μm wavelength range. The products of the analysis are new PC bands, which contain all the spectral variance of the original data. These new components have been used to produce a map of Jupiter made of seven coherent spectral classes. The map confirms previously published work on the Great Red Spot based on NIMS data. Some other new findings, presently under investigation, are presented. © 2004 Published by Elsevier Ltd on behalf of COSPAR.

  12. Principal component analysis for authorship attribution

    Directory of Open Access Journals (Sweden)

    Amir Jamak

    2012-01-01

    Full Text Available Background: To recognize the authors of texts by the use of statistical tools, one first needs to decide which features to use as author characteristics, and then extract these features from the texts. The features extracted from texts are mostly counts of so-called function words. Objectives: The extracted data are processed further and compressed to a smaller number of features in such a way that the compressed data still retain their discriminative power; the feature space then has lower dimensionality than the text itself. Methods/Approach: In this paper, the data collected by counting words and characters in around a thousand paragraphs of each sample book underwent a principal component analysis performed using neural networks. Once the analysis was complete, the first of the principal components was used to distinguish the books authored by a certain author. Results: The achieved results show that every author leaves a unique signature in written text that can be discovered by analyzing counts of short words per paragraph. Conclusions: In this article we have demonstrated that authorship can be traced by applying principal component analysis to counts of short words per paragraph. The methodology could be used for other purposes, such as fraud detection in auditing.
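
    The pipeline described above, counting function words per paragraph and projecting onto the first principal component, fits in a few lines. A toy sketch: the word list and paragraphs are invented, and plain SVD-based PCA stands in for the paper's neural-network PCA.

```python
import numpy as np

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "a", "that", "is"]

def features(paragraphs):
    """Count function words per paragraph (rows = paragraphs)."""
    rows = []
    for p in paragraphs:
        words = p.lower().split()
        rows.append([words.count(w) for w in FUNCTION_WORDS])
    return np.array(rows, dtype=float)

# Placeholder paragraphs standing in for two authors' books.
author1 = ["the cat sat on the mat and the dog slept in the sun"] * 5
author2 = ["a storm is coming to a town that is quiet and still"] * 5
X = features(author1 + author2)

Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]                  # first principal component scores
print(np.round(pc1, 2))           # the two "authors" separate along PC1
```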

  13. ANOVA-principal component analysis and ANOVA-simultaneous component analysis: a comparison.

    NARCIS (Netherlands)

    Zwanenburg, G.; Hoefsloot, H.C.J.; Westerhuis, J.A.; Jansen, J.J.; Smilde, A.K.

    2011-01-01

    ANOVA-simultaneous component analysis (ASCA) is a recently developed tool to analyze multivariate data. In this paper, we enhance the explorative capability of ASCA by introducing a projection of the observations on the principal component subspace to visualize the variation among the measurements.

  14. Accelerated FEM Analysis for Critical Engine Components

    Directory of Open Access Journals (Sweden)

    Leonardo FRIZZIERO

    2014-10-01

    Full Text Available This paper introduces a method to simplify a nonlinear problem so that linear finite element analysis can be used. This approach reduces calculation time by two orders of magnitude, making it possible to optimize the geometry of components even without supercomputers. In this paper the method is applied to a very critical component: the aluminium alloy piston of a modern common rail diesel engine. The method consists of subdividing the component, in this case the piston, into several volumes that have an approximately constant temperature. These volumes are then assembled through congruence constraints, and a proper material is assigned to each volume. It is assumed that material behaviour depends on average temperature, load magnitude, and load gradient. This assumption is valid since temperatures vary slowly compared to pressure (load); in fact, pressures propagate at the speed of sound. The method is validated by direct comparison with a nonlinear simulation of the same component, the piston, taken as an example. In general, experimental tests have confirmed the cost-effectiveness of this approach.

  15. Multilevel sparse functional principal component analysis.

    Science.gov (United States)

    Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S

    2014-01-29

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations the proposed method is able to discover dominating modes of variations and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.

  16. Real-Time Principal-Component Analysis

    Science.gov (United States)

    Duong, Vu; Duong, Tuan

    2005-01-01

    A recently written computer program implements dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN), which was described in Method of Real-Time Principal-Component Analysis (NPO-40034) NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 59. To recapitulate: DOGEDYN is a method of sequential principal-component analysis (PCA) suitable for such applications as data compression and extraction of features from sets of data. In DOGEDYN, input data are represented as a sequence of vectors acquired at sampling times. The learning algorithm in DOGEDYN involves sequential extraction of principal vectors by means of a gradient descent in which only the dominant element is used at each iteration. Each iteration includes updating of elements of a weight matrix by amounts proportional to a dynamic initial learning rate chosen to increase the rate of convergence by compensating for the energy lost through the previous extraction of principal components. In comparison with a prior method of gradient-descent-based sequential PCA, DOGEDYN involves less computation and offers a greater rate of learning convergence. The sequential DOGEDYN computations require less memory than would parallel computations for the same purpose. The DOGEDYN software can be executed on a personal computer.
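
    The sequential-extraction pattern described above (extract one principal vector by gradient descent, then account for the variance already explained before extracting the next) can be sketched generically. This is not the DOGEDYN algorithm itself; its dominant-element update and dynamic initial learning rate are not reproduced here, only the underlying Oja-style sequential PCA with deflation.

```python
import numpy as np

def sequential_pca(X, n_components=2, rate=0.01, epochs=30):
    """Extract principal vectors one at a time with a Hebbian (Oja-style)
    gradient update, deflating the data after each extraction."""
    X = X - X.mean(axis=0)
    rng = np.random.default_rng(0)
    components = []
    for _ in range(n_components):
        w = rng.standard_normal(X.shape[1])
        w /= np.linalg.norm(w)
        for _ in range(epochs):
            for x in X:                        # one input vector per sampling time
                y = w @ x
                w += rate * y * (x - y * w)    # Oja's learning rule
            w /= np.linalg.norm(w)             # keep the weight vector unit length
        components.append(w.copy())
        X = X - np.outer(X @ w, w)             # deflate the extracted component
    return np.array(components)

rng = np.random.default_rng(3)
theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
data = (rng.standard_normal((300, 2)) * np.array([3.0, 1.0])) @ R.T
print(np.round(sequential_pca(data), 3))       # rows approximate the PC directions
```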

  17. Scaling in ANOVA-simultaneous component analysis.

    Science.gov (United States)

    Timmerman, Marieke E; Hoefsloot, Huub C J; Smilde, Age K; Ceulemans, Eva

    In omics research often high-dimensional data is collected according to an experimental design. Typically, the manipulations involved yield differential effects on subsets of variables. An effective approach to identify those effects is ANOVA-simultaneous component analysis (ASCA), which combines analysis of variance with principal component analysis. So far, pre-treatment in ASCA received hardly any attention, whereas its effects can be huge. In this paper, we describe various strategies for scaling, and identify a rational approach. We present the approaches in matrix algebra terms and illustrate them with an insightful simulated example. We show that scaling directly influences which data aspects are stressed in the analysis, and hence become apparent in the solution. Therefore, the cornerstone for proper scaling is to use a scaling factor that is free from the effect of interest. This implies that proper scaling depends on the effect(s) of interest, and that different types of scaling may be proper for the different effect matrices. We illustrate that different scaling approaches can greatly affect the ASCA interpretation with a real-life example from nutritional research. The principle that scaling factors should be free from the effect of interest generalizes to other statistical methods that involve scaling, as classification methods.

  18. Practical Issues in Component Aging Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Dana L. Kelly; Andrei Rodionov; Jens Uwe-Klugel

    2008-09-01

    This paper examines practical issues in the statistical analysis of component aging data. These issues center on the stochastic process chosen to model component failures. The two stochastic processes examined are repair same as new, leading to a renewal process, and repair same as old, leading to a nonhomogeneous Poisson process. Under the first assumption, times between failures can be treated as statistically independent observations from a stationary process. The common distribution of the times between failures is called the renewal distribution. Under the second assumption, the times between failures will not be independently and identically distributed, and one cannot simply fit a renewal distribution to the cumulative failure times or the times between failures. The paper illustrates how the assumption made regarding the repair process is crucial to the analysis. Besides the choice of stochastic process, other issues that are discussed include qualitative graphical analysis and simple nonparametric hypothesis tests to help judge which process appears more appropriate. Numerical examples are presented to illustrate the issues discussed in the paper.
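
    One simple trend test of the kind mentioned above is the Laplace test: under a stationary (renewal-like) process the statistic is approximately standard normal, while a strongly positive value suggests an increasing failure rate (aging). A sketch with invented failure times; this is a generic test, not necessarily the one used in the paper.

```python
import math

def laplace_trend(failure_times, observation_end):
    """Laplace test statistic for trend in a point process on (0, T].
    Near 0: consistent with a stationary process.
    Strongly positive: rate increasing (aging); negative: improving."""
    n = len(failure_times)
    t_bar = sum(failure_times) / n
    return (t_bar - observation_end / 2) / (
        observation_end * math.sqrt(1 / (12 * n)))

# Hypothetical component failure times (hours) over a 1000-hour window.
times = [100, 340, 520, 660, 770, 850, 910, 960]
u = laplace_trend(times, 1000)
print(f"U = {u:.2f}")   # compare against standard normal quantiles
```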

  19. Independent Component Analysis in Multimedia Modeling

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai; Kolenda, Thomas

    2003-01-01

    Modeling of multimedia and multimodal data becomes increasingly important with the digitalization of the world. The objective of this paper is to demonstrate the potential of independent component analysis and blind sources separation methods for modeling and understanding of multimedia data, which...... largely refers to text, images/video, audio and combinations of such data. We review a number of applications within single and combined media with the hope that this might provide inspiration for further research in this area. Finally, we provide a detailed presentation of our own recent work on modeling...... combined text/image data for the purpose of cross-media retrieval....

  1. Radar fall detection using principal component analysis

    Science.gov (United States)

    Jokanovic, Branka; Amin, Moeness; Ahmad, Fauzia; Boashash, Boualem

    2016-05-01

    Falls are a major cause of fatal and nonfatal injuries in people aged 65 years and older. Radar has the potential to become one of the leading technologies for fall detection, thereby enabling the elderly to live independently. Existing techniques for fall detection using radar are based on manual feature extraction and require significant parameter tuning in order to provide successful detections. In this paper, we employ principal component analysis for fall detection, wherein eigenimages of observed motions are employed for classification. Using real data, we demonstrate that the PCA-based technique provides a performance improvement over conventional feature extraction methods.
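
    The eigenimage idea above can be sketched directly: flatten each time-frequency image into a row, learn a PCA basis, and classify new motions by distance to class centroids in the reduced space. Everything below (image size, class signatures, noise, nearest-centroid rule) is an invented stand-in for the paper's real radar data and classifier.

```python
import numpy as np

rng = np.random.default_rng(4)

def make_images(n, signature):
    """Synthetic 16x16 'time-frequency' images: class signature plus noise
    (placeholders for real radar spectrograms)."""
    return signature.ravel() + 0.5 * rng.standard_normal((n, 256))

fall_sig = np.outer(np.hanning(16), np.linspace(1, 0, 16))
walk_sig = np.outer(np.linspace(0, 1, 16), np.hanning(16))

X_train = np.vstack([make_images(40, fall_sig), make_images(40, walk_sig)])
labels = np.array([1] * 40 + [0] * 40)

mean = X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
project = lambda X: (X - mean) @ Vt[:5].T          # eigenimage coordinates

centroids = {c: project(X_train[labels == c]).mean(axis=0) for c in (0, 1)}

def classify(img):
    z = project(img[None, :])[0]
    return min(centroids, key=lambda c: np.linalg.norm(z - centroids[c]))

test = make_images(1, fall_sig)[0]
print("predicted class (1 = fall):", classify(test))
```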

  2. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from... might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.

  3. Improvement of Binary Analysis Components in Automated Malware Analysis Framework

    Science.gov (United States)

    2017-02-21

    AFRL-AFOSR-JP-TR-2017-0018, Improvement of Binary Analysis Components in Automated Malware Analysis Framework. Final report (26 May 2015 to 25 Nov 2016), grant FA2386-15-1-4068, Keiji Takeda, Keio University.

  4. Nonlinear principal component analysis and its applications

    CERN Document Server

    Mori, Yuichi; Makino, Naomichi

    2016-01-01

    This book expounds the principle and related applications of nonlinear principal component analysis (PCA), a useful method for analyzing data with mixed measurement levels. In the part dealing with the principle, after a brief introduction to ordinary PCA, a PCA for categorical data (nominal and ordinal) is introduced as nonlinear PCA, in which an optimal scaling technique is used to quantify the categorical variables. The alternating least squares (ALS) algorithm is the main algorithm in the method. Multiple correspondence analysis (MCA), a special case of nonlinear PCA, is also introduced. All formulations in these methods are integrated in the same manner as matrix operations. Because data at any measurement level can be treated consistently as numerical data and ALS is a very powerful tool for estimation, the methods can be utilized in a variety of fields such as biometrics, econometrics, psychometrics, and sociology. In the applications part of the book, four applications are introduced: variable selection for mixed...

  5. Independent Component Analysis Over Galois Fields

    CERN Document Server

    Yeredor, Arie

    2010-01-01

    We consider the framework of Independent Component Analysis (ICA) for the case where the independent sources and their linear mixtures all reside in a Galois field of prime order P. Similarities and differences from the classical ICA framework (over the Real field) are explored. We show that a necessary and sufficient identifiability condition is that none of the sources should have a Uniform distribution. We also show that pairwise independence of the mixtures implies their full mutual independence (namely a non-mixing condition) in the binary (P=2) and ternary (P=3) cases, but not necessarily in higher order (P>3) cases. We propose two different iterative separation (or identification) algorithms: One is based on sequential identification of the smallest-entropy linear combinations of the mixtures, and is shown to be equivariant with respect to the mixing matrix; The other is based on sequential minimization of the pairwise mutual information measures. We provide some basic performance analysis for the bina...

  6. Principal Component Analysis In Radar Polarimetry

    Directory of Open Access Journals (Sweden)

    A. Danklmayer

    2005-01-01

    Full Text Available Second order moments of multivariate (often Gaussian) joint probability density functions can be described by the covariance or normalised correlation matrices or by the Kennaugh matrix (Kronecker matrix). In radar polarimetry the application of the covariance matrix is known as target decomposition theory, which is a special application of the extremely versatile Principal Component Analysis (PCA). The basic idea of PCA is to convert a data set consisting of correlated random variables into a new set of uncorrelated variables and to order the new variables according to the value of their variances. It is important to stress that uncorrelatedness does not necessarily mean independence, which is used in the much stronger concept of Independent Component Analysis (ICA). Both concepts agree for multivariate Gaussian distribution functions, representing the most random and least structured distribution. In this contribution, we propose a new approach to applying the concept of PCA to radar polarimetry. New uncorrelated random variables are introduced by means of linear transformations with well-determined loading coefficients. This, in turn, allows the decomposition of the original random backscattering target variables into three point targets with new random uncorrelated variables whose variances agree with the eigenvalues of the covariance matrix. This allows a new interpretation of existing decomposition theorems.

  7. Analysis of coordination between breathing and walking rhythms in humans.

    Science.gov (United States)

    Rassler, B; Kohl, J

    1996-12-01

    We investigated the coordination between breathing and walking in humans to elucidate whether the degree of coordination depends more on metabolic load or on breathing or stride frequencies, and whether coordination causes energetic economization expressed as a reduction of oxygen uptake (VO2). Eighteen healthy volunteers walked on a treadmill at three load levels realized by different velocities and slopes. We analyzed the time intervals between step onset and the onset of inspiration or expiration relative to stride duration (relative phase, phi) and computed the relative-phase histogram to assess the degree of coordination. The degree of coordination between breathing and stepping increased with increasing walking speed. Increased work load achieved by slope at constant walking speed improved coordination only slightly. No significant VO2 reduction due to coordination was found; VO2 was more strongly related to ventilation variations occurring during coordination. The sympathetic tone, reflected by the spectral power of heart rate variability, was also not reduced during coordination. We conclude that during walking the degree of coordination increases with increasing stride frequency and that coordination does not necessarily cause energetic economization.
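
    The relative-phase measure used above is straightforward to compute from event times: each breath onset is expressed as a fraction of the stride in which it falls, and coordination shows up as a peaked histogram of those fractions. A sketch with hypothetical onset times.

```python
import numpy as np

def relative_phase(step_onsets, breath_onsets):
    """For each breath onset, the time since the preceding step onset,
    normalized by that stride's duration (phi in [0, 1))."""
    phis = []
    for b in breath_onsets:
        i = np.searchsorted(step_onsets, b) - 1
        if 0 <= i < len(step_onsets) - 1:
            stride = step_onsets[i + 1] - step_onsets[i]
            phis.append((b - step_onsets[i]) / stride)
    return np.array(phis)

# Hypothetical onset times (s): steps ~1.1 s apart, breaths ~2.2 s apart.
rng = np.random.default_rng(5)
steps = np.arange(0, 60, 1.1)
breaths = np.arange(0.4, 60, 2.2)
breaths = breaths + 0.05 * rng.standard_normal(breaths.size)

phi = relative_phase(steps, breaths)
hist, edges = np.histogram(phi, bins=10, range=(0, 1))
print(hist)   # a peaked histogram indicates coordination; flat means none
```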

  8. Protein structure similarity from principle component correlation analysis

    Directory of Open Access Journals (Sweden)

    Chou James

    2006-01-01

    Full Text Available Abstract. Background: Owing to the rapid expansion of protein structure databases in recent years, methods of structure comparison are becoming increasingly effective and important in revealing novel information on the functional properties of proteins and their roles in the grand scheme of evolutionary biology. Currently, the structural similarity between two proteins is measured by the root-mean-square deviation (RMSD) of their best-superimposed atomic coordinates. RMSD is the golden rule for measuring structural similarity when the structures are nearly identical; it fails, however, to detect higher-order topological similarities in proteins that have evolved into different shapes. We propose new algorithms for extracting geometrical invariants of proteins that can be effectively used to identify homologous protein structures or topologies in order to quantify both close and remote structural similarities. Results: We measure structural similarity between proteins by correlating the principle components of their secondary structure interaction matrices. In our approach, the Principle Component Correlation (PCC) analysis, a symmetric interaction matrix for a protein structure is constructed with relationship parameters between secondary elements that can take the form of distance, orientation, or other relevant structural invariants. When using a distance-based construction in the presence or absence of encoded N-to-C terminal sense, there are strong correlations between the principle components of the interaction matrices of structurally or topologically similar proteins. Conclusion: The PCC method is extensively tested for protein structures that belong to the same topological class but differ significantly by the RMSD measure. The PCC analysis can also differentiate proteins having similar shapes but different topological arrangements. Additionally, we demonstrate that when using two independently defined interaction matrices, comparison of their maximum
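
    The core of the PCC idea, correlating leading eigenvectors of two interaction matrices, can be sketched compactly. Below, the interaction matrix is a plain pairwise-distance matrix over placeholder coordinates; the actual method's matrix construction, component matching, and scoring may differ.

```python
import numpy as np

def leading_components(coords, k=3):
    """Leading eigenvectors of a pairwise-distance interaction matrix."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    evals, evecs = np.linalg.eigh(d)
    return evecs[:, np.argsort(-np.abs(evals))[:k]]

def pcc_score(coords_a, coords_b, k=3):
    """Mean best absolute correlation between the principal components of
    the two matrices (eigenvector sign and order are arbitrary)."""
    A, B = leading_components(coords_a, k), leading_components(coords_b, k)
    C = np.abs(np.corrcoef(A.T, B.T)[:k, k:])
    return C.max(axis=1).mean()

rng = np.random.default_rng(6)
prot = rng.standard_normal((50, 3))            # placeholder C-alpha coordinates
rot = np.linalg.qr(rng.standard_normal((3, 3)))[0]
similar = prot @ rot + 0.05 * rng.standard_normal((50, 3))
print(round(pcc_score(prot, similar), 3))      # near 1 for similar structures
```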

  9. Coordinate regulation of the mother centriole component nlp by nek2 and plk1 protein kinases.

    Science.gov (United States)

    Rapley, Joseph; Baxter, Joanne E; Blot, Joelle; Wattam, Samantha L; Casenghi, Martina; Meraldi, Patrick; Nigg, Erich A; Fry, Andrew M

    2005-02-01

    Mitotic entry requires a major reorganization of the microtubule cytoskeleton. Nlp, a centrosomal protein that binds gamma-tubulin, is a G(2)/M target of the Plk1 protein kinase. Here, we show that human Nlp and its Xenopus homologue, X-Nlp, are also phosphorylated by the cell cycle-regulated Nek2 kinase. X-Nlp is a 213-kDa mother centriole-specific protein, implicating it in microtubule anchoring. Although constant in abundance throughout the cell cycle, it is displaced from centrosomes upon mitotic entry. Overexpression of active Nek2 or Plk1 causes premature displacement of Nlp from interphase centrosomes. Active Nek2 is also capable of phosphorylating and displacing a mutant form of Nlp that lacks Plk1 phosphorylation sites. Importantly, kinase-inactive Nek2 interferes with Plk1-induced displacement of Nlp from interphase centrosomes and displacement of endogenous Nlp from mitotic spindle poles, while active Nek2 stimulates Plk1 phosphorylation of Nlp in vitro. Unlike Plk1, Nek2 does not prevent association of Nlp with gamma-tubulin. Together, these results provide the first example of a protein involved in microtubule organization that is coordinately regulated at the G(2)/M transition by two centrosomal kinases. We also propose that phosphorylation by Nek2 may prime Nlp for phosphorylation by Plk1.

  10. Local Component Analysis for Nonparametric Bayes Classifier

    CERN Document Server

    Khademi, Mahmoud; safayani, Meharn

    2010-01-01

    The decision boundaries of the Bayes classifier are optimal because they lead to the maximum probability of correct decision. This means that if we knew the prior probabilities and the class-conditional densities, we could design a classifier giving the lowest probability of error. However, in classification based on nonparametric density estimation methods such as Parzen windows, the decision regions depend on the choice of parameters such as the window width. Moreover, these methods suffer from the curse of dimensionality of the feature space and the small-sample-size problem, which severely restricts their practical applications. In this paper, we address these problems by introducing a novel dimension reduction and classification method based on local component analysis. In this method, by adopting an iterative cross-validation algorithm, we simultaneously estimate the optimal transformation matrices (for dimension reduction) and the classifier parameters based on local information. The proposed method can classify the data with co...

  11. Face Recognition Based on Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Ali Javed

    2013-02-01

    Full Text Available The purpose of the proposed research work is to develop a computer system that can recognize a person by comparing the characteristics of the face to those of known individuals. The main focus is on frontal two-dimensional images taken in a controlled environment, i.e., with constant illumination and background. Other methods of identification and verification, such as iris or fingerprint scans, require costly high-quality equipment, whereas face recognition requires only an ordinary camera providing a 2-D frontal image of the person to be recognized. The Principal Component Analysis technique has been used in the proposed face recognition system. The purpose is to compare the results of the technique under different conditions and to find the most efficient approach for developing a facial recognition system.

  12. Principal Components Analysis In Medical Imaging

    Science.gov (United States)

    Weaver, J. B.; Huddleston, A. L.

    1986-06-01

    Principal components analysis, PCA, is basically a data reduction technique. PCA has been used in several problems in diagnostic radiology: processing radioisotope brain scans (Ref. 1), automatic alignment of radionuclide images (Ref. 2), processing MRI images (Ref. 3,4), analyzing first-pass cardiac studies (Ref. 5), correcting for attenuation in bone mineral measurements (Ref. 6), and dual-energy x-ray imaging (Ref. 6,7). This paper will progress as follows: a brief introduction to the mathematics of PCA will be followed by two brief examples of how PCA has been used in the literature. Finally, my own experience with PCA in dual-energy x-ray imaging will be given.

  13. Integrating Data Transformation in Principal Components Analysis

    KAUST Repository

    Maadooliat, Mehdi

    2015-01-02

    Principal component analysis (PCA) is a popular dimension reduction method to reduce the complexity and obtain the informative aspects of high-dimensional datasets. When the data distribution is skewed, data transformation is commonly used prior to applying PCA. Such transformation is usually obtained from previous studies, prior knowledge, or trial-and-error. In this work, we develop a model-based method that integrates data transformation in PCA and finds an appropriate data transformation using the maximum profile likelihood. Extensions of the method to handle functional data and missing values are also developed. Several numerical algorithms are provided for efficient computation. The proposed method is illustrated using simulated and real-world data examples.

  14. Principal components analysis of population admixture.

    Directory of Open Access Journals (Sweden)

    Jianzhong Ma

    Full Text Available With the availability of high-density genotype information, principal components analysis (PCA is now routinely used to detect and quantify the genetic structure of populations in both population genetics and genetic epidemiology. An important issue is how to make appropriate and correct inferences about population relationships from the results of PCA, especially when admixed individuals are included in the analysis. We extend our recently developed theoretical formulation of PCA to allow for admixed populations. Because the sampled individuals are treated as features, our generalized formulation of PCA directly relates the pattern of the scatter plot of the top eigenvectors to the admixture proportions and parameters reflecting the population relationships, and thus can provide valuable guidance on how to properly interpret the results of PCA in practice. Using our formulation, we theoretically justify the diagnostic of two-way admixture. More importantly, our theoretical investigations based on the proposed formulation yield a diagnostic of multi-way admixture. For instance, we found that admixed individuals with three parental populations are distributed inside the triangle formed by their parental populations and divide the triangle into three smaller triangles whose areas have the same proportions in the big triangle as the corresponding admixture proportions. We tested and illustrated these findings using simulated data and data from HapMap III and the Human Genome Diversity Project.

  15. Construction of a 21-Component Layered Mixture Experiment Design Using a New Mixture Coordinate-Exchange Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.; Cooley, Scott K.; Jones, Bradley

    2005-11-01

    This paper describes the solution to a unique and challenging mixture experiment design problem involving: 1) 19 and 21 components for two different parts of the design, 2) many single-component and multi-component constraints, 3) augmentation of existing data, 4) a layered design developed in stages, and 5) a no-candidate-point optimal design approach. The problem involved studying the liquidus temperature of spinel crystals as a function of nuclear waste glass composition. The statistical objective was to develop an experimental design by augmenting existing glasses with new nonradioactive and radioactive glasses chosen to cover the designated nonradioactive and radioactive experimental regions. The existing 144 glasses were expressed as 19-component nonradioactive compositions and then augmented with 40 new nonradioactive glasses. These included 8 glasses on the outer layer of the region, 27 glasses on an inner layer, 2 replicate glasses at the centroid, and one replicate each of three existing glasses. Then, the 144 + 40 = 184 glasses were expressed as 21-component radioactive compositions, and augmented with 5 radioactive glasses. A D-optimal design algorithm was used to select the new outer layer, inner layer, and radioactive glasses. Several statistical software packages can generate D-optimal experimental designs, but nearly all of them require a set of candidate points (e.g., vertices) from which to select design points. The large number of components (19 or 21) and many constraints made it impossible to generate the huge number of vertices and other typical candidate points. JMP was used to select design points without candidate points. JMP uses a coordinate-exchange algorithm modified for mixture experiments, which is discussed and illustrated in the paper.

  16. Facilitation of the PED analysis of large molecules by using global coordinates.

    Science.gov (United States)

    Jamróz, Michał H; Ostrowski, Sławomir; Dobrowolski, Jan Cz

    2015-10-01

    Global coordinates have been found to be useful in the potential energy distribution (PED) analyses of the following large molecules: [13]-acene and [33]-helicene. The global coordinate is defined based on much distanced fragments of the analysed molecule, whereas so far, the coordinates used in the analysis were based on stretchings, bendings, or torsions of the adjacent atoms. It has been shown that the PED analyses performed using the global coordinate and the classical ones can lead to exactly the same PED contributions. The global coordinates may significantly improve the facility of the analysis of the vibrational spectra of large molecules.

  17. Value-driven risk analysis of coordination models

    NARCIS (Netherlands)

    Ionita, Dan; Gordijn, Jaap; Yesuf, Ahmed Seid; Wieringa, Roel

    2016-01-01

    Coordination processes are business processes that involve independent profit-and-loss responsible business actors who collectively provide something of value to a customer. Coordination processes are meant to be profitable for the business actors that execute them. However, because business actors

  18. Spherical Coordinate Systems for Streamlining Suited Mobility Analysis

    Science.gov (United States)

    Benson, Elizabeth; Cowley, Matthew S.; Harvill. Lauren; Rajulu, Sudhakar

    2014-01-01

    When describing human motion, biomechanists generally report joint angles in terms of Euler angle rotation sequences. However, there are known limitations in using this method to describe complex motions such as the shoulder joint during a baseball pitch. Euler angle notation uses a series of three rotations about an axis where each rotation is dependent upon the preceding rotation. As such, the Euler angles need to be regarded as a set to get accurate angle information. Unfortunately, it is often difficult to visualize and understand these complex motion representations. One of our key functions is to help design engineers understand how a human will perform with new designs and all too often traditional use of Euler rotations becomes as much of a hindrance as a help. It is believed that using a spherical coordinate system will allow ABF personnel to more quickly and easily transmit important mobility data to engineers, in a format that is readily understandable and directly translatable to their design efforts. Objectives: The goal of this project is to establish new analysis and visualization techniques to aid in the examination and comprehension of complex motions. Methods: This project consisted of a series of small sub-projects, meant to validate and verify the method before it was implemented in the ABF's data analysis practices. The first stage was a proof of concept, where a mechanical test rig was built and instrumented with an inclinometer, so that its angle from horizontal was known. The test rig was tracked in 3D using an optical motion capture system, and its position and orientation were reported in both Euler and spherical reference systems. The rig was meant to simulate flexion/extension, transverse rotation and abduction/adduction of the human shoulder, but without the variability inherent in human motion. In the second phase of the project, the ABF estimated the error inherent in a spherical coordinate system, and evaluated how this error would
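
    The appeal of the spherical representation described above is that each angle is interpretable on its own, unlike a dependent Euler sequence. A minimal sketch converting a segment direction vector to azimuth and elevation; the vector and axis conventions are illustrative, not the ABF's actual definitions.

```python
import numpy as np

def to_spherical(v):
    """Azimuth and elevation (degrees) of a 3D segment direction vector.
    Unlike an Euler angle sequence, each angle stands on its own."""
    x, y, z = v / np.linalg.norm(v)
    azimuth = np.degrees(np.arctan2(y, x))     # rotation about the vertical axis
    elevation = np.degrees(np.arcsin(z))       # angle above the horizontal plane
    return azimuth, elevation

# Hypothetical upper-arm direction during a reach (assumed axis convention).
print(np.round(to_spherical(np.array([0.6, 0.3, 0.74])), 1))
```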

  19. Motion Intention Analysis-Based Coordinated Control for Amputee-Prosthesis Interaction

    Directory of Open Access Journals (Sweden)

    Fei Wang

    2010-01-01

    Full Text Available To study amputee-prosthesis (AP) interaction, a novel reconfigurable biped robot was designed and fabricated. In the homogeneous configuration, two identical artificial legs (ALs) were used to simulate the symmetrical lower limbs of a healthy person. A linear inverted pendulum model combined with the ZMP stability criterion was used to generate the gait trajectories of the ALs. To acquire interjoint coordination for healthy gait, rate gyroscopes were mounted on the CoGs of the thigh and shank of both legs. By employing principal component analysis, the measured angular velocities were processed and the motion synergy was finally obtained. Then one of the two ALs was replaced by a bionic leg (BL), and the biped robot was changed into the heterogeneous configuration to simulate the AP coupling system. To realize symmetrical stable walking, a master/slave coordinated control strategy is proposed. According to the information acquired by the gyroscopes, the BL recognizes the motion intention of the AL and reconstructs its kinematic variables based on interjoint coordination. By employing iterative learning control, gait tracking of the BL to the AL was achieved. Real-environment robot walking experiments validated the correctness and effectiveness of the proposed scheme.

  20. Study of engine noise based on independent component analysis

    Institute of Scientific and Technical Information of China (English)

    HAO Zhi-yong; JIN Yan; YANG Chen

    2007-01-01

    Independent component analysis was applied to analyze the acoustic signals from a diesel engine. First, the basic principle of independent component analysis (ICA) was reviewed. The diesel engine acoustic signal was decomposed into several independent components (ICs); the Fourier transform and the continuous wavelet transform (CWT) were then applied to analyze the independent components. Different noise sources of the diesel engine were separated based on the characteristics of the different components in the time-frequency domain.
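
    A small sketch of the decomposition step described above, using scikit-learn's FastICA (one common ICA algorithm; the paper does not specify its implementation) on two synthetic stand-ins for engine noise sources mixed onto two sensors.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 4000)
# Synthetic stand-ins for engine noise sources: periodic "combustion" ticks
# and broadband "mechanical" noise (not real engine data).
s1 = np.sign(np.sin(2 * np.pi * 25 * t))         # impulsive periodic source
s2 = rng.laplace(size=t.size)                    # heavy-tailed broadband source
S = np.c_[s1, s2]

A = np.array([[1.0, 0.6], [0.4, 1.0]])           # unknown mixing (two sensors)
X = S @ A.T

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)                 # estimated independent components
print(recovered.shape, np.round(ica.mixing_, 2)) # compare mixing_ with A
```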

  1. THEORETICAL ANALYSIS OF COORDINATES MEASUREMENT BY FLEXIBLE 3D MEASURING SYSTEM

    Institute of Scientific and Technical Information of China (English)

    ZHANG Guoyu; SUN Tianxiang; WANG Lingyun; XU Xiping

    2007-01-01

    The system mathematical model of the flexible 3D measuring system is built by theoretical analysis, and the theoretical formula for measuring space point coordinates is derived. A frog-jumping-based coordinate transform method is put forward to solve the measuring problem for large-size parts. The frog-jumping method is discussed, the coordinate transform mathematical model is built, the transformed space point coordinates are compared with the original values, and an improved method is provided. From the space point coordinate transform formula, the calculation and measuring method for large-size parts can be derived.

  2. A radiographic analysis of implant component misfit.

    LENUS (Irish Health Repository)

    Sharkey, Seamus

    2011-07-01

    Radiographs are commonly used to assess the fit of implant components, but there is no clear agreement on the amount of misfit that can be detected by this method. This study investigated the effect of gap size and the relative angle at which a radiograph was taken on the detection of component misfit. Different types of implant connections (internal or external) and radiographic modalities (film or digital) were assessed.

  3. Analysis of complications after blood components' transfusions.

    Science.gov (United States)

    Timler, Dariusz; Klepaczka, Jadwiga; Kasielska-Trojan, Anna; Bogusiak, Katarzyna

    2015-04-01

    Complications after blood component transfusions still constitute an important clinical problem and limit the liberal transfusion strategy. The aim of the study was to present the 5-year incidence of early blood transfusion complications and to assess their relation to the type of transfused blood component. 58,505 transfusions of blood components performed in the years 2006-2010 were retrospectively analyzed. Data concerning the amount of transfused blood components and the number of adverse transfusion reactions reported to the Regional Blood Donation and Treatment Center (RBDTC) were collected. Ninety-five adverse transfusion reactions were reported to the RBDTC, i.e., 0.16% of all transfusions (95/58,505): 58 after PRBC transfusions, 28 after platelet concentrate transfusions, and 9 after FFP transfusions. Febrile nonhemolytic and allergic reactions constitute 36.8% and 30.5% of all complications, respectively. Nonhemolytic and allergic reactions are the most common complications of blood component transfusion, and they are more common after platelet concentrate transfusions than after PRBC and FFP transfusions.

  4. Spectral Components Analysis of Diffuse Emission Processes

    Energy Technology Data Exchange (ETDEWEB)

    Malyshev, Dmitry; /KIPAC, Menlo Park

    2012-09-14

    We develop a novel method to separate the components of a diffuse emission process based on an association with the energy spectra. Most of the existing methods use some information about the spatial distribution of components, e.g., closeness to an external template, independence of components, etc., in order to separate them. In this paper we propose a method where one puts conditions on the spectra only. The advantages of our method are: 1) it is internal: the maps of the components are constructed as combinations of the data in different energy bins; 2) the components may be correlated with each other; 3) the method is semi-blind: in many cases, it is sufficient to assume a functional form of the spectra and determine the parameters from maximization of a likelihood function. As an example, we derive the CMB map and the foreground maps for seven years of WMAP data. In an Appendix, we present a generalization of the method, in which one can also add a number of external templates.
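
    A hedged sketch of the core idea, that component maps are linear combinations of the data in different energy bins: if the component spectra are assumed known, the maps follow from a per-pixel linear fit. The power-law spectra and synthetic maps are assumptions; the paper determines spectral parameters by maximizing a likelihood, for which plain least squares stands in here.

```python
# Minimal sketch: given assumed spectra f_c(E), recover component maps by
# a linear fit over energy bins. Not WMAP data; all shapes are illustrative.
import numpy as np

n_pix, n_bins = 500, 8
E = np.logspace(0, 1, n_bins)                  # energy bin centers (arb. units)
F = np.column_stack([E ** -2.4, E ** -1.5])    # assumed spectra of 2 components
true_maps = np.abs(np.random.randn(n_pix, 2))  # synthetic component maps
data = true_maps @ F.T + 0.01 * np.random.randn(n_pix, n_bins)

# Maps are linear combinations of the data in different energy bins:
maps_hat, *_ = np.linalg.lstsq(F, data.T, rcond=None)
print("map residual RMS:", np.sqrt(np.mean((maps_hat.T - true_maps) ** 2)))
```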

  5. Mapping ash properties using principal components analysis

    Science.gov (United States)

    Pereira, Paulo; Brevik, Eric; Cerda, Artemi; Ubeda, Xavier; Novara, Agata; Francos, Marcos; Rodrigo-Comino, Jesus; Bogunovic, Igor; Khaledian, Yones

    2017-04-01

    In post-fire environments ash has important benefits for soils, such as protection and as a source of nutrients crucial for vegetation recuperation (Jordan et al., 2016; Pereira et al., 2015a; 2016a,b). The thickness and distribution of ash are fundamental aspects of soil protection (Cerdà and Doerr, 2008; Pereira et al., 2015b), and the severity at which it was produced is important for the type and amount of elements released into the soil solution (Bodi et al., 2014). Ash is a very mobile material, and where it will be deposited is important. Until the first rainfalls it is very mobile; afterwards, it binds to the soil surface and is harder to erode. Mapping ash properties in the immediate period after a fire is complex, since the ash is constantly moving (Pereira et al., 2015b). However, it is an important task, since according to the amount and type of ash produced we can identify the degree of soil protection and the nutrients that will be dissolved. The objective of this work is to map ash properties (CaCO3, pH, and selected extractable elements) using principal component analysis (PCA) in the immediate period after the fire. Four days after the fire we established a grid in a 9x27 m area and took ash samples every 3 meters, for a total of 40 sampling points (Pereira et al., 2017). The PCA identified 5 different factors. Factor 1 had high positive loadings in electrical conductivity, calcium, and magnesium and negative loadings in aluminum and iron, while Factor 2 had high positive loadings in total phosphorous and silica. Factor 3 showed high positive loadings in sodium and potassium, Factor 4 high negative loadings in CaCO3 and pH, and Factor 5 high loadings in sodium and potassium. The experimental variograms of the extracted factors showed that the Gaussian model was the most precise for modelling Factor 1, the linear model for Factor 2, and the wave hole effect model for Factors 3, 4 and 5. The maps produced confirm the patterns observed in the experimental variograms. Factor 1 and 2

  6. Major component analysis of dynamic networks of physiologic organ interactions

    Science.gov (United States)

    Liu, Kang K. L.; Bartsch, Ronny P.; Ma, Qianli D. Y.; Ivanov, Plamen Ch

    2015-09-01

    The human organism is a complex network of interconnected organ systems, where the behavior of one system affects the dynamics of other systems. Identifying and quantifying dynamical networks of diverse physiologic systems under varied conditions is a challenge due to the complexity in the output dynamics of the individual systems and the transient and nonlinear characteristics of their coupling. We introduce a novel computational method based on the concept of time delay stability and major component analysis to investigate how organ systems interact as a network to coordinate their functions. We analyze a large database of continuously recorded multi-channel physiologic signals from healthy young subjects during night-time sleep. We identify a network of dynamic interactions between key physiologic systems in the human organism. Further, we find that each physiologic state is characterized by a distinct network structure with different relative contribution from individual organ systems to the global network dynamics. Specifically, we observe a gradual decrease in the strength of coupling of heart and respiration to the rest of the network with transition from wake to deep sleep, and in contrast, an increased relative contribution to network dynamics from chin and leg muscle tone and eye movement, demonstrating a robust association between network topology and physiologic function.

  7. Fep1d: a script for the analysis of reaction coordinates.

    Science.gov (United States)

    Banushkina, Polina V; Krivov, Sergei V

    2015-05-05

    The dynamics of complex systems with many degrees of freedom can be analyzed by projecting them onto one or a few coordinates (collective variables). The dynamics is then often described as diffusion on a free energy landscape associated with the coordinates. Fep1d is a script for the analysis of such one-dimensional coordinates. The script allows one to construct conventional and cut-based free energy profiles, to assess the optimality of a reaction coordinate, to inspect whether the dynamics projected on the coordinate is diffusive, to transform (rescale) the reaction coordinate to more convenient ones, and to compute such quantities as the mean first passage time, the transition path times, the coordinate-dependent diffusion coefficient, and so forth. Here, we describe the implemented functionality together with the underlying theoretical framework.
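
    The conventional free energy profile that such a script constructs follows from the histogram of the projected trajectory, F(x) = -kT ln P(x). Below is a minimal sketch of that formula on a synthetic double-well trajectory; it is not the Fep1d script itself.

```python
# Illustrative computation of a conventional one-dimensional free energy
# profile, F(x) = -kT ln P(x), from a trajectory projected onto a reaction
# coordinate. The double-well trajectory below is a synthetic assumption.
import numpy as np

kT = 1.0                                   # energy in units of kT
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-1, 0.3, 50_000), rng.normal(1, 0.3, 50_000)])

counts, edges = np.histogram(x, bins=100)
centers = 0.5 * (edges[:-1] + edges[1:])
p = counts / counts.sum()
mask = p > 0                               # skip empty bins to avoid log(0)
F = -kT * np.log(p[mask])
F -= F.min()                               # shift so the global minimum is zero
# (centers[mask], F) is the conventional free energy profile along x.
```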

  8. Principal component analysis of psoriasis lesions images

    DEFF Research Database (Denmark)

    Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær

    2003-01-01

    A set of RGB images of psoriasis lesions is used. By visual examination of these images, there seems to be no common pattern that could be used to find and align the lesions within and between sessions. It is expected that the principal components of the original images could be useful during future...

  9. A Component Analysis of Marriage Enrichment.

    Science.gov (United States)

    Buston, Beverley G.; And Others

    Although marriage enrichment programs have been shown to be effective for many couples, a multidimensional approach to assessment is needed in investigating these groups. The components of information and social support in successful marriage enrichment programs were compared in a completely crossed 2 x 2 factorial design with repeated measures.…

  10. Independent Component Analysis in a convoluted world

    DEFF Research Database (Denmark)

    Dyrholm, Mads

    2006-01-01

    instantaneous ICA, then select a physiologically interesting subspace, then remove the delayed temporal dependencies among the instantaneous ICA components by using convolutive ICA. By Bayesian model selection, in a real-world EEG data set, it is shown that convolutive ICA is a better model for EEG than...

  11. Coordination field analysis of rare earth complexes with triangular symmetry

    Institute of Scientific and Technical Information of China (English)

    范英芳; 潘大丰; 杨频

    1997-01-01

    The calculation of the complex matrices in odd triangular symmetry was accomplished. The configurations of the coordination unit with various triangular symmetries and different ligand numbers were discussed. On the basis of the double-sphere coordination point-charge field (DSCPCF) model, the detailed forms of the DSCPCF parameters Bmk and the expressions of the perturbation matrix elements in triangular fields (D3, D3h, D3d) were derived. Thereby, the calculation scheme of the coordination field perturbation energy of rare earth complexes with triangular symmetry was constructed. After the calculation scheme was programmed, the Stark energies of crystalline TbAl3(BO3)4 were calculated. The results were considerably close to the experimental values.

  12. An anatomically oriented breast coordinate system for mammogram analysis

    DEFF Research Database (Denmark)

    Brandt, Sami; Karemore, Gopal; Karssemeijer, Nico

    2011-01-01

    and orientations are registered and extracted without non-linearly deforming the images. We use the proposed breast coordinate transform in a cross-sectional breast cancer risk assessment study of 490 women, in which we attempt to learn breast cancer risk factors from mammograms that were taken prior to when...... between the mammograms of each woman and among the mammograms of all of the women in the study. The results of the cross-sectional study show that the classification into cancer and control groups can be improved by using the new coordinate system, compared to other systems evaluated. Comparisons were...

  13. An in-depth analysis of theoretical frameworks for the study of care coordination

    Directory of Open Access Journals (Sweden)

    Sabine Van Houdt

    2013-06-01

    Full Text Available Introduction: Complex chronic conditions often require long-term care from various healthcare professionals. Thus, maintaining quality care requires care coordination. Concepts for the study of care coordination require clarification to develop, study and evaluate coordination strategies. In 2007, the Agency for Healthcare Research and Quality defined care coordination and proposed five theoretical frameworks for exploring care coordination. This study aimed to update current theoretical frameworks and clarify key concepts related to care coordination. Methods: We performed a literature review to update existing theoretical frameworks. An in-depth analysis of these theoretical frameworks was conducted to formulate key concepts related to care coordination. Results: Our literature review found seven previously unidentified theoretical frameworks for studying care coordination. The in-depth analysis identified fourteen key concepts that the theoretical frameworks addressed. These were ‘external factors’, ‘structure’, ‘task characteristics’, ‘cultural factors’, ‘knowledge and technology’, ‘need for coordination’, ‘administrative operational processes’, ‘exchange of information’, ‘goals’, ‘roles’, ‘quality of relationship’, ‘patient outcome’, ‘team outcome’, and ‘(inter)organizational outcome’. Conclusion: These 14 interrelated key concepts provide a base to develop or choose a framework for studying care coordination. The relational coordination theory and the multi-level framework are interesting as these are the most comprehensive.

  15. Unsupervised hyperspectral image analysis using independent component analysis (ICA)

    Energy Technology Data Exchange (ETDEWEB)

    S. S. Chiang; I. W. Ginsberg

    2000-06-30

    In this paper, an ICA-based approach is proposed for hyperspectral image analysis. It can be viewed as a random version of the commonly used linear spectral mixture analysis, in which the abundance fractions in a linear mixture model are considered to be unknown independent signal sources. It does not require the full rank of the separating matrix or orthogonality as most ICA methods do. More importantly, the learning algorithm is designed based on the independency of the material abundance vector rather than the independency of the separating matrix generally used to constrain the standard ICA. As a result, the designed learning algorithm is able to converge to non-orthogonal independent components. This is particularly useful in hyperspectral image analysis since many materials extracted from a hyperspectral image may have similar spectral signatures and may not be orthogonal. The AVIRIS experiments have demonstrated that the proposed ICA provides an effective unsupervised technique for hyperspectral image classification.

  16. Vibrational spectra and normal coordinate analysis on structure of chlorambucil and thioguanine

    Indian Academy of Sciences (India)

    S Gunasekaran; S Kumaresan; R Arun Balaji; G Anand; S Seshadri

    2008-12-01

    A normal coordinate analysis of chlorambucil and thioguanine has been carried out with a set of symmetry coordinates following Wilson's F-G matrix method. The potential constants evaluated for these molecules are found to be in good agreement with literature values, thereby confirming the vibrational assignments. To check whether the chosen set of vibrational frequencies contributes maximally to the potential energy associated with the normal coordinates of the molecules, the potential energy distribution has been evaluated.

  17. Generator Coordinate Method Analysis of Xe and Ba Isotopes

    Science.gov (United States)

    Higashiyama, Koji; Yoshinaga, Naotaka; Teruya, Eri

    Nuclear structure of Xe and Ba isotopes is studied in terms of the quantum-number projected generator coordinate method (GCM). The GCM reproduces well the energy levels of high-spin states as well as low-lying states. The structure of the low-lying states is analyzed through the GCM wave functions.

  18. Incremental Tensor Principal Component Analysis for Handwritten Digit Recognition

    Directory of Open Access Journals (Sweden)

    Chang Liu

    2014-01-01

    Full Text Available To overcome the shortcomings of traditional dimensionality reduction algorithms, incremental tensor principal component analysis (ITPCA) based on an updated-SVD technique is proposed in this paper. This paper proves the relationship between PCA, 2DPCA, MPCA, and the graph embedding framework theoretically and derives the incremental learning procedures for adding a single sample and multiple samples in detail. Experiments on handwritten digit recognition have demonstrated that ITPCA achieves better recognition performance than vector-based principal component analysis (PCA), incremental principal component analysis (IPCA), and multilinear principal component analysis (MPCA) algorithms. At the same time, ITPCA also has lower time and space complexity.
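
    The incremental-updating idea can be sketched with scikit-learn's IncrementalPCA on the same handwritten-digit task; note this is the vector-based routine, not the tensor (ITPCA) algorithm of the paper, and the batch split is an arbitrary assumption.

```python
# Small sketch of incremental PCA on handwritten digits: the model is
# updated batch by batch instead of being refit on all data at once.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import IncrementalPCA

X, y = load_digits(return_X_y=True)       # 1797 samples, 64 features (8x8)
ipca = IncrementalPCA(n_components=16)
for batch in np.array_split(X, 10):       # samples arrive in batches
    ipca.partial_fit(batch)               # update without refitting
Z = ipca.transform(X)
print("retained variance:", ipca.explained_variance_ratio_.sum())
```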

  19. FUZZY PRINCIPAL COMPONENT ANALYSIS AND ITS KERNEL BASED MODEL

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Principal Component Analysis (PCA) is one of the most important feature extraction methods, and Kernel Principal Component Analysis (KPCA) is a nonlinear extension of PCA based on kernel methods. In the real world, each input sample may not be fully assigned to one class and may partially belong to other classes. Based on the theory of fuzzy sets, this paper presents Fuzzy Principal Component Analysis (FPCA) and its nonlinear extension, Kernel-based Fuzzy Principal Component Analysis (KFPCA). The experimental results indicate that the proposed algorithms perform well.

  20. Independent component analysis based on adaptive artificial bee colony

    National Research Council Canada - National Science Library

    Shi Zhang; Chao-Wei Bao; Hai-Bin Shen

    2016-01-01

    .... An independent component analysis method based on adaptive artificial bee colony algorithm is proposed in this paper, aiming at the problems of slow convergence and low computational precision...

  1. Identifying the Component Structure of Satisfaction Scales by Nonlinear Principal Components Analysis

    NARCIS (Netherlands)

    Manisera, M.; Kooij, A.J. van der; Dusseldorp, E.

    2010-01-01

    The component structure of 14 Likert-type items measuring different aspects of job satisfaction was investigated using nonlinear Principal Components Analysis (NLPCA). NLPCA allows for analyzing these items at an ordinal or interval level. The participants were 2066 workers from five types of social

  2. Columbia River Component Data Gap Analysis

    Energy Technology Data Exchange (ETDEWEB)

    L. C. Hulstrom

    2007-10-23

    This Data Gap Analysis report documents the results of a study conducted by Washington Closure Hanford (WCH) to compile and review the currently available surface water and sediment data for the Columbia River near and downstream of the Hanford Site. The study was conducted to review the adequacy of the existing surface water and sediment data set from the Columbia River, with specific reference to the use of the data in future site characterization and screening-level risk assessments.

  3. Penta-coordinated phosphorus structure analysis on kinases

    Institute of Scientific and Technical Information of China (English)

    LI Wu; MA Yuan; ZHAO Yufen

    2004-01-01

    In this paper, based on the known crystal structures of square pyramid (SP) and trigonal bipyramid (TBP) penta-coordinated phosphorus compounds containing amino acid side chains, such as amino, carboxyl, hydroxyl or thiol groups, a software program was designed to survey the P(5)-structures of phosphorylated proteins. With this software, it was found that 382 of 398 phosphorus-related kinases (96%) from the current PDB could go through a penta-coordinated phosphorus transition state or intermediate. For example, in protein 1HE8, the amino group of Lysine 16 was within 3.58 Å of the phosphorus atom, and a potential TBP structure consisting of GNP2 and Nz could be overlapped with an authentic TBP with an RMSD value of 0.71 Å.

  4. PRINCIPAL COMPONENT ANALYSIS - A POWERFUL TOOL IN COMPUTING MARKETING INFORMATION

    National Research Council Canada - National Science Library

    Cristinel Constantin

    2014-01-01

    ... that need to solve the marketing problems a company faces. The literature stresses the need to avoid the multicollinearity phenomenon in multivariate analysis and the features of Principal Component Analysis (PCA...

  5. Missing values in multi-level simultaneous component analysis

    NARCIS (Netherlands)

    Josse, Julie; Timmerman, Marieke E.; Kiers, Henk A. L.

    2013-01-01

    Component analysis of data with missing values is often performed with algorithms of iterative imputation. However, this approach is prone to overfitting problems. As an alternative, Josse et al. (2009) proposed a regularized algorithm in the framework of Principal Component Analysis (PCA). Here we

  6. How many separable sources? Model selection in independent components analysis.

    Science.gov (United States)

    Woods, Roger P; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.

  7. Principal component analysis networks and algorithms

    CERN Document Server

    Kong, Xiangyu; Duan, Zhansheng

    2017-01-01

    This book not only provides a comprehensive introduction to neural-based PCA methods in control science, but also presents many novel PCA algorithms and their extensions and generalizations, e.g., dual purpose, coupled PCA, GED, neural based SVD algorithms, etc. It also discusses in detail various analysis methods for the convergence, stabilizing, self-stabilizing property of algorithms, and introduces the deterministic discrete-time systems method to analyze the convergence of PCA/MCA algorithms. Readers should be familiar with numerical analysis and the fundamentals of statistics, such as the basics of least squares and stochastic algorithms. Although it focuses on neural networks, the book only presents their learning law, which is simply an iterative algorithm. Therefore, no a priori knowledge of neural networks is required. This book will be of interest and serve as a reference source to researchers and students in applied mathematics, statistics, engineering, and other related fields.

  8. Blind source separation dependent component analysis

    CERN Document Server

    Xiang, Yong; Yang, Zuyuan

    2015-01-01

    This book provides readers a complete and self-contained set of knowledge about dependent source separation, including the latest development in this field. The book gives an overview on blind source separation where three promising blind separation techniques that can tackle mutually correlated sources are presented. The book further focuses on the non-negativity based methods, the time-frequency analysis based methods, and the pre-coding based methods, respectively.

  9. Kernel principal component analysis for change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Morton, J.C.

    2008-01-01

    region acquired at two different time points. If change over time does not dominate the scene, the projection of the original two bands onto the second eigenvector will show change over time. In this paper a kernel version of PCA is used to carry out the analysis. Unlike ordinary PCA, kernel PCA...... with a Gaussian kernel successfully finds the change observations in a case where nonlinearities are introduced artificially....
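
    A short sketch of kernel PCA with a Gaussian (RBF) kernel in the spirit of the abstract; the two synthetic "bands" and the gamma value are assumptions, not the actual bi-temporal imagery.

```python
# Hedged sketch: kernel PCA with an RBF kernel on a toy bi-temporal data set.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
band1 = rng.uniform(0, 1, 2000)
band2 = band1 ** 2 + 0.05 * rng.standard_normal(2000)  # nonlinear "no change"
band2[:100] += 0.5                                     # a small changed region
X = np.column_stack([band1, band2])

kpca = KernelPCA(n_components=2, kernel="rbf", gamma=2.0)
Z = kpca.fit_transform(X)
# Observations scoring extremely on the second component are candidate
# change pixels, analogous to the second eigenvector in linear PCA.
```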

  10. Care coordination at a pediatric accountable care organization (ACO): A qualitative analysis.

    Science.gov (United States)

    Hafeez, Baria; Miller, Sophia; Patel, Anup D; Grinspan, Zachary M

    2017-08-01

    Care coordinators may help manage care for children with chronic illness. Their role in pediatric epilepsy care is understudied. We aimed to qualitatively describe the content of a care coordination intervention for children with epilepsy. We conducted nine semi-structured interviews and one focus group with care coordinators at a pediatric accountable care organization (ACO) in Ohio. The care coordinators used a modified version of a published care coordination checklist for children with epilepsy (Patel AD, 2014). We analyzed transcripts using thematic analysis, focusing on (1) the content of the intervention and (2) perceptions of facilitators and barriers to improved outcomes, with an emphasis on epilepsy-specific facilitators and barriers. Care coordinators interacted with children and families in multiple contexts (phone calls, physician visits, home visits); their work included relationship building (developing rapport and trust between families and the health system), communication (transmission of information between the child, family, physician, and other care providers), and service (help with housing, transportation, scheduling, liaison with community resources, etc.). Facilitators and barriers of care coordination included factors related to parents, physicians, the health system, payers, and the community. Epilepsy-specific barriers included stigma (felt and enacted) and the anxiety associated with clinical uncertainty. Epilepsy-specific facilitators included a seizure action plan, written educational materials, and an epilepsy-specific care coordination checklist. In addition to facilitators and barriers common to many care coordination programs, pediatric epilepsy care coordinators should be particularly aware of epilepsy stigma and clinical uncertainty. A care coordination checklist and epilepsy-focused educational materials written to accommodate people with low health literacy may provide additional benefit. Further research is required to understand the effect

  11. Developmental Coordination Disorder: Validation of a Qualitative Analysis Using Statistical Factor Analysis

    Directory of Open Access Journals (Sweden)

    Kathy Ahern

    2002-09-01

    Full Text Available This study investigates triangulation of the findings of a qualitative analysis by applying an exploratory factor analysis to themes identified in a phenomenological study. A questionnaire was developed from a phenomenological analysis of parents' experiences of parenting a child with Developmental Coordination Disorder (DCD). The questionnaire was administered to 114 parents of DCD children and the data were analyzed using an exploratory factor analysis. The extracted factors provided support for the validity of the original qualitative analysis, and a commentary on the validity of the process is provided. The emerging description is of the compromises that were necessary to translate qualitative themes into statistical factors, and of the ways in which the statistical analysis suggests further qualitative study.

  12. Let-7 coordinately suppresses components of the amino acid sensing pathway to repress mTORC1 and induce autophagy.

    Science.gov (United States)

    Dubinsky, Amy N; Dastidar, Somasish Ghosh; Hsu, Cynthia L; Zahra, Rabaab; Djakovic, Stevan N; Duarte, Sonia; Esau, Christine C; Spencer, Brian; Ashe, Travis D; Fischer, Kimberlee M; MacKenna, Deidre A; Sopher, Bryce L; Masliah, Eliezer; Gaasterland, Terry; Chau, B Nelson; Pereira de Almeida, Luis; Morrison, Bradley E; La Spada, Albert R

    2014-10-07

    Macroautophagy (hereafter autophagy) is the major pathway by which macromolecules and organelles are degraded. Autophagy is regulated by the mTOR signaling pathway-the focal point for integration of metabolic information, with mTORC1 playing a central role in balancing biosynthesis and catabolism. Of the various inputs to mTORC1, the amino acid sensing pathway is among the most potent. Based upon transcriptome analysis of neurons subjected to nutrient deprivation, we identified let-7 microRNA as capable of promoting neuronal autophagy. We found that let-7 activates autophagy by coordinately downregulating the amino acid sensing pathway to prevent mTORC1 activation. Let-7 induced autophagy in the brain to eliminate protein aggregates, establishing its physiological relevance for in vivo autophagy modulation. Moreover, peripheral delivery of let-7 anti-miR repressed autophagy in muscle and white fat, suggesting that let-7 autophagy regulation extends beyond CNS. Hence, let-7 plays a central role in nutrient homeostasis and proteostasis regulation in higher organisms. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. An automated system for quantitative analysis of newborns' oral-motor behavior and coordination during bottle feeding.

    Science.gov (United States)

    Tamilia, Eleonora; Formica, Domenico; Visco, Anna Maria; Scaini, Alberto; Taffoni, Fabrizio

    2015-01-01

    In this work a novel unobtrusive technology-aided system is presented and tested for the assessment of newborns' oral-motor behavior and coordination during bottle feeding. A low-cost monitoring device was designed and developed to record Suction (S) and Expression (E) pressures from a typical feeding bottle. A software system was developed to automatically process and analyze the data. A set of measures of motor control and coordination has been implemented for the specific application to the analysis of sucking behavior. Experimental data were collected with the developed system on two groups of newborns (Healthy vs. Low Birth Weight) in a clinical setting. We identified the S features most sensitive to group differences and analyzed their correlation with S/E coordination measures. Then, Principal Component Analysis (PCA) was used to explore the system's suitability to automatically identify peculiar oral behaviors. Results suggest the suitability of the proposed system to perform an objective technology-aided assessment of the newborn's oral-motor behavior and coordination during the first days of life.

  14. A polythreaded Ag(I) coordination polymer: A rare three-dimensional pseudo-polyrotaxane constructed from the same components

    Energy Technology Data Exchange (ETDEWEB)

    Im, Han Su; Lee, Eunji; Lee, Shim Sung; Kim, Tae Ho; Park, Ki Min [Research Institute of Natural Science and Dept. of Chemistry, Gyeongsang National University, Jinju (Korea, Republic of); Moon, Suk Hee [Dept. of Food and Nutrition, Kyungnam College of Information and Technology, Busan (Korea, Republic of)

    2017-01-15

    In supramolecular chemistry, many mechanically polythreaded coordination polymers, such as polyrotaxanes, based on the self-assembly of organic ligands and transition metal ions have attracted great attention over the past two decades because of their fascinating architectures as well as their potential applications in materials science. Among them, the 1D + 2D → 3D pseudo-polyrotaxane constructed by penetration of 1D coordination polymer chains into the 1D channels formed by parallel stacking of 2D porous coordination layers is quite a rare topology. Until now, only a few examples of 1D + 2D → 3D pseudo-polyrotaxanes have been reported.

  16. Principal component analysis using neural network

    Institute of Scientific and Technical Information of China (English)

    杨建刚; 孙斌强

    2002-01-01

    The authors present their analysis of the differential equation dX(t)/dt = AX(t) - X^T(t)BX(t)X(t), where A is an unsymmetrical real matrix, B is a positive definite symmetric real matrix, and X ∈ R^n, showing that the equation characterizes a class of continuous-type full-feedback artificial neural networks. They give the analytic expression of the solution, discuss its asymptotic behavior, and finally present the result showing that, in almost all cases, exactly one of the following cases is true. 1. For any initial value X0 ∈ R^n, the solution approaches the zero vector asymptotically. In this case, the real part of each eigenvalue of A is non-positive. 2. For any initial value X0 outside a proper subspace of R^n, the solution approaches a nontrivial constant vector Y(X0) asymptotically. In this case, the eigenvalue of A with maximal real part is the positive number λ = ‖Y(X0)‖²_B and Y(X0) is the corresponding eigenvector. 3. For any initial value X0 outside a proper subspace of R^n, the solution approaches a non-constant periodic function Y(X0, t) asymptotically. In this case, the eigenvalues of A with maximal real part are a pair of conjugate complex numbers, which can be computed.

  17. Coordination Frictions and Job Heterogeneity: A Discrete Time Analysis

    DEFF Research Database (Denmark)

    Kennes, John; Le Maire, Christian Daniel

    This paper develops and extends a dynamic, discrete time, job to worker matching model in which jobs are heterogeneous in equilibrium. The key assumptions of this economic environment are (i) matching is directed and (ii) coordination frictions lead to heterogeneous local labor markets. We derive a number of new theoretical results, which are essential for the empirical application of this type of model to matched employer-employee microdata. First, we offer a robust equilibrium concept in which there is a continuous dispersion of job productivities and wages. Second, we show that our model can be readily solved with continuous exogenous worker heterogeneity, where high type workers (high outside options and productivity) earn higher wages in high type jobs and are hired at least as frequently to the better job types as low type workers (low outside options and productivity). Third, we...

  18. Exploration of Shape Variation Using Localized Components Analysis

    OpenAIRE

    Alcantara, Dan A; Carmichael, Owen; Harcourt-Smith, Will; Sterner, Kirstin; Frost, Stephen R.; Dutton, Rebecca; Thompson, Paul; Delson, Eric; Amenta, Nina

    2009-01-01

    Localized Components Analysis (LoCA) is a new method for describing surface shape variation in an ensemble of objects using a linear subspace of spatially localized shape components. In contrast to earlier methods, LoCA optimizes explicitly for localized components and allows a flexible trade-off between localized and concise representations, and the formulation of locality is flexible enough to incorporate properties such as symmetry. This paper demonstrates that LoCA can provide intuitive p...

  19. Dynamic Modal Analysis of Vertical Machining Centre Components

    Directory of Open Access Journals (Sweden)

    Anayet U. Patwari

    2009-01-01

    Full Text Available The paper presents a systematic procedure and details of the use of experimental and analytical modal analysis techniques for the structural dynamic evaluation of a vertical machining centre. The main results deal with the assessment of the mode shapes of the different components of the vertical machining centre. A simplified experimental modal analysis of the different components of the milling machine was carried out. The model of the different machine tool structures is made with design software and analyzed by finite element simulation using ABAQUS software to extract the different theoretical mode shapes of the components. The model is evaluated and corrected against experimental results by modal testing of the machine components, in which the natural frequencies and the shapes of the vibration modes are analyzed. The analysis resulted in the determination of the direction of maximal compliance of a particular machine component.

  20. Tomato sorting using independent component analysis on spectral images

    NARCIS (Netherlands)

    Polder, G.; Heijden, van der G.W.A.M.; Young, I.T.

    2003-01-01

    Independent Component Analysis is one of the most widely used methods for blind source separation. In this paper we use this technique to estimate the most important compounds which play a role in the ripening of tomatoes. Spectral images of tomatoes were analyzed. Two main independent components

  1. A method for topological analysis of high nuclearity coordination clusters and its application to Mn coordination compounds.

    Science.gov (United States)

    Kostakis, George E; Blatov, Vladislav A; Proserpio, Davide M

    2012-04-21

    A novel method for the topological description of high nuclearity coordination clusters (CCs) was improved and applied to all compounds containing only manganese as a metal center, the data on which are collected in the CCDC (CCDC 5.33 Nov. 2011). Using the TOPOS program package that supports this method, we identified 539 CCs with five or more Mn centers adopting 159 topologically different graphs. In the present database all the Mn CCs are collected and illustrated in such a way that they can be searched by cluster topological symbol and nuclearity, compound name, and Refcode. The main principles of such an analysis are described herein, as well as useful applications of this method.

  2. Key components of financial-analysis education for clinical nurses.

    Science.gov (United States)

    Lim, Ji Young; Noh, Wonjung

    2015-09-01

    In this study, we identified key components of financial-analysis education for clinical nurses. We used a literature review, focus group discussions, and a content validity index survey to develop key components of financial-analysis education. First, a wide range of references were reviewed, and 55 financial-analysis education components were gathered. Second, two focus group discussions were performed; the participants were 11 nurses who had worked for more than 3 years in a hospital, and nine components were agreed upon. Third, 12 professionals, including professors, nurse executive, nurse managers, and an accountant, participated in the content validity index. Finally, six key components of financial-analysis education were selected. These key components were as follows: understanding the need for financial analysis, introduction to financial analysis, reading and implementing balance sheets, reading and implementing income statements, understanding the concepts of financial ratios, and interpretation and practice of financial ratio analysis. The results of this study will be used to develop an education program to increase financial-management competency among clinical nurses.

  3. Definition of coordinate system for three-dimensional data analysis in the foot and ankle.

    LENUS (Irish Health Repository)

    Green, Connor

    2012-02-01

    BACKGROUND: Three-dimensional data is required for advanced knowledge of foot and ankle kinematics and morphology. However, studies have been difficult to compare due to the lack of a common coordinate system. Therefore, we present a means to define a coordinate frame in the foot and ankle and its clinical application. MATERIALS AND METHODS: We carried out ten CT scans of anatomically normal feet and segmented them in a general-purpose segmentation program for grey-value images. 3D binary formatted stereolithography files were then created and imported into a shape analysis program for biomechanics, which was used to define a coordinate frame and carry out morphological analysis of the forefoot. RESULTS: The coordinate frame had axes standard deviations of 2.36, which are comparable to the axes variability of other joint coordinate systems. We showed a strong correlation between the lengths of the metatarsals within and between the columns of the foot and also among the lesser metatarsal lengths. CONCLUSION: We present a reproducible method for construction of a coordinate system for the foot and ankle with low axes variability. CLINICAL RELEVANCE: To conduct meaningful comparisons between multiple subjects the coordinate system must be constant. This system enables such comparisons and therefore will aid morphological data collection and improve preoperative planning accuracy.

  4. Engine structures analysis software: Component Specific Modeling (COSMO)

    Science.gov (United States)

    McKnight, R. L.; Maffeo, R. J.; Schwartz, S.

    1994-08-01

    A component specific modeling software program has been developed for propulsion systems. This expert program is capable of formulating the component geometry as finite element meshes for structural analysis which, in the future, can be spun off as NURB geometry for manufacturing. COSMO currently has geometry recipes for combustors, turbine blades, vanes, and disks. Component geometry recipes for nozzles, inlets, frames, shafts, and ducts are being added. COSMO uses component recipes that work through neutral files with the Technology Benefit Estimator (T/BEST) program which provides the necessary base parameters and loadings. This report contains the users manual for combustors, turbine blades, vanes, and disks.

  5. Component

    Directory of Open Access Journals (Sweden)

    Tibor Tot

    2011-01-01

    Full Text Available A unique case of metaplastic breast carcinoma with an epithelial component showing tumoral necrosis and neuroectodermal stromal component is described. The tumor grew rapidly and measured 9 cm at the time of diagnosis. No lymph node metastases were present. The disease progressed rapidly and the patient died two years after the diagnosis from a hemorrhage caused by brain metastases. The morphology and phenotype of the tumor are described in detail and the differential diagnostic options are discussed.

  6. Absolute flatness testing of skip-flat interferometry by matrix analysis in polar coordinates.

    Science.gov (United States)

    Han, Zhi-Gang; Yin, Lu; Chen, Lei; Zhu, Ri-Hong

    2016-03-20

    A new method utilizing matrix analysis in polar coordinates has been presented for absolute testing of skip-flat interferometry. The retrieval of the absolute profile mainly includes three steps: (1) transform the wavefront maps of the two cavity measurements into data in polar coordinates; (2) retrieve the profile of the reflective flat in polar coordinates by matrix analysis; and (3) transform the profile of the reflective flat back into data in Cartesian coordinates and retrieve the profile of the sample. Simulation of synthetic surface data has been provided, showing the capability of the approach to achieve an accuracy of the order of 0.01 nm RMS. The absolute profile can be retrieved by a set of closed mathematical formulas without polynomial fitting of wavefront maps or the iterative evaluation of an error function, making the new method more efficient for absolute testing.

  7. Principal component analysis of minimal excitatory postsynaptic potentials.

    Science.gov (United States)

    Astrelin, A V; Sokolov, M V; Behnisch, T; Reymann, K G; Voronin, L L

    1998-02-20

    'Minimal' excitatory postsynaptic potentials (EPSPs) are often recorded from central neurones, specifically for quantal analysis. However, the EPSPs may emerge from activation of several fibres or transmission sites, so that formal quantal analysis may give false results. Here we extended the application of principal component analysis (PCA) to minimal EPSPs. We tested a PCA algorithm and a new graphical 'alignment' procedure against both simulated data and hippocampal EPSPs. Minimal EPSPs were recorded before and up to 3.5 h following induction of long-term potentiation (LTP) in CA1 neurones. In 29 out of 45 EPSPs, two (N=22) or three (N=7) components were detected which differed in latency, rise time (Trise), or both. The detected differences ranged from 0.6 to 7.8 ms for the latency and from 1.6 to 9 ms for Trise. Different components behaved differently following LTP induction. Cases were found in which one component was potentiated immediately after tetanus whereas the other was potentiated with a delay of 15-60 min. The immediately potentiated component could decline in 1-2 h, so that the two components contributed differently into early ... reflections of synchronized quantal releases. In general, the results demonstrate the applicability of PCA to separating EPSPs into different components and its usefulness for precise analysis of synaptic transmission.

  8. Motion coordination and performance analysis of multiple vehicle systems

    Science.gov (United States)

    Sharma, Vikrant

    In this dissertation, issues related to multiple vehicle systems are studied. First, the issue of vehicular congestion is addressed and its effect on the performance of some systems studied. Motion coordination algorithms for some systems of interest are also developed. The issue of vehicular congestion is addressed by characterizing the effect of increasing the number of vehicles, in a bounded region, on the speed of the vehicles. A multiple vehicle routing problem is considered where vehicles are required to stay velocity-dependent distance away from each other to avoid physical collisions. Optimal solutions to the minimum time routing are characterized and are found to increase with the square root of the number of vehicles in the environment, for different distributions of the sources and destinations of the vehicles. The second issue addressed is that of the effect of vehicular congestion on the delay associated with data delivery in wireless networks where vehicles are used to transport data to increase the wireless capacity of the network. Tight bounds on the associated delay are derived. The next problem addressed is that of covering an arbitrary path-connected two dimensional region, using multiple unmanned aerial vehicles, in minimum time. A constant-factor optimal algorithm is presented for any given initial positions of the vehicles inside the environment. The last problem addressed is that of the deployment of an environment monitoring network of mobile sensors to improve the network lifetime and sensing quality. A distributed algorithm is presented that improves the system's performance starting from an initial deployment.

  9. Extracting functional components of neural dynamics with Independent Component Analysis and inverse Current Source Density.

    Science.gov (United States)

    Lęski, Szymon; Kublik, Ewa; Swiejkowski, Daniel A; Wróbel, Andrzej; Wójcik, Daniel K

    2010-12-01

    Local field potentials have good temporal resolution but are blurred due to the slow spatial decay of the electric field. For simultaneous recordings on regular grids one can reconstruct efficiently the current sources (CSD) using the inverse Current Source Density method (iCSD). It is possible to decompose the resultant spatiotemporal information about the current dynamics into functional components using Independent Component Analysis (ICA). We show on test data modeling recordings of evoked potentials on a grid of 4 × 5 × 7 points that meaningful results are obtained with spatial ICA decomposition of reconstructed CSD. The components obtained through decomposition of CSD are better defined and allow easier physiological interpretation than the results of similar analysis of corresponding evoked potentials in the thalamus. We show that spatiotemporal ICA decompositions can perform better for certain types of sources but it does not seem to be the case for the experimental data studied. Having found the appropriate approach to decomposing neural dynamics into functional components we use the technique to study the somatosensory evoked potentials recorded on a grid spanning a large part of the forebrain. We discuss two example components associated with the first waves of activation of the somatosensory thalamus. We show that the proposed method brings up new, more detailed information on the time and spatial location of specific activity conveyed through various parts of the somatosensory thalamus in the rat.

  10. Outliers detection in multivariate time series by independent component analysis.

    Science.gov (United States)

    Baragona, Roberto; Battaglia, Francesco

    2007-07-01

    In multivariate time series, outlying data that do not fit the common pattern may often be observed. Occurrences of outliers are unpredictable events that may severely distort the analysis of the multivariate time series. For instance, model building, seasonality assessment, and forecasting may be seriously affected by undetected outliers. The dependence structure of the multivariate time series gives rise to the well-known smearing and masking phenomena that prevent the use of most outlier identification techniques. It may be noticed, however, that a convenient way of representing multiple outliers consists of superimposing a deterministic disturbance on a Gaussian multivariate time series. Then outliers may be modeled as non-Gaussian time series components. Independent component analysis is a recently developed tool that is likely to be able to extract possible outlier patterns. In practice, independent component analysis may be used to analyze multivariate observable time series and separate regular and outlying unobservable components. In the factor models framework too, it is shown that independent component analysis is a useful tool for detection of outliers in multivariate time series. Some algorithms that perform independent component analysis are compared. It has been found that all algorithms are effective in detecting various types of outliers, such as patches, level shifts, and isolated outliers, even at the beginning or the end of the stretch of observations. Also, there is no appreciable difference in the ability of different algorithms to display the outlying observations pattern.
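
    One simple realization of the idea, assuming outliers appear as a strongly deviating independent component: run FastICA on the multivariate series and flag time points with extreme robust z-scores. The threshold, series, and mixing below are illustrative assumptions, not the specific algorithms compared in the paper.

```python
# Sketch: model outliers as a non-Gaussian component, extract components
# with ICA, and flag time points where a component deviates strongly.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
n = 1000
regular = rng.standard_normal((n, 3)).cumsum(axis=0)  # smooth common pattern
X = regular @ rng.uniform(0.5, 1.0, (3, 3))
X[400:405] += 15.0                                    # an outlying patch

ica = FastICA(n_components=3, random_state=0)
S = ica.fit_transform(X)
# Robust z-scores per component; large values mark outlying observations.
med = np.median(S, axis=0)
mad = np.median(np.abs(S - med), axis=0)
z = np.abs(S - med) / (1.4826 * mad)
outliers = np.where((z > 6).any(axis=1))[0]
print("flagged time points:", outliers)
```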

  11. Estimation of individual evoked potential components using iterative independent component analysis

    Energy Technology Data Exchange (ETDEWEB)

    Zouridakis, G; Iyer, D; Diaz, J; Patidar, U [Department of Computer Science, University of Houston, 501 Philip G Hoffman Hall, Houston, TX 77204-3010 (United States)

    2007-09-07

    Independent component analysis (ICA) has been successfully employed in the study of single-trial evoked potentials (EPs). In this paper, we present an iterative temporal ICA methodology that processes multielectrode single-trial EPs, one channel at a time, in contrast to most existing methodologies which are spatial and analyze EPs from all recording channels simultaneously. The proposed algorithm aims at enhancing individual components in an EP waveform in each single trial, and relies on a dynamic template to guide EP estimation. To quantify the performance of this method, we carried out extensive analyses with artificial EPs, using different models for EP generation, including the phase-resetting and the classical additive-signal models, and several signal-to-noise ratios and EP component latency jitters. Furthermore, to validate the technique, we employed actual recordings of the auditory N100 component obtained from normal subjects. Our results with artificial data show that the proposed procedure can provide significantly better estimates of the embedded EP signals compared to plain averaging, while with actual EP recordings, the procedure can consistently enhance individual components in single trials, in all subjects, which in turn results in enhanced average EPs. This procedure is well suited for fast analysis of very large multielectrode recordings in parallel architectures, as individual channels can be processed simultaneously on different processors. We conclude that this method can be used to study the spatiotemporal evolution of specific EP components and may have a significant impact as a clinical tool in the analysis of single-trial EPs.

  12. Functional Principal Components Analysis of Shanghai Stock Exchange 50 Index

    Directory of Open Access Journals (Sweden)

    Zhiliang Wang

    2014-01-01

    Full Text Available The main purpose of this paper is to explore the principal components of the Shanghai stock exchange 50 index by means of functional principal component analysis (FPCA). Functional data analysis (FDA) deals with random variables (or processes) with realizations in a smooth functional space. One of the most popular FDA techniques is functional principal component analysis, which was introduced for the statistical analysis of a set of financial time series from an explorative point of view. FPCA is the functional analogue of the well-known dimension reduction technique in multivariate statistical analysis, searching for linear transformations of the random vector with maximal variance. In this paper, we studied the monthly return volatility of the Shanghai stock exchange 50 index (SSE50). Using FPCA to reduce the dimension to a finite level, we extracted the most significant components of the data and some relevant statistical features of the related datasets. The calculated results show that regarding the samples as random functions is rational. Compared with ordinary principal component analysis, FPCA can handle the problem of different dimensions in the samples, and it is a convenient approach for extracting the main variance factors.
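
    For densely observed curves, FPCA reduces to diagonalizing the discretized covariance operator of the sample curves. The numpy-only sketch below illustrates this on synthetic curves; they are a stand-in for the SSE50 volatility data, and a production analysis would typically smooth the curves with a basis expansion first.

```python
# Sketch of FPCA via discretization: treat each curve as a vector on a
# common grid and take eigenfunctions of the sample covariance operator.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 60)                    # common observation grid
n = 120                                      # number of sample curves
phi1, phi2 = np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)
scores = rng.standard_normal((n, 2)) * [2.0, 0.7]
curves = scores @ np.vstack([phi1, phi2]) + 0.1 * rng.standard_normal((n, t.size))

mean_curve = curves.mean(axis=0)
C = np.cov(curves - mean_curve, rowvar=False)   # discretized covariance operator
evals, evecs = np.linalg.eigh(C)
order = np.argsort(evals)[::-1]
eigenfunctions = evecs[:, order[:2]]            # leading functional modes
print("variance explained:", evals[order[:2]] / evals.sum())
```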

  13. Research on Rural Consumer Demand in Hebei Province Based on Principal Component Analysis

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    By selecting time-series data on the factors influencing rural consumer demand in Hebei Province from 2000 to 2010, this paper uses the principal component analysis method from multivariate econometric statistical analysis, constructs the principal components of consumer demand in Hebei Province, regresses per-capita consumer spending in Hebei Province on the principal components of consumer demand to obtain a principal component regression, and then conducts quantitative and qualitative analysis of the principal components. The results show that per-capita total output value (yuan), the employment rate, and the income gap are positively correlated with rural residents' consumer demand in Hebei Province; the consumer price index, the child-rearing ratio, and the one-year interest rate are negatively correlated with rural residents' consumer demand in Hebei Province; and the elderly-support ratio and per-capita medical care spending are positively correlated with rural residents' consumer demand in Hebei Province. The corresponding countermeasures and suggestions to promote residents' consumer demand in Hebei Province are put forward as follows: develop the county economy of Hebei Province and increase rural residents' consumer demand; use industry to support agriculture and coordinate urban-rural development; and improve the rural medical care and health system and resolve the actual difficulties of the masses.
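
    The principal component regression described here can be sketched as a standardize-then-PCA-then-regress pipeline in scikit-learn; the synthetic indicator matrix and the choice of three components are placeholders for the actual Hebei time series.

```python
# Minimal sketch of principal component regression (PCR): PCA on
# standardized predictors followed by OLS on the leading components.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
X = rng.standard_normal((11, 8))          # 2000-2010: 11 years, 8 indicators
y = X[:, 0] * 3.0 - X[:, 1] + rng.standard_normal(11)  # spending proxy

pcr = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())
pcr.fit(X, y)
print("R^2 on the training years:", pcr.score(X, y))
# Regressing on components instead of raw indicators sidesteps the
# multicollinearity among the original socio-economic variables.
```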

  14. PRINCIPAL COMPONENT ANALYSIS IN APPLICATION TO OBJECT ORIENTATION

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper proposes a new method based on principal component analysis to find the direction of an object in any pose. Experiments show that this method is fast, can be applied to objects with any pixel distribution, and keeps the original properties of objects invariant. It is a new application of PCA in image analysis.
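
    A common way to realize this with PCA, sketched below under the assumption that the object is given as a set of foreground pixel coordinates: the leading eigenvector of their covariance gives the object's direction (modulo 180°). The elongated point cloud is a synthetic stand-in for a segmented object.

```python
# Sketch: object orientation from the principal axis of its pixel coordinates.
import numpy as np

rng = np.random.default_rng(5)
pts = rng.standard_normal((2000, 2)) * [5.0, 1.0]   # elongated "object"
theta = np.deg2rad(30)                              # rotate it by 30 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
pts = pts @ R.T

centered = pts - pts.mean(axis=0)
cov = np.cov(centered, rowvar=False)
evals, evecs = np.linalg.eigh(cov)
major = evecs[:, np.argmax(evals)]                  # principal axis
angle = np.degrees(np.arctan2(major[1], major[0]))
print(f"estimated orientation: {angle:.1f} degrees")  # ~30 (mod 180)
```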

  15. Sparse Principal Component Analysis in Medical Shape Modeling

    DEFF Research Database (Denmark)

    Sjöstrand, Karl; Stegmann, Mikkel Bille; Larsen, Rasmus

    2006-01-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims...

  16. Principal Component Clustering Approach to Teaching Quality Discriminant Analysis

    Science.gov (United States)

    Xian, Sidong; Xia, Haibo; Yin, Yubo; Zhai, Zhansheng; Shang, Yan

    2016-01-01

    Teaching quality is the lifeline of higher education, and many universities have made effective progress in evaluating it. In this paper, we establish a students' evaluation of teaching (SET) discriminant analysis model and algorithm based on principal component clustering analysis. Additionally, we classify the SET…

  17. Three-way component analysis : Principles and illustrative application

    NARCIS (Netherlands)

    Kiers, Henk A.L.; Van Mechelen, Iven

    2001-01-01

    Three-way component analysis techniques are designed for descriptive analysis of 3-way data, for example, when data are collected on individuals, in different settings, and on different measures. Such techniques summarize all information in a 3-way data set by summarizing, for each way of the 3-way

  18. Analysis of the influence of the aperture size on the differences of L *a *b chromatic coordinates in a spectrocolorimeter

    Science.gov (United States)

    Medina-Marquez, J.; Balderas-Mata, S. E.; Flores, Jorge L.

    2016-09-01

    This paper studies the influence of the aperture size on measurements of the L*a*b chromatic coordinates in spectrocolorimeters, in particular the Macbeth 7000A® spectrocolorimeter with a d/8° illumination/detection geometry, an instrument used by many industrial laboratories. The study gives insight into the variation of the measured chromatic coordinates over the visible spectral range for three aperture sizes: 2.5 cm (AL), 1 cm (AM), and 0.5 cm (AS). The measurements were carried out on 13 reference materials (RMs), i.e., diffusers of different hue, under the following conditions: specular component included (SCI), ultraviolet component excluded (UVex), D65 illuminant, and 2° observer. The data were analyzed and quantified with statistical tools such as analysis of variance and Mandel's parameters. This work presents the analysis of these measurements together with the methodology for quantifying the accuracy and precision of the method, i.e., its repeatability and reproducibility.

  19. Flexibility and Coordination among Acts of Visualization and Analysis in a Pattern Generalization Activity

    Science.gov (United States)

    Nilsson, Per; Juter, Kristina

    2011-01-01

    This study aims at exploring processes of flexibility and coordination among acts of visualization and analysis in students' attempt to reach a general formula for a three-dimensional pattern generalizing task. The investigation draws on a case-study analysis of two 15-year-old girls working together on a task in which they are asked to calculate…

  20. A principal component analysis of transmission spectra of wine distillates

    Science.gov (United States)

    Rogovaya, M. V.; Sinitsyn, G. V.; Khodasevich, M. A.

    2014-11-01

    A chemometric method for decomposing multidimensional data into a small-dimensional space, the principal component method, has been applied to the transmission spectra of vintage Moldovan wine distillates. A sample of 42 distillates aged from four to seven years and originating from six producers has been used to show the possibility of identifying a producer in a two-dimensional space of principal components describing 94.5% of the data-matrix dispersion. Analysis of the loadings on the first two principal components has shown that, in order to measure the optical characteristics of the samples under study using only two wavelengths, it is necessary to select 380 and 540 nm, instead of the standard 420 and 520 nm, to describe the variability of the distillates by one principal component, or 370 and 520 nm to describe the variability by two principal components.
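
    A minimal sketch of the loading inspection that motivates such wavelength choices; the spectra below are simulated stand-ins, not the Moldovan distillate data, and the wavelength grid is an assumption:

    ```python
    # Pick the wavelength with the largest |loading| on each leading PC.
    import numpy as np

    rng = np.random.default_rng(2)
    wavelengths = np.arange(350, 651)               # nm grid (assumed)
    spectra = rng.standard_normal((42, wavelengths.size)).cumsum(axis=1)

    Xc = spectra - spectra.mean(axis=0)
    # SVD gives loadings (rows of Vt) without forming the covariance matrix.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    for k in range(2):
        idx = np.argmax(np.abs(Vt[k]))
        print(f"PC{k+1}: largest |loading| at {wavelengths[idx]} nm, "
              f"variance share {s[k]**2 / (s**2).sum():.3f}")
    ```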

  1. The application of Principal Component Analysis to materials science data

    Directory of Open Access Journals (Sweden)

    Changwon Suh

    2006-01-01

    Full Text Available The relationship between apparently disparate sets of data is a critical component of interpreting materials' behavior, especially in terms of assessing the impact of the microscopic characteristics of materials on their macroscopic or engineering behavior. In this paper we demonstrate the value of principal component analysis of property data associated with high temperature superconductivity to examine the statistical impact of the materials' intrinsic characteristics on high temperature superconducting behavior

  2. Reliability Analysis of Fatigue Fracture of Wind Turbine Drivetrain Components

    DEFF Research Database (Denmark)

    Berzonskis, Arvydas; Sørensen, John Dalsgaard

    2016-01-01

    The manufacturing of casted drivetrain components, like the main shaft of a wind turbine, commonly results in many smaller defects through the volume of the component, with sizes that depend on the manufacturing method. This paper considers the effect of the initial defects present in the volume of the casted ductile iron main shaft on the reliability of the component. The probabilistic reliability analysis conducted is based on fracture mechanics models. Additionally, the utilization of the probabilistic reliability for operation and maintenance planning and quality control is discussed.

  3. Exploration of shape variation using localized components analysis.

    Science.gov (United States)

    Alcantara, Dan A; Carmichael, Owen; Harcourt-Smith, Will; Sterner, Kirstin; Frost, Stephen R; Dutton, Rebecca; Thompson, Paul; Delson, Eric; Amenta, Nina

    2009-08-01

    Localized Components Analysis (LoCA) is a new method for describing surface shape variation in an ensemble of objects using a linear subspace of spatially localized shape components. In contrast to earlier methods, LoCA optimizes explicitly for localized components and allows a flexible trade-off between localized and concise representations, and the formulation of locality is flexible enough to incorporate properties such as symmetry. This paper demonstrates that LoCA can provide intuitive presentations of shape differences associated with sex, disease state, and species in a broad range of biomedical specimens, including human brain regions and monkey crania.

  4. Principal component analysis of NEXAFS spectra for molybdenum speciation in hydrotreating catalysts; Analise por componentes principais de espectros nexafs na especiacao do molibdenio em catalisadores de hidrotratamento

    Energy Technology Data Exchange (ETDEWEB)

    Faro Junior, Arnaldo da C.; Rodrigues, Victor de O.; Eon, Jean-G., E-mail: farojr@iq.ufrj.b [Universidade Federal do Rio de Janeiro (IQ/UFRJ), RJ (Brazil). Inst. de Quimica; Rocha, Angela S. [Universidade Federal do Rio de Janeiro (COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia

    2010-07-01

    Bulk and supported molybdenum based catalysts, modified by nickel, phosphorous or tungsten were studied by NEXAFS spectroscopy at the Mo L{sub III} and L{sub II} edges. The techniques of principal component analysis (PCA) together with a linear combination analysis (LCA) allowed the detection and quantification of molybdenum atoms in two different coordination states in the oxide form of the catalysts, namely tetrahedral and octahedral coordination. (author)

  5. Independent component analysis for automatic note extraction from musical trills

    Science.gov (United States)

    Brown, Judith C.; Smaragdis, Paris

    2004-05-01

    The method of principal component analysis, which is based on second-order statistics (or linear independence), has long been used for redundancy reduction of audio data. The more recent technique of independent component analysis, enforcing much stricter statistical criteria based on higher-order statistical independence, is introduced and shown to be far superior in separating independent musical sources. This theory has been applied to piano trills and a database of trill rates was assembled from experiments with a computer-driven piano, recordings of a professional pianist, and commercially available compact disks. The method of independent component analysis has thus been shown to be an outstanding, effective means of automatically extracting interesting musical information from a sea of redundant data.
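
    A minimal sketch of the kind of higher-order separation described above, using scikit-learn's FastICA on two synthetic tones standing in for the recorded piano notes:

    ```python
    # Separate two mixed sources with ICA (recovered up to scale/order).
    import numpy as np
    from sklearn.decomposition import FastICA

    fs, dur = 8000, 1.0
    t = np.arange(int(fs * dur)) / fs
    s1 = np.sin(2 * np.pi * 440 * t)                # tone 1
    s2 = np.sign(np.sin(2 * np.pi * 466 * t))       # tone 2 (strongly non-Gaussian)
    S = np.column_stack((s1, s2))
    A = np.array([[1.0, 0.6], [0.4, 1.0]])          # unknown mixing matrix
    X = S @ A.T                                     # two-channel observation

    ica = FastICA(n_components=2, random_state=0)
    S_est = ica.fit_transform(X)                    # estimated sources
    print(S_est.shape)                              # (8000, 2)
    ```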

  6. Eliminate indeterminacies of independent component analysis for chemometrics

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    An improved method has been proposed to eliminate the indeterminacies of independent component analysis (ICA) for chemometrics. Following the arrangement used in principal component analysis (PCA), the ICA mixing matrix is used as a signal content index, and the ICA outputs are sorted and sign-corrected accordingly. After many repetitions, the independent components (ICs) are paired according to the maximum correlation coefficient, and the mean value of each IC then substitutes for the original ICs; in this way the ICA indeterminacies are eliminated. A simulation example is tested to validate this improvement. Finally, a set of experimental LC-MS data is processed without any prior knowledge or specific limitation, and the results show that the improved ICA can directly separate mixed signals in chemometrics, and that it is simpler and more reasonable than the SIMPLe-to-use Interactive Self-modeling Mixture Analysis (SIMPLISMA).
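
    A rough sketch of this idea as we read it: run ICA repeatedly, pair components across runs by maximum |correlation|, align signs, and average the paired components. The details below are our interpretation, not the authors' exact algorithm:

    ```python
    # Average ICs over repeated runs after greedy pairing and sign alignment.
    import numpy as np
    from sklearn.decomposition import FastICA

    def averaged_ics(X, n_components, n_runs=10):
        ref = FastICA(n_components=n_components, random_state=0).fit_transform(X)
        acc = [ref.copy()]
        for seed in range(1, n_runs):
            S = FastICA(n_components=n_components, random_state=seed).fit_transform(X)
            aligned = np.empty_like(ref)
            for j in range(n_components):
                corr = [np.corrcoef(ref[:, j], S[:, k])[0, 1]
                        for k in range(n_components)]
                k = int(np.argmax(np.abs(corr)))          # pair by max |correlation|
                aligned[:, j] = np.sign(corr[k]) * S[:, k]  # fix the sign flip
            acc.append(aligned)
        return np.mean(acc, axis=0)                       # mean IC replaces each run

    rng = np.random.default_rng(0)
    mixed = rng.uniform(-1, 1, (500, 3)) @ rng.standard_normal((3, 3))
    print(averaged_ics(mixed, 2).shape)                   # (500, 2)
    ```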

  7. Coordinating sentence composition with error correction: A multilevel analysis

    Directory of Open Access Journals (Sweden)

    Van Waes, L.

    2011-01-01

    Full Text Available Error analysis involves detecting and correcting discrepancies between the 'text produced so far' (TPSF) and the writer's mental representation of what the text should be. While many factors determine the choice of strategy, cognitive effort is a major contributor to this choice. This research shows how cognitive effort during error analysis affects strategy choice and success, as measured by a series of online text production measures. We hypothesize that error correction with speech recognition software differs from error correction with a keyboard for two reasons: speech produces auditory commands and, consequently, different error types. The study reported here measured the effects of (1) mode of presentation (auditory or visual-tactile), (2) error span, i.e., whether the error spans more or less than two characters, and (3) lexicality, i.e., whether the text error comprises an existing word. A multilevel analysis was conducted to take into account the hierarchical nature of the data. For each variable (interference reaction time, preparation time, production time, immediacy of error correction, and accuracy of error correction), multilevel regression models are presented. In this way we account for potentially confounding personal characteristics while testing the effect of the different conditions and error types at the sentence level. The results show that writers delay error correction more often when the TPSF is read aloud first. The auditory property of speech seems to free resources for the primary task of writing, i.e., text production. Moreover, the results show that large errors in the TPSF require more cognitive effort and are corrected with higher accuracy than small errors. The latter also holds for the correction of small errors that result in non-existing words.

  8. Spectral Synthesis via Mean Field approach Independent Component Analysis

    CERN Document Server

    Hu, Ning; Kong, Xu

    2015-01-01

    In this paper, we apply a new statistical analysis technique, the Mean Field approach to Bayesian Independent Component Analysis (MF-ICA), to galaxy spectral analysis. This algorithm can compress a stellar spectral library into a few Independent Components (ICs), from which a galaxy spectrum can be reconstructed. Compared to other algorithms that decompose a galaxy spectrum into a combination of several simple stellar populations, the MF-ICA approach offers a large improvement in efficiency. To check the reliability of this spectral analysis method, three different tests are used: (1) parameter recovery for simulated galaxies, (2) comparison with parameters estimated by other methods, and (3) a consistency test of parameters from Sloan Digital Sky Survey galaxies. We find that our MF-ICA method not only can fit the observed galaxy spectra efficiently, but also can recover the physical parameters of galaxies accurately. We also apply our spectral analysis method to the DEEP2 spectroscopic data, and find...

  9. Towards successful coordination of electronic health record based-referrals: a qualitative analysis

    Directory of Open Access Journals (Sweden)

    Paul Lindsey A

    2011-07-01

    Full Text Available Abstract Background Successful subspecialty referrals require considerable coordination and interactive communication among the primary care provider (PCP), the subspecialist, and the patient, which may be challenging in the outpatient setting. Even when referrals are facilitated by electronic health records (EHRs), i.e., e-referrals, lapses in patient follow-up might occur. Although compelling reasons exist why referral coordination should be improved, little is known about which elements of the complex referral coordination process should be targeted for improvement. Using Okhuysen & Bechky's coordination framework, this paper aims to understand the barriers, facilitators, and suggestions for improving communication and coordination of EHR-based referrals in an integrated healthcare system. Methods We conducted a qualitative study to understand coordination breakdowns related to e-referrals in an integrated healthcare system and examined work-system factors that affect the timely receipt of subspecialty care. We conducted interviews with seven subject matter experts and six focus groups with a total of 30 PCPs and subspecialists at two tertiary care Department of Veterans Affairs (VA) medical centers. Using techniques from grounded theory and content analysis, we identified organizational themes that affected the referral process. Results Four themes emerged: lack of an institutional referral policy, lack of standardization in certain referral procedures, ambiguity in roles and responsibilities, and inadequate resources to adapt and respond to referral requests effectively. Marked differences in PCPs' and subspecialists' communication styles and individual mental models of the referral processes likely precluded the development of a shared mental model to facilitate coordination and successful referral completion. Notably, very few barriers related to the EHR were reported. Conclusions Despite facilitating information transfer between PCPs and

  10. The Effects of Overextraction on Factor and Component Analysis.

    Science.gov (United States)

    Fava, J L; Velicer, W F

    1992-07-01

    The effects of overextracting factors and components within and between the methods of maximum likelihood factor analysis (MLFA) and principal component analysis (PCA) were examined. Computer-simulated data sets were generated to represent a range of factor and component patterns. Saturation (a_ij = .8, .6, and .4), sample size (N = 75, 150, 225, 450), and variable-to-component (factor) ratio (p:m = 12:1, 6:1, and 4:1) were the conditions manipulated. In Study 1, scores based on the incorrect patterns were correlated with correct scores within each method after each overextraction. In Study 2, scores were correlated between the methods of PCA and MLFA after each overextraction. Overextraction had a negative effect, but scores based on strong component and factor patterns displayed robustness to the effects of overextraction. Low item saturation and low sample size resulted in degraded score reproduction. Degradation was strongest for patterns that combined low saturation and low sample size. Component and factor scores were highly correlated even at maximal levels of overextraction. Dissimilarity between score methods was greatest in conditions that combined low saturation and low sample size. Some guidelines for researchers concerning the effects of overextraction are noted, as well as some cautions in the interpretation of results.

  11. A comparison of independent component analysis algorithms and measures to discriminate between EEG and artifact components.

    Science.gov (United States)

    Dharmaprani, Dhani; Nguyen, Hoang K; Lewis, Trent W; DeLosAngeles, Dylan; Willoughby, John O; Pope, Kenneth J

    2016-08-01

    Independent Component Analysis (ICA) is a powerful statistical tool capable of separating multivariate scalp electrical signals into their additive independent or source components, specifically EEG or electroencephalogram and artifacts. Although ICA is a widely accepted EEG signal processing technique, classification of the recovered independent components (ICs) is still flawed, as current practice still requires subjective human decisions. Here we build on the results from Fitzgibbon et al. [1] to compare three measures and three ICA algorithms. Using EEG data acquired during neuromuscular paralysis, we tested the ability of the measures (spectral slope, peripherality and spatial smoothness) and algorithms (FastICA, Infomax and JADE) to identify components containing EMG. Spatial smoothness showed differentiation between paralysis and pre-paralysis ICs comparable to spectral slope, whereas peripherality showed less differentiation. A combination of the measures showed better differentiation than any measure alone. Furthermore, FastICA provided the best discrimination between muscle-free and muscle-contaminated recordings in the shortest time, suggesting it may be the most suited to EEG applications of the considered algorithms. Spatial smoothness results suggest that a significant number of ICs are mixed, i.e. contain signals from more than one biological source, and so the development of an ICA algorithm that is optimised to produce ICs that are easily classifiable is warranted.

  12. Least Dependent Component Analysis Based on Mutual Information

    CERN Document Server

    Stögbauer, Harald; Kraskov, Alexander; Astakhov, Sergey A.; Grassberger, Peter

    2004-01-01

    We propose to use precise estimators of mutual information (MI) to find least dependent components in a linearly mixed signal. On the one hand this seems to lead to better blind source separation than with any other presently available algorithm. On the other hand it has the advantage, compared to other implementations of `independent' component analysis (ICA) some of which are based on crude approximations for MI, that the numerical values of the MI can be used for: (i) estimating residual dependencies between the output components; (ii) estimating the reliability of the output, by comparing the pairwise MIs with those of re-mixed components; (iii) clustering the output according to the residual interdependencies. For the MI estimator we use a recently proposed k-nearest neighbor based algorithm. For time sequences we combine this with delay embedding, in order to take into account non-trivial time correlations. After several tests with artificial data, we apply the resulting MILCA (Mutual Information based ...

  13. Real Time Engineering Analysis Based on a Generative Component Implementation

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Klitgaard, Jens

    2007-01-01

    The present paper outlines the idea of a conceptual design tool with real time engineering analysis which can be used in the early conceptual design phase. The tool is based on a parametric approach using Generative Components with embedded structural analysis. Each of these components uses the geometry, material properties and fixed point characteristics to calculate the dimensions and subsequent feasibility of any architectural design. The proposed conceptual design tool provides the possibility for the architect to work with both the aesthetic as well as the structural aspects of architecture without jumping from aesthetics to structural digital design tools and back, but to work with both simultaneously and in real time. The engineering level of knowledge is incorporated at a conceptual thinking level, i.e. qualitative information is used instead of quantitative information. An example with a statically determinate roof structure modelled by beam components is given. The example outlines the idea of the tool for conceptual design in the early phase of a multidisciplinary design process between architecture and structural engineering.

  15. Independent Component Analysis for Filtering Airwaves in Seabed Logging Application

    CERN Document Server

    Ansari, Adeel; Said, Abas B Md; Ansari, Seema

    2013-01-01

    The marine controlled source electromagnetic (CSEM) sensing method used for detecting hydrocarbon-bearing reservoirs in seabed logging does not perform well in the presence of airwaves (energy guided along the sea surface). These airwaves interfere with the signal that comes from below the seafloor and tend to dominate the receiver response at larger offsets. The task is to identify these airwaves and the way they interact, and to filter them out. In this paper, a popular method for dealing with this problem, Independent Component Analysis (ICA), is considered. ICA is a statistical method for transforming an observed multidimensional or multivariate dataset into constituent components (sources) that are statistically as independent from each other as possible. The FastICA deconvolution algorithm is considered for the deconvolution of the mixed signals and is convenient depending upon the nature of the source and noise model. The res...

  16. Analysis methods for structure reliability of piping components

    Energy Technology Data Exchange (ETDEWEB)

    Schimpfke, T.; Grebner, H.; Sievers, J. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH, Koeln (Germany)

    2004-07-01

    In the frame of the German reactor safety research program of the Federal Ministry of Economics and Labour (BMWA), GRS has started to develop an analysis code named PROST (PRObabilistic STructure analysis) for estimating the leak and break probabilities of piping systems in nuclear power plants. The long-term objective of this development is to provide failure probabilities of passive components for probabilistic safety analyses of nuclear power plants. Up to now the code can be used for fatigue calculations. The paper describes the main capabilities and theoretical background of the present PROST development and presents some results of a benchmark analysis in the frame of the European project NURBIM (Nuclear Risk Based Inspection Methodologies for Passive Components). (orig.)

  17. Bio-inspired controller for a dexterous prosthetic hand based on Principal Components Analysis.

    Science.gov (United States)

    Matrone, G; Cipriani, C; Secco, E L; Carrozza, M C; Magenes, G

    2009-01-01

    Controlling a dexterous myoelectric prosthetic hand with many degrees of freedom (DoFs) can be a very demanding task, requiring high concentration from the amputee and the ability to modulate many different muscular contraction signals. In this work a new approach to multi-DoF control is proposed, which uses Principal Component Analysis (PCA) to reduce the dimensionality of the DoF space and allows a 15-DoF hand to be driven by means of a 2-DoF signal. This approach has been tested and adapted to work with the underactuated robotic hand named CyberHand, using mouse cursor coordinates as input signals and a principal components (PCs) matrix taken from the literature. First trials show the feasibility of performing grasps using this method. Further tests with real EMG signals are foreseen.
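
    A minimal sketch of the control mapping: a 2-DoF input (e.g., cursor x/y) is projected through the first two principal components of recorded hand postures to a full 15-DoF joint configuration. The PC matrix below is a random orthonormal placeholder, not the literature matrix the authors used:

    ```python
    # Map a low-dimensional control signal to many joint set-points via PCs.
    import numpy as np

    rng = np.random.default_rng(3)
    mean_posture = rng.uniform(0, 1, 15)            # average joint angles (15 DoFs)
    W = np.linalg.qr(rng.standard_normal((15, 2)))[0]  # orthonormal stand-in PCs

    def posture_from_input(u):
        """Map a 2-DoF control signal u to 15 joint set-points."""
        return mean_posture + W @ np.asarray(u)

    print(posture_from_input([0.3, -0.1]).shape)    # (15,)
    ```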

  18. Adaptive tools in virtual environments: Independent component analysis for multimedia

    DEFF Research Database (Denmark)

    Kolenda, Thomas

    2002-01-01

    The thesis investigates the role of independent component analysis in the setting of virtual environments, with the purpose of finding properties that reflect human context. A general framework for performing unsupervised classification with ICA is presented in extension to the latent semantic in...... were compared to investigate computational differences and separation results. The ICA properties were finally implemented in a chat room analysis tool and briefly investigated for visualization of search engines results....

  20. Independent component analysis in non-hypothesis driven metabolomics

    DEFF Research Database (Denmark)

    Li, Xiang; Hansen, Jakob; Zhao, Xinjie

    2012-01-01

    In a non-hypothesis driven metabolomics approach plasma samples collected at six different time points (before, during and after an exercise bout) were analyzed by gas chromatography-time of flight mass spectrometry (GC-TOF MS). Since independent component analysis (ICA) does not need a priori...

  1. Principal component analysis of image gradient orientations for face recognition

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    We introduce the notion of Principal Component Analysis (PCA) of image gradient orientations. Because image data are typically noisy, and the noise is substantially non-Gaussian, traditional PCA of pixel intensities very often fails to reliably estimate the low-dimensional subspace of a given data

  2. Sparse principal component analysis in hyperspectral change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Larsen, Rasmus; Vestergaard, Jacob Schack

    2011-01-01

    This contribution deals with change detection by means of sparse principal component analysis (PCA) of simple differences of calibrated, bi-temporal HyMap data. Results show that if we retain only 15 nonzero loadings (out of 126) in the sparse PCA the resulting change scores appear visually very ...

  3. The dynamics of on-line principal component analysis

    NARCIS (Netherlands)

    Biehl, M.; Schlösser, E.

    1998-01-01

    The learning dynamics of an on-line algorithm for principal component analysis is described exactly in the thermodynamic limit by means of coupled ordinary differential equations for a set of order parameters. It is demonstrated that learning is delayed significantly because existing symmetries amon

  4. Convergence of algorithms used for principal component analysis

    Institute of Scientific and Technical Information of China (English)

    张俊华; 陈翰馥

    1997-01-01

    The convergence of algorithms used for principal component analysis is analyzed. The algorithms are proved to converge to eigenvectors and eigenvalues of a matrix A which is the expectation of observed random samples. The conditions required here are considerably weaker than those used in previous work.

  5. Condition monitoring with Mean field independent components analysis

    DEFF Research Database (Denmark)

    Pontoppidan, Niels Henrik; Sigurdsson, Sigurdur; Larsen, Jan

    2005-01-01

    We discuss condition monitoring based on mean field independent components analysis of acoustic emission energy signals. Within this framework it is possible to formulate a generative model that explains the sources, their mixing and also the noise statistics of the observed signals. By using...

  6. Principal Component Analysis: Most Favourite Tool in Chemometrics

    Indian Academy of Sciences (India)

    Keshav Kumar

    2017-08-01

    Principal component analysis (PCA) is the most commonly used chemometric technique. It is an unsupervised pattern-recognition technique. PCA has found applications in chemistry, biology, medicine and economics. The present work attempts to explain how PCA works and how we can interpret its results.

  7. [Component analysis on polysaccharides in exocarp of Ginkgo biloba].

    Science.gov (United States)

    Song, G; Xu, A; Chen, H; Wang, X

    1997-09-01

    This paper reports the content and component analysis of polysaccharides in the exocarp of Ginkgo biloba. The results show that the content of total saccharides is 89.7%, the content of polysaccharides is 84.6%, and the content of reducing saccharides is 5.1%; the polysaccharides are composed of glucose, fructose, galactose and rhamnose.

  8. How to perform multiblock component analysis in practice

    NARCIS (Netherlands)

    De Roover, Kim; Ceulemans, Eva; Timmerman, Marieke E.

    To explore structural differences and similarities in multivariate multiblock data (e.g., a number of variables have been measured for different groups of subjects, where the data for each group constitute a different data block), researchers have a variety of multiblock component analysis and

  9. PEMBUATAN PERANGKAT LUNAK PENGENALAN WAJAH MENGGUNAKAN PRINCIPAL COMPONENTS ANALYSIS

    Directory of Open Access Journals (Sweden)

    Kartika Gunadi

    2001-01-01

    Full Text Available Face recognition is an important research area, and today many applications implement it. Through the development of techniques like Principal Components Analysis (PCA), computers can now outperform humans in many face recognition tasks, particularly those in which large databases of faces must be searched. Principal Components Analysis is used to reduce the dimension of facial images into fewer variables, which are easier to observe and handle. Those variables are then fed into an artificial neural network trained with the backpropagation method to recognise the given facial image. The test results show that PCA can provide high face recognition accuracy. For the training faces, a correct identification rate of 100% was obtained. Across the network combinations tested, the best average correct identification rate for the test faces was 91.11%, while the worst was 46.67%.
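
    A minimal sketch of this pipeline: PCA reduces the flattened images to a few scores, and a backpropagation-trained network classifies them. The data, sizes, and network shape below are placeholders, not the paper's setup:

    ```python
    # PCA ("eigenfaces") followed by a backpropagation-trained classifier.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(4)
    X = rng.random((60, 32 * 32))                   # 60 flattened face images
    y = rng.integers(0, 6, 60)                      # 6 subject labels

    clf = make_pipeline(
        PCA(n_components=20, whiten=True),          # fewer, easier-to-handle variables
        MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=0),
    )
    clf.fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```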

  10. Analysis of Variance Components for Genetic Markers with Unphased Genotypes.

    Science.gov (United States)

    Wang, Tao

    2016-01-01

    An ANOVA-type general multi-allele (GMA) model was proposed in Wang (2014) for the analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In the one-locus and two-locus cases, we first derive the least squares estimates (LSE) of the model parameters when fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and for two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA-based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition of the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from the GLM for testing the fixed allelic effects can be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions, at least for independent alleles. As a result, the GMA model can be more beneficial than the GLM for detecting allelic interactions.

  11. Features of spatiotemporal groundwater head variation using independent component analysis

    Science.gov (United States)

    Hsiao, Chin-Tsai; Chang, Liang-Cheng; Tsai, Jui-Pin; Chen, You-Cheng

    2017-04-01

    The effect of external stimuli on a groundwater system can be understood by examining the features of spatiotemporal head variations. However, the head variations caused by various external stimuli are mixed signals. To identify the stimulus features of head variations, we propose a systematic approach based on independent component analysis (ICA), frequency analysis, cross-correlation analysis, a well-selection strategy, and hourly average head analysis. We also removed the head variations caused by regional stimuli (e.g., rainfall and river stage) from the original head variations of all the wells to better characterize the local stimulus features (e.g., pumping and tide). In the synthetic case study, the derived independent component (IC) features are more consistent with the features of the given recharge and pumping than the features derived from principal component analysis. In a real case study, the ICs associated with regional stimuli correlated highly with field observations, and the effect of regional stimuli on the head variation of all the wells was quantified. In addition, the tide, agricultural, industrial, and spring pumping features were characterized. Therefore, the developed method can facilitate understanding of the features of spatiotemporal head variation and quantification of the effects of external stimuli on a groundwater system.

  12. ECG signals denoising using wavelet transform and independent component analysis

    Science.gov (United States)

    Liu, Manjin; Hui, Mei; Liu, Ming; Dong, Liquan; Zhao, Zhu; Zhao, Yuejin

    2015-08-01

    A method for denoising two-channel exercise electrocardiogram (ECG) signals based on the wavelet transform and independent component analysis is proposed in this paper. First, two-channel exercise ECG signals are acquired. We decompose these two ECG channels into eight wavelet layers and sum the useful wavelet coefficients separately, obtaining two ECG channels free of baseline drift and other interference components. However, the signals still contain electrode movement noise, power frequency interference, and other interference. Second, we process these two channels, together with one manually constructed channel, by independent component analysis to obtain the separated ECG signal; the residual noise is removed effectively. Finally, a comparative experiment is made between two identical exercise ECG channels processed directly with independent component analysis and with the method proposed in this paper, which shows that the signal-to-noise ratio (SNR) increases by 21.916 and the root mean square error (RMSE) decreases by 2.522, proving that the proposed method has high reliability.
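
    A minimal sketch of the two-stage scheme using PyWavelets and scikit-learn: an 8-level wavelet decomposition drops the baseline drift, then ICA runs across the two channels plus a manually constructed reference channel. The signals and the way the reference is built are illustrative assumptions:

    ```python
    # Wavelet drift removal followed by ICA across three channels.
    import numpy as np
    import pywt
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(5)
    n = 4096
    ecg1 = np.sin(np.linspace(0, 40 * np.pi, n)) \
        + 0.5 * np.sin(np.linspace(0, 0.5 * np.pi, n))   # beat-like signal + drift
    ecg2 = ecg1 + 0.2 * rng.standard_normal(n)

    def remove_drift(sig, wavelet="db4", level=8):
        coeffs = pywt.wavedec(sig, wavelet, level=level)
        coeffs[0] = np.zeros_like(coeffs[0])        # kill lowest-frequency band (drift)
        return pywt.waverec(coeffs, wavelet)[: len(sig)]

    c1, c2 = remove_drift(ecg1), remove_drift(ecg2)
    ref = 0.1 * rng.standard_normal(n)              # hand-built third channel
    X = np.column_stack((c1, c2, ref))
    S = FastICA(n_components=3, random_state=0).fit_transform(X)
    print(S.shape)                                  # separated components
    ```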

  13. Experimental modal analysis of components of the LHC experiments

    CERN Document Server

    Guinchard, M; Catinaccio, A; Kershaw, K; Onnela, A

    2007-01-01

    Experimental modal analysis of components of the LHC experiments is performed with the purpose of determining their fundamental frequencies, their damping, and the mode shapes of light and fragile detector components. This process makes it possible to confirm or replace Finite Element analysis in the case of complex structures (with cables and substructure coupling). It helps solve structural mechanical problems to improve operational stability and to determine the acceleration specifications for transport operations. This paper describes the hardware and software equipment used to perform a modal analysis on particular structures, such as a particle detector, and the curve-fitting method used to extract the results from the measurements. The paper also presents the main results obtained for the LHC experiments.

  14. Principal Component Analysis - A Powerful Tool in Computing Marketing Information

    Directory of Open Access Journals (Sweden)

    Constantin C.

    2014-12-01

    Full Text Available This paper presents instrumental research on a powerful multivariate data analysis method that researchers can use to obtain valuable information for decision makers who need to solve the marketing problems a company faces. The literature stresses the need to avoid multicollinearity in multivariate analysis and highlights the ability of Principal Component Analysis (PCA) to reduce a number of potentially correlated variables to a small number of uncorrelated principal components. In this respect, the paper presents step by step the process of applying PCA in marketing research when a large number of naturally collinear variables is used.

  15. Spatial-Temporal Analysis of the Economic and Environmental Coordination Development Degree in Liaoning Province

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2013-01-01

    Full Text Available This study selects 20 indices of economic and environmental conditions over 15 years (1996–2010) for 14 cities in Liaoning province, China. We calculate the economic score and environmental score of each city by processing 4200 data points through SPSS 16.0 and establish synthesis functions between the economy and the environment. For the time dimension, we study the temporal evolution of the economic and environmental coordination development degree. Based on Exploratory Spatial Data Analysis (ESDA) techniques and using GeoDa, we calculate Moran's index of local spatial autocorrelation and explore the spatial distribution of the coordination development degree in Liaoning province through a LISA cluster map. In the temporal dimension, the results show that the coordination development degree of the 14 cities has been rising for 15 years and increases year by year, indicating that the economic and environmental coordination development condition has been improving from disorder to high coordination. A small gap between economic strength and environmental carrying capacity in Liaoning province exists, which means that economic development and environmental protection remain synchronized. In the spatial dimension, the highly coordinated cities have changed from a scattered pattern to a concentration in the middle-south region of Liaoning province, while poorly coordinated cities are scattered in the northwestern region.

  16. Static analysis of large-scale multibody system using joint coordinates and spatial algebra operator.

    Science.gov (United States)

    Omar, Mohamed A

    2014-01-01

    Initial transient oscillations in the dynamic simulation responses of multibody systems can lead to inaccurate results, unrealistic load predictions, or simulation failure. These transients can result from incompatible initial conditions, initial constraint violations, and inadequate kinematic assembly. Performing a static equilibrium analysis before the dynamic simulation can eliminate these transients and lead to a stable simulation. Most existing multibody formulations determine the static equilibrium position by minimizing the system potential energy. This paper presents a new general-purpose approach for solving the static equilibrium of large-scale articulated multibody systems. The proposed approach introduces an energy drainage mechanism based on the Baumgarte constraint stabilization approach to determine the static equilibrium position. The spatial algebra operator is used to express the kinematic and dynamic equations of the closed-loop multibody system. The proposed multibody system formulation utilizes the joint coordinates and modal elastic coordinates as the system generalized coordinates. The recursive nonlinear equations of motion are formulated using the Cartesian coordinates and the joint coordinates to form an augmented set of differential-algebraic equations. The system connectivity matrix is then derived from the system topology and used to project the Cartesian quantities into the joint subspace, leading to a minimum set of differential equations.
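
    For reference, the textbook form of Baumgarte stabilization replaces the acceleration-level constraint equation with a damped counterpart so that constraint violations decay instead of drifting (a standard statement of the idea, not necessarily the paper's exact formulation):

    ```latex
    % Baumgarte stabilization of the kinematic constraints \Phi(q, t) = 0:
    % instead of enforcing \ddot{\Phi} = 0, solve the damped equation
    \ddot{\Phi} + 2\alpha \, \dot{\Phi} + \beta^{2} \Phi = 0,
    \qquad \alpha > 0, \ \beta > 0,
    % whose solutions decay, draining the energy of constraint violations.
    ```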

  17. Spatiotemporal filtering for regional GPS network in China using independent component analysis

    Science.gov (United States)

    Ming, Feng; Yang, Yuanxi; Zeng, Anmin; Zhao, Bin

    2017-04-01

    Removal of the common mode error (CME) is a routine procedure in postprocessing regional GPS network observations, and it is commonly performed using principal component analysis (PCA). PCA decomposes a network time series into a group of modes, where each mode comprises a common temporal function and a corresponding spatial response, based on second-order statistics (variance and covariance). However, the probability distribution function of a GPS time series is non-Gaussian; therefore, the largest variances do not correspond to the meaningful axes, and the PCA-derived components may not have an obvious physical meaning. In this study, the CME was assumed statistically independent of other errors, and it was extracted using independent component analysis (ICA), which involves higher-order statistics. First, the ICA performance was tested using a simulated example and compared with the PCA and stacking methods. The existence of strong local effects at some stations causes significantly large spatial responses, and therefore a strategy based on median and interquartile-range statistics was proposed to identify abnormal sites. After discarding abnormal sites, two indices based on the analysis of the spatial responses of all sites in each independent component (east, north, and vertical) were used to define the CME quantitatively. Continuous GPS coordinate time series spanning ~4.5 years from 259 stations of the Tectonic and Environmental Observation Network of Mainland China (CMONOC II) were analyzed using both the PCA and ICA methods and their results compared. The results suggest that PCA is susceptible to deriving an artificial spatial structure, whereas ICA separates the CME from other errors reliably. Our results demonstrate that the spatial characteristics of the CME for CMONOC II are not uniform for the east, north, and vertical components, but have an obvious north-south or east-west distribution. After discarding 84 abnormal sites and performing spatiotemporal
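
    A minimal sketch of CME filtering in this spirit: run ICA on the network residual matrix (epochs x stations), flag the component whose spatial response is largest and most uniform across stations, and subtract it. The uniformity criterion and data are illustrative assumptions, not the paper's indices:

    ```python
    # Identify and remove a spatially uniform common-mode component via ICA.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(6)
    n_epochs, n_sta = 1500, 40
    cme = rng.standard_normal(n_epochs).cumsum() * 0.01   # shared signal
    resid = cme[:, None] + 0.5 * rng.standard_normal((n_epochs, n_sta))

    ica = FastICA(n_components=5, random_state=0)
    S = ica.fit_transform(resid)                    # temporal components
    A = ica.mixing_                                 # spatial responses (n_sta x 5)

    uniformity = np.abs(A).mean(axis=0) / (np.abs(A).std(axis=0) + 1e-12)
    k = int(np.argmax(uniformity))                  # most spatially uniform IC
    filtered = resid - np.outer(S[:, k], A[:, k])   # remove the CME
    print("identified CME component:", k)
    ```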

  18. Validating the feedback control of intersegmental coordination by fluctuation analysis of disturbed walking.

    Science.gov (United States)

    Funato, Tetsuro; Aoi, Shinya; Tomita, Nozomi; Tsuchiya, Kazuo

    2015-05-01

    A walking motion is established by feedforward control for rhythmic locomotion and feedback control for adapting to environmental variations. To identify the control variables that underlie feedback control, uncontrolled manifold (UCM) analysis has been proposed and adopted for analyzing various movements. UCM analysis searches the controlled variables by comparing the fluctuation size of segmental groups related and unrelated to the movement of candidate variables, based on the assumption that a small fluctuation size indicates a relationship with the feedback control. The present study was based on UCM analysis and evaluated fluctuation size to determine the control mechanism for walking. While walking, the subjects were subjected to floor disturbances at two different frequencies, and the fluctuation sizes of the segmental groups related to characteristic variables were calculated and compared. The characteristic variables evaluated were the motion of the center of mass, limb axis, and head, and the intersegmental coordination of segmental groups with simultaneous coupled movements. Results showed that the fluctuations in intersegmental coordination were almost equally small for any segment, while fluctuations in the other variables were large in certain segments. Moreover, a comparison of the fluctuation sizes among the evaluated variables showed that the fluctuation size for intersegmental coordination was the smallest. These results indicate a possible relationship between intersegmental coordination and the control of walking.

  19. Extension of physical component BFC method for the analysis of free-surface flows coupled with moving boundaries

    Science.gov (United States)

    Lu, D.; Takizawa, A.; Kondo, S.

    A newly developed "physical component boundary fitted coordinate (PCBFC) method" is extended for the analysis of free-surface flows coupled with moving boundaries. Extra techniques are employed to deal with the coupled movement of the free surface and the moving boundaries. After validation of the extension on several benchmark problems, the method is successfully applied for the first time to the simulation of overflow-induced vibration of a weir coupled with sloshing of the free-surface liquid.

  20. Harmonic component detection: Optimized Spectral Kurtosis for operational modal analysis

    Science.gov (United States)

    Dion, J.-L.; Tawfiq, I.; Chevallier, G.

    2012-01-01

    This work is a contribution to the field of Operational Modal Analysis (OMA), in which the modal parameters of mechanical structures are identified using only measured responses. The study deals with structural responses coupled with harmonic components whose amplitude and frequency are modulated over a short range, a common combination for mechanical systems with engines and other rotating machines in operation. These harmonic components generate misleading data that are interpreted erroneously by the classical methods used in OMA. The present work attempts to differentiate maxima in spectra stemming from harmonic components from those stemming from structural modes. The proposed detection method is based on the so-called Optimized Spectral Kurtosis and is compared with other definitions of Spectral Kurtosis described in the literature. After a parametric study of the method, a critical study is performed on numerical simulations and then on an experimental structure in operation in order to assess the method's performance.
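
    A minimal sketch of a classical STFT-based spectral kurtosis estimator (not the authors' optimized variant): for a nearly constant-amplitude harmonic the estimate tends toward -1, while stationary Gaussian noise gives roughly 0, which is the property used to flag harmonic peaks:

    ```python
    # Spectral kurtosis per frequency bin from an STFT.
    import numpy as np
    from scipy.signal import stft

    rng = np.random.default_rng(7)
    fs, n = 1024, 16 * 1024
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * 100 * t) + rng.standard_normal(n)   # harmonic + noise

    f, tt, Z = stft(x, fs=fs, nperseg=256)
    p2 = np.mean(np.abs(Z) ** 2, axis=1)
    p4 = np.mean(np.abs(Z) ** 4, axis=1)
    sk = p4 / p2**2 - 2.0                            # spectral kurtosis estimate
    print("SK at 100 Hz bin:", sk[np.argmin(np.abs(f - 100))])  # markedly negative
    ```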

  1. Components of Program for Analysis of Spectra and Their Testing

    Directory of Open Access Journals (Sweden)

    Ivan Taufer

    2013-11-01

    Full Text Available The spectral analysis of aqueous solutions of multi-component mixtures is used for the identification and differentiation of individual components in the mixture and the subsequent determination of the protonation constants and absorptivities of differently protonated species in solution at steady state (Meloun and Havel 1985; Leggett 1985). Apart from that, the distribution diagrams, i.e. the concentration proportions of the individual components at different pH values, are also determined. The spectra are measured at various concentrations of the basic components (one or several polyvalent weak acids or bases) and various pH values within a chosen range of wavelengths. The obtained absorbance response surface has to be analyzed by non-linear regression using specialized algorithms. These algorithms have to meet certain requirements concerning the possibility of calculations and the level of outputs. A typical example is the SQUAD(84) program, which has been gradually modified and extended; see, e.g., (Meloun et al. 1986), (Meloun et al. 2012).

  2. Weighted principal component analysis: a weighted covariance eigendecomposition approach

    CERN Document Server

    Delchambre, Ludovic

    2014-01-01

    We present a new straightforward principal component analysis (PCA) method based on the diagonalization of the weighted variance-covariance matrix through two spectral decomposition methods: power iteration and Rayleigh quotient iteration. This method allows one to retrieve a given number of orthogonal principal components amongst the most meaningful ones for the case of problems with weighted and/or missing data. Principal coefficients are then retrieved by fitting principal components to the data while providing the final decomposition. Tests performed on real and simulated cases show that our method is optimal in the identification of the most significant patterns within data sets. We illustrate the usefulness of this method by assessing its quality on the extrapolation of Sloan Digital Sky Survey quasar spectra from measured wavelengths to shorter and longer wavelengths. Our new algorithm also benefits from a fast and flexible implementation.
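
    A minimal sketch of the core step under simple assumptions: diagonalize a weighted covariance matrix by power iteration, with one weight per observation (the paper's formulation also handles per-element weights and missing data):

    ```python
    # Leading eigenpair of a weighted covariance matrix via power iteration.
    import numpy as np

    rng = np.random.default_rng(8)
    X = rng.standard_normal((200, 6)) @ rng.standard_normal((6, 6))
    w = rng.uniform(0.1, 1.0, 200)                  # observation weights

    mu = (w[:, None] * X).sum(axis=0) / w.sum()     # weighted mean
    Xc = X - mu
    C = (Xc * w[:, None]).T @ Xc / w.sum()          # weighted covariance

    v = rng.standard_normal(6)
    for _ in range(200):                            # power iteration
        v = C @ v
        v /= np.linalg.norm(v)
    lam = v @ C @ v                                 # Rayleigh quotient
    print("leading eigenvalue:", lam.round(3))
    ```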

  3. Sparse logistic principal components analysis for binary data

    KAUST Repository

    Lee, Seokho

    2010-09-01

    We develop a new principal components analysis (PCA) type dimension reduction method for binary data. Different from the standard PCA which is defined on the observed data, the proposed PCA is defined on the logit transform of the success probabilities of the binary observations. Sparsity is introduced to the principal component (PC) loading vectors for enhanced interpretability and more stable extraction of the principal components. Our sparse PCA is formulated as solving an optimization problem with a criterion function motivated from a penalized Bernoulli likelihood. A Majorization-Minimization algorithm is developed to efficiently solve the optimization problem. The effectiveness of the proposed sparse logistic PCA method is illustrated by application to a single nucleotide polymorphism data set and a simulation study. © Institute ol Mathematical Statistics, 2010.

  4. Maximum flow-based resilience analysis: From component to system

    Science.gov (United States)

    Jin, Chong; Li, Ruiying; Kang, Rui

    2017-01-01

    Resilience, the ability to withstand disruptions and recover quickly, must be considered during system design, because any disruption of the system may cause considerable economic and societal losses. This work develops analytic maximum flow-based resilience models for series and parallel systems using Zobel's resilience measure. The two analytic models can be used to evaluate quantitatively and compare the resilience of systems with the corresponding performance structures. For systems with identical components, the resilience of the parallel system increases with an increasing number of components, while the resilience of the series system remains constant. A Monte Carlo-based simulation method is also provided to verify the correctness of our analytic resilience models and to analyze the resilience of networked systems based on that of their components. A road network example is used to illustrate the analysis process, and the resilience comparison among networks with different topologies but the same components indicates that a system with redundant performance is usually more resilient than one without redundant performance. However, not all redundant component capacities improve system resilience; the effectiveness of the capacity redundancy depends on where the redundant capacity is located. PMID:28545135
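
    A minimal sketch of the building block behind these models: system performance measured as the maximum s-t flow of a capacitated network, here via NetworkX on a toy graph that is illustrative, not the paper's road network:

    ```python
    # Maximum s-t flow as the performance level of a capacitated system.
    import networkx as nx

    G = nx.DiGraph()
    G.add_edge("s", "a", capacity=3.0)
    G.add_edge("s", "b", capacity=2.0)
    G.add_edge("a", "t", capacity=2.5)
    G.add_edge("b", "t", capacity=3.0)

    flow_value, flow_dict = nx.maximum_flow(G, "s", "t")
    print("max flow:", flow_value)                  # performance of intact system
    # Re-computing after reducing an edge's capacity mimics a disruption and
    # feeds a performance time series into a resilience measure like Zobel's.
    ```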

  5. Fatigue Reliability Analysis of Wind Turbine Cast Components

    Directory of Open Access Journals (Sweden)

    Hesam Mirzaei Rafsanjani

    2017-04-01

    Full Text Available The fatigue life of wind turbine cast components, such as the main shaft in a drivetrain, is generally determined by defects from the casting process. These defects may reduce the fatigue life, and they are generally distributed randomly in the components. The foundries, cutting facilities, and test facilities can affect the verification of properties by testing. Hence, it is important to have a tool to identify which foundry, cutting and/or test facility produces components that, based on the relevant uncertainties, have the largest expected fatigue life or, alternatively, the largest reliability, to be used for decision-making if additional cost considerations are added. In this paper, a statistical approach is presented based on statistical hypothesis testing and analysis of covariance (ANCOVA), which can be applied to compare different groups (manufacturers, suppliers, test facilities, etc.) and to quantify the relevant uncertainties using available fatigue tests. Illustrative results are presented, obtained by statistical analysis of a large set of fatigue data for cast test components typically used for wind turbines. Furthermore, the SN curves (fatigue life curves based on applied stress) for fatigue assessment are estimated based on the statistical analyses and by introduction of the physical, model, and statistical uncertainties used for the illustration of the reliability assessment.

  6. Polynomial analysis of canopy spectra and biochemical component content inversion

    Institute of Scientific and Technical Information of China (English)

    YAN Chunyan; LIU Qiang; NIU Zheng; WANG Jihua; HUANG Wenjiang; LIU Liangyun

    2005-01-01

    A polynomial expression model was developed in this paper to describe directional canopy spectra, and the decomposition of the polynomial expression was used as a tool for retrieving biochemical component content from canopy multi-angle spectra. First, the basic formula of the polynomial expression is introduced and the physical meaning of its terms and coefficients is discussed. Based on this analysis, a complete polynomial expression model and its decomposition method are given. Decomposing canopy spectra simulated with the SAILH model shows that the polynomial expression can not only fit the canopy spectra well, but also show the contribution of each order of scattering to the whole reflectance. Taking the first-order scattering coefficients a10 and a01 as examples, the test results show that the polynomial coefficients reflect very well the hot spot phenomenon and the effects of viewing angle, LAI, and leaf inclination angle on canopy spectra. By coupling the polynomial expression with the leaf model PROSPECT, a canopy biochemical component content inversion model is given. In the simulated test, canopy multi-angle spectra were simulated by two different models, SAILH and 4-SCALE, and the biochemical component content was then retrieved by inverting the coupled polynomial expression + PROSPECT model. Results of the simulated test are promising, and when applying the algorithm to measured corn canopy multi-angle spectra, we also obtain relatively accurate chlorophyll content. This shows that polynomial analysis provides a new method to obtain biochemical component content independent of any specific canopy model.

  7. Determining the number of components in principal components analysis: A comparison of statistical, crossvalidation and approximated methods

    NARCIS (Netherlands)

    Saccenti, E.; Camacho, J.

    2015-01-01

    Principal component analysis is one of the most commonly used multivariate tools to describe and summarize data. Determining the optimal number of components in a principal component model is a fundamental problem in many fields of application. In this paper we compare the performance of several methods for determining the number of components.
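    One widely used heuristic from this literature, shown here only as a minimal sketch, selects the number of components from the cumulative explained variance of a fitted PCA model (the 95% threshold is an arbitrary illustrative choice):

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(1)
      X = rng.normal(size=(100, 4)) @ rng.normal(size=(4, 20))  # rank-4 signal
      X += 0.05 * rng.normal(size=X.shape)                      # plus noise

      pca = PCA().fit(X)
      cumvar = np.cumsum(pca.explained_variance_ratio_)
      n_components = int(np.searchsorted(cumvar, 0.95)) + 1
      print("components needed for 95% of the variance:", n_components)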

  8. Angles-Only Initial Relative Orbit Determination Performance Analysis using Cylindrical Coordinates

    Science.gov (United States)

    Geller, David K.; Lovell, T. Alan

    2016-09-01

    The solution of the initial relative orbit determination problem using angles-only measurements is important for orbital proximity operations, satellite inspection and servicing, and the identification of unknown space objects in similar orbits. In this paper, a preliminary relative orbit determination performance analysis is conducted utilizing the linearized relative orbital equations of motion in cylindrical coordinates. The relative orbital equations of motion in cylindrical coordinates are rigorously derived in several forms, including the exact nonlinear two-body differential equations of motion, the linear time-varying differential equations of motion for an elliptical-orbit chief, and the linear time-invariant differential equations of motion for a circular-orbit chief. Using the nonlinear angles-only measurement equation in cylindrical coordinates, evidence of full-relative-state observability is found, contrary to the range observability problem exhibited in Cartesian coordinates. Based on these results, a geometric approach to assess initial relative orbit determination performance is formulated. To facilitate a better understanding of the problem, the focus is on the 2-dimensional initial orbit determination problem. The results clearly show the dependence of the relative orbit determination performance on the geometry of the relative motion and on the time interval between observations. Analysis is conducted for leader-follower orbits and flyby orbits where the deputy passes directly above or below the chief.

  9. Thermal Inspection of a Composite Fuselage Section Using a Fixed Eigenvector Principal Component Analysis Method

    Science.gov (United States)

    Zalameda, Joseph N.; Bolduc, Sean; Harman, Rebecca

    2017-01-01

    A composite fuselage aircraft forward section was inspected with flash thermography. The fuselage section is 24 feet long and approximately 8 feet in diameter. The structure is primarily a composite sandwich of carbon fiber face sheets with a Nomex(Trademark) honeycomb core. The outer surface area was inspected. The thermal data consisted of 477 data sets totaling over 227 gigabytes. Principal component analysis (PCA) was used to process the data sets for substructure and defect detection. A fixed eigenvector approach using a global covariance matrix was used and compared to a varying eigenvector approach. The fixed eigenvector approach was demonstrated to be a practical analysis method for the detection and interpretation of various defects such as paint thickness variation, possible water intrusion damage, and delamination damage. In addition, inspection considerations are discussed, including coordinate system layout, manipulation of the fuselage section, and the manual scanning technique used for full coverage.
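    The fixed eigenvector idea can be sketched as follows: estimate a single global covariance matrix from the pooled thermal sequences, then project every data set onto the same eigenvectors so that component images remain directly comparable across acquisitions. Array shapes and data below are illustrative, not the actual processing chain.

      import numpy as np

      n_sets, n_frames, n_pix = 5, 100, 64 * 64
      rng = np.random.default_rng(2)
      data_sets = [rng.normal(size=(n_frames, n_pix)) for _ in range(n_sets)]

      # Global covariance of the temporal signatures pooled over all data sets
      pooled = np.concatenate([d - d.mean(axis=0) for d in data_sets], axis=1)
      cov = pooled @ pooled.T / pooled.shape[1]
      eigvals, eigvecs = np.linalg.eigh(cov)
      fixed_basis = eigvecs[:, ::-1][:, :3]      # top 3 eigenvectors, held fixed

      # Each data set is projected onto the SAME basis
      component_images = [fixed_basis.T @ (d - d.mean(axis=0)) for d in data_sets]
      print(component_images[0].shape)           # (3, n_pix)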

  10. Watermark Detection and Extraction Using Independent Component Analysis Method

    Directory of Open Access Journals (Sweden)

    Yu Dan

    2002-01-01

    Full Text Available This paper proposes a new image watermarking technique, which adopts Independent Component Analysis (ICA) for the watermark detection and extraction process (i.e., dewatermarking). Watermark embedding is performed in the spatial domain of the original image. The watermark can be successfully detected during the Principal Component Analysis (PCA) whitening stage. A nonlinear robust batch ICA algorithm, which is able to efficiently extract various temporally correlated sources from their observed linear mixtures, is used for blind watermark extraction. The evaluations illustrate the validity and good performance of the proposed ICA-based watermark detection and extraction scheme. The accuracy of watermark extraction depends on the statistical independence between the original, key and watermark images and on the temporal correlation of these sources. Experimental results demonstrate that the proposed system is robust to several important image processing attacks, including geometrical transformations (scaling, cropping and rotation), quantization, additive noise, low-pass filtering, multiple marks, and collusion.

  11. Component-based analysis of embedded control applications

    DEFF Research Database (Denmark)

    Angelov, Christo K.; Guan, Wei; Marian, Nicolae

    2011-01-01

    ...configuration of applications from validated design models and trusted components. This design philosophy has been instrumental for developing COMDES, a component-based framework for distributed embedded control systems. A COMDES application is conceived as a network of embedded actors that are configured from instances of reusable, executable components: function blocks (FBs). System actors operate in accordance with a timed multitasking model of computation, whereby I/O signals are exchanged with the controlled plant at precisely specified time instants, resulting in the elimination of I/O jitter. The paper presents an analysis technique that can be used to validate COMDES design models in SIMULINK. It is based on a transformation of the COMDES design model into a SIMULINK analysis model, which preserves the functional and timing behaviour of the application. This technique has been employed to develop...

  12. Principal Component Analysis of Thermal Dorsal Hand Vein Pattern Architecture

    Directory of Open Access Journals (Sweden)

    V. Krishna Sree

    2012-12-01

    Full Text Available The quest to provide more secure identification systems has led to a rise in the development of biometric systems. Biometrics such as face, fingerprint and iris have been developed extensively for human identification and to provide authentic input to many security systems over the past few decades. The dorsal hand vein pattern is an emerging biometric which is unique to every individual. In this study, principal component analysis is used to obtain eigen vein patterns, which are low-dimensional representations of vein pattern features. The vein patterns were extracted by morphological techniques, and noise reduction filters were used to enhance them. Principal component analysis reduces the 2-dimensional image database to 1-dimensional eigenvectors and is able to identify all the dorsal hand pattern images.

  13. QUALITY CONTROL OF SEMICONDUCTOR PACKAGING BASED ON PRINCIPAL COMPONENTS ANALYSIS

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Five critical quality characteristics must be controlled in the surface-mount and wire-bond processes in semiconductor packaging, and these characteristics are correlated with each other. Principal components analysis (PCA) is therefore first used in the analysis of the sample data, and the process is then controlled with a Hotelling T2 control chart for the first several principal components, which contain sufficient information. Furthermore, a software tool is developed for this kind of problem. With sample data from a surface mounting device (SMD) process, it is demonstrated that the T2 control chart with PCA reaches the same conclusion as without PCA, but the problem is transformed from a high-dimensional one to a lower-dimensional one, i.e., from 5 dimensions to 2 in this demonstration.
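    A minimal sketch of a T2 chart on PCA scores (simulated data; the control limit uses a standard F-distribution form, and all parameters are illustrative):

      import numpy as np
      from scipy import stats
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(3)
      X = rng.normal(size=(200, 5))              # 5 correlated quality characteristics
      X[:, 1] = 0.8 * X[:, 0] + 0.2 * X[:, 1]

      k = 2                                      # retained principal components
      pca = PCA(n_components=k).fit(X)
      scores = pca.transform(X)

      # Hotelling T2 statistic of each sample in the score space
      t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)

      # Approximate control limit from the F distribution
      n = X.shape[0]
      limit = k * (n - 1) * (n + 1) / (n * (n - k)) * stats.f.ppf(0.99, k, n - k)
      print("out-of-control samples:", np.where(t2 > limit)[0])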

  14. Microcalorimeter pulse analysis by means of principal component decomposition

    CERN Document Server

    de Vries, C P; van der Kuur, J; Gottardi, L; Akamatsu, H

    2016-01-01

    The X-ray integral field unit for the Athena mission consists of a microcalorimeter transition edge sensor pixel array. Incoming photons generate pulses which are analyzed in terms of energy, in order to assemble the X-ray spectrum. Usually this is done by means of optimal filtering in either the time or the frequency domain. In this paper we investigate an alternative method based on principal component analysis. This method attempts to find the main components of an orthogonal set of functions to describe the data. We show, based on simulations, what the influence of various instrumental effects is on this type of analysis. We compare analyses in both the time and frequency domains. Finally we apply these analyses to real data, obtained via frequency domain multiplexing readout.

  15. Relationship among asphalt components, viscosity and adhesion in a triangular coordinate system

    Institute of Scientific and Technical Information of China (English)

    傅珍; 延西利; 蔡婷; 马峰; 汪林兵

    2014-01-01

    In order to assess the influence of asphalt components and their composition on the technical properties of asphalt, thirteen pavement petroleum asphalts and two typical aggregates were selected. The relationship among asphalt components, viscosity and adhesion was investigated using four-component, apparent-viscosity and adhesion tests. A method for representing four-component test results in a triangular coordinate system was put forward: the asphalt characteristic triangle is drawn from the four-component data, and the composition is characterized by the triangle's moment of inertia. A pyramid constructed from the four-component data was also drawn, and the relationship among its geometric characteristics, viscosity and adhesion was analyzed. The results indicate that, besides the components themselves, structural differences in asphalt composition also influence viscosity and aggregate adhesion. For asphalts of the same penetration grade but different brands, the aggregate adhesion increases with the moment of inertia of the four-component characteristic triangle in the triangular coordinate system. 6 tabs, 6 figs, 18 refs.

  16. A Sensitivity Analysis on Component Reliability from Fatigue Life Computations

    Science.gov (United States)

    1992-02-01

    Report MTL TR 92-5 (AD-A247 430): A Sensitivity Analysis on Component Reliability from Fatigue Life Computations, by Donald M. Neal, William T. Matthews, Mark G. Vangel, and Trevor Rudalevige; distributed by the Defense Technical Information Center, Alexandria, VA.

  17. A Constrained EM Algorithm for Independent Component Analysis

    OpenAIRE

    Welling, Max; Weber, Markus

    2001-01-01

    We introduce a novel way of performing independent component analysis using a constrained version of the expectation-maximization (EM) algorithm. The source distributions are modeled as D one-dimensional mixtures of Gaussians. The observed data are modeled as linear mixtures of the sources with additive, isotropic noise. This generative model is fit to the data using constrained EM. The simpler “soft-switching” approach is introduced, which uses only one parameter to decide on the sub- or super-Gaussian character of the sources...

  18. Primary component analysis method and reduction of seismicity parameters

    Institute of Scientific and Technical Information of China (English)

    WANG Wei; MA Qin-zhong; LIN Ming-zhou; WU Geng-feng; WU Shao-chun

    2005-01-01

    In the paper, primary component analysis is performed using 8 seismicity parameters: earthquake frequency N (ML≥3.0), b-value, η-value, A(b)-value, Mf-value, Ac-value, C-value and D-value, which reflect the characteristics of the magnitude, time and space distribution of seismicity from different respects. By using the primary component analysis method, a synthesis parameter W reflecting the anomalous features of earthquake magnitude, time and space distribution can be obtained. Generally, there is some correlation among the 8 parameters, but their variations differ across periods. Earthquake prediction based on these parameters individually does not perform very well. However, the synthesis parameter W showed obvious anomalies before 13 earthquakes (MS>5.8) that occurred in North China, which indicates that W can better reflect the anomalous characteristics of the magnitude, time and space distribution of seismicity. Other problems related to the conclusions drawn by the primary component analysis method are also discussed.
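    A minimal sketch of how such a synthesis parameter can be formed, assuming (as the abstract suggests) that W is essentially the leading principal component of the standardized parameters; the data and column ordering are invented:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(4)
      # rows = time windows, columns = the 8 seismicity parameters (N, b, eta, ...)
      params = rng.normal(size=(120, 8))

      z = StandardScaler().fit_transform(params)      # common scale for all parameters
      w = PCA(n_components=1).fit_transform(z)[:, 0]  # synthesis parameter W
      print("W for the last five windows:", np.round(w[-5:], 2))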

  19. Modular and coordinated expression of immune system regulatory and signaling components in the developing and adult nervous system.

    Science.gov (United States)

    Monzón-Sandoval, Jimena; Castillo-Morales, Atahualpa; Crampton, Sean; McKelvey, Laura; Nolan, Aoife; O'Keeffe, Gerard; Gutierrez, Humberto

    2015-01-01

    During development, the nervous system (NS) is assembled and sculpted through a concerted series of neurodevelopmental events orchestrated by a complex genetic programme. While neural-specific gene expression plays a critical part in this process, a number of immune-related signaling and regulatory components have in recent years also been shown to play key physiological roles in the developing and adult NS. While the involvement of individual immune-related signaling components in neural functions may reflect their ubiquitous character, it may also reflect a much wider, as yet undescribed, genetic network of immune-related molecules acting as an intrinsic component of the neural-specific regulatory machinery that ultimately shapes the NS. In order to gain insight into the scale and wider functional organization of immune-related genetic networks in the NS, we examined the large-scale pattern of expression of these genes in the brain. Our results show highly significant correlated expression and transcriptional clustering among immune-related genes in the developing and adult brain, and this correlation was highest in the brain when compared to muscle, liver, kidney and endothelial cells. We experimentally tested the regulatory clustering of immune system (IS) genes by using microarray expression profiling in cultures of dissociated neurons stimulated with the pro-inflammatory cytokine TNF-alpha, and found a highly significant enrichment of immune system-related genes among the resulting differentially expressed genes. Our findings strongly suggest a coherent recruitment of entire immune-related genetic regulatory modules by the neural-specific genetic programme that shapes the NS.

  20. Multiuser detection and independent component analysis-Progress and perspective

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The latest progress in multiuser detection and independent component analysis (ICA) is reviewed systematically. Two novel classes of multiuser detection methods based on ICA algorithms and feedforward neural networks are then proposed. Theoretical analysis and computer simulation show that ICA algorithms are effective in detecting multiuser signals in code-division multiple-access (CDMA) systems. The performances of these methods are not entirely identical across various channels, but all of them are robust, efficient, fast and suitable for real-time implementation.

  1. Structure analysis of active components of traditional Chinese medicines

    DEFF Research Database (Denmark)

    Zhang, Wei; Sun, Qinglei; Liu, Jianhua

    2013-01-01

    Traditional Chinese Medicines (TCMs) have been widely used for the healing of different health problems for thousands of years. They have been used as therapeutic, complementary and alternative medicines. TCMs usually consist of dozens to hundreds of various compounds, which are extracted from raw... samples. NMR, on the other hand, provides not only information on primary structures but also information on higher-order structures, complementing the component structure analysis by HPLC-MS. The most recent progress in the analysis of the commonly used TCMs is summarized...

  2. Extension of a System Level Tool for Component Level Analysis

    Science.gov (United States)

    Majumdar, Alok; Schallhorn, Paul

    2002-01-01

    This paper presents an extension of a numerical algorithm in a network flow analysis code to perform multi-dimensional flow calculations. The one-dimensional momentum equation in the network flow analysis code has been extended to include momentum transport due to shear stress and the transverse component of velocity. Both laminar and turbulent flows are considered. Turbulence is represented by Prandtl's mixing length hypothesis. Three classical examples (Poiseuille flow, Couette flow and shear-driven flow in a rectangular cavity) are presented as benchmarks for the verification of the numerical scheme.

  3. Spectral Synthesis via Mean Field approach to Independent Component Analysis

    Science.gov (United States)

    Hu, Ning; Su, Shan-Shan; Kong, Xu

    2016-03-01

    We apply a new statistical analysis technique, the Mean Field approach to Independent Component Analysis (MF-ICA) in a Bayesian framework, to galaxy spectral analysis. This algorithm can compress a stellar spectral library into a few Independent Components (ICs), from which a galaxy spectrum can be reconstructed. Compared to other algorithms which decompose a galaxy spectrum into a combination of several simple stellar populations, the MF-ICA approach offers a large improvement in efficiency. To check the reliability of this spectral analysis method, three different tests are used: (1) parameter recovery for simulated galaxies, (2) comparison with parameters estimated by other methods, and (3) a consistency test on parameters derived from galaxies in the Sloan Digital Sky Survey. We find that our MF-ICA method can not only fit the observed galaxy spectra efficiently, but also accurately recover the physical parameters of galaxies. We also apply our spectral analysis method to the DEEP2 spectroscopic data and find it provides excellent fitting results for low signal-to-noise spectra.

  4. NONLINEAR DATA RECONCILIATION METHOD BASED ON KERNEL PRINCIPAL COMPONENT ANALYSIS

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    In industrial process settings, principal component analysis (PCA) is a general method for data reconciliation. However, PCA is sometimes unsuited to nonlinear feature analysis and is limited in its application to nonlinear industrial processes. Kernel PCA (KPCA) is an extension of PCA that can be used for nonlinear feature analysis. A nonlinear data reconciliation method based on KPCA is proposed. The basic idea of this method is that the original data are first mapped to a high-dimensional feature space by a nonlinear function, and PCA is implemented in that feature space. Nonlinear feature analysis is then carried out and the data are reconstructed using the kernel. The data reconciliation method based on KPCA is applied to a ternary distillation column. Simulation results show that this method can filter the noise in measurements of a nonlinear process and that the reconciled data represent the true information of the nonlinear process.
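    A small sketch of the map-then-reconstruct step, using sklearn's KernelPCA as a stand-in for the KPCA reconciliation described above (toy circular data with additive noise; the kernel choice and its parameters are illustrative):

      import numpy as np
      from sklearn.decomposition import KernelPCA

      rng = np.random.default_rng(5)
      t = rng.uniform(0, 2 * np.pi, 300)
      X = np.column_stack([np.cos(t), np.sin(t)]) + 0.05 * rng.normal(size=(300, 2))

      kpca = KernelPCA(n_components=1, kernel="rbf", gamma=2.0,
                       fit_inverse_transform=True, alpha=0.1)
      Z = kpca.fit_transform(X)          # nonlinear principal component
      X_rec = kpca.inverse_transform(Z)  # "reconciled" data with noise suppressed
      print("mean displacement:", np.mean(np.linalg.norm(X - X_rec, axis=1)))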

  5. Correlation and principal component analysis in ceramic tiles characterization

    Directory of Open Access Journals (Sweden)

    Podunavac-Kuzmanović Sanja O.

    2015-01-01

    Full Text Available The present study deals with the analysis of the characteristics of ceramic wall and floor tiles on the basis of their quality parameters: breaking force, flexural strength, absorption and shrinkage. Principal component analysis was applied in order to detect potential similarities and dissimilarities among the analyzed tile samples, as well as among the firing regimes. Correlation analysis was applied in order to find correlations among the studied quality parameters of the tiles. The obtained results indicate particular differences between the samples on the basis of the firing regimes. However, the correlation analysis shows that there is no statistically significant correlation among the quality parameters of the studied samples of wall and floor ceramic tiles. [Project of the Ministry of Science of the Republic of Serbia, No. 172012 and No. III 45008]

  6. Study on failure analysis of array chip components in IRFPA

    Science.gov (United States)

    Zhang, Xiaonan; He, Yingjie; Li, Jinping

    2016-10-01

    The infrared focal plane array (IRFPA) detector has the advantages of strong anti-interference ability and high sensitivity, and its size, weight and power dissipation are noticeably lower than those of conventional infrared imaging systems. With the development of detector manufacturing technology and the reduction in cost, IRFPA detectors have been widely used in military and commercial fields. Due to the restrictions of the array chip manufacturing process and material defects, fault phenomena such as cracking, bad pixels and abnormal output appear during testing, which restrict the performance of the infrared detector imaging system; these effects are gradually intensified as the focal plane array size expands and the pixel size shrinks. Based on the analysis of the test results for the infrared detector array chip components, the fault phenomena were classified: the main causes of chip component failure are chip cracking, bad pixels and abnormal output. The causes of failure were analyzed in depth. Based on the failure mechanisms, a series of measures, including screening materials and optimizing the manufacturing process of the array chip components, were used to improve the performance of the chip components and the test pass rate, in order to meet the requirements on detector performance.

  7. A comparative study of principal component analysis and independent component analysis in eddy current pulsed thermography data processing

    Science.gov (United States)

    Bai, Libing; Gao, Bin; Tian, Shulin; Cheng, Yuhua; Chen, Yifan; Tian, Gui Yun; Woo, W. L.

    2013-10-01

    Eddy Current Pulsed Thermography (ECPT), an emerging Non-Destructive Testing and Evaluation technique, has been applied to a wide range of materials. Lateral heat diffusion leads to decreasing temperature contrast between defective and defect-free areas. To enhance the flaw contrast, different statistical methods, such as Principal Component Analysis and Independent Component Analysis, have been proposed for processing thermography image sequences in recent years. However, direct and detailed independent comparisons of the two algorithm implementations are lacking. The aim of this article is to compare the two methods and to determine the optimal technique for flaw contrast enhancement in ECPT data. Verification experiments are conducted on the detection of artificial cracks and natural thermal fatigue cracks.

  8. Analysis of the Terraced Construction Effect on Ecological  Economic Coordinated Development in the Southwest China

    Directory of Open Access Journals (Sweden)

    Ping Liang

    2014-06-01

    Full Text Available Through correlation analysis and efficiency analysis, this paper studied the differences between slope croplands and terraces in soil, water and fertilizer conservation in Southwest China. Meanwhile, it carried out a quantitative calculation of the ecological, economic and social benefits brought to the local Hani residents by terrace construction and concluded that terraced fields can promote the mutual coordination of water and fertilizer. This is beneficial for crop growth and development and increases grain output, achieving efficient water use and stable high yields. The results show that with the improvement of the Hani ecological environment and the increase in land utilization, local ecological, economic and social benefits are significantly increased, which has laid a solid foundation for agricultural industrialization and the implementation of a sustainable agricultural development strategy in Hani. It also provides a guarantee for the coordinated ecological, economic and social development of the county.

  9. Interaction Analysis of a Two-Component System Using Nanodiscs.

    Directory of Open Access Journals (Sweden)

    Patrick Hörnschemeyer

    Full Text Available Two-component systems are the major means by which bacteria couple adaptation to environmental changes. All utilize a phosphorylation cascade from a histidine kinase to a response regulator, and some also employ an accessory protein. The system-wide signaling fidelity of two-component systems is based on preferential binding between the signaling proteins. However, information on the interaction kinetics between membrane embedded histidine kinase and its partner proteins is lacking. Here, we report the first analysis of the interactions between the full-length membrane-bound histidine kinase CpxA, which was reconstituted in nanodiscs, and its cognate response regulator CpxR and accessory protein CpxP. Using surface plasmon resonance spectroscopy in combination with interaction map analysis, the affinity of membrane-embedded CpxA for CpxR was quantified, and found to increase by tenfold in the presence of ATP, suggesting that a considerable portion of phosphorylated CpxR might be stably associated with CpxA in vivo. Using microscale thermophoresis, the affinity between CpxA in nanodiscs and CpxP was determined to be substantially lower than that between CpxA and CpxR. Taken together, the quantitative interaction data extend our understanding of the signal transduction mechanism used by two-component systems.

  10. Assessment of cluster yield components by image analysis.

    Science.gov (United States)

    Diago, Maria P; Tardaguila, Javier; Aleixos, Nuria; Millan, Borja; Prats-Montalban, Jose M; Cubero, Sergio; Blasco, Jose

    2015-04-01

    Berry weight, berry number and cluster weight are key parameters of yield estimation for the wine and table grape industry. Current yield prediction methods are destructive, labour-intensive and time-consuming. In this work, a new methodology based on image analysis was developed to determine cluster yield components in a fast and inexpensive way. Clusters of seven different red varieties of grapevine (Vitis vinifera L.) were photographed under laboratory conditions and their cluster yield components manually determined after image acquisition. Two algorithms, based on the Canny and the logarithmic image processing approaches, were tested to find the contours of the berries in the images prior to berry detection performed by means of the Hough Transform. Results were obtained in two ways: by analysing either a single image of the cluster or four images per cluster from different orientations. The best results (R(2) between 69% and 95% in berry detection and between 65% and 97% in cluster weight estimation) were achieved using four images and the Canny algorithm. The capability of the image-based model to predict berry weight was 84%. The new and low-cost methodology presented here enabled the assessment of cluster yield components, saving time and providing inexpensive information in comparison with current manual methods. © 2014 Society of Chemical Industry.
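    The berry-detection step can be sketched with OpenCV's circular Hough transform; the file name and all parameter values below are hypothetical, not the values tuned in the paper:

      import cv2
      import numpy as np

      image = cv2.imread("cluster.jpg")            # hypothetical cluster photograph
      if image is None:
          raise SystemExit("cluster.jpg not found")
      gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
      gray = cv2.medianBlur(gray, 5)               # suppress specular noise

      # Circular Hough transform; param1 is the upper Canny edge threshold used
      # internally, so the contour-finding step is folded into this call.
      circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=15,
                                 param1=150, param2=30, minRadius=8, maxRadius=30)

      if circles is not None:
          berries = np.round(circles[0]).astype(int)   # (x, y, radius) per berry
          print("berries detected:", len(berries))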

  11. Identification of a cis-regulatory element by transient analysis of co-ordinately regulated genes

    Directory of Open Access Journals (Sweden)

    Allan Andrew C

    2008-07-01

    Full Text Available Abstract Background Transcription factors (TFs) co-ordinately regulate target genes that are dispersed throughout the genome. This co-ordinate regulation is achieved, in part, through the interaction of transcription factors with conserved cis-regulatory motifs that are in close proximity to the target genes. While much is known about the families of transcription factors that regulate gene expression in plants, there are few well-characterised cis-regulatory motifs. In Arabidopsis, over-expression of the MYB transcription factor PAP1 (PRODUCTION OF ANTHOCYANIN PIGMENT 1) leads to transgenic plants with elevated anthocyanin levels due to the co-ordinated up-regulation of genes in the anthocyanin biosynthetic pathway. In addition to the anthocyanin biosynthetic genes, a number of unassociated genes also change in expression level. This may be a direct or indirect consequence of the over-expression of PAP1. Results Oligo array analysis of PAP1 over-expressing Arabidopsis plants identified genes co-ordinately up-regulated in response to the elevated expression of this transcription factor. Transient assays on the promoter regions of 33 of these up-regulated genes identified eight promoter fragments that were transactivated by PAP1. Bioinformatic analysis of these promoters revealed a common cis-regulatory motif that we showed is required for PAP1-dependent transactivation. Conclusion Co-ordinated gene regulation by individual transcription factors is a complex collection of both direct and indirect effects. Transient transactivation assays provide a rapid method to distinguish direct target genes from indirect ones. Bioinformatic analysis of the promoters of these direct target genes is able to locate motifs that are common to this subset of promoters, which is impossible to identify with the larger set of direct and indirect target genes. While this type of analysis does not prove a direct interaction between protein and DNA...

  12. State Inspection for Transmission Lines Based on Independent Component Analysis

    Institute of Scientific and Technical Information of China (English)

    REN Li-jia; JIANG Xiu-chen; SHENG Ge-hao; YANG Wei-wei

    2009-01-01

    Monitoring transmission towers is of great importance for preventing severe theft of tower components and ensuring the reliability and safety of power grid operation. Independent component analysis (ICA) is a method for finding underlying factors or components in multivariate statistical data based on dimension reduction, and it is applicable to extracting non-stationary signals. In this paper, FastICA based on negentropy is presented to effectively extract and separate vibration signals caused by human activity. A new method combining the empirical mode decomposition (EMD) technique with an adaptive threshold is applied to extract the vibration pulses and suppress interference signals. Practical tests demonstrate that the proposed method is effective in separating and extracting the vibration signals.

  13. A principal components analysis of Rorschach aggression and hostility variables.

    Science.gov (United States)

    Katko, Nicholas J; Meyer, Gregory J; Mihura, Joni L; Bombel, George

    2010-11-01

    We examined the structure of 9 Rorschach variables related to hostility and aggression (Aggressive Movement, Morbid, Primary Process Aggression, Secondary Process Aggression, Aggressive Content, Aggressive Past, Strong Hostility, Lesser Hostility) in a sample of medical students (N = 225) from the Johns Hopkins Precursors Study (The Johns Hopkins University, 1999). Principal components analysis revealed 2 dimensions accounting for 58% of the total variance. These dimensions extended previous findings for a 2-component model of Rorschach aggressive imagery that had been identified using just 5 or 6 marker variables (Baity & Hilsenroth, 1999; Liebman, Porcerelli, & Abell, 2005). In light of this evidence, we draw an empirical link between the historical research literature and current studies of Rorschach aggression and hostility that helps organize their findings. We also offer suggestions for condensing the array of aggression-related measures to simplify Rorschach aggression scoring.

  14. Multigroup Moderation Test in Generalized Structured Component Analysis

    Directory of Open Access Journals (Sweden)

    Angga Dwi Mulyanto

    2016-05-01

    Full Text Available Generalized Structured Component Analysis (GSCA) is an alternative method in structural modeling using alternating least squares. GSCA can be used for complex analyses, including multigroup analysis. GSCA can be run with free software called GeSCA, but GeSCA provides no multigroup moderation test to compare effects between groups. In this research we propose using the t test from PLS for testing multigroup moderation in GSCA. The t test requires only the sample size, the estimated path coefficient, and the standard error of each group, all of which are already available in the GeSCA output, and the formula is simple, so the analysis takes the user little time.
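    For reference, a common parametric form of such a multigroup t statistic in the PLS literature (often attributed to Keil et al.; stated here as an assumption, since the abstract does not spell the formula out) compares path coefficients $\hat{\beta}_1$ and $\hat{\beta}_2$ estimated in groups of sizes $n_1$ and $n_2$ with standard errors $se_1$ and $se_2$:

      t = \frac{\hat{\beta}_1 - \hat{\beta}_2}{\sqrt{\frac{(n_1-1)^2}{n_1+n_2-2}\,se_1^2 + \frac{(n_2-1)^2}{n_1+n_2-2}\,se_2^2}\;\sqrt{\frac{1}{n_1}+\frac{1}{n_2}}}

    with $n_1 + n_2 - 2$ degrees of freedom.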

  15. Coordinated regulation of the ESCRT-III component CHMP4C by the chromosomal passenger complex and centralspindlin during cytokinesis

    Science.gov (United States)

    Capalbo, Luisa; Mela, Ioanna; Abad, Maria Alba; Jeyaprakash, A. Arockia; Edwardson, J. Michael

    2016-01-01

    The chromosomal passenger complex (CPC), composed of Aurora B kinase, Borealin, Survivin and INCENP, surveys the fidelity of genome segregation throughout cell division. The CPC has been proposed to prevent polyploidy by controlling the final separation (known as abscission) of the two daughter cells via regulation of the ESCRT-III component CHMP4C. The molecular details are, however, still unclear. Using atomic force microscopy, we show that CHMP4C binds to and remodels membranes in vitro. Borealin prevents the association of CHMP4C with membranes, whereas Aurora B interferes with CHMP4C's membrane remodelling activity. Moreover, we show that CHMP4C phosphorylation is not required for its assembly into spiral filaments at the abscission site and that two distinctly localized pools of phosphorylated CHMP4C exist during cytokinesis. We also characterized the CHMP4C interactome in telophase cells and show that the centralspindlin complex associates preferentially with unphosphorylated CHMP4C in cytokinesis. Our findings indicate that gradual dephosphorylation of CHMP4C triggers a 'relay' mechanism between the CPC and centralspindlin that regulates the timely distribution and activation of CHMP4C for the execution of abscission. PMID:27784789

  16. The Conserved Actinobacterial Two-Component System MtrAB Coordinates Chloramphenicol Production with Sporulation in Streptomyces venezuelae NRRL B-65442

    Directory of Open Access Journals (Sweden)

    Nicolle F. Som

    2017-06-01

    Full Text Available Streptomyces bacteria make numerous secondary metabolites, including half of all known antibiotics. Production of antibiotics is usually coordinated with the onset of sporulation, but the cross-regulation of these processes is not fully understood. This is important because most Streptomyces antibiotics are produced at low levels or not at all under laboratory conditions, which makes large-scale production of these compounds very challenging. Here, we characterize the highly conserved actinobacterial two-component system MtrAB in the model organism Streptomyces venezuelae and provide evidence that it coordinates production of the antibiotic chloramphenicol with sporulation. MtrAB is known to coordinate DNA replication and cell division in Mycobacterium tuberculosis, where TB-MtrA is essential for viability but MtrB is dispensable. We deleted mtrB in S. venezuelae and this resulted in a global shift in the metabolome, including constitutive, higher-level production of chloramphenicol. We found that chloramphenicol is detectable in the wild-type strain, but only at very low levels and only after it has sporulated. ChIP-seq showed that MtrA binds upstream of DNA replication and cell division genes and genes required for chloramphenicol production. dnaA, dnaN, oriC, and wblE (whiB1) are DNA binding targets for MtrA in both M. tuberculosis and S. venezuelae. Intriguingly, over-expression of TB-MtrA and gain-of-function TB- and Sv-MtrA proteins in S. venezuelae also switched on higher-level production of chloramphenicol. Given the conservation of MtrAB, these constructs might be useful tools for manipulating antibiotic production in other filamentous actinomycetes.

  17. Principal components null space analysis for image and video classification.

    Science.gov (United States)

    Vaswani, Namrata; Chellappa, Rama

    2006-07-01

    We present a new classification algorithm, principal component null space analysis (PCNSA), which is designed for classification problems like object recognition where different classes have unequal and nonwhite noise covariance matrices. PCNSA first obtains a principal components subspace (PCA space) for the entire data. In this PCA space, it finds for each class "i" an Mi-dimensional subspace along which the class's intraclass variance is smallest. We call this subspace an approximate null space (ANS) since the lowest variance is usually "much smaller" than the highest. A query is classified into class "i" if its distance from the class's mean in the class's ANS is minimal. We derive upper bounds on the classification error probability of PCNSA and use these expressions to compare the classification performance of PCNSA with that of subspace linear discriminant analysis (SLDA). We propose a practical modification of PCNSA called progressive-PCNSA that also detects "new" (untrained) classes. Finally, we provide an experimental comparison of PCNSA and progressive-PCNSA with SLDA and PCA, and also with other classification algorithms (linear SVMs, kernel PCA, kernel discriminant analysis, and kernel SLDA), for object recognition and face recognition under large pose/expression variation. We also show applications of PCNSA to two classification problems in video: an action retrieval problem and abnormal activity detection.
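    A compact sketch of the PCNSA idea in NumPy (dimensions and data are arbitrary; this is a reading aid, not the authors' implementation): project all data onto a global PCA space, keep for each class the directions of smallest within-class variance, and classify a query by its distance to each class mean inside that approximate null space.

      import numpy as np

      def fit_pcnsa(X, y, n_pca=10, n_ans=3):
          mu = X.mean(axis=0)
          _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
          P = Vt[:n_pca].T                      # global PCA basis
          model = {}
          for c in np.unique(y):
              Z = (X[y == c] - mu) @ P
              zc = Z.mean(axis=0)
              _, _, Wt = np.linalg.svd(Z - zc, full_matrices=False)
              model[c] = (zc, Wt[-n_ans:].T)    # mean + smallest-variance directions
          return mu, P, model

      def predict_pcnsa(x, mu, P, model):
          z = (x - mu) @ P
          dists = {c: np.linalg.norm((z - zc) @ W) for c, (zc, W) in model.items()}
          return min(dists, key=dists.get)

      rng = np.random.default_rng(6)
      X = np.vstack([rng.normal(0, 1, (50, 20)), rng.normal(1, 2, (50, 20))])
      y = np.array([0] * 50 + [1] * 50)
      mu, P, model = fit_pcnsa(X, y)
      print("predicted class of first sample:", predict_pcnsa(X[0], mu, P, model))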

  18. Principal component analysis of FDG PET in amnestic MCI

    Energy Technology Data Exchange (ETDEWEB)

    Nobili, Flavio; Girtler, Nicola; Brugnolo, Andrea; Dessi, Barbara; Rodriguez, Guido [University of Genoa, Clinical Neurophysiology, Department of Endocrinological and Medical Sciences, Genoa (Italy); S. Martino Hospital, Alzheimer Evaluation Unit, Genoa (Italy); S. Martino Hospital, Head-Neck Department, Genoa (Italy); Salmaso, Dario [CNR, Institute of Cognitive Sciences and Technologies, Rome (Italy); CNR, Institute of Cognitive Sciences and Technologies, Padua (Italy); Morbelli, Silvia [University of Genoa, Nuclear Medicine Unit, Department of Internal Medicine, Genoa (Italy); Piccardo, Arnoldo [Galliera Hospital, Nuclear Medicine Unit, Department of Imaging Diagnostics, Genoa (Italy); Larsson, Stig A. [Karolinska Hospital, Department of Nuclear Medicine, Stockholm (Sweden); Pagani, Marco [CNR, Institute of Cognitive Sciences and Technologies, Rome (Italy); CNR, Institute of Cognitive Sciences and Technologies, Padua (Italy); Karolinska Hospital, Department of Nuclear Medicine, Stockholm (Sweden)

    2008-12-15

    The purpose of the study is to evaluate the combined accuracy of episodic memory performance and 18F-FDG PET in identifying patients with amnestic mild cognitive impairment (aMCI) converting to Alzheimer's disease (AD), aMCI non-converters, and controls. Thirty-three patients with aMCI and 15 controls (CTR) were followed up for a mean of 21 months. Eleven patients developed AD (MCI/AD) and 22 remained with aMCI (MCI/MCI). 18F-FDG PET volumetric regions of interest underwent principal component analysis (PCA) that identified 12 principal components (PC), expressed by coarse component scores (CCS). Discriminant analysis was performed using the significant PCs and episodic memory scores. PCA highlighted relative hypometabolism in PC5, including the bilateral posterior cingulate and left temporal pole, and in PC7, including the bilateral orbitofrontal cortex, in both MCI/MCI and MCI/AD vs CTR. PC5 itself plus PC12, including the left lateral frontal cortex (LFC: BAs 44, 45, 46, 47), were significantly different between MCI/AD and MCI/MCI. By a three-group discriminant analysis, CTR were more accurately identified by PET-CCS + delayed recall score (100%), MCI/MCI by PET-CCS + either immediate or delayed recall scores (91%), while MCI/AD were identified by PET-CCS alone (82%). PET increased by 25% the correct allocations achieved by memory scores, while memory scores increased by 15% the correct allocations achieved by PET. Combining memory performance and 18F-FDG PET yielded a higher accuracy than either tool alone in identifying CTR and MCI/MCI. The PC containing the bilateral posterior cingulate and left temporal pole was the hallmark of MCI/MCI patients, while the PC including the left LFC was the hallmark of conversion to AD. (orig.)

  19. PRINCIPAL COMPONENT ANALYSIS (PCA) AND ITS APPLICATION WITH SPSS

    Directory of Open Access Journals (Sweden)

    Hermita Bus Umar

    2009-03-01

    Full Text Available PCA (Principal Component Analysis) comprises statistical techniques applied to a single set of variables when the researcher is interested in discovering which variables in the set form coherent subsets that are relatively independent of one another. Variables that are correlated with one another but largely independent of other subsets of variables are combined into factors. The goal of PCA is to determine how much of each variable is explained by each dimension. Steps in PCA include selecting and measuring a set of variables, preparing the correlation matrix, extracting a set of factors from the correlation matrix, rotating the factors to increase interpretability, and interpreting the result.

  20. Eliminating the Influence of Harmonic Components in Operational Modal Analysis

    DEFF Research Database (Denmark)

    Jacobsen, Niels-Jørgen; Andersen, Palle; Brincker, Rune

    2007-01-01

    Operational modal analysis is used for determining the modal parameters of structures for which the input forces cannot be measured. However, the algorithms used assume that the input forces are stochastic in nature. While this is often the case for civil engineering structures, mechanical... ...on the well-known Enhanced Frequency Domain Decomposition (EFDD) technique for eliminating these harmonic components in the modal parameter extraction process. For assessing the quality of the method, various experiments were carried out where the results were compared with those obtained with pure stochastic...

  1. Independent Component Analysis to Detect Clustered Microcalcification Breast Cancers

    Directory of Open Access Journals (Sweden)

    R. Gallardo-Caballero

    2012-01-01

    ...current reproducible studies on the same mammogram set. This proposal is mainly based on the use of image features extracted by independent component analysis, but we also study the inclusion of the patient's age as a non-image feature which requires no human expertise. Our system achieves an average of 2.55 false positives per image at a sensitivity of 81.8% and 4.45 at a sensitivity of 91.8% in diagnosing the BCRP_CALC_1 subset of DDSM.

  2. Nonlinear Principal Component Analysis Using Strong Tracking Filter

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The paper analyzes the problem of blind source separation (BSS) based on the nonlinear principal component analysis (NPCA) criterion. An adaptive algorithm based on the strong tracking filter (STF) was developed, which is immune to system model mismatches. Simulations demonstrate that the algorithm converges quickly and has satisfactory steady-state accuracy. The Kalman filtering algorithm and the recursive least-squares type algorithm are shown to be special cases of the STF algorithm. Since the forgetting factor is adaptively updated by adjustment of the Kalman gain, the STF scheme provides more powerful tracking capability than the Kalman filtering and recursive least-squares algorithms.

  3. Advances in independent component analysis and learning machines

    CERN Document Server

    Bingham, Ella; Laaksonen, Jorma; Lampinen, Jouko

    2015-01-01

    In honour of Professor Erkki Oja, one of the pioneers of Independent Component Analysis (ICA), this book reviews key advances in the theory and application of ICA, as well as its influence on signal processing, pattern recognition, machine learning, and data mining. Examples of topics which have developed from the advances of ICA and which are covered in the book are: a unifying probabilistic model for PCA and ICA; optimization methods for matrix decompositions; insights into the FastICA algorithm; unsupervised deep learning; machine vision and image retrieval; and a review of developments in the t...

  4. Nuclear analysis techniques as a component of thermoluminescence dating

    Energy Technology Data Exchange (ETDEWEB)

    Prescott, J.R.; Hutton, J.T.; Habermehl, M.A. [Adelaide Univ., SA (Australia); Van Moort, J. [Tasmania Univ., Sandy Bay, TAS (Australia)

    1996-12-31

    In luminescence dating, an age is found by first measuring the dose accumulated since the event being dated, then dividing by the annual dose rate. Analyses of minor and trace elements performed by nuclear techniques have long formed an essential component of dating. Results from some Australian sites are reported to illustrate the application of nuclear techniques of analysis in this context. In particular, a variety of methods for finding dose rates are compared, an example is given of a site where radioactive disequilibrium is significant, and a problem which was not resolved by nuclear techniques is briefly summarized. 5 refs., 2 tabs.

  5. Independent component analysis applications on THz sensing and imaging

    Science.gov (United States)

    Balci, Soner; Maleski, Alexander; Nascimento, Matheus Mello; Philip, Elizabath; Kim, Ju-Hyung; Kung, Patrick; Kim, Seongsin M.

    2016-05-01

    We report an Independent Component Analysis (ICA) technique applied to THz spectroscopy and imaging to achieve blind source separation. A reference water vapor absorption spectrum was extracted via ICA; ICA was then applied to a THz spectroscopic image in order to remove the absorption of water molecules from each pixel. For this purpose, silica gel was chosen as the material of interest for its strong water absorption. The resulting image clearly showed that ICA effectively removed the water content from the detected signal, allowing us to image the silica gel beads distinctly even though they were totally embedded in water before ICA was applied.

  6. Independent component analysis of Alzheimer's DNA microarray gene expression data

    Directory of Open Access Journals (Sweden)

    Vanderburg Charles R

    2009-01-01

    Full Text Available Abstract Background Gene microarray technology is an effective tool to investigate the simultaneous activity of multiple cellular pathways across hundreds to thousands of genes. However, because the colossal amounts of data generated by DNA microarray technology are usually complex, noisy, high-dimensional, and often hindered by low statistical power, their exploitation is difficult. To overcome these problems, two kinds of unsupervised analysis methods for microarray data, principal component analysis (PCA) and independent component analysis (ICA), have been developed to accomplish the task. PCA projects the data into a new space spanned by the principal components, which are mutually orthonormal. The constraints of mutual orthogonality and the second-order statistics used within PCA algorithms, however, may not be suited to the biological systems studied; extracting and characterizing the most informative features of biological signals requires higher-order statistics. Results ICA is one of the unsupervised algorithms that can extract higher-order statistical structures from data and has been applied to DNA microarray gene expression data analysis. We applied the FastICA method to DNA microarray gene expression data from Alzheimer's disease (AD) hippocampal tissue samples and performed consequent gene clustering. Experimental results showed that the ICA method can improve the clustering results for AD samples and identify significant genes. More than 50 significant genes with high expression levels in severe AD were extracted, representing immunity-related proteins, metal-related proteins, membrane proteins, lipoproteins, neuropeptides, cytoskeleton proteins, cellular binding proteins, and ribosomal proteins. Within the aforementioned categories, our method also found 37 significant genes with low expression levels. Moreover, it is worth noting that some oncogenes and phosphorylation-related proteins are expressed at low levels. In...
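    A minimal FastICA sketch on an expression-like matrix (random stand-in data, not the AD data set; rows are samples and columns are genes), showing the unmixing-then-cluster idea described above:

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(7)
      S = rng.laplace(size=(10, 5))            # 5 super-Gaussian latent modes
      A = rng.normal(size=(5, 500))            # gene loadings
      X = S @ A + 0.1 * rng.normal(size=(10, 500))

      ica = FastICA(n_components=5, random_state=0)
      sources = ica.fit_transform(X.T)         # per-gene source signals
      # Genes with extreme weight on a component form a candidate cluster
      top_genes = np.argsort(np.abs(sources[:, 0]))[-20:]
      print("top genes for component 0:", top_genes)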

  7. Illustration and analysis of a coordinated approach to an effective forensic trace evidence capability.

    Science.gov (United States)

    Stoney, David A; Stoney, Paul L

    2015-08-01

    An effective trace evidence capability is defined as one that exploits all useful particle types, chooses appropriate technologies to do so, and directly integrates the findings with case-specific problems. Limitations of current approaches inhibit the attainment of an effective capability and it has been strongly argued that a new approach to trace evidence analysis is essential. A hypothetical case example is presented to illustrate and analyze how forensic particle analysis can be used as a powerful practical tool in forensic investigations. The specifics in this example, including the casework investigation, laboratory analyses, and close professional interactions, provide focal points for subsequent analysis of how this outcome can be achieved. This leads to the specification of five key elements that are deemed necessary and sufficient for effective forensic particle analysis: (1) a dynamic forensic analytical approach, (2) concise and efficient protocols addressing particle combinations, (3) multidisciplinary capabilities of analysis and interpretation, (4) readily accessible external specialist resources, and (5) information integration and communication. A coordinating role, absent in current approaches to trace evidence analysis, is essential to achieving these elements. However, the level of expertise required for the coordinating role is readily attainable. Some additional laboratory protocols are also essential. However, none of these has greater staffing requirements than those routinely met by existing forensic trace evidence practitioners. The major challenges that remain are organizational acceptance, planning and implementation.

  8. Student construction of differential length elements in multivariable coordinate systems: A symbolic forms analysis

    Science.gov (United States)

    Thompson, John; Schermerhorn, Benjamin

    2017-01-01

    Analysis of properties of physical quantities represented by vector fields often involves symmetries and spatial relationships best expressed in non-Cartesian coordinate systems. Many important quantities are determined by integrals that can involve multivariable vector differential quantities. Four pairs of students in junior-level Electricity and Magnetism (E&M) were interviewed to investigate their understanding of the structure of non-Cartesian coordinate systems and the associated differential elements. Pairs were asked to construct differential length elements for an unconventional spherical coordinate system. In order to explore how students' conceptual understanding interacts with their understanding of the specific structures of these expressions, a symbolic forms framework was used. Analysis of student reasoning revealed both known and novel forms as well as the general progression of students' use and combination of symbol templates during the construction process. Each group invoked and combined symbolic forms in a similar sequence. Difficulties with the construction of expressions seem to be related almost exclusively to the conceptual schema (e.g., neglecting the role of projection) rather than to the symbol templates. Supported in part by NSF Grant PHY-1405726.
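    For context, the conventional spherical-coordinate result that such construction tasks build toward (standard physics convention with polar angle $\theta$ and azimuthal angle $\phi$; the interviews used an unconventional variant of this system) is

      d\vec{\ell} = dr\,\hat{r} + r\,d\theta\,\hat{\theta} + r\sin\theta\,d\phi\,\hat{\phi}

    where the factors $r$ and $r\sin\theta$ are precisely the projection terms that, per the analysis above, students tend to neglect.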

  9. Hamiltonian Analysis of 3-Dimensional Connection Dynamics in Bondi-like Coordinates

    CERN Document Server

    Huang, Chao-Guang

    2016-01-01

    The Hamiltonian analysis for a 3-dimensional $SO(1,1)\times T_+$-connection dynamics is conducted in a Bondi-like coordinate system. A null coframe with 5 independent variables and 9 connection coefficients are treated as basic configuration variables. All constraints and their consistency conditions, as well as the equations of motion, for the system are presented. There is no physical degree of freedom in the system, as expected. The Bañados-Teitelboim-Zanelli spacetime is used as an example to check the analysis.

  10. Analysis of Fission Products on the AGR-1 Capsule Components

    Energy Technology Data Exchange (ETDEWEB)

    Paul A. Demkowicz; Jason M. Harp; Philip L. Winston; Scott A. Ploger

    2013-03-01

    The components of the AGR-1 irradiation capsules were analyzed to determine the retained inventory of fission products in order to determine the extent of in-pile fission product release from the fuel compacts. This includes analysis of (i) the metal capsule components, (ii) the graphite fuel holders, (iii) the graphite spacers, and (iv) the gas exit lines. The fission products most prevalent in the components were Ag-110m, Cs-134, Cs-137, Eu-154, and Sr-90, and the most common locations were the metal capsule components and the graphite fuel holders. Gamma scanning of the graphite fuel holders was also performed to determine the spatial distribution of Ag-110m and radiocesium. Silver was released from the fuel components in significant fractions. The total Ag-110m inventory found in the capsules ranged from 1.2×10⁻² (Capsule 3) to 3.8×10⁻¹ (Capsule 6). Ag-110m was not distributed evenly in the graphite fuel holders, but tended to concentrate at the axial ends of the graphite holders in Capsules 1 and 6 (located at the top and bottom of the test train) and near the axial center in Capsules 2, 3, and 5 (in the center of the test train). The Ag-110m further tended to be concentrated around fuel stacks 1 and 3, the two stacks facing the ATR reactor core and the location of higher burnup, neutron fluence, and temperatures compared with Stack 2. Detailed correlation of silver release with fuel type and irradiation temperature is problematic at the capsule level due to the large range of temperatures experienced by individual fuel compacts in each capsule. A comprehensive Ag-110m mass balance for the capsules was performed using measured inventories of individual compacts and the inventory on the capsule components. For most capsules, the mass balance was within 11% of the predicted inventory. The Ag-110m release from individual compacts often exhibited a very large range within a particular capsule.

  12. Coordinated analysis and quantification of sedimentary fluxes and budgets in cold environments: The SEDIBUD Programme

    Science.gov (United States)

    Beylich, Achim A.; Lamoureux, Scott F.

    2010-05-01

    Amplified climate change and the ecological sensitivity of polar and high-altitude cold environments have been highlighted as a key global environmental issue. Projected climate change in cold climate environments is expected to alter melt season duration and intensity, along with the number of extreme rainfall events, total annual precipitation and the balance between snowfall and rainfall. Similarly, changes to the thermal balance are expected to reduce the extent of permafrost and seasonal ground frost and increase active layer depth. These effects will undoubtedly change surface environments in cold environments and alter fluxes of sediments, nutrients and solutes, but the absence of data and coordinated analysis needed to understand the sensitivity of the surface environment is acute in cold climate environments. The SEDIBUD (Sediment Budgets in Cold Environments) Programme of the International Association of Geomorphologists (I.A.G./A.I.G.) was formed in 2005 to address this key knowledge gap. SEDIBUD currently has about 400 members worldwide, and the Steering Committee of this international programme is composed of ten scientists from nine different countries. The central research goal of this global group of scientists is to assess the contemporary sedimentary fluxes in cold climates, with emphasis on both particulate and dissolved components. Research carried out at the currently 38 defined SEDIBUD key test sites varies by programme, logistics and available resources, but typically represents interdisciplinary collaborations of geomorphologists, hydrologists, ecologists, permafrost scientists and glaciologists with different levels of detail. SEDIBUD key test sites provide data on annual climate conditions, total runoff and particulate and dissolved fluxes as well as information on other relevant surface processes. A number of selected key test sites are providing high-resolution data on climatic conditions, runoff and fluvial fluxes, which in addition to the

  13. Nonlinear independent component analysis: Existence and uniqueness results.

    Science.gov (United States)

    Hyvärinen, Aapo; Pajunen, Petteri

    1999-04-01

    The question of existence and uniqueness of solutions for nonlinear independent component analysis is addressed. It is shown that if the space of mixing functions is not limited, there always exists an infinity of solutions. In particular, it is shown how to construct parameterized families of solutions. The indeterminacies involved are not trivial, in contrast to the linear case. Next, it is shown how to utilize some results of complex analysis to obtain uniqueness of solutions. We show that for two dimensions, the solution is unique up to a rotation if the mixing function is constrained to be a conformal mapping, together with some other assumptions. We also conjecture that the solution is strictly unique except in some degenerate cases, as the indeterminacy implied by the rotation is essentially similar to estimating the model of linear ICA.

  14. Structure Learning by Pruning in Independent Component Analysis

    DEFF Research Database (Denmark)

    Kjems, Andreas; Hansen, Lars Kai

    2006-01-01

    We discuss pruning as a means of structure learning in independent component analysis. Sparse models are attractive in both signal processing and in analysis of abstract data; they can assist model interpretation and generalizability and reduce computation. We derive the relevant saliency expression and compare with magnitude based pruning and Bayesian sparsification. We show in simulations that pruning is able to identify underlying sparse structures without prior knowledge on the degree of sparsity. We find that for ICA, magnitude based pruning is as efficient as saliency based methods and Bayesian methods, for both small and large samples. The Bayesian information criterion (BIC) seems to outperform both AIC and test sets as tools for determining the optimal degree of sparsity.

  15. Structure learning by pruning in independent component analysis

    DEFF Research Database (Denmark)

    Nielsen, Andreas Brinch; Hansen, Lars Kai

    2008-01-01

    We discuss pruning as a means of structure learning in independent component analysis (ICA). Learning the structure is attractive in both signal processing and in analysis of abstract data, where it can assist model interpretation, generalizability and reduce computation. We derive the relevant saliency expressions and compare with magnitude based pruning and Bayesian sparsification. We show in simulations that pruning is able to identify underlying structures without prior knowledge on the dimensionality of the model. We find that, for ICA, magnitude based pruning is as efficient as saliency based methods and Bayesian methods, for both small and large samples. The Bayesian information criterion (BIC) seems to outperform both AIC and test sets as tools for determining the optimal dimensionality.
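
    To make the magnitude-based pruning idea concrete, here is a minimal sketch using scikit-learn's FastICA as a stand-in estimator. The sparse mixing matrix, data, and 10% threshold are illustrative assumptions; the papers' saliency- and BIC-based model selection is not reproduced here.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(0)

      # Two non-Gaussian sources mixed by a sparse 2x2 matrix (one zero entry).
      S = rng.laplace(size=(2000, 2))
      A_true = np.array([[1.0, 0.0],
                         [0.6, 1.0]])
      X = S @ A_true.T

      ica = FastICA(n_components=2, random_state=0)
      ica.fit(X)
      A_est = ica.mixing_  # estimated mixing matrix (up to sign/permutation)

      # Magnitude-based pruning: zero out entries that are small relative to
      # the largest coefficient; the 10% threshold is an arbitrary choice.
      threshold = 0.1 * np.abs(A_est).max()
      A_pruned = np.where(np.abs(A_est) > threshold, A_est, 0.0)
      print(A_pruned)  # ideally recovers the sparse structure of A_true

    In the papers' setting, candidate pruned models would then be compared by BIC (or AIC/test-set error) to pick the degree of sparsity, rather than by a fixed threshold.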

  16. Principal Component Analysis on Semen Quality among Chinese Young Men

    Institute of Scientific and Technical Information of China (English)

    Jun-qing WU; Er-sheng GAO; Jian-guo TAO; Cui-ling LIANG; Wenying LI; Qiu-ying YANG; Kang-shou YAO; Wei-qun LU; Lu CHEN

    2003-01-01

    Objective: To understand the current status of semen quality among Chinese young men and its influential factors in China, and to explore its evaluation index. Methods: A total of 562 healthy male volunteers were recruited during their premarital examinations in the MCH centers of seven provincial and municipal regions; descriptive and principal component analyses were used to analyze the data. Results: The mean values were: semen volume 2.61±1.10 mL, sperm density 64.47±34.59×10⁶/mL, percentage of sperm forward progression 59.89%±17.11%, percentage of sperm viability 77.19%±11.87%, and percentage of normal sperm morphology 78.23%±9.15%. The first principal component function is Z1 = -8.51254 + 0.00136X1' + 0.03192X2' + 0.04352X3' + 0.03984X4', which is closely related to the percentage of sperm viability (X3), percentage of sperm forward progression (X2), and percentage of normal sperm morphology (X4). The second principal component function is Z2 = 0.49192 + 0.08080X1 - 0.00058X2 - 0.00510X3 - 0.01807X4, which depends on the total sperm count (X1). Conclusion: Only 42.3% of subjects met all the common WHO standards of semen quality. The multiple analysis of Z1 showed that the highest Z1 values are among subjects from Guizhou, workers, or town residents. Multiple analysis of Z2 showed that the older the age at first sexual impulse and the longer the period of sexual abstinence, the greater the quantity of sperm; and the more sexual activity subjects had, the smaller the quantity of sperm.
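
    For illustration only, the two component functions quoted above can be evaluated directly once the X values are available. The inputs below are hypothetical, and the primed variables and unit conventions are assumed to be the transformed parameters as defined in the paper.

      def z1(x1p, x2p, x3p, x4p):
          # First principal component function as reported in the abstract.
          return -8.51254 + 0.00136 * x1p + 0.03192 * x2p + 0.04352 * x3p + 0.03984 * x4p

      def z2(x1, x2, x3, x4):
          # Second principal component function as reported in the abstract.
          return 0.49192 + 0.08080 * x1 - 0.00058 * x2 - 0.00510 * x3 - 0.01807 * x4

      # Hypothetical subject; values are placeholders, not data from the study.
      print(z1(150.0, 60.0, 77.0, 78.0), z2(150.0, 60.0, 77.0, 78.0))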

  17. Independent component analysis classification of laser induced breakdown spectroscopy spectra

    Energy Technology Data Exchange (ETDEWEB)

    Forni, Olivier, E-mail: olivier.forni@irap.omp.eu [Université de Toulouse, UPS-OMP, Institut de Recherche en Astrophysique et Planétologie, Toulouse (France); CNRS, IRAP, 9, av. Colonel Roche, BP 44346, F-31028 Cedex 4, Toulouse (France)]; Maurice, Sylvestre, E-mail: sylvestre.maurice@irap.omp.eu [Université de Toulouse, UPS-OMP, Institut de Recherche en Astrophysique et Planétologie, Toulouse (France); CNRS, IRAP, 9, av. Colonel Roche, BP 44346, F-31028 Cedex 4, Toulouse (France)]; Gasnault, Olivier, E-mail: olivier.gasnault@irap.omp.eu [Université de Toulouse, UPS-OMP, Institut de Recherche en Astrophysique et Planétologie, Toulouse (France); CNRS, IRAP, 9, av. Colonel Roche, BP 44346, F-31028 Cedex 4, Toulouse (France)]; Wiens, Roger C., E-mail: rwiens@lanl.gov [Space Remote Sensing, Los Alamos National Laboratory, Los Alamos, NM 87544 (United States)]; Cousin, Agnès, E-mail: acousin@lanl.gov [Université de Toulouse, UPS-OMP, Institut de Recherche en Astrophysique et Planétologie, Toulouse (France); CNRS, IRAP, 9, av. Colonel Roche, BP 44346, F-31028 Cedex 4, Toulouse (France); Chemical Division, Los Alamos National Laboratory, Los Alamos, NM 87544 (United States)]; Clegg, Samuel M., E-mail: sclegg@lanl.gov [Chemical Division, Los Alamos National Laboratory, Los Alamos, NM 87544 (United States)]; Sirven, Jean-Baptiste, E-mail: jean-baptiste.sirven@cea.f [CEA Saclay, DEN/DPC/SCP, 91191 Cedex, Gif sur Yvette (France)]; Lasue, Jérémie, E-mail: jeremie.lasue@irap.omp.eu [Université de Toulouse, UPS-OMP, Institut de Recherche en Astrophysique et Planétologie, Toulouse (France); CNRS, IRAP, 9, av. Colonel Roche, BP 44346, F-31028 Cedex 4, Toulouse (France)]

    2013-08-01

    The ChemCam instrument on board the Mars Science Laboratory (MSL) rover uses the laser-induced breakdown spectroscopy (LIBS) technique to remotely analyze Martian rocks. It retrieves spectra up to a distance of seven meters in order to qualitatively and quantitatively analyze the sampled rocks. Like any field application, on-site measurements by LIBS are altered by diverse matrix effects which induce signal variations that are specific to the nature of the sample. Qualitative aspects remain to be studied, particularly LIBS sample identification to determine which samples are of interest for further analysis by ChemCam and other rover instruments. This can be performed with the help of different chemometric methods that model the spectral variance in order to identify the rock from its spectrum. In this paper we test independent component analysis (ICA) for rock classification by remote LIBS. We show that, using measures of distance in ICA space, namely the Manhattan and the Mahalanobis distances, we can efficiently classify spectra of an unknown rock. The Mahalanobis distance gives overall better performance and is easier to manage than the Manhattan distance, for which the determination of the cut-off distance is not easy. However, these two techniques are complementary and their analytical performance will improve with time during MSL operations as the quantity of available Martian spectra grows. The analysis accuracy and performance will benefit from a combination of the two approaches. - Highlights: • We use a novel independent component analysis method to classify LIBS spectra. • We demonstrate the usefulness of ICA. • We report the performances of the ICA classification. • We compare it to other classical classification schemes.
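
    A rough sketch of the classification scheme described, with synthetic spectra standing in for LIBS data and scikit-learn's FastICA replacing the authors' ICA implementation: project labeled spectra into ICA space, then assign an unknown spectrum to the class at the smallest Mahalanobis distance.

      import numpy as np
      from scipy.spatial.distance import mahalanobis
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(1)
      n_chan = 200
      lines = rng.random((4, n_chan))  # shared synthetic "emission line" shapes

      def sample_class(recipe, n):
          # Spectra as noisy positive mixtures around a class-specific recipe.
          w = recipe + 0.05 * rng.standard_normal((n, 4))
          return w @ lines + 0.01 * rng.standard_normal((n, n_chan))

      X_a = sample_class(np.array([1.0, 0.2, 0.1, 0.0]), 40)
      X_b = sample_class(np.array([0.1, 1.0, 0.0, 0.3]), 40)

      ica = FastICA(n_components=3, random_state=0).fit(np.vstack([X_a, X_b]))
      S_a, S_b = ica.transform(X_a), ica.transform(X_b)

      def dist_to_class(s, S):
          # Mahalanobis distance to the class cloud in ICA space.
          return mahalanobis(s, S.mean(axis=0), np.linalg.pinv(np.cov(S.T)))

      unknown = ica.transform(sample_class(np.array([1.0, 0.2, 0.1, 0.0]), 1))[0]
      scores = {"A": dist_to_class(unknown, S_a), "B": dist_to_class(unknown, S_b)}
      print(min(scores, key=scores.get))  # expected: "A"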

  18. Nonlinear Process Fault Diagnosis Based on Serial Principal Component Analysis.

    Science.gov (United States)

    Deng, Xiaogang; Tian, Xuemin; Chen, Sheng; Harris, Chris J

    2016-12-22

    Many industrial processes contain both linear and nonlinear parts, and kernel principal component analysis (KPCA), widely used in nonlinear process monitoring, may not offer the most effective means for dealing with these nonlinear processes. This paper proposes a new hybrid linear-nonlinear statistical modeling approach for nonlinear process monitoring by closely integrating linear principal component analysis (PCA) and nonlinear KPCA using a serial model structure, which we refer to as serial PCA (SPCA). Specifically, PCA is first applied to extract PCs as linear features, and to decompose the data into the PC subspace and residual subspace (RS). Then, KPCA is performed in the RS to extract the nonlinear PCs as nonlinear features. Two monitoring statistics are constructed for fault detection, based on both the linear and nonlinear features extracted by the proposed SPCA. To effectively perform fault identification after a fault is detected, an SPCA similarity factor method is built for fault recognition, which fuses both the linear and nonlinear features. Unlike PCA and KPCA, the proposed method takes into account both linear and nonlinear PCs simultaneously, and therefore, it can better exploit the underlying process's structure to enhance fault diagnosis performance. Two case studies involving a simulated nonlinear process and the benchmark Tennessee Eastman process demonstrate that the proposed SPCA approach is more effective than the existing state-of-the-art approach based on KPCA alone, in terms of nonlinear process fault detection and identification.
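
    A skeletal version of the serial structure just described, on synthetic data: linear PCA first, then kernel PCA on the residual subspace. The paper's T² and SPE statistics are replaced here by simple squared-norm scores against training quantiles, so this is a sketch of the architecture rather than the full method.

      import numpy as np
      from sklearn.decomposition import PCA, KernelPCA

      rng = np.random.default_rng(2)
      X_train = rng.standard_normal((500, 10))  # stand-in "normal" operating data

      # Stage 1: linear PCA extracts linear features; the rest is the residual subspace.
      pca = PCA(n_components=4).fit(X_train)
      R_train = X_train - pca.inverse_transform(pca.transform(X_train))

      # Stage 2: kernel PCA extracts nonlinear features from the residuals.
      kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.1).fit(R_train)

      def features(X):
          t = pca.transform(X)
          r = X - pca.inverse_transform(t)
          return t, kpca.transform(r)

      t, k = features(X_train)
      limit_t = np.quantile((t ** 2).sum(axis=1), 0.99)  # crude control limits
      limit_k = np.quantile((k ** 2).sum(axis=1), 0.99)

      t_new, k_new = features(X_train[:5] + 3.0)  # crude simulated fault
      print((t_new ** 2).sum(axis=1) > limit_t, (k_new ** 2).sum(axis=1) > limit_k)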

  19. Tensorial Kernel Principal Component Analysis for Action Recognition

    Directory of Open Access Journals (Sweden)

    Cong Liu

    2013-01-01

    We propose the Tensorial Kernel Principal Component Analysis (TKPCA) for dimensionality reduction and feature extraction from tensor objects, which extends the conventional Principal Component Analysis (PCA) in two perspectives: working directly with multidimensional data (tensors) in their native state and generalizing an existing linear technique to its nonlinear version by applying the kernel trick. Our method aims to remedy the shortcomings of multilinear subspace learning (tensorial PCA) developed recently in modelling the nonlinear manifold of tensor objects, and brings together the desirable properties of kernel methods and tensor decompositions for significant performance gain when the data are multidimensional and nonlinear dependencies do exist. Our approach begins by formulating TKPCA as an optimization problem. Then, we develop a kernel function based on the Grassmann manifold that can directly take tensorial representation as parameters instead of the traditional vectorized representation. Furthermore, TKPCA-based tensor object recognition is also proposed for the application of action recognition. Experiments with real action datasets show that the proposed method is insensitive to both noise and occlusion and performs well compared with state-of-the-art algorithms.

  20. A Reconfigurable FPGA System for Parallel Independent Component Analysis

    Directory of Open Access Journals (Sweden)

    Du Hongtao

    2006-01-01

    A run-time reconfigurable field programmable gate array (FPGA) system is presented for the implementation of the parallel independent component analysis (ICA) algorithm. In this work, we investigate design challenges caused by the capacity constraints of a single FPGA. Using the reconfigurability of the FPGA, we show how to manipulate the FPGA-based system and execute processes for the parallel ICA (pICA) algorithm. During the implementation procedure, pICA is first partitioned into three temporally independent function blocks, each of which is synthesized by using several ICA-related reconfigurable components (RCs) that are developed for reuse and retargeting purposes. All blocks are then integrated into a design and development environment for performing tasks such as FPGA optimization, placement, and routing. With partitioning and reconfiguration, the proposed reconfigurable FPGA system overcomes the capacity constraints for the pICA implementation on embedded systems. We demonstrate the effectiveness of this implementation on real images with large throughput for dimensionality reduction in hyperspectral image (HSI) analysis.

  1. Principal Component Analysis studies of turbulence in optically thick gas

    CERN Document Server

    Correia, Caio; Burkhart, Blakesley; Pogosyan, Dmitri; De Medeiros, José Renan

    2015-01-01

    In this work we investigate the Principal Component Analysis (PCA) sensitivity to the velocity power spectrum in high opacity regimes of the interstellar medium (ISM). For our analysis we use synthetic Position-Position-Velocity (PPV) cubes of fractional Brownian motion (fBm) and magnetohydrodynamics (MHD) simulations, post-processed to include radiative transfer effects from CO. We find that PCA analysis is very different from the tools based on the traditional power spectrum of PPV data cubes. Our major finding is that PCA is also sensitive to the phase information of PPV cubes and this allows PCA to detect the changes of the underlying velocity and density spectra at high opacities, where the spectral analysis of the maps provides the universal -3 spectrum in accordance with the predictions of Lazarian & Pogosyan (2004) theory. This makes PCA potentially a valuable tool for studies of turbulence at high opacities provided that the proper gauging of the PCA index is made. The latter, however, we found t...

  2. PRINCIPAL COMPONENT ANALYSIS STUDIES OF TURBULENCE IN OPTICALLY THICK GAS

    Energy Technology Data Exchange (ETDEWEB)

    Correia, C.; Medeiros, J. R. De [Departamento de Física Teórica e Experimental, Universidade Federal do Rio Grande do Norte, 59072-970, Natal (Brazil); Lazarian, A. [Astronomy Department, University of Wisconsin, Madison, 475 N. Charter St., WI 53711 (United States); Burkhart, B. [Harvard-Smithsonian Center for Astrophysics, 60 Garden St, MS-20, Cambridge, MA 02138 (United States); Pogosyan, D., E-mail: caioftc@dfte.ufrn.br [Canadian Institute for Theoretical Astrophysics, University of Toronto, Toronto, ON (Canada)

    2016-02-20

    In this work we investigate the sensitivity of principal component analysis (PCA) to the velocity power spectrum in high-opacity regimes of the interstellar medium (ISM). For our analysis we use synthetic position–position–velocity (PPV) cubes of fractional Brownian motion and magnetohydrodynamics (MHD) simulations, post-processed to include radiative transfer effects from CO. We find that PCA analysis is very different from the tools based on the traditional power spectrum of PPV data cubes. Our major finding is that PCA is also sensitive to the phase information of PPV cubes and this allows PCA to detect the changes of the underlying velocity and density spectra at high opacities, where the spectral analysis of the maps provides the universal −3 spectrum in accordance with the predictions of the Lazarian and Pogosyan theory. This makes PCA a potentially valuable tool for studies of turbulence at high opacities, provided that proper gauging of the PCA index is made. However, we found the latter to not be easy, as the PCA results change in an irregular way for data with high sonic Mach numbers. This is in contrast to synthetic Brownian noise data used for velocity and density fields that show monotonic PCA behavior. We attribute this difference to the PCA's sensitivity to Fourier phase information.
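
    In outline, PCA of a PPV cube treats velocity channels as variables and lines of sight as observations, and the decay of the resulting eigenvalue spectrum is what the PCA index gauges. A toy sketch, with random numbers standing in for a simulated cube:

      import numpy as np

      rng = np.random.default_rng(3)
      nx, ny, nv = 64, 64, 32
      cube = rng.random((nx, ny, nv))  # stand-in PPV cube (x, y, velocity)

      X = cube.reshape(nx * ny, nv)    # rows: lines of sight; columns: velocity channels
      X = X - X.mean(axis=0)

      cov = X.T @ X / (X.shape[0] - 1)         # channel-channel covariance
      eigvals = np.linalg.eigvalsh(cov)[::-1]  # descending eigenvalue spectrum
      print(eigvals[:5])  # the PCA index is fit to the decay of this spectrum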

  3. Enhancing backbone sampling in Monte Carlo simulations using internal coordinates normal mode analysis.

    Science.gov (United States)

    Gil, Victor A; Lecina, Daniel; Grebner, Christoph; Guallar, Victor

    2016-10-15

    Normal mode methods are becoming a popular alternative to sample the conformational landscape of proteins. In this study, we describe the implementation of an internal coordinate normal mode analysis method and its application in exploring protein flexibility by using the Monte Carlo method PELE. This new method alternates two different stages, a perturbation of the backbone through the application of torsional normal modes, and a resampling of the side chains. We have evaluated the new approach using two test systems, ubiquitin and c-Src kinase, and the differences to the original ANM method are assessed by comparing both results to reference molecular dynamics simulations. The results suggest that the sampled phase space in the internal coordinate approach is closer to the molecular dynamics phase space than the one coming from a Cartesian coordinate anisotropic network model. In addition, the new method shows a great speedup (∼5-7×), making it a good candidate for future normal mode implementations in Monte Carlo methods. Copyright © 2016 Elsevier Ltd. All rights reserved.
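
    For contrast with the internal-coordinate approach, here is a bare-bones Cartesian anisotropic network model (ANM) of the kind the study compares against: build a Hookean Hessian from Cα contacts and take the low-frequency eigenvectors as the normal modes. The coordinates below are random placeholders for real Cα positions, and the cutoff is a conventional choice, not the paper's setting.

      import numpy as np

      def anm_modes(coords, cutoff=15.0):
          # 3N x 3N ANM Hessian from pairwise contacts (unit spring constant).
          n = len(coords)
          H = np.zeros((3 * n, 3 * n))
          for i in range(n):
              for j in range(i + 1, n):
                  d = coords[j] - coords[i]
                  r2 = d @ d
                  if r2 > cutoff ** 2:
                      continue
                  block = -np.outer(d, d) / r2  # off-diagonal super-element
                  H[3*i:3*i+3, 3*j:3*j+3] = block
                  H[3*j:3*j+3, 3*i:3*i+3] = block
                  H[3*i:3*i+3, 3*i:3*i+3] -= block
                  H[3*j:3*j+3, 3*j:3*j+3] -= block
          evals, evecs = np.linalg.eigh(H)
          # Skip the six near-zero rigid-body modes (assumes a connected network).
          return evals[6:], evecs[:, 6:]

      coords = np.random.default_rng(4).random((50, 3)) * 30.0  # placeholder C-alphas
      evals, modes = anm_modes(coords)
      print(evals[:3])  # softest internal modes drive the backbone perturbation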

  4. Analysis of the structural consensus of the zinc coordination centers of metalloprotein structures.

    Science.gov (United States)

    Patel, Kirti; Kumar, Anil; Durani, Susheel

    2007-10-01

    In a recent sequence-analysis study it was concluded that up to 10% of the human proteome could be comprised of zinc proteins, quite varied in their functional spread. The native structures of only a few of the proteins are actually established. The elucidation of the rest of the sequences, not just of the human genome but also of other actively investigated genomes, may benefit from knowledge of the structural consensus of the zinc-binding centers of the currently known zinc proteins. Nearly four hundred X-ray and NMR structures in the database of zinc-protein structures available as of April 2007 were investigated for geometry and conformation in the zinc-binding centers; separately for the structural and catalytic proteins, and individually in the zinc centers coordinated to three and four amino-acid ligands. Enhanced cysteine involvement, in agreement with the observation in the human proteome, has been detected, in contrast with previous reports. Deviations from ideal coordination geometries are detected, possible underlying reasons are investigated, and correlations of geometry and conformation in zinc-coordination centers with protein function are established, providing possible benchmarks for putative zinc-binding patterns in the burgeoning genome data.

  5. Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.

    Science.gov (United States)

    Gupta, Rajarshi

    2016-05-01

    Electrocardiogram (ECG) compression finds wide application for various patient monitoring purposes. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality aware compression method for single lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, two independent quality criteria, namely bit rate control (BRC) or error control (EC), were set to select the optimal principal components, eigenvectors and their quantization level to achieve the desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT-BIH Arrhythmia data and 60 normal and 30 sets of diagnostic ECG data from the PTB Diagnostic ECG database (ptbdb), all at 1 kHz sampling. For BRC with a CR threshold of 40, an average compression ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22% and 0.243 mV, respectively, were obtained. For EC with an upper limit of 5% PRDN and 0.1 mV MAE, an average CR, PRDN and MAE of 9.48, 4.13% and 0.049 mV, respectively, were obtained. For mitdb data 117, the reconstruction quality could be preserved up to a CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality controlled ECG compression.
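
    The core of the PCA step, in outline: stack aligned beats as rows, keep the leading principal components, reconstruct, and measure a PRD-style error that then guides how many components (and what quantization) satisfy the chosen criterion. Synthetic beats stand in for MIT-BIH/PTB records, and the quantization and entropy-coding stages are omitted.

      import numpy as np

      rng = np.random.default_rng(5)
      t = np.linspace(0, 1, 360)
      template = np.exp(-((t - 0.5) ** 2) / 0.001)  # crude QRS-like bump
      beats = template + 0.05 * rng.standard_normal((200, t.size))

      mean = beats.mean(axis=0)
      B = beats - mean
      U, s, Vt = np.linalg.svd(B, full_matrices=False)

      k = 5                                # retained principal components
      coeffs = B @ Vt[:k].T                # what would be quantized and encoded
      recon = coeffs @ Vt[:k] + mean

      # Mean-normalized percentage RMS difference (PRDN-style figure of merit).
      prdn = 100 * np.linalg.norm(beats - recon) / np.linalg.norm(beats - mean)
      print(f"PRDN with {k} PCs: {prdn:.2f}%")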

  6. Detecting coordinated regulation of multi-protein complexes using logic analysis of gene expression

    Directory of Open Access Journals (Sweden)

    Yeates Todd O

    2009-12-01

    Background: Many of the functional units in cells are multi-protein complexes such as RNA polymerase, the ribosome, and the proteasome. For such units to work together, one might expect a high level of regulation to enable co-appearance or repression of sets of complexes at the required time. However, this type of coordinated regulation between whole complexes is difficult to detect by existing methods for analyzing mRNA co-expression. We propose a new methodology that is able to detect such higher order relationships. Results: We detect coordinated regulation of multiple protein complexes using logic analysis of gene expression data. Specifically, we identify gene triplets composed of genes whose expression profiles are found to be related by various types of logic functions. In order to focus on complexes, we associate the members of a gene triplet with the distinct protein complexes to which they belong. In this way, we identify complexes related by specific kinds of regulatory relationships. For example, we may find that the transcription of complex C is increased only if the transcription of both complex A AND complex B is repressed. We identify hundreds of examples of coordinated regulation among complexes under various stress conditions. Many of these examples involve the ribosome. Some of our examples have been previously identified in the literature, while others are novel. One notable example is the relationship between the transcription of the ribosome, RNA polymerase and mannosyltransferase II, which is involved in N-linked glycan processing in the Golgi. Conclusions: The analysis proposed here focuses on relationships among triplets of genes that are not evident when genes are examined in a pairwise fashion as in typical clustering methods. By grouping gene triplets, we are able to decipher coordinated regulation among sets of three complexes. Moreover, using all triplets that involve coordinated regulation with the ribosome
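
    Schematically, the approach binarizes expression profiles and asks whether one gene's profile matches a logic function of two others. A toy scan over a few candidate functions (the data and the exact-match criterion are illustrative; the paper works at the level of complexes and uses statistical scoring):

      import numpy as np

      rng = np.random.default_rng(6)
      a = rng.integers(0, 2, 200).astype(bool)  # binarized expression of gene A
      b = rng.integers(0, 2, 200).astype(bool)  # binarized expression of gene B
      c = ~a & ~b                               # gene C: ON only if A AND B are OFF

      logic_functions = {
          "A AND B": a & b,
          "A OR B": a | b,
          "NOT A AND NOT B": ~a & ~b,
          "A XOR B": a ^ b,
      }

      for name, pred in logic_functions.items():
          print(f"{name:16s} agreement = {np.mean(pred == c):.2f}")
      # "NOT A AND NOT B" matches perfectly for this toy triplet.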

  7. OLED Defect Inspection System Development through Independent Component Analysis

    Directory of Open Access Journals (Sweden)

    Xin Chen

    2012-12-01

    The Organic Light Emitting Display (OLED) is a new type of display device which has become increasingly attractive and popular. Due to the complex manufacturing process, various defects may exist on the OLED panel, affecting the quality and life of the display panels. These defects have the characteristics of fuzzy boundaries, irregular shapes and low contrast with the background, and, especially, they are mixed with the pixel texture background, which increases the difficulty of rapid recognition. In this paper, we propose an approach to detect the defects based on the model of independent component analysis (ICA). The ICA model is applied to a perfect OLED image to estimate its corresponding independent components (ICs) and create the de-mixing matrix. Through estimation and determination of a proper IC row vector of the faultless image, a new de-mixing matrix can be generated which contains only uniform information and is then applied to reconstruct the texture background of source OLED images. Through subtraction of the reconstructed background from the source images and binary segmentation, the defects can be detected. Based on these algorithms, a defect detection system for OLED panels was implemented and testing was performed in this study. The testing results show that the proposed method is feasible and effective.

  8. Authentication Scheme Based on Principal Component Analysis for Satellite Images

    Directory of Open Access Journals (Sweden)

    Ashraf. K. Helmy

    2009-09-01

    This paper presents a multi-band wavelet image content authentication scheme for satellite images incorporating principal component analysis (PCA). The proposed scheme achieves higher perceptual transparency and stronger robustness. Specifically, the developed watermarking scheme can successfully resist common signal processing such as JPEG compression and geometric distortions such as cropping. In addition, the proposed scheme can be parameterized, thus resulting in more security; that is, an attacker may not be able to extract the embedded watermark if the attacker does not know the parameter. In order to meet these requirements, the host image is transformed to YIQ to decrease the correlation between different bands. Then a multi-band wavelet transform (M-WT) is applied to each channel separately, obtaining one approximate sub-band and fifteen detail sub-bands. PCA is then applied to the coefficients corresponding to the same spatial location in all detail sub-bands. The last principal component band represents an excellent domain for inserting the watermark, since it represents the least correlated features in the high-frequency area of the host image. One of the most important aspects of satellite images is the spectral signature, the behavior of different features in different spectral bands. The results of the proposed algorithm show that the spectral stamp of different features is not tainted after inserting the watermark.

  9. Principal Component Analysis for pattern recognition in volcano seismic spectra

    Science.gov (United States)

    Unglert, Katharina; Jellinek, A. Mark

    2016-04-01

    Variations in the spectral content of volcano seismicity can relate to changes in volcanic activity. Low-frequency seismic signals often precede or accompany volcanic eruptions. However, they are commonly manually identified in spectra or spectrograms, and their definition in spectral space differs from one volcanic setting to the next. Increasingly long time series of monitoring data at volcano observatories require automated tools to facilitate rapid processing and aid with pattern identification related to impending eruptions. Furthermore, knowledge transfer between volcanic settings is difficult if the methods to identify and analyze the characteristics of seismic signals differ. To address these challenges we have developed a pattern recognition technique based on a combination of Principal Component Analysis and hierarchical clustering applied to volcano seismic spectra. This technique can be used to characterize the dominant spectral components of volcano seismicity without the need for any a priori knowledge of different signal classes. Preliminary results from applying our method to volcanic tremor from a range of volcanoes including Kīlauea, Okmok, Pavlof, and Redoubt suggest that spectral patterns from Kīlauea and Okmok are similar, whereas at Pavlof and Redoubt spectra have their own, distinct patterns.
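
    The pipeline described reduces spectra with PCA and then clusters the reduced representation hierarchically. A compact stand-in version with synthetic spectra in place of tremor data (the cluster count and linkage choice are illustrative):

      import numpy as np
      from scipy.cluster.hierarchy import fcluster, linkage
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(7)
      freqs = np.linspace(0, 10, 128)

      def peaked(f0, n):
          # Synthetic spectra concentrated near frequency f0.
          return np.exp(-((freqs - f0) ** 2)) + 0.05 * rng.random((n, freqs.size))

      spectra = np.vstack([peaked(2.0, 30), peaked(6.0, 30)])  # two tremor "types"

      scores = PCA(n_components=5).fit_transform(spectra)      # dominant components
      Z = linkage(scores, method="ward")                       # hierarchical clustering
      labels = fcluster(Z, t=2, criterion="maxclust")
      print(np.unique(labels[:30]), np.unique(labels[30:]))    # groups should separate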

  10. Demixed principal component analysis of neural population data

    Science.gov (United States)

    Kobak, Dmitry; Brendel, Wieland; Constantinidis, Christos; Feierstein, Claudia E; Kepecs, Adam; Mainen, Zachary F; Qi, Xue-Lian; Romo, Ranulfo; Uchida, Naoshige; Machens, Christian K

    2016-01-01

    Neurons in higher cortical areas, such as the prefrontal cortex, are often tuned to a variety of sensory and motor variables, and are therefore said to display mixed selectivity. This complexity of single neuron responses can obscure what information these areas represent and how it is represented. Here we demonstrate the advantages of a new dimensionality reduction technique, demixed principal component analysis (dPCA), that decomposes population activity into a few components. In addition to systematically capturing the majority of the variance of the data, dPCA also exposes the dependence of the neural representation on task parameters such as stimuli, decisions, or rewards. To illustrate our method we reanalyze population data from four datasets comprising different species, different cortical areas and different experimental tasks. In each case, dPCA provides a concise way of visualizing the data that summarizes the task-dependent features of the population response in a single figure. DOI: http://dx.doi.org/10.7554/eLife.10989.001 PMID:27067378

  11. Analysis of tangible and intangible hotel service quality components

    Directory of Open Access Journals (Sweden)

    Marić Dražen

    2016-01-01

    The issue of service quality is one of the essential areas of marketing theory and practice, as high quality can lead to customer satisfaction and loyalty, i.e. successful business results. It is vital for any company, especially in the services sector, to understand and grasp consumers' expectations and perceptions pertaining to the broad range of factors affecting consumers' evaluation of services, their satisfaction and loyalty. Hospitality is a service sector where the significance of these elements grows exponentially. The aim of this study is to identify the significance of individual quality components in the hospitality industry. The questionnaire used for gathering data comprised 19 tangible and 14 intangible attributes of service quality, which the respondents rated on a five-point scale. The analysis also identified the factorial structure of the tangible and intangible elements of hotel service. The paper aims to contribute to the existing literature by pointing to the significance of tangible and intangible components of service quality. A very small number of studies conducted in hospitality and hotel management identify the sub-factors within these two dimensions of service quality. The paper also provides useful managerial implications. The obtained results help managers in hospitality to establish the service offers that consumers find most important when choosing a given hotel.

  12. Derivation of Boundary Manikins: A Principal Component Analysis

    Science.gov (United States)

    Young, Karen; Margerum, Sarah; Barr, Abbe; Ferrer, Mike A.; Rajulu, Sudhakar

    2008-01-01

    When designing any human-system interface, it is critical to provide realistic anthropometry to properly represent how a person fits within a given space. This study aimed to identify a minimum number of boundary manikins, or representative models of subjects' anthropometry from a target population, which would realistically represent the population. The boundary manikin anthropometry was derived using Principal Component Analysis (PCA). PCA is a statistical approach to reduce a multi-dimensional dataset using eigenvectors and eigenvalues. The measurements used in the PCA were identified as those critical for suit and cockpit design. The PCA yielded a total of 26 manikins per gender, as well as their anthropometry from the target population. Reduction techniques were implemented to reduce this number further, with a final result of 20 female and 22 male subjects. The anthropometry of the boundary manikins was then used to create 3D digital models (to be discussed in subsequent papers) intended for use by designers to test components of their space suit design, to verify that the requirements specified in the Human Systems Integration Requirements (HSIR) document are met. The end goal is to allow designers to generate suits which accommodate the diverse anthropometry of the user population.
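
    The basic idea of deriving boundary cases from PCA can be sketched as follows: compute the principal axes of the anthropometric data and place candidate manikins at fixed standard-deviation offsets along the leading axes. The six "measurements", their statistics, the number of retained axes, and the two-sigma offset are all illustrative assumptions, not the study's values.

      import numpy as np

      rng = np.random.default_rng(11)
      # Stand-in anthropometric data: 500 subjects x 6 hypothetical measurements (mm).
      means = np.array([1700.0, 900.0, 450.0, 600.0, 380.0, 520.0])
      X = rng.multivariate_normal(means, np.diag([80, 50, 20, 30, 15, 25]) ** 2, size=500)

      mean = X.mean(axis=0)
      vals, vecs = np.linalg.eigh(np.cov((X - mean).T))
      vals, vecs = vals[::-1], vecs[:, ::-1]        # sort descending

      # Boundary manikins: +/- 2 standard deviations along the first k components.
      k = 3
      offsets = 2.0 * np.sqrt(vals[:k]) * vecs[:, :k]
      manikins = np.vstack([mean + offsets.T, mean - offsets.T])
      print(manikins.shape)  # 2k boundary cases, each a full measurement vector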

  13. Direct Numerical Simulation of Combustion Using Principal Component Analysis

    Science.gov (United States)

    Owoyele, Opeoluwa; Echekki, Tarek

    2016-11-01

    We investigate the potential of accelerating chemistry integration during the direct numerical simulation (DNS) of complex fuels based on the transport equations of representative scalars that span the desired composition space using principal component analysis (PCA). The transported principal components (PCs) offer significant potential to reduce the computational cost of DNS through a reduction in the number of transported scalars, as well as the spatial and temporal resolution requirements. The strategy is demonstrated using DNS of a premixed methane-air flame in a 2D vortical flow and is extended to the 3D geometry to further demonstrate the computational efficiency of PC transport. The PCs are derived from a priori PCA of a subset of the full thermo-chemical scalars' vector. The PCs' chemical source terms and transport properties are constructed and tabulated in terms of the PCs using artificial neural networks (ANN). Comparison of DNS based on a full thermo-chemical state and DNS based on PC transport with 6 PCs shows excellent agreement, even for species that are not included in the PCA reduction. The transported PCs reproduce some of the salient features of strongly curved and strongly strained flames. The 2D DNS results also show a significant reduction of two orders of magnitude in the computational cost of the simulations, which enables an extension of the PCA approach to 3D DNS under similar computational requirements. This work was supported by the National Science Foundation Grant DMS-1217200.

  14. Acceleration of dynamic fluorescence molecular tomography with principal component analysis.

    Science.gov (United States)

    Zhang, Guanglei; He, Wei; Pu, Huangsheng; Liu, Fei; Chen, Maomao; Bai, Jing; Luo, Jianwen

    2015-06-01

    Dynamic fluorescence molecular tomography (FMT) is an attractive imaging technique for three-dimensionally resolving the metabolic processes of fluorescent biomarkers in small animals. When combined with compartmental modeling, dynamic FMT can be used to obtain parametric images which can provide quantitative pharmacokinetic information for drug development and metabolic research. However, the computational burden of dynamic FMT is extremely heavy due to the large data sets arising from the long measurement process and the dense sampling of the device. In this work, we propose to accelerate the reconstruction process of dynamic FMT based on principal component analysis (PCA). Taking advantage of the compression property of PCA, the dimension of the sub weight matrix used for solving the inverse problem is reduced by retaining only a few principal components which can retain most of the effective information of the sub weight matrix. Therefore, the reconstruction process of dynamic FMT can be accelerated by solving the smaller-scale inverse problem. Numerical simulation and a mouse experiment are performed to validate the performance of the proposed method. Results show that the proposed method can greatly accelerate the reconstruction of parametric images in dynamic FMT almost without degradation in image quality.
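
    The acceleration idea, in outline: compress the weight matrix with a truncated PCA/SVD and solve the inverse problem in the reduced space. The dimensions, the synthetic weight matrix, and the plain truncated pseudo-inverse below are illustrative stand-ins for the paper's FMT forward model and solver.

      import numpy as np

      rng = np.random.default_rng(8)
      m, n, k = 2000, 400, 40

      # Weight matrix with a fast-decaying spectrum, typical of diffuse optics.
      U0, _ = np.linalg.qr(rng.standard_normal((m, n)))
      V0, _ = np.linalg.qr(rng.standard_normal((n, n)))
      s = np.exp(-np.arange(n) / 15.0)
      W = (U0 * s) @ V0.T

      x_true = np.zeros(n)
      x_true[50] = 1.0                                  # a point-like fluorophore
      y = W @ x_true + 1e-6 * rng.standard_normal(m)

      # Keep only k principal components of W and solve the smaller system.
      U, s_w, Vt = np.linalg.svd(W, full_matrices=False)
      y_red = U[:, :k].T @ y                            # k projected measurements, not m
      x_est = Vt[:k].T @ (y_red / s_w[:k])              # reduced-space pseudo-inverse
      print(int(np.argmax(np.abs(x_est))))              # expect index 50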

  15. Sensitivity analysis on an AC600 aluminum skin component

    Science.gov (United States)

    Mendiguren, J.; Agirre, J.; Mugarra, E.; Galdos, L.; Saenz de Argandoña, E.

    2016-08-01

    New materials are being introduced into the car body in order to reduce weight and fulfil the international CO2 emission regulations. Among them, the application of aluminum alloys for skin panels is increasing. Even if these alloys are beneficial for the car design, the manufacturing of these components becomes more complex. In this regard, numerical simulations have become a necessary tool for die designers. There are multiple factors affecting the accuracy of these simulations, e.g. hardening, anisotropy, lubrication, and elastic behavior. Numerous studies have been conducted in recent years on the stamping of high-strength steel components and on developing new anisotropic models for aluminum cup drawings. However, the impact of correct modelling of the latest aluminums for the manufacturing of skin panels has not yet been analyzed. In this work, first, the new AC600 aluminum alloy of JLR-Novelis is characterized for anisotropy, kinematic hardening, friction coefficient, and elastic behavior. Next, a sensitivity analysis is conducted on the simulation of a U channel (with drawbeads). Then, the numerical and experimental results are correlated in terms of springback and failure. Finally, some conclusions are drawn.

  16. Analysis of Performance of Jet Engine from Characteristics of Components II : Interaction of Components as Determined from Engine Operation

    Science.gov (United States)

    Goldstein, Arthur W; Alpert, Sumner; Beede, William; Kovach, Karl

    1949-01-01

    In order to understand the operation and the interaction of jet-engine components during engine operation and to determine how component characteristics may be used to compute engine performance, a method to analyze and to estimate performance of such engines was devised and applied to the study of the characteristics of a research turbojet engine built for this investigation. An attempt was made to correlate turbine performance obtained from engine experiments with that obtained by the simpler procedure of separately calibrating the turbine with cold air as a driving fluid in order to investigate the applicability of component calibration. The system of analysis was also applied to prediction of the engine and component performance with assumed modifications of the burner and bearing characteristics, to prediction of component and engine operation during engine acceleration, and to estimates of the performance of the engine and the components when the exhaust gas was used to drive a power turbine.

  17. Blind extraction of an exoplanetary spectrum through Independent Component Analysis

    CERN Document Server

    Waldmann, Ingo P; Deroo, Pieter; Hollis, Morgan D J; Yurchenko, Sergey N; Tennyson, Jonathan

    2013-01-01

    Blind-source separation techniques are used to extract the transmission spectrum of the hot-Jupiter HD189733b recorded by the Hubble/NICMOS instrument. Such a 'blind' analysis of the data is based on the concept of independent component analysis. The de-trending of Hubble/NICMOS data using the sole assumption that nongaussian systematic noise is statistically independent from the desired light-curve signals is presented. By not assuming any prior or auxiliary information but the data themselves, it is shown that spectroscopic errors only about 10 - 30% larger than parametric methods can be obtained for 11 spectral bins with bin sizes of ~0.09 microns. This represents a reasonable trade-off between a higher degree of objectivity for the non-parametric methods and smaller standard errors for the parametric de-trending. Results are discussed in the light of previous analyses published in the literature. The fact that three very different analysis techniques yield comparable spectra is a strong indication of th...

  18. BLIND EXTRACTION OF AN EXOPLANETARY SPECTRUM THROUGH INDEPENDENT COMPONENT ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Waldmann, I. P.; Tinetti, G.; Hollis, M. D. J.; Yurchenko, S. N.; Tennyson, J. [Department of Physics and Astronomy, University College London, Gower Street, WC1E 6BT (United Kingdom); Deroo, P., E-mail: ingo@star.ucl.ac.uk [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109-8099 (United States)

    2013-03-20

    Blind-source separation techniques are used to extract the transmission spectrum of the hot-Jupiter HD189733b recorded by the Hubble/NICMOS instrument. Such a 'blind' analysis of the data is based on the concept of independent component analysis. The detrending of Hubble/NICMOS data using the sole assumption that nongaussian systematic noise is statistically independent from the desired light-curve signals is presented. By not assuming any prior or auxiliary information but the data themselves, it is shown that spectroscopic errors only about 10%-30% larger than parametric methods can be obtained for 11 spectral bins with bin sizes of ~0.09 μm. This represents a reasonable trade-off between a higher degree of objectivity for the non-parametric methods and smaller standard errors for the parametric de-trending. Results are discussed in light of previous analyses published in the literature. The fact that three very different analysis techniques yield comparable spectra is a strong indication of the stability of these results.

  19. Analysis of European Union Economy in Terms of GDP Components

    Directory of Open Access Journals (Sweden)

    Simona VINEREAN

    2013-12-01

    The impact of the crisis on national economies has been a subject of analysis and interest for a wide variety of research studies. Thus, starting from the GDP composition, the present research exhibits an analysis of the impact on European economies, at the EU level, of the events that followed the crisis of 2007 - 2008. Firstly, the research highlighted the existence of two groups of countries in the European Union in 2012, namely segments that were compiled in relation to the structure of the GDP's components. In the second stage of the research, a factor analysis was performed on the resulting segments, which showed that the economies of cluster A are based more on personal consumption compared to the economies of cluster B, while in terms of government consumption the situation is reversed. Thus, between the two groups of countries, a different approach regarding the role of fiscal policy in the economy can be noted, with a greater emphasis on savings in cluster B. Moreover, besides the two resulting groups of countries, Ireland and Luxembourg stood out because these two countries did not fit in either of the resulting segments and their economies are based, to a large extent, on a positive external balance.

  20. Independent components in spectroscopic analysis of complex mixtures

    CERN Document Server

    Monakhova, Yulia B; Kraskov, Alexander; Mushtakova, Svetlana P; 10.1016/j.chemolab.2010.05.023

    2010-01-01

    We applied two methods of "blind" spectral decomposition (MILCA and SNICA) to quantitative and qualitative analysis of UV absorption spectra of several non-trivial mixture types. Both methods use the concept of statistical independence and aim at the reconstruction of minimally dependent components from a linear mixture. We examined mixtures of major ecotoxicants (aromatic and polyaromatic hydrocarbons), amino acids and complex mixtures of vitamins in a veterinary drug. Both MILCA and SNICA were able to recover concentrations and individual spectra with minimal errors comparable with instrumental noise. In most cases their performance was similar to or better than that of other chemometric methods such as MCR-ALS, SIMPLISMA, RADICAL, JADE and FastICA. These results suggest that the ICA methods used in this study are suitable for real life applications.
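
    A toy version of the blind decomposition task: mix synthetic absorption bands linearly, then recover minimally dependent components. FastICA (which the record also mentions as a comparison method) stands in for MILCA/SNICA here, purely to illustrate the problem setup.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(9)
      wl = np.linspace(200, 400, 300)  # wavelength grid, nm

      def band(center, width):
          return np.exp(-((wl - center) ** 2) / (2 * width ** 2))

      # "Pure" component spectra and random positive concentrations.
      pure = np.vstack([band(250, 10), band(300, 15), band(330, 8)])
      C = rng.random((50, 3))                       # 50 synthetic mixtures
      X = C @ pure + 0.002 * rng.standard_normal((50, wl.size))

      # Treat wavelengths as samples and mixtures as sensors to recover spectra.
      ica = FastICA(n_components=3, random_state=0)
      S_est = ica.fit_transform(X.T).T              # recovered spectra, (3, 300)
      print(S_est.shape)  # components match the pure spectra up to scale/sign/order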

  1. Nonlinear fault diagnosis method based on kernel principal component analysis

    Institute of Scientific and Technical Information of China (English)

    Yan Weiwu; Zhang Chunkai; Shao Huihe

    2005-01-01

    To ensure that a system runs in working order, detection and diagnosis of faults play an important role in industrial processes. This paper proposes a nonlinear fault diagnosis method based on kernel principal component analysis (KPCA). In the proposed method, using the essential information of the nonlinear system extracted by KPCA, we construct a KPCA model of the nonlinear system under normal working conditions. New data are then projected onto the KPCA model. When new data are incompatible with the KPCA model, it can be concluded that the nonlinear system is out of its normal working condition. The proposed method was applied to fault diagnosis on rolling bearings. Simulation results show that the proposed method provides an effective approach to fault detection and diagnosis of nonlinear systems.
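
    A skeletal version of this monitoring scheme: fit kernel PCA on normal-condition data, score new samples by their reconstruction error in input space, and flag samples above a quantile threshold. The data, kernel width, and threshold are illustrative choices, not the paper's settings.

      import numpy as np
      from sklearn.decomposition import KernelPCA

      rng = np.random.default_rng(10)
      theta = rng.uniform(0, 2 * np.pi, 400)
      # "Normal" data lie on a noisy ring, a simple nonlinear structure.
      normal = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((400, 2))

      kpca = KernelPCA(n_components=2, kernel="rbf", gamma=2.0,
                       fit_inverse_transform=True).fit(normal)

      def spe(X):
          # Squared reconstruction error, an SPE-like monitoring statistic.
          return ((X - kpca.inverse_transform(kpca.transform(X))) ** 2).sum(axis=1)

      limit = np.quantile(spe(normal), 0.99)         # crude control limit
      faulty = np.array([[0.0, 0.0], [1.0, 0.0]])    # center point violates the ring
      print(spe(faulty) > limit)                     # expected: [ True False]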

  2. Using independent component analysis for material estimation in hyperspectral images.

    Science.gov (United States)

    Kuan, Chia-Yun; Healey, Glenn

    2004-06-01

    We develop a method for automated material estimation in hyperspectral images. The method models a hyperspectral pixel as a linear mixture of unknown materials. The method is particularly useful for applications in which material regions in a scene are smaller than one pixel. In contrast to many material estimation methods, the new method uses the statistics of large numbers of pixels rather than attempting to identify a small number of the purest pixels. The method is based on maximizing the independence of material abundances at each pixel. We show how independent component analysis algorithms can be adapted for use with this problem. We demonstrate properties of the method by application to airborne hyperspectral data.

  3. Support vector classifier based on principal component analysis

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The support vector classifier (SVC) has superior advantages for small-sample learning problems with high dimensions, with especially good generalization ability. However, there is some redundancy among the high dimensions of the original samples, and the main features of the samples may be picked out first to improve the performance of the SVC. A principal component analysis (PCA) is employed to reduce the feature dimensions of the original samples and efficiently pre-select the main features, and an SVC is constructed in the selected feature space to improve the learning speed and identification rate of the SVC. Furthermore, a heuristic genetic algorithm-based automatic model selection is proposed to determine the hyperparameters of the SVC and to evaluate the performance of the learning machines. Experiments performed on the Heart and Adult benchmark data sets demonstrate that the proposed PCA-based SVC not only reduces the test time drastically, but also improves the identification rates effectively.
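
    The PCA-plus-SVC combination maps directly onto a standard pipeline; a minimal scikit-learn sketch follows. The dataset and dimensions are placeholders, and the paper's genetic-algorithm model selection is replaced by a plain grid search over the same kind of hyperparameters.

      from sklearn.datasets import make_classification
      from sklearn.decomposition import PCA
      from sklearn.model_selection import GridSearchCV, train_test_split
      from sklearn.pipeline import Pipeline
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=600, n_features=50, n_informative=8,
                                 random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      pipe = Pipeline([("pca", PCA(n_components=10)), ("svc", SVC(kernel="rbf"))])

      # Joint search over the PCA dimension and the SVC hyperparameters.
      grid = GridSearchCV(pipe, {"pca__n_components": [5, 10, 20],
                                 "svc__C": [1, 10], "svc__gamma": ["scale", 0.01]})
      grid.fit(X_tr, y_tr)
      print(grid.best_params_, grid.score(X_te, y_te))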

  4. Recursive Principal Components Analysis Using Eigenvector Matrix Perturbation

    Directory of Open Access Journals (Sweden)

    Deniz Erdogmus

    2004-10-01

    Principal components analysis is an important and well-studied subject in statistics and signal processing. The literature has an abundance of algorithms for solving this problem, where most of these algorithms could be grouped into one of the following three approaches: adaptation based on Hebbian updates and deflation, optimization of a second-order statistical criterion (like reconstruction error or output variance), and fixed-point update rules with deflation. In this paper, we take a completely different approach that avoids deflation and the optimization of a cost function using gradients. The proposed method updates the eigenvector and eigenvalue matrices simultaneously with every new sample such that the estimates approximately track their true values as would be calculated from the current sample estimate of the data covariance matrix. The performance of this algorithm is compared with that of traditional methods like Sanger's rule and APEX, as well as a structurally similar matrix perturbation-based method.

  5. Principal Component Analysis with Contaminated Data: The High Dimensional Case

    CERN Document Server

    Xu, Huan; Mannor, Shie

    2010-01-01

    We consider the dimensionality-reduction problem (finding a subspace approximation of observed data) for contaminated data in the high dimensional regime, where the number of observations is of the same magnitude as the number of variables of each observation, and the data set contains some (arbitrarily) corrupted observations. We propose a High-dimensional Robust Principal Component Analysis (HR-PCA) algorithm that is tractable, robust to contaminated points, and easily kernelizable. The resulting subspace has a bounded deviation from the desired one, achieves maximal robustness -- a breakdown point of 50% while all existing algorithms have a breakdown point of zero, and unlike ordinary PCA algorithms, achieves optimality in the limit case where the proportion of corrupted points goes to zero.

  6. Method of Real-Time Principal-Component Analysis

    Science.gov (United States)

    Duong, Tuan; Duong, Vu

    2005-01-01

    Dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN) is a method of sequential principal-component analysis (PCA) that is well suited for such applications as data compression and extraction of features from sets of data. In comparison with a prior method of gradient-descent-based sequential PCA, this method offers a greater rate of learning convergence. Like the prior method, DOGEDYN can be implemented in software. However, the main advantage of DOGEDYN over the prior method lies in the facts that it requires less computation and can be implemented in simpler hardware. It should be possible to implement DOGEDYN in compact, low-power, very-large-scale integrated (VLSI) circuitry that could process data in real time.

  7. Thermal Viscoelastic Analysis of Plastic Components Considering Residual Stress

    Science.gov (United States)

    Choi, Chel Woo; Jeoung, Kab Sik; Moon, Hyung-Il; Kim, Heon Young

    Plastic is commonly used in consumer electronics because of its high strength per unit mass and good productivity, but plastic components may often become distorted after injection molding due to residual stress after the filling, packing, and cooling processes. In addition, plastic deteriorates depending on various temperature conditions and the operating time, which can be characterized by stress relaxation and creep. The viscoelastic behavior of plastic materials in the time domain can be expressed by the Prony series using the ABAQUS commercial software package. This paper suggests a process for predicting post-production deformation under cyclic thermal loading. The process was applied to real plastic panels, and the deformation predicted by the analysis was compared to that measured in actual testing, showing the possibility of using this process for predicting the post-production deformation of plastic products under thermal loading.
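
    For reference, a common normalized form of the Prony series for the relaxation modulus (the normalized form used in ABAQUS time-domain viscoelasticity), where G_0 is the instantaneous modulus, the g_i are dimensionless relative moduli, and the tau_i are relaxation times:

      G(t) = G_0 \left[ 1 - \sum_{i=1}^{N} g_i \left( 1 - e^{-t/\tau_i} \right) \right]

    Stress relaxation corresponds to G(t) decaying from G_0 toward its long-term value as the exponential terms relax; the pairs (g_i, tau_i) are the material constants fitted from test data.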

  8. Suppressing Background Radiation Using Poisson Principal Component Analysis

    CERN Document Server

    Tandon, P; Dubrawski, A; Labov, S; Nelson, K

    2016-01-01

    Performance of nuclear threat detection systems based on gamma-ray spectrometry often strongly depends on the ability to identify the part of measured signal that can be attributed to background radiation. We have successfully applied a method based on Principal Component Analysis (PCA) to obtain a compact null-space model of background spectra using PCA projection residuals to derive a source detection score. We have shown the method's utility in a threat detection system using mobile spectrometers in urban scenes (Tandon et al 2012). While it is commonly assumed that measured photon counts follow a Poisson process, standard PCA makes a Gaussian assumption about the data distribution, which may be a poor approximation when photon counts are low. This paper studies whether and in what conditions PCA with a Poisson-based loss function (Poisson PCA) can outperform standard Gaussian PCA in modeling background radiation to enable more sensitive and specific nuclear threat detection.

  9. Circuit design of VLSI for microelectronic coordinate-sensitive detector for material element analysis

    Directory of Open Access Journals (Sweden)

    Sidorenko V. P.

    2012-08-01

    A VLSI has been designed, manufactured and tested that, as part of a microelectronic coordinate-sensitive detector, provides simultaneous elemental analysis of all the components of a substance. The VLSI ensures the amplifier-converter response on receiving a negative charge of 1.6×10⁻¹³ C at its input. The response speed of the microcircuit is at least 3 MHz in the counting mode and more than 4 MHz in the counter information read-out mode. The current consumption of the microcircuit is no more than 7 mA.

  10. Size distribution measurements and chemical analysis of aerosol components

    Energy Technology Data Exchange (ETDEWEB)

    Pakkanen, T.A.

    1995-12-31

    The principal aims of this work were to improve the existing methods for size distribution measurements and to draw conclusions about atmospheric and in-stack aerosol chemistry and physics by utilizing the size distributions of the various aerosol components measured. A sample dissolution with dilute nitric acid in an ultrasonic bath and subsequent graphite furnace atomic absorption spectrometric analysis was found to result in low blank values and good recoveries for several elements in atmospheric fine particle size fractions below 2 μm of equivalent aerodynamic particle diameter (EAD). Furthermore, it turned out that a substantial amount of analytes associated with insoluble material could be recovered since suspensions were formed. The size distribution measurements of in-stack combustion aerosols indicated two modal size distributions for most components measured. The existence of the fine particle mode suggests that a substantial fraction of such elements with two modal size distributions may vaporize and nucleate during the combustion process. In southern Norway, size distributions of atmospheric aerosol components usually exhibited one or two fine particle modes and one or two coarse particle modes. Atmospheric relative humidity values higher than 80% resulted in a significant increase of the mass median diameters of the droplet mode. Important local and/or regional sources of As, Br, I, K, Mn, Pb, Sb, Si and Zn were found to exist in southern Norway. The existence of these sources was reflected in the corresponding size distributions determined, and was utilized in the development of a source identification method based on size distribution data. On the Finnish south coast, atmospheric coarse particle nitrate was found to be formed mostly through an atmospheric reaction of nitric acid with existing coarse particle sea salt, but reactions and/or adsorption of nitric acid with soil derived particles also occurred. Chloride was depleted when acidic species reacted

  11. An in-depth analysis of theoretical frameworks for the study of care coordination

    OpenAIRE

    Van Houdt, Sabine; Heyrman, Jan; Vanhaecht, Kris; Sermeus, Walter; De Lepeleire, Jan

    2013-01-01

    Introduction Complex chronic conditions often require long-term care from various healthcare professionals. Thus, maintaining quality care requires care coordination. Concepts for the study of care coordination require clarification to develop, study and evaluate coordination strategies. In 2007, the Agency for Healthcare Research and Quality defined care coordination and proposed five theoretical frameworks for exploring care coordination. This study aimed to update current theoretical frame...

  12. An in-depth analysis of theoretical frameworks for the study of care coordination

    OpenAIRE

    Sabine Van Houdt; Jan Heyrman; Kris Vanhaecht; Walter Sermeus; Jan De Lepeleire

    2013-01-01

    Introduction: Complex chronic conditions often require long-term care from various healthcare professionals. Thus, maintaining quality care requires care coordination. Concepts for the study of care coordination require clarification to develop, study and evaluate coordination strategies. In 2007, the Agency for Healthcare Research and Quality defined care coordination and proposed five theoretical frameworks for exploring care coordination. This study aimed to update current theoretical fram...

  13. Costs of coordinated versus uncoordinated care in Germany: results of a routine data analysis in Bavaria

    Science.gov (United States)

    Schneider, Antonius; Donnachie, Ewan; Tauscher, Martin; Gerlach, Roman; Maier, Werner; Mielck, Andreas; Linde, Klaus; Mehring, Michael

    2016-01-01

    Objectives The efficiency of a gatekeeping system for a health system, as in Germany, remains unclear, particularly as access to specialist ambulatory care is not restricted. The aim was to compare the costs of coordinated versus uncoordinated patients (UP) in ambulatory care, with additional subgroup analysis of patients with mental disorders. Design Retrospective routine data analysis of patients with statutory health insurance, using claims data held by the Bavarian Association of Statutory Health Insurance Physicians. A patient was defined as uncoordinated if he or she visited at least 1 specialist without a referral from a general practitioner within a quarter. Outcomes were compared with propensity score matching analysis. Participants The study encompassed all statutorily insured patients in Bavaria contacting at least 1 ambulatory specialist in the first quarter of 2011 (n=3 616 510). Primary and secondary outcome measures Primary outcome was total costs of ambulatory care; secondary outcomes were financial claims of general physicians, specialists and for medication. Results The average age was 55.3 years for coordinated patients (CP, n=1 629 302), 48.3 years for UP (n=1 825 840). CP more frequently had chronic diseases (85.4%) as compared with UP (67.5%). The total unadjusted financial claim per patient was higher for UP (€234.52) than for CP (€224.41); the total adjusted difference was −€9.65 (95% CI −11.64 to −7.67), indicating lower costs for CP. The cost differences increased with increasing age. The total adjusted difference per patient with mental diseases, as documented with an International Classification of Diseases (ICD)-10 F-diagnosis, was −€20.31 (95% CI −26.43 to −14.46). Conclusions Coordination of care is associated with lower ambulatory healthcare expenditures and is of particular importance for patients who are more vulnerable to medical interventions, especially the elderly and patients with mental disorders.

  14. Integration of independent component analysis with near-infrared spectroscopy for analysis of bioactive components in the medicinal plant Gentiana scabra Bunge

    OpenAIRE

    Yung-Kun Chuang; I-Chang Yang; Yangming Martin Lo; Chao-Yin Tsai; Suming Chen

    2014-01-01

    Independent component (IC) analysis was applied to near-infrared spectroscopy for analysis of gentiopicroside and swertiamarin, the two bioactive components of Gentiana scabra Bunge. ICs that are highly correlated with the two bioactive components were selected for the analysis of tissue cultures, shoots and roots, which were found to distribute in three different positions within the domain [two-dimensional (2D) and 3D] constructed by the ICs. This setup could be used for quantitative determ...

  15. Independent vector analysis for capturing common components in fMRI group analysis

    DEFF Research Database (Denmark)

    Engberg, Astrid M. E.; Andersen, Kasper W.; Mørup, Morten;

    2016-01-01

    ...-subject studies. Independent vector analysis (IVA) is a promising alternative approach to perform group fMRI analysis, which has been shown to better capture components with high inter-subject variability. The most widely applied IVA method is based on the multivariate Laplace distribution (IVA-GL), which assumes independence within subject components coupled across subjects only through shared scaling. In this study, we propose a more natural formulation of IVA based on a Normal-Inverse-Gamma distribution (IVA-NIG), in which the components can be directly interpreted as realizations of a common mean component with individual subject variability. We evaluate the performance of IVA-NIG compared to IVA-GL and similar decomposition methods through the application of two types of simulated data and on real task fMRI data. The results show that IVA-NIG offers superior detection of components in simulated fMRI data. On real...

  16. Principal component analysis and neurocomputing-based models for total ozone concentration over different urban regions of India

    Science.gov (United States)

    Chattopadhyay, Goutami; Chattopadhyay, Surajit; Chakraborthy, Parthasarathi

    2012-07-01

    The present study deals with daily total ozone concentration time series over four metro cities of India, namely Kolkata, Mumbai, Chennai, and New Delhi, in a multivariate environment. Using the Kaiser-Meyer-Olkin measure, it is established that the data set under consideration is suitable for principal component analysis. Subsequently, by introducing a rotated component matrix for the principal components, the predictors suitable for generating an artificial neural network (ANN) for daily total ozone prediction are identified; multicollinearity is removed in this way. Models of ANN in the form of multilayer perceptrons trained through backpropagation learning are generated for all of the study zones, and the model outcomes are assessed statistically. Measuring various statistics like Pearson correlation coefficients, Willmott's indices, percentage errors of prediction, and mean absolute errors, it is observed that for Mumbai and Kolkata the proposed ANN model generates very good predictions. The results are supported by the linearly distributed coordinates in the scatterplots.

  17. A Principal Component Analysis of the Diffuse Interstellar Bands

    Science.gov (United States)

    Ensor, T.; Cami, J.; Bhatt, N. H.; Soddu, A.

    2017-02-01

    We present a principal component (PC) analysis of 23 line-of-sight parameters (including the strengths of 16 diffuse interstellar bands, DIBs) for a well-chosen sample of single-cloud sightlines representing a broad range of environmental conditions. Our analysis indicates that the majority (~93%) of the variations in the measurements can be captured by only four parameters. The main driver (i.e., the first PC) is the amount of DIB-producing material in the line of sight, a quantity that is extremely well traced by the equivalent width of the λ5797 DIB. The second PC is the amount of UV radiation, which correlates well with the λ5797/λ5780 DIB strength ratio. The remaining two PCs are more difficult to interpret, but are likely related to the properties of dust in the line of sight (e.g., the gas-to-dust ratio). With our PCA results, the DIBs can then be used to estimate these line-of-sight parameters.

  18. A principal component analysis of 39 scientific impact measures

    CERN Document Server

    Bollen, Johan; Hagberg, Aric; Chute, Ryan

    2009-01-01

    The impact of scientific publications has traditionally been expressed in terms of citation counts. However, scientific activity has moved online over the past decade. To better capture scientific impact in the digital era, a variety of new impact measures has been proposed on the basis of social network analysis and usage log data. Here we investigate how these new measures relate to each other, and how accurately and completely they express scientific impact. We performed a principal component analysis of the rankings produced by 39 existing and proposed measures of scholarly impact that were calculated on the basis of both citation and usage log data. Our results indicate that the notion of scientific impact is a multi-dimensional construct that cannot be adequately measured by any single indicator, although some measures are more suitable than others. The commonly used citation Impact Factor is not positioned at the core of this construct, but at its periphery, and should thus be used with caution.

  19. Principal components analysis in the space of phylogenetic trees

    CERN Document Server

    Nye, Tom M W

    2012-01-01

    Phylogenetic analysis of DNA or other data commonly gives rise to a collection or sample of inferred evolutionary trees. Principal Components Analysis (PCA) cannot be applied directly to collections of trees since the space of evolutionary trees on a fixed set of taxa is not a vector space. This paper describes a novel geometrical approach to PCA in tree-space that constructs the first principal path in an analogous way to standard linear Euclidean PCA. Given a data set of phylogenetic trees, a geodesic principal path is sought that maximizes the variance of the data under a form of projection onto the path. Due to the high dimensionality of tree-space and the nonlinear nature of this problem, the computational complexity is potentially very high, so approximate optimization algorithms are used to search for the optimal path. Principal paths identified in this way reveal and quantify the main sources of variation in the original collection of trees in terms of both topology and branch lengths. The approach is...

  20. Analysis of the multiple system with CP component phi Draconis

    CERN Document Server

    Liska, J

    2016-01-01

    The star phi Dra comprises a spectroscopic binary and a third star that together form a visual triple system. It is one of the brightest chemically peculiar (CP) stars of the upper main sequence. Despite these facts, no comprehensive study of its multiplicity has been performed yet. In this work, we present a detailed analysis of the triple system based on available measurements. We use radial velocities taken from four sources in the literature in a re-analysis of the inner spectroscopic binary (Aab). An incorrect value of about 27 days for the orbital period of the inner system Aab had been accepted in the literature for more than forty years; a new orbital solution with a 128-day period was determined. Relative position measurements of the outer visual binary system (AB) from the Washington Double Star Catalog were compared with known orbital models. Furthermore, it was shown that the astrometric motion in the system AB is well described by the model of Andrade (2005) with a 308-year orbital period. Parameters of the A and B components...

  1. Cnn Based Retinal Image Upscaling Using Zero Component Analysis

    Science.gov (United States)

    Nasonov, A.; Chesnakov, K.; Krylov, A.

    2017-05-01

    The aim of the paper is to obtain high quality of image upscaling for noisy images that are typical in medical image processing. A new training scenario for a convolutional neural network based image upscaling method is proposed. Its main idea is a novel dataset preparation method for deep learning. The dataset contains pairs of noisy low-resolution images and corresponding noiseless high-resolution images. To achieve better results at edges and textured areas, Zero Component Analysis is applied to these images. The upscaling results are compared with other state-of-the-art methods like DCCI, SI-3 and SRCNN on noisy medical ophthalmological images. Objective evaluation of the results confirms the high quality of the proposed method. Visual analysis shows that fine details and structures like blood vessels are preserved, the noise level is reduced and no artifacts or non-existing details are added. These properties are essential in retinal diagnosis establishment, so the proposed algorithm is recommended to be used in real medical applications.
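
    "Zero Component Analysis" here refers to ZCA whitening. The following minimal sketch applies the standard construction of the whitening transform, W = U diag(1/√(sᵢ+ε)) Uᵀ, on the data covariance; the patch data and the ε value are illustrative, not taken from the paper.

```python
# Minimal ZCA whitening sketch: decorrelate data while staying close to the
# original pixel space (W = U diag(1/sqrt(s+eps)) U^T on the covariance).
import numpy as np

def zca_whiten(X, eps=1e-5):
    """X: (n_samples, n_features), e.g. flattened image patches."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / Xc.shape[0]
    U, s, _ = np.linalg.svd(cov)            # cov is symmetric, so U == V
    W = U @ np.diag(1.0 / np.sqrt(s + eps)) @ U.T
    return Xc @ W

rng = np.random.default_rng(1)
patches = rng.normal(size=(1000, 64))       # stand-in for 8x8 image patches
white = zca_whiten(patches)
# The covariance of the whitened data is approximately the identity matrix.
```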

  2. A principal component analysis of 39 scientific impact measures.

    Directory of Open Access Journals (Sweden)

    Johan Bollen

    Full Text Available BACKGROUND: The impact of scientific publications has traditionally been expressed in terms of citation counts. However, scientific activity has moved online over the past decade. To better capture scientific impact in the digital era, a variety of new impact measures has been proposed on the basis of social network analysis and usage log data. Here we investigate how these new measures relate to each other, and how accurately and completely they express scientific impact. METHODOLOGY: We performed a principal component analysis of the rankings produced by 39 existing and proposed measures of scholarly impact that were calculated on the basis of both citation and usage log data. CONCLUSIONS: Our results indicate that the notion of scientific impact is a multi-dimensional construct that cannot be adequately measured by any single indicator, although some measures are more suitable than others. The commonly used citation Impact Factor is not positioned at the core of this construct, but at its periphery, and should thus be used with caution.

  3. Spatial control of groundwater contamination, using principal component analysis

    Indian Academy of Sciences (India)

    N Subba Rao

    2014-06-01

    A study on the geochemistry of groundwater was carried out in a river basin of Andhra Pradesh to probe into the spatial controlling processes of groundwater contamination, using principal component analysis (PCA). The PCA transforms the chemical variables, pH, EC, Ca2+, Mg2+, Na+, K+, HCO$^{−}_{3}$, Cl−, SO$^{2−}_{4}$, NO$^{−}_{3}$ and F−, into two orthogonal principal components (PC1 and PC2), accounting for 75% of the total variance of the data matrix. PC1 has high positive loadings of EC, Na+, Cl−, SO$^{2−}_{4}$, Mg2+ and Ca2+, representing a salinity-controlled process of geogenic (mineral dissolution, ion exchange, and evaporation), anthropogenic (agricultural activities and domestic wastewaters), and marine (marine clay) origin. The PC2 loadings are highly positive for HCO$^{−}_{3}$, F−, pH and NO$^{−}_{3}$, reflecting the alkalinity- and pollution-controlled processes of geogenic and anthropogenic origins. The PC scores reflect the change of groundwater quality of geogenic origin from the upstream to the downstream area, with an increase in concentration of chemical variables due to anthropogenic and marine influences under varying topography, soil type, depth of water levels, and water usage. Thus, the groundwater quality shows a variation of chemical facies from Na+ > Ca2+ > Mg2+ > K+: HCO$^{−}_{3}$ > Cl− > SO$^{2−}_{4}$ > NO$^{−}_{3}$ > F− at high topography to Na+ > Mg2+ > Ca2+ > K+: Cl− > HCO$^{−}_{3}$ > SO$^{2−}_{4}$ > NO$^{−}_{3}$ > F− at low topography. With PCA as an effective tool for identifying the spatial controlling processes of groundwater contamination, a subset of the explored wells is indexed for continuous monitoring to optimize the expensive effort.

  4. Experimental Testing for Stability Analysis of Distributed Energy Resources Components with Storage Devices and Loads

    DEFF Research Database (Denmark)

    Mihet-Popa, Lucian; Groza, Voicu; Isleifsson, Fridrik Rafn

    2012-01-01

    Experimental Testing for Stability Analysis of Distributed Energy Resources Components with Storage Devices and Loads.

  5. Application of time series analysis on molecular dynamics simulations of proteins: A study of different conformational spaces by principal component analysis

    Science.gov (United States)

    Alakent, Burak; Doruker, Pemra; Camurdan, Mehmet C.

    2004-09-01

    Time series analysis is applied on the collective coordinates obtained from principal component analysis of independent molecular dynamics simulations of α-amylase inhibitor tendamistat and the immunity protein of colicin E7, based on the Cα coordinate history. Even though the principal component directions obtained for each run are considerably different, the dynamics information obtained from these runs is surprisingly similar in terms of time series models and parameters. There are two main differences in the dynamics of the two proteins: the higher density of low frequencies and the larger step sizes for the interminima motions of colicin E7 than those of α-amylase inhibitor, which may be attributed to the higher number of residues of colicin E7 and/or the structural differences of the two proteins. The cumulative density function of the low frequencies in each run conforms to the expectations from normal mode analysis. When different runs of α-amylase inhibitor are projected on the same set of eigenvectors, it is found that principal components obtained from a certain conformational region of a protein have moderate explanatory power in other conformational regions, and the local minima are similar to a certain extent, while the height of the energy barriers between the minima changes significantly. As a final remark, time series analysis tools are further exploited in this study with the motive of explaining the equilibrium fluctuations of proteins.

  6. Partnership effectiveness in primary community care networks: A national empirical analysis of partners' coordination infrastructure designs.

    Science.gov (United States)

    Lin, Blossom Yen-Ju; Lin, Yung-Kai; Lin, Cheng-Chieh

    2010-01-01

    Previous empirical and managerial studies have ignored the effectiveness of integrated health networks. It has been argued that the varying definitions and strategic imperatives of integrated organizations may have complicated the assessment of the outcomes/performance of varying models, particularly when their market structures and contexts differed. This study aimed to empirically verify a theoretical perspective on the coordination infrastructure designs and the effectiveness of the primary community care networks (PCCNs) formed and funded by the Bureau of National Health Insurance since March 2003. The PCCNs present a model to replace the traditional fragmented providers in Taiwan's health care. The study used a cross-sectional mailed survey designed to ascertain partnership coordination infrastructure and integration of governance, clinical care, bonding, finances, and information. The outcome indicators were PCCNs' perceived performance and willingness to remain within the network. Structural equation modeling examined the causal relationships, controlling for organizational and environmental factors. Primary data collection occurred from February through December 2005, via structured questionnaires sent to 172 PCCNs. Using the individual PCCN as the unit of analysis, the results found that a network's efforts regarding coordination infrastructures were positively related to the PCCN's perceived performance and willingness to remain within the network. In addition, PCCNs practicing in rural areas and in areas with higher density of medical resources had better perceived effectiveness and willingness to cooperate in the network. Practical Implication: The lack of both an operational definition and information about system-wide integration may have obstructed understanding of integrated health networks' organizational dynamics. This study empirically examined individual PCCNs and offers new insights on how to improve networks' organizational design and

  7. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy.

    Science.gov (United States)

    Matsuo, Yukinori; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-09-01

    The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Balanced data according to the one-factor random effect model were assumed. Analysis-of-variance (anova)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The anova-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
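
    A minimal sketch of the balanced one-factor ANOVA estimation the note describes, with patients as the random factor and fractions as replicates; the simulated data and variable names are illustrative assumptions.

```python
# One-factor random-effects (ANOVA) estimate of systematic and random setup
# error: between-patient mean square (MSB) and within-patient mean square
# (MSW) give Sigma = sqrt((MSB - MSW)/n) and sigma = sqrt(MSW).
import numpy as np

def variance_components(errors):
    """errors: (n_patients, n_fractions) setup errors along one axis (mm)."""
    p, n = errors.shape
    grand = errors.mean()
    patient_means = errors.mean(axis=1)
    msb = n * np.sum((patient_means - grand) ** 2) / (p - 1)
    msw = np.sum((errors - patient_means[:, None]) ** 2) / (p * (n - 1))
    sigma_systematic = np.sqrt(max((msb - msw) / n, 0.0))  # Sigma (between)
    sigma_random = np.sqrt(msw)                            # sigma (within)
    return grand, sigma_systematic, sigma_random

# Illustrative data: 30 patients, 5 fractions each, true Sigma=2mm, sigma=1.5mm.
rng = np.random.default_rng(2)
offsets = rng.normal(0.0, 2.0, size=30)
data = offsets[:, None] + rng.normal(0.0, 1.5, size=(30, 5))
print(variance_components(data))
```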

  8. Principal Component Analysis of Process Datasets with Missing Values

    Directory of Open Access Journals (Sweden)

    Kristen A. Severson

    2017-07-01

    Full Text Available Datasets with missing values arising from causes such as sensor failure, inconsistent sampling rates, and merging data from different systems are common in the process industry. Methods for handling missing data typically operate during data pre-processing, but missing data can also be handled during model building. This article considers missing data within the context of principal component analysis (PCA), a method originally developed for complete data that has widespread industrial application in multivariate statistical process control. Due to the prevalence of missing data and the success of PCA for handling complete data, several PCA algorithms that can act on incomplete data have been proposed. Here, algorithms for applying PCA to datasets with missing values are reviewed. A case study is presented to demonstrate the performance of the algorithms, and suggestions are made with respect to choosing which algorithm is most appropriate for particular settings. An alternating algorithm based on the singular value decomposition achieved the best results in the majority of test cases involving process datasets.
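
    As one concrete instance of the reviewed family of methods, the sketch below implements an alternating SVD-based completion: impute, fit a rank-k SVD, overwrite only the missing cells with the reconstruction, and iterate. Initialization by column means and the convergence test are illustrative choices, not necessarily those of the article.

```python
# Sketch of an alternating (iterative SVD) approach to PCA with missing
# values: missing entries are marked with np.nan and repeatedly re-imputed
# from the current rank-k reconstruction until the imputations stabilize.
import numpy as np

def svd_impute_pca(X, k, n_iter=100, tol=1e-6):
    """X: data matrix with np.nan marking missing entries; k: PCA rank."""
    mask = np.isnan(X)
    Xf = np.where(mask, np.nanmean(X, axis=0), X)   # start from column means
    for _ in range(n_iter):
        mu = Xf.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xf - mu, full_matrices=False)
        recon = mu + U[:, :k] * s[:k] @ Vt[:k]      # rank-k reconstruction
        delta = np.linalg.norm(Xf[mask] - recon[mask])
        Xf[mask] = recon[mask]                      # only overwrite missing cells
        if delta < tol:
            break
    return Xf, Vt[:k]

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 8)) @ rng.normal(size=(8, 8))
X[rng.random(X.shape) < 0.1] = np.nan               # 10% missing at random
X_completed, components = svd_impute_pca(X, k=3)
```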

  9. Comparison of analytical eddy current models using principal components analysis

    Science.gov (United States)

    Contant, S.; Luloff, M.; Morelli, J.; Krause, T. W.

    2017-02-01

    Monitoring the gap between the pressure tube (PT) and the calandria tube (CT) in CANDU® fuel channels is essential, as contact between the two tubes can lead to delayed hydride cracking of the pressure tube. Multifrequency transmit-receive eddy current non-destructive evaluation is used to determine this gap, as this method has different depths of penetration and variable sensitivity to noise, unlike single frequency eddy current non-destructive evaluation. An analytical model based on the Dodd and Deeds solutions, and a second model that accounts for normal and lossy self-inductances and a non-coaxial pickup coil, are examined for representing the response of an eddy current transmit-receive probe when considering factors that affect the gap response, such as pressure tube wall thickness and pressure tube resistivity. The multifrequency model data were analyzed using principal components analysis (PCA), a statistical method used to reduce the data set into a data set of fewer variables. The results of the PCA of the analytical models were then compared to PCA performed on a previously obtained experimental data set. The models gave similar results under variable PT wall thickness conditions, but the non-coaxial coil model, which accounts for self-inductive losses, performed significantly better than the Dodd and Deeds model under variable resistivity conditions.

  10. Significance-linked connected component analysis for wavelet image coding.

    Science.gov (United States)

    Chai, B B; Vass, J; Zhuang, X

    1999-01-01

    Recent success in wavelet image coding is mainly attributed to a recognition of the importance of data organization and representation. There have been several very competitive wavelet coders developed, namely, Shapiro's (1993) embedded zerotree wavelets (EZW), Servetto et al.'s (1995) morphological representation of wavelet data (MRWD), and Said and Pearlman's (see IEEE Trans. Circuits Syst. Video Technol., vol.6, p.245-50, 1996) set partitioning in hierarchical trees (SPIHT). We develop a novel wavelet image coder called significance-linked connected component analysis (SLCCA) of wavelet coefficients that extends MRWD by exploiting both within-subband clustering of significant coefficients and cross-subband dependency in significant fields. Extensive computer experiments on both natural and texture images show convincingly that the proposed SLCCA outperforms EZW, MRWD, and SPIHT. For example, for the Barbara image, at 0.25 b/pixel, SLCCA outperforms EZW, MRWD, and SPIHT by 1.41 dB, 0.32 dB, and 0.60 dB in PSNR, respectively. It is also observed that SLCCA works extremely well for images with a large portion of texture. For eight typical 256x256 grayscale texture images compressed at 0.40 b/pixel, SLCCA outperforms SPIHT by 0.16 dB-0.63 dB in PSNR. This performance is achieved without using any optimal bit allocation procedure. Thus both the encoding and decoding procedures are fast.

  11. Transfer Learning via Multi-View Principal Component Analysis

    Institute of Scientific and Technical Information of China (English)

    Yang-Sheng Ji; Jia-Jun Chen; Gang Niu; Lin Shang; Xin-Yu Dai

    2011-01-01

    Transfer learning aims at leveraging the knowledge in labeled source domains to predict the unlabeled data in a target domain, where the distributions differ across domains. Among the various methods for transfer learning, one kind of algorithm focuses on the correspondence between bridge features and all the other specific features from different domains, and then conducts transfer learning via the single-view correspondence. However, the single-view correspondence may prevent these algorithms from further improvement due to the problem of incorrect correlation discovery. To tackle this problem, we propose a new method for transfer learning from a multi-view correspondence perspective, called the Multi-View Principal Component Analysis (MVPCA) approach. MVPCA discovers the correspondence between bridge features representative across all domains and specific features from different domains respectively, and conducts transfer learning by dimensionality reduction in a multi-view way, which can better depict the knowledge transfer. Experiments show that MVPCA can significantly reduce the cross domain prediction error of a baseline non-transfer method. With multi-view correspondence information incorporated into the single-view transfer learning method, MVPCA can further improve the performance of one state-of-the-art single-view method.

  12. Principal Components Analysis of Triaxial Vibration Data From Helicopter Transmissions

    Science.gov (United States)

    Tumer, Irem Y.; Huff, Edward M.

    2001-01-01

    Research on the nature of the vibration data collected from helicopter transmissions during flight experiments has led to several crucial observations believed to be responsible for the high rates of false alarms and missed detections in aircraft vibration monitoring systems. This work focuses on one such finding, namely, the need to consider additional sources of information about system vibrations. In this light, helicopter transmission vibration data, collected using triaxial accelerometers, were explored in three different directions, analyzed for content, and then combined using Principal Components Analysis (PCA) to analyze changes in directionality. In this paper, the PCA transformation is applied to 176 test conditions/data sets collected from an OH58C helicopter to derive the overall experiment-wide covariance matrix and its principal eigenvectors. The experiment-wide eigenvectors are then projected onto the individual test conditions to evaluate changes and similarities in their directionality based on the various experimental factors. The paper will present the foundations of the proposed approach, addressing the question of whether experiment-wide eigenvectors accurately model the vibration modes in individual test conditions. The results will further determine the value of using directionality and triaxial accelerometers for vibration monitoring and anomaly detection.
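
    A minimal sketch of the experiment-wide projection idea: pool the covariance over all test conditions, take its eigenvectors, and project each condition's own covariance onto them to compare directionality. The simulated triaxial data and all names are illustrative, not the study's data.

```python
# Pool covariance across all test conditions, then examine how much of each
# condition's vibration energy lies along the shared (experiment-wide)
# principal directions; shifts between conditions flag directionality changes.
import numpy as np

rng = np.random.default_rng(4)
conditions = [rng.normal(size=(5000, 3)) @ rng.normal(size=(3, 3))
              for _ in range(10)]                  # triaxial records per condition

pooled_cov = sum(np.cov(c, rowvar=False) for c in conditions) / len(conditions)
eigvals, eigvecs = np.linalg.eigh(pooled_cov)      # experiment-wide eigenvectors

for i, c in enumerate(conditions):
    # variance of this condition along each shared principal direction
    proj_var = np.diag(eigvecs.T @ np.cov(c, rowvar=False) @ eigvecs)
    print(i, np.round(proj_var, 2))
```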

  13. Double precision nonlinear cell for fast independent component analysis algorithm

    Science.gov (United States)

    Jain, V. K.

    2006-05-01

    Several advanced algorithms in defense and security applications require high-speed computation of nonlinear functions. These include detection, localization, and identification. Increasingly, such computations must be performed in double precision accuracy in real time. In this paper, we develop a significance-based interpolative approach to such evaluations for double precision arguments. It is shown that our approach requires only one major multiplication, which leads to a unified and fast, two-cycle, VLSI architecture for mantissa computations. In contrast, traditional iterative computations require several cycles to converge, and typically these computations vary a lot from one function to another. Moreover, when the evaluation pertains to a compound or concatenated function, the overall time required becomes the sum of the times required by the individual operations. For our approach, the time required remains two cycles even for such compound or concatenated functions. Very importantly, the paper develops a key formula for predicting and bounding the worst case arithmetic error. This new result enables the designer to quickly select the architectural parameters without expensive and intolerably long simulations, while guaranteeing the desired accuracy. The specific application focus is the mapping of the Independent Component Analysis (ICA) technique to a coarse-grain parallel-processing architecture.

  14. Dissection of the hormetic curve: analysis of components and mechanisms.

    Science.gov (United States)

    Lushchak, Volodymyr I

    2014-07-01

    The relationship between the dose of an effector and the biological response frequently is not described by a linear function and, moreover, in some cases the dose-response relationship may change from positive/adverse to adverse/positive with increasing dose. This complicated relationship is called "hormesis". This paper provides a short analysis of the concept along with a description of used approaches to characterize hormetic relationships. The whole hormetic curve can be divided into three zones: I - a lag-zone where no changes are observed with increasing dose; II - a zone where beneficial/adverse effects are observed, and III - a zone where the effects are opposite to those seen in zone II. Some approaches are proposed to analyze the molecular components involved in the development of the hormetic character of dose-response relationships with the use of specific genetic lines or inhibitors of regulatory pathways. The discussion is then extended to suggest a new parameter (half-width of the hormetic curve at zone II) for quantitative characterization of the hormetic curve. The problems limiting progress in the development of the hormesis concept such as low reproducibility and predictability may be solved, at least partly, by deciphering the molecular mechanisms underlying the hormetic dose-effect relationship.

  15. Construction Formula of Biological Age Using the Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Linpei Jia

    2016-01-01

    Full Text Available The biological age (BA) equation is a prediction model that utilizes an algorithm to combine various biological markers of ageing. Different from traditional concepts, the BA equation does not emphasize the importance of a golden index but focuses on using indices of vital organs to represent the senescence of the whole body. This model has been used to assess the ageing process in a more precise way and may predict possible diseases better as compared with the chronological age (CA). The principal component analysis (PCA) is applied as one of the common and frequently used methods in the construction of the BA formula. Compared with other methods, PCA has its own study procedures and features. Herein we summarize the up-to-date knowledge about BA formula construction and discuss the influential factors, so as to give an overview of BA estimation by PCA, including composition of samples, choices of test items, and selection of ageing biomarkers. We also discuss the advantages and disadvantages of PCA with reference to the construction mechanism, accuracy, and practicability of several common methods in the construction of the BA formula.

  16. Principal Component Analysis and Automatic Relevance Determination in Damage Identification

    CERN Document Server

    Mdlazi, L; Stander, C J; Scheffer, C; Heyns, P S

    2007-01-01

    This paper compares two neural network input selection schemes, Principal Component Analysis (PCA) and Automatic Relevance Determination (ARD) based on MacKay's evidence framework. PCA takes all the input data and projects it onto a lower dimension space, thereby reducing the dimension of the input space. This input reduction method often results in parameters that have significant influence on the dynamics of the data being diluted by those that do not influence the dynamics of the data. ARD selects the most relevant input parameters and discards those that do not contribute significantly to the dynamics of the data being modelled. ARD sometimes results in important input parameters being discarded, thereby compromising the dynamics of the data. The PCA and ARD methods are implemented together with a Multi-Layer-Perceptron (MLP) network for fault identification in structures and the performance of the two methods is assessed. It is observed that ARD and PCA give similar accuracy le...

  17. A robust polynomial principal component analysis for seismic noise attenuation

    Science.gov (United States)

    Wang, Yuchen; Lu, Wenkai; Wang, Benfeng; Liu, Lei

    2016-12-01

    Random and coherent noise attenuation is a significant aspect of seismic data processing, especially for pre-stack seismic data flattened by normal moveout correction or migration. Signal extraction is widely used for pre-stack seismic noise attenuation. Principal component analysis (PCA), one of the multi-channel filters, is a common tool to extract seismic signals, and can be realized by singular value decomposition (SVD). However, when applying the traditional PCA filter to seismic signal extraction, the result is unsatisfactory, with some artifacts, when the seismic data are contaminated by random and coherent noise. In order to directly extract the desired signal and fix those artifacts at the same time, we take into consideration the amplitude variation with offset (AVO) property and thus propose a robust polynomial PCA algorithm. In this algorithm, a polynomial constraint is used to optimize the coefficient matrix. In order to simplify this complicated problem, a series of sub-optimal problems are designed and solved iteratively. After that, the random and coherent noise can be effectively attenuated simultaneously. Applications on synthetic and real data sets show that our proposed algorithm can better suppress random and coherent noise and better protect the desired signals, compared with local polynomial fitting, conventional PCA and an L1-norm based PCA method.

  18. Biological agent detection based on principal component analysis

    Science.gov (United States)

    Mudigonda, Naga R.; Kacelenga, Ray

    2006-05-01

    This paper presents an algorithm, based on principal component analysis for the detection of biological threats using General Dynamics Canada's 4WARN Sentry 3000 biodetection system. The proposed method employs a statistical method for estimating background biological activity so as to make the algorithm adaptive to varying background situations. The method attempts to characterize the pattern of change that occurs in the fluorescent particle counts distribution and uses the information to suppress false-alarms. The performance of the method was evaluated using a total of 68 tests including 51 releases of Bacillus Globigii (BG), six releases of BG in the presence of obscurants, six releases of obscurants only, and five releases of ovalbumin at the Ambient Breeze Tunnel Test facility, Battelle, OH. The peak one-minute average concentration of BG used in the tests ranged from 10 - 65 Agent Containing Particles per Liter of Air (ACPLA). The obscurants used in the tests included diesel smoke, white grenade smoke, and salt solution. The method successfully detected BG at a sensitivity of 10 ACPLA and resulted in an overall probability of detection of 94% for BG without generating any false-alarms for obscurants at a detection threshold of 0.6 on a scale of 0 to 1. Also, the method successfully detected BG in the presence of diesel smoke and salt water fumes. The system successfully responded to all the five ovalbumin releases with noticeable trends in algorithm output and alarmed for two releases at the selected detection threshold.

  19. Cancer Care Coordination: a Systematic Review and Meta-Analysis of Over 30 Years of Empirical Studies.

    Science.gov (United States)

    Gorin, Sherri Sheinfeld; Haggstrom, David; Han, Paul K J; Fairfield, Kathleen M; Krebs, Paul; Clauser, Steven B

    2017-07-06

    According to a landmark study by the Institute of Medicine, patients with cancer often receive poorly coordinated care in multiple settings from many providers. Lack of coordination is associated with poor symptom control, medical errors, and higher costs. The aims of this systematic review and meta-analysis were to (1) synthesize the findings of studies addressing cancer care coordination, (2) describe study outcomes across the cancer continuum, and (3) obtain a quantitative estimate of the effect of interventions in cancer care coordination on service system processes and patient health outcomes. Of 1241 abstracts identified through MEDLINE, EMBASE, CINAHL, and the Cochrane Library, 52 studies met the inclusion criteria. Each study had US or Canadian participants, comparison or control groups, measures, times, samples, and/or interventions. Two researchers independently applied a standardized search strategy, coding scheme, and online coding program to each study. Eleven studies met the additional criteria for the meta-analysis; a random effects estimation model was used for data analysis. Cancer care coordination approaches led to improvements in 81% of outcomes, including screening, measures of patient experience with care, and quality of end-of-life care. Across the continuum of cancer care, patient navigation was the most frequent care coordination intervention, followed by home telehealth; nurse case management was third in frequency. The meta-analysis of a subset of the reviewed studies showed that the odds of appropriate health care utilization in cancer care coordination interventions were almost twice (OR = 1.9, 95% CI = 1.5-3.5) that of comparison interventions. This review offers promising findings on the impact of cancer care coordination on increasing value and reducing healthcare costs in the USA.
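
    For readers unfamiliar with the reported model, the sketch below shows generic DerSimonian-Laird random-effects pooling of log odds ratios; all study-level numbers are invented for illustration, not data from the review.

```python
# Generic random-effects (DerSimonian-Laird) pooling of log odds ratios,
# the kind of model the meta-analysis above reports. Inputs are invented.
import numpy as np

def dersimonian_laird(log_or, var):
    w = 1.0 / var                                   # fixed-effect weights
    theta_fe = np.sum(w * log_or) / np.sum(w)
    Q = np.sum(w * (log_or - theta_fe) ** 2)        # heterogeneity statistic
    df = len(log_or) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max((Q - df) / c, 0.0)                   # between-study variance
    w_re = 1.0 / (var + tau2)                       # random-effects weights
    theta = np.sum(w_re * log_or) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return np.exp(theta), np.exp(theta - 1.96 * se), np.exp(theta + 1.96 * se)

log_or = np.log(np.array([1.6, 2.2, 1.4, 2.8, 1.9]))    # per-study odds ratios
var = np.array([0.05, 0.10, 0.08, 0.20, 0.12])          # their variances
print(dersimonian_laird(log_or, var))                   # pooled OR and 95% CI
```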

  20. Principal component analysis of NEXAFS spectra for molybdenum speciation in hydrotreating catalysts

    Directory of Open Access Journals (Sweden)

    Arnaldo da C. Faro Jr

    2010-01-01

    Full Text Available Bulk and supported molybdenum based catalysts, modified by nickel, phosphorous or tungsten were studied by NEXAFS spectroscopy at the Mo L III and L II edges. The techniques of principal component analysis (PCA) together with a linear combination analysis (LCA) allowed the detection and quantification of molybdenum atoms in two different coordination states in the oxide form of the catalysts, namely tetrahedral and octahedral coordination.

  1. IJBlob: An ImageJ Library for Connected Component Analysis and Shape Analysis

    OpenAIRE

    2013-01-01

    The IJBlob library is a free ImageJ library for connected component analysis. Furthermore, it implements several contour based shape features to describe, filter or classify binary objects in images. Additional features can be added via the IJBlob extension framework. Because connected component labeling is a fundamental operation in many image processing pipelines (e.g. pattern recognition), the library could be useful for many ImageJ projects. The library is written in Java and the recent relea...
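
    The core operation named in the record is connected component labeling. Below is a generic 4-connectivity flood-fill sketch in Python; IJBlob itself is a Java library, so this is an illustration of the operation, not its code.

```python
# Minimal connected component labeling for a binary image via BFS flood
# fill with 4-connectivity: each foreground region receives a unique label.
from collections import deque
import numpy as np

def label_components(binary):
    labels = np.zeros_like(binary, dtype=int)
    current = 0
    h, w = binary.shape
    for i in range(h):
        for j in range(w):
            if binary[i, j] and labels[i, j] == 0:   # unlabeled foreground pixel
                current += 1
                labels[i, j] = current
                queue = deque([(i, j)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            queue.append((ny, nx))
    return labels, current

img = np.array([[1, 1, 0, 0],
                [0, 1, 0, 1],
                [0, 0, 0, 1]])
labels, n = label_components(img)   # n == 2 separate blobs
```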

  2. Trimming of mammalian transcriptional networks using network component analysis

    Directory of Open Access Journals (Sweden)

    Liao James C

    2010-10-01

    Full Text Available Abstract Background: Network Component Analysis (NCA) has been used to deduce the activities of transcription factors (TFs) from gene expression data and the TF-gene binding relationship. However, the TF-gene interaction varies in different environmental conditions and tissues, but such information is rarely available and cannot be predicted simply by motif analysis. Thus, it is beneficial to identify key TF-gene interactions under the experimental condition based on transcriptome data. Such information would be useful in identifying key regulatory pathways and gene markers of TFs in further studies. Results: We developed an algorithm to trim network connectivity such that the important regulatory interactions between the TFs and the genes were retained and the regulatory signals were deduced. Theoretical studies demonstrated that the regulatory signals were accurately reconstructed even in the case where only three independent transcriptome datasets were available. At least 80% of the main target genes were correctly predicted in the extreme condition of high noise level and small number of datasets. Our algorithm was tested with transcriptome data taken from mice under rapamycin treatment. The initial network topology from the literature contains 70 TFs, 778 genes, and 1423 edges between the TFs and genes. Our method retained 1074 edges (i.e. 75% of the original edge number) and identified 17 TFs as being significantly perturbed under the experimental condition. Twelve of these TFs are involved in MAPK signaling or myeloid leukemia pathways defined in the KEGG database, or are known to physically interact with each other. Additionally, four of these TFs, which are Hif1a, Cebpb, Nfkb1, and Atf1, are known targets of rapamycin. Furthermore, the trimmed network was able to predict Eno1 as an important target of Hif1a; this key interaction could not be detected without trimming the regulatory network. Conclusions: The advantage of our new algorithm
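
    A rough sketch of an NCA-style decomposition of the kind this method builds on: expression E ≈ A·S, where the mixing matrix A is constrained to a known TF-gene connectivity pattern (entries outside the pattern stay zero), fitted here by masked alternating least squares on toy data. This illustrates the constrained factorization only, not the authors' trimming algorithm.

```python
# Masked alternating least squares for an NCA-style model E = A @ S, with
# A (genes x TFs) restricted to the binary connectivity pattern Z.
import numpy as np

def nca_als(E, Z, n_iter=200):
    """E: (genes, samples) expression; Z: binary (genes, TFs) connectivity."""
    rng = np.random.default_rng(5)
    A = Z * rng.normal(size=Z.shape)                 # zeros of Z stay zero
    for _ in range(n_iter):
        S, *_ = np.linalg.lstsq(A, E, rcond=None)    # TF activity signals
        for g in range(Z.shape[0]):                  # refit each gene's row,
            idx = np.flatnonzero(Z[g])               # touching allowed entries only
            if idx.size:
                A[g, idx], *_ = np.linalg.lstsq(S[idx].T, E[g], rcond=None)
    return A, S

genes, tfs, samples = 30, 4, 6
rng = np.random.default_rng(6)
Z = (rng.random((genes, tfs)) < 0.3).astype(float)   # toy TF-gene topology
E = (Z * rng.normal(size=Z.shape)) @ rng.normal(size=(tfs, samples))
A, S = nca_als(E, Z)
```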

  3. Evaluation of Staining-Dependent Colour Changes in Resin Composites Using Principal Component Analysis.

    Science.gov (United States)

    Manojlovic, D; Lenhardt, L; Milićević, B; Antonov, M; Miletic, V; Dramićanin, M D

    2015-10-09

    Colour changes in Gradia Direct™ composite after immersion in tea, coffee, red wine, Coca-Cola, Colgate mouthwash, and distilled water were evaluated using principal component analysis (PCA) and the CIELAB colour coordinates. The reflection spectra of the composites were used as input data for the PCA. The output data (scores and loadings) provided information about the magnitude and origin of the surface reflection changes after exposure to the staining solutions. The reflection spectra of the stained samples generally exhibited lower reflection in the blue spectral range, which was manifested in the lower content of the blue shade for the samples. Both analyses demonstrated the high staining abilities of tea, coffee, and red wine, which produced total colour changes of 4.31, 6.61, and 6.22, respectively, according to the CIELAB analysis. PCA revealed subtle changes in the reflection spectra of composites immersed in Coca-Cola, demonstrating Coca-Cola's ability to stain the composite to a small degree.

  4. A Component Analysis of Cognitive-Behavioral Treatment for Depression.

    Science.gov (United States)

    Jacobson, Neil S.; And Others

    1996-01-01

    Tested Beck's theory explaining efficacy of cognitive- behavioral therapy (CT) for depression. Involved randomly assigning 150 outpatients with major depression to a treatment focused on the behavioral activation (BA) component of CT, a treatment including BA and teaching skills to modify automatic thoughts, but excluding the components of CT…

  5. Fatigue Reliability Analysis of Wind Turbine Cast Components

    DEFF Research Database (Denmark)

    Rafsanjani, Hesam Mirzaei; Sørensen, John Dalsgaard; Fæster, Søren

    2017-01-01

    The fatigue life of wind turbine cast components, such as the main shaft in a drivetrain, is generally determined by defects from the casting process. These defects may reduce the fatigue life and they are generally distributed randomly in components. The foundries, cutting facilities and test...

  6. Analysis of Minor Component Segregation in Ternary Powder Mixtures

    Science.gov (United States)

    Asachi, Maryam; Hassanpour, Ali; Ghadiri, Mojtaba; Bayly, Andrew

    2017-06-01

    In many powder handling operations, inhomogeneity in powder mixtures caused by segregation could have significant adverse impact on the quality as well as economics of the production. Segregation of a minor component of a highly active substance could have serious deleterious effects, an example is the segregation of enzyme granules in detergent powders. In this study, the effects of particle properties and bulk cohesion on the segregation tendency of minor component are analysed. The minor component is made sticky while not adversely affecting the flowability of samples. The segregation extent is evaluated using image processing of the photographic records taken from the front face of the heap after the pouring process. The optimum average sieve cut size of components for which segregation could be reduced is reported. It is also shown that the extent of segregation is significantly reduced by applying a thin layer of liquid to the surfaces of minor component, promoting an ordered mixture.

  7. Analysis of Minor Component Segregation in Ternary Powder Mixtures

    Directory of Open Access Journals (Sweden)

    Asachi Maryam

    2017-01-01

    Full Text Available In many powder handling operations, inhomogeneity in powder mixtures caused by segregation could have significant adverse impact on the quality as well as economics of the production. Segregation of a minor component of a highly active substance could have serious deleterious effects, an example is the segregation of enzyme granules in detergent powders. In this study, the effects of particle properties and bulk cohesion on the segregation tendency of minor component are analysed. The minor component is made sticky while not adversely affecting the flowability of samples. The segregation extent is evaluated using image processing of the photographic records taken from the front face of the heap after the pouring process. The optimum average sieve cut size of components for which segregation could be reduced is reported. It is also shown that the extent of segregation is significantly reduced by applying a thin layer of liquid to the surfaces of minor component, promoting an ordered mixture.

  8. Preliminary Analysis of Competency Assessment of Organ Donation Coordinators in Hunan Province, China.

    Science.gov (United States)

    Luo, A; Xie, W; Luo, J; Deng, X

    The organ donation coordinator is indispensable in the process of organ donation and transplantation. The competency of coordinators is closely related to the organ donation rate. The aims were 1) to construct a competency assessment system for organ donation coordinators and 2) to evaluate the competency level of coordinators in Hunan province. We constructed the competency model framework for coordinators based on the McClelland competency model and then extracted and screened the competency indicators by interview and Delphi methods. Next, we determined the weight of the indicators by an analytic hierarchy process method. Finally, we evaluated the competency level of 42 coordinators in Hunan province with the use of our assessment system. 1) We constructed the competency evaluation system for organ donation coordinators, which included 6 dimensions and 21 competency indicators. 2) The average competency score of the 42 coordinators was 79.43 ± 8.51. Five coordinators were at the qualified level (11.9%), 18 at the moderate level (42.9%), 12 at the good level (28.6%), and 7 at the excellent level (16.7%). 1) This competency evaluation system for organ donation coordinators will provide scientific evidence for human resource management in health institutions. 2) The organ donation coordinators in Hunan were qualified, but their number was insufficient.

  9. Optimal Coordinated Strategy Analysis for the Procurement Logistics of a Steel Group

    Directory of Open Access Journals (Sweden)

    Lianbo Deng

    2014-01-01

    Full Text Available This paper focuses on the optimization of an internal coordinated procurement logistics system in a steel group and the decision on the coordinated procurement strategy by minimizing the logistics costs. Considering the coordinated procurement strategy and the procurement logistics costs, the aim of the optimization model was to maximize the degree of quality satisfaction and to minimize the procurement logistics costs. The model was transformed into a single-objective model and solved using a simulated annealing algorithm. In the algorithm, the supplier of each subsidiary was selected according to the evaluation result for independent procurement. Finally, the effect of different parameters on the coordinated procurement strategy was analysed. The results showed that the coordinated strategy can clearly save procurement costs; that the strategy appears to be more cooperative when the quality requirement is not stricter; and that the coordinated costs have a strong effect on the coordinated procurement strategy.
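
    A generic simulated annealing skeleton of the kind used above, applied after the two objectives have been scalarized into a single cost; the cost table and the neighbourhood move are illustrative placeholders for the supplier-assignment problem, not the paper's formulation.

```python
# Generic simulated annealing: accept downhill moves always, uphill moves
# with Boltzmann probability, and cool the temperature geometrically.
import math
import random

def simulated_annealing(cost, neighbor, x0, T0=1.0, alpha=0.95, steps=5000):
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    T = T0
    for _ in range(steps):
        y = neighbor(x)
        fy = cost(y)
        if fy < fx or random.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        T *= alpha                        # geometric cooling schedule
    return best, fbest

# Toy instance: assign 5 subsidiaries to one of 3 suppliers, random cost table.
random.seed(7)
costs = [[random.uniform(1, 10) for _ in range(3)] for _ in range(5)]

def total_cost(assign):
    return sum(costs[i][s] for i, s in enumerate(assign))

def reassign_one(assign):
    a = list(assign)
    a[random.randrange(len(a))] = random.randrange(3)  # move one subsidiary
    return a

print(simulated_annealing(total_cost, reassign_one, [0] * 5))
```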

  10. TECHNICAL COORDINATION

    CERN Multimedia

    A. Ball

    Overview From a technical perspective, CMS has been in “beam operation” state since 6th November. The detector is fully closed with all components operational and the magnetic field is normally at the nominal 3.8T. The UXC cavern is normally closed with the radiation veto set. Access to UXC is now only possible during downtimes of LHC. Such accesses must be carefully planned, documented and carried out in agreement with CMS Technical Coordination, Experimental Area Management, LHC programme coordination and the CCC. Material flow in and out of UXC is now strictly controlled. Access to USC remains possible at any time, although, for safety reasons, it is necessary to register with the shift crew in the control room before going down. It is obligatory for all material leaving UXC to pass through the underground buffer zone for RP scanning, database entry and appropriate labeling for traceability. Technical coordination (notably Stephane Bally and Christoph Schaefer), the shift crew and run ...

  11. Normal coordinate analysis and fungicidal activity study on anilazine and its related compound using spectroscopic techniques

    Science.gov (United States)

    Sheeja Mol, Gilbert Pushpam; Arul Dhas, Deva Dhas; Hubert Joe, Isaac; Balachandran, Sreedharan

    2016-06-01

    The FTIR and FT-Raman spectra of anilazine have been recorded in the ranges 400-4000 cm-1 and 50-3500 cm-1, respectively. The optimized geometrical parameters of the compound were calculated using the B3LYP method with the 6-311G(d,p) basis set. The assignment of the vibrational bands was carried out with the help of normal coordinate analysis (NCA). The 1H and 13C NMR spectra have been recorded and the chemical shifts of the molecule were calculated using the gauge independent atomic orbital (GIAO) method. The UV-Visible spectrum of the compound was recorded in the region 190-900 nm and the electronic properties were determined by the time-dependent DFT (TD-DFT) approach. Anilazine was screened for its antifungal activity. Molecular docking studies were conducted to predict its fungicidal activity.

  12. Real-Time Performance Analysis of Infrastructure-based IEEE 802.11 Distributed Coordination Function

    CERN Document Server

    Xia, Feng; Wang, Linqiang; Hao, Ruonan

    2012-01-01

    With the increasing popularity of wireless networks, wireless local area networks (WLANs) have attracted significant research interest and play a critical role in providing anywhere and anytime connectivity. For WLANs the IEEE 802.11 standard is the most mature technology and has been widely adopted for wireless networks. This paper analyzes the real-time performance of the IEEE 802.11 standard, adopting the MAC protocol of the Distributed Coordination Function (DCF) operating in infrastructure mode. Extensive simulations have been done to examine how the network performance in terms of real-time metrics, including effective data rate, latency and packet loss rate, is impacted by critical parameters (e.g. CWmin and packet payload). The results are presented and analyzed. The analysis of simulation results can provide support for parameter configuration and optimization of WLANs for real-time applications.
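
    A toy slotted simulation of the DCF binary exponential backoff can illustrate how CWmin and the station count drive collision rates. The model below ignores payload timing and the freezing of counters during transmissions, and all parameters are illustrative assumptions, not the paper's simulation setup.

```python
# Toy slotted model of 802.11 DCF backoff: each saturated station draws a
# backoff in [0, CW]; collisions double CW up to CWmax, successes reset it.
import random

def simulate_dcf(n_stations=10, cw_min=31, cw_max=1023, slots=100_000):
    random.seed(8)
    cw = [cw_min] * n_stations
    backoff = [random.randint(0, c) for c in cw]
    success = collision = 0
    for _ in range(slots):
        ready = [i for i, b in enumerate(backoff) if b == 0]
        if len(ready) == 1:                  # exactly one transmitter: success
            success += 1
        elif len(ready) > 1:                 # simultaneous zeros: collision
            collision += 1
        for i in range(n_stations):
            if i in ready:
                if len(ready) == 1:
                    cw[i] = cw_min                       # success resets window
                else:
                    cw[i] = min(2 * cw[i] + 1, cw_max)   # collision doubles it
                backoff[i] = random.randint(0, cw[i])    # draw a new backoff
            else:
                backoff[i] -= 1              # others count down one idle slot
    return success / slots, collision / slots

print(simulate_dcf())   # rough per-slot success and collision probabilities
```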

  13. Tracing cattle breeds with principal components analysis ancestry informative SNPs.

    Directory of Open Access Journals (Sweden)

    Jamey Lewis

    Full Text Available The recent release of the Bovine HapMap dataset represents the most detailed survey of bovine genetic diversity to date, providing an important resource for the design and development of livestock production. We studied this dataset, comprising more than 30,000 Single Nucleotide Polymorphisms (SNPs) for 19 breeds (13 taurine, three zebu, and three hybrid breeds), seeking to identify small panels of genetic markers that can be used to trace the breed of unknown cattle samples. Taking advantage of the power of Principal Components Analysis and algorithms that we have recently described for the selection of Ancestry Informative Markers from genome-wide datasets, we present a decision tree which can be used to accurately infer the origin of individual cattle. In doing so, we present a thorough examination of population genetic structure in modern bovine breeds. Performing extensive cross-validation experiments, we demonstrate that 250-500 carefully selected SNPs suffice in order to achieve close to 100% prediction accuracy of individual ancestry, when this particular set of 19 breeds is considered. Our methods, coupled with the dense genotypic data that is becoming increasingly available, have the potential to become a valuable tool and have considerable impact in worldwide livestock production. They can be used to inform the design of studies of the genetic basis of economically important traits in cattle, as well as breeding programs and efforts to conserve biodiversity. Furthermore, the SNPs that we have identified can provide a reliable solution for the traceability of breed-specific branded products.

  14. MEASURING THE LEANNESS OF SUPPLIERS USING PRINCIPAL COMPONENT ANALYSIS TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Y. Zare Mehrjerdi

    2012-01-01

    Full Text Available A technique that helps management to reduce costs and improve quality is 'lean supply chain management', which focuses on the elimination of all wastes in every stage of the supply chain and is derived from 'agile production'. This research aims to assess and rank the suppliers in an auto industry, based upon the concept of 'production leanness'. The focus of this research is on the suppliers of a company called Touse-Omron Naein. We have examined the literature about leanness, and classified its criteria into ten dimensions and 76 factors. A questionnaire was used to collect the data, and the suppliers were ranked using the principal component analysis (PCA) technique.

    AFRIKAANSE OPSOMMING (translated): Lean supply chain management is a technique that enables management to reduce costs and improve quality. It focuses on reducing waste at every stage of the supply chain and is derived from agile production. This research seeks to evaluate suppliers in an automotive industry using the concept of production leanness. The research focuses on suppliers of a company called Touse-Omron Naein. A literature study on leanness led to the classification of criteria into ten dimensions and 76 factors. A questionnaire was used to collect the data, and the suppliers were ranked using the PCA technique.
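
    A minimal sketch of one common PCA ranking construction consistent with the abstract (not necessarily the paper's exact weighting): standardize the questionnaire matrix, project onto the principal components, and rank suppliers by a composite score weighted by explained variance. The supplier count and data below are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def rank_suppliers(scores):
    """scores: (n_suppliers, n_factors) questionnaire matrix. Suppliers are
    ordered by a composite leanness index: PCA scores weighted by the share
    of variance each component explains."""
    z = StandardScaler().fit_transform(scores)
    pca = PCA().fit(z)
    composite = pca.transform(z) @ pca.explained_variance_ratio_
    return np.argsort(composite)[::-1]              # leanest supplier first

rng = np.random.default_rng(1)
X = rng.random((12, 76))    # 12 hypothetical suppliers x 76 leanness factors
print(rank_suppliers(X))
```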

  15. WEB SERVICE SELECTION ALGORITHM BASED ON PRINCIPAL COMPONENT ANALYSIS

    Institute of Scientific and Technical Information of China (English)

    Kang Guosheng; Liu Jianxun; Tang Mingdong; Cao Buqing

    2013-01-01

    Existing Web service selection approaches usually assume that user preferences have been provided in a quantitative form. However, due to the subjectivity and vagueness of preferences, it may be impractical for users to specify quantitative and exact preferences. Moreover, because Quality of Service (QoS) attributes are often interrelated, existing Web service selection approaches that employ a weighted summation of QoS attribute values to compute the overall QoS of Web services may produce inaccurate results, since they do not take correlations among QoS attributes into account. To resolve these problems, a Web service selection framework considering the user's preference priority is proposed, which incorporates a searching mechanism with QoS range setting to identify services satisfying the user's QoS constraints. With the identified service candidates, an algorithm of Web service selection named PCA-WSS (Web Service Selection based on PCA), built on the idea of Principal Component Analysis (PCA), is proposed, which can eliminate the correlations among QoS attributes and compute the overall QoS of Web services accurately. After computing the overall QoS for each service, the algorithm ranks the Web service candidates by their overall QoS and recommends the services with the top QoS values to users. Finally, the effectiveness and feasibility of the approach are validated by experiments: the Web services selected by the approach receive higher average user evaluations than others, and the time cost of the PCA-WSS algorithm is not strongly affected by the number of service candidates.
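
    A minimal sketch of PCA-based overall-QoS ranking in the spirit of PCA-WSS (the paper's exact algorithm is not reproduced here): cost-type attributes are negated so that larger is uniformly better, the standardized QoS matrix is projected onto its principal components, and services are ranked by a variance-weighted composite. The attribute names and data are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def rank_services(qos, cost_attrs=()):
    """qos: (n_services, n_attributes). Cost attributes (latency, price, ...)
    are negated so that larger is uniformly better, then an overall QoS score
    is built from variance-weighted principal component scores."""
    q = qos.astype(float).copy()
    q[:, list(cost_attrs)] *= -1
    z = StandardScaler().fit_transform(q)
    pca = PCA().fit(z)
    overall = pca.transform(z) @ pca.explained_variance_ratio_
    return np.argsort(overall)[::-1]

# 8 candidates x 4 attributes: throughput, reliability, latency, price
rng = np.random.default_rng(2)
qos = rng.random((8, 4))
print(rank_services(qos, cost_attrs=(2, 3))[:3])    # top-3 recommendations
```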

  16. Inverse spatial principal component analysis for geophysical survey data interpolation

    Science.gov (United States)

    Li, Qingmou; Dehler, Sonya A.

    2015-04-01

    The starting point for data processing, visualization, and overlay with other data sources in geological applications often involves building a regular grid by interpolation of geophysical measurements. Typically, the sampling interval along survey lines is much higher than the spacing between survey lines, because the geophysical recording system is able to operate at a high sampling rate, while the costs and slower speeds associated with operational platforms limit line spacing. However, currently available interpolation methods often smooth the data observed at a higher sampling rate along a survey line to accommodate the lower spacing across lines, and much of the higher-resolution information is not captured in the interpolation process. Here, a method termed inverse spatial principal component analysis (isPCA) is developed to address this problem. In the isPCA method, a whole profile observation together with its line position is handled as an entity, and the survey's collection of line entities is analyzed for interpolation. To test its performance, the isPCA method is used to process a simulated airborne magnetic survey from an existing magnetic grid offshore the Atlantic coast of Canada. The interpolation results using the isPCA method and other methods are compared with the original survey grid. It is demonstrated that the isPCA method outperforms the Inverse Distance Weighting (IDW), Kriging (geostatistical), and MINimum Curvature (MINC) interpolation methods in retaining detailed anomaly structures and restoring original values. In a second test, a high-resolution magnetic survey offshore Cape Breton, Nova Scotia, Canada, was processed and the results are compared with other geological information. This example demonstrates the effective performance of the isPCA method in basin structure identification.
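
    isPCA itself is the paper's contribution and is not reconstructed here; as a grounded point of reference, the following is a minimal sketch of the inverse distance weighting (IDW) baseline the abstract benchmarks against, applied to the dense-along-line, sparse-across-line sampling pattern it describes. The synthetic survey geometry is an assumption.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_grid, power=2.0, eps=1e-12):
    """Inverse distance weighting: each grid value is a distance-weighted
    average of all observations; `power` sets how fast influence decays."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w @ z_obs) / w.sum(axis=1)

# dense sampling along three survey lines, sparse across them
xs = np.linspace(0.0, 10.0, 200)
obs = np.vstack([np.column_stack([xs, np.full_like(xs, y)])
                 for y in (0.0, 2.0, 4.0)])
z = np.sin(obs[:, 0]) + 0.3 * obs[:, 1]            # synthetic anomaly field
grid = np.column_stack([np.repeat(np.linspace(0, 10, 50), 5),
                        np.tile(np.linspace(0, 4, 5), 50)])
print(idw(obs, z, grid)[:5])
```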

  17. A comparison of principal components using TPCA and nonstationary principal component analysis on daily air-pollutant concentration series

    Science.gov (United States)

    Shen, Chenhua

    2017-02-01

    We applied traditional principal component analysis (TPCA) and nonstationary principal component analysis (NSPCA) to determine the principal components of six daily air-pollutant concentration series (SO2, NO2, CO, O3, PM2.5 and PM10) in Nanjing from January 2013 to March 2016. The results show that using TPCA, two principal components reflect the variance of these series: primary pollutants (SO2, NO2, CO, PM2.5 and PM10) and secondary pollutants (e.g., O3). However, using NSPCA, three principal components can be determined that reflect the detrended variance of these series: 1) a mixture of primary and secondary pollutants, 2) primary pollutants and 3) secondary pollutants. Different approaches thus yield different principal components, a phenomenon closely related to the method used for calculating the cross-correlation between the air pollutants. NSPCA is a more applicable, reliable method than TPCA for analyzing the principal components of series exhibiting nonstationarity and long-range correlation. Moreover, using detrended cross-correlation analysis (DCCA), the cross-correlation between O3 and NO2 is negative at short timescales and positive at long timescales: on hourly timescales, O3 is negatively correlated with NO2 due to the photochemical interaction, while on daily timescales O3 is positively correlated with NO2 because of the decomposition of O3. On monthly timescales, the cross-correlation of O3 with NO2 behaves similarly to that of O3 with meteorological elements. DCCA is again shown to be more appropriate than Pearson's method for disclosing the cross-correlation between series in the presence of nonstationarity, and can improve our understanding of their interaction mechanisms.
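
    A minimal sketch of the detrended cross-correlation coefficient (Zebende's rho_DCCA) at a single timescale, of the kind used above to compare series at short and long scales; the synthetic series are placeholders and are not meant to reproduce the O3-NO2 sign change.

```python
import numpy as np

def dcca_coeff(x, y, scale):
    """rho_DCCA at one timescale: integrate both series into profiles,
    detrend each profile linearly in windows of length `scale`, then
    correlate the residual covariances (value lies in [-1, 1])."""
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
    t = np.arange(scale)
    covs, vx, vy = [], [], []
    for s in range(0, len(x) - scale + 1, scale):
        xw, yw = X[s:s + scale], Y[s:s + scale]
        rx = xw - np.polyval(np.polyfit(t, xw, 1), t)   # detrended residuals
        ry = yw - np.polyval(np.polyfit(t, yw, 1), t)
        covs.append((rx * ry).mean())
        vx.append((rx ** 2).mean())
        vy.append((ry ** 2).mean())
    return np.mean(covs) / np.sqrt(np.mean(vx) * np.mean(vy))

rng = np.random.default_rng(3)
base = np.cumsum(rng.normal(size=2000))          # shared long-range trend
series_a = -base + 5 * rng.normal(size=2000)     # anticorrelated toy series
series_b = base + 5 * rng.normal(size=2000)
for n in (8, 64, 256):                           # short to long timescales
    print(n, round(dcca_coeff(series_a, series_b, n), 2))
```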

  18. Coordinated Analysis 101: A Joint Training Session Sponsored by LPI and ARES/JSC

    Science.gov (United States)

    Draper, D. S.; Treiman, A. H.

    2017-01-01

    The Lunar and Planetary Institute (LPI) and the Astromaterials Research and Exploration Science (ARES) Division, part of the Exploration Integration and Science Directorate at NASA Johnson Space Center (JSC), co-sponsored a training session in November 2016 for four early-career scientists in the techniques of coordinated analysis. Coordinated analysis refers to the approach of systematically performing high-resolution and high-precision analytical studies on astromaterials, particularly the very small particles typical of recent and near-future sample return missions such as Stardust, Hayabusa, Hayabusa2, and OSIRIS-REx. A series of successive analytical steps is chosen to be performed on the same particle, as opposed to separate subsections of a sample, in such a way that the initial steps do not compromise the results from later steps in the sequence. The data from the entire series can then be integrated for these individual specimens, revealing important insights obtainable no other way. ARES/JSC scientists have played a leading role in the development and application of this approach for many years. Because the coming years will bring new sample collections from these and other planned NASA and international exploration missions, it is timely to begin disseminating specialized techniques for the study of small and precious astromaterial samples. As part of the Cooperative Agreement between NASA and the LPI, this training workshop was intended as the first in a series of similar training exercises that the two organizations will jointly sponsor in the coming years. These workshops will span the range of analytical capabilities and sample types available at ARES/JSC in the Astromaterials Research and Astromaterials Acquisition and Curation Offices. Here we summarize the activities and participants in this initial training.

  19. Analysis of fault using microcomputer protection by symmetrical component method

    Directory of Open Access Journals (Sweden)

    Ashish Choubey

    2012-09-01

    Full Text Available To enhance power supply reliability for user terminals in a distribution system, i.e. to avoid repeated interference from a fault and rapidly complete automatic fault identification, positioning, isolation, and network reconfiguration up to the resumption of supply to non-fault sections, a microprocessor-based relay protection device has been developed. As fault component theory is widely used in microcomputer protection, and the fault component exists in a fault-component network, it is necessary to build up the fault component network when a short-circuit fault emerges and to draw the current and voltage component phasor diagram at the fault point. In order to understand microcomputer protection based on the symmetrical component principle, we obtained the sequence currents and sequence voltages according to the concept of symmetrical components. Distribution lines supply power directly to users, so the reliability of their operation determines the quality and level of the electricity supply. In recent decades, thanks to the tireless efforts of scientists and technicians in the power industry, relay protection technology and the application level of equipment have been greatly improved, but domestically produced protection devices still rely on outdated computer hardware. Their software is difficult to maintain and short-lived, their interfaces to factory automation systems are weak, their network communication cannot meet actual requirements, and the protection principle configuration and device manufacturing processes also need improvement.
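
    A minimal sketch of the symmetrical-component transform such protection schemes rely on: three-phase phasors are mapped to zero-, positive- and negative-sequence components with the 120-degree rotation operator. The single-line-to-ground fault example is illustrative.

```python
import numpy as np

a = np.exp(2j * np.pi / 3)            # 120-degree rotation operator
A_INV = np.array([[1, 1,     1    ],
                  [1, a,     a**2 ],
                  [1, a**2,  a    ]]) / 3

def sequence_components(ia, ib, ic):
    """Map phase phasors (a, b, c) to (zero, positive, negative) sequence."""
    return A_INV @ np.array([ia, ib, ic])

# Single line-to-ground fault on phase a: the healthy phases carry ~no
# current, so all three sequence components come out equal (the textbook
# signature used by protection logic to classify the fault).
i0, i1, i2 = sequence_components(1000 + 0j, 0 + 0j, 0 + 0j)
print(abs(i0), abs(i1), abs(i2))      # each ~333.3 A
```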

  20. An Example of the Informative Potential of Polar Coordinate Analysis: Sprint Tactics in Elite 1,500-m Track Events

    Science.gov (United States)

    Aragón, Sonia; Lapresa, Daniel; Arana, Javier; Anguera, M. Teresa; Garzón, Belén

    2017-01-01

    Polar coordinate analysis is a powerful data reduction technique based on the Zsum statistic, which is calculated from adjusted residuals obtained by lag sequential analysis. Its use has been greatly simplified since the addition of a module in the free software program HOISAN for performing the necessary computations and producing…
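
    A minimal sketch of the polar coordinate step described above, assuming prospective and retrospective adjusted residuals from lag sequential analysis are already available (the numbers below are illustrative): each pair of Zsum values yields a vector whose length indicates the strength of the relation and whose quadrant indicates its type.

```python
import math

def zsum(z_values):
    """Zsum statistic: sum of adjusted residuals over sqrt of their count."""
    return sum(z_values) / math.sqrt(len(z_values))

def polar_vector(z_prospective, z_retrospective):
    """Vector length > 1.96 is conventionally read as significant; the
    quadrant of the angle gives the type of association."""
    zp, zr = zsum(z_prospective), zsum(z_retrospective)
    length = math.hypot(zp, zr)
    angle = math.degrees(math.atan2(zr, zp)) % 360
    return length, angle

# adjusted residuals at lags +1..+5 and -1..-5 (illustrative numbers only)
print(polar_vector([1.2, 2.1, 0.4, 1.8, 0.9],
                   [-0.7, -1.5, -0.2, -1.1, -0.4]))
```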

  1. Correlation Analysis of some Growth, Yield, Yield Components and ...

    African Journals Online (AJOL)

    Keywords: correlation; wheat; growth; yield; yield components; grain quality. INTRODUCTION. Wheat ... macaroni, biscuits, cookies, cakes, pasta, noodles and couscous; beer, many ... and 6 WAS which ensured weed free plots. Fertilizer was ...

  2. Joint Procrustes Analysis for Simultaneous Nonsingular Transformation of Component Score and Loading Matrices

    Science.gov (United States)

    Adachi, Kohei

    2009-01-01

    In component analysis solutions, post-multiplying a component score matrix by a nonsingular matrix can be compensated by applying its inverse to the corresponding loading matrix. To eliminate this indeterminacy on nonsingular transformation, we propose Joint Procrustes Analysis (JPA) in which component score and loading matrices are simultaneously…

  3. Vibrational spectra and normal coordinate analysis of 2-hydroxy-3-(2-methoxyphenoxy) propyl carbamate

    Science.gov (United States)

    Muthu, S.; Renuga, S.

    2014-11-01

    In this work, the vibrational spectral analysis of the 2-hydroxy-3-(2-methoxyphenoxy) propyl carbamate (2H3MPPLC) molecule was carried out using FT-Raman and FTIR spectroscopy in the ranges 50-4000 cm⁻¹ and 450-4000 cm⁻¹, respectively. The molecular structure, fundamental vibrational frequencies and intensities of the vibrational bands were interpreted with the aid of structure optimizations and normal coordinate force field calculations based on density functional theory (DFT) and ab initio HF methods with the 6-31G(d,p) basis set. The complete vibrational assignments of the wavenumbers were made on the basis of the potential energy distribution (PED). The results of the calculations were applied to simulated spectra of the title compound, which show excellent agreement with the observed spectra. The scaled B3LYP/6-31G(d,p) results show better agreement with the experimental values than the other method. The stability of the molecule arising from hyperconjugative interactions and charge delocalization has been analyzed using natural bond orbital (NBO) analysis. The results confirm the occurrence of intramolecular charge transfer (ICT) within the molecule. The dipole moment (μ), polarizability (α) and hyperpolarizability (β) of the investigated molecule have been computed using the B3LYP/6-31G(d,p) method. Mulliken population analysis of the atomic charges was also performed. In addition, frontier molecular orbital, molecular electrostatic potential (MEP) and thermodynamic property analyses were carried out.

  4. Abstract Interfaces for Data Analysis Component Architecture for Data Analysis Tools

    CERN Document Server

    Barrand, G; Dönszelmann, M; Johnson, A; Pfeiffer, A

    2001-01-01

    The fast turnover of software technologies, in particular in the domain of interactivity (covering user interfaces and visualisation), makes it difficult for a small group of people to produce complete and polished software tools before the underlying technologies make them obsolete. At the HepVis '99 workshop, a working group was formed to improve the production of software tools for data analysis in HENP. Besides promoting a distributed development organisation, one goal of the group is to systematically design a set of abstract interfaces based on modern OO analysis and OO design techniques. An initial domain analysis has come up with several categories (components) found in typical data analysis tools: Histograms, Ntuples, Functions, Vectors, Fitter, Plotter, Analyzer and Controller. Special emphasis was put on reducing the couplings between the categories to a minimum, thus optimising re-use and maintainability of each component individually. The interfaces have been defined in Java and C++ and i...

  5. Principal Component Analysis and Cluster Analysis in Profile of Electrical System

    Science.gov (United States)

    Iswan; Garniwa, I.

    2017-03-01

    This paper presents an approach for profiling an electrical system by combining two algorithms, principal component analysis (PCA) and cluster analysis, based on relevant data on gross domestic regional product and electric power and energy use. The profile describes the condition of the region's electrical system and can inform policy on the spatial development of the electrical system in the future. This paper considers 24 regions in South Sulawesi province as profile center points and uses PCA to assess the regional profile for development. Cluster analysis is used to group these regions into a few clusters according to the new variables produced by PCA. The general planning of the electrical system of South Sulawesi province can provide support for policy making on electrical system development. Future research can add further variables to the existing ones.
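
    A minimal sketch of the combined pipeline the abstract describes, with random stand-ins for the regional indicators: standardize, reduce with PCA, then cluster the PCA scores with k-means. The indicator count and cluster number are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
X = rng.random((24, 6))     # 24 regions x 6 indicators (GDRP, energy use, ...)

Z = StandardScaler().fit_transform(X)            # put indicators on one scale
scores = PCA(n_components=2).fit_transform(Z)    # the "new variables" from PCA
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
print(labels)                                    # cluster id for each region
```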

  6. Exploring functional data analysis and wavelet principal component analysis on ecstasy (MDMA wastewater data

    Directory of Open Access Journals (Sweden)

    Stefania Salvatore

    2016-07-01

    Full Text Available Abstract Background Wastewater-based epidemiology (WBE) is a novel approach in drug use epidemiology which aims to monitor the extent of use of various drugs in a community. In this study, we investigate functional principal component analysis (FPCA) as a tool for analysing WBE data and compare it to traditional principal component analysis (PCA) and to wavelet principal component analysis (WPCA), which is more flexible temporally. Methods We analysed temporal wastewater data from 42 European cities collected daily over one week in March 2013. The main temporal features of ecstasy (MDMA) were extracted using FPCA with both Fourier and B-spline basis functions and three different smoothing parameters, along with PCA and WPCA with different mother wavelets and shrinkage rules. The stability of FPCA was explored through bootstrapping and analysis of sensitivity to missing data. Results The first three principal components (PCs), functional principal components (FPCs) and wavelet principal components (WPCs) explained 87.5-99.6% of the temporal variation between cities, depending on the choice of basis and smoothing. The extracted temporal features from PCA, FPCA and WPCA were consistent. FPCA using a Fourier basis and common-optimal smoothing was the most stable and least sensitive to missing data. Conclusion FPCA is a flexible and analytically tractable method for analysing temporal changes in wastewater data, and is robust to missing data. WPCA did not reveal any rapid temporal changes in the data not captured by FPCA. Overall the results suggest FPCA with Fourier basis functions and a common-optimal smoothing parameter as the most accurate approach when analysing WBE data.
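
    A minimal sketch of Fourier-basis FPCA, assuming the data shape described above (42 cities by 7 daily values; random placeholders here): each curve is smoothed by least squares onto a truncated Fourier basis, and PCA is then performed on the basis coefficients. Dedicated FDA libraries implement this more carefully (penalized smoothing, varying bases).

```python
import numpy as np

def fourier_design(n_points, n_harmonics):
    """Truncated Fourier basis evaluated on an equispaced grid."""
    t = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
    cols = [np.ones(n_points)]
    for k in range(1, n_harmonics + 1):
        cols += [np.sin(k * t), np.cos(k * t)]
    return np.column_stack(cols)

def fpca_fourier(curves, n_harmonics=2, n_components=3):
    """curves: (n_curves, n_points). Least-squares smoothing onto the basis
    (fewer harmonics = more smoothing), then PCA on the coefficients."""
    B = fourier_design(curves.shape[1], n_harmonics)
    coef, *_ = np.linalg.lstsq(B, curves.T, rcond=None)
    C = coef.T - coef.T.mean(axis=0)                 # centred coefficients
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]  # city scores on each FPC
    fpcs = (B @ Vt[:n_components].T).T               # FPCs on the time grid
    explained = (s**2 / (s**2).sum())[:n_components]
    return scores, fpcs, explained

rng = np.random.default_rng(5)
data = rng.random((42, 7))          # 42 cities x 7 daily values (placeholders)
scores, fpcs, explained = fpca_fourier(data)
print(explained)                    # share of temporal variation per component
```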

  7. Paradigm-free mapping with morphological component analysis: getting most out of fMRI data

    Science.gov (United States)

    Caballero Gaudes, César; Van De Ville, Dimitri; Petridou, Natalia; Lazeyras, François; Gowland, Penny

    2011-09-01

    Functional magnetic resonance imaging (fMRI) is a non-invasive imaging technique that maps the brain's response to neuronal activity based on the blood oxygenation level dependent (BOLD) effect. This work proposes a novel method for fMRI data analysis that enables the decomposition of the fMRI signal into its sources based on morphological descriptors. Beyond traditional fMRI hypothesis-based or blind data-driven exploratory approaches, this method allows the detection of BOLD responses without prior timing information. It is based on the deconvolution of the neuronal-related haemodynamic component of the fMRI signal with paradigm free mapping, and also furnishes estimates of the movement-related effects, instrumental drifts and physiological fluctuations. Our algorithm is based on an overcomplete representation of the fMRI voxel time series with an additive linear model that is recovered by means of an L1-norm regularized least-squares estimator and an adapted block coordinate relaxation procedure. The performance of the technique is evaluated with simulated data and real experimental data acquired at 3T.
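
    A minimal sketch of L1-regularized haemodynamic deconvolution in the spirit of paradigm free mapping (not the authors' exact estimator): a convolution matrix built from an assumed HRF shape is inverted with a sparsity-promoting Lasso fit, recovering event onsets from a single synthetic voxel time series without timing priors. The HRF shape and the penalty weight are assumptions tuned for this toy signal.

```python
import numpy as np
from scipy.linalg import toeplitz
from sklearn.linear_model import Lasso

def hrf(t):
    """Crude double-gamma haemodynamic response shape (an assumption here)."""
    return t**5 * np.exp(-t) / 120.0 - t**8 * np.exp(-t) / (10.0 * 40320.0)

n, tr = 200, 2.0
h = hrf(np.arange(0, 30, tr))             # HRF sampled at the TR, in seconds
col = np.zeros(n); col[:len(h)] = h
H = toeplitz(col, np.zeros(n))            # convolution (design) matrix

s_true = np.zeros(n); s_true[[30, 90, 141]] = 1.0   # hidden neuronal events
y = H @ s_true + 0.02 * np.random.default_rng(6).normal(size=n)

# The L1 penalty promotes a sparse activity estimate; alpha chosen by eye.
est = Lasso(alpha=1e-4, positive=True, max_iter=50000).fit(H, y).coef_
print(np.flatnonzero(est > 0.1))   # indices near 30, 90, 141, no timing prior
```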

  8. Independent component analysis of edge information for face recognition

    CERN Document Server

    Karande, Kailash Jagannath

    2013-01-01

    The book presents research work on face recognition using edge information as features, combined with ICA algorithms. The independent components are extracted from the edge information and used with classifiers to match facial images for recognition. In their study, the authors explore Canny and LoG edge detectors as standard edge detection methods. An oriented Laplacian of Gaussian (OLOG) method is explored to extract edge information at different orientations of the Laplacian pyramid. A multiscale wavelet model for edge detection is also proposed.
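
    A minimal sketch of the overall pipeline, assuming Sobel gradient magnitude as a stand-in edge detector (the book uses Canny, LoG and oriented-LoG variants): edge maps are flattened, FastICA extracts independent components, and a nearest-neighbour classifier matches faces. The toy images, subject counts and train/test split are placeholders.

```python
import numpy as np
from scipy import ndimage
from sklearn.decomposition import FastICA
from sklearn.neighbors import KNeighborsClassifier

def edge_features(images):
    """Sobel gradient magnitude as a simple edge detector; returns one
    flattened edge map per image."""
    feats = [np.hypot(ndimage.sobel(im, 0), ndimage.sobel(im, 1)).ravel()
             for im in images]
    return np.array(feats)

rng = np.random.default_rng(7)
faces = rng.random((40, 16, 16))          # toy stand-ins for face images
labels = np.repeat(np.arange(10), 4)      # 10 subjects x 4 images each

X = edge_features(faces)
S = FastICA(n_components=12, random_state=0, max_iter=1000).fit_transform(X)
clf = KNeighborsClassifier(1).fit(S[::2], labels[::2])    # half for training
print((clf.predict(S[1::2]) == labels[1::2]).mean())      # half for testing
```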

  9. Research on Rural Consumer Demand in Hebei Province Based on Principal Component Analysis

    OpenAIRE

    MA Hui-zi; Zhao, Bang-hong; Xuan, Yong-sheng

    2011-01-01

    By selecting the time series data on the influencing factors of rural consumer demand in Hebei Province from 2000 to 2010, this paper uses the principal component analysis method of multivariate econometric statistical analysis, constructs the principal components of consumer demand in Hebei Province, conducts regression of the dependent variable of per capita consumer spending in Hebei Province on the principal components of consumer demand so as to obtain a principal component regression, and t...

  10. Differentially Variable Component Analysis (dVCA): Identifying Multiple Evoked Components using Trial-to-Trial Variability

    Science.gov (United States)

    Knuth, Kevin H.; Shah, Ankoor S.; Truccolo, Wilson; Ding, Ming-Zhou; Bressler, Steven L.; Schroeder, Charles E.

    2003-01-01

    Electric potentials and magnetic fields generated by ensembles of synchronously active neurons in response to external stimuli provide information essential to understanding the processes underlying cognitive and sensorimotor activity. Interpreting recordings of these potentials and fields is difficult, as each detector records signals simultaneously generated by various regions throughout the brain. We introduce the differentially Variable Component Analysis (dVCA) algorithm, which relies on trial-to-trial variability in response amplitude and latency to identify multiple components. Using simulations we evaluate the importance of response variability to component identification, the robustness of dVCA to noise, and its ability to characterize single-trial data. Finally, we evaluate the technique using visually evoked field potentials recorded at incremental depths across the layers of cortical area V1, in an awake, behaving macaque monkey.

  11. Classification and analysis of emission-line galaxies using mean field independent component analysis

    CERN Document Server

    Allen, James T; Richardson, Chris T; Ferland, Gary J; Baldwin, Jack A

    2013-01-01

    We present an analysis of the optical spectra of narrow emission-line galaxies, based on mean field independent component analysis (MFICA). Samples of galaxies were drawn from the Sloan Digital Sky Survey (SDSS) and used to generate compact sets of 'continuum' and 'emission-line' component spectra. These components can be linearly combined to reconstruct the observed spectra of a wider sample of galaxies. Only 10 components - five continuum and five emission line - are required to produce accurate reconstructions of essentially all narrow emission-line galaxies; the median absolute deviations of the reconstructed emission-line fluxes, given the signal-to-noise ratio (S/N) of the observed spectra, are 1.2-1.8 sigma for the strong lines. After applying the MFICA components to a large sample of SDSS galaxies we identify the regions of parameter space that correspond to pure star formation and pure active galactic nucleus (AGN) emission-line spectra, and produce high S/N reconstructions of these spectra. The phys...

  12. Coordination of care in the Chinese health care systems: a gap analysis of service delivery from a provider perspective.

    Science.gov (United States)

    Wang, Xin; Birch, Stephen; Zhu, Weiming; Ma, Huifen; Embrett, Mark; Meng, Qingyue

    2016-10-12

    Increases in health care utilization and costs, resulting from the rising prevalence of chronic conditions related to the aging population, are exacerbated by the high level of fragmentation that characterizes health care systems in China. There have been several pilot studies in China aimed at system-level care coordination and its impact on the full integration of the health care system, but little is known about their practical effects. Huangzhong County is one of the pilot study sites that introduced organizational integration (a dimension of integrated care) among health care institutions as a means to improve system-level care coordination. The purposes of this study are to examine the effect of organizational integration on system-level care coordination and to identify factors influencing care coordination, and hence full integration, of county health care systems in rural China. We chose Huangzhong and Hualong counties in Qinghai province as study sites, with only Huangzhong having implemented organizational integration. A mixed methods approach was used, based on (1) document analysis and expert consultation to develop Best Practice intervention packages; (2) doctor questionnaires identifying care coordination from the perspective of service provision, where service provision was measured with gap, overlap and over-provision indices by comparing observed performance with Best Practice; and (3) semi-structured interviews with the Chief of Medicine in each institution to identify barriers to system-level care coordination. Twenty-nine institutions (11 at county level, 6 at township level and 12 at village level) were selected, producing surveys of 19 schizophrenia doctors, 23 diabetes doctors and 29 Chiefs of Medicine. There were more care discontinuities for both diabetes and schizophrenia in Huangzhong than in Hualong. Overall, all three index scores (measuring service gaps, overlaps and over-provision) showed similar tendencies for the two conditions.

  13. Hamiltonian Analysis of 3-Dimensional Connection Dynamics in Bondi-like Coordinates

    Science.gov (United States)

    Huang, Chao-Guang; Kong, Shi-Bei

    2017-08-01

    The Hamiltonian analysis for a 3-dimensional connection dynamics of so(1,2), spanned by {L_{-+}, L_{-2}, L_{+2}} instead of {L_{01}, L_{02}, L_{12}}, is first conducted in a Bondi-like coordinate system. The symmetry of the system is clearly presented. A null coframe with 3 independent variables and 9 connection coefficients are treated as basic configuration variables. All constraints and their consistency conditions, the solutions of the Lagrange multipliers as well as the equations of motion are presented. There is no physical degree of freedom in the system. The Bañados-Teitelboim-Zanelli (BTZ) spacetime is discussed as an example to check the analysis. Unlike the ADM formalism, where only non-degenerate geometries on slices are dealt with, and the Ashtekar formalism, where non-degenerate geometries on slices are the main concern though degenerate geometries may be studied as well, in the present formalism the geometries on the slices are always degenerate even though the geometries for the spacetime are not. Supported by National Natural Science Foundation of China under Grant Nos. 11275207 and 11690022.

  14. Goal scoring in soccer: A polar coordinate analysis of motor skills used by Lionel Messi

    Directory of Open Access Journals (Sweden)

    Marta eCastañer

    2016-05-01

    Full Text Available Soccer research has traditionally focused on technical and tactical aspects of team play, but few studies have analyzed motor skills in individual actions, such as goal scoring. The objective of this study was to investigate how Lionel Messi, one of the world's top soccer players, uses his motor skills and laterality in individual attacking actions resulting in a goal. We analyzed 103 goals scored by Messi over a decade in three competitions: La Liga (n = 74), Copa del Rey (n = 8), and the UEFA Champions League (n = 21). We used an ad hoc observation instrument (OSMOS-soccer player) comprising 10 criteria and 50 categories; polar coordinate analysis, a powerful data reduction technique, revealed significant associations for body part and orientation, foot contact zone, turn direction, and locomotion. No significant associations were observed for pitch area or interaction with opponents. Our analysis confirms significant associations between different aspects of motor skill use by Messi immediately before scoring, namely use of the lower limbs, foot contact zones, turn direction, use of the wings, and orientation of the body to move towards the goal. Studies of motor skills in soccer could shed light on the qualities that make certain players unique.

  15. A Component Analysis of Schedule Thinning during Functional Communication Training

    Science.gov (United States)

    Betz, Alison M.; Fisher, Wayne W.; Roane, Henry S.; Mintz, Joslyn C.; Owen, Todd M.

    2013-01-01

    One limitation of functional communication training (FCT) is that individuals may request reinforcement via the functional communication response (FCR) at exceedingly high rates. Multiple schedules with alternating periods of reinforcement and extinction of the FCR combined with gradually lengthening the extinction-component interval can…

  16. Pyrolysis-thermogravimetric analysis of tyres and tyre components

    Energy Technology Data Exchange (ETDEWEB)

    Williams, P.T.; Besler, S. [University of Leeds, Leeds (United Kingdom). Dept. of Fuel and Energy

    1995-09-01

    Three samples of tyre of known rubber composition were pyrolysed in a thermogravimetric analyser under nitrogen at heating rates from 5 to 80 K min⁻¹. In addition, the major rubber components of the tyres - styrene-butadiene rubber (SBR), natural rubber (NR) and polybutadiene rubber (BR) - were pyrolysed separately under the same conditions. The kinetic parameters were calculated. An increase in heating rate shifted thermal degradation to higher temperatures. The tyre samples showed two distinct areas of weight loss, representing a lower and a higher temperature of decomposition. The char yield from the tyres, 32-42 wt%, depended on tyre composition. The char yields from the pure rubber components were all <4 wt%, suggesting that the carbon black component of the tyre is the main source of char. SBR decomposed mainly at higher temperatures of pyrolysis, NR at lower temperatures, and BR at both higher and lower temperatures. The thermal decomposition of the tyres could be related to their composition. The mechanism of the thermal degradation of tyres and tyre rubber components is reviewed. 29 refs., 4 figs., 6 tabs.

  17. Analysis of soft rock mineral components and roadway failure mechanism

    Institute of Scientific and Technical Information of China (English)

    CHEN Jie

    2001-01-01

    The mineral components and microstructure of soft rock sampled from the roadway floor in the Xiagou pit are determined by X-ray diffraction and scanning electron microscopy. Combined with tests of the expansion and water-softening properties of the soft rock, the roadway failure mechanism is analyzed, and a reasonable repair supporting principle for the roadway is put forward.

  18. Analysis of Strongly Connected Components (SCC) Using Dynamic Graph Representation

    Directory of Open Access Journals (Sweden)

    Saleh Alshomrani

    2012-07-01

    Full Text Available Graphs are the basis of many real-life applications. In our research we compare and analyse strongly connected components algorithms using general techniques for efficient implementation. This experimental procedure is exemplified in two contexts: (1) comparison of strongly connected components algorithms, and (2) analysis of a particular algorithm. Such a practice will enable Java programmers, especially those who work on such algorithms, to use them efficiently. In this paper we describe the implementation, testing and benchmarking of the algorithms to measure their performance. During experimentation we found some interesting results, such as the Cheriyan-Mehlhorn-Gabow algorithm outperforming Tarjan's algorithm.
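
    For reference, a minimal iterative implementation of Tarjan's algorithm, one of the SCC algorithms benchmarked above (written in Python rather than the paper's Java, and avoiding deep recursion):

```python
def tarjan_scc(graph):
    """Tarjan's single-pass strongly-connected-components algorithm,
    written iteratively so large graphs do not hit the recursion limit.
    graph: dict mapping a node to a list of its successors."""
    index, low, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def strongconnect(root):
        index[root] = low[root] = counter[0]; counter[0] += 1
        stack.append(root); on_stack.add(root)
        work = [(root, iter(graph.get(root, ())))]
        while work:
            node, successors = work[-1]
            pushed_child = False
            for w in successors:
                if w not in index:                       # tree edge: descend
                    index[w] = low[w] = counter[0]; counter[0] += 1
                    stack.append(w); on_stack.add(w)
                    work.append((w, iter(graph.get(w, ()))))
                    pushed_child = True
                    break
                elif w in on_stack:                      # back edge
                    low[node] = min(low[node], index[w])
            if pushed_child:
                continue
            work.pop()                                   # node fully explored
            if work:
                parent = work[-1][0]
                low[parent] = min(low[parent], low[node])
            if low[node] == index[node]:                 # node is an SCC root
                component = []
                while True:
                    w = stack.pop(); on_stack.discard(w)
                    component.append(w)
                    if w == node:
                        break
                sccs.append(component)

    for v in list(graph):
        if v not in index:
            strongconnect(v)
    return sccs

# two cycles: {1, 2, 3} and {4, 5}
print(tarjan_scc({1: [2], 2: [3], 3: [1, 4], 4: [5], 5: [4]}))
```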

  19. Motor Coordination in Autism Spectrum Disorders: A Synthesis and Meta-Analysis

    Science.gov (United States)

    Fournier, Kimberly A.; Hass, Chris J.; Naik, Sagar K.; Lodha, Neha; Cauraugh, James H.

    2010-01-01

    Are motor coordination deficits an underlying cardinal feature of Autism Spectrum Disorders (ASD)? Database searches identified 83 ASD studies focused on motor coordination, arm movements, gait, or postural stability deficits. Data extraction involved between-group comparisons for ASD and typically developing controls (N = 51). Rigorous…

  20. A post-modification approach to independent component analysis for resolution of overlapping GC/MS signals: from independent components to chemical components

    Institute of Scientific and Technical Information of China (English)

    WANG Wei; CAI WenSheng; SHAO XueGuang

    2007-01-01

    Independent component analysis (ICA) has demonstrated its power to extract mass spectra from overlapping GC/MS signals. However, a problem remains: mass spectra with negative peaks at some m/z values can be obtained in the resolved results when there are overlapping peaks in the mass spectra of a mixture. Based on a detailed theoretical analysis of the preconditions for ICA and the non-negative property of GC/MS signals, a post-modification based on chemical knowledge (PMBK) strategy is proposed to solve this problem. Using both simulated and experimental GC/MS signals, it is shown that the PMBK strategy can improve the resolution effectively.

  1. The IAEA Coordinated Research Program on HTGR Uncertainty Analysis: Phase I Status and Initial Results

    Energy Technology Data Exchange (ETDEWEB)

    Strydom, Gerhard; Bostelmann, Friederike; Ivanov, Kostadin

    2014-10-01

    required confidence level. In order to address uncertainty propagation in analysis and methods in the HTGR community, the IAEA initiated a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modelling (UAM) that officially started in 2013. Although this project focuses specifically on the peculiarities of HTGR designs and their simulation requirements, many lessons can be learned from the LWR community and the significant progress already made towards a consistent uncertainty analysis methodology. In the case of LWRs, the NRC amended 10 CFR 50.46 in 1988 to allow best-estimate (plus uncertainties) calculations of emergency core cooling system performance. The Nuclear Energy Agency (NEA) of the Organization for Economic Co-operation and Development (OECD) also established an Expert Group on "Uncertainty Analysis in Modelling", which finally led to the definition of the "Benchmark for Uncertainty Analysis in Modelling (UAM) for Design, Operation and Safety Analysis of LWRs". The CRP on HTGR UAM will follow as far as possible the on-going OECD Light Water Reactor UAM benchmark activity.

  2. Preventive Replacement Decisions for Dragline Components Using Reliability Analysis

    Directory of Open Access Journals (Sweden)

    Nuray Demirel

    2016-05-01

    Full Text Available Reliability-based maintenance policies allow qualitative and quantitative evaluation of system downtimes by revealing the main causes of breakdowns and identifying the preventive activities required against failures. Application of preventive maintenance is especially important for mining machinery, since production is strongly affected by machinery breakdowns. Overburden stripping is an integral part of surface coal mine production. Draglines are extensively utilized in overburden stripping and carry out earthmoving with bucket capacities up to 168 m³. The massive structure and operational severity of these machines increase the importance of performance awareness for individual working components. Research on draglines is rare in the literature, and maintenance studies for these earthmovers have generally been ignored. On this basis, this paper offers a comprehensive reliability assessment of two draglines currently operating in the Tunçbilek coal mine and discusses preventive replacement of the draglines' wear-out components in light of cost factors.

  3. Dorsal Hand Vein Biometry by Independent Component Analysis

    Directory of Open Access Journals (Sweden)

    V.H.Yadav

    2012-07-01

    Full Text Available Biometric authentication provides a highly secure and reliable approach for use in security access systems. Personal identification based on hand vein patterns has been developed in recent years. The pattern of blood veins in the hand is unique to every individual, even among identical twins, and it does not change over time. These properties of uniqueness, stability and strong immunity to forgery make the vein pattern a potentially good biometric trait offering greater security and reliability for personal identification. In this study, we used the BOSPHORUS hand vein database, captured under a source of NIR infrared radiation. For feature extraction we applied the appearance-based ICA method, which produces independent components. To control the number of independent components, we preprocessed the data with PCA before applying ICA, which gave good experimental results.

  4. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis

    Science.gov (United States)

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Due to the advancement in sensor technology, the growing large medical image data have the ability to visualize the anatomical changes in biological tissues. As a consequence, the medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. But in the meantime, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing the irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which in practice are difficult to determine. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both the liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383

  6. Analysis of Femtosecond Timing Noise and Stability in Microwave Components

    Energy Technology Data Exchange (ETDEWEB)

    Whalen, Michael R.; /Stevens Tech. /SLAC

    2011-06-22

    To probe chemical dynamics, X-ray pump-probe experiments trigger a change in a sample with an optical laser pulse, followed by an X-ray probe. At the Linac Coherent Light Source, LCLS, timing differences between the optical pulse and X-ray probe have been observed with an accuracy as low as 50 femtoseconds. This sets a lower bound on the number of frames one can arrange over a time scale to recreate a 'movie' of the chemical reaction. The timing system is based on phase measurements from signals corresponding to the two laser pulses; these measurements are done using a double-balanced mixer for detection. To increase the accuracy of the system, this paper studies parameters affecting phase detection systems based on mixers, such as signal input power, noise levels and temperature drift, and the effect these parameters have on components such as the mixers, splitters, amplifiers, and phase shifters. Noise data taken with a spectrum analyzer show that splitters based on ferrite cores perform with less noise than strip-line splitters. The data also show that noise in specific mixers does not correspond with the changes in sensitivity per input power level. Temperature drift is seen to exist on a scale between 1 and 27 fs/°C for all of the components tested. Results show that components using more metallic conductor tend to exhibit more noise as well as more temperature drift. The scale of these effects is large enough that specific care should be given when choosing components and designing the housing of high-precision microwave mixing systems for use in detection systems such as the LCLS. With these improvements, the timing accuracy can be improved beyond what is currently possible.

  7. COMPONENTS OF THE UNEMPLOYMENT ANALYSIS IN CONTEMPORARY ECONOMIES

    Directory of Open Access Journals (Sweden)

    Ion Enea-SMARANDACHE

    2010-03-01

    Full Text Available Unemployment is a permanent phenomenon in the majority of the world's countries, whether advanced or developing economies, and its implications and consequences are complex, so that, practically, the fight against unemployment becomes a fundamental objective of economic policy. In this context, the authors propose to set out the essential components of unemployment analysis with the aim of identifying the measures and instruments to counteract it.

  8. Distribution of the residual roots in principal components analysis.

    Directory of Open Access Journals (Sweden)

    A. M. Kshirsagar

    1964-10-01

    Full Text Available The distribution of the latent roots of the covariance matrix of normal variables, when a hypothetical linear function of the variables is eliminated, is derived in this paper. The relation between the original roots and the residual roots after the elimination is also derived by an analytical method. An exact test for the goodness of fit of a single nonisotropic hypothetical principal component, using the residual roots, is then obtained.

  10. Analysis of co-ordination between breathing and exercise rhythms in man.

    Science.gov (United States)

    Bernasconi, P; Kohl, J

    1993-11-01

    1. The purpose of the present study was to analyse the incidence and type of co-ordination between breathing rhythm and leg movements during running, to assess the effect of co-ordination on running efficiency, and to compare the results with those found during cycling. 2. The experiments were carried out on thirty-four untrained volunteers exercising at two work loads (60 and 80% of the subject's physical work capacity 170) on a treadmill. In addition, nineteen of the subjects exercised at the same two work loads on a bicycle ergometer. The subjects ran at both work loads in three different modes in randomized order: with normal arm movements, without arm movements, and with breathing paced by an acoustic signal triggered by the leg movement. 3. Respiratory variables, oxygen uptake and leg movements were continuously recorded and evaluated on-line. The degree of co-ordination was expressed as the percentage of inspirations and/or expirations starting in the same phase of the step or pedalling cycle. 4. The average degree of co-ordination was higher during running (up to 40%) than during cycling (about 20%) at both work loads. The difference in the degree of co-ordination between running and cycling is probably not due to the lack of arm movements during cycling, since the degree of co-ordination during running with and without arm movements was the same. 5. The degree of co-ordination during running increased slightly but not significantly with increasing work load and could be increased significantly by paced breathing. 6. The co-ordination between breathing and running rhythms occurred in three different patterns: (a) breathing was co-ordinated all the time with the same phase of the step, (b) co-ordination switched suddenly from one phase of the step to another, and (c) co-ordination alternated between the right and the left leg movement. During cycling the pattern described in (a) occurred almost exclusively. 7. During

  11. Imaging lipid distributions in model monolayers by ToF-SIMS with selectively deuterated components and principal components analysis

    Energy Technology Data Exchange (ETDEWEB)

    Biesinger, Mark C. [Surface Science Western, University of Western Ontario, London, Ont., N6A 5B7 (Canada)]. E-mail: biesingr@uwo.ca; Miller, David J. [Surface Science Western, University of Western Ontario, London, Ont., N6A 5B7 (Canada); Department of Chemistry, University of Western Ontario, London, Ont., N6A 5B7 (Canada); Harbottle, Robert R. [Department of Chemistry, University of Western Ontario, London, Ont., N6A 5B7 (Canada); Possmayer, Fred [Department of Obstetrics and Gynecology, University of Western Ontario, London, Ont., N6A 5B7 (Canada); McIntyre, N. Stewart [Surface Science Western, University of Western Ontario, London, Ont., N6A 5B7 (Canada); Department of Chemistry, University of Western Ontario, London, Ont., N6A 5B7 (Canada); Petersen, Nils O. [National Institute for Nanotechnology and Department of Chemistry, University of Alberta W6-017 ECERF Bldg, 9107-116th Street, Edmonton, Alta., T6G 2V4 (Canada)

    2006-07-30

    Time of flight secondary ion mass spectrometry (ToF-SIMS) provides the capability to image the distribution of molecular ions and their associated fragments emitted from monolayer films. ToF-SIMS can be applied to the analysis of monolayers of complex lipid mixtures that act as a model for understanding the organization of cell membranes into solid-like domains called lipid rafts. The ability to determine the molecular distribution of lipids in monolayer films using ToF-SIMS is also important in studies of the function of pulmonary surfactant. One of the limitations of applying ToF-SIMS to the complex lipid mixtures found in biological systems arises from the similarity of the mass fragments emitted from the components of the lipid mixture. The use of selectively deuterated components in a mixture overcomes this limitation and results in an unambiguous assignment of specific lipids to particular surface domains. Deuterium labeling can identify specific lipids in a multi-component mixture through the deuteration of a single lipid or the addition of more than one selectively deuterated component. The incorporation of deuterium into the lipid chains does not alter the miscibility or phase behavior of these systems. The use of deuterium labeling to identify lipids and determine their distribution in monolayer films is demonstrated using two biological systems. Principal components analysis (PCA) is used to further analyze these deuterated systems, checking the origin of the various mass fragments present.

  12. Analysis and Classification of Acoustic Emission Signals During Wood Drying Using the Principal Component Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Ho Yang [Korea Research Institute of Standards and Science, Daejeon (Korea, Republic of); Kim, Ki Bok [Chungnam National University, Daejeon (Korea, Republic of)

    2003-06-15

    In this study, acoustic emission (AE) signals due to surface cracking and moisture movement in flat-sawn boards of oak (Quercus variabilis) during drying under ambient conditions were analyzed and classified using principal component analysis. The AE signals corresponding to surface cracking were higher in peak amplitude and peak frequency, and shorter in rise time, than those corresponding to moisture movement. To reduce the multicollinearity among AE features and to extract the significant AE parameters, correlation analysis was performed. Over 99% of the variance of the AE parameters could be accounted for by the first four principal components. The classification feasibility and success rate were investigated for two statistical classifiers, one with six independent variables (AE parameters) and one with six principal components. The statistical classifier based on AE parameters achieved a success rate of 70.0%, while the classifier based on principal components achieved 87.5%, considerably higher than that of the AE-parameter classifier.

  13. A new modeling of loading margin and its sensitivities using rectangular voltage coordinates in voltage stability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Vander Menengoy da; Rosa, Arlei Lucas de Sousa [Department of Electrical Engineering, Federal University of Juiz de Fora, Campus Universitario - Bairro Martelos, 36036-330 Juiz de Fora - MG (Brazil); Guedes, Magda Rocha [Federal Center of Technologic Education of Minas Gerais - CEFET, Rua Jose Peres, 558 36700-000 Leopoldina - MG (Brazil); Cantarino, Marcelo [Centrais Eletricas Brasileiras S.A - ELETROBRAS, Av. Rio Branco, 53, Centro, 14 andar, 20090-004 Rio de Janeiro - RJ (Brazil)

    2010-05-15

    This paper presents new mathematical models to compute the loading margin, as well as to perform the sensitivity analysis of loading margin with respect to different electric system parameters. The innovative idea consists of evaluating the performance of these methods when the power flow equations are expressed with the voltages in rectangular coordinates. The objective is to establish a comparative process with the conventional models expressed in terms of power flow equations with the voltages in polar coordinates. IEEE test system and a South-Southeastern Brazilian network are used in the simulations. (author)

  14. Models and applications for space weather forecasting and analysis at the Community Coordinated Modeling Center.

    Science.gov (United States)

    Kuznetsova, Maria

    The Community Coordinated Modeling Center (CCMC, http://ccmc.gsfc.nasa.gov) was established at the dawn of the new millennium as a long-term, flexible solution to the problem of transitioning progress in space environment modeling into operational space weather forecasting. CCMC hosts an expanding collection of state-of-the-art space weather models developed by the international space science community. Over the years the CCMC has acquired unique experience in preparing complex models and model chains for operational environments and in developing and maintaining custom displays and powerful web-based systems and tools ready to be used by researchers, space weather service providers and decision makers. In support of the space weather needs of NASA users, CCMC is developing highly tailored applications and services that target specific orbits or locations in space, and is partnering with NASA mission specialists on linking CCMC space environment modeling with impacts on biological and technological systems in space. Confidence assessment of model predictions is an essential element of space environment modeling. CCMC facilitates interaction between model owners and users in defining physical parameters and metrics formats relevant to specific applications, and leads community efforts to quantify models' ability to simulate and predict space environment events. Interactive on-line model validation systems developed at CCMC make validation a seamless part of the model development cycle. The talk will showcase innovative solutions for space weather research, validation, anomaly analysis and forecasting, and review on-going community-wide model validation initiatives enabled by CCMC applications.

  15. Social Activity and Cognitive Functioning Over Time: A Coordinated Analysis of Four Longitudinal Studies

    Directory of Open Access Journals (Sweden)

    Cassandra L. Brown

    2012-01-01

    Full Text Available Social activity is typically viewed as part of an engaged lifestyle that may help mitigate the deleterious effects of advanced age on cognitive function. As such, social activity has been examined in relation to cognitive abilities later in life. However, longitudinal evidence for this hypothesis thus far remains inconclusive. The current study sought to clarify the relationship between social activity and cognitive function over time using a coordinated data analysis approach across four longitudinal studies. A series of multilevel growth models with social activity included as a covariate is presented. Four domains of cognitive function were assessed: reasoning, memory, fluency, and semantic knowledge. Results suggest that baseline social activity is related to some, but not all, cognitive functions. Baseline social activity levels failed to predict rate of decline in most cognitive abilities. Changes in social activity were not consistently associated with cognitive functioning. Our findings do not provide consistent evidence that changes in social activity correspond to immediate benefits in cognitive functioning, except perhaps for verbal fluency.

  16. Mental health network governance and coordination: comparative analysis across Canadian regions

    Directory of Open Access Journals (Sweden)

    Mary E. Wiktorowicz

    2010-10-01

    Full Text Available Objective: Modes of governance were compared in ten local mental health networks in diverse contexts (rural/urban and regionalized/non-regionalized) to clarify the governance processes that foster inter-organizational collaboration and the conditions that support them. Methods: Case studies of ten local mental health networks were developed using qualitative methods of document review, semi-structured interviews and focus groups that incorporated provincial policy, network and organizational levels of analysis. Results: Mental health networks adopted either a 'corporate structure', 'mutual adjustment' or an 'alliance' governance model. A 'corporate structure' supported by regionalization offered the most direct means for local governance to attain inter-organizational collaboration. The likelihood that networks with an 'alliance' model developed coordination processes depended on the presence of the following conditions: a moderate number of organizations, goal consensus and trust among the organizations, and network-level competencies. In the small and mid-sized urban networks where these conditions were met, their 'alliance' realized the inter-organizational collaboration sought. In the large urban and rural networks where these conditions were not met, externally brokered forms of network governance were required to support 'alliance'-based models. Discussion: In metropolitan and rural networks with such shared forms of network governance as an 'alliance' or 'voluntary mutual adjustment', external mediation by a regional or provincial authority was an important lever to foster inter-organizational collaboration.

  18. Global Analysis of miRNA Gene Clusters and Gene Families Reveals Dynamic and Coordinated Expression

    Directory of Open Access Journals (Sweden)

    Li Guo

    2014-01-01

    Full Text Available To further understand the potential expression relationships of miRNAs in miRNA gene clusters and gene families, a global analysis was performed in 4 paired tumor (breast cancer) and adjacent normal tissue samples using deep sequencing datasets. The compositions of miRNA gene clusters and families are not random, and clustered and homologous miRNAs may have close relationships with overlapping miRNA species. Members of a given miRNA group varied in expression level, and some showed large expression divergence. Despite this dynamic expression and individual differences, these miRNAs always showed consistent or similar deregulation patterns. The consistent deregulation may contribute to dynamic and coordinated interaction between different miRNAs in the regulatory network. Further, we found that those clustered or homologous miRNAs that were also identified as sense and antisense miRNAs showed larger expression divergence. miRNA gene clusters and families play important biological roles, and their specific distribution and expression patterns further enrich and reinforce the flexible and robust regulatory network.

  19. ANALYSIS OF BREATHER STATE IN THIN BAR BY USING COLLECTIVE COORDINATE

    Institute of Scientific and Technical Information of China (English)

    ZHAO Guang-hui; ZHANG Nian-mei; YANG Gui-tong

    2006-01-01

    Considering the Peierls-Nabarro (P-N) force and the viscous effect of the material, the dynamic behavior of a one-dimensional infinite metallic thin bar subjected to an axially periodic load is investigated. The governing equation, a sine-Gordon-type equation, is derived. By means of collective coordinates, the partial differential equation can be reduced to an ordinary differential dynamical system describing the motion of the breather. Nonlinear dynamic analysis shows that the amplitude and frequency of the P-N force influence the positions of the hyperbolic saddle points and change the subharmonic bifurcation point, while the path to chaos through odd subharmonic bifurcations remains. Several examples are given to indicate the effects of the amplitude and period of the P-N force on the dynamical response of the bar. The simulations show that the chaotic region is half-infinite and that it grows as the amplitude of the P-N force increases; the frequency of the P-N force has a similar influence on the system.

  20. Abstract Interfaces for Data AnalysisComponent Architecture for Data Analysis Tools

    Institute of Scientific and Technical Information of China (English)

    G. Barrand; P. Binko; et al.

    2001-01-01

    The fast turnover of software technologies, in particular in the domain of interactivity (covering user interfaces and visualisation), makes it difficult for a small group of people to produce complete and polished software tools before the underlying technologies make them obsolete. At the HepVis '99 workshop, a working group was formed to improve the production of software tools for data analysis in HENP. Besides promoting a distributed development organisation, one goal of the group is to systematically design a set of abstract interfaces based on modern OO analysis and OO design techniques. An initial domain analysis has come up with several categories (components) found in typical data analysis tools: Histograms, Ntuples, Functions, Vectors, Fitter, Plotter, Analyzer and Controller. Special emphasis was put on reducing the couplings between the categories to a minimum, thus optimising re-use and maintainability of each component individually. The interfaces have been defined in Java and C++ and implementations exist in the form of libraries and tools using C++ (Anaphe/Lizard, OpenScientist) and Java (Java Analysis Studio). A special implementation aims at accessing the Java libraries (through their abstract interfaces) from C++. This paper gives an overview of the architecture and design of the various components for data analysis as discussed in AIDA.
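
    To illustrate the kind of decoupling described above, a loose Python sketch of abstract component interfaces; the names below are illustrative simplifications, not the actual AIDA definitions, which were specified in Java and C++.

      # Illustrative abstract interfaces for decoupled analysis-tool
      # components (hypothetical simplification, not the real AIDA API).
      from abc import ABC, abstractmethod

      class IHistogram1D(ABC):
          @abstractmethod
          def fill(self, x: float, weight: float = 1.0) -> None: ...

      class IFitter(ABC):
          @abstractmethod
          def fit(self, histogram: IHistogram1D, function_name: str): ...

      class IPlotter(ABC):
          @abstractmethod
          def plot(self, histogram: IHistogram1D) -> None: ...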

  1. Quantification of not-dipolar components of atrial depolarization by principal component analysis of the P-wave

    Directory of Open Access Journals (Sweden)

    Federica Censi

    2012-06-01

    Full Text Available BACKGROUND: Principal component analysis (PCA) of the T-wave has been shown to quantify the dipolar and not-dipolar components of ventricular activation, the latter reflecting repolarization heterogeneity. Accordingly, PCA of the P-wave could help in analyzing the heterogeneous propagation of sinus impulses in the atria, which seems to predispose to fibrillation. AIM: The aim of this study is to perform PCA of the P-wave in patients prone to atrial fibrillation (AF). METHODS: PCA is performed on P-waves extracted by an averaging technique from ECG recordings acquired using a 32-lead mapping system (2048 Hz, 24 bit, 0-400 Hz bandwidth). We extracted PCA parameters related to the dipolar and not-dipolar components of the P-wave using the first 3 eigenvalues and the cumulative percentage of variance explained by the first 3 PCs (explained variance, EV). RESULTS AND CONCLUSIONS: We found that the EV associated with the low-risk patients is higher than that associated with the high-risk patients and that, correspondingly, the first eigenvalue is significantly lower while the second one is significantly higher in the high-risk patients with respect to the low-risk group. Factor loadings showed that, on average, all leads contribute to the first principal component.
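
    The EV statistic used above can be computed directly from the eigenvalues of the lead-space covariance matrix. A minimal sketch with placeholder data (a hypothetical 32-lead by 400-sample averaged P-wave matrix, not the study's recordings):

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.standard_normal((32, 400))       # placeholder P-wave matrix

      Xc = X - X.mean(axis=1, keepdims=True)   # center each lead
      cov = Xc @ Xc.T / Xc.shape[1]            # lead-space covariance
      eigvals = np.linalg.eigvalsh(cov)[::-1]  # descending eigenvalues

      # cumulative percentage of variance explained by the first 3 PCs
      ev = 100.0 * eigvals[:3].sum() / eigvals.sum()
      print(f"first 3 eigenvalues: {eigvals[:3]}, EV = {ev:.1f}%")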

  2. Isolation and analysis of odorous components in swine manure.

    Science.gov (United States)

    Yasuhara, A; Fuwa, K

    1983-12-23

    Systematic procedures are described for the isolation and extraction of odorous components in swine faeces, urine and rotten mixtures of swine faeces and urine. Samples were frozen and subjected to vacuum distillation in the frozen state. The distillate was continuously extracted with diethyl ether. The residue was extracted with diethyl ether and the extract was subjected to vacuum distillation. The former extract and the latter distillate were combined and concentrated. Recovery by these procedures was considered. Odorous compounds isolated were analyzed by gas chromatography and gas chromatography-mass spectrometry.

  3. Mass analysis of the components separated from printed circuit boards

    Directory of Open Access Journals (Sweden)

    Hana Charvátová

    2010-06-01

    Full Text Available Methods for the effective and ecological recycling of printed circuit boards (PCBs) are being sought all over the world at this time. The material composition and temperature properties of a PCB must be known in order to design an optimal recycling technology. For this purpose we analyzed the weight ratios of the electronic components mounted on selected kinds of PCBs, and we then formulated a mathematical model of the temperature field in a PCB during a grinding process in which the metal layers are separated from the plastic elements. We present the obtained results in this paper.

  4. Nonlinear Denoising and Analysis of Neuroimages With Kernel Principal Component Analysis and Pre-Image Estimation

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Abrahamsen, Trine Julie; Madsen, Kristoffer Hougaard

    2012-01-01

    We investigate the use of kernel principal component analysis (PCA) and the inverse problem known as pre-image estimation in neuroimaging: i) We explore kernel PCA and pre-image estimation as a means for image denoising as part of the image preprocessing pipeline. Evaluation of the denoising...... base these illustrations on two fMRI BOLD data sets — one from a simple finger tapping experiment and the other from an experiment on object recognition in the ventral temporal lobe....
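
    A minimal sketch of the denoising step with scikit-learn, whose KernelPCA supports a built-in (ridge-regression based) pre-image approximation; the data matrix below is a random placeholder, not the fMRI sets used in the paper.

      import numpy as np
      from sklearn.decomposition import KernelPCA

      rng = np.random.default_rng(0)
      X = rng.standard_normal((100, 500))     # placeholder image matrix

      kpca = KernelPCA(n_components=10, kernel="rbf", gamma=1e-3,
                       fit_inverse_transform=True)  # enables pre-images
      Z = kpca.fit_transform(X)               # project to kernel PC space
      X_denoised = kpca.inverse_transform(Z)  # approximate pre-images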

  5. IAEA Coordinated Research Project on HTGR Reactor Physics, Thermal-hydraulics and Depletion Uncertainty Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Strydom, Gerhard [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bostelmann, F. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of HTGR design and safety features with reliable high fidelity physics models and robust, efficient, and accurate codes. The predictive capability of coupled neutronics/thermal-hydraulics and depletion simulations for reactor design and safety analysis can be assessed with sensitivity analysis (SA) and uncertainty analysis (UA) methods. Uncertainty originates from errors in physical data, manufacturing uncertainties, modelling and computational algorithms. (The interested reader is referred to the large body of published SA and UA literature for a more complete overview of the various types of uncertainties, methodologies and results obtained). SA is helpful for ranking the various sources of uncertainty and error in the results of core analyses. SA and UA are required to address cost, safety, and licensing needs and should be applied to all aspects of reactor multi-physics simulation. SA and UA can guide experimental, modelling, and algorithm research and development. Current SA and UA rely either on derivative-based methods such as stochastic sampling methods or on generalized perturbation theory to obtain sensitivity coefficients. Neither approach addresses all needs. In order to benefit from recent advances in modelling and simulation and the availability of new covariance data (nuclear data uncertainties) extensive sensitivity and uncertainty studies are needed for quantification of the impact of different sources of uncertainties on the design and safety parameters of HTGRs. Only a parallel effort in advanced simulation and in nuclear data improvement will be able to provide designers with more robust and well validated calculation tools to meet design target accuracies. In February 2009, the Technical Working Group on Gas-Cooled Reactors (TWG-GCR) of the International Atomic Energy Agency (IAEA) recommended that the proposed Coordinated Research Program (CRP) on
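
    A toy sketch of the stochastic-sampling approach mentioned above: propagate input uncertainties through a model and rank inputs by rank correlation with the output. The model and uncertainty values are invented for illustration, not taken from the CRP.

      import numpy as np
      from scipy.stats import spearmanr

      def model(x):                  # stand-in for a core simulation
          return 1.0 + 0.8 * x[0] - 0.3 * x[1] + 0.05 * x[2] ** 2

      rng = np.random.default_rng(1)
      inputs = rng.normal(loc=[1.0, 2.0, 0.5],
                          scale=[0.02, 0.10, 0.01], size=(1000, 3))
      outputs = np.array([model(x) for x in inputs])

      print("output mean/std:", outputs.mean(), outputs.std(ddof=1))
      for i in range(inputs.shape[1]):   # crude sensitivity ranking
          rho, _ = spearmanr(inputs[:, i], outputs)
          print(f"input {i}: Spearman rho = {rho:+.2f}")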

  6. C-IBI: Targeting cumulative coordination within an iterative protocol to derive coarse-grained models of (multi-component) complex fluids

    Science.gov (United States)

    de Oliveira, Tiago E.; Netz, Paulo A.; Kremer, Kurt; Junghans, Christoph; Mukherji, Debashish

    2016-05-01

    We present a coarse-graining strategy that we test for aqueous mixtures. The method uses pair-wise cumulative coordination as a target function within an iterative Boltzmann inversion (IBI) like protocol. We name this method coordination iterative Boltzmann inversion (C-IBI). While the underlying coarse-grained model is still structure based and, thus, preserves pair-wise solution structure, our method also reproduces solvation thermodynamics of binary and/or ternary mixtures. Additionally, we observe much faster convergence within C-IBI compared to IBI. To validate the robustness, we apply C-IBI to test cases of solvation thermodynamics of aqueous urea and of triglycine in aqueous urea.
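
    The target function of C-IBI, the pair-wise cumulative coordination C(r) = 4*pi*rho * integral from 0 to r of g(r') r'^2 dr', can be obtained from a radial distribution function by numerical integration. A sketch with a placeholder RDF; the density and peak parameters are illustrative only.

      import numpy as np
      from scipy.integrate import cumulative_trapezoid

      rho = 0.0334                        # number density, e.g. 1/A^3
      r = np.linspace(0.01, 10.0, 500)    # radial grid in Angstrom
      g = 1.0 + 1.5 * np.exp(-((r - 2.8) / 0.5) ** 2)  # placeholder RDF

      # cumulative coordination number C(r) on the whole grid
      C = 4.0 * np.pi * rho * cumulative_trapezoid(g * r**2, r, initial=0.0)
      print(f"coordination within 3.5 A: {C[np.searchsorted(r, 3.5)]:.2f}")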

  7. Quality assessment of cortex cinnamomi by HPLC chemical fingerprint, principal component analysis and cluster analysis.

    Science.gov (United States)

    Yang, Jie; Chen, Li-Hong; Zhang, Qin; Lai, Mao-Xiang; Wang, Qiang

    2007-06-01

    HPLC fingerprint analysis, principal component analysis (PCA), and cluster analysis were introduced for the quality assessment of Cortex cinnamomi (CC). The fingerprint of CC was developed and validated by analyzing 30 samples of CC from different species and geographic locations. Seventeen chromatographic peaks were selected as characteristic peaks and their relative peak areas (RPA) were calculated for quantitative expression of the HPLC fingerprints. The correlation coefficients of similarity between chromatograms were higher than 0.95 for the same species but much lower than 0.6 for different species. In addition, two principal components (PCs) were extracted by PCA. PC1 separated Cinnamomum cassia from the other species, capturing 56.75% of the variance, while PC2 contributed to their further separation, capturing 19.08% of the variance. The scores of the samples showed that they could be clustered reasonably into different groups corresponding to different species and regions. The scores and loading plots together clearly revealed the different chemical properties of each group. The cluster analysis confirmed the results of the PCA. Therefore, HPLC fingerprinting in combination with chemometric techniques provides a flexible and reliable method for the quality assessment of traditional Chinese medicines.
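
    A compact sketch of the chemometric pipeline described above (similarity of fingerprints, PCA scores, hierarchical clustering), using a random placeholder table of relative peak areas rather than the CC data:

      import numpy as np
      from sklearn.decomposition import PCA
      from scipy.cluster.hierarchy import linkage, fcluster

      rng = np.random.default_rng(0)
      rpa = rng.random((30, 17))        # 30 samples x 17 peak areas

      sim = np.corrcoef(rpa)            # similarity between chromatograms
      scores = PCA(n_components=2).fit_transform(rpa)  # PC1/PC2 scores

      Z = linkage(rpa, method="ward")   # hierarchical cluster analysis
      groups = fcluster(Z, t=3, criterion="maxclust")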

  8. Two-Component Structure of the Hbeta Broad-Line Region in Quasars. I. Evidence from Spectral Principal Component Analysis

    CERN Document Server

    Hu, Chen; Ho, Luis C; Ferland, Gary J; Baldwin, Jack A; Wang, Ye

    2012-01-01

    We report on a spectral principal component analysis (SPCA) of a sample of 816 quasars, selected to have small Fe II velocity shifts with spectral coverage in the rest wavelength range 3500--5500 \\AA. The sample is explicitly designed to mitigate spurious effects on SPCA induced by Fe II velocity shifts. We improve the algorithm of SPCA in the literature and introduce a new quantity, \\emph{the fractional-contribution spectrum}, that effectively identifies the emission features encoded in each eigenspectrum. The first eigenspectrum clearly records the power-law continuum and very broad Balmer emission lines. Narrow emission lines dominate the second eigenspectrum. The third eigenspectrum represents the Fe II emission and a component of the Balmer lines with kinematically similar intermediate velocity widths. Correlations between the weights of the eigenspectra and parametric measurements of line strength and continuum slope confirm the above interpretation for the eigenspectra. Monte Carlo simulations demonstr...

  9. SNIa detection in the SNLS photometric analysis using Morphological Component Analysis

    CERN Document Server

    Möller, A; Lanusse, F; Neveu, J; Palanque-Delabrouille, N

    2015-01-01

    Detection of supernovae (SNe) and, more generally, of transient events in large surveys can produce numerous false detections. In the case of deferred processing of survey images, this implies reconstructing complete light curves for all detections, requiring sizable processing time and resources. Optimizing the detection of transient events is thus an important issue for both present and future surveys. We present here the optimization done in the SuperNova Legacy Survey (SNLS) for the 5-year deferred photometric analysis. In this analysis, detections are derived from stacks of subtracted images, with one stack per lunation. The 3-year analysis provided 300,000 detections dominated by signals of bright objects that were not perfectly subtracted. We developed a subtracted-image stack treatment to reduce the number of non-SN-like events using morphological component analysis. This technique exploits the morphological diversity of the objects to be detected to extract the signal of interest. At the level of o...

  10. Modeling Hurricanes using Principal Component Analysis in conjunction with Non-Response Analysis

    CERN Document Server

    Wooten, Rebecca D

    2016-01-01

    This paper demonstrates how principal component analysis can be used to determine the distinct factors that house the terms explaining the variance among co-dependent variables, and how non-response analysis can be applied to model the non-functional relationships that exist in a dynamic system. Moreover, the analysis indicates that there are pumping actions, or ebb and flow, between the pressure and the water temperature readings near the surface of the water in the days before a tropical storm forms in the Atlantic Basin, and that there is a high correlation between storm conditions and buoy conditions three to four days before a storm forms. Further analysis shows that the relationship among the variables is conical.

  11. Multivariate analysis of remote LIBS spectra using partial least squares, principal component analysis, and related techniques

    Energy Technology Data Exchange (ETDEWEB)

    Clegg, Samuel M [Los Alamos National Laboratory; Barefield, James E [Los Alamos National Laboratory; Wiens, Roger C [Los Alamos National Laboratory; Sklute, Elizabeth [MT HOLYOKE COLLEGE; Dyare, Melinda D [MT HOLYOKE COLLEGE

    2008-01-01

    Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by the chemical matrix effects. These chemical matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, in order for a series of calibration standards similar to the unknown to be employed. In this paper, three new Multivariate Analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly-metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.
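
    A minimal sketch of the PLS calibration step with scikit-learn; the spectra and reference compositions below are random placeholders, not the rock data used in the paper.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(0)
      spectra = rng.random((18, 2048))     # 18 samples x spectral channels
      conc = rng.uniform(40, 75, size=18)  # reference wt% of one oxide

      pls = PLSRegression(n_components=5)
      pred = cross_val_predict(pls, spectra, conc, cv=6)
      rmse = np.sqrt(np.mean((pred.ravel() - conc) ** 2))
      print(f"cross-validated RMSE: {rmse:.2f} wt%")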

  12. Neural signatures of fairness-related normative decision making in the ultimatum game: a coordinate-based meta-analysis.

    Science.gov (United States)

    Feng, Chunliang; Luo, Yue-Jia; Krueger, Frank

    2015-02-01

    The willingness to incur personal costs to enforce prosocial norms represents a hallmark of human civilization. Although recent neuroscience studies have used the ultimatum game to understand the neuropsychological mechanisms that underlie the enforcement of fairness norms, a precise characterization of the neural systems underlying fairness-related norm enforcement remains elusive. In this study, we used a coordinate-based meta-analysis of functional magnetic resonance imaging (fMRI) studies using the ultimatum game with the goal of providing an additional level of evidence for the refinement of the underlying neural architecture of this puzzling human behavior. Our results demonstrated a convergence of reported activation foci in brain networks associated with psychological components of fairness-related normative decision making, presumably reflecting a reflexive and intuitive system (System 1) and a reflective and deliberate system (System 2). System 1 (anterior insula, ventromedial prefrontal cortex [PFC]) may be associated with the reflexive and intuitive responses to norm violations, representing a motivation to punish norm violators. Those intuitive responses conflict with economic self-interest, encoded in the dorsal anterior cingulate cortex (ACC), which may engage cognitive control from a reflective and deliberate System 2 to resolve the conflict by either suppressing the intuitive responses (ventrolateral PFC, dorsomedial PFC, left dorsolateral PFC, and rostral ACC) or over-riding self-interest (right dorsolateral PFC). Taken together, we suggest that fairness-related norm enforcement recruits an intuitive system for rapid evaluation of norm violations and a deliberate system for integrating both social norms and self-interest to regulate the intuitive system in favor of more flexible decision making. © 2014 Wiley Periodicals, Inc.

  13. Lateralized Effects of Categorical and Coordinate Spatial Processing of Component Parts on the Recognition of 3D Non-Nameable Objects

    Science.gov (United States)

    Saneyoshi, Ayako; Michimata, Chikashi

    2009-01-01

    Participants performed two object-matching tasks for novel, non-nameable objects consisting of geons. For each original stimulus, two transformations were applied to create comparison stimuli. In the categorical transformation, a geon connected to geon A was moved to geon B. In the coordinate transformation, a geon connected to geon A was moved to…

  14. Assessment and analysis components of physical fitness of students

    Directory of Open Access Journals (Sweden)

    Kashuba V.A.

    2012-08-01

    Full Text Available The components of students' physical fitness are assessed, and the internal and external factors affecting students' quality of life are analyzed. The study involved more than 200 students. It was found that students represent a category of people with elevated risk factors, which include nervous and mental strain, constant violations of eating, work and leisure regimes, and a lack of care for their own health in their way of life. It is noted that the existing approaches to promoting students' physical fitness are inefficient and require the development and implementation of brand new contemporary theoretical foundations and practical approaches to the problem of increasing students' activity. It is argued that the forms, methods and learning tools currently used in the practice of higher education do not fully ensure the implementation of approaches to promoting students' physical fitness and do not meet the requirements for the preparation of the modern health professional.

  15. Parallel TREE code for two-component ultracold plasma analysis

    Science.gov (United States)

    Jeon, Byoungseon; Kress, Joel D.; Collins, Lee A.; Grønbech-Jensen, Niels

    2008-02-01

    The TREE method has been widely used for long-range interaction N-body problems. We have developed a parallel TREE code for two-component classical plasmas with open boundary conditions and highly non-uniform charge distributions. The program efficiently handles millions of particles evolved over long relaxation times requiring millions of time steps. Appropriate domain decomposition and dynamic data management were employed, and large-scale parallel processing was achieved using an intermediate level of granularity of domain decomposition and ghost TREE communication. Even though the computational load is not fully distributed in fine grains, high parallel efficiency was achieved for ultracold plasma systems of charged particles. As an application, we performed simulations of an ultracold neutral plasma with a half million particles and a half million time steps. For the long temporal trajectories of relaxation between heavy ions and light electrons, large configurations of ultracold plasmas can now be investigated, which was not possible in past studies.

  16. Gold price analysis based on ensemble empirical mode decomposition and independent component analysis

    Science.gov (United States)

    Xian, Lu; He, Kaijian; Lai, Kin Keung

    2016-07-01

    In recent years, the increasing volatility of the gold price has received increasing attention from academia and industry alike. Due to the complexity and significant fluctuations observed in the gold market, however, most current approaches have failed to produce robust and consistent modeling and forecasting results. Ensemble Empirical Mode Decomposition (EEMD) and Independent Component Analysis (ICA) are novel data analysis methods that can deal with nonlinear and non-stationary time series. This study introduces a new methodology which combines the two methods and applies it to gold price analysis. This involves three steps: first, the original gold price series is decomposed into several Intrinsic Mode Functions (IMFs) by EEMD. Second, the IMFs are further processed, with unimportant ones re-grouped, and a new set of data called Virtual Intrinsic Mode Functions (VIMFs) is reconstructed. Finally, ICA is used to decompose the VIMFs into statistically Independent Components (ICs). The decomposition results reveal that the gold price series can be represented by a linear combination of the ICs. Furthermore, the economic meanings of the ICs are analyzed and discussed in detail according to their trends and transformation coefficients; the analyses not only explain the underlying driving factors and their impacts but also examine in depth how these factors affect the gold price. Regression analysis has also been conducted to verify the findings. Results from the empirical studies in the gold markets show that EEMD-ICA serves as an effective technique for gold price analysis from a new perspective.
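
    A rough sketch of the EEMD-then-ICA pipeline. It assumes the PyEMD package (distributed as EMD-signal) for the decomposition and scikit-learn for FastICA; the price series and the IMF-regrouping rule are placeholders, not the paper's.

      import numpy as np
      from PyEMD import EEMD                    # pip install EMD-signal
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(0)
      price = np.cumsum(rng.standard_normal(1024))  # placeholder series

      imfs = EEMD(trials=20).eemd(price)        # step 1: EEMD into IMFs

      # step 2: regroup minor IMFs into "virtual" IMFs (a simple choice:
      # merge the highest-frequency ones; the paper's rule may differ)
      vimfs = np.vstack([imfs[:3].sum(axis=0), imfs[3:]])

      # step 3: extract statistically independent components
      ics = FastICA(n_components=min(4, len(vimfs)),
                    random_state=0).fit_transform(vimfs.T)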

  17. A post-modification approach to independent component analysis for resolution of overlapping GC/MS signals: from independent components to chemical components

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Independent component analysis (ICA) has demonstrated its power to extract mass spectra from overlapping GC/MS signals. However, there remains the problem that mass spectra with negative peaks at some m/z values will be obtained in the resolved results when there are overlapping peaks in the mass spectra of a mixture. Based on a detailed theoretical analysis of the preconditions for ICA and the non-negative property of GC/MS signals, a post-modification based on chemical knowledge (PMBK) strategy is proposed to solve this problem. Using both simulated and experimental GC/MS signals, it was shown that the PMBK strategy can improve the resolution effectively.
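
    A loose sketch of the generic non-negativity part of such a post-modification; the published PMBK rule draws on chemical knowledge, whereas here the negative peaks are simply clipped and the contribution profiles re-fitted by non-negative least squares on placeholder data.

      import numpy as np
      from scipy.optimize import nnls
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(0)
      X = np.abs(rng.random((200, 120)))    # placeholder: scans x m/z

      ica = FastICA(n_components=3, random_state=0)
      ica.fit(X)                            # components_ ~ mass spectra
      spectra = np.clip(ica.components_, 0.0, None)  # remove negatives

      # re-fit non-negative contributions of each corrected spectrum
      conc = np.array([nnls(spectra.T, scan)[0] for scan in X])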

  18. Coordinated analysis of delayed sprites with high-speed images and remote electromagnetic fields

    Science.gov (United States)

    Li, J.; Cummer, S. A.; Lyons, W. A.; Nelson, T. E.

    2008-10-01

    Simultaneous measurements of high-altitude optical emissions and magnetic fields produced by sprite-associated lightning discharges enable a close examination of the link between low-altitude lightning processes and high-altitude sprite processes. We report results of the coordinated analysis of high-speed sprite video and wideband magnetic field measurements recorded simultaneously at Yucca Ridge Field Station and Duke University. From June to August 2005, sprites were detected following 67 lightning strokes, all of which had positive polarity. Our data showed that 46% of the 83 discrete sprite events in these sequences initiated more than 10 ms after the lightning return stroke, and we focus on these delayed sprites in this work. All delayed sprites were preceded by continuing current moments that averaged at least 11 kA km between the return stroke and sprites. The total lightning charge moment change at sprite initiation varied from 600 to 18,600 C km, and the minimum value to initiate long-delayed sprites ranged from 600 C km for a 15 ms delay to 2000 C km for delays of more than 120 ms. We numerically simulated electric fields at altitudes above these lightning discharges and found that the maximum normalized electric fields are essentially the same as fields that produce short-delayed sprites. Both estimated and simulation-predicted sprite initiation altitudes indicate that long-delayed sprites generally initiate around 5 km lower than short-delayed sprites. The simulation results also reveal that slow (5-20 ms) intensifications in continuing current can play a major role in initiating delayed sprites.

  19. Aerothermoelastic analysis of panel flutter based on the absolute nodal coordinate formulation

    Energy Technology Data Exchange (ETDEWEB)

    Abbas, Laith K., E-mail: laithabbass@yahoo.com; Rui, Xiaoting, E-mail: ruixt@163.com [Nanjing University of Science and Technology, Institute of Launch Dynamics (China); Marzocca, Piergiovanni, E-mail: pmarzocc@clarkson.edu [Clarkson University, Mechanical and Aeronautical Engineering Department (United States)

    2015-02-15

    Panels of reentry vehicles are subjected to a wide range of flow conditions during the ascent and reentry phases. The flow can vary from subsonic continuum flow to hypersonic rarefied flow with widely ranging dynamic pressure and associated aerodynamic heating. One of the main design considerations is the assurance of safety against panel flutter under flow conditions characterized by a severe thermal environment. This paper deals with the supersonic/hypersonic flutter analysis of panels exposed to a temperature field. A 3-D rectangular plate element of variable thickness based on the absolute nodal coordinate formulation (ANCF) has been developed for the structural model and subjected to an assumed thermal profile that can result from any residual heat seeping into the metallic panels through the thermal protection systems. A continuum mechanics approach for the definition of the elastic forces within the finite element is considered. Both shear strain and transverse normal strain are taken into account. The aerodynamic force is evaluated by considering first-order piston theory to linearize the potential flow and is coupled with the structural model to account for pressure loading. A provision is made to take into account the effect of arbitrary flow directions with respect to the panel edges. Aerothermoelastic equations using ANCF are derived and solved numerically. Values of critical dynamic pressure are obtained by a modal approach, in which the mode shapes are obtained by ANCF. A detailed parametric study is carried out to observe the effects of different temperature loadings, flow angle directions, and aspect ratios on the flutter boundary.

  20. Principal coordinate analysis of genotype × environment interaction for grain yield of bread wheat in the semi-arid regions

    Directory of Open Access Journals (Sweden)

    Sabaghnia Naser

    2013-01-01

    Full Text Available Multi-environment trials show significant main effects and a significant multiplicative genotype × environment (GE) interaction effect. Principal coordinate analysis (PCOA) offers a more appropriate statistical treatment of such situations than traditional statistical methods. Eighteen bread wheat genotypes were grown in four semi-arid regions over three seasons to study the GE interaction and yield stability, and the obtained grain yield data were analyzed using PCOA. Combined analysis of variance indicated that all of the studied effects, including the main effects of genotype and environment as well as the GE interaction, were highly significant. According to the grand mean and total mean yield, the test environments were grouped into two main classes: high mean yield (H) and low mean yield (L). There were five H test environments and six L test environments, which were analyzed in sequential cycles. For each cycle, both a scatter-point diagram and a minimum spanning tree plot were drawn. The most stable genotypes under the dynamic stability concept, identified from the minimum spanning tree plots and centroid distances, were G1 (3310.2 kg ha-1) and G5 (3065.6 kg ha-1), which could therefore be recommended for unfavorable or poor conditions. Genotypes G7 (3047.2 kg ha-1) and G16 (3132.3 kg ha-1) were located several times in the vertex positions of the high-yield cycles according to the principal coordinate analysis. The principal coordinate analysis provided useful and interesting ways of investigating the GE interaction of the bread wheat genotypes. Finally, the results of the principal coordinate analysis generally confirmed the breeding value of the genotypes obtained on the basis of the yield stability evaluation.
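
    Classical principal coordinate analysis reduces to double-centering a squared distance matrix and taking its leading eigenvectors. A minimal sketch with a placeholder genotype-by-environment yield table (not the trial data):

      import numpy as np

      rng = np.random.default_rng(0)
      Y = rng.random((18, 12))             # placeholder yields: 18 x 12
      D = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)

      n = D.shape[0]
      J = np.eye(n) - np.ones((n, n)) / n  # centering matrix
      B = -0.5 * J @ (D ** 2) @ J          # Gower's double centering

      eigval, eigvec = np.linalg.eigh(B)
      order = np.argsort(eigval)[::-1][:2]
      coords = eigvec[:, order] * np.sqrt(eigval[order])  # PCo1, PCo2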

  1. Analysis of Femoral Components of Cemented Total Hip Arthroplasty

    CERN Document Server

    Singh, Shantanu

    2014-01-01

    In cemented Total Hip Arthroplasty (THA), the material chosen for the femoral stem and the cross section of the stem itself prove to be critical parameters for the stress distribution in the femoral components, the interfacial stresses and the micro-movements. Titanium alloy (Ti6Al4V), when used as a material for the femoral stem, recorded large displacement as compared to cobalt-chromium alloy (CoCrMo) stems. This large displacement in the case of Ti6Al4V caused the stem to bend inside the cement mantle, thus destroying it. Thus, CoCrMo proved to be the better choice in cemented THA. Failure in THA may occur at the cement-stem or cement-bone interface, so the interfacial stresses and micro-movements were analysed in the present study. Comparison between trapezium and circular cross sections showed that a femoral stem with a trapezium cross section underwent a smaller amount of sliding and debonding at both interfaces than one with a circular cross section. Moreover, the trapezium cross section also generated lower peak stresses in the femoral stem and cortical femur. The pres...

  2. Principal component analysis of gene frequencies of Chinese populations

    Institute of Scientific and Technical Information of China (English)

    肖春杰; L.L.Cavalli-Sforza; E.Minch; 杜若甫

    2000-01-01

    Principal components (PCs) were calculated based on gene frequencies of 130 alleles at 38 loci in Chinese populations, and geographic PC maps were constructed. The first PC map of the Han shows the genetic difference between Southern and Northern Mongoloids, while the second PC indicates the gene flow between Caucasoid and Mongoloids. The first PC map of the Chinese ethnic minorities is similar to the second PC map of the Han, while their second PC map is similar to the first PC map of the Han. When calculating PC with the gene frequency data from both the Han and ethnic minorities, the first and second PC maps most resemble those of the ethnic minorities alone. The third and fourth PC maps of Chinese populations may reflect historical events that allowed the expansion of the populations in the highly civilized regions. A clear-cut boundary between Southern and Northern Mongoloids in the synthetic map of the Chinese populations was observed in the zone of the Yangtze River. We suggest that the ancestors of Southern and Northern Mongoloids had already separated before reaching Asia. The ancestors of the Southern Mongoloids may result from the initial expansion from Africa or the Middle East, via the south coast of Asia, toward Southeast Asia, and ultimately South China. Upon reaching the Yangtze River, they might even have crossed the river to occupy the nearby regions for a period of time. The ancestors of the Northern Mongoloids probably expanded from Africa via the Northern Pamirs, first went eastward, then towards the south to reach the Yangtze River. The expansion of the Northern Mongoloids toward the south of the Yangtze River happened only in the last 2 or 3 thousand years.

  3. Process Knowledge Discovery Using Sparse Principal Component Analysis

    DEFF Research Database (Denmark)

    Gao, Huihui; Gajjar, Shriram; Kulahci, Murat

    2016-01-01

    As the goals of ensuring process safety and energy efficiency become ever more challenging, engineers increasingly rely on data collected from such processes for informed decision making. During recent decades, extracting and interpreting valuable process information from large historical data sets...... are demonstrated through the Tennessee Eastman process simulation. The results indicate how knowledge and process insight can be discovered through a systematic analysis of sparse loadings....
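
    A small sketch of sparse loadings with scikit-learn's SparsePCA as a stand-in for the method discussed above; the process data are random placeholders, not the Tennessee Eastman simulation.

      import numpy as np
      from sklearn.decomposition import SparsePCA

      rng = np.random.default_rng(0)
      X = rng.standard_normal((500, 22))  # placeholder process variables

      spca = SparsePCA(n_components=3, alpha=1.0, random_state=0)
      spca.fit(X - X.mean(axis=0))
      # zero loadings mark variables that play no role in a component,
      # which is what eases interpretation
      print(np.round(spca.components_, 2))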

  5. Analysis of Pelvis-Thorax Coordination Patterns of Professional and Amateur Golfers during Golf Swing.

    Science.gov (United States)

    Sim, Taeyong; Yoo, Hakje; Choi, Ahnryul; Lee, Ki Young; Choi, Mun-Taek; Lee, Soeun; Mun, Joung Hwan

    2017-03-13

    The aim of this research was to quantify the coordination pattern between the thorax and pelvis during the golf swing. Coordination patterns were calculated using the vector coding technique, which has been applied to quantify changes in the coupling angle (γ) between two different segments. Fifteen professional and fifteen amateur golfers, none of whom had a significant history of musculoskeletal injuries, participated. There was no significant difference in coordination patterns between the two groups for rotation motion during the backswing (p = 0.333). On the other hand, during the downswing phase, there were significant differences between the professional and amateur groups in all motions (flexion/extension: professional [γ] = 187.8°, amateur [γ] = 167.4°; side bending: professional [γ] = 288.4°, amateur [γ] = 245.7°; rotation: professional [γ] = 232.0°, amateur [γ] = 229.5°). These results are expected to provide a discriminating measure for assessing the complex coordination of golfers' trunk movements and a preliminary basis for comparisons across golf skill levels.
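
    The vector coding technique itself is compact: the coupling angle γ is the direction of the frame-to-frame change on the pelvis-thorax angle-angle plot. A sketch with invented rotation series standing in for the measured kinematics:

      import numpy as np

      t = np.linspace(0.0, 1.0, 101)             # normalized swing time
      pelvis = 40 * np.sin(2 * np.pi * t)        # hypothetical angles, deg
      thorax = 55 * np.sin(2 * np.pi * t - 0.3)

      d_prox = np.diff(pelvis)                   # proximal segment change
      d_dist = np.diff(thorax)                   # distal segment change
      gamma = np.degrees(np.arctan2(d_dist, d_prox)) % 360.0  # 0-360 deg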

  6. Comparison of Different Independent Component Analysis Algorithms for Output-Only Modal Analysis

    Directory of Open Access Journals (Sweden)

    Jianying Wang

    2016-01-01

    Full Text Available Starting from the principle of independent component analysis (ICA) and the uncertainty in the amplitude, order, and number of source signals, this paper explains the root causes of modal energy uncertainty, identified-order uncertainty, and missing modes in output-only modal analysis based on ICA methods. To address the lack of comparison and evaluation of different ICA algorithms for output-only modal analysis, this paper studies the different objective functions and optimization methods of ICA for output-only modal parameter identification. Simulation results on a simply supported beam verify the effectiveness, robustness, and convergence rate of five different ICA algorithms for output-only modal parameter identification and show that negentropy maximization with quasi-Newton iteration is the ICA method best suited to modal parameter identification.
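
    The contrast-function comparison can be reproduced in miniature with scikit-learn's FastICA, whose fun parameter selects different negentropy approximations; the two "modal" sources below are placeholders, not the beam simulation.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(0)
      t = np.linspace(0, 10, 2000)
      sources = np.c_[np.sin(2 * np.pi * 1.2 * t),
                      np.sign(np.sin(2 * np.pi * 3.1 * t))]
      X = sources @ rng.random((2, 6))          # mixed "sensor" records

      for fun in ("logcosh", "exp", "cube"):    # different contrasts
          ica = FastICA(n_components=2, fun=fun, random_state=0)
          ica.fit(X)
          print(fun, "iterations to converge:", ica.n_iter_)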

  9. Blind Component Separation in Wavelet Space: Application to CMB Analysis

    Directory of Open Access Journals (Sweden)

    J. Delabrouille

    2005-09-01

    Full Text Available It is a recurrent issue in astronomical data analysis that observations are incomplete maps with missing patches or intentionally masked parts. In addition, many astrophysical emissions are nonstationary processes over the sky. All these effects impair data processing techniques which work in the Fourier domain. Spectral matching ICA (SMICA is a source separation method based on spectral matching in Fourier space designed for the separation of diffuse astrophysical emissions in cosmic microwave background observations. This paper proposes an extension of SMICA to the wavelet domain and demonstrates the effectiveness of wavelet-based statistics for dealing with gaps in the data.

  10. Quantitative analysis of polymorphic mixtures of ranitidine hydrochloride by Raman spectroscopy and principal components analysis.

    Science.gov (United States)

    Pratiwi, Destari; Fawcett, J Paul; Gordon, Keith C; Rades, Thomas

    2002-11-01

    Ranitidine hydrochloride exists as two polymorphs, forms I and II, both of which are used to manufacture commercial tablets. Raman spectroscopy can be used to differentiate the two forms, but univariate methods of quantitative analysis of one polymorph as an impurity in the other lack sensitivity. We have applied principal components analysis (PCA) of Raman spectra to binary mixtures of the two polymorphs and to binary mixtures prepared by adding one polymorph to powdered tablets of the other. Based on absorption measurements of seven spectral regions, it was found that >97% of the spectral variation was accounted for by three principal components. Quantitative calibration models generated by multiple linear regression predicted a detection limit and a quantitation limit for either form I or II in mixtures of the two of 0.6% and 1.8%, respectively. This study demonstrates that PCA of Raman spectroscopic data provides a sensitive method for the quantitative analysis of polymorphic impurities of drugs in commercial tablets, with a quantitation limit of less than 2%.

  11. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    Science.gov (United States)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems, in static as well as real-time settings, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet-decomposition-based principal component analysis approach to face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. Face recognition means identifying a person from facial images, and it resembles factor analysis in some sense, i.e. the extraction of the principal components of an image. Principal component analysis is subject to some drawbacks, mainly its poor discriminatory power and, in particular, the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet-transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images in both the space and frequency domains. Experimental results indicate that this face recognition method achieves a significant improvement in recognition rate as well as better computational efficiency.
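
    A minimal sketch of the combination described above, assuming the PyWavelets package: one level of 2-D wavelet decomposition for feature reduction, then PCA on the approximation coefficients. The face images are random placeholders.

      import numpy as np
      import pywt
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      faces = rng.random((40, 64, 64))      # placeholder face images

      features = []
      for img in faces:
          cA, _ = pywt.dwt2(img, "haar")    # keep approximation subband
          features.append(cA.ravel())       # 32x32 -> 1024-dim vector
      features = np.array(features)

      scores = PCA(n_components=20).fit_transform(features)  # face codes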

  12. Application of principal-component analysis to the interpretation of brown coal properties

    Energy Technology Data Exchange (ETDEWEB)

    Tesch, S.; Otto, M. [TU Bergakademie, Freiberg (Germany). Institute for Analytical Chemistry

    1995-07-01

    The characterization of coal properties using principal-component analysis is described. The aim is to obtain correlations between a large number of chemical and technological parameters as well as FT-i.r. spectroscopic data. A database on 44 brown coals from different deposits was interpreted. After computation of the principal components, scatterplots and component-weight plots are presented for the first two or three principal components. The overlap of the component-weights plot and the scatterplot (biplot) shows how it is possible to classify brown coals by means of selected characteristics. 14 refs., 6 figs., 1 tab.

  13. Comparative analysis of wolbachia genomes reveals streamlining and divergence of minimalist two-component systems.

    Science.gov (United States)

    Christensen, Steen; Serbus, Laura Renee

    2015-03-24

    Two-component regulatory systems are commonly used by bacteria to coordinate intracellular responses with environmental cues. These systems are composed of functional protein pairs consisting of a sensor histidine kinase and cognate response regulator. In contrast to the well-studied Caulobacter crescentus system, which carries dozens of these pairs, the streamlined bacterial endosymbiont Wolbachia pipientis encodes only two pairs: CckA/CtrA and PleC/PleD. Here, we used bioinformatic tools to compare characterized two-component system relays from C. crescentus, the related Anaplasmataceae species Anaplasma phagocytophilum and Ehrlichia chaffeensis, and 12 sequenced Wolbachia strains. We found the core protein pairs and a subset of interacting partners to be highly conserved within Wolbachia and these other Anaplasmataceae. Genes involved in two-component signaling were positioned differently within the various Wolbachia genomes, whereas the local context of each gene was conserved. Unlike Anaplasma and Ehrlichia, Wolbachia two-component genes were more consistently found clustered with metabolic genes. The domain architecture and key functional residues standard for two-component system proteins were well-conserved in Wolbachia, although residues that specify cognate pairing diverged substantially from other Anaplasmataceae. These findings indicate that Wolbachia two-component signaling pairs share considerable functional overlap with other α-proteobacterial systems, whereas their divergence suggests the potential for regulatory differences and cross-talk.

  14. Tuber proteome comparison of five potato varieties by principal component analysis

    NARCIS (Netherlands)

    de Mello, Carla Souza; van Dijk, Jeroen P.; Voorhuijzen, Marleen; Kok, Esther J.; Arisi, Ana Carolina Maisonnave

    2016-01-01

    BACKGROUND: Data analysis of omics data should be performed by multivariate analysis such as principal component analysis (PCA). The way data are clustered in PCA is of major importance to develop some classification systems based on multivariate analysis, such as soft independent modeling of class analogy (SIMCA).

  15. Principal Component Analysis of Long-Lag, Wide-Pulse Gamma-Ray Burst Data

    Indian Academy of Sciences (India)

    Zhao-Yang Peng; Wen-Shuai Liu

    2014-09-01

    We have carried out a Principal Component Analysis (PCA) of the temporal and spectral variables of 24 long-lag, wide-pulse gamma-ray bursts (GRBs) presented by Norris et al. (2005). Taking all eight temporal and spectral parameters into account, our analysis shows that four principal components are enough to describe the variation of the temporal and spectral data of long-lag bursts. In addition, the first two principal components are dominated by the temporal variables, while the third and fourth principal components are dominated by the spectral parameters.

  16. Unsupervised component analysis: PCA, POA and ICA data exploring - connecting the dots

    Science.gov (United States)

    Pereira, Jorge Costa; Azevedo, Julio Cesar R.; Knapik, Heloise G.; Burrows, Hugh Douglas

    2016-08-01

    Under controlled conditions, each compound presents a specific spectral activity. Based on this assumption, this article discusses Principal Component Analysis (PCA), Principal Object Analysis (POA) and Independent Component Analysis (ICA) algorithms and some decision criteria in order to obtain unequivocal information on the number of active spectral components present in a certain aquatic system. The POA algorithm was shown to be a very robust unsupervised object-oriented exploratory data analysis, proven to be successful in correctly determining the number of independent components present in a given spectral dataset. In this work we found that POA combined with ICA is a robust and accurate unsupervised method to retrieve maximal spectral information (the number of components, respective signal sources and their contributions).

  17. Competition analysis on the operating system market using principal component analysis

    Directory of Open Access Journals (Sweden)

    Brătucu, G.

    2011-01-01

    Full Text Available The operating system market has evolved greatly. The largest software producer in the world, Microsoft, dominates the operating systems segment: with three operating systems (Windows XP, Windows Vista and Windows 7), the company held a market share of 87.54% in January 2011. Over time, open-source operating systems have begun to penetrate the market very strongly, affecting other manufacturers. Companies such as Apple Inc. and Google Inc. have also entered the operating system market. This paper aims to compare the best-selling operating systems on the market in terms of their defining characteristics. For this purpose the principal components analysis method was used.

  18. Data-Parallel Mesh Connected Components Labeling and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, Cyrus; Childs, Hank; Gaither, Kelly

    2011-04-10

    We present a data-parallel algorithm for identifying and labeling the connected sub-meshes within a domain-decomposed 3D mesh. The identification task is challenging in a distributed-memory parallel setting because connectivity is transitive and the cells composing each sub-mesh may span many or all processors. Our algorithm employs a multi-stage application of the Union-find algorithm and a spatial partitioning scheme to efficiently merge information across processors and produce a global labeling of connected sub-meshes. Marking each vertex with its corresponding sub-mesh label allows us to isolate mesh features based on topology, enabling new analysis capabilities. We briefly discuss two specific applications of the algorithm and present results from a weak scaling study. We demonstrate the algorithm at concurrency levels up to 2197 cores and analyze meshes containing up to 68 billion cells.
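
    The serial core of the labeling step is a union-find pass over cell adjacency; a small sketch follows (the parallel algorithm's domain decomposition and cross-processor merging are not shown).

      def find(parent, i):
          while parent[i] != i:
              parent[i] = parent[parent[i]]   # path halving
              i = parent[i]
          return i

      def label_components(n_cells, adjacency):
          """adjacency: iterable of (cell_a, cell_b) shared-face pairs."""
          parent = list(range(n_cells))
          for a, b in adjacency:
              ra, rb = find(parent, a), find(parent, b)
              if ra != rb:
                  parent[ra] = rb             # union
          return [find(parent, i) for i in range(n_cells)]

      print(label_components(5, [(0, 1), (1, 2), (3, 4)]))  # [2, 2, 2, 4, 4]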

  19. Modeling and Analysis of Component Faults and Reliability

    DEFF Research Database (Denmark)

    Le Guilly, Thibaut; Olsen, Petur; Ravn, Anders Peter;

    2016-01-01

    This chapter presents a process to design and validate models of reactive systems in the form of communicating timed automata. The models are extended with faults associated with probabilities of occurrence. This enables a fault tree analysis of the system using minimal cut sets that are automatically generated. The stochastic information on the faults is used to estimate the reliability of the fault-affected system. The reliability is given with respect to properties of the system state space. We illustrate the process on a concrete example using the Uppaal model checker for validating the ideal system model and the fault modeling. Then the statistical version of the tool, UppaalSMC, is used to find reliability estimates.
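
    Once minimal cut sets and component failure probabilities are known, a first-order reliability estimate is straightforward. A sketch under the common rare-event approximation with independent failures; the cut sets and probabilities are invented, not from the chapter.

      def unreliability(cut_sets, p_fail):
          """cut_sets: lists of component names; p_fail: name -> prob."""
          q = 0.0
          for cs in cut_sets:
              prob = 1.0
              for comp in cs:
                  prob *= p_fail[comp]   # all components in the set fail
              q += prob                  # first-order (rare-event) sum
          return min(q, 1.0)

      p = {"sensor": 1e-3, "controller": 5e-4, "actuator": 2e-3}
      cuts = [["sensor", "actuator"], ["controller"]]
      print(f"system unreliability ~ {unreliability(cuts, p):.2e}")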

  20. Component Analysis of Bee Venom from June to September

    Directory of Open Access Journals (Sweden)

    Ki Rok Kwon

    2007-06-01

    Full Text Available Objectives: The aim of this study was to observe the variation of Bee Venom content with the collection period. Methods: Content analysis of Bee Venom was performed using an HPLC method with a melittin standard. Results: Analyzing the melittin content using HPLC, Bee Venom contained 478.97 mg/g in June, 493.89 mg/g in July, 468.18 mg/g in August and 482.15 mg/g in September, so the melittin content showed no significant change from June to September. Conclusion: From these results, we conclude cautiously that the collection time, within the period from June to September, is not an important factor for the quality control of Bee Venom.

  1. Blind component separation in wavelet space. Application to CMB analysis

    CERN Document Server

    Moudden, Y; Starck, J L; Delabrouille, J

    2004-01-01

    It is a recurrent issue in astronomical data analysis that observations are unevenly sampled or are incomplete maps with missing patches or intentionally masked parts. In addition, many astrophysical emissions are non-stationary processes over the sky. Hence spectral estimation using standard Fourier transforms is no longer reliable. Spectral matching ICA (SMICA) is a source separation method based on covariance matching in Fourier space which has been used successfully for the separation of diffuse astrophysical emissions in Cosmic Microwave Background observations. We show here that wavelets, which are standard tools for processing non-stationary data, can profitably be used to extend SMICA. Among possible applications, it is shown that gaps in the data are dealt with more conveniently, and with better results, using this extension, wSMICA, in place of the original SMICA. The performance of the two methods is compared on simulated CMB data sets, demonstrating the advantage of using wavelets.

  3. Homogenization of soil properties map by Principal Component Analysis

    Science.gov (United States)

    Valverde Arias, Omar; Garrido, Alberto; Villeta, Maria; Tarquis, Ana Maria

    2016-04-01

    It is widely known that extreme climatic phenomena are occurring with greater intensity and frequency. This puts more pressure on farming and makes it important for governments and institutions to implement agricultural risk management policies. One of the main strategies is risk transfer through agricultural insurance. Index-based agricultural insurance has gained importance in the last decade; it compares measured index values against a defined threshold that triggers loss payments. However, index-based insurance cannot rely on an isolated measurement; it must be integrated into a complete monitoring system that draws on many sources of information and tools, such as index influence areas, crop production risk maps, crop yields and claim statistics. Establishing an index influence area requires secondary information that delineates homogeneous climatic and soil areas, within which index measurements on the crops of interest will be similar, thereby reducing basis risk. An efficient method is needed to obtain homogeneous areas that does not depend only on expert criteria and that can be widely applied. This study therefore assesses two conventional agricultural and geographic methods based on expert criteria (control and climatic maps) and one classical multi-factorial statistical method (factorial map), all used to homogenize soil and climatic characteristics. The resulting maps were validated by agricultural and spatial analysis; the statistical method (factorial map) gave very good results, proving to be an efficient and accurate method that could be used for similar purposes.

  4. Crude oil price analysis and forecasting based on variational mode decomposition and independent component analysis

    Science.gov (United States)

    E, Jianwei; Bao, Yanling; Ye, Jimin

    2017-10-01

    As one of the most vital energy resources in the world, crude oil plays a significant role in the international economic market, and the fluctuation of its price has attracted academic and commercial attention. Many methods exist for forecasting the trend of the crude oil price, but traditional models have failed to predict it accurately. This paper therefore proposes a hybrid method that combines variational mode decomposition (VMD), independent component analysis (ICA) and the autoregressive integrated moving average (ARIMA), called VMD-ICA-ARIMA. The purpose of this study is to analyze the factors influencing the crude oil price and to predict its future values. The major steps are as follows. Firstly, the VMD model is applied to the original signal (the crude oil price) to decompose it adaptively into mode functions. Secondly, independent components are separated by ICA, and their effect on the crude oil price is analyzed. Finally, the crude oil price is forecast with the ARIMA model; the forecast trend shows that the crude oil price declines periodically. Compared with the benchmark ARIMA and EEMD-ICA-ARIMA, VMD-ICA-ARIMA forecasts the crude oil price more accurately.
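
    As a rough illustration of the ICA and ARIMA stages of such a pipeline (not the authors' implementation), the sketch below assumes the VMD modes have already been computed; `modes` is a hypothetical input array, and scikit-learn's FastICA and statsmodels' ARIMA stand in for the corresponding steps.

        import numpy as np
        from sklearn.decomposition import FastICA
        from statsmodels.tsa.arima.model import ARIMA

        # modes: (n_samples, n_modes) VMD mode functions; random-walk stand-ins here.
        rng = np.random.default_rng(0)
        modes = rng.standard_normal((500, 4)).cumsum(axis=0)

        # Separate statistically independent components from the modes.
        ica = FastICA(n_components=4, random_state=0)
        components = ica.fit_transform(modes)

        # Fit an ARIMA model to the (stand-in) price series and forecast ahead.
        price = modes.sum(axis=1)
        model = ARIMA(price, order=(1, 1, 1)).fit()
        forecast = model.forecast(steps=30)
        print(forecast[:5])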

  5. Progress Towards Improved Analysis of TES X-ray Data Using Principal Component Analysis

    Science.gov (United States)

    Busch, S. E.; Adams, J. S.; Bandler, S. R.; Chervenak, J. A.; Eckart, M. E.; Finkbeiner, F. M.; Fixsen, D. J.; Kelley, R. L.; Kilbourne, C. A.; Lee, S.-J.; Moseley, S. H.; Porst, J.-P.; Porter, F. S.; Sadleir, J. E.; Smith, S. J.

    2016-07-01

    The traditional method of applying a digital optimal filter to measure X-ray pulses from transition-edge sensor (TES) devices does not achieve the best energy resolution when the signals have a highly non-linear response to energy, or when the noise is non-stationary during the pulse. We present an implementation of a method to analyze X-ray data from TESs which is based upon principal component analysis (PCA). Our method separates the X-ray signal pulse into orthogonal components that have the largest variance. We typically recover pulse height, arrival time, differences in pulse shape, and the variation of pulse height with detector temperature. These components can then be combined to form a representation of pulse energy. An added value of this method is that, by reporting information on more descriptive parameters (as opposed to a single number representing energy), we generate a much more complete picture of the pulse received. Here we report on progress in developing this technique for future implementation on X-ray telescopes. We used a ^{55}Fe source to characterize Mo/Au TESs. On the same dataset, the PCA method recovers a spectral resolution that is better by a factor of two than that achievable with digital optimal filters.
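
    As an illustration of the core idea (a sketch, not the authors' pipeline), a PCA of a set of digitized pulse records takes only a few lines of numpy; `pulses` below is a hypothetical (n_pulses, n_samples) array of detector records.

        import numpy as np

        rng = np.random.default_rng(1)
        pulses = rng.standard_normal((200, 1024))   # stand-in pulse records

        mean_pulse = pulses.mean(axis=0)
        centered = pulses - mean_pulse              # remove the average shape

        # SVD of the centered records: rows of vt are the principal components.
        u, s, vt = np.linalg.svd(centered, full_matrices=False)

        # Project each pulse onto the leading components; these coefficients
        # (height, arrival-time shift, shape changes, ...) can then be
        # combined into an energy estimate.
        n_keep = 4
        coeffs = centered @ vt[:n_keep].T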

  6. Development of a database for prompt gamma-ray neutron activation analysis: Summary report of the third research coordination meeting

    Energy Technology Data Exchange (ETDEWEB)

    Lindstrom, Richard M.; Firestone, Richard B.; Pavi, ???

    2003-04-01

    The main discussions and conclusions from the Third Co-ordination Meeting on the Development of a Database for Prompt Gamma-ray Neutron Activation Analysis are summarized in this report. All results were reviewed in detail, and the final version of the TECDOC and the corresponding software were agreed upon and approved for preparation. Actions were formulated with the aim of completing the final version of the TECDOC and associated software by May 2003.

  7. Analysis of Large Flexible Body Deformation in Multibody Systems Using Absolute Coordinates

    Energy Technology Data Exchange (ETDEWEB)

    Dombrowski, Stefan von [Institute of Robotics and Mechatronics, German Aerospace Center (DLR) (Germany)], E-mail: stefan.von.dombrowski@dlr.de

    2002-11-15

    To consider large deformation problems in multibody system simulations a finite element approach, called the absolute nodal coordinate formulation, has been proposed. In this formulation absolute nodal coordinates and their material derivatives are applied to represent both deformation and rigid body motion. The choice of nodal variables allows a fully nonlinear representation of rigid body motion and can provide the exact rigid body inertia in the case of large rotations. The methodology is especially suited for, but not limited to, modeling of beams, cables and shells in multibody dynamics. This paper summarizes the absolute nodal coordinate formulation for a 3D Euler-Bernoulli beam model, in particular the definition of nodal variables, the corresponding generalized elastic and inertia forces, and the equations of motion. The element stiffness matrix is a nonlinear function of the nodal variables even in the case of linearized strain/displacement relations. Nonlinear strain/displacement relations can be calculated from the global displacements using quadrature formulae. Computational examples are given which demonstrate the capabilities of the applied methodology. Consequences of the choice of shape functions on the representation of internal forces are discussed. Linearized strain/displacement modeling is compared to the nonlinear approach and significant advantages of the latter, when using the absolute nodal coordinate formulation, are outlined.

  8. Feasibility Study and Cost Benefit Analysis of Conference Coordinating at the Naval Postgraduate School

    Science.gov (United States)

    2009-06-01

  9. Final report of coordination and cooperation with the European Union on embankment failure analysis

    Science.gov (United States)

    There has been an emphasis in the European Union (EU) community on the investigation of extreme flood processes and the uncertainties related to these processes. Over a 3-year period, the EU and the U.S. dam safety community (1) coordinated their efforts and collected information needed to integrate...

  10. Performance analysis of coordination strategies in two-tier Heterogeneous Networks

    KAUST Repository

    Boukhedimi, Ikram

    2016-08-11

    Large scale multi-tier Heterogeneous Networks (HetNets) are expected to ensure a consistent quality of service (QoS) in 5G systems. Such networks consist of a macro base station (BS) equipped with a large number of antennas and a dense overlay of small cells. The small cells may be deployed within the same coverage area as the macro-cell BS, thereby causing high levels of inter-cell interference. In this regard, coordinated beamforming techniques are considered a viable solution to counteract the arising interference. The goal of this work is to analyze the efficiency of coordinated beamforming techniques in mitigating both intra-cell and inter-cell interference. In particular, we consider the downlink of a time-division duplexing (TDD) massive multiple-input-multiple-output (MIMO) two-tier HetNet and analyze different beamforming schemes together with different degrees of coordination between the BSs. We exploit random matrix theory tools in order to provide, in explicit form, deterministic equivalents for the average achievable rates in the macro-cell and the micro-cells. Our theoretical derivations allow us to draw conclusions regarding the role played by coordination strategies in reducing inter-cell interference. These findings are finally validated by a selection of numerical results. © 2016 IEEE.

  11. Dissecting Online Control in Developmental Coordination Disorder: A Kinematic Analysis of Double-Step Reaching

    Science.gov (United States)

    Hyde, Christian; Wilson, Peter H.

    2011-01-01

    In a recent study, children with movement clumsiness (or Developmental Coordination Disorder--DCD) were shown to have difficulties making rapid online corrections when reaching, demonstrated by slower and less accurate movements to double-step targets (Hyde & Wilson, 2011). These results suggest that children with DCD have difficulty using…

  12. Investigation, Modeling, and Analysis of Integrated Metroplex Arrival and Departure Coordination Concepts

    Science.gov (United States)

    Clarke, John-Paul B.; Brooks, James; McClain, Evan; Paladhi, Anwesha Roy; Li, Leihong; Schleicher, David; Saraf, Aditya; Timar, Sebastian; Crisp, Don; Bertino, Jason; Laroza, Ryan; Cross, Carolyn

    2012-01-01

    This work involves the development of a concept that enhances integrated metroplex arrival and departure coordination, determines the temporal (the use of time separation for aircraft sharing the same airspace resources) and spatial (the use of different routes or vertical profiles for aircraft streams at any given time) impact of metroplex traffic coordination within the National Airspace System (NAS), and quantifies the benefits of the most desirable metroplex traffic coordination concept. The metroplex concepts researched and developed in this work apply broadly across the range of airspace and airport demand characteristics envisioned for NextGen metroplex operations. The objective of this work is to investigate, formulate, model, and analyze an operational concept that mitigates issues specific to the metroplex or that takes advantage of unique characteristics of metroplex airports to improve efficiency. The concept is an innovative approach that allows the NAS to mitigate metroplex interdependencies between airports, optimize metroplex arrival and departure coordination among airports, maximize metroplex airport throughput, minimize delay due to airport runway configuration changes, increase resiliency to disruptions, and allow the system to degrade gracefully under adverse conditions such as weather, traffic management initiatives, and delays in general.

  13. Analysis of breast cancer progression using principal component analysis and clustering

    Indian Academy of Sciences (India)

    G Alexe; G S Dalgin; S Ganesan; C DeLisi; G Bhanot

    2007-08-01

    We develop a new technique to analyse microarray data which uses a combination of principal components analysis and consensus ensemble k-clustering to find robust clusters and gene markers in the data. We apply our method to a public microarray breast cancer dataset which has expression levels of genes in normal samples as well as in three pathological stages of disease; namely, atypical ductal hyperplasia or ADH, ductal carcinoma in situ or DCIS and invasive ductal carcinoma or IDC. Our method averages over clustering techniques and data perturbation to find stable, robust clusters and gene markers. We identify the clusters and their pathways with distinct subtypes of breast cancer (Luminal, Basal and Her2+). We confirm that the cancer phenotype develops early (in the early hyperplasia or ADH stage) and find from our analysis that each subtype progresses from ADH to DCIS to IDC along its own specific pathway, as if each were a distinct disease.
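
    A toy version of the PCA-then-clustering skeleton of such a method is shown below using scikit-learn; the consensus averaging over perturbed data and the marker-gene selection are omitted, and `X` is a hypothetical expression matrix (samples by genes).

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(2)
        X = rng.standard_normal((60, 5000))   # hypothetical expression matrix

        # Reduce the gene dimension before clustering.
        pcs = PCA(n_components=10).fit_transform(X)

        # Cluster samples in PC space; repeating this over perturbed or
        # bootstrapped data and averaging the co-cluster assignments gives
        # the "consensus" clusters.
        labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pcs)
        print(np.bincount(labels))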

  14. Invariant Manifolds and Collective Coordinates

    CERN Document Server

    Papenbrock, T

    2001-01-01

    We introduce suitable coordinate systems for interacting many-body systems with invariant manifolds. These are Cartesian in coordinate and momentum space and chosen such that several components are identically zero for motion on the invariant manifold. In this sense these coordinates are collective. We make a connection to Zickendraht's collective coordinates and present certain configurations of few-body systems where rotations and vibrations decouple from single-particle motion. These configurations do not depend on details of the interaction.

  15. Recovery of a spectrum based on a compressive-sensing algorithm with weighted principal component analysis

    Science.gov (United States)

    Dafu, Shen; Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang

    2017-07-01

    The purpose of this study is to improve reconstruction precision and to reproduce the color of spectral image surfaces more faithfully. A new spectral reflectance reconstruction algorithm based on an iterative threshold combined with a weighted principal component space is presented in this paper, with the principal components weighted by visual features serving as the sparse basis. Different numbers of color cards are selected as the training samples, a multispectral image is the testing sample, and the color differences in the reconstructions are compared. The channel response value is obtained by a Mega Vision high-accuracy, multi-channel imaging system. The results show that spectral reconstruction based on the weighted principal component space is superior in performance to that based on the traditional principal component space. The color difference obtained using the compressive-sensing algorithm with weighted principal component analysis is therefore less than that obtained using the algorithm with traditional principal component analysis, and better reconstructed color consistency with human vision is achieved.

  16. Analysis Components of the Digital Consumer Behavior in Romania

    Directory of Open Access Journals (Sweden)

    Cristian Bogdan Onete

    2016-08-01

    Full Text Available This article investigates Romanian consumer behavior in the context of the evolution of online shopping. Given that online stores are a profitable business model in the area of electronic commerce, and because the relationship between the Romanian digital consumer and the decision to purchase products or services on the Internet has not been sufficiently explored, this study aims to identify the specific features of this new type of consumer and to examine the level of online shopping in Romania. A documentary study was therefore carried out using statistical data on the volume and number of online shopping transactions in Romania during 2010-2014, the types of products and services that Romanians search the Internet for, and the demographics of these people. In addition, to study online consumer behavior more closely and to interpret the detailed secondary data, exploratory research was performed using a structured questionnaire with five closed questions on: the gender category of the respondents (male or female); the decision to purchase products or services in the virtual environment in the past year; the source of the goods or services purchased (Romanian or foreign sites); the factors that determined the consumers to buy products from foreign sites; and the categories of products purchased through online transactions from foreign merchants. The questionnaire was distributed electronically to users of the Facebook social network, and the data collected were processed directly in the official Facebook app for creating and interpreting survey responses. The results of this research, correlated with the official data, reveal the following characteristics of the digital consumer in Romania: an atypical European consumer, more interested in online purchases from abroad, and influenced by the quality and price of the purchase. This paper involved a careful analysis of the online acquisitions phenomenon and also

  17. Study on the dynamic response analysis for evaluating the effectiveness of base isolation for nuclear components

    Energy Technology Data Exchange (ETDEWEB)

    Mori, Kazunari; Tsutsumi, Hideaki; Yamada, Hiroyuki; Ebisawa, Katsumi; Shibata, Katsuyuki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-07-01

    Introduction of the base isolation technique into the seismic design of nuclear power plant components, as well as buildings, has been expected to be one of the effective countermeasures to reduce the seismic force applied to components. A research program on the base isolation of nuclear components has been carried out at the Japan Atomic Energy Research Institute (JAERI) since 1991. A methodology and a computer code (EBISA: Equipment Base Isolation System Analysis) for evaluating the failure frequency of nuclear components with base isolation were developed. In addition, a test program related to this development, aimed at improving the failure frequency analysis models in the code, has been conducted since 1996 to investigate the dynamic behavior and to verify the effectiveness of component base isolation systems. In the failure frequency analysis, a methodology for evaluating in detail the actual dynamic responses of base-isolated nuclear components has been examined. In this methodology, the actual responses are computed by considering the scatter in the mechanical properties of rock masses, the reactor building and components under many earthquake motions with various frequency characteristics. The failure frequency of a component is computed as the conditional probability that the actual response exceeds the capacity of the component. It is very important in this methodology to investigate the dynamic response analysis methods for the ground, reactor building and nuclear components, as well as the scattering factors in the dynamic analysis. This report describes the accuracy of the dynamic response analysis method and analysis models, and the influence of scatter in the properties of rock masses and the reactor building on the dynamic response. (author)

  18. Principal Component Analysis in the Spectral Analysis of the Dynamic Laser Speckle Patterns

    Science.gov (United States)

    Ribeiro, K. M.; Braga, R. A., Jr.; Horgan, G. W.; Ferreira, D. D.; Safadi, T.

    2014-02-01

    Dynamic laser speckle is a phenomenon observed in the optical patterns formed when a changing surface is illuminated with coherent light; the dynamic change of the speckle patterns caused by biological material is known as biospeckle. Usually, these patterns of optical interference evolving in time are analyzed by graphical or numerical methods. Analysis in the frequency domain has also been an option, but it involves large computational requirements, which demands new approaches to filter the images in time. Principal component analysis (PCA) works by statistically decorrelating the data and can be used as a data filter. In this context, the present work evaluated the PCA technique for filtering biospeckle image data in time, aiming to reduce computing time and improve the robustness of the filtering. Sixty-four biospeckle images observed in time on a maize seed were used. The images were arranged in a data matrix and statistically decorrelated by the PCA technique, and the reconstructed signals were analyzed using the routine graphical and numerical methods for biospeckle analysis. The results showed the potential of the PCA tool for filtering dynamic laser speckle data, with the definition of markers of principal components related to the biological phenomena and with the advantage of fast computational processing.
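
    A minimal numpy sketch of this kind of PCA filtering, assuming a hypothetical stack of 64 speckle frames: the frame sequence is decorrelated in time, only the leading components are kept, and the stack is reconstructed from them.

        import numpy as np

        rng = np.random.default_rng(3)
        frames = rng.standard_normal((64, 128, 128))   # stand-in speckle stack

        # Arrange the data as (n_frames, n_pixels) and center it in time.
        X = frames.reshape(64, -1)
        mean = X.mean(axis=0)
        Xc = X - mean

        # PCA via SVD; keep the first k temporal components as the "filter".
        u, s, vt = np.linalg.svd(Xc, full_matrices=False)
        k = 3
        filtered = (u[:, :k] * s[:k]) @ vt[:k] + mean
        filtered_frames = filtered.reshape(frames.shape)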

  19. Continuous Fourier transform method and apparatus. [for the analysis of simultaneous analog signal components

    Science.gov (United States)

    Munoz, R. M. (Inventor)

    1974-01-01

    An input analog signal to be frequency analyzed is separated into N simultaneous analog signal components, each identical to the original but delayed relative to it by a successively larger time delay. The separated and delayed analog components are combined in a suitable number of adders and attenuators, in accordance with at least one component product of the continuous Fourier transform and analog signal matrices, to separate the analog input signal into at least one of its continuous analog frequency components of bandwidth 1/N times the bandwidth of the original input signal. The original analog input signal can be reconstituted by combining the separate analog frequency components in accordance with the component products of the continuous Fourier transform and analog frequency component matrices. The continuous Fourier transformation is useful for spectrum analysis, filtering, transfer function synthesis, and communications.
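
    In discrete form, combining delayed copies through adders and attenuators amounts to multiplying the vector of delayed samples by the rows of a Fourier matrix. The numpy sketch below illustrates that matrix picture only; it is a digital analogue of the principle, not the analog apparatus itself.

        import numpy as np

        N = 8
        n = np.arange(N)
        F = np.exp(-2j * np.pi * np.outer(n, n) / N) / N   # analysis matrix

        t = np.linspace(0.0, 1.0, N, endpoint=False)
        x = np.sin(2 * np.pi * 2 * t)                      # input samples

        # Each frequency component is one weighted sum (adders + attenuators)
        # of the N delayed components, i.e. one row of F applied to x.
        components = F @ x

        # Recombining with the inverse matrix reconstitutes the original signal.
        x_rec = np.linalg.inv(F) @ components
        assert np.allclose(x, x_rec.real, atol=1e-9)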

  20. Analysis of Potential Energy Corridors Proposed by the Western Electricity Coordinating Council

    Energy Technology Data Exchange (ETDEWEB)

    Kuiper, James A.; Cantwell, Brian J.; Hlava, Kevin J.; Moore, H Robert; Orr, Andrew B.; Zvolanek, Emily A.

    2014-02-24

    This report, Analysis of Potential Energy Corridors Proposed by the Western Electricity Coordinating Council (WECC), was prepared by the Environmental Science Division of Argonne National Laboratory (Argonne). The intent of WECC’s work was to identify planning-level energy corridors that the Department of Energy (DOE) and its affiliates could study in greater detail. Argonne was tasked by DOE to analyze the WECC Proposed Energy Corridors in five topic areas for use in reviewing and revising existing corridors, as well as designating additional energy corridors in the 11 western states. In compliance with Section 368 of the Energy Policy Act of 2005 (EPAct), the Secretaries of Energy, Agriculture, and the Interior (Secretaries) published a Programmatic Environmental Impact Statement in 2008 to address the proposed designation of energy transport corridors on federal lands in the 11 western states. Subsequently, Records of Decision designating the corridors were issued in 2009 by the Bureau of Land Management (BLM) and the U.S. Forest Service (USFS). The 2012 settlement of a lawsuit brought by The Wilderness Society and others against the United States, which identified environmental concerns for many of the corridors, requires, among other things, periodic reviews of the corridors to assess the need for revisions, deletions, or additions. A 2013 Presidential Memorandum requires the Secretaries to undertake a continuing effort to identify and designate energy corridors. The WECC Proposed Energy Corridors and their analyses in this report provide key information for reviewing and revising existing corridors, as well as designating additional energy corridors in the 11 western states. Load centers and generation hubs identified in the WECC analysis, particularly as they reflect renewable energy development, would be useful in reviewing and potentially updating the designated Section 368 corridor network. Argonne used Geographic Information System (GIS) technology to

  1. Simultaneous Spectrophotometric Determination of Four Components including Acetaminophen by Taget Factor Analysis

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    UV spectrophotometric target factor analysis (TFA) was used for the simultaneous determination of four components (acetaminophen, guaifenesin, caffeine and chlorphenamine maleate) in cough syrup. The TFA computer program was written in VC++. The difficulty caused by the overlapping absorption spectra of the four compounds was overcome by this procedure. The experimental results show that the average recovery of each component lies in the range from 98.9% to 106.8%, and each component is determined satisfactorily without any pre-separation.
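
    The core multicomponent step, resolving an overlapped spectrum as a linear combination of component spectra, can be sketched as a least-squares fit, since Beer-Lambert mixing is linear; TFA itself adds the factor-analysis machinery, which is omitted here, and the spectra and concentrations below are made up.

        import numpy as np

        rng = np.random.default_rng(4)
        wavelengths = np.linspace(220, 320, 101)

        # Hypothetical pure-component spectra (Gaussian bands) for 4 analytes.
        centers = [240, 255, 270, 290]
        S = np.stack([np.exp(-((wavelengths - c) / 10.0) ** 2) for c in centers],
                     axis=1)

        true_conc = np.array([1.0, 0.5, 2.0, 1.5])
        mixture = S @ true_conc + 0.01 * rng.standard_normal(len(wavelengths))

        # Beer-Lambert mixing is linear, so concentrations follow from
        # ordinary least squares on the overlapped mixture spectrum.
        est_conc, *_ = np.linalg.lstsq(S, mixture, rcond=None)
        print(np.round(est_conc, 2))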

  2. A COMPREHENSIVE ANALYSIS OF THE MARKETING MIX COMPONENTS IMPACT ON THE QUALITY OF HEALTHCARE ACTIVITY

    OpenAIRE

    Ana Maria Bobeica

    2013-01-01

    The paper is based on a quantitative analysis of Romania’s healthcare market carried out in order to quantify the impact of key elements of the marketing mix, such as price, placement/location, human resources and product, on the quality of healthcare services, and to establish the connections between these components from the healthcare managers’ point of view.

  3. Harmonic Stability Analysis of Inverter-Fed Power Systems Using Component Connection Method

    DEFF Research Database (Denmark)

    Wang, Yanbo; Wang, Xiongfei; Blaabjerg, Frede

    2016-01-01

    This paper presents a Component Connection Method (CCM)-based harmonic stability analysis for ac power-electronic-fed power systems. In the approach, the system is partitioned as individual components, including the controllers of DG units, LC filters, network impedances, and power loads. They ar...

  4. Analysis of Coordination between the Public Service in Rural Areas and Socio-economic Development——A Case Study of Sichuan Province

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    Taking Sichuan Province as an example, and using an overall evaluation function of the equalization level of rural public services and the level of rural socio-economic development, we analyze in depth the coordination between public services in Sichuan's rural areas and socio-economic development from 2003 to 2008. The results show that this coordination is not high and that the imbalance between rural public service construction and socio-economic development is very prominent: the equalization of public services in rural Sichuan lagged behind socio-economic development from 2003 to 2008. The coordination between the rural public service equalization system and the socio-economic development system weakens continuously; the coordination between infrastructure and socio-economic development increases slowly; the coordination between education and socio-economic development declines sharply; the coordination between public culture and socio-economic development tends to decrease; the coordination between ecological environment construction and socio-economic development decreases continuously and with great amplitude; the coordination between public health and socio-economic development decreases continuously; the coordination between science and technology and socio-economic development lingers at a low level; and the coordination between social security and employment and socio-economic development increases with fluctuation, but with small amplitude.

  5. PRINCIPAL COMPONENT ANALYSIS AND CLUSTER ANALYSIS IN MULTIVARIATE ASSESSMENT OF WATER QUALITY

    Directory of Open Access Journals (Sweden)

    Elzbieta Radzka

    2017-03-01

    Full Text Available This paper deals with the use of multivariate methods in drinking water analysis. During a five-year project, from 2008 to 2012, selected chemical parameters in 11 water supply networks of the Siedlce County were studied. Throughout that period drinking water was of satisfactory quality, with only iron and manganese ions exceeding the limits (21 times and 12 times, respectively. In accordance with the results of cluster analysis, all water networks were put into three groups of different water quality. A high concentration of chlorides, sulphates, and manganese and a low concentration of copper and sodium was found in the water of Group 1 supply networks. The water in Group 2 had a high concentration of copper and sodium, and a low concentration of iron and sulphates. The water from Group 3 had a low concentration of chlorides and manganese, but a high concentration of fluorides. Using principal component analysis and cluster analysis, multivariate correlation between the studied parameters was determined, helping to put water supply networks into groups according to similar water quality.

  6. Independent component analysis-based source-level hyperlink analysis for two-person neuroscience studies

    Science.gov (United States)

    Zhao, Yang; Dai, Rui-Na; Xiao, Xiang; Zhang, Zong; Duan, Lian; Li, Zheng; Zhu, Chao-Zhe

    2017-02-01

    Two-person neuroscience, a perspective in understanding human social cognition and interaction, involves designing immersive social interaction experiments as well as simultaneously recording brain activity of two or more subjects, a process termed "hyperscanning." Using newly developed imaging techniques, the interbrain connectivity or hyperlink of various types of social interaction has been revealed. Functional near-infrared spectroscopy (fNIRS)-hyperscanning provides a more naturalistic environment for experimental paradigms of social interaction and has recently drawn much attention. However, most fNIRS-hyperscanning studies have computed hyperlinks using sensor data directly while ignoring the fact that the sensor-level signals contain confounding noises, which may lead to a loss of sensitivity and specificity in hyperlink analysis. In this study, on the basis of independent component analysis (ICA), a source-level analysis framework is proposed to investigate the hyperlinks in a fNIRS two-person neuroscience study. The performance of five widely used ICA algorithms in extracting sources of interaction was compared in simulative datasets, and increased sensitivity and specificity of hyperlink analysis by our proposed method were demonstrated in both simulative and real two-person experiments.

  7. Multiobjective Optimization of ELID Grinding Process Using Grey Relational Analysis Coupled with Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    S. Prabhu

    2014-06-01

    Full Text Available A carbon nanotube (CNT) mixed grinding wheel has been used in the electrolytic in-process dressing (ELID) grinding process to analyze the surface characteristics of AISI D2 tool steel. The CNT grinding wheel has excellent thermal conductivity and good mechanical properties, which are exploited to improve the surface finish of the workpiece. Multiobjective optimization based on grey relational analysis coupled with principal component analysis has been used to optimize the process parameters of the ELID grinding process. Based on the Taguchi design of experiments, an L9 orthogonal array was chosen for the experiments. The confirmation experiment verifies that the proposed grey-based Taguchi method is able to find the optimal process parameters for the multiple quality characteristics of surface roughness and metal removal rate. Analysis of variance (ANOVA) has been used to verify and validate the model. An empirical model for the prediction of the output parameters has been developed using regression analysis, and the results were compared with and without the CNT grinding wheel in the ELID grinding process.
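
    A compact numpy sketch of the grey relational step, under common textbook conventions: normalize each response, compute the deviation from the ideal sequence, then form the grey relational coefficient with distinguishing coefficient 0.5. The response values below are invented, and the PCA-derived weighting of responses is replaced by equal weights.

        import numpy as np

        # Rows = experimental runs (e.g., an L9 array); column 0 = surface
        # roughness (smaller is better), column 1 = MRR (larger is better).
        y = np.array([[0.80, 12.0], [0.60, 10.5], [0.90, 14.0],
                      [0.70, 13.0], [0.50,  9.0], [0.85, 15.0],
                      [0.65, 11.0], [0.75, 12.5], [0.55, 10.0]])

        norm = np.empty_like(y)
        norm[:, 0] = (y[:, 0].max() - y[:, 0]) / (y[:, 0].max() - y[:, 0].min())
        norm[:, 1] = (y[:, 1] - y[:, 1].min()) / (y[:, 1].max() - y[:, 1].min())

        delta = 1.0 - norm                 # deviation from the ideal sequence
        zeta = 0.5                         # distinguishing coefficient
        grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

        grade = grc.mean(axis=1)           # equal weights; PCA would supply them
        print("best run:", int(np.argmax(grade)) + 1)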

  8. Northeast Puerto Rico and Culebra Island Principle Component Analysis - NOAA TIFF Image

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This GeoTiff is a representation of seafloor topography in Northeast Puerto Rico derived from a bathymetry model with a principal component analysis (PCA). The area...

  9. Component Analysis of Farming Systems with Relevance to Finger ...

    African Journals Online (AJOL)

    is to preface component analysis results of a study of finger millet farming systems of South Western Tanzania. ... farming systems is a complex, multidimensional concept and its ... ICRISAT (1974): In: Krantz, B.A. and Associates: Cropping ...

  10. Photogrammetric determination of coordinates and deformations, and analysis of the accuracy of this method

    Energy Technology Data Exchange (ETDEWEB)

    Korablev, D.P.; Fomichev, L.V.; Trunin, A.P.

    1979-01-01

    The photogrammetric method for determining coordinates and deformation, developed at the VNIMI, is based on the analytic determination of the coordinates of points of the sample from measurements of a single stereogram or photograph. The measurements are closely controlled and the calculations are done on a computer. In addition to calculating the point coordinates and various deformation values (vertical and horizontal displacements, slopes, deflections, etc.), the accuracy of the results is evaluated: the standard deviation per unit mass and the rms errors of the adjusted values of photo orientation on real photographs and analytical models. The following conclusions and assumptions were obtained on the basis of these studies: 1. When finding the deformation of a flat object, if the points of the last deformation show practically no displacement in the direction normal to the flat surface, then the photos should be taken separately and processed by the analytical transformation method or by the parallax method, measured from stereograms with a ''time basis''. 2. When using convergent photography, there is a significant increase in the accuracy of the coordinate determination in the direction perpendicular to the photographic reference line, while there is almost no change in accuracy along the two other axes when compared to normal photography. Optimal symmetric-convergent exposure has a convergence angle of 60 to 120 degrees and a 1.5 to 2 ratio of the photographic reference line to the average distance to the object (along a normal to the reference). The stereogram of the symmetric-convergent photography for 100% coverage of the photographs encompasses an area two to three times larger than the usual stereogram. 3. The distribution of the reference points should be considered optimal when they bound the working area of the photograph (stereogram). When photographing volumes, they bound the object in the plan and side views.

  11. Combined approach based on principal component analysis and canonical discriminant analysis for investigating hyperspectral plant response

    Directory of Open Access Journals (Sweden)

    Anna Maria Stellacci

    2012-07-01

    Full Text Available Hyperspectral (HS) data represent an extremely powerful means for rapidly detecting crop stress and thus aiding the rational management of natural resources in agriculture. However, the large volume of data poses a challenge for data processing and for extracting crucial information. Multivariate statistical techniques can play a key role in the analysis of HS data, as they may allow one both to eliminate redundant information and to identify synthetic indices which maximize differences among levels of stress. In this paper we propose an integrated approach, based on the combined use of Principal Component Analysis (PCA) and Canonical Discriminant Analysis (CDA), to investigate HS plant response and discriminate plant status. The approach was preliminarily evaluated on a data set collected on durum wheat plants grown under different nitrogen (N) stress levels. Hyperspectral measurements were performed at anthesis with a high-resolution field spectroradiometer, an ASD FieldSpec HandHeld, covering the 325-1075 nm region. Reflectance data were first restricted to the interval 510-1000 nm and then divided into five bands of the electromagnetic spectrum [green: 510-580 nm; yellow: 581-630 nm; red: 631-690 nm; red-edge: 705-770 nm; near-infrared (NIR): 771-1000 nm]. PCA was applied to each spectral interval. CDA was performed on the extracted components to identify the factors maximizing the differences among plants fertilised with increasing N rates. Within the green, yellow and red intervals only the first principal component (PC) had an eigenvalue greater than 1 and explained more than 95% of total variance; within the red-edge and NIR ranges, the first two PCs had eigenvalues higher than 1. Two canonical variables explained cumulatively more than 81% of total variance, and the first was able to discriminate wheat plants fertilised differently, as confirmed also by the significant correlation with aboveground biomass and grain yield parameters. The combined
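
    A rough scikit-learn analogue of this two-step scheme is sketched below: PCA per band interval with a Kaiser-style eigenvalue cut, then a discriminant analysis on the retained components. LinearDiscriminantAnalysis stands in for CDA, and the reflectance data and class labels are hypothetical.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(5)
        wavelengths = np.arange(510, 1001)               # 510-1000 nm grid
        X = rng.standard_normal((90, wavelengths.size))  # stand-in reflectance
        y = np.repeat([0, 1, 2], 30)                     # three N levels

        bands = [(510, 580), (581, 630), (631, 690), (705, 770), (771, 1000)]
        features = []
        for lo, hi in bands:
            cols = (wavelengths >= lo) & (wavelengths <= hi)
            Xb = (X[:, cols] - X[:, cols].mean(0)) / X[:, cols].std(0)
            pca = PCA().fit(Xb)
            k = max(1, int(np.sum(pca.explained_variance_ > 1)))  # Kaiser rule
            features.append(pca.transform(Xb)[:, :k])

        Z = np.hstack(features)
        cda = LinearDiscriminantAnalysis(n_components=2).fit(Z, y)
        canonical_variables = cda.transform(Z)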

  12. Time-domain ultra-wideband radar, sensor and components theory, analysis and design

    CERN Document Server

    Nguyen, Cam

    2014-01-01

    This book presents the theory, analysis, and design of ultra-wideband (UWB) radar and sensor systems (in short, UWB systems) and their components. UWB systems find numerous applications in the military, security, civilian, commercial and medical fields. This book addresses five main topics of UWB systems: system analysis, transmitter design, receiver design, antenna design, and system integration and test. The development of a practical UWB system and its components using microwave integrated circuits, as well as various measurements, is included in detail to demonstrate the theory, analysis and design techniques. Essentially, this book will enable readers to design their own UWB systems and components. In the System Analysis chapter, the UWB principle of operation as well as the power budget analysis and range resolution analysis are presented. In the UWB Transmitter Design chapter, the design, fabrication and measurement of impulse and monocycle pulse generators are covered. The UWB Receiver Design cha...

  13. An Introduction to Independent Component Analysis: InfoMax and FastICA algorithms

    Directory of Open Access Journals (Sweden)

    Dominique Gosselin

    2010-03-01

    Full Text Available This paper presents an introduction to independent component analysis (ICA). Unlike principal component analysis, which is based on the assumptions of uncorrelatedness and normality, ICA is rooted in the assumption of statistical independence. Foundations and basic knowledge necessary to understand the technique are provided hereafter. Also included is a short tutorial illustrating the implementation of two ICA algorithms (FastICA and InfoMax) with the use of the Mathematica software.
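
    For readers outside Mathematica, the same blind-separation idea can be illustrated with scikit-learn's FastICA on a toy two-source mixture; this is a sketch using invented data, not the tutorial's own code.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(6)
        t = np.linspace(0, 8, 2000)
        s1 = np.sin(2 * t)                       # source 1: sinusoid
        s2 = np.sign(np.sin(3 * t))              # source 2: square wave
        S = np.c_[s1, s2] + 0.05 * rng.standard_normal((2000, 2))

        A = np.array([[1.0, 0.5], [0.5, 1.0]])   # mixing matrix
        X = S @ A.T                              # observed mixtures

        ica = FastICA(n_components=2, random_state=0)
        S_est = ica.fit_transform(X)             # sources up to scale/order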

  14. Small Target Extraction Based on Independent Component Analysis for Hyperspectral Imagery

    Institute of Scientific and Technical Information of China (English)

    LU Wei; YU Xuchu

    2006-01-01

    A small-target detection approach based on independent component analysis for hyperspectral data is put forward. In this algorithm, fast independent component analysis (FICA) is first used to collect target information hidden in the high-dimensional data and project it into a low-dimensional space. Secondly, the feature images are selected by kurtosis. Finally, small targets are extracted with histogram-based image segmentation labeled by skewness.

  15. Exemplary flexibility in the planning, coordination and execution at a structural component of a waste incinerator plant; Beispielhafte Flexibilitaet bei der Planung, Koordination und Ausfuehrung am Bauteil einer Abfallverbrennungsanlage

    Energy Technology Data Exchange (ETDEWEB)

    Athens, Karl-Juergen [GWI Bauunternehmung GmbH, Duesseldorf (Germany). Ingenieur- und Kraftwerksbau; Gebhardt, Heinz-Juergen [Schluchseewerk AG, Laufenburg (Baden) (Germany); Maier, Gunnar [Poeyry Deutschland GmbH, Hamburg (Germany)

    2013-03-01

    After deciding to build a waste incinerator plant, the building owner is faced with the question of how to award the contracts. When awarding contracts in main lots or functionally, it is very reasonable to involve the potential bidders in the planning and permitting process from the start, because the approving authority has a significant influence on the realization. For the plants in Leudelange (Luxembourg), Delfzijl (The Netherlands) and Eisenhuettenstadt (Federal Republic of Germany), the construction partners were selected at a very early stage. The authors of this contribution report on exemplary flexibility in the planning, coordination and execution of a structural component of a waste incinerator plant.

  16. Analyzing the pupil response due to increased cognitive demand: an independent component analysis study.

    Science.gov (United States)

    Jainta, S; Baccino, T

    2010-07-01

    Pupillometry is used to indicate the relative extent of processing demands within or between tasks; however, this analysis is complicated by the fact that the pupil also responds to low-level aspects of visual input. First, we attempted to identify "principal" components that contribute to the pupil response by computing a principal component analysis (PCA), and second, to reveal "hidden" sources within the pupil response by calculating an independent component analysis (ICA). Pupil response data were collected while subjects read, added or multiplied numbers. A set of three factors/components was identified as resembling the individual pupil responses, but only one ICA component changed in accordance with the cognitive demand. This component alone accounted for about 50% of the variance of the pupil response during the most demanding task, i.e. the multiplication task. The highest impact of this factor was observed from 2000 to 300 ms after task onset. Even though we did not attempt to answer the question of the functional background of components 1 and 3, we speculated that component 2 might reflect the effort a subject engages to perform a task of greater difficulty.

  17. Using the Cluster Analysis and the Principal Component Analysis in Evaluating the Quality of a Destination

    Directory of Open Access Journals (Sweden)

    Ida Vajčnerová

    2016-01-01

    Full Text Available The objective of the paper is to explore possibilities of evaluating the quality of a tourist destination by means of principal components analysis (PCA) and cluster analysis. In the paper both types of analysis are compared on the basis of the results they provide. The aim is to identify the advantages and limits of both methods and to provide methodological suggestions for their further use in tourism research. The analysis is based on primary data from a survey of customers’ satisfaction with the key quality factors of a destination. The output of the two statistical methods is the creation of groups or clusters of quality factors that are similar in terms of respondents’ evaluations, in order to facilitate the evaluation of the quality of tourist destinations. The results show that both tested methods can be used. The paper was elaborated in the framework of a wider research project aimed at developing a methodology for the quality evaluation of tourist destinations, especially in the context of customer satisfaction and loyalty.

  18. Pattern recognition on X-ray fluorescence records from Copenhagen lake sediments using principal component analysis

    DEFF Research Database (Denmark)

    Schreiber, Norman; Garcia, Emanuel; Kroon, Aart

    2014-01-01

    Principal Component Analysis helped to trace geochemical patterns and temporal trends in lake sedimentation. The PCA models explained more than 80% of the original variation in the datasets using only 2 or 3 principal components. The first principal component (PC1) was mostly associated with geogenic elements (Si, K, Fe, Rb) and characterized the content of minerogenic material in the sediment. In the case of both cores, PC2 was a good descriptor emphasized as the contamination component. It showed strong linkages with heavy metals (Cu, Zn, Pb), disclosing changing heavy-metal contamination trends across different depths. The sediments featured a temporal association with contaminant dominance: lead contamination was superseded by zinc within the compound pattern, which was linked to changing contamination sources over time. Principal Component Analysis was useful to visualize and interpret geochemical XRF data...

  19. Performance analysis of morphological component analysis (MCA) method for mammograms using some statistical features

    Science.gov (United States)

    Gardezi, Syed Jamal Safdar; Faye, Ibrahima; Kamel, Nidal; Eltoukhy, Mohamed Meselhy; Hussain, Muhammad

    2014-10-01

    Early detection of breast cancer helps reduce mortality rates. Mammography is a very useful tool in breast cancer detection, but it is difficult to separate the different morphological features in mammographic images. In this study, the Morphological Component Analysis (MCA) method is used to extract different morphological aspects of mammographic images while effectively preserving the morphological characteristics of regions. MCA decomposes the mammogram into a piecewise-smooth part and a texture part using the Local Discrete Cosine Transform (LDCT) and the Curvelet Transform via wrapping (CURVwrap). A simple comparison of performance has been made using some statistical features for the original image versus the piecewise-smooth part obtained from the MCA decomposition. The results show that MCA suppresses structural noise and blood vessels in the mammogram and enhances the performance of mass detection.

  20. Analysis of whisker-toughened ceramic components - A design engineer's viewpoint

    Science.gov (United States)

    Duffy, Stephen F.; Manderscheid, Jane M.; Palko, Joseph L.

    1989-01-01

    The analysis of components fabricated from whisker-toughened ceramic matrix composites requires a departure from the 'factor-of-safety' design philosophy prevalent in the design of metallic structural components, which are more tolerant of flaws. A public-domain computer algorithm has been developed which, in conjunction with a general-purpose FEM program, can predict the fast-fracture reliability of a structural component under multiaxial loading conditions. The present version of the algorithm, designated 'Toughened Ceramics Analysis and Reliability Evaluation of Structures', accounts for the material symmetry imposed by whisker orientation; the processes of crack deflection and crack pinning are also addressed.

  1. Mixture gas component concentration analysis based on support vector machine and infrared spectrum

    Institute of Scientific and Technical Information of China (English)

    Peng Bai; Junhua Liu

    2006-01-01

    A novel method for the quantitative analysis of multi-component mixture gas concentrations based on the support vector machine (SVM) and spectroscopy is proposed. Through the transformation of the kernel function, the seriously overlapped and nonlinear spectrum data are mapped into a high-dimensional space, while the high-dimensional data can still be processed in the original space. Factors such as the kernel function, the wavelength range, and the penalty coefficient are discussed. The method is applied to the quantitative analysis of natural gas component concentrations, and the maximal deviation of the component concentrations is 2.28%.
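
    A minimal sketch of this kind of spectrum-to-concentration regression using scikit-learn's kernel support vector regression; the RBF kernel plays the role of the implicit high-dimensional mapping, and the spectra and concentrations below are invented.

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(7)
        n_samples, n_channels = 80, 200
        spectra = rng.standard_normal((n_samples, n_channels))  # absorbances
        conc = spectra[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(n_samples)

        # Kernel SVR: the kernel implicitly maps spectra to a high-dimensional
        # space while all computation stays in the original space.
        model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(spectra, conc)
        pred = model.predict(spectra[:5])
        print(np.round(pred - conc[:5], 3))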

  2. Fourier transform infrared spectroscopy quantitative analysis of SF6 partial discharge decomposition components.

    Science.gov (United States)

    Zhang, Xiaoxing; Liu, Heng; Ren, Jiangbo; Li, Jian; Li, Xin

    2015-02-05

    The SF6 gas inside gas-insulated switchgear (GIS) produces specific decomposition components under partial discharge (PD). By detecting these characteristic decomposition components, information such as the type and level of the internal insulation deterioration of the GIS can be obtained effectively, and the status of the internal insulation can be evaluated. SF6 was selected as the background gas for Fourier transform infrared spectroscopy (FTIR) detection in this study, and SOF2, SO2F2, SO2, and CO were selected as the characteristic decomposition components for analysis. The standard infrared absorption spectra of the four characteristic components were measured, the optimal absorption peaks were recorded, and the corresponding absorption coefficients were calculated. Quantitative detection experiments on the four characteristic components were conducted. The variation trends of the volume fractions of the four characteristic components over PD time were analyzed, and, for five different PD quantities, the quantitative relationships among gas production rate, PD time, and PD quantity were studied.
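
    Quantification from an optimal absorption peak typically rests on the Beer-Lambert law, A = epsilon * c * L. The tiny sketch below turns a measured peak absorbance into a volume fraction under that assumption; the coefficient, path length, and absorbance are made-up values for illustration only.

        # Beer-Lambert: A = epsilon * c * L  =>  c = A / (epsilon * L)
        epsilon = 0.85   # absorption coefficient at the peak (hypothetical units)
        L = 10.0         # optical path length of the gas cell in cm (hypothetical)
        A = 0.042        # measured peak absorbance (hypothetical)

        c = A / (epsilon * L)            # analyte concentration in the cell
        volume_fraction_ppm = c * 1e6    # expressed as a ppm-style fraction
        print(f"{volume_fraction_ppm:.1f} ppm")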

  3. Developmental Coordination Disorder, Sex, and Activity Deficit over Time: A Longitudinal Analysis of Participation Trajectories in Children with and without Coordination Difficulties

    Science.gov (United States)

    Cairney, John; Hay, John A.; Veldhuizen, Scott; Missiuna, Cheryl; Faught, Brent E.

    2010-01-01

    Aim: Children with developmental coordination disorder (DCD) are known to participate in active play less than typically developing children. However, it is not known whether the activity deficit between children with and without DCD widens or diminishes over time. Method: Data were obtained from a large, prospective cohort study of children…

  4. Two-component signal transduction pathways regulating growth and cell cycle progression in a bacterium: a system-level analysis.

    Directory of Open Access Journals (Sweden)

    Jeffrey M Skerker

    2005-10-01

    Full Text Available Two-component signal transduction systems, comprised of histidine kinases and their response regulator substrates, are the predominant means by which bacteria sense and respond to extracellular signals. These systems allow cells to adapt to prevailing conditions by modifying cellular physiology, including initiating programs of gene expression, catalyzing reactions, or modifying protein-protein interactions. These signaling pathways have also been demonstrated to play a role in coordinating bacterial cell cycle progression and development. Here we report a system-level investigation of two-component pathways in the model organism Caulobacter crescentus. First, by a comprehensive deletion analysis we show that at least 39 of the 106 two-component genes are required for cell cycle progression, growth, or morphogenesis. These include nine genes essential for growth or viability of the organism. We then use a systematic biochemical approach, called phosphotransfer profiling, to map the connectivity of histidine kinases and response regulators. Combining these genetic and biochemical approaches, we identify a new, highly conserved essential signaling pathway from the histidine kinase CenK to the response regulator CenR, which plays a critical role in controlling cell envelope biogenesis and structure. Depletion of either cenK or cenR leads to an unusual, severe blebbing of cell envelope material, whereas constitutive activation of the pathway compromises cell envelope integrity, resulting in cell lysis and death. We propose that the CenK-CenR pathway may be a suitable target for new antibiotic development, given previous successes in targeting the bacterial cell wall. Finally, the ability of our in vitro phosphotransfer profiling method to identify signaling pathways that operate in vivo takes advantage of an observation that histidine kinases are endowed with a global kinetic preference for their cognate response regulators. We propose that this

  5. Two-Component Signal Transduction Pathways Regulating Growth and Cell Cycle Progression in a Bacterium: A System-Level Analysis

    Science.gov (United States)

    Skerker, Jeffrey M; Prasol, Melanie S; Perchuk, Barrett S; Biondi, Emanuele G

    2005-01-01

    Two-component signal transduction systems, comprised of histidine kinases and their response regulator substrates, are the predominant means by which bacteria sense and respond to extracellular signals. These systems allow cells to adapt to prevailing conditions by modifying cellular physiology, including initiating programs of gene expression, catalyzing reactions, or modifying protein–protein interactions. These signaling pathways have also been demonstrated to play a role in coordinating bacterial cell cycle progression and development. Here we report a system-level investigation of two-component pathways in the model organism Caulobacter crescentus. First, by a comprehensive deletion analysis we show that at least 39 of the 106 two-component genes are required for cell cycle progression, growth, or morphogenesis. These include nine genes essential for growth or viability of the organism. We then use a systematic biochemical approach, called phosphotransfer profiling, to map the connectivity of histidine kinases and response regulators. Combining these genetic and biochemical approaches, we identify a new, highly conserved essential signaling pathway from the histidine kinase CenK to the response regulator CenR, which plays a critical role in controlling cell envelope biogenesis and structure. Depletion of either cenK or cenR leads to an unusual, severe blebbing of cell envelope material, whereas constitutive activation of the pathway compromises cell envelope integrity, resulting in cell lysis and death. We propose that the CenK–CenR pathway may be a suitable target for new antibiotic development, given previous successes in targeting the bacterial cell wall. Finally, the ability of our in vitro phosphotransfer profiling method to identify signaling pathways that operate in vivo takes advantage of an observation that histidine kinases are endowed with a global kinetic preference for their cognate response regulators. We propose that this system

  6. A general thermodynamic analysis and treatment of phases and components in the analysis of phase assemblages in multicomponent systems

    Institute of Scientific and Technical Information of China (English)

    HU JiaWen

    2012-01-01

    Systematic thermodynamic analysis reveals that an essential condition for the thermodynamically valid chemographic projections proposed by Greenwood is entirely superfluous. In other words, the phases or components from which the projection is made need not be pure, nor have their chemical potentials fixed over the whole chemographic diagram. To facilitate the analysis of phase assemblages in multicomponent systems, all phases and components in the system are divided into internal and external ones in terms of their thermodynamic features and roles, where the external phases are those common to all assemblages in the system, and the external components include excess components and the components whose chemical potentials (or relevant intensive properties of components) are used to define the thermodynamic conditions of the system. This general classification overcomes the difficulties and defects in the previous classifications, and is easier to use than the previous ones. According to the above classification, the phase rule is transformed into a new form. This leads to two findings: (1) the degree of freedom of the system under the given conditions is determined only by the internal components and phases; (2) different external phases can be identified conveniently according to the conditions of the system before knowing the real phase relations. Based on the above results, a simple but general approach is proposed for the treatment of phases and components: all external phases and components can be eliminated from the system without affecting the phase relations, where the external components can be eliminated by appropriate chemographic projections. The projections place no restriction on the states of the phases or the chemical potentials of components from which the projections are made. The present work can give a unified explanation of the previous treatments of phases and components in the analysis of phase assemblages under various specific conditions. It helps to avoid

  7. [Gait analysis in cerebral palsy patients--correlations between disorders of muscle coordination and gait abnormalities].

    Science.gov (United States)

    Güth, V; Abbink, F; Cloppenburg, E

    1985-01-01

    Electromyographic and gait investigations of 35 patients and 32 healthy persons were evaluated in order to obtain clues to the origin of anomalous movements of the pelvis and the spine, with the following results: a. The range of spine motion in CP patients is significantly greater than that of healthy persons. The pelvic motions of the patients also seem to be greater, but the difference is not significant. b. The average electromyographic activity of the hip abductors of CP patients is not remarkably diminished, and that of the hip adductors is not increased. We also found no signs of atrophy of the hip abductors or hypertrophy of the hip adductors. c. There is no correlation between the coordination disturbances of the entire lower leg measured by electromyography and the extent of the pathological motions. d. We found a distinct correlation between the disturbances of the hip ab- and adductors in particular and the extent of the pathological motions: the worse the coordination of these two muscle groups, the smaller this extent. Findings a-c do not allow us to locate the origin of the Duchenne and Trendelenburg limping of CP patients; finding d, however, offers an explanation. Contrary to expectation, we may suppose that the poor timing of coordination, i.e. the corresponding permanent electrical activity of the hip ab- and adductors, has a stabilizing influence upon the pelvis and the spine. We now suppose that the most probable origin of the increased Duchenne limping is the increased motion during the stance phase. Our following investigations will deal with this hypothesis.

  8. How Do Soccer Players Adjust Their Activity in Team Coordination? An Enactive Phenomenological Analysis

    Directory of Open Access Journals (Sweden)

    Vincent Gesbert

    2017-05-01

    Full Text Available This study examined how individual team members adjust their activity to the needs for collective behavior. To do so, we used an enactive phenomenological approach and explored how soccer players' lived experiences were linked to the active regulation of team coordination during eight offensive transition situations. These situations were defined by the shift from defensive to offensive play following a change in ball possession. We collected phenomenological data, which were processed in four steps. First, we reconstructed the diachronic and synchronic dynamics of the players' lived experiences across these situations in order to identify the units of their activity. Second, we connected each player's units of activity side-by-side in chronological order in order to identify the collective units. Each connection was viewed as a collective regulation mode corresponding to which and how individual units were linked at a given moment. Third, we clustered each collective unit using the related objectives within three modes of regulation: local (L), global (G), and mixed (M). Fourth, we compared the occurrences of these modes in relation to the observable key moments in the situations in order to identify typical patterns. The results indicated four patterns of collective regulation modes. Two distinct patterns were identified without ball possession: reorganize the play formation (G and M) and adapt to the actions of putting pressure on the ball carrier (M). Once the ball was recovered, two additional patterns emerged: be available to get the ball out of the recovery zone (L) and shoot for the goal (L and M). These results suggest that team coordination is a fluctuating phenomenon that can be described through the more or less predictable chaining between these patterns. They also highlight that team coordination is supported by several modes of regulation, including our proposal of a new mode of interpersonal regulation. We conclude that future research

  9. IEEE 802.11 Distributed Coordination Function: Enhancement and Analysis

    Institute of Scientific and Technical Information of China (English)

    WU HaiTao(邬海涛); LIN Yu(林宇); CHENG ShiDuan(程时端); PENG Yong(彭泳); LONG KePing(隆克平)

    2003-01-01

    IEEE 802.11 Medium Access Control (MAC) is proposed to support asynchronous and time bounded delivery of radio packets. Distributed Coordination Function (DCF), which uses Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) and binary slotted exponential backoff, is the basis of the 802.11 MAC. This paper proposes a throughput enhancement for DCF by adjusting the Contention Window (CW) setting scheme. Moreover, an analytical model based on a Markov chain is introduced to compute the enhanced throughput. The accuracy of the model and the enhancement of the proposed scheme are verified by elaborate simulations.
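
    As a concrete illustration of the mechanism the enhancement targets, the sketch below implements the standard binary slotted exponential backoff rule, in which the contention window roughly doubles after every collision. The CW bounds and retry limit are assumed example values, not the paper's adjusted scheme.

```python
# Minimal sketch of 802.11 DCF binary exponential backoff (assumed constants;
# the paper's enhancement replaces this CW schedule with an adjusted one).
import random

CW_MIN, CW_MAX, RETRY_LIMIT = 31, 1023, 7  # example 802.11-style values

def backoff_slots(retries: int) -> int:
    """Draw a uniform backoff count; the window doubles per retry, capped at CW_MAX."""
    cw = min((CW_MIN + 1) * 2 ** retries - 1, CW_MAX)
    return random.randint(0, cw)

# Expected backoff grows with the number of consecutive collisions.
for r in range(RETRY_LIMIT):
    print(r, backoff_slots(r))
```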

  10. Intralimb Coordination Patterns in Absent, Mild, and Severe Stages of Diabetic Neuropathy: Looking Beyond Kinematic Analysis of Gait Cycle.

    Directory of Open Access Journals (Sweden)

    Liu Chiao Yi

    Full Text Available Diabetes Mellitus progressively leads to impairments in stability and joint motion and might affect coordination patterns, mainly due to neuropathy. This study aims to describe changes in intralimb joint coordination in healthy individuals and patients with absent, mild, and severe stages of neuropathy. Forty-seven diabetic patients were classified into three groups of neuropathic severity by a fuzzy model: 18 without neuropathy (DIAB), 7 with mild neuropathy (MILD), and 22 with moderate to severe neuropathy (SVRE). Thirteen healthy subjects were included as controls (CTRL). Continuous relative phase (CRP) was calculated at each instant of the gait cycle for each pair of lower limb joints. Analysis of Variance compared each frame of the CRP time series and its standard deviation among groups (α = 5%). For the ankle-hip CRP, the SVRE group presented increased variability at the propulsion phase and a distinct pattern at the propulsion and initial swing phases compared to the DIAB and CTRL groups. For the ankle-knee CRP, the 3 diabetic groups presented more anti-phase ratios than the CTRL group at the midstance, propulsion, and terminal swing phases, with decreased variability at the early stance phase. For the knee-hip CRP, the MILD group showed a more in-phase ratio at the early stance and terminal swing phases and lower variability compared to all other groups. All diabetic groups were more in-phase at the early midstance phase (with lower variability) than the control group. The low variability and coordination differences of the MILD group showed that gait coordination might be altered not only when frank evidence of neuropathy is present, but also when neuropathy is still incipient. The ankle-knee CRP at the initial swing phase showed distinct patterns for groups from all degrees of neuropathic severity and CTRLs. The ankle-hip CRP pattern distinguished the SVRE patients from other diabetic groups, particularly in the transitional phase from stance to
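
    For readers unfamiliar with the measure, continuous relative phase is the instantaneous phase difference between two joints' signals. The sketch below computes it via the Hilbert transform on synthetic joint angles; the paper's exact normalization is not specified here, so this is one common convention, not the authors' implementation.

```python
# Hedged sketch of continuous relative phase (CRP) between two joint angles,
# using the Hilbert transform to obtain each phase angle.
import numpy as np
from scipy.signal import hilbert

def phase_angle(theta: np.ndarray) -> np.ndarray:
    """Instantaneous phase of a centred joint-angle signal."""
    centred = theta - theta.mean()
    return np.unwrap(np.angle(hilbert(centred)))

def crp(theta_prox: np.ndarray, theta_dist: np.ndarray) -> np.ndarray:
    """Continuous relative phase in degrees: proximal minus distal phase."""
    return np.degrees(phase_angle(theta_prox) - phase_angle(theta_dist))

# Synthetic example: two sinusoidal "joints" offset by 30 degrees.
t = np.linspace(0, 2 * np.pi, 500)
ankle, hip = np.sin(4 * t), np.sin(4 * t + np.radians(30))
print(crp(hip, ankle)[250])  # close to 30 in mid-record
```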

  11. Bimanual motor coordination in older adults is associated with increased functional brain connectivity--a graph-theoretical analysis.

    Directory of Open Access Journals (Sweden)

    Marcus H Heitger

    Full Text Available In bimanual coordination, older and younger adults activate a common cerebral network but the elderly also have additional activation in a secondary network of brain areas to master task performance. It remains unclear whether the functional connectivity within these primary and secondary motor networks differs between the old and the young and whether task difficulty modulates connectivity. We applied graph-theoretical network analysis (GTNA) to task-driven fMRI data in 16 elderly and 16 young participants using a bimanual coordination task including in-phase and anti-phase flexion/extension wrist movements. Network nodes for the GTNA comprised task-relevant brain areas as defined by fMRI activation foci. The elderly matched the motor performance of the young but showed an increased functional connectivity in both networks across a wide range of connectivity metrics, i.e., higher mean connectivity degree, connection strength, network density and efficiency, together with shorter mean communication path length between the network nodes and also a lower betweenness centrality. More difficult movements showed an increased connectivity in both groups. The network connectivity of both groups had "small world" character. The present findings indicate (a) that bimanual coordination in the aging brain is associated with a higher functional connectivity even between areas also activated in young adults, independently of task difficulty, and (b) that adequate motor coordination in the context of task-driven bimanual control in older adults may not be solely due to additional neural recruitment but also to aging-related changes of functional relationships between brain regions.
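
    As a rough illustration of the connectivity metrics listed above, the sketch below computes degree, strength, density, global efficiency, characteristic path length, and betweenness on a toy weighted network with networkx; the node set and weights are invented, not taken from the study.

```python
# Toy graph-theoretical network analysis (GTNA) metrics on a random
# weighted "connectivity matrix"; all data here are synthetic.
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
w = rng.random((6, 6)); w = (w + w.T) / 2; np.fill_diagonal(w, 0)
G = nx.from_numpy_array(w)  # undirected weighted functional network

mean_degree = np.mean([d for _, d in G.degree()])
mean_strength = np.mean([d for _, d in G.degree(weight="weight")])
density = nx.density(G)
efficiency = nx.global_efficiency(G)              # unweighted variant
path_length = nx.average_shortest_path_length(G)  # unweighted variant
betweenness = np.mean(list(nx.betweenness_centrality(G).values()))
print(mean_degree, mean_strength, density, efficiency, path_length, betweenness)
```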

  12. Wavelet phase analysis of two velocity components to infer the structure of interscale transfers in a turbulent boundary-layer

    Energy Technology Data Exchange (ETDEWEB)

    Keylock, Christopher J [Sheffield Fluid Mechanics Group and Department of Civil and Structural Engineering, University of Sheffield, Mappin Street, Sheffield, S1 3JD (United Kingdom); Nishimura, Kouichi, E-mail: c.keylock@sheffield.ac.uk [Graduate School of Environmental Studies, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601 (Japan)

    2016-04-15

    Scale-dependent phase analysis of velocity time series measured in a zero pressure gradient boundary layer shows that phase coupling between longitudinal and vertical velocity components is strong at both large and small scales, but minimal in the middle of the inertial regime. The same general pattern is observed at all vertical positions studied, but there is stronger phase coherence as the vertical coordinate, y, increases. The phase difference histograms evolve from a unimodal shape at small scales to the development of significant bimodality at the integral scale and above. The asymmetry in the off-diagonal couplings changes sign at the midpoint of the inertial regime, with the small scale relation consistent with intense ejections followed by a more prolonged sweep motion. These results may be interpreted in a manner that is consistent with the action of low speed streaks and hairpin vortices near the wall, with large scale motions further from the wall, the effect of which penetrates to smaller scales. Hence, a measure of phase coupling, when combined with a scale-by-scale decomposition of perpendicular velocity components, is a useful tool for investigating boundary-layer structure and inferring process from single-point measurements. (paper)

  13. Judging complex movement performances for excellence: a principal components analysis-based technique applied to competitive diving.

    Science.gov (United States)

    Young, Cole; Reinkensmeyer, David J

    2014-08-01

    Athletes rely on subjective assessment of complex movements from coaches and judges to improve their motor skills. In some sports, such as diving, snowboard half pipe, gymnastics, and figure skating, subjective scoring forms the basis for competition. It is currently unclear whether this scoring process can be mathematically modeled; doing so could provide insight into what motor skill is. Principal components analysis has been proposed as a motion analysis method for identifying fundamental units of coordination. We used PCA to analyze movement quality of dives taken from USA Diving's 2009 World Team Selection Camp, first identifying eigenpostures associated with dives, and then using the eigenpostures and their temporal weighting coefficients, as well as elements commonly assumed to affect scoring - gross body path, splash area, and board tip motion - to identify eigendives. Within this eigendive space we predicted actual judges' scores using linear regression. This technique rated dives with accuracy comparable to the human judges. The temporal weighting of the eigenpostures, body center path, splash area, and board tip motion affected the score, but not the eigenpostures themselves. These results illustrate that (1) subjective scoring in a competitive diving event can be mathematically modeled; (2) the elements commonly assumed to affect dive scoring actually do affect scoring; (3) skill in elite diving is more associated with the gross body path and the effect of the movement on the board and water than the units of coordination that PCA extracts, which might reflect the high level of technique these divers had achieved. We also illustrate how eigendives can be used to produce dive animations that an observer can distort continuously from poor to excellent, which is a novel approach to performance visualization.
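
    The pipeline described above (PCA for eigenpostures, then linear regression from their temporal weights to scores) can be sketched on synthetic data as follows; the array shapes, component count, and stand-in scores are assumptions for illustration only.

```python
# Illustrative eigenposture pipeline: PCA over stacked joint postures, then a
# linear regression from per-dive weight trajectories to judges' scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_dives, n_frames, n_joints = 40, 100, 12
postures = rng.normal(size=(n_dives * n_frames, n_joints))   # joint angles

pca = PCA(n_components=5).fit(postures)                      # eigenpostures
weights = pca.transform(postures).reshape(n_dives, -1)       # temporal weights

scores = rng.uniform(0, 10, n_dives)                         # stand-in scores
model = LinearRegression().fit(weights, scores)
print("R^2 on training dives:", model.score(weights, scores))  # illustration only
```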

  14. Sparse Component Analysis Using Time-Frequency Representations for Operational Modal Analysis

    Directory of Open Access Journals (Sweden)

    Shaoqian Qin

    2015-03-01

    Full Text Available Sparse component analysis (SCA) has been widely used for blind source separation (BSS) for many years. Recently, SCA has been applied to operational modal analysis (OMA), which is also known as output-only modal identification. This paper considers the sparsity of sources' time-frequency (TF) representation and proposes a new TF-domain SCA under the OMA framework. First, the measurements from the sensors are transformed to the TF domain to get a sparse representation. Then, single-source-points (SSPs) are detected to better reveal the hyperlines which correspond to the columns of the mixing matrix. The K-hyperline clustering algorithm is used to identify the direction vectors of the hyperlines and then the mixing matrix is calculated. Finally, the basis pursuit de-noising technique is used to recover the modal responses, from which the modal parameters are computed. The proposed method is valid even if the number of active modes exceeds the number of sensors. Numerical simulation and experimental verification demonstrate the good performance of the proposed method.
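
    The first three steps (STFT, single-source-point detection, direction clustering) can be sketched as below. KMeans on sign-folded unit vectors is used here as a simple stand-in for K-hyperline clustering, and the single-source-point test and thresholds are illustrative assumptions, not the paper's.

```python
# Rough TF-domain SCA sketch: STFT per sensor, keep TF points where the real
# and imaginary parts are nearly parallel (one dominant source), then cluster
# their directions to estimate the mixing-matrix columns.
import numpy as np
from scipy.signal import stft
from sklearn.cluster import KMeans

def estimate_mixing(x: np.ndarray, n_sources: int, fs: float = 1.0) -> np.ndarray:
    """x: (n_sensors, n_samples) mixtures; returns (n_sensors, n_sources)."""
    _, _, Z = stft(x, fs=fs, nperseg=256)        # (n_sensors, n_freq, n_frames)
    pts = Z.reshape(Z.shape[0], -1)
    re, im = pts.real, pts.imag
    cos = np.abs((re * im).sum(0)) / (
        np.linalg.norm(re, axis=0) * np.linalg.norm(im, axis=0) + 1e-12)
    ssp = re[:, cos > 0.99]                      # single-source points
    ssp = ssp / (np.linalg.norm(ssp, axis=0) + 1e-12)
    ssp = ssp * np.sign(ssp[0])                  # fold opposite directions
    km = KMeans(n_clusters=n_sources, n_init=10).fit(ssp.T)
    return km.cluster_centers_.T

# Two-sensor example with two synthetic modal responses.
n = np.arange(4096)
s = np.vstack([np.sin(2 * np.pi * 5 * n / 512), np.sin(2 * np.pi * 9 * n / 512)])
A = np.array([[1.0, 0.5], [0.3, 1.0]])
print(estimate_mixing(A @ s, n_sources=2))       # columns ~ directions of A
```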

  15. Sparse Component Analysis Using Time-Frequency Representations for Operational Modal Analysis

    Science.gov (United States)

    Qin, Shaoqian; Guo, Jie; Zhu, Changan

    2015-01-01

    Sparse component analysis (SCA) has been widely used for blind source separation (BSS) for many years. Recently, SCA has been applied to operational modal analysis (OMA), which is also known as output-only modal identification. This paper considers the sparsity of sources' time-frequency (TF) representation and proposes a new TF-domain SCA under the OMA framework. First, the measurements from the sensors are transformed to the TF domain to get a sparse representation. Then, single-source-points (SSPs) are detected to better reveal the hyperlines which correspond to the columns of the mixing matrix. The K-hyperline clustering algorithm is used to identify the direction vectors of the hyperlines and then the mixing matrix is calculated. Finally, the basis pursuit de-noising technique is used to recover the modal responses, from which the modal parameters are computed. The proposed method is valid even if the number of active modes exceeds the number of sensors. Numerical simulation and experimental verification demonstrate the good performance of the proposed method. PMID:25789492

  16. Performance assessment of air quality monitoring networks using principal component analysis and cluster analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Wei-Zhen [Department of Building and Construction, City University of Hong Kong (China); He, Hong-Di [Department of Building and Construction, City University of Hong Kong (China); Logistics Research Center, Shanghai Maritime University, Shanghai (China); Dong, Li-yun [Shanghai Institute of Applied Mathematics and Mechanics, Shanghai University, Shanghai (China)

    2011-03-15

    This study aims to evaluate the performance of two statistical methods, principal component analysis and cluster analysis, for the management of the air quality monitoring network of Hong Kong and the reduction of associated expenses. The specific objectives include: (i) to identify city areas with similar air pollution behavior; and (ii) to locate emission sources. The statistical methods were applied to the mass concentrations of sulphur dioxide (SO₂), respirable suspended particulates (RSP) and nitrogen dioxide (NO₂), collected in the monitoring network of Hong Kong from January 2001 to December 2007. The results demonstrate that, for each pollutant, the monitoring stations are grouped into different classes based on their air pollution behaviors. The monitoring stations located in nearby areas are characterized by the same specific air pollution characteristics, suggesting an effective management of the air quality monitoring system. The redundant equipment should be transferred to other monitoring stations to allow further enlargement of the monitored area. Additionally, the existence of different air pollution behaviors in the monitoring network is explained by the variability of wind directions across the region. The results imply that the air quality problem in Hong Kong is not only a local problem arising mainly from street-level pollution, but also a regional problem originating from the Pearl River Delta region. (author)

  17. Correcting waveform bias using principal component analysis: Applications in multicentre motion analysis studies.

    Science.gov (United States)

    Clouthier, Allison L; Bohm, Eric R; Rudan, John F; Shay, Barbara L; Rainbow, Michael J; Deluzio, Kevin J

    2017-01-01

    Multicentre studies are rare in three-dimensional motion analysis due to challenges associated with combining waveform data from different centres. Principal component analysis (PCA) is a statistical technique that can be used to quantify variability in waveform data and identify group differences. A correction technique based on PCA is proposed that can be used in post-processing to remove nuisance variation introduced by the differences between centres. Using this technique, the waveform bias that exists between the two datasets is corrected such that the means agree. No information is lost in the individual datasets, but the overall variability in the combined data is reduced. The correction is demonstrated on gait kinematics with synthesized crosstalk and on gait data from knee arthroplasty patients collected in two centres. The induced crosstalk was successfully removed from the knee joint angle data. In the second example, the removal of the nuisance variation due to the multicentre data collection allowed significant differences in implant type to be identified. This PCA-based technique can be used to correct for differences between waveform datasets in post-processing and has the potential to enable multicentre motion analysis studies.

  18. MULTI-COMPONENT ANALYSIS OF POSITION-VELOCITY CUBES OF THE HH 34 JET

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez-Gonzalez, A.; Esquivel, A.; Raga, A. C. [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, Ap. 70-543, 04510 DF (Mexico); Canto, J.; Curiel, S. [Instituto de Astronomia, Universidad Nacional Autonoma de Mexico, Ap. 70-264, 04510 DF (Mexico); Riera, A. [Departamento de Fisica e Ingenieria Nuclear, Escuela Universitaria de Ingenieria Tecnica Industrial de Barcelona, Universidad Politecnica de Cataluna, C. Comte Urgell 187, 08036, Barcelona (Spain); Beck, T. L., E-mail: ary@nucleares.unam.mx [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States)

    2012-03-15

    We present an analysis of Hα spectra of the HH 34 jet with two-dimensional spectral resolution. We carry out multi-Gaussian fits to the spatially resolved line profiles and derive maps of the intensity, radial velocity, and velocity width of each of the components. We find that close to the outflow source we have three components: a high (negative) radial velocity component with a well-collimated, jet-like morphology; an intermediate velocity component with a broader morphology; and a positive radial velocity component with a non-collimated morphology and large linewidth. We suggest that this positive velocity component is associated with jet emission scattered by stationary dust present in the circumstellar environment. Farther away from the outflow source, we find only two components (a high, negative radial velocity component, which has a narrower spatial distribution than an intermediate velocity component). The fitting procedure was carried out with the new AGA-V1 code, which is available online and is described in detail in this paper.
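
    Multi-Gaussian decomposition of a line profile, as used above, can be sketched with a standard least-squares fit; the profile, component count, and initial guesses below are synthetic assumptions, and the paper's AGA-V1 genetic-algorithm fitter is not reproduced.

```python
# Fit a sum of three Gaussians (amplitude, centre, width each) to a synthetic
# Halpha-like line profile with scipy's least-squares curve_fit.
import numpy as np
from scipy.optimize import curve_fit

def three_gaussians(v, *p):
    """Sum of three Gaussian components; p = (a1, c1, w1, a2, c2, w2, a3, c3, w3)."""
    return sum(p[i] * np.exp(-0.5 * ((v - p[i + 1]) / p[i + 2]) ** 2)
               for i in range(0, 9, 3))

v = np.linspace(-300, 300, 400)                    # radial velocity axis (km/s)
profile = three_gaussians(v, 1.0, -150, 30, 0.6, -60, 60, 0.3, 80, 50)
profile += 0.02 * np.random.default_rng(8).normal(size=v.size)

p0 = [1, -150, 40, 0.5, -50, 50, 0.3, 70, 40]      # rough initial guesses
popt, _ = curve_fit(three_gaussians, v, profile, p0=p0)
print(popt.reshape(3, 3))  # fitted (amplitude, centre, width) per component
```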

  19. Hydrothermal Synthesis, Crystal Structure, Spectrum and Electrochemical Analysis of the Copper(Ⅱ) Coordination Polymer

    Institute of Scientific and Technical Information of China (English)

    LI Chang-Hong; LI Wei; LI Yu-Lin; KUANG Yun-Fei

    2012-01-01

    The three-dimensional framework copper(Ⅱ) coordination polymer with basic copper carbonate and 3-(pyridin-2-yl)-1,2,4-triazole has been hydrothermally synthesized. It crystallizes in the monoclinic space group P21/c, with a = 1.20860(3), b = 1.29581(2), c = 1.67863(3) nm, β = 116.0280(2)°, C21H12Cu3N12, Mr = 623.05, V = 2.36230(9) nm3, Dc = 1.752 g/cm3, Z = 4, F(000) = 1236, GOOF = 1.037, final R = 0.0408 and wR = 0.1141. Every unit cell contains three copper atoms and three 3-(pyridin-2-yl)-1,2,4-triazole ligands. Every central Cu(Ⅱ) ion is coordinated by four nitrogen atoms of the 3-(pyridin-2-yl)-1,2,4-triazole ligands, forming a distorted tetrahedron. The title complex exhibits intense photoluminescence at room temperature with the maximum emission at 392 nm. The cyclic voltammetric behavior of the complex shows that the electron transfer in the electrolysis reaction is irreversible.

  20. Assessment of motor coordination and dexterity of six-year-old children: A psychometric analysis

    Directory of Open Access Journals (Sweden)

    Olívia Souza Agostini

    2014-06-01

    Full Text Available The motor coordination of six-year-old children was examined using the Assessment of Motor Coordination and Dexterity, AMCD (Avaliação da Coordenação e Destreza Motora - ACOORDEM), in order to verify test-retest reliability and to investigate whether motor performance is influenced by gender, type of school, and residence location. Eighty-five children were evaluated, and their parents and teachers completed questionnaires. For test-retest reliability, the AMCD was repeated with 10 children. Mann-Whitney and chi-square tests identified a significant influence of gender, type of school, and residence location in only a few of the test items. The test-retest reliability was moderate for the performance items and good to excellent for the majority of the questionnaire items. We conclude that some items should be revised and that normative tables for the identification of motor delay could be created considering only the age variable. Future studies should continue the process of validating the AMCD instrument with the assessment of younger children.

  1. Vector analysis of bending waveguides by using a modified finite-difference method in a local cylindrical coordinate system.

    Science.gov (United States)

    Xiao, Jinbiao; Sun, Xiaohan

    2012-09-10

    A vector mode solver for bending waveguides by using a modified finite-difference (FD) method is developed in a local cylindrical coordinate system, where the perfectly matched layer absorbing boundary conditions are incorporated. Utilizing Taylor series expansion technique and continuity condition of the longitudinal field components, a standard matrix eigenvalue equation without the averaged index approximation approach for dealing with the discrete points neighboring the dielectric interfaces is obtained. Complex effective indexes and field distributions of leaky modes for a typical rib bending waveguide and a silicon wire bend are presented, and solutions accord well with those from the film mode matching method, which shows the validity and utility of the established method.

  2. Investigation on Spectral Structure of Gearbox Vibration Signals by Principal Component Analysis for Condition Monitoring Purposes

    Energy Technology Data Exchange (ETDEWEB)

    Zimroz, Radoslaw [Wroclaw University of Technology, Diagnostics and Vibro-Acoustics Science Laboratory (Poland); Bartkowiak, Anna, E-mail: radoslaw.zimroz@pwr.wroc.pl, E-mail: aba@ii.uni.wroc.pl [University of Wroclaw, Institute of Computer Science, Wroclaw (Poland)

    2011-07-19

    Spectral analysis is a well-established vibration analysis technique used in diagnostics both in academia and industry. In general, one may identify components related to particular stages in the gearbox and analyze the amplitudes of these components with a simple decision rule: if the amplitudes are increasing, the condition is becoming worse. In practice, however, one should analyze not a single amplitude but at least several components at once, which raises the question: how should they be analyzed simultaneously? We provide an example (case study) for planetary gearboxes in good and bad condition (case B and case A). As diagnostic features we used 15 amplitudes of spectral components related to the fundamental planetary mesh frequency and its harmonics. Using Principal Component Analysis (PCA), it is shown that the amplitudes do not vary in the same way; a change of condition affects not only the amplitudes of all components but also the relations between them. We investigated the geometry of the data and showed that the proportions of the explained total inertia of the three data sets ('good', 'bad' and mixed good/bad) are different. We claim that employing multidimensional analysis to account not only for directly observed values but also for interrelations within and between the two groups of data may be a novel diagnostic approach. A different structure of the data is associated with a different condition of the machines, an assumption specified here for the first time in the literature; it obviously requires more study.

  3. Identification of Counterfeit Alcoholic Beverages Using Cluster Analysis in Principal-Component Space

    Science.gov (United States)

    Khodasevich, M. A.; Sinitsyn, G. V.; Gres'ko, M. A.; Dolya, V. M.; Rogovaya, M. V.; Kazberuk, A. V.

    2017-07-01

    A study of 153 brands of commercial vodka products showed that counterfeit samples could be identified by introducing a unified additive at the minimum concentration acceptable for instrumental detection and multivariate analysis of UV-Vis transmission spectra. Counterfeit products were detected with 100% probability by using hierarchical cluster analysis or the C-means method in two-dimensional principal-component space.
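
    A minimal version of that workflow, on stand-in data, is sketched below. Ward hierarchical clustering in two-dimensional principal-component space stands in for the authors' methods (fuzzy C-means would need an extra package such as scikit-fuzzy); the spectra are synthetic.

```python
# Project spectra onto two principal components, then cluster in that plane.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
spectra = rng.normal(size=(153, 300))    # stand-in for 153 transmission spectra
pc2 = PCA(n_components=2).fit_transform(spectra)
labels = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(pc2)
print(np.bincount(labels))               # e.g. genuine vs. counterfeit groups
```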

  4. Recurrence Quantification Analysis and Principal Components in the Detection of Short Complex Signals

    CERN Document Server

    Zbilut, J P; Webber, C L

    1998-01-01

    Recurrence plots were introduced to help aid the detection of signals in complicated data series. This effort was furthered by the quantification of recurrence plot elements. We now demonstrate the utility of combining recurrence quantification analysis with principal components analysis to allow a probabilistic evaluation of the presence of deterministic signals in relatively short data lengths.
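
    A recurrence plot and its simplest quantification can be sketched in a few lines; the embedding dimension, lag, and radius below are arbitrary example choices, and the combination with principal components described above is not reproduced here.

```python
# Build a recurrence matrix from a time-delay embedding and report the
# recurrence rate (fraction of recurrent point pairs).
import numpy as np

def recurrence_matrix(x: np.ndarray, dim: int = 3, lag: int = 1,
                      radius: float = 0.2) -> np.ndarray:
    """Embed x with (dim, lag), then mark state pairs closer than radius."""
    n = len(x) - (dim - 1) * lag
    emb = np.column_stack([x[i * lag: i * lag + n] for i in range(dim)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (dists < radius).astype(int)

x = np.sin(np.linspace(0, 8 * np.pi, 200))
x += 0.05 * np.random.default_rng(3).normal(size=x.size)
R = recurrence_matrix(x)
print("recurrence rate:", R.mean())
```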

  5. Visualizing solvent mediated phase transformation behavior of carbamazepine polymorphs by principal component analysis

    DEFF Research Database (Denmark)

    Tian, Fang; Rades, Thomas; Sandler, Niklas

    2008-01-01

    The purpose of this research is to gain a greater insight into the hydrate formation processes of different carbamazepine (CBZ) anhydrate forms in aqueous suspension, where principal component analysis (PCA) was applied for data analysis. The capability of PCA to visualize and to reveal simplifie...

  6. INDEPENDENT COMPONENT ANALYSIS (ICA) APPLIED TO LONG BUNCH BEAMS IN THE LOS ALAMOS PROTON STORAGE RING

    Energy Technology Data Exchange (ETDEWEB)

    Kolski, Jeffrey S. [Los Alamos National Laboratory; Macek, Robert J. [Los Alamos National Laboratory; McCrady, Rodney C. [Los Alamos National Laboratory; Pang, Xiaoying [Los Alamos National Laboratory

    2012-05-14

    Independent component analysis (ICA) is a powerful blind source separation (BSS) method. Compared to the typical BSS method, principal component analysis (PCA), which is the BSS foundation of the well known model independent analysis (MIA), ICA is more robust to noise, coupling, and nonlinearity. ICA of turn-by-turn beam position data has been used to measure the transverse betatron phase and amplitude functions, dispersion function, linear coupling, sextupole strength, and nonlinear beam dynamics. We apply ICA in a new way to slices along the bunch and discuss the source signals identified as betatron motion and longitudinal beam structure.
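
    The contrast between ICA and PCA mentioned above is easy to demonstrate on synthetic mixtures; the signals and mixing matrix below are invented, and the turn-by-turn beam-position application itself is not reproduced.

```python
# FastICA recovers independent sources from linear mixtures, while PCA only
# decorrelates them; component order and sign are arbitrary, so we compare
# the best-matching component.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 2000)
S = np.column_stack([np.sin(7 * t), np.sign(np.sin(3 * t))])  # true sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])                        # mixing matrix
X = S @ A.T                                                   # observations

S_ica = FastICA(n_components=2, random_state=0).fit_transform(X)
S_pca = PCA(n_components=2).fit_transform(X)

corr = lambda a, b: abs(np.corrcoef(a, b)[0, 1])
print(max(corr(S[:, 0], S_ica[:, j]) for j in range(2)))  # near 1 for ICA
print(max(corr(S[:, 0], S_pca[:, j]) for j in range(2)))  # typically lower
```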

  7. The IAEA Coordinated Research Program on HTGR Reactor Physics, Thermal-hydraulics and Depletion Uncertainty Analysis: Description of the Benchmark Test Cases and Phases

    Energy Technology Data Exchange (ETDEWEB)

    Frederik Reitsma; Gerhard Strydom; Bismark Tyobeka; Kostadin Ivanov

    2012-10-01

    The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of design and safety features with reliable high fidelity physics models and robust, efficient, and accurate codes. The uncertainties in the HTR analysis tools are today typically assessed with sensitivity analysis, and then a few important input uncertainties (typically based on a PIRT process) are varied in the analysis to find a spread in the parameter of importance. However, one wishes to apply a more fundamental approach to determine the predictive capability and accuracy of coupled neutronics/thermal-hydraulics and depletion simulations used for reactor design and safety assessment. Today there is a broader acceptance of the use of uncertainty analysis even in safety studies, and it has in some cases been accepted by regulators to replace the traditional conservative analysis. Finally, there is also a renewed focus on supplying reliable covariance data (nuclear data uncertainties) that can then be used in uncertainty methods. Uncertainty and sensitivity studies are therefore becoming an essential component of any significant effort in data and simulation improvement. In order to address uncertainty in analysis and methods in the HTGR community, the IAEA launched a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modelling early in 2012. The project is built on the experience of the OECD/NEA Light Water Reactor (LWR) Uncertainty Analysis in Best-Estimate Modelling (UAM) benchmark activity, but focuses specifically on the peculiarities of HTGR designs and their simulation requirements. Two benchmark problems were defined, with the prismatic type design represented by the MHTGR-350 design from General Atomics (GA), while a 250 MW modular pebble bed design, similar to the INET (China) and indirect-cycle PBMR (South Africa) designs, is also included. In the paper, more detail on the benchmark cases, the different specific phases and tasks and the latest

  8. The Application of Kernel Principal Component Analysis

    Institute of Scientific and Technical Information of China (English)

    2013-01-01

    In this paper, principal component analysis and kernel principal component analysis are used to study the tourism development of the thirteen cities of Jiangsu Province in 2010. The results show that the kernel principal component analysis yields a more reasonable result, and the reasons for this are analyzed. Finally, based on the statistical analysis, some suggestions on the future tourism development of Jiangsu Province are put forward for the reference of the relevant departments.
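
    On stand-in data, the linear versus kernel comparison looks as follows; the indicator matrix, kernel choice, and gamma value are assumptions for illustration, not the paper's setup.

```python
# Compare city rankings from the leading component of linear PCA and of
# RBF-kernel PCA on a synthetic 13-city indicator matrix.
import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
X = StandardScaler().fit_transform(rng.normal(size=(13, 8)))  # 13 cities, 8 indicators

lin = PCA(n_components=2).fit_transform(X)
ker = KernelPCA(n_components=2, kernel="rbf", gamma=0.1).fit_transform(X)
print("linear PCA ranking:", np.argsort(-lin[:, 0]))
print("kernel PCA ranking:", np.argsort(-ker[:, 0]))
```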

  9. Integration of independent component analysis with near-infrared spectroscopy for analysis of bioactive components in the medicinal plant Gentiana scabra Bunge

    Directory of Open Access Journals (Sweden)

    Yung-Kun Chuang

    2014-09-01

    Full Text Available Independent component (IC) analysis was applied to near-infrared spectroscopy for analysis of gentiopicroside and swertiamarin, the two bioactive components of Gentiana scabra Bunge. ICs that are highly correlated with the two bioactive components were selected for the analysis of tissue cultures, shoots, and roots, which were found to distribute in three different positions within the domain [two-dimensional (2D) and 3D] constructed by the ICs. This setup could be used for quantitative determination of the respective contents of gentiopicroside and swertiamarin within the plants. For gentiopicroside, the spectral calibration model based on the second derivative spectra produced the best results in the wavelength ranges of 600–700 nm, 1600–1700 nm, and 2000–2300 nm (correlation coefficient of calibration = 0.847, standard error of calibration = 0.865%, and standard error of validation = 0.909%). For swertiamarin, a spectral calibration model based on the first derivative spectra produced the best results in the wavelength ranges of 600–800 nm and 2200–2300 nm (correlation coefficient of calibration = 0.948, standard error of calibration = 0.168%, and standard error of validation = 0.216%). Both models showed satisfactory predictability. This study successfully established qualitative and quantitative correlations for gentiopicroside and swertiamarin with near-infrared spectra, enabling rapid and accurate inspection of the bioactive components of G. scabra Bunge at different growth stages.

  10. Conceptual model of iCAL4LA: Proposing the components using comparative analysis

    Science.gov (United States)

    Ahmad, Siti Zulaiha; Mutalib, Ariffin Abdul

    2016-08-01

    This paper discusses an on-going study that begins the process of determining the common components for a conceptual model of interactive computer-assisted learning specifically designed for low-achieving children. This group of children needs specific learning support that can be used as an alternative learning material in their learning environment. In order to develop the conceptual model, this study extracts the common components from 15 strongly justified computer-assisted learning studies. A comparative analysis was conducted to determine the most appropriate components, using a specific classification of indications to prioritize applicability. The results of the extraction process reveal 17 common components for consideration. Later, based on scientific justifications, 16 of them were selected as the proposed components for the model.

  11. Analysis on Nutritional and Functional Components of Different Pueraria lobata Roots

    Institute of Scientific and Technical Information of China (English)

    Jinping WU; Fengling GUO; Xianqiang SHAO; Yan HU; Jianjun ZHAO; Zhengming QIU

    2015-01-01

    This paper analyzes and evaluates the nutritional and functional components of different Pueraria lobata roots. The nutritional components mainly include water, ash content, fat, reducing sugar, starch, and cellulose; the functional components mainly include flavone and polyphenol. Pueraria lobata root No. 1 has the highest ash, flavone, and polyphenol contents but the lowest fat content, so it is suitable for use as a medicinal Pueraria lobata root resource. Pueraria lobata root No. 5 has a starch content as high as 64.43% and is recommended for use as a vegetable and for processing into Pueraria lobata powder. Pueraria lobata root No. 5 has a cellulose content as high as 17.79% and is recommended for processing into Pueraria lobata tablets. Through this comparison of the nutritional and functional components of different Pueraria lobata roots, it is intended to provide a reference for variety selection, breeding, production, and processing of Pueraria lobata roots.

  12. A Simple Method for Limiting Disclosure in Continuous Microdata Based on Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Calviño, Aida

    2017-03-01

    Full Text Available In this article we propose a simple and versatile method for limiting disclosure in continuous microdata based on Principal Component Analysis (PCA). Instead of perturbing the original variables, we propose to alter the principal components, as they contain the same information but are uncorrelated, which permits working on each component separately, reducing processing times. The number and weight of the perturbed components determine the level of protection and distortion of the masked data. The method provides preservation of the mean vector and the variance-covariance matrix. Furthermore, depending on the technique chosen to perturb the principal components, the proposed method can provide masked, hybrid or fully synthetic data sets. Some examples of application and comparison with other methods previously proposed in the literature (in terms of disclosure risk and data utility) are also included.
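
    A simplified version of this masking idea is sketched below: move to principal-component scores, perturb only selected components, rescale them back to their original mean and variance, and invert the transform. The perturbation scheme here is an assumption for illustration; it preserves the mean vector and per-component variances exactly, and cross-covariances only approximately, unlike the exact schemes in the article.

```python
# Mask a continuous microdata matrix by noising chosen principal components
# and restoring each component's original mean and variance.
import numpy as np

def mask_pca(X: np.ndarray, perturb: list, noise: float = 0.5, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    T = Xc @ Vt.T                                    # component scores
    for j in perturb:                                # perturb selected scores
        t = T[:, j] + rng.normal(scale=noise * T[:, j].std(), size=len(T))
        T[:, j] = (t - t.mean()) / t.std() * T[:, j].std()  # restore moments
    return T @ Vt + mu                               # back to original space

X = np.random.default_rng(6).normal(size=(200, 5))
X_masked = mask_pca(X, perturb=[3, 4])               # noise the two weakest components
print(np.allclose(X.mean(0), X_masked.mean(0)))      # mean vector preserved
```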

  13. Advanced Residuals Analysis for Determining the Number of PARAFAC Components in Dissolved Organic Matter.

    Science.gov (United States)

    Cuss, Chad W; Guéguen, Céline; Andersson, Per; Porcelli, Don; Maximov, Trofim; Kutscher, Liselott

    2016-02-01

    Parallel factor analysis (PARAFAC) has facilitated an explosion in research connecting the fluorescence properties of dissolved organic matter (DOM) to its functions and biogeochemical cycling in natural and engineered systems. However, the validation of robust PARAFAC models using split-half analysis requires an oft unrealistically large number (hundreds to thousands) of excitation-emission matrices (EEMs), and models with too few components may not adequately describe differences between DOM. This study used self-organizing maps (SOM) and comparing changes in residuals with the effects of adding components to estimate the number of PARAFAC components in DOM from two data sets: MS (110 EEMs from nine leaf leachates and headwaters) and LR (64 EEMs from the Lena River). Clustering by SOM demonstrated that peaks clearly persisted in model residuals after validation by split-half analysis. Plotting the changes to residuals was an effective method for visualizing the removal of fluorophore-like fluorescence caused by increasing the number of PARAFAC components. Extracting additional PARAFAC components via residuals analysis increased the proportion of correctly identified size-fractionated leaf leachates from 56.0 ± 0.8 to 75.2 ± 0.9%, and from 51.7 ± 1.4 to 92.9 ± 0.0% for whole leachates. Model overfitting was assessed by considering the correlations between components, and their distributions amongst samples. Advanced residuals analysis improved the ability of PARAFAC to resolve the variation in DOM fluorescence, and presents an enhanced validation approach for assessing the number of components that can be used to supplement the potentially misleading results of split-half analysis.

  14. Spatial analysis of ecosystem production from coordinated in-situ and satellite observations over semi-arid East Asia

    Science.gov (United States)

    Jia, G.; Wang, H.; Zhang, A.

    2016-12-01

    Ecosystem production is a fundamental component of biogeochemical cycles and land-atmosphere interactions at various scales. Semi-arid ecosystems are key contributors to the global carbon cycle and may even dominate the inter-annual variability and decadal trends of the land carbon sink, as demonstrated by several recent studies. Over the past years, major achievements have been made in estimating ecosystem production with satellite data at global and regional scales. However, those estimates were often made with very sparse in-situ data, especially in the semi-arid portion of East Asia. To better estimate primary and ecosystem productions at finer resolution at regional scales, localized field measurements and their integration with state-of-the-art satellite data are necessary. In-situ measurements of green vegetation fractions and CO2 flux between land and atmosphere are critical for understanding regional land-atmosphere interactions and for validating satellite data. Here, we integrated multi-scale satellite data and eddy covariance flux measurements from a pilot experiment of coordinated observation with 24 participant field sites to estimate the gross primary production (GPP) and net ecosystem production (NEP) over semi-arid East Asia from site to regional scale at high temporal and spatial resolution. The coordination started with intensive instrument calibration and field surveys based on a common protocol. We calculated the footprint sizes and landscape heterogeneity over each site with fine resolution satellite data (Landsat and GF) and evaluated the contribution of vegetation patches to flux signals. The vegetation photosynthesis model was driven with MODIS-derived albedo and EVI and coordinated flux measurements. Generally, the GPP in this region was higher in the east and lower in the west, with distinct green spots over oases and montane forests. The estimated annual GPP was 40% greater than the MOD17 products. Further, we validated and corrected microwave (AMSR-E and AMSR2) derived

  15. Poisson Coordinates.

    Science.gov (United States)

    Li, Xian-Ying; Hu, Shi-Min

    2013-02-01

    Harmonic functions are the critical points of a Dirichlet energy functional, the linear projections of conformal maps. They play an important role in computer graphics, particularly for gradient-domain image processing and shape-preserving geometric computation. We propose Poisson coordinates, a novel transfinite interpolation scheme based on the Poisson integral formula, as a rapid way to estimate a harmonic function on a certain domain with desired boundary values. Poisson coordinates are an extension of the Mean Value coordinates (MVCs) which inherit their linear precision, smoothness, and kernel positivity. We give explicit formulas for Poisson coordinates in both continuous and 2D discrete forms. Superior to MVCs, Poisson coordinates are proved to be pseudoharmonic (i.e., they reproduce harmonic functions on n-dimensional balls). Our experimental results show that Poisson coordinates have lower Dirichlet energies than MVCs on a number of typical 2D domains (particularly convex domains). As well as presenting a formula, our approach provides useful insights for further studies on coordinates-based interpolation and fast estimation of harmonic functions.
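
    For reference, the continuous formula that the scheme discretizes is the classical Poisson integral on the unit disk, which extends boundary values f to a harmonic function u:

```latex
\[
  u\bigl(re^{i\theta}\bigr)
  = \frac{1}{2\pi}\int_{0}^{2\pi}
      \frac{1-r^{2}}{1-2r\cos(\theta-\varphi)+r^{2}}\,
      f\bigl(e^{i\varphi}\bigr)\,\mathrm{d}\varphi,
  \qquad 0 \le r < 1.
\]
```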

  16. Application of synthetic principal component analysis model to mine area farmland heavy metal pollution assessment

    Institute of Scientific and Technical Information of China (English)

    WANG Cong-lu; WU Chao; WANG Wei-jun

    2008-01-01

    Referring to GB5618-1995 on heavy metal pollution and using the statistical package SPSS, the major pollutants in mine area farmland heavy metal pollution were identified by variable clustering analysis. The mine area farmland heavy metal pollution situation was then assessed and classified by synthetic principal component analysis (PCA). The results show that variable clustering analysis is efficient in identifying the principal components of mine area farmland heavy metal pollution. The synthetic principal component scores of the soil samples, given by the synthetic principal component analysis, were sorted and clustered, revealing the data structure of soil heavy metal contamination and the relationships and pollution levels of the different soil samples. The results of assessing and classifying mine area farmland heavy metal pollution quality with synthetic component scores reflect the influence of both the major and the compound heavy metal pollutants. The identification and assessment results can provide reference and guidance for proposing control measures for mine area farmland heavy metal pollution and for focusing on the key treatment region.

  17. [Sample preparation methods for chromatographic analysis of organic components in atmospheric particulate matter].

    Science.gov (United States)

    Hao, Liang; Wu, Dapeng; Guan, Yafeng

    2014-09-01

    The determination of the organic composition of atmospheric particulate matter (PM) is of great importance in understanding how PM affects human health, environment, climate, and ecosystems. Organic components are also the scientific basis for emission source tracking, PM regulation, and risk management. Therefore, the molecular characterization of the organic fraction of PM has become one of the priority research issues in the field of environmental analysis. Due to the extreme complexity of PM samples, chromatographic methods have been the chief selection. The common procedure for the analysis of organic components in PM includes several steps: sample collection on fiber filters, sample preparation (transforming the sample into a form suitable for chromatographic analysis), and analysis by chromatographic methods. Among these steps, the sample preparation methods largely determine the throughput and the data quality. Solvent extraction methods followed by sample pretreatment (e.g., pre-separation, derivatization, pre-concentration) have long been used for PM sample analysis, while thermal desorption methods have mainly focused on the analysis of non-polar organic components in PM. In this paper, the sample preparation methods prior to chromatographic analysis of organic components in PM are reviewed comprehensively, and the corresponding merits and limitations of each method are briefly discussed.

  18. PROJECTION-PURSUIT BASED PRINCIPAL COMPONENT ANALYSIS: A LARGE SAMPLE THEORY

    Institute of Scientific and Technical Information of China (English)

    Jian ZHANG

    2006-01-01

    Principal component analysis (PCA) is one of the most celebrated methods in analysing multivariate data. One effort to extend PCA is projection pursuit (PP), a more general class of dimension-reduction techniques. However, the application of this extended procedure is often hampered by its computational complexity and by the lack of appropriate theory. In this paper, by use of empirical processes, we establish a large sample theory for the robust PP estimators of the principal components and the dispersion matrix.

  19. Sensor Fault Detection, Isolation and Reconstruction Using Nonlinear Principal Component Analysis

    Institute of Scientific and Technical Information of China (English)

    Mohamed-Faouzi Harkat; Salah Djelel; Noureddine Doghmane; Mohamed Benouaret

    2007-01-01

    The state reconstruction approach is very useful for sensor fault isolation, reconstruction of faulty measurements, and determination of the number of components retained in the principal component analysis (PCA) model. An extension of this approach based on a Nonlinear PCA (NLPCA) model is described in this paper. The NLPCA model is obtained using a five-layer neural network. A simulation example is given to show the performance of the proposed approach.
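
    One common realization of such a five-layer NLPCA model is an autoencoder network (input, mapping, bottleneck, demapping, output); the sketch below uses scikit-learn's MLPRegressor as a stand-in, with invented data and a simulated sensor bias, and is not the authors' network.

```python
# Autoencoder-style NLPCA: train a bottleneck network to reconstruct its
# inputs; a faulty sensor then shows a large per-sensor residual.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
t = rng.uniform(-1, 1, size=(500, 1))
X = np.hstack([t, t ** 2, np.sin(np.pi * t)]) + 0.01 * rng.normal(size=(500, 3))

# hidden layers: mapping(8) -> bottleneck(2) -> demapping(8), i.e. five layers
ae = MLPRegressor(hidden_layer_sizes=(8, 2, 8), activation="tanh",
                  max_iter=5000, random_state=0).fit(X, X)

X_fault = X.copy()
X_fault[:, 1] += 0.5                              # simulated sensor bias
resid = np.abs(ae.predict(X_fault) - X_fault)     # per-sensor residuals
print(resid.mean(axis=0))                         # sensor 1 stands out
```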

  20. 3-D inelastic analysis methods for hot section components. Volume 2: Advanced special functions models

    Science.gov (United States)

    Wilson, R. B.; Banerjee, P. K.

    1987-01-01

    This Annual Status Report presents the results of work performed during the third year of the 3-D Inelastic Analysis Methods for Hot Sections Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of computer codes that permit more accurate and efficient three-dimensional analyses of selected hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The computer codes embody a progression of mathematical models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components.