WorldWideScience

Sample records for component factor analysis

  1. Clinical usefulness of physiological components obtained by factor analysis

    International Nuclear Information System (INIS)

    Ohtake, Eiji; Murata, Hajime; Matsuda, Hirofumi; Yokoyama, Masao; Toyama, Hinako; Satoh, Tomohiko.

    1989-01-01

    The clinical usefulness of physiological components obtained by factor analysis was assessed in 99mTc-DTPA renography. Once definite physiological components have been obtained, other dynamic data can be analyzed with them. In this paper, dynamic renal function after ESWL (extracorporeal shock wave lithotripsy) treatment was examined using physiological components from the kidney before ESWL and/or a normal kidney. Changes in renal function could easily be evaluated by this method. The usefulness of the new analysis using physiological components is summarized as follows: 1) the change in a dynamic function can be assessed quantitatively as the change in the contribution ratio; 2) the change in a disease state can be evaluated morphologically as the change in the functional image. (author)

  2. Towards automatic analysis of dynamic radionuclide studies using principal-components factor analysis

    International Nuclear Information System (INIS)

    Nigran, K.S.; Barber, D.C.

    1985-01-01

    A method is proposed for automatic analysis of dynamic radionuclide studies using the mathematical technique of principal-components factor analysis. This method is considered as a possible alternative to the conventional manual regions-of-interest method widely used. The method emphasises the importance of introducing a priori information into the analysis about the physiology of at least one of the functional structures in a study. Information is added by using suitable mathematical models to describe the underlying physiological processes. A single physiological factor is extracted representing the particular dynamic structure of interest. Two spaces, 'study space, S' and 'theory space, T', are defined in forming the concept of the intersection of spaces. A one-dimensional intersection space is computed. An example from a dynamic 99mTc-DTPA kidney study is used to demonstrate the principle inherent in the proposed method. The method requires no correction for blood background activity, which is necessary when processing by the manual method. Careful isolation of the kidney by means of a region of interest is not required. The method is therefore less prone to operator influence and can be automated. (author)

  3. Influencing Factors of Catering and Food Service Industry Based on Principal Component Analysis

    OpenAIRE

    Zi Tang

    2014-01-01

    Scientific analysis of influencing factors is of great importance for the healthy development of catering and food service industry. This study attempts to present a set of critical indicators for evaluating the contribution of influencing factors to catering and food service industry in the particular context of Harbin City, Northeast China. Ten indicators that correlate closely with catering and food service industry were identified and performed by the principal component analysis method u...

  4. Clustering of metabolic and cardiovascular risk factors in the polycystic ovary syndrome: a principal component analysis.

    Science.gov (United States)

    Stuckey, Bronwyn G A; Opie, Nicole; Cussons, Andrea J; Watts, Gerald F; Burke, Valerie

    2014-08-01

    Polycystic ovary syndrome (PCOS) is a prevalent condition with heterogeneity of clinical features and cardiovascular risk factors that implies multiple aetiological factors and possible outcomes. To reduce a set of correlated variables to a smaller number of uncorrelated and interpretable factors that may delineate subgroups within PCOS or suggest pathogenetic mechanisms. We used principal component analysis (PCA) to examine the endocrine and cardiometabolic variables associated with PCOS defined by the National Institutes of Health (NIH) criteria. Data were retrieved from the database of a single clinical endocrinologist. We included women with PCOS (N = 378) who were not taking the oral contraceptive pill or other sex hormones, lipid lowering medication, metformin or other medication that could influence the variables of interest. PCA was performed retaining those factors with eigenvalues of at least 1.0. Varimax rotation was used to produce interpretable factors. We identified three principal components. In component 1, the dominant variables were homeostatic model assessment (HOMA) index, body mass index (BMI), high density lipoprotein (HDL) cholesterol and sex hormone binding globulin (SHBG); in component 2, systolic blood pressure, low density lipoprotein (LDL) cholesterol and triglycerides; in component 3, total testosterone and LH/FSH ratio. These components explained 37%, 13% and 11% of the variance in the PCOS cohort respectively. Multiple correlated variables from patients with PCOS can be reduced to three uncorrelated components characterised by insulin resistance, dyslipidaemia/hypertension or hyperandrogenaemia. Clustering of risk factors is consistent with different pathogenetic pathways within PCOS and/or differing cardiometabolic outcomes. Copyright © 2014 Elsevier Inc. All rights reserved.
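
    The retention and rotation steps described above (keep components with eigenvalues of at least 1.0, then apply varimax) can be sketched in NumPy. This is a minimal illustrative sketch, not the authors' code; `pca_kaiser` and `varimax` are hypothetical names, and the data below are simulated, not the PCOS cohort.

```python
import numpy as np

def pca_kaiser(X):
    """PCA on the correlation matrix, retaining components with
    eigenvalue >= 1.0 (the Kaiser criterion used in the study)."""
    R = np.corrcoef(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    k = int(np.sum(eigvals >= 1.0))                    # Kaiser criterion
    loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])   # component loadings
    return eigvals, loadings, k

def varimax(L, n_iter=100, tol=1e-8):
    """Orthogonal varimax rotation of a loading matrix, to make
    components easier to interpret."""
    p, k = L.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(n_iter):
        LR = L @ R
        B = L.T @ (LR**3 - (LR @ np.diag((LR**2).sum(axis=0))) / p)
        u, s, vt = np.linalg.svd(B)
        R = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):
            break
        d = d_new
    return L @ R
```

    For standardized variables the eigenvalues of the correlation matrix sum to the number of variables, so a component with eigenvalue ≥ 1 explains more variance than any single original variable; varimax is orthogonal, so it leaves each variable's communality unchanged.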

  5. A feasibility study on age-related factors of wrist pulse using principal component analysis.

    Science.gov (United States)

    Jang-Han Bae; Young Ju Jeon; Sanghun Lee; Jaeuk U Kim

    2016-08-01

    Various analysis methods for examining wrist pulse characteristics are needed for accurate pulse diagnosis. In this feasibility study, principal component analysis (PCA) was performed to observe age-related factors of the wrist pulse across various analysis parameters. Forty subjects in their 20s and 40s participated, and their wrist pulse and respiration signals were acquired with a pulse tonometric device. After pre-processing of the signals, twenty analysis parameters regarded as reflecting pulse characteristics were calculated and PCA was performed. As a result, we could reduce the complex parameters to a lower dimension, and age-related factors of the wrist pulse were observed by combining new analysis parameters derived from PCA. These results demonstrate that PCA can be a useful tool for analyzing wrist pulse signals.

  6. PRINCIPAL COMPONENT ANALYSIS OF FACTORS DETERMINING PHOSPHATE ROCK DISSOLUTION ON ACID SOILS

    Directory of Open Access Journals (Sweden)

    Yusdar Hilman

    2016-10-01

    Many of the agricultural soils in Indonesia are acidic and low in both total and available phosphorus, which severely limits their potential for crop production. These problems can be corrected by the application of chemical fertilizers. However, these fertilizers are expensive, and cheaper alternatives such as phosphate rock (PR) have been considered. Several soil factors, both chemical and physical, may influence the dissolution of PR in soils. The study aimed to identify PR dissolution factors and evaluate their relative magnitude. The experiment was conducted at the Soil Chemical Laboratory, Universiti Putra Malaysia, and the Indonesian Center for Agricultural Land Resources Research and Development from January to April 2002. Principal component analysis (PCA) was used to characterize acid soils in an incubation system in terms of a number of factors that may affect PR dissolution. The three major factors selected were soil texture, soil acidity, and fertilization. Using the scores of the individual factors as independent variables, stepwise regression analysis was performed to derive a PR dissolution function. The factors influencing PR dissolution, in order of importance, were soil texture, soil acidity, and fertilization. Soil texture factors, including clay content and organic C, and a soil acidity factor, P retention capacity, interacted positively with P dissolution and promoted PR dissolution effectively. Soil texture factors such as sand and silt content and soil acidity factors such as pH and exchangeable Ca decreased PR dissolution.

  7. Personality disorders in substance abusers: Validation of the DIP-Q through principal components factor analysis and canonical correlation analysis

    Directory of Open Access Journals (Sweden)

    Hesse Morten

    2005-05-01

    Abstract Background Personality disorders are common in substance abusers. Self-report questionnaires that can aid in the assessment of personality disorders are commonly used, but are rarely validated. Methods The Danish DIP-Q as a measure of co-morbid personality disorders in substance abusers was validated through principal components factor analysis and canonical correlation analysis. A 4-component structure was constructed based on 238 protocols, representing antagonism, neuroticism, introversion and conscientiousness. The structure was compared with (a) a 4-factor solution from the DIP-Q in a sample of Swedish drug and alcohol abusers (N = 133), and (b) a consensus 4-component solution based on a meta-analysis of published correlation matrices of dimensional personality disorder scales. Results It was found that the 4-factor model of personality was congruent across the Danish and Swedish samples, and showed good congruence with the consensus model. A canonical correlation analysis was conducted on a subset of the Danish sample with staff ratings of pathology. Three factors that correlated highly between the two variable sets were found. These were highly similar to the first three factors from the principal components analysis: antagonism, neuroticism and introversion. Conclusion The findings support the validity of the DIP-Q as a measure of DSM-IV personality disorders in substance abusers.

  8. Common Factor Analysis Versus Principal Component Analysis: Choice for Symptom Cluster Research

    Directory of Open Access Journals (Sweden)

    Hee-Ju Kim, PhD, RN

    2008-03-01

    Conclusion: If the study purpose is to explain correlations among variables and to examine the structure of the data (as is usual in symptom cluster research), CFA provides the more accurate result. If the purpose of a study is to summarize the data with a smaller number of variables, PCA is the choice. PCA can also be used as an initial step in CFA because it provides information regarding the maximum number and nature of the factors. In using factor analysis for symptom cluster research, several issues need to be considered, including the subjectivity of the solution, sample size, symptom selection, and level of measurement.
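
    The distinction drawn here (CFA models only the variance shared among variables; PCA summarizes total variance) comes down to what sits on the diagonal of the correlation matrix. Below is a minimal NumPy sketch of principal-axis factoring, one common CFA estimation method; the function name and the one-factor test model are illustrative assumptions, not from the article.

```python
import numpy as np

def paf(R, k, n_iter=500, tol=1e-8):
    """Principal-axis factoring: eigendecompose the correlation matrix R with
    communality estimates (shared variance) on the diagonal, iterated to
    convergence. Plain PCA would eigendecompose R as-is, with 1s on the diagonal."""
    h2 = 1 - 1 / np.diag(np.linalg.inv(R))   # initial communalities: squared multiple correlations
    Rh = R.copy()
    for _ in range(n_iter):
        np.fill_diagonal(Rh, h2)
        ev, V = np.linalg.eigh(Rh)
        idx = np.argsort(ev)[::-1][:k]
        L = V[:, idx] * np.sqrt(np.clip(ev[idx], 0, None))  # factor loadings
        h2_new = (L**2).sum(axis=1)
        if np.allclose(h2, h2_new, atol=tol):
            break
        h2 = h2_new
    return L
```

    On a correlation matrix generated by an exact one-factor model, this recovers the true loadings, whereas PCA loadings would be inflated by the unique variances — the practical difference between the two choices discussed above.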

  9. Latent physiological factors of complex human diseases revealed by independent component analysis of clinarrays

    Directory of Open Access Journals (Sweden)

    Chen David P

    2010-10-01

    Abstract Background Diagnosis and treatment of patients in the clinical setting is often driven by known symptomatic factors that distinguish one particular condition from another. Treatment based on noticeable symptoms, however, is limited to the types of clinical biomarkers collected, and is prone to overlooking dysfunctions in physiological factors not easily evident to medical practitioners. We used a vector-based representation of patient clinical biomarkers, or clinarrays, to search for latent physiological factors that underlie human diseases directly from clinical laboratory data. Knowledge of these factors could be used to improve assessment of disease severity and help to refine strategies for diagnosis and monitoring of disease progression. Results Applying independent component analysis to clinarrays built from patient laboratory measurements revealed both known and novel concomitant physiological factors for asthma, types 1 and 2 diabetes, cystic fibrosis, and Duchenne muscular dystrophy. Serum sodium was found to be the most significant factor for both type 1 and type 2 diabetes, and was also significant in asthma. TSH3, a measure of thyroid function, and blood urea nitrogen, indicative of kidney function, were factors unique to type 1 and type 2 diabetes, respectively. Platelet count was significant across all the diseases analyzed. Conclusions The results demonstrate that large-scale analyses of clinical biomarkers using unsupervised methods can offer novel insights into the pathophysiological basis of human disease, and suggest novel clinical utility for established laboratory measurements.
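
    Independent component analysis itself can be sketched compactly. The following is a minimal symmetric FastICA (tanh contrast) in NumPy, demonstrated on synthetic mixed signals rather than clinarrays; it is an illustrative sketch of the technique, not the study's pipeline.

```python
import numpy as np

def fastica(X, n_comp, n_iter=300, seed=0):
    """Minimal symmetric FastICA with a tanh contrast function.
    X: (channels, samples). Returns estimated independent sources."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)              # center
    d, E = np.linalg.eigh(np.cov(X))                   # whiten via eigendecomposition
    idx = np.argsort(d)[::-1][:n_comp]
    Z = (E[:, idx] / np.sqrt(d[idx])).T @ X            # whitened data
    W = rng.normal(size=(n_comp, n_comp))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        # fixed-point update: E[Z g(WZ)] - E[g'(WZ)] W
        W_new = G @ Z.T / Z.shape[1] - np.diag((1 - G**2).mean(axis=1)) @ W
        u, _, vt = np.linalg.svd(W_new)                # symmetric decorrelation
        W_new = u @ vt
        if np.abs(np.abs(np.diag(W_new @ W.T)) - 1).max() < 1e-9:
            W = W_new
            break
        W = W_new
    return W @ Z
```

    ICA recovers sources only up to permutation, sign and scale, which is why studies like this one interpret the components by the biomarkers that load on them rather than by their raw values.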

  10. Classification of peacock feather reflectance using principal component analysis similarity factors from multispectral imaging data.

    Science.gov (United States)

    Medina, José M; Díaz, José A; Vukusic, Pete

    2015-04-20

    Iridescent structural colors in biology exhibit sophisticated spatially-varying reflectance properties that depend on both the illumination and viewing angles. The classification of such spectral and spatial information in iridescent structurally colored surfaces is important to elucidate the functional role of irregularity and to improve understanding of color pattern formation at different length scales. In this study, we propose a non-invasive method for the spectral classification of spatial reflectance patterns at the micron scale based on the multispectral imaging technique and the principal component analysis similarity factor (PCASF). We demonstrate the effectiveness of this approach and its component methods by detailing its use in the study of the angle-dependent reflectance properties of Pavo cristatus (the common peacock) feathers, a species of peafowl very well known to exhibit bright and saturated iridescent colors. We show that multispectral reflectance imaging and PCASF approaches can be used as effective tools for spectral recognition of iridescent patterns in the visible spectrum and provide meaningful information for spectral classification of the irregularity of the microstructure in iridescent plumage.
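
    The PCA similarity factor at the heart of this classification (often attributed to Krzanowski) compares the subspaces spanned by the leading principal components of two data sets. A minimal NumPy sketch under that assumption; it is not the authors' implementation, and the arrays here stand in for multispectral reflectance measurements.

```python
import numpy as np

def pca_similarity(X1, X2, k):
    """Krzanowski-style PCA similarity factor between the top-k principal
    subspaces of two data sets: the mean squared cosine of the angles
    between the subspaces, ranging from 0 (orthogonal) to 1 (identical)."""
    def top_pcs(X):
        Xc = X - X.mean(axis=0)
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        return vt[:k].T                          # columns = top-k PC directions
    L1, L2 = top_pcs(X1), top_pcs(X2)
    return np.trace(L1.T @ L2 @ L2.T @ L1) / k
```

    Patterns can then be clustered or classified by thresholding this similarity, which is the role the PCASF plays in the spectral recognition described above.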

  11. Using network component analysis to dissect regulatory networks mediated by transcription factors in yeast.

    Directory of Open Access Journals (Sweden)

    Chun Ye

    2009-03-01

    Understanding the relationship between genetic variation and gene expression is a central question in genetics. With the availability of data from high-throughput technologies such as ChIP-chip, expression, and genotyping arrays, we can begin not only to identify associations but to understand how genetic variations perturb the underlying transcription regulatory networks to induce differential gene expression. In this study, we describe a simple model of transcription regulation where the expression of a gene is completely characterized by two properties: the concentrations and promoter affinities of active transcription factors. We devise a method that extends Network Component Analysis (NCA) to determine how genetic variations in the form of single nucleotide polymorphisms (SNPs) perturb these two properties. Applying our method to a segregating population of Saccharomyces cerevisiae, we found statistically significant examples of trans-acting SNPs located in regulatory hotspots that perturb transcription factor concentrations and affinities for target promoters to cause global differential expression, and cis-acting genetic variations that perturb the promoter affinities of transcription factors on a single gene to cause local differential expression. Although many genetic variations linked to gene expression have been identified, it is not clear how they perturb the underlying regulatory networks that govern gene expression. Our work begins to fill this void by showing that many genetic variations affect the concentrations of active transcription factors in a cell and their affinities for target promoters. Understanding the effects of these perturbations can help us to paint a more complete picture of the complex landscape of transcription regulation. The software package implementing the algorithms discussed in this work is available as a MATLAB package upon request.

  12. Driven Factors Analysis of China’s Irrigation Water Use Efficiency by Stepwise Regression and Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Renfu Jia

    2016-01-01

    This paper introduces an integrated approach to identify the major factors influencing the efficiency of irrigation water use in China. It combines multiple stepwise regression (MSR) and principal component analysis (PCA) to obtain more realistic results. In real-world case studies, a classical linear regression model often involves too many explanatory variables, and linear correlation among the variables cannot be eliminated. Linearly correlated variables invalidate the results of factor analysis. To overcome this issue and reduce the number of variables, the PCA technique is used in combination with MSR. On this basis, the status of irrigation water use in China was analyzed to find the five major factors that have significant impacts on irrigation water use efficiency. To illustrate the performance of the proposed approach, calculations based on real data were conducted and the results are presented in this paper.
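
    The reason PCA removes the linear-correlation problem is that component scores are mutually uncorrelated by construction, so they can safely enter a regression. A minimal principal-component-regression sketch in NumPy under that reading; `pcr` is an illustrative name and the data are synthetic, not the Chinese irrigation data.

```python
import numpy as np

def pcr(X, y, k):
    """Principal component regression: project standardized predictors onto
    their first k principal components (mutually uncorrelated scores), then
    regress y on those scores."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    _, _, vt = np.linalg.svd(Z, full_matrices=False)
    T = Z @ vt[:k].T                                   # PC scores
    design = np.column_stack([np.ones(len(y)), T])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta, design @ beta                         # coefficients, fitted values
```

    In the paper's workflow the stepwise-regression step would then select among the component scores rather than among the raw, collinear variables.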

  13. Clinical usefulness of physiological components obtained by factor analysis. Application to 99mTc-DTPA renography

    Energy Technology Data Exchange (ETDEWEB)

    Ohtake, Eiji; Murata, Hajime; Matsuda, Hirofumi; Yokoyama, Masao; Toyama, Hinako; Satoh, Tomohiko.

    1989-01-01

    The clinical usefulness of physiological components obtained by factor analysis was assessed in 99mTc-DTPA renography. Once definite physiological components have been obtained, other dynamic data can be analyzed with them. In this paper, dynamic renal function after ESWL (extracorporeal shock wave lithotripsy) treatment was examined using physiological components from the kidney before ESWL and/or a normal kidney. Changes in renal function could easily be evaluated by this method. The usefulness of the new analysis using physiological components is summarized as follows: (1) the change in a dynamic function can be assessed quantitatively as the change in the contribution ratio; (2) the change in a disease state can be evaluated morphologically as the change in the functional image.

  14. Clustering of leptin and physical activity with components of metabolic syndrome in Iranian population: an exploratory factor analysis.

    Science.gov (United States)

    Esteghamati, Alireza; Zandieh, Ali; Khalilzadeh, Omid; Morteza, Afsaneh; Meysamie, Alipasha; Nakhjavani, Manouchehr; Gouya, Mohammad Mehdi

    2010-10-01

    Metabolic syndrome (MetS), manifested by insulin resistance, dyslipidemia, central obesity, and hypertension, is thought to be associated with hyperleptinemia and physical activity. The aim of this study was to elucidate the factors underlying the components of MetS and also to test the suitability of leptin and physical activity as additional components of this syndrome. Data on individuals without a history of diabetes mellitus, aged 25-64 years, from the third national surveillance of risk factors of non-communicable diseases (SuRFNCD-2007), were analyzed. Performing factor analysis on waist circumference, homeostasis model assessment of insulin resistance, systolic blood pressure, triglycerides (TG) and high-density lipoprotein cholesterol (HDL-C) led to the extraction of two factors which explained around 59.0% of the total variance in both genders. When TG and HDL-C were replaced by the TG to HDL-C ratio, a single factor was obtained. In contrast to physical activity, the addition of leptin was consistent with a one-factor structure of MetS and improved the ability of the suggested models to identify obesity (BMI≥30 kg/m2, P<0.05), whereas physical activity loaded on the first identified factor. Our study shows that a one-factor structure underlying MetS is plausible and that the inclusion of leptin does not interfere with this structure. Further, this study suggests that physical activity influences MetS components via modulation of the main underlying pathophysiologic pathway of this syndrome.

  15. Towards Cognitive Component Analysis

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Ahrendt, Peter; Larsen, Jan

    2005-01-01

    Cognitive component analysis (COCA) is here defined as the process of unsupervised grouping of data such that the ensuing group structure is well-aligned with that resulting from human cognitive activity. We have earlier demonstrated that independent components analysis is relevant for representing...

  16. Principal component analysis for predicting transcription-factor binding motifs from array-derived data

    Directory of Open Access Journals (Sweden)

    Vincenti Matthew P

    2005-11-01

    Abstract Background The responses to interleukin 1 (IL-1) in human chondrocytes constitute a complex regulatory mechanism, where multiple transcription factors bind combinatorially to transcription-factor binding motifs (TFBMs). In order to select a critical set of TFBMs from genomic DNA information and array-derived data, an efficient algorithm to solve a combinatorial optimization problem is required. Although computational approaches based on evolutionary algorithms are commonly employed, an analytical algorithm would be useful to predict TFBMs at nearly no computational cost and to evaluate varying modelling conditions. Singular value decomposition (SVD) is a powerful method to derive the primary components of a given matrix. Applying SVD to a promoter matrix defined from regulatory DNA sequences, we derived a novel method to predict the critical set of TFBMs. Results The promoter matrix was defined to establish a quantitative relationship between the IL-1-driven mRNA alteration and the genomic DNA sequences of the IL-1 responsive genes. The matrix was decomposed with SVD, and the effects of 8 potential TFBMs (5'-CAGGC-3', 5'-CGCCC-3', 5'-CCGCC-3', 5'-ATGGG-3', 5'-GGGAA-3', 5'-CGTCC-3', 5'-AAAGG-3', and 5'-ACCCA-3') were predicted from a pool of 512 random DNA sequences. The prediction included matches to the core binding motifs of biologically known TFBMs such as AP2, SP1, EGR1, KROX, GC-BOX, ABI4, ETF, E2F, SRF, STAT, IK-1, PPARγ, STAF, ROAZ, and NFκB, and their significance was evaluated numerically using Monte Carlo simulation and a genetic algorithm. Conclusion The described SVD-based prediction is an analytical method to provide a set of potential TFBMs involved in transcriptional regulation. The results would be useful to evaluate analytically the contribution of individual DNA sequences.
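
    The analytical flavour of the SVD step can be illustrated in NumPy: given a promoter matrix M (genes × candidate motifs, e.g. counts of each pentamer in each promoter) and a vector e of expression changes, an SVD pseudoinverse yields motif weights in closed form. This is a hedged sketch of the general idea, not the paper's exact model; `motif_weights`, M and e are illustrative.

```python
import numpy as np

def motif_weights(M, e, rcond=1e-10):
    """Closed-form estimate of per-motif contribution weights from a promoter
    matrix M (genes x motifs) and an expression-change vector e, via the SVD
    pseudoinverse: w = V diag(1/s) U^T e."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_inv = np.where(s > rcond * s.max(), 1.0 / s, 0.0)   # truncate tiny singular values
    return Vt.T @ (s_inv * (U.T @ e))
```

    Unlike an evolutionary search, this solves the whole linear system at once, which is what makes it cheap to re-run under varying modelling conditions.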

  17. Multiscale principal component analysis

    International Nuclear Information System (INIS)

    Akinduko, A A; Gorban, A N

    2014-01-01

    Principal component analysis (PCA) is an important tool in exploring data. The conventional approach to PCA leads to a solution which favours structures with large variances. This is sensitive to outliers and can obscure interesting underlying structures. One of the equivalent definitions of PCA is that it seeks the subspaces that maximize the sum of squared pairwise distances between data projections. This definition opens up more flexibility in the analysis of principal components, which is useful in enhancing PCA. In this paper we introduce scales into PCA by maximizing only the sum of pairwise distances between projections for pairs of datapoints with distances within a chosen interval of values [l,u]. The resulting principal component decompositions in Multiscale PCA depend on the point (l,u) on the plane, and for each point we define projectors onto the principal components. Cluster analysis of these projectors reveals the structures in the data at various scales. Each structure is described by the eigenvectors at the medoid point of the cluster which represents the structure. We also use the distortion of projections as a criterion for choosing an appropriate scale, especially for data with outliers. The method was tested on both artificial and real data. For data with multiscale structure, the method was able to reveal the different structures in the data and to reduce the effect of outliers in the principal component analysis.
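
    The pairwise-distance definition used here lends itself to a direct sketch: restrict the sum of squared pairwise differences to pairs whose distance falls in [l,u] and eigendecompose the resulting matrix. A minimal NumPy illustration of that idea (not the authors' code); with the full range [0, ∞) it reduces exactly to ordinary PCA.

```python
import numpy as np

def multiscale_pca(X, l, u):
    """PCA restricted to point pairs whose Euclidean distance lies in [l, u].
    Sums (x_i - x_j)(x_i - x_j)^T over the selected pairs and eigendecomposes;
    for l = 0 and u = infinity this equals ordinary PCA up to scaling,
    because the sum over all pairs is n times the scatter matrix."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    i, j = np.nonzero(np.triu((D >= l) & (D <= u), k=1))   # unordered pairs in range
    diffs = X[i] - X[j]
    C = diffs.T @ diffs
    ev, V = np.linalg.eigh(C)
    order = np.argsort(ev)[::-1]
    return ev[order], V[:, order]                          # eigenvalues, components
```

    Sweeping (l,u) and clustering the resulting projectors is then the paper's mechanism for revealing structure at different scales.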

  18. Cognitive Component Analysis

    DEFF Research Database (Denmark)

    Feng, Ling

    2008-01-01

    This dissertation concerns the investigation of the consistency of statistical regularities in a signaling ecology and human cognition, while inferring appropriate actions for a speech-based perceptual task. It is based on unsupervised Independent Component Analysis providing a rich spectrum of audio contexts along with pattern recognition methods to map components to known contexts. It also involves looking for the right representations for auditory inputs, i.e. the data analytic processing pipelines invoked by human brains. The main ideas refer to Cognitive Component Analysis, defined as the process of unsupervised grouping of generic data such that the ensuing group structure is well-aligned with that resulting from human cognitive activity. Its hypothesis runs ecologically: features which are essentially independent in a context-defined ensemble can be efficiently coded as sparse...

  19. Euler principal component analysis

    NARCIS (Netherlands)

    Liwicki, Stephan; Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    Principal Component Analysis (PCA) is perhaps the most prominent learning tool for dimensionality reduction in pattern recognition and computer vision. However, the ℓ 2-norm employed by standard PCA is not robust to outliers. In this paper, we propose a kernel PCA method for fast and robust PCA,

  20. Bayesian Independent Component Analysis

    DEFF Research Database (Denmark)

    Winther, Ole; Petersen, Kaare Brandt

    2007-01-01

    In this paper we present an empirical Bayesian framework for independent component analysis. The framework provides estimates of the sources, the mixing matrix and the noise parameters, and is flexible with respect to choice of source prior and the number of sources and sensors. Inside the engine...

  1. Geophysical Factor Resolving of Rainfall Mechanism for Super Typhoons by Using Multiple Spatiotemporal Components Analysis

    Science.gov (United States)

    Huang, Chien-Lin; Hsu, Nien-Sheng

    2016-04-01

    This study develops a novel methodology to resolve the geophysical causes of typhoon-induced rainfall, considering diverse dynamic co-evolution across multiple spatiotemporal components. The multi-order hidden patterns of a complex, chaotic hydrological process are detected to understand the fundamental laws of the rainfall mechanism. The discovered spatiotemporal features are used to develop a descriptive statistical model for mechanism validation, modeling and further prediction during typhoons. The time series of hourly typhoon precipitation from different types of moving track, atmospheric field and landform are each passed through the signal-analysis process to qualify each type of rainfall cause and to quantify the corresponding degree of influence based on the measured geophysical atmospheric-hydrological variables. The methodology is applied to Taiwan Island, which comprises complex and diverse landforms. The identified driving causes include: (1) cloud height above the ground surface; (2) the co-movement effect induced by the typhoon wind field with the monsoon; (3) stem capacity; (4) interaction between the typhoon rain band and terrain; (5) variance in the structural intensity of the typhoon; and (6) integrated cloud density of the rain band. Results show that: (1) for a central maximum wind speed exceeding 51 m/s, Causes (1) and (3) are the primary generators of rainfall; (2) for a typhoon moving toward a direction of 155° to 175°, Cause (2) is primary; (3) for a direction of 90° to 155°, Cause (4) is primary; (4) for a typhoon passing over a mountain chain above 3500 m, Cause (5) is primary; and (5) for a moving speed lower than 18 km/h, Cause (6) is primary. In addition, the multiple geophysical component-based precipitation modeling achieves 81% average accuracy and an average correlation coefficient (CC) of 0.732 over an average duration of 46 hours, improving predictability.

  2. Principal component analysis of socioeconomic factors and their association with malaria in children from the Ashanti Region, Ghana.

    Science.gov (United States)

    Krefis, Anne Caroline; Schwarz, Norbert Georg; Nkrumah, Bernard; Acquah, Samuel; Loag, Wibke; Sarpong, Nimako; Adu-Sarkodie, Yaw; Ranft, Ulrich; May, Jürgen

    2010-07-13

    The socioeconomic and sociodemographic situation is an important component in the design and assessment of malaria control measures. In malaria-endemic areas, however, valid classification of socioeconomic factors is difficult due to the lack of standardized tax and income data. The objective of this study was to quantify household socioeconomic levels by applying principal component analysis (PCA) to a set of indicator variables, and to use the resulting classification scheme in a multivariate analysis of children <15 years of age presenting with and without malaria to the outpatient department of a rural hospital. In total, 1,496 children presenting to the hospital were examined for malaria parasites and interviewed with a standardized questionnaire. The information from eleven indicators of the family's housing situation was reduced by PCA to a socioeconomic score, which was then classified into three socioeconomic statuses (poor, average and rich). Their influence on malaria occurrence was analysed together with malaria risk co-factors, such as sex, the parents' educational and ethnic background, the number of children living in the household, applied malaria protection measures, place of residence, and the age of the child and the mother. The multivariate regression analysis demonstrated that the proportion of children with malaria decreased with increasing socioeconomic status as classified by PCA (p<0.05). Other independent factors for malaria risk were the use of malaria protection measures (p<0.05), the place of residence (p<0.05), and the age of the child (p<0.05). The socioeconomic situation is significantly associated with malaria even in holoendemic rural areas where economic differences are not pronounced. Valid classification of socioeconomic level is crucial if it is to be considered as a confounder in intervention trials and in the planning of malaria control measures.
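
    The scoring scheme described (reduce household indicators to a first-principal-component score, then cut into poor/average/rich) can be sketched in a few lines of NumPy. An illustrative sketch with synthetic data, not the study's eleven indicators; note that the sign of a principal component is arbitrary, so the score's direction must be checked against the indicators before labelling the terciles.

```python
import numpy as np

def wealth_index(A):
    """First-principal-component score from household indicator variables,
    split at terciles into classes 0 (poor), 1 (average), 2 (rich).
    The PC sign is arbitrary; orient it before interpreting the labels."""
    Z = (A - A.mean(axis=0)) / A.std(axis=0)
    _, _, vt = np.linalg.svd(Z, full_matrices=False)
    score = Z @ vt[0]                                  # first PC score per household
    cuts = np.quantile(score, [1 / 3, 2 / 3])
    return score, np.digitize(score, cuts)
```

    The resulting class variable is what enters the multivariate regression as the socioeconomic-status covariate.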

  3. Sleep spindle and K-complex detection using tunable Q-factor wavelet transform and morphological component analysis

    Directory of Open Access Journals (Sweden)

    Tarek eLajnef

    2015-07-01

    A novel framework for joint detection of sleep spindles and K-complex events, two hallmarks of sleep stage S2, is proposed. Sleep electroencephalography (EEG) signals are split into oscillatory (spindle) and transient (K-complex) components. This decomposition is conveniently achieved by applying morphological component analysis (MCA) to a sparse representation of EEG segments obtained with the recently introduced discrete tunable Q-factor wavelet transform (TQWT). Tuning the Q-factor provides a convenient and elegant tool to naturally decompose the signal into an oscillatory and a transient component. The actual detection step relies on thresholding (i) the transient component to reveal K-complexes and (ii) the time-frequency representation of the oscillatory component to identify sleep spindles. Optimal thresholds are derived from ROC-like curves (sensitivity versus FDR) on training sets, and the performance of the method is assessed on test data sets. We assessed the performance of our method using full-night sleep EEG data collected from 14 participants. In comparison to visual scoring (Expert 1), the proposed method detected spindles with a sensitivity of 83.18% and a false discovery rate (FDR) of 39%, while K-complexes were detected with a sensitivity of 81.57% and an FDR of 29.54%. Similar performances were obtained when using a second expert as benchmark. In addition, when the TQWT and MCA steps were excluded from the pipeline, the detection sensitivities dropped to 70% for spindles and 76.97% for K-complexes, while the FDRs rose to 43.62% and 49.09% respectively. Finally, we also evaluated the performance of the proposed method on a set of publicly available sleep EEG recordings. Overall, the results suggest that the TQWT-MCA method may be a valuable alternative to existing spindle and K-complex detection methods. Paths for improvements and further validations with large-scale standard open-access benchmarking data sets are

  4. Analysis of factors affecting baseline SF-36 Mental Component Summary in Adult Spinal Deformity and its impact on surgical outcomes.

    Science.gov (United States)

    Mmopelwa, Tiro; Ayhan, Selim; Yuksel, Selcen; Nabiyev, Vugar; Niyazi, Asli; Pellise, Ferran; Alanay, Ahmet; Sanchez Perez Grueso, Francisco Javier; Kleinstuck, Frank; Obeid, Ibrahim; Acaroglu, Emre

    2018-03-01

    To identify the factors that affect SF-36 mental component summary (MCS) in patients with adult spinal deformity (ASD) at the time of presentation, and to analyse the effect of SF-36 MCS on clinical outcomes in surgically treated patients. Prospectively collected data from a multicentric ASD database were analysed for baseline parameters. Then, the same database for surgically treated patients with a minimum of 1-year follow-up was analysed to see the effect of baseline SF-36 MCS on treatment results. A clinically useful SF-36 MCS was determined by ROC curve analysis. A total of 229 patients with the baseline parameters were analysed. A strong correlation was found between SF-36 MCS and SRS-22, ODI, gender, and diagnosis. The factors affecting baseline SF-36 MCS in an ASD population are other HRQOL parameters such as SRS-22 and ODI as well as baseline thoracic kyphosis and gender. This study has also demonstrated that baseline SF-36 MCS does not necessarily have any effect on the treatment results by surgery as assessed by SRS-22 or ODI. Level III, prognostic study. Copyright © 2018 Turkish Association of Orthopaedics and Traumatology. Production and hosting by Elsevier B.V. All rights reserved.

  5. 3-Way characterization of soils by Procrustes rotation, matrix-augmented principal components analysis and parallel factor analysis

    Czech Academy of Sciences Publication Activity Database

    Andrade, J.M.; Kubista, Mikael; Carlosena, A.; Prada, D.

    2007-01-01

    Roč. 603, č. 1 (2007), s. 20-29 ISSN 0003-2670 Institutional research plan: CEZ:AV0Z50520514 Keywords : PCA * heavy metals * soil Subject RIV: EB - Genetics ; Molecular Biology Impact factor: 3.186, year: 2007

  6. Principal component regression analysis with SPSS.

    Science.gov (United States)

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces all indices of multicollinearity diagnoses, the basic principle of principal component regression and the determination of the 'best' equation method. The paper uses an example to describe how to do principal component regression analysis with SPSS 10.0, including all calculating processes of the principal component regression and all operations of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity. A simplified, faster and accurate statistical analysis is achieved through principal component regression with SPSS.
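The same workflow can be mirrored outside SPSS. A minimal sketch of principal component regression using Python's scikit-learn, on synthetic data with a deliberately collinear pair of predictors (all data and dimensions here are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Collinear predictors: x2 is nearly a copy of x1, a classic multicollinearity setup.
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = 2.0 * x1 + 0.5 * x3 + rng.normal(scale=0.1, size=n)

# Regress y on the leading principal components instead of the raw columns.
pcr = make_pipeline(PCA(n_components=2), LinearRegression())
pcr.fit(X, y)
r2 = pcr.score(X, y)
```

Because the regression is fit on the leading components rather than on the raw collinear columns, the unstable coefficient estimates that multicollinearity causes never arise.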

  7. Independent component analysis: recent advances

    OpenAIRE

    Hyvärinen, Aapo

    2013-01-01

    Independent component analysis is a probabilistic method for learning a linear transform of a random vector. The goal is to find components that are maximally independent and non-Gaussian (non-normal). Its fundamental difference to classical multi-variate statistical methods is in the assumption of non-Gaussianity, which enables the identification of original, underlying components, in contrast to classical methods. The basic theory of independent component analysis was mainly developed in th...
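A toy illustration of the idea, using scikit-learn's FastICA as a stand-in implementation: mix two non-Gaussian sources with a known linear transform, then recover them from the mixtures alone (sources, mixing matrix, and sizes are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)

# Two non-Gaussian sources: a sine wave and a square wave.
s1 = np.sin(2 * np.pi * t)
s2 = np.sign(np.sin(3 * np.pi * t))
S = np.column_stack([s1, s2])

# Mix them with a fixed linear transform, then try to undo it with ICA.
A = np.array([[1.0, 0.5], [0.4, 1.0]])
X = S @ A.T

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)   # estimated sources, up to order/scale/sign
```

The recovery is only up to permutation, scale, and sign of the components; that indeterminacy is inherent to the model, not a defect of the algorithm.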

  8. Factor structure underlying components of allostatic load.

    Directory of Open Access Journals (Sweden)

    Jeanne M McCaffery

    Full Text Available Allostatic load is a commonly used metric of health risk based on the hypothesis that recurrent exposure to environmental demands (e.g., stress) engenders a progressive dysregulation of multiple physiological systems. Prominent indicators of response to environmental challenges, such as stress-related hormones, sympatho-vagal balance, or inflammatory cytokines, comprise primary allostatic mediators. Secondary mediators reflect ensuing biological alterations that accumulate over time and confer risk for clinical disease but overlap substantially with a second metric of health risk, the metabolic syndrome. Whether allostatic load mediators covary and thus warrant treatment as a unitary construct remains to be established and, in particular, the relation of allostatic load parameters to the metabolic syndrome requires elucidation. Here, we employ confirmatory factor analysis to test: (1) whether a single common factor underlies variation in physiological systems associated with allostatic load; and (2) whether allostatic load parameters continue to load on a single common factor if a second factor representing the metabolic syndrome is also modeled. Participants were 645 adults from Allegheny County, PA (30-54 years old, 82% non-Hispanic white, 52% female) who were free of confounding medications. Model fitting supported a single, second-order factor underlying variance in the allostatic load components available in this study (metabolic, inflammatory and vagal measures). Further, this common factor reflecting covariation among allostatic load components persisted when a latent factor representing metabolic syndrome facets was conjointly modeled. Overall, this study provides novel evidence that the modeled allostatic load components do share common variance as hypothesized. Moreover, the common variance suggests the existence of statistical coherence above and beyond that attributable to the metabolic syndrome.
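Confirmatory factor analysis proper needs dedicated SEM software, but the single-common-factor hypothesis tested above can be illustrated with an exploratory one-factor model. A sketch using scikit-learn's FactorAnalysis on simulated indicators (the loadings, noise level, and sample size are invented for illustration):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)

# Simulate six indicators driven by one latent factor plus unique noise,
# mimicking a "single common factor" structure.
n = 1000
latent = rng.normal(size=n)
loadings = np.array([0.8, 0.7, 0.6, 0.75, 0.65, 0.7])
X = np.outer(latent, loadings) + rng.normal(scale=0.5, size=(n, 6))

# Fit a one-factor model and read off the estimated loadings.
fa = FactorAnalysis(n_components=1, random_state=0)
fa.fit(X)
est = fa.components_.ravel()   # estimated loadings (sign is arbitrary)
if est.sum() < 0:
    est = -est
```

If the data really do share one common factor, the estimated loadings track the generating ones; a confirmatory analysis would additionally test model fit against alternatives such as a second metabolic-syndrome factor.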

  9. Analysis of factors controlling soil phosphorus loss with surface runoff in Huihe National Nature Reserve by principal component and path analysis methods.

    Science.gov (United States)

    He, Jing; Su, Derong; Lv, Shihai; Diao, Zhaoyan; Bu, He; Wo, Qiang

    2018-01-01

    Phosphorus (P) loss with surface runoff accounts for much of the P input to freshwater and the resulting acceleration of its eutrophication. Many studies have focused on factors affecting P loss with surface runoff from soils, but rarely on the relationship among these factors. In the present study, rainfall simulation of P loss with surface runoff was conducted in Huihe National Nature Reserve, in Hulunbeier grassland, China, and the relationships between P loss with surface runoff, soil properties, and rainfall conditions were examined. Principal component analysis and path analysis were used to analyze the direct and indirect effects on P loss with surface runoff. The results showed that P loss with surface runoff was closely correlated with soil electrical conductivity, soil pH, soil Olsen P, soil total nitrogen (TN), soil total phosphorus (TP), and soil organic carbon (SOC). The main driving factors which influenced P loss with surface runoff were soil TN, soil pH, soil Olsen P, and soil water content. Path analysis and determination coefficient analysis indicated that the standard multiple regression equation for P loss with surface runoff and each main factor was Y = 7.429 - 0.439 (soil TN) - 6.834 (soil pH) + 1.721 (soil Olsen-P) + 0.183 (soil water content) (r = 0.487). The effect of physical and chemical properties of undisturbed soils on P loss with surface runoff was discussed, and the soil water content and soil Olsen P were strongly positive influences on the P loss with surface runoff.
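The multiple regression at the core of a path analysis is ordinary least squares on the predictor matrix. A numpy sketch with invented coefficients standing in for the soil predictors (this is synthetic data, not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical standardized soil predictors and a response built from known weights.
n = 150
X = rng.normal(size=(n, 4))          # columns: TN, pH, Olsen-P, water content (illustrative)
beta_true = np.array([-0.4, -0.7, 1.2, 0.2])
y = 7.4 + X @ beta_true + rng.normal(scale=0.3, size=n)

# Ordinary least squares via lstsq on the design matrix with an intercept column.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
intercept, beta_hat = coef[0], coef[1:]
```

Path analysis then decomposes each predictor's total correlation with Y into a direct effect (its own coefficient) and indirect effects routed through its correlations with the other predictors.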

  10. Shifted Independent Component Analysis

    DEFF Research Database (Denmark)

    Mørup, Morten; Madsen, Kristoffer Hougaard; Hansen, Lars Kai

    2007-01-01

    Delayed mixing is a problem of theoretical interest and practical importance, e.g., in speech processing, bio-medical signal analysis and financial data modelling. Most previous analyses have been based on models with integer shifts, i.e., shifts by a number of samples, and have often been carried...

  11. A principal components analysis of the factors effecting personal exposure to air pollution in urban commuters in Dublin, Ireland.

    Science.gov (United States)

    McNabola, Aonghus; Broderick, Brian M; Gill, Laurence W

    2009-10-01

    Principal component analysis was used to examine air pollution personal exposure data of four urban commuter transport modes for their interrelationships between pollutants and relationships with traffic and meteorological data. Air quality samples of PM2.5 and VOCs were recorded during peak traffic congestion for the car, bus, cyclist and pedestrian between January 2005 and June 2006 on a busy route in Dublin, Ireland. In total, 200 personal exposure samples were recorded each comprising 17 variables describing the personal exposure concentrations, meteorological conditions and traffic conditions. The data reduction technique, principal component analysis (PCA), was used to create weighted linear combinations of the data and these were subsequently examined for interrelationships between the many variables recorded. The results of the PCA found that personal exposure concentrations in non-motorised forms of transport were influenced to a higher degree by wind speed, whereas personal exposure concentrations in motorised forms of transport were influenced to a higher degree by traffic congestion. The findings of the investigation show that the most effective mechanisms of personal exposure reduction differ between motorised and non-motorised modes of commuter transport.

  12. Analysis of factors controlling sediment phosphorus flux potential of wetlands in Hulun Buir grassland by principal component and path analysis method.

    Science.gov (United States)

    He, Jing; Su, Derong; Lv, Shihai; Diao, Zhaoyan; Ye, Shengxing; Zheng, Zhirong

    2017-11-08

    Phosphorus (P) flux potential can predict the trend of phosphorus release from wetland sediments to water and provide scientific parameters for further monitoring and management of phosphorus flux from wetland sediments to overlying water. Many studies have focused on factors affecting sediment P flux potential at the sediment-water interface, but rarely on the relationship among these factors. In the present study, an experiment on sediment P flux potential at the sediment-water interface was conducted in six wetlands in Hulun Buir grassland, China, and the relationships among sediment P flux potential, sediment physical properties, and sediment chemical characteristics were examined. Principal component analysis and path analysis were used to examine the correlations and the direct and indirect effects of these factors on sediment P flux potential at the sediment-water interface. Results indicated that the major factors affecting sediment P flux potential were the amount of organophosphate-degrading bacteria in the sediment, Ca-P content, and total phosphorus concentration. The factors directly influencing sediment P flux potential were sediment Ca-P content, Olsen-P content, SOC content, and sediment Al-P content. The factors indirectly influencing sediment P flux potential were sediment Olsen-P content, sediment SOC content, sediment Ca-P content, and sediment Al-P content. The standard multiple regression describing the relationship between sediment P flux potential and its major effect factors was Y = 5.849 - 1.025X1 - 1.995X2 + 0.188X3 - 0.282X4 (r = 0.9298, p < 0.01, n = 96), where Y is sediment P flux potential at the sediment-water interface, X1 is sediment Ca-P content, X2 is sediment Olsen-P content, X3 is sediment SOC content, and X4 is sediment Al-P content. Therefore, future research will focus on these sediment properties to analyze the

  13. On Bayesian Principal Component Analysis

    Czech Academy of Sciences Publication Activity Database

    Šmídl, Václav; Quinn, A.

    2007-01-01

    Roč. 51, č. 9 (2007), s. 4101-4123 ISSN 0167-9473 R&D Projects: GA MŠk(CZ) 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : Principal component analysis ( PCA ) * Variational bayes (VB) * von-Mises–Fisher distribution Subject RIV: BC - Control Systems Theory Impact factor: 1.029, year: 2007 http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V8V-4MYD60N-6&_user=10&_coverDate=05%2F15%2F2007&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=b8ea629d48df926fe18f9e5724c9003a

  14. Multiview Bayesian Correlated Component Analysis

    DEFF Research Database (Denmark)

    Kamronn, Simon Due; Poulsen, Andreas Trier; Hansen, Lars Kai

    2015-01-01

    are identical. Here we propose a hierarchical probabilistic model that can infer the level of universality in such multiview data, from completely unrelated representations, corresponding to canonical correlation analysis, to identical representations as in correlated component analysis. This new model, which...... we denote Bayesian correlated component analysis, evaluates favorably against three relevant algorithms in simulated data. A well-established benchmark EEG data set is used to further validate the new model and infer the variability of spatial representations across multiple subjects....

  15. Factors affecting medication adherence in community-managed patients with hypertension based on the principal component analysis: evidence from Xinjiang, China

    Directory of Open Access Journals (Sweden)

    Zhang YJ

    2018-05-01

    Full Text Available Yuji Zhang,* Xiaoju Li,* Lu Mao, Mei Zhang, Ke Li, Yinxia Zheng, Wangfei Cui, Hongpo Yin, Yanli He, Mingxia Jing Department of Public Health, Shihezi University School of Medicine, Shihezi, Xinjiang, China *These authors contributed equally to this work Purpose: The analysis of factors affecting the nonadherence to antihypertensive medications is important in the control of blood pressure among patients with hypertension. The purpose of this study was to assess the relationship between factors and medication adherence in Xinjiang community-managed patients with hypertension based on the principal component analysis. Patients and methods: A total of 1,916 community-managed patients with hypertension, selected randomly through a multi-stage sampling, participated in the survey. Self-designed questionnaires were used to classify the participants as either adherent or nonadherent to their medication regimen. A principal component analysis was used in order to eliminate the correlation between factors. Factors related to nonadherence were analyzed by using a χ2-test and a binary logistic regression model. Results: This study extracted nine common factors, with a cumulative variance contribution rate of 63.6%. Further analysis revealed that the following variables were significantly related to nonadherence: severity of disease, community management, diabetes, and taking traditional medications. Conclusion: Community management plays an important role in improving the patients’ medication-taking behavior. Regular medication regimen instruction and better community management services at the community level have the potential to reduce nonadherence. Mild hypertensive patients should be monitored by community health care providers. Keywords: hypertension, medication adherence, factors, principal component analysis, community management, China

  16. Factors affecting medication adherence in community-managed patients with hypertension based on the principal component analysis: evidence from Xinjiang, China.

    Science.gov (United States)

    Zhang, Yuji; Li, Xiaoju; Mao, Lu; Zhang, Mei; Li, Ke; Zheng, Yinxia; Cui, Wangfei; Yin, Hongpo; He, Yanli; Jing, Mingxia

    2018-01-01

    The analysis of factors affecting the nonadherence to antihypertensive medications is important in the control of blood pressure among patients with hypertension. The purpose of this study was to assess the relationship between factors and medication adherence in Xinjiang community-managed patients with hypertension based on the principal component analysis. A total of 1,916 community-managed patients with hypertension, selected randomly through a multi-stage sampling, participated in the survey. Self-designed questionnaires were used to classify the participants as either adherent or nonadherent to their medication regimen. A principal component analysis was used in order to eliminate the correlation between factors. Factors related to nonadherence were analyzed by using a χ2-test and a binary logistic regression model. This study extracted nine common factors, with a cumulative variance contribution rate of 63.6%. Further analysis revealed that the following variables were significantly related to nonadherence: severity of disease, community management, diabetes, and taking traditional medications. Community management plays an important role in improving the patients' medication-taking behavior. Regular medication regimen instruction and better community management services at the community level have the potential to reduce nonadherence. Mild hypertensive patients should be monitored by community health care providers.
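The pipeline described here, PCA to decorrelate the questionnaire variables followed by a binary regression on the result, can be sketched in scikit-learn on synthetic survey data (all variable names, dimensions, and effect sizes are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)

# Synthetic survey: 12 correlated questionnaire items, with adherence driven
# by an underlying severity score.
n = 600
severity = rng.normal(size=n)
items = np.outer(severity, rng.uniform(0.5, 1.0, 12)) + rng.normal(scale=0.8, size=(n, 12))
adherent = (severity + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Decorrelate the items with PCA, then model adherence on the components.
model = make_pipeline(PCA(n_components=4), LogisticRegression())
model.fit(items, adherent)
acc = model.score(items, adherent)
```

Fitting the logistic model on uncorrelated components rather than on the raw correlated items is what "eliminating the correlation between factors" buys: the component coefficients can be interpreted without multicollinearity inflating their variance.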

  17. Principal component analysis of the main factors of line intensity enhancements observed in oscillating direct current plasma

    International Nuclear Information System (INIS)

    Stoiljkovic, Milovan M.; Pasti, Igor A.; Momcilovic, Milos D.; Savovic, Jelena J.; Pavlovic, Mirjana S.

    2010-01-01

    Enhancement of emission line intensities by induced oscillations of direct current (DC) arc plasma with continuous aerosol sample supply was investigated using multivariate statistics. Principal component analysis (PCA) was employed to evaluate enhancements of 34 atomic spectral lines belonging to 33 elements and 35 ionic spectral lines belonging to 23 elements. Correlation and classification of the elements were done not only by a single property such as the first ionization energy, but also by considering other relevant parameters. Special attention was paid to the influence of the oxide bond strength in an attempt to clarify/predict the enhancement effect. Energies of vaporization, atomization, and excitation were also considered in the analysis. In the case of atomic lines, the best correlation between the enhancements and first ionization energies was obtained as a negative correlation, with weak consistency in grouping of elements in score plots. Conversely, in the case of ionic lines, the best correlation of the enhancements with the sum of the first ionization energies and oxide bond energies was obtained as a positive correlation, with four distinctive groups of elements. The role of the gas-phase atom-oxide bond energy in the entire enhancement effect is underlined.

  18. Evaluation of chemical transport model predictions of primary organic aerosol for air masses classified by particle-component-based factor analysis

    OpenAIRE

    C. A. Stroud; M. D. Moran; P. A. Makar; S. Gong; W. Gong; J. Zhang; J. G. Slowik; J. P. D. Abbatt; G. Lu; J. R. Brook; C. Mihele; Q. Li; D. Sills; K. B. Strawbridge; M. L. McGuire

    2012-01-01

    Observations from the 2007 Border Air Quality and Meteorology Study (BAQS-Met 2007) in Southern Ontario, Canada, were used to evaluate predictions of primary organic aerosol (POA) and two other carbonaceous species, black carbon (BC) and carbon monoxide (CO), made for this summertime period by Environment Canada's AURAMS regional chemical transport model. Particle component-based factor analysis was applied to aerosol mass spectrometer measurements made at one urban site (Windsor, ON) and two...

  19. Dynamics and spatio-temporal variability of environmental factors in Eastern Australia using functional principal component analysis

    Science.gov (United States)

    Szabo, J.K.; Fedriani, E.M.; Segovia-Gonzalez, M. M.; Astheimer, L.B.; Hooper, M.J.

    2010-01-01

    This paper introduces a new technique in ecology to analyze spatial and temporal variability in environmental variables. By using simple statistics, we explore the relations between abiotic and biotic variables that influence animal distributions. However, spatial and temporal variability in rainfall, a key variable in ecological studies, can cause difficulties to any basic model including time evolution. The study was of a landscape scale (three million square kilometers in eastern Australia), mainly over the period of 1998–2004. We simultaneously considered qualitative spatial (soil and habitat types) and quantitative temporal (rainfall) variables in a Geographical Information System environment. In addition to some techniques commonly used in ecology, we applied a new method, Functional Principal Component Analysis, which proved to be very suitable for this case, as it explained more than 97% of the total variance of the rainfall data, providing us with substitute variables that are easier to manage and are even able to explain rainfall patterns. The main variable came from a habitat classification that showed strong correlations with rainfall values and soil types. © 2010 World Scientific Publishing Company.

  20. Factor analysis

    CERN Document Server

    Gorsuch, Richard L

    2013-01-01

    Comprehensive and comprehensible, this classic covers the basic and advanced topics essential for using factor analysis as a scientific tool in psychology, education, sociology, and related areas. Emphasizing the usefulness of the techniques, it presents sufficient mathematical background for understanding and sufficient discussion of applications for effective use. This includes not only theory but also the empirical evaluations of the importance of mathematical distinctions for applied scientific analysis.

  1. Insight into the heterogeneous adsorption of humic acid fluorescent components on multi-walled carbon nanotubes by excitation-emission matrix and parallel factor analysis.

    Science.gov (United States)

    Yang, Chenghu; Liu, Yangzhi; Cen, Qiulin; Zhu, Yaxian; Zhang, Yong

    2018-02-01

    The heterogeneous adsorption behavior of commercial humic acid (HA) on pristine and functionalized multi-walled carbon nanotubes (MWCNTs) was investigated by fluorescence excitation-emission matrix and parallel factor (EEM-PARAFAC) analysis. The kinetics, isotherms, thermodynamics and mechanisms of adsorption of HA fluorescent components onto MWCNTs were the focus of the present study. Three humic-like fluorescent components were distinguished, including one carboxylic-like fluorophore, C1 (λex/λem = (250, 310) nm/428 nm), and two phenolic-like fluorophores, C2 (λex/λem = (300, 460) nm/552 nm) and C3 (λex/λem = (270, 375) nm/520 nm). The Lagergren pseudo-second-order model can be used to describe the adsorption kinetics of the HA fluorescent components. In addition, both the Freundlich and Langmuir models can be suitably employed to describe the adsorption of the HA fluorescent components onto MWCNTs, with significantly high correlation coefficients (R2 > 0.94). A clear difference in adsorption affinity (Kd) and in the degree of nonlinear adsorption among the HA fluorescent components on the MWCNTs was observed. The adsorption mechanism suggested that the π-π electron donor-acceptor (EDA) interaction played an important role in the interaction between the HA fluorescent components and the three MWCNTs. Furthermore, the values of the thermodynamic parameters, including the Gibbs free energy change (ΔG°), enthalpy change (ΔH°) and entropy change (ΔS°), showed that the adsorption of the HA fluorescent components on MWCNTs was spontaneous and exothermic. Copyright © 2017 Elsevier Inc. All rights reserved.
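PARAFAC models an excitation-emission-sample data cube as a sum of trilinear (rank-1) components. As a minimal illustration of the trilinear idea, the numpy sketch below recovers a single rank-1 component from a synthetic tensor by higher-order power iteration; real EEM-PARAFAC work fits several components by alternating least squares with non-negativity constraints, which this sketch does not attempt:

```python
import numpy as np

rng = np.random.default_rng(5)

# A synthetic rank-1 "EEM cube": excitation profile x emission profile x concentration.
ex = np.abs(rng.normal(size=20))
em = np.abs(rng.normal(size=30))
conc = np.abs(rng.normal(size=10))
T = np.einsum('i,j,k->ijk', ex, em, conc)

# Higher-order power iteration for the dominant rank-1 (trilinear) component.
u, v, w = rng.normal(size=20), rng.normal(size=30), rng.normal(size=10)
for _ in range(100):
    u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
    v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
    w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
lam = np.einsum('ijk,i,j,k->', T, u, v, w)
T_hat = lam * np.einsum('i,j,k->ijk', u, v, w)
```

For a genuinely rank-1 tensor the iteration converges essentially immediately and the reconstruction is exact; the recovered u, v, w play the roles of the excitation spectrum, emission spectrum, and relative concentration of one fluorophore.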

  2. Identification of factors most important for ammonia emission from fertilized soils for potato production using principal component analysis

    Science.gov (United States)

    Guodoong Liu; Yuncong Li; Kati W. Migliaccio; Ying Ouyang; Ashok K. Alva

    2011-01-01

    Ammonia (NH3) emissions from fertilized soils are a costly problem that is undermining agricultural and ecological sustainability worldwide. Ammonia emissions from crop production have been reliably documented in recent years. However, insufficient efforts have been made to determine the factors most influential in facilitating NH3 emissions. The goal of this study was...

  3. Functional Generalized Structured Component Analysis.

    Science.gov (United States)

    Suk, Hye Won; Hwang, Heungsun

    2016-12-01

    An extension of Generalized Structured Component Analysis (GSCA), called Functional GSCA, is proposed to analyze functional data that are considered to arise from an underlying smooth curve varying over time or other continua. GSCA has been geared for the analysis of multivariate data. Accordingly, it cannot deal with functional data that often involve different measurement occasions across participants and a large number of measurement occasions that exceed the number of participants. Functional GSCA addresses these issues by integrating GSCA with spline basis function expansions that represent infinite-dimensional curves onto a finite-dimensional space. For parameter estimation, functional GSCA minimizes a penalized least squares criterion by using an alternating penalized least squares estimation algorithm. The usefulness of functional GSCA is illustrated with gait data.

  4. The application of iterative transformation factor analysis to resolve multi-component EXAFS spectra of uranium(VI) complexes with acetic acid as a function of pH

    International Nuclear Information System (INIS)

    Rossberg, A.; Reich, T.

    2002-01-01

    Synchrotron-based EXAFS spectroscopy is a powerful technique to obtain structural information on radionuclide complexes in solution. Depending on the chemical conditions of the samples, several radionuclide species can coexist in the solution, as is often the case for environmentally related samples. All radionuclide species, which may have different near-neighbour environments, contribute to the measured EXAFS signal. In order to isolate the EXAFS spectra of the individual species (pure spectral components), it is necessary, in a first step, to measure a series of samples where their composition is changed by variation of one physico-chemical parameter (e.g. pH, concentration, etc.). For the spectral decomposition it is necessary that the EXAFS signal change as a function of the chosen physico-chemical parameter. In a second step, the series of EXAFS spectra is analysed with Eigen analysis and Iterative Transformation Factor Analysis (ITFA). As a result of the ITFA one obtains: a) for each sample the relative concentration of the structurally distinguishable species and b) their corresponding pure spectral components. From the information obtained in a), one can construct a speciation diagram. The pure spectral components contain the structural information of the individual species, which can be extracted by conventional EXAFS analysis. To evaluate our ITFA algorithm for EXAFS analysis of mixtures, we prepared a series of eight solution samples of 0.05 M uranium(VI) and 1 M acetate (Ac) in the pH range of 0.1 to 4.5. From thermodynamic constants it is known that under these conditions up to four species can occur: uranyl hydrate, and the 1:1, 1:2 and 1:3 complexes of uranyl acetate. The uranium LIII-edge EXAFS spectra were measured at room temperature in transmission mode at the Rossendorf Beamline (ROBL) at the ESRF. The average bond length between uranium and the equatorial oxygen atoms (Oeq) increases from 2.40 to 2.46 angstrom with increasing pH. This increase

  5. Interpretable functional principal component analysis.

    Science.gov (United States)

    Lin, Zhenhua; Wang, Liangliang; Cao, Jiguo

    2016-09-01

    Functional principal component analysis (FPCA) is a popular approach to explore major sources of variation in a sample of random curves. These major sources of variation are represented by functional principal components (FPCs). The intervals where the values of FPCs are significant are interpreted as where sample curves have major variations. However, these intervals are often hard for naïve users to identify, because of the vague definition of "significant values". In this article, we develop a novel penalty-based method to derive FPCs that are only nonzero precisely in the intervals where the values of FPCs are significant, whence the derived FPCs possess better interpretability than the FPCs derived from existing methods. To compute the proposed FPCs, we devise an efficient algorithm based on projection deflation techniques. We show that the proposed interpretable FPCs are strongly consistent and asymptotically normal under mild conditions. Simulation studies confirm that with a competitive performance in explaining variations of sample curves, the proposed FPCs are more interpretable than the traditional counterparts. This advantage is demonstrated by analyzing two real datasets, namely, electroencephalography data and Canadian weather data. © 2015, The International Biometric Society.
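On a common time grid, baseline FPCA reduces to the singular value decomposition of the centered curve matrix; the penalized, interpretable variant proposed in this work goes further, but the baseline is easy to sketch on synthetic curves (modes, noise, and sizes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Sample curves on a common grid: random combinations of two smooth modes.
t = np.linspace(0, 1, 100)
phi1, phi2 = np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)
scores = rng.normal(size=(50, 2)) * np.array([3.0, 1.0])   # mode 1 dominates
curves = scores @ np.vstack([phi1, phi2]) + rng.normal(scale=0.05, size=(50, 100))

# Discretized FPCA: center the curves and take the SVD; the right singular
# vectors approximate the functional principal components.
centered = curves - curves.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)
fpc1 = Vt[0]   # dominant functional principal component (up to sign)
```

The interpretability problem the article addresses is visible even here: fpc1 is nonzero over the whole grid, so a reader must judge by eye which intervals carry "significant" variation; the proposed penalty instead forces the FPC to be exactly zero outside those intervals.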

  6. Structural analysis of nuclear components

    International Nuclear Information System (INIS)

    Ikonen, K.; Hyppoenen, P.; Mikkola, T.; Noro, H.; Raiko, H.; Salminen, P.; Talja, H.

    1983-05-01

    The report describes the activities accomplished in the project 'Structural Analysis Project of Nuclear Power Plant Components' during the years 1974-1982 in the Nuclear Engineering Laboratory at the Technical Research Centre of Finland. The objective of the project has been to develop Finnish expertise in structural mechanics related to nuclear engineering. The report describes the starting point of the research work, the organization of the project and the research activities in various subareas. Further, the work done with computer codes is described, as are the problems to which the developed expertise has been applied. Finally, the diploma works, publications and work reports, which are mainly in Finnish, are listed to give a view of the content of the project. (author)

  7. Multichannel Signals Reconstruction Based on Tunable Q-Factor Wavelet Transform-Morphological Component Analysis and Sparse Bayesian Iteration for Rotating Machines

    Directory of Open Access Journals (Sweden)

    Qing Li

    2018-04-01

    Full Text Available High-speed remote transmission and large-capacity data storage are difficult issues in signal acquisition for rotating machine condition monitoring. To address these concerns, a novel multichannel signal reconstruction approach based on tunable Q-factor wavelet transform-morphological component analysis (TQWT-MCA) and a sparse Bayesian iteration algorithm combined with a step-impulse dictionary is proposed under the frame of compressed sensing (CS). To begin with, to prevent the loss of periodical impulses and effectively separate them from the external noise and additive interference components, the TQWT-MCA method is introduced to divide the raw vibration signal into a low-resonance component (LRC, i.e., periodical impulses) and a high-resonance component (HRC); thus, the periodical impulses are preserved effectively. Then, according to the amplitude range of the generated LRC, the step-impulse dictionary atom is designed to match the physical structure of the periodical impulses. Furthermore, the periodical impulses and HRC are reconstructed by the sparse Bayesian iteration combined with the step-impulse dictionary, respectively; finally, the final reconstructed raw signals are obtained by adding the LRC and HRC, and the fidelity of the final reconstructed signals is tested by the envelope spectrum and error analysis, respectively. In this work, the proposed algorithm is applied to a simulated signal and engineering multichannel signals of a gearbox with multiple faults. Experimental results demonstrate that the proposed approach significantly improves the reconstruction accuracy compared with state-of-the-art methods such as non-convex Lq (q = 0.5) regularization, spatiotemporal sparse Bayesian learning (SSBL) and L1-norm, etc. Additionally, the speed of storage and transmission has increased dramatically; more importantly, the fault characteristics of the gearbox with multiple faults are detected and saved, i.e., the
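The sparse Bayesian reconstruction itself is beyond a short sketch, but the underlying compressed-sensing claim, that a signal sparse in some dictionary can be recovered from few measurements, can be illustrated with greedy orthogonal matching pursuit from scikit-learn (the random Gaussian dictionary, sizes, and sparsity level are illustrative assumptions; this is not the authors' algorithm or step-impulse dictionary):

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)

# A sparse "impulse" coefficient vector observed through random projections:
# the basic compressed-sensing setup.
n_atoms, n_measurements, k = 256, 64, 5
D = rng.normal(size=(n_measurements, n_atoms)) / np.sqrt(n_measurements)
x_true = np.zeros(n_atoms)
support = rng.choice(n_atoms, size=k, replace=False)
x_true[support] = rng.normal(size=k) * 3.0

y = D @ x_true   # compressed measurements, far fewer than n_atoms

# Greedy sparse recovery with orthogonal matching pursuit.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(D, y)
x_hat = omp.coef_
```

The point of the demonstration: 64 measurements suffice to reconstruct a 256-length vector exactly because only 5 of its coefficients are nonzero, which is the property the TQWT-MCA decomposition is designed to create before compression.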

  8. Identification of Analytical Factors Affecting Complex Proteomics Profiles Acquired in a Factorial Design Study with Analysis of Variance : Simultaneous Component Analysis

    NARCIS (Netherlands)

    Mitra, V.; Govorukhina, N.; Zwanenburg, G.; Hoefsloot, H.; Westra, I.; Smilde, A.; Reijmers, T.; van der Zee, A.G.J.; Suits, F.; Bischoff, R.; Horvatovich, P.

    2016-01-01

    Complex shotgun proteomics peptide profiles obtained in quantitative differential protein expression studies, such as in biomarker discovery, may be affected by multiple experimental factors. These preanalytical factors may affect the measured protein abundances which in turn influence the outcome

  9. EXAFS and principal component analysis : a new shell game

    International Nuclear Information System (INIS)

    Wasserman, S.

    1998-01-01

    The use of principal component (factor) analysis in the analysis of EXAFS spectra is described. The components derived from EXAFS spectra share mathematical properties with the original spectra. As a result, the abstract components can be analyzed using standard EXAFS methodology to yield the bond distances and other coordination parameters. The number of components that must be analyzed is usually less than the number of original spectra. The method is demonstrated using a series of spectra from aqueous solutions of uranyl ions.
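    The key point, that the number of abstract components is smaller than the number of spectra, can be sketched with a toy mixture series (synthetic Gaussian peaks standing in for EXAFS spectra; all parameters are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Simulate 8 "spectra" that are mixtures of only 2 underlying components,
    # mimicking a concentration series (hypothetical stand-in for EXAFS data).
    energy = np.linspace(0.0, 10.0, 200)
    comp_a = np.exp(-0.5 * (energy - 3.0) ** 2)
    comp_b = np.exp(-0.5 * (energy - 7.0) ** 2)
    frac_a = np.linspace(0.0, 1.0, 8)
    frac_b = frac_a ** 2                       # independent second trend
    spectra = np.outer(frac_a, comp_a) + np.outer(frac_b, comp_b)
    spectra += 1e-3 * rng.standard_normal(spectra.shape)   # measurement noise

    # Principal component (factor) analysis via SVD of the mean-centered matrix.
    centered = spectra - spectra.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)
    variance_ratio = s ** 2 / np.sum(s ** 2)
    n_significant = int(np.sum(variance_ratio > 0.01))
    print(n_significant)   # fewer abstract components than the 8 spectra
    ```

    Only the components above the noise floor need to be carried into the subsequent shell-fitting analysis.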

  10. Decoding the encoding of functional brain networks: An fMRI classification comparison of non-negative matrix factorization (NMF), independent component analysis (ICA), and sparse coding algorithms.

    Science.gov (United States)

    Xie, Jianwen; Douglas, Pamela K; Wu, Ying Nian; Brody, Arthur L; Anderson, Ariana E

    2017-04-15

    Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet other mathematical constraints provide alternate biologically plausible frameworks for generating brain networks. Non-negative matrix factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms (L1 Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking, where the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity to encode task-related brain networks are compared; the resulting within-scan brain networks for the different constraints are used as basis functions to encode observed functional activity. These encodings are then decoded using machine learning, by using the time series weights to predict within scan whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. The sparse coding algorithm of L1 Regularized Learning outperformed 4 variations of ICA and the other coding algorithms. Holding constant the effect of the extraction algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy. The superior performance of the sparse coding algorithms suggests that algorithms which enforce sparsity, discourage multitasking, and promote local specialization may better capture the underlying source processes than those which allow inexhaustible local processes, such as ICA. Negative BOLD signal may capture task-related activations. Copyright © 2017 Elsevier B.V. All rights reserved.
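    The NMF constraint discussed above can be sketched with the classical Lee-Seung multiplicative updates on a synthetic non-negative "voxels x time" matrix. This is not the paper's data or pipeline; the matrix sizes, rank, and noise level are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy "voxels x time" matrix built from 3 non-negative spatial maps
    # (hypothetical stand-in for fMRI data).
    true_W = rng.random((60, 3)) * (rng.random((60, 3)) < 0.3)   # sparse maps
    true_H = rng.random((3, 40))                                  # time courses
    V = true_W @ true_H + 0.01 * rng.random((60, 40))

    # Lee-Seung multiplicative updates for V ~= W H with W, H >= 0.
    k = 3
    W = rng.random((60, k)) + 0.1
    H = rng.random((k, 40)) + 0.1
    for _ in range(1000):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

    # Positivity is preserved by construction: the updates only multiply
    # non-negative quantities, so negative "networks" can never appear.
    err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
    print(round(err, 3))
    ```

    The multiplicative form is exactly what enforces the positivity constraint that, per the abstract, suppresses negative BOLD signal.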

  11. Group-wise Principal Component Analysis for Exploratory Data Analysis

    NARCIS (Netherlands)

    Camacho, J.; Rodriquez-Gomez, Rafael A.; Saccenti, E.

    2017-01-01

    In this paper, we propose a new framework for matrix factorization based on Principal Component Analysis (PCA) where sparsity is imposed. The structure to impose sparsity is defined in terms of groups of correlated variables found in correlation matrices or maps. The framework is based on three new

  12. Model reduction by weighted Component Cost Analysis

    Science.gov (United States)

    Kim, Jae H.; Skelton, Robert E.

    1990-01-01

    Component Cost Analysis considers any given system driven by a white noise process as an interconnection of different components, and assigns a metric called 'component cost' to each component. These component costs measure the contribution of each component to a predefined quadratic cost function. A reduced-order model of the given system may be obtained by deleting those components that have the smallest component costs. The theory of Component Cost Analysis is extended to include finite-bandwidth colored noises. The results also apply when actuators have dynamics of their own. Closed-form analytical expressions of component costs are also derived for a mechanical system described by its modal data. This is very useful for computing the modal costs of very high order systems. A numerical example for the MINIMAST system is presented.

  13. Fusion-component lifetime analysis

    International Nuclear Information System (INIS)

    Mattas, R.F.

    1982-09-01

    A one-dimensional computer code has been developed to examine the lifetime of first-wall and impurity-control components. The code incorporates the operating and design parameters, the material characteristics, and the appropriate failure criteria for the individual components. The major emphasis of the modeling effort has been to calculate the temperature-stress-strain-radiation effects history of a component so that the synergistic effects between sputtering erosion, swelling, creep, fatigue, and crack growth can be examined. The general forms of the property equations are the same for all materials in order to provide the greatest flexibility for materials selection in the code. The individual coefficients within the equations are different for each material. The code is capable of determining the behavior of a plate, composed of either a single or dual material structure, that is either totally constrained or constrained from bending but not from expansion. The code has been utilized to analyze the first walls for FED/INTOR and DEMO and to analyze the limiter for FED/INTOR.

  14. Application of principal component and factor analyses in electron spectroscopy

    International Nuclear Information System (INIS)

    Siuda, R.; Balcerowska, G.

    1998-01-01

    Fundamentals of two methods taken from multivariate analysis, known as principal component analysis (PCA) and factor analysis (FA), are presented. Both methods are well known in chemometrics. Since 1979, when application of the methods to electron spectroscopy was reported for the first time, they have become increasingly popular in different branches of electron spectroscopy. The paper presents examples of standard applications of the methods in Auger electron spectroscopy (AES), X-ray photoelectron spectroscopy (XPS), and electron energy loss spectroscopy (EELS). The advantages of applying the methods, their potential, as well as their limitations are pointed out. (author)

  15. Component of the risk analysis

    International Nuclear Information System (INIS)

    Martinez, I.; Campon, G.

    2013-01-01

    The PowerPoint presentation reviews topics such as risk analysis (Codex), risk management, preliminary risk management activities, the relationship between government and industry, microbiological hazards, and risk communication.

  16. Evaluation of chemical transport model predictions of primary organic aerosol for air masses classified by particle component-based factor analysis

    Directory of Open Access Journals (Sweden)

    C. A. Stroud

    2012-09-01

    Full Text Available Observations from the 2007 Border Air Quality and Meteorology Study (BAQS-Met 2007) in Southern Ontario, Canada, were used to evaluate predictions of primary organic aerosol (POA) and two other carbonaceous species, black carbon (BC) and carbon monoxide (CO), made for this summertime period by Environment Canada's AURAMS regional chemical transport model. Particle component-based factor analysis was applied to aerosol mass spectrometer measurements made at one urban site (Windsor, ON) and two rural sites (Harrow and Bear Creek, ON) to derive hydrocarbon-like organic aerosol (HOA) factors. A novel diagnostic model evaluation was performed by investigating model POA bias as a function of HOA mass concentration and indicator ratios (e.g. BC/HOA). Eight case studies were selected based on factor analysis and back trajectories to help classify model bias for certain POA source types. By considering model POA bias in relation to co-located BC and CO biases, a plausible story is developed that explains the model biases for all three species.

    At the rural sites, daytime mean PM1 POA mass concentrations were under-predicted compared to observed HOA concentrations. POA under-predictions were accentuated when the transport arriving at the rural sites was from the Detroit/Windsor urban complex and for short-term periods of biomass burning influence. Interestingly, the daytime CO concentrations were only slightly under-predicted at both rural sites, whereas CO was over-predicted at the urban Windsor site with a normalized mean bias of 134%, while good agreement was observed at Windsor for the comparison of daytime PM1 POA and HOA mean values, 1.1 μg m-3 and 1.2 μg m-3, respectively. Biases in model POA predictions also trended from positive to negative with increasing HOA values. Periods of POA over-prediction were most evident at the urban site on calm nights, due to an overly stable model surface layer.

  17. Foundations of factor analysis

    CERN Document Server

    Mulaik, Stanley A

    2009-01-01

    Introduction: Factor Analysis and Structural Theories; Brief History of Factor Analysis as a Linear Model; Example of Factor Analysis. Mathematical Foundations for Factor Analysis: Introduction; Scalar Algebra; Vectors; Matrix Algebra; Determinants; Treatment of Variables as Vectors; Maxima and Minima of Functions. Composite Variables and Linear Transformations: Introduction; Composite Variables; Unweighted Composite Variables; Differentially Weighted Composites; Matrix Equations; Multi

  18. Tracking polychlorinated biphenyls (PCBs) congener patterns in Newark Bay surface sediment using principal component analysis (PCA) and positive matrix factorization (PMF).

    Science.gov (United States)

    Saba, Tarek; Su, Steave

    2013-09-15

    PCB congener data for Newark Bay surface sediments were analyzed using PCA and PMF, and relationships between the outcomes from these two techniques were explored. The PCA scores plot separated the Lower Passaic River Mouth samples from North Newark Bay, thus indicating dissimilarity. Although PCA was able to identify subareas in the Bay system with specific PCB congener patterns (e.g., higher chlorinated congeners in the Elizabeth River), further conclusions regarding potential PCB source profiles or potential upland source areas were not clear from the PCA scores plot. PMF identified five source factors and explained the Bay sample congener profiles as a mix of these factors. This PMF solution was equivalent to (1) defining an envelope that encompasses all samples on the PCA scores plot, (2) defining source factors that plot on that envelope, and (3) explaining the congener profile for each Bay sediment sample (inside the scores plot envelope) as a mix of factors. PMF analysis identified characteristic features in the source factor congener distributions that allowed source factors to be tracked to the shoreline areas where PCB inputs to the Bay may have originated. The combined analysis from PCA and PMF showed that direct discharges to the Bay are likely the dominant sources of PCBs to the sediment. Review of historical upland activities and regulatory files will be needed, in addition to the PCA and PMF analysis, to fully reconstruct the history of operations and PCB releases around the Newark Bay area that impacted the Bay sediment. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. COPD phenotype description using principal components analysis

    DEFF Research Database (Denmark)

    Roy, Kay; Smith, Jacky; Kolsum, Umme

    2009-01-01

    BACKGROUND: Airway inflammation in COPD can be measured using biomarkers such as induced sputum and Fe(NO). This study set out to explore the heterogeneity of COPD using biomarkers of airway and systemic inflammation and pulmonary function by principal components analysis (PCA). SUBJECTS AND METHODS: In 127 COPD patients (mean FEV1 61%), pulmonary function, Fe(NO), plasma CRP and TNF-alpha, sputum differential cell counts and sputum IL8 (pg/ml) were measured. Principal components analysis as well as multivariate analysis was performed. RESULTS: PCA identified four main components (% variance ... associations between the variables within components 1 and 2. CONCLUSION: COPD is a multi-dimensional disease. Unrelated components of disease were identified, including neutrophilic airway inflammation, which was associated with systemic inflammation, and sputum eosinophils, which were related to increased Fe...

  20. Integrating Data Transformation in Principal Components Analysis

    KAUST Repository

    Maadooliat, Mehdi; Huang, Jianhua Z.; Hu, Jianhua

    2015-01-01

    Principal component analysis (PCA) is a popular dimension reduction method to reduce the complexity and obtain the informative aspects of high-dimensional datasets. When the data distribution is skewed, data transformation is commonly used prior

  1. NEPR Principle Component Analysis - NOAA TIFF Image

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This GeoTiff is a representation of seafloor topography in Northeast Puerto Rico derived from a bathymetry model with a principal component analysis (PCA). The area...

  2. Structured Performance Analysis for Component Based Systems

    OpenAIRE

    Salmi , N.; Moreaux , Patrice; Ioualalen , M.

    2012-01-01

    International audience; The Component Based System (CBS) paradigm is now largely used to design software systems. In addition, performance and behavioural analysis remains a required step for the design and construction of efficient systems. This is especially the case for CBS, which involve interconnected components running concurrent processes. This paper proposes a compositional method for modeling and structured performance analysis of CBS. Modeling is based on Stochastic Well-formed...

  3. Constrained principal component analysis and related techniques

    CERN Document Server

    Takane, Yoshio

    2013-01-01

    In multivariate data analysis, regression techniques predict one set of variables from another while principal component analysis (PCA) finds a subspace of minimal dimensionality that captures the largest variability in the data. How can regression analysis and PCA be combined in a beneficial way? Why and when is it a good idea to combine them? What kind of benefits are we getting from them? Addressing these questions, Constrained Principal Component Analysis and Related Techniques shows how constrained PCA (CPCA) offers a unified framework for these approaches.The book begins with four concre

  4. Analysis Method for Integrating Components of Product

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jun Ho [Inzest Co. Ltd, Seoul (Korea, Republic of); Lee, Kun Sang [Kookmin Univ., Seoul (Korea, Republic of)

    2017-04-15

    This paper presents some of the methods used to incorporate the parts constituting a product. A new relation function concept and its structure are introduced to analyze the relationships of component parts. This relation function has three types of information, which can be used to establish a relation function structure. The relation function structure of the analysis criteria was established to analyze and present the data. The priority components determined by the analysis criteria can be integrated. The analysis criteria were divided based on their number and orientation, as well as their direct or indirect characteristic feature. This paper presents a design algorithm for component integration. This algorithm was applied to actual products, and the components inside the product were integrated. Therefore, the proposed algorithm was used to conduct research to improve the brake discs for bicycles. As a result, an improved product similar to the related function structure was actually created.

  5. Analysis Method for Integrating Components of Product

    International Nuclear Information System (INIS)

    Choi, Jun Ho; Lee, Kun Sang

    2017-01-01

    This paper presents some of the methods used to incorporate the parts constituting a product. A new relation function concept and its structure are introduced to analyze the relationships of component parts. This relation function has three types of information, which can be used to establish a relation function structure. The relation function structure of the analysis criteria was established to analyze and present the data. The priority components determined by the analysis criteria can be integrated. The analysis criteria were divided based on their number and orientation, as well as their direct or indirect characteristic feature. This paper presents a design algorithm for component integration. This algorithm was applied to actual products, and the components inside the product were integrated. Therefore, the proposed algorithm was used to conduct research to improve the brake discs for bicycles. As a result, an improved product similar to the related function structure was actually created.

  6. In Silico Analysis of Gene Expression Network Components Underlying Pigmentation Phenotypes in the Python Identified Evolutionarily Conserved Clusters of Transcription Factor Binding Sites

    Directory of Open Access Journals (Sweden)

    Kristopher J. L. Irizarry

    2016-01-01

    Full Text Available Color variation provides the opportunity to investigate the genetic basis of evolution and selection. Reptiles are less studied than mammals. Comparative genomics approaches allow for knowledge gained in one species to be leveraged for use in another species. We describe a comparative vertebrate analysis of conserved regulatory modules in pythons aimed at assessing bioinformatics evidence that transcription factors important in mammalian pigmentation phenotypes may also be important in python pigmentation phenotypes. We identified 23 python orthologs of mammalian genes associated with variation in coat color phenotypes for which we assessed the extent of pairwise protein sequence identity between pythons and mouse, dog, horse, cow, chicken, anole lizard, and garter snake. We next identified a set of melanocyte/pigment associated transcription factors (CREB, FOXD3, LEF-1, MITF, POU3F2, and USF-1 that exhibit relatively conserved sequence similarity within their DNA binding regions across species based on orthologous alignments across multiple species. Finally, we identified 27 evolutionarily conserved clusters of transcription factor binding sites within ~200-nucleotide intervals of the 1500-nucleotide upstream regions of AIM1, DCT, MC1R, MITF, MLANA, OA1, PMEL, RAB27A, and TYR from Python bivittatus. Our results provide insight into pigment phenotypes in pythons.

  7. Mapping ash properties using principal components analysis

    Science.gov (United States)

    Pereira, Paulo; Brevik, Eric; Cerda, Artemi; Ubeda, Xavier; Novara, Agata; Francos, Marcos; Rodrigo-Comino, Jesus; Bogunovic, Igor; Khaledian, Yones

    2017-04-01

    In post-fire environments ash has important benefits for soils, such as protection and acting as a source of nutrients, which are crucial for vegetation recuperation (Jordan et al., 2016; Pereira et al., 2015a; 2016a,b). The thickness and distribution of ash are fundamental aspects of soil protection (Cerdà and Doerr, 2008; Pereira et al., 2015b), and the severity at which it was produced is important for the type and amount of elements released into the soil solution (Bodi et al., 2014). Ash is a very mobile material, so where it will be deposited matters: until the first rainfalls it is very mobile; afterwards it binds to the soil surface and is harder to erode. Mapping ash properties in the immediate post-fire period is complex, since the ash is constantly moving (Pereira et al., 2015b). However, it is an important task, since from the amount and type of ash produced we can identify the degree of soil protection and the nutrients that will be dissolved. The objective of this work is to map ash properties (CaCO3, pH, and selected extractable elements) using principal component analysis (PCA) in the immediate period after a fire. Four days after the fire we established a grid over a 9x27 m area and took ash samples every 3 meters, for a total of 40 sampling points (Pereira et al., 2017). The PCA identified 5 different factors. Factor 1 had high positive loadings on electrical conductivity, calcium, and magnesium and negative loadings on aluminum and iron, while Factor 2 had high positive loadings on total phosphorus and silica. Factor 3 showed high positive loadings on sodium and potassium, Factor 4 high negative loadings on CaCO3 and pH, and Factor 5 high loadings on sodium and potassium. The experimental variograms of the extracted factors showed that the Gaussian model was the most precise for modelling Factor 1, the linear model for Factor 2, and the wave hole effect model for Factors 3, 4 and 5. The maps produced confirm the patterns observed in the experimental variograms.

  8. Component evaluation testing and analysis algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    Hart, Darren M.; Merchant, Bion John

    2011-10-01

    The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.

  9. Principal components analysis in clinical studies.

    Science.gov (United States)

    Zhang, Zhongheng; Castelló, Adela

    2017-09-01

    In multivariate analysis, independent variables are usually correlated with each other, which can introduce multicollinearity into regression models. One approach to solve this problem is to apply principal components analysis (PCA) to these variables. This method uses an orthogonal transformation to represent sets of potentially correlated variables with principal components (PC) that are linearly uncorrelated. PCs are ordered so that the first PC has the largest possible variance, and only some components are selected to represent the correlated variables. As a result, the dimension of the variable space is reduced. This tutorial illustrates how to perform PCA in the R environment; the example is a simulated dataset in which two PCs are responsible for the majority of the variance in the data. Furthermore, the visualization of PCA is highlighted.
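    The decorrelation property that motivates using PCA before regression can be sketched in a few lines (NumPy here rather than the tutorial's R environment; the simulated predictors are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Two strongly correlated predictors (hypothetical clinical variables).
    n = 500
    x1 = rng.standard_normal(n)
    x2 = 0.9 * x1 + 0.1 * rng.standard_normal(n)
    X = np.column_stack([x1, x2])
    print(round(np.corrcoef(x1, x2)[0, 1], 2))        # strong collinearity

    # PCA: eigendecomposition of the covariance of the centered data.
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (n - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]                  # sort PCs by variance
    scores = Xc @ eigvecs[:, order]

    # The principal-component scores are uncorrelated by construction,
    # so they can replace the collinear predictors in a regression model.
    print(round(np.corrcoef(scores[:, 0], scores[:, 1])[0, 1], 5))
    ```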

  10. Experimental and principal component analysis of waste ...

    African Journals Online (AJOL)

    The present study is aimed at determining through principal component analysis the most important variables affecting bacterial degradation in ponds. Data were collected from literature. In addition, samples were also collected from the waste stabilization ponds at the University of Nigeria, Nsukka and analyzed to ...

  11. Principal Component Analysis as an Efficient Performance ...

    African Journals Online (AJOL)

    This paper uses the principal component analysis (PCA) to examine the possibility of using few explanatory variables (X's) to explain the variation in Y. It applied PCA to assess the performance of students in Abia State Polytechnic, Aba, Nigeria. This was done by estimating the coefficients of eight explanatory variables in a ...

  12. Independent component analysis for understanding multimedia content

    DEFF Research Database (Denmark)

    Kolenda, Thomas; Hansen, Lars Kai; Larsen, Jan

    2002-01-01

    Independent component analysis of combined text and image data from Web pages has potential for search and retrieval applications by providing more meaningful and context dependent content. It is demonstrated that ICA of combined text and image features has a synergistic effect, i.e., the retrieval...

  13. Probabilistic Principal Component Analysis for Metabolomic Data.

    LENUS (Irish Health Repository)

    Nyamundanda, Gift

    2010-11-23

    Abstract. Background: Data from metabolomic studies are typically complex and high-dimensional. Principal component analysis (PCA) is currently the most widely used statistical technique for analyzing metabolomic data. However, PCA is limited by the fact that it is not based on a statistical model. Results: Here, probabilistic principal component analysis (PPCA), which addresses some of the limitations of PCA, is reviewed and extended. A novel extension of PPCA, called probabilistic principal component and covariates analysis (PPCCA), is introduced, which provides a flexible approach to jointly model metabolomic data and additional covariate information. The use of a mixture of PPCA models for discovering the number of inherent groups in metabolomic data is demonstrated. The jackknife technique is employed to construct confidence intervals for estimated model parameters throughout. The optimal number of principal components is determined through the use of the Bayesian Information Criterion model selection tool, which is modified to address the high dimensionality of the data. Conclusions: The methods presented are illustrated through an application to metabolomic data sets. Jointly modeling metabolomic data and covariates was successfully achieved and has the potential to provide deeper insight into the underlying data structure. Examination of confidence intervals for the model parameters, such as loadings, allows for principled and clear interpretation of the underlying data structure. A software package called MetabolAnalyze, freely available through the R statistical software, has been developed to facilitate implementation of the presented methods in the metabolomics field.
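    A sketch of plain PPCA's closed-form maximum-likelihood solution (the Tipping-Bishop estimates, not the PPCCA extension or the MetabolAnalyze package); the simulated data, dimensions, and noise level are assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Simulate "metabolomic" data: 5 observed variables driven by 2 latent factors.
    n, p, q = 400, 5, 2
    W_true = rng.standard_normal((p, q))
    Z = rng.standard_normal((n, q))
    X = Z @ W_true.T + 0.1 * rng.standard_normal((n, p))

    # Closed-form maximum-likelihood PPCA:
    #   sigma^2 = average of the discarded eigenvalues of the sample covariance,
    #   W       = U_q (L_q - sigma^2 I)^(1/2).
    S = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(S)
    idx = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[idx], eigvecs[:, idx]
    sigma2 = eigvals[q:].mean()                       # noise variance estimate
    W = eigvecs[:, :q] @ np.diag(np.sqrt(eigvals[:q] - sigma2))

    # The fitted model covariance W W^T + sigma^2 I should approximate
    # the sample covariance, which is what makes PPCA a proper statistical model.
    model_cov = W @ W.T + sigma2 * np.eye(p)
    print(round(np.abs(model_cov - S).max(), 3))
    ```

    Because PPCA is a generative model, extensions such as mixtures of PPCA and covariate-adjusted variants (the paper's PPCCA) follow naturally from this likelihood.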

  14. PCA: Principal Component Analysis for spectra modeling

    Science.gov (United States)

    Hurley, Peter D.; Oliver, Seb; Farrah, Duncan; Wang, Lingyu; Efstathiou, Andreas

    2012-07-01

    The mid-infrared spectra of ultraluminous infrared galaxies (ULIRGs) contain a variety of spectral features that can be used as diagnostics to characterize the spectra. However, such diagnostics are biased by our prior prejudices on the origin of the features. Moreover, by using only part of the spectrum they do not utilize the full information content of the spectra. Blind statistical techniques such as principal component analysis (PCA) consider the whole spectrum, find correlated features and separate them out into distinct components. This code, written in IDL, classifies principal components of IRS spectra to define a new classification scheme using 5D Gaussian mixtures modelling. The five PCs and average spectra for the four classifications to classify objects are made available with the code.

  15. BUSINESS PROCESS MANAGEMENT SYSTEMS TECHNOLOGY COMPONENTS ANALYSIS

    Directory of Open Access Journals (Sweden)

    Andrea Giovanni Spelta

    2007-05-01

    Full Text Available The information technology that supports the implementation of the business process management approach is called a Business Process Management System (BPMS). The main components of the BPMS solution framework are the process definition repository, process instances repository, transaction manager, connectors framework, process engine and middleware. In this paper we define and characterize the role and importance of the components of the BPMS framework. The research method adopted was the case study, through the analysis of the implementation of a BPMS solution in an insurance company called Chubb do Brasil. In the case study, the process "Manage Coinsured Events" is described and characterized, as well as the components of the BPMS solution adopted and implemented by Chubb do Brasil for managing this process.

  16. ANOVA-principal component analysis and ANOVA-simultaneous component analysis: a comparison.

    NARCIS (Netherlands)

    Zwanenburg, G.; Hoefsloot, H.C.J.; Westerhuis, J.A.; Jansen, J.J.; Smilde, A.K.

    2011-01-01

    ANOVA-simultaneous component analysis (ASCA) is a recently developed tool to analyze multivariate data. In this paper, we enhance the explorative capability of ASCA by introducing a projection of the observations on the principal component subspace to visualize the variation among the measurements.
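    The ASCA idea, an ANOVA partition of the data matrix followed by component analysis of an effect matrix, with the individual observations projected onto the effect subspace, can be sketched as follows (a one-factor toy design with made-up effect sizes, not the authors' code):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Toy designed experiment: 2 treatment groups x 10 replicates x 6 variables.
    groups = np.repeat([0, 1], 10)
    effect = np.array([1.0, -1.0, 0.5, 0.0, 0.0, 0.0])  # treatment shifts 3 vars
    X = rng.standard_normal((20, 6)) * 0.3
    X[groups == 1] += effect

    # ASCA step 1: ANOVA decomposition X = grand mean + effect matrix + residuals.
    grand = X.mean(axis=0)
    Xc = X - grand
    effect_mat = np.vstack([Xc[groups == g].mean(axis=0) for g in groups])
    residuals = Xc - effect_mat

    # ASCA step 2: PCA (via SVD) of the effect matrix; then project the
    # individual observations onto the effect subspace to visualize the
    # variation among the measurements, as described in the abstract.
    U, s, Vt = np.linalg.svd(effect_mat, full_matrices=False)
    scores = Xc @ Vt[0]            # observation-level scores on effect PC1
    print(round(scores[groups == 0].mean(), 2), round(scores[groups == 1].mean(), 2))
    ```

    Projecting the raw (not averaged) observations is precisely the enhancement that makes the group separation and within-group spread visible in one plot.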

  17. Improvement of Binary Analysis Components in Automated Malware Analysis Framework

    Science.gov (United States)

    2017-02-21

    AFRL-AFOSR-JP-TR-2017-0018. Improvement of Binary Analysis Components in Automated Malware Analysis Framework. Keiji Takeda, Keio University. Final report, 26 May 2015 to 25 Nov 2016. The objective of this effort was to develop components to analyze malicious software (malware) with minimum human interaction. The system autonomously analyzes malware samples by analyzing the malware binary program

  18. Fault tree analysis with multistate components

    International Nuclear Information System (INIS)

    Caldarola, L.

    1979-02-01

    A general analytical theory has been developed which allows one to calculate the occurrence probability of the top event of a fault tree with multistate (more than two states) components. It is shown that, in order to correctly describe a system with multistate components, a special type of Boolean algebra is required. This is called 'Boolean algebra with restrictions on variables', and its basic rules are the same as those of traditional Boolean algebra with some additional restrictions on the variables. These restrictions are extensively discussed in the paper. Important features of the method are the identification of the complete base and of the smallest irredundant base of a Boolean function, which does not necessarily need to be coherent. It is shown that the identification of the complete base of a Boolean function requires the application of some algorithms which are not used in today's computer programmes for fault tree analysis. The problem of statistical dependence among primary components is discussed. The paper includes a small demonstrative example to illustrate the method. The example also includes statistically dependent components. (orig.)
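    For intuition only, the occurrence probability of a top event with multistate components can be computed by brute-force enumeration when the system is tiny; the Boolean-algebra machinery in the paper is what makes large systems tractable. The three-state components, their probabilities, and the structure function below are hypothetical:

    ```python
    import itertools

    # Hypothetical 2-component system; each component has three states:
    # 0 = working, 1 = degraded, 2 = failed, with given state probabilities.
    p = [
        {0: 0.90, 1: 0.07, 2: 0.03},   # component A
        {0: 0.85, 1: 0.10, 2: 0.05},   # component B
    ]

    # Illustrative top event: the system fails if either component is fully
    # failed, or if both components are simultaneously degraded.
    def top_event(a, b):
        return a == 2 or b == 2 or (a == 1 and b == 1)

    # Occurrence probability by enumerating all state combinations
    # (this assumes statistically independent components).
    prob = sum(
        p[0][a] * p[1][b]
        for a, b in itertools.product(range(3), repeat=2)
        if top_event(a, b)
    )
    print(round(prob, 6))   # 0.0855
    ```

    The restrictions on variables in the paper's algebra encode exactly the fact that a component occupies one and only one of its states at a time, which this enumeration enforces implicitly.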

  19. Fault Localization for Synchrophasor Data using Kernel Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    CHEN, R.

    2017-11-01

    Full Text Available In this paper, based on Kernel Principal Component Analysis (KPCA) of Phasor Measurement Unit (PMU) data, a nonlinear method is proposed for fault location in complex power systems. Resorting to the scaling factor, the derivative for a polynomial kernel is obtained. Then, the contribution of each variable to the T2 statistic is derived to determine whether a bus is the fault component. Compared to previous Principal Component Analysis (PCA) based methods, the novel version can handle strong nonlinearity and provide precise identification of the fault location. Computer simulations are conducted to demonstrate the improved performance in recognizing the fault component and evaluating its propagation across the system based on the proposed method.
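    A minimal kernel-PCA sketch with a polynomial kernel on toy two-ring data (a made-up stand-in for PMU measurements; the paper's T2 contribution analysis is not reproduced here):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Toy nonlinear data: two concentric rings of points; linear PCA cannot
    # separate them, but a degree-2 polynomial feature space can.
    theta = np.tile(np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False), 2)
    radius = np.repeat([1.0, 3.0], 50)
    X = np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])
    X += 0.05 * rng.standard_normal(X.shape)

    # Kernel PCA with a polynomial kernel k(x, y) = (x.y + 1)^2.
    K = (X @ X.T + 1.0) ** 2
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                  # centering in feature space
    eigvals, eigvecs = np.linalg.eigh(Kc)
    order = np.argsort(eigvals)[::-1]
    scores = eigvecs[:, order[:4]] * np.sqrt(eigvals[order[:4]])  # top 4 kernel PCs

    # One of the leading kernel PCs captures the radius-squared feature and
    # separates the two rings cleanly; pick it by correlation with the labels.
    label = np.repeat([0.0, 1.0], 50)
    best = np.argmax([abs(np.corrcoef(scores[:, j], label)[0, 1]) for j in range(4)])
    inner, outer = scores[:50, best], scores[50:, best]
    print(max(inner) < min(outer) or min(inner) > max(outer))
    ```

    The centering step (Kc = J K J) is what performs mean removal implicitly in the kernel-induced feature space, mirroring the centering done in ordinary PCA.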

  20. Multilevel sparse functional principal component analysis.

    Science.gov (United States)

    Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S

    2014-01-29

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and the data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both the between- and within-subject levels. We address the inherent methodological differences of the sparse sampling context in order to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; and 3) predict the underlying curves. Through simulations we show that the proposed method is able to discover dominating modes of variation and to reconstruct underlying curves well, even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.

  1. An easy guide to factor analysis

    CERN Document Server

    Kline, Paul

    2014-01-01

    Factor analysis is a statistical technique widely used in psychology and the social sciences. With the advent of powerful computers, factor analysis and other multivariate methods are now available to many more people. An Easy Guide to Factor Analysis presents and explains factor analysis as clearly and simply as possible. The author, Paul Kline, carefully defines all statistical terms and demonstrates step-by-step how to work out a simple example of principal components analysis and rotation. He further explains other methods of factor analysis, including confirmatory and path analysis, a
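The kind of worked example the book steps through, principal components of a correlation matrix followed by a rotation, can be sketched as follows. The varimax implementation is the standard textbook (Kaiser) algorithm, and the two-factor toy data are invented for illustration:

```python
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-8):
    """Varimax rotation of a loading matrix (standard Kaiser algorithm)."""
    p, k = loadings.shape
    R = np.eye(k)
    obj = 0.0
    for _ in range(n_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - L * (L ** 2).sum(axis=0) / p))
        R = u @ vt                      # updated orthogonal rotation
        if s.sum() - obj < tol:
            break
        obj = s.sum()
    return loadings @ R

# Toy data: two blocks of three correlated variables -> two components.
rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=(2, 300))
X = np.column_stack([f1 + 0.3 * rng.normal(size=300) for _ in range(3)] +
                    [f2 + 0.3 * rng.normal(size=300) for _ in range(3)])
Z = (X - X.mean(0)) / X.std(0)          # standardize
vals, vecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(vals)[::-1][:2]      # keep the two leading components
loadings = vecs[:, order] * np.sqrt(vals[order])  # unrotated loadings
rotated = varimax(loadings)
```

Because varimax is an orthogonal rotation, the communalities (row sums of squared loadings) are unchanged; only the interpretability of the columns improves.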

  2. A Genealogical Interpretation of Principal Components Analysis

    Science.gov (United States)

    McVean, Gil

    2009-01-01

    Principal components analysis, PCA, is a statistical method commonly used in population genetics to identify structure in the distribution of genetic variation across geographical location and ethnic background. However, while the method is often used to inform about historical demographic processes, little is known about the relationship between fundamental demographic parameters and the projection of samples onto the primary axes. Here I show that for SNP data the projection of samples onto the principal components can be obtained directly from considering the average coalescent times between pairs of haploid genomes. The result provides a framework for interpreting PCA projections in terms of underlying processes, including migration, geographical isolation, and admixture. I also demonstrate a link between PCA and Wright's fst and show that SNP ascertainment has a largely simple and predictable effect on the projection of samples. Using examples from human genetics, I discuss the application of these results to empirical data and the implications for inference. PMID:19834557
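The central observation, that PC projections of SNP data reflect population structure, is easy to reproduce on simulated genotypes. The allele-frequency model below is a deliberately crude stand-in for a coalescent simulation, with invented parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_per, n_snp = 20, 300
# Hypothetical allele frequencies for two diverged populations
p1 = rng.uniform(0.1, 0.9, n_snp)
p2 = np.clip(p1 + rng.normal(0.0, 0.2, n_snp), 0.05, 0.95)
# Genotypes coded as 0/1/2 copies of the reference allele
G = np.vstack([rng.binomial(2, p1, (n_per, n_snp)),
               rng.binomial(2, p2, (n_per, n_snp))]).astype(float)
Gc = G - G.mean(axis=0)                   # centre each SNP
U, S, Vt = np.linalg.svd(Gc, full_matrices=False)
pc1 = U[:, 0] * S[0]                      # sample projections onto PC1
sep = abs(pc1[:n_per].mean() - pc1[n_per:].mean())
```

The first principal component separates the two populations: the group means sit on opposite sides of zero, mirroring the link between projections and average pairwise coalescent times discussed in the abstract.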

  3. Functional Principal Components Analysis of Shanghai Stock Exchange 50 Index

    Directory of Open Access Journals (Sweden)

    Zhiliang Wang

    2014-01-01

    The main purpose of this paper is to explore the principal components of the Shanghai Stock Exchange 50 index by means of functional principal component analysis (FPCA). Functional data analysis (FDA) deals with random variables (or processes) with realizations in a smooth functional space. One of the most popular FDA techniques is functional principal component analysis, which is used here for the statistical analysis of a set of financial time series from an explorative point of view. FPCA is the functional analogue of the well-known dimension reduction technique in multivariate statistical analysis, searching for linear transformations of the random vector with maximal variance. In this paper, we studied the monthly return volatility of the Shanghai Stock Exchange 50 index (SSE50). Using FPCA to reduce the dimension to a finite level, we extracted the most significant components of the data and some relevant statistical features of the related datasets. The calculated results show that regarding the samples as random functions is rational. Compared with ordinary principal component analysis, FPCA can handle samples of differing dimensions, and it is a convenient approach for extracting the main variance factors.

  4. Radar fall detection using principal component analysis

    Science.gov (United States)

    Jokanovic, Branka; Amin, Moeness; Ahmad, Fauzia; Boashash, Boualem

    2016-05-01

    Falls are a major cause of fatal and nonfatal injuries in people aged 65 years and older. Radar has the potential to become one of the leading technologies for fall detection, thereby enabling the elderly to live independently. Existing techniques for fall detection using radar are based on manual feature extraction and require significant parameter tuning in order to provide successful detections. In this paper, we employ principal component analysis for fall detection, wherein eigen images of observed motions are employed for classification. Using real data, we demonstrate that the PCA based technique provides performance improvement over the conventional feature extraction methods.
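The eigen-image approach can be sketched on synthetic data: flatten each motion "image", take the leading principal axes, and classify in score space. The 16x16 images and the nearest-centroid rule are illustrative assumptions; the paper's radar data and classifier are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_motion(kind, n, size=16):
    """Hypothetical 16x16 time-frequency 'images': falls concentrate
    energy in low rows, walking in high rows (synthetic stand-in for
    real radar returns)."""
    imgs = rng.normal(0.0, 0.3, (n, size, size))
    row = 12 if kind == "fall" else 3
    imgs[:, row - 1:row + 2, :] += 1.0
    return imgs.reshape(n, -1)            # flatten to feature vectors

X = np.vstack([make_motion("fall", 30), make_motion("walk", 30)])
y = np.array([1] * 30 + [0] * 30)
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T                         # scores on 5 eigen images
c_fall, c_walk = Z[y == 1].mean(0), Z[y == 0].mean(0)
pred = (np.linalg.norm(Z - c_fall, axis=1)
        < np.linalg.norm(Z - c_walk, axis=1)).astype(int)
acc = float((pred == y).mean())
```

On this easy synthetic task the five eigen-image scores separate the classes almost perfectly, illustrating why PCA removes the need for hand-crafted features.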

  5. Independent Component Analysis in Multimedia Modeling

    DEFF Research Database (Denmark)

    Larsen, Jan

    2003-01-01

    Modeling of multimedia and multimodal data becomes increasingly important with the digitalization of the world. The objective of this paper is to demonstrate the potential of independent component analysis and blind source separation methods for modeling and understanding of multimedia data, which largely refers to text, images/video, audio and combinations of such data. We review a number of applications within single and combined media with the hope that this might provide inspiration for further research in this area. Finally, we provide a detailed presentation of our own recent work on modeling...
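A minimal blind-source-separation demonstration in the spirit of the paper, using scikit-learn's FastICA on two synthetic audio-like signals (the sources and mixing matrix are invented for illustration):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two latent sources mixed into two observed channels (classic BSS demo).
t = np.linspace(0.0, 8.0, 2000)
s1 = np.sin(2.0 * t)                       # sinusoidal source
s2 = np.sign(np.sin(3.0 * t))              # square-wave source
S = np.c_[s1, s2]
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])                 # mixing matrix
X = S @ A.T                                # observed mixtures
ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)               # recovered sources (up to
                                           # permutation, sign and scale)
# Correlation of each true source with its best-matching estimate
C = np.abs(np.corrcoef(S.T, S_hat.T)[:2, 2:])
match = C.max(axis=1)
```

Each true source is recovered almost exactly by one of the independent components, despite ICA never seeing the mixing matrix.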

  6. Analysis of spiral components in 16 galaxies

    International Nuclear Information System (INIS)

    Considere, S.; Athanassoula, E.

    1988-01-01

    A Fourier analysis of the intensity distributions in the planes of 16 spiral galaxies of morphological types from 1 to 7 is performed. The galaxies processed are NGC 300, 598, 628, 2403, 2841, 3031, 3198, 3344, 5033, 5055, 5194, 5247, 6946, 7096, 7217, and 7331. The method, mathematically based upon a decomposition of a distribution into a superposition of individual logarithmic spiral components, is first used to determine for each galaxy the position angle PA and the inclination ω of the galaxy plane onto the sky plane. Our results, in good agreement with those obtained by the usual methods in the literature, are discussed. The decomposition of the deprojected galaxies into individual spiral components reveals that the two-armed component is everywhere dominant. Our pitch angles are then compared to previously published ones, and their quality is checked by drawing each individual logarithmic spiral on the corresponding deprojected galaxy image. Finally, the surface intensities for the angular periodicities of interest are calculated. A few of the most important ones are used to construct a composite image that represents well the main spiral features observed in the deprojected galaxies.
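The core of the method, decomposing an angular intensity distribution into modes m and finding the dominant one, can be sketched with an FFT over the azimuthal coordinate. The synthetic two-armed logarithmic spiral and the 15-degree pitch angle below are assumptions for illustration, not data from one of the 16 galaxies:

```python
import numpy as np

# Synthetic deprojected image on a polar grid: a two-armed logarithmic
# spiral, I(r, theta) = 1 + cos(2*(theta - ln r / tan(pitch))).
pitch = np.deg2rad(15.0)                   # assumed pitch angle
r = np.linspace(1.0, 10.0, 200)
theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
R, T = np.meshgrid(r, theta, indexing="ij")
I = 1.0 + np.cos(2.0 * (T - np.log(R) / np.tan(pitch)))
# Amplitude of each angular periodicity m, averaged over radius
F = np.abs(np.fft.fft(I, axis=1)).mean(axis=0)
m_dominant = int(np.argmax(F[1:9]) + 1)    # skip the m=0 (mean) term
```

As in the paper, the m = 2 (two-armed) component dominates the angular spectrum for this input.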

  7. Structural analysis of NPP components and structures

    International Nuclear Information System (INIS)

    Saarenheimo, A.; Keinaenen, H.; Talja, H.

    1998-01-01

    Capabilities for effective structural integrity assessment have been created and extended in several important cases. The applications presented in this paper deal with pressurised thermal shock (PTS) loading and severe dynamic loading cases of the containment, reinforced concrete structures and piping components. Hydrogen combustion within the containment is considered in some severe accident scenarios. Can a steel containment withstand the postulated hydrogen detonation loads and still maintain its integrity? This is the topic of Chapter 2. Chapter 3 deals with a reinforced concrete floor subjected to jet impingement caused by a postulated rupture of a nearby high-energy pipe, and Chapter 4 deals with the dynamic loading resistance of pipe lines under postulated pressure transients due to water hammer. The reliability of the structural integrity analysis methods and capabilities which have been developed for application in NPP component assessment shall be evaluated and verified. The resources available within the RATU2 programme alone do not allow performing the large-scale experiments needed for that purpose. Thus, the verification of the PTS analysis capabilities has been conducted through participation in international co-operative programmes. Participation in the European Network for Evaluating Steel Components (NESC) is the topic of a parallel paper in this symposium. The results obtained in two other international programmes are summarised in Chapters 5 and 6 of this paper, where PTS tests with a model vessel and a benchmark assessment of RPV nozzle integrity are described. (author)

  8. Reformulating Component Identification as Document Analysis Problem

    NARCIS (Netherlands)

    Gross, H.G.; Lormans, M.; Zhou, J.

    2007-01-01

    One of the first steps of component procurement is the identification of required component features in large repositories of existing components. On the highest level of abstraction, component requirements as well as component descriptions are usually written in natural language. Therefore, we can

  9. What Governs Lorentz Factors of Jet Components in Blazars? Xinwu ...

    Indian Academy of Sciences (India)

    Abstract. We use a sample of radio-loud Active Galactic Nuclei. (AGNs) with measured black hole masses to explore the jet formation mechanisms in these sources. We find a significant correlation between black hole mass and the bulk Lorentz factor of the jet components for this sample, while no significant correlation is ...

  10. What Governs Lorentz Factors of Jet Components in Blazars?

    Indian Academy of Sciences (India)

    We use a sample of radio-loud Active Galactic Nuclei (AGNs) with measured black hole masses to explore the jet formation mechanisms in these sources. We find a significant correlation between black hole mass and the bulk Lorentz factor of the jet components for this sample, while no significant correlation is present ...

  11. Nonlinear principal component analysis and its applications

    CERN Document Server

    Mori, Yuichi; Makino, Naomichi

    2016-01-01

    This book expounds the principle and related applications of nonlinear principal component analysis (PCA), which is a useful method for analyzing data with mixed measurement levels. In the part dealing with the principle, after a brief introduction of ordinary PCA, a PCA for categorical data (nominal and ordinal) is introduced as nonlinear PCA, in which an optimal scaling technique is used to quantify the categorical variables. The alternating least squares (ALS) algorithm is the main algorithm in the method. Multiple correspondence analysis (MCA), a special case of nonlinear PCA, is also introduced. All formulations in these methods are integrated in the same manner as matrix operations. Because data of any measurement level can be treated consistently as numerical data, and because ALS is a very powerful tool for estimation, the methods can be utilized in a variety of fields such as biometrics, econometrics, psychometrics, and sociology. In the applications part of the book, four applications are introduced: variable selection for mixed...

  12. Principal Component Analysis In Radar Polarimetry

    Directory of Open Access Journals (Sweden)

    A. Danklmayer

    2005-01-01

    Second-order moments of multivariate (often Gaussian) joint probability density functions can be described by the covariance or normalised correlation matrices, or by the Kennaugh matrix (Kronecker matrix). In radar polarimetry the application of the covariance matrix is known as target decomposition theory, which is a special application of the extremely versatile Principal Component Analysis (PCA). The basic idea of PCA is to convert a data set consisting of correlated random variables into a new set of uncorrelated variables and to order the new variables according to the value of their variances. It is important to stress that uncorrelatedness does not necessarily mean independence, which is used in the much stronger concept of Independent Component Analysis (ICA). Both concepts agree for multivariate Gaussian distribution functions, which represent the most random and least structured distributions. In this contribution, we propose a new approach to applying the concept of PCA to radar polarimetry. New uncorrelated random variables are introduced by means of linear transformations with well-determined loading coefficients. This, in turn, allows the decomposition of the original random backscattering target variables into three point targets with new random uncorrelated variables whose variances agree with the eigenvalues of the covariance matrix. This allows a new interpretation of existing decomposition theorems.
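The PCA construction the abstract describes, converting correlated variables into uncorrelated ones ordered by variance, is exactly an eigendecomposition of the covariance matrix. A NumPy sketch on toy (not polarimetric) data with an invented covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated zero-mean Gaussian variables (toy stand-in for target data)
Sigma = np.array([[4.0, 1.5, 0.5],
                  [1.5, 2.0, 0.3],
                  [0.5, 0.3, 1.0]])
X = rng.multivariate_normal(np.zeros(3), Sigma, size=5000)
Xc = X - X.mean(axis=0)
C = np.cov(Xc, rowvar=False)              # sample covariance matrix
vals, vecs = np.linalg.eigh(C)
order = np.argsort(vals)[::-1]            # descending variance
vals, vecs = vals[order], vecs[:, order]
Y = Xc @ vecs                             # new, uncorrelated variables
C_Y = np.cov(Y, rowvar=False)             # diagonal: the eigenvalues
```

The transformed variables are exactly uncorrelated in-sample, and their variances are the eigenvalues of the covariance matrix, which is the property the polarimetric decomposition relies on.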

  13. Principal component analysis of normalized full spectrum mass spectrometry data in multiMS-toolbox: An effective tool to identify important factors for classification of different metabolic patterns and bacterial strains.

    Science.gov (United States)

    Cejnar, Pavel; Kuckova, Stepanka; Prochazka, Ales; Karamonova, Ludmila; Svobodova, Barbora

    2018-06-15

    Explorative statistical analysis of mass spectrometry data is still a time-consuming step. We analyzed critical factors for the application of principal component analysis (PCA) in mass spectrometry and focused on two whole-spectrum-based normalization techniques and their application in the analysis of registered peak data and, in comparison, in full spectrum data analysis. We used this technique to identify different metabolic patterns in bacterial cultures of Cronobacter sakazakii, an important foodborne pathogen. Two software utilities were implemented: ms-alone, a Python-based utility for mass spectrometry data preprocessing and peak extraction, and the multiMS-toolbox, an R software tool for advanced peak registration and detailed explorative statistical analysis. The bacterial culture of Cronobacter sakazakii was cultivated on Enterobacter sakazakii Isolation Agar, Blood Agar Base and Tryptone Soya Agar for 24 h and 48 h and applied by the smear method on an Autoflex Speed MALDI-TOF mass spectrometer. For the three tested cultivation media, only two different metabolic patterns of Cronobacter sakazakii were identified using PCA applied to data normalized by the two different normalization techniques. Results from matched peak data and subsequent detailed full spectrum analysis likewise identified only two different metabolic patterns: cultivation on Enterobacter sakazakii Isolation Agar showed significant differences to cultivation on the other two tested media. The metabolic patterns for all tested cultivation media also proved to depend on cultivation time. Both whole-spectrum-based normalization techniques, together with the full spectrum PCA, allow identification of important discriminative factors in experiments with several variable condition factors, avoiding problems with improper identification of peaks or emphasis on below-threshold peak data. The amount of processed data remains manageable. Both implemented software utilities are available.

  14. Factor analysis and scintigraphy

    International Nuclear Information System (INIS)

    Di Paola, R.; Penel, C.; Bazin, J.P.; Berche, C.

    1976-01-01

    The goal of factor analysis is usually to achieve reduction of a large set of data, extracting essential features without previous hypothesis. Due to the development of computerized systems, the use of largest sampling, the possibility of sequential data acquisition and the increase of dynamic studies, the problem of data compression can be encountered now in routine. Thus, results obtained for compression of scintigraphic images were first presented. Then possibilities given by factor analysis for scan processing were discussed. At last, use of this analysis for multidimensional studies and specially dynamic studies were considered for compression and processing [fr

  15. Component fragilities - data collection, analysis and interpretation

    International Nuclear Information System (INIS)

    Bandyopadhyay, K.K.; Hofmayer, C.H.

    1986-01-01

    As part of the component fragility research program sponsored by the US Nuclear Regulatory Commission, BNL is involved in establishing seismic fragility levels for various nuclear power plant equipment with emphasis on electrical equipment, by identifying, collecting and analyzing existing test data from various sources. BNL has reviewed approximately seventy test reports to collect fragility or high level test data for switchgears, motor control centers and similar electrical cabinets, valve actuators and numerous electrical and control devices of various manufacturers and models. Through a cooperative agreement, BNL has also obtained test data from EPRI/ANCO. An analysis of the collected data reveals that fragility levels can best be described by a group of curves corresponding to various failure modes. The lower bound curve indicates the initiation of malfunctioning or structural damage, whereas the upper bound curve corresponds to overall failure of the equipment based on known failure modes occurring separately or interactively. For some components, the upper and lower bound fragility levels are observed to vary appreciably depending upon the manufacturers and models. An extensive amount of additional fragility or high level test data exists. If completely collected and properly analyzed, the entire data bank is expected to greatly reduce the need for additional testing to establish fragility levels for most equipment

  16. Component fragilities. Data collection, analysis and interpretation

    International Nuclear Information System (INIS)

    Bandyopadhyay, K.K.; Hofmayer, C.H.

    1985-01-01

    As part of the component fragility research program sponsored by the US NRC, BNL is involved in establishing seismic fragility levels for various nuclear power plant equipment with emphasis on electrical equipment. To date, BNL has reviewed approximately seventy test reports to collect fragility or high level test data for switchgears, motor control centers and similar electrical cabinets, valve actuators and numerous electrical and control devices, e.g., switches, transmitters, potentiometers, indicators, relays, etc., of various manufacturers and models. BNL has also obtained test data from EPRI/ANCO. Analysis of the collected data reveals that fragility levels can best be described by a group of curves corresponding to various failure modes. The lower bound curve indicates the initiation of malfunctioning or structural damage, whereas the upper bound curve corresponds to overall failure of the equipment based on known failure modes occurring separately or interactively. For some components, the upper and lower bound fragility levels are observed to vary appreciably depending upon the manufacturers and models. For some devices, testing even at the shake table vibration limit does not produce any failure. Failure of a relay is observed to be a frequent cause of failure of an electrical panel or a system. An extensive amount of additional fragility or high level test data exists

  17. Integrating Data Transformation in Principal Components Analysis

    KAUST Repository

    Maadooliat, Mehdi

    2015-01-02

    Principal component analysis (PCA) is a popular dimension reduction method used to reduce the complexity of high-dimensional datasets and obtain their informative aspects. When the data distribution is skewed, data transformation is commonly applied prior to PCA. Such a transformation is usually obtained from previous studies, prior knowledge, or trial and error. In this work, we develop a model-based method that integrates data transformation into PCA and finds an appropriate transformation using the maximum profile likelihood. Extensions of the method to handle functional data and missing values are also developed. Several numerical algorithms are provided for efficient computation. The proposed method is illustrated using simulated and real-world data examples.
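The idea of transforming skewed data before PCA can be sketched with a column-wise Box-Cox transform. Note the paper fits the transformation jointly with PCA by maximum profile likelihood, whereas this sketch simply applies SciPy's per-column maximum-likelihood Box-Cox as a simpler stand-in:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
X = np.exp(rng.normal(size=(500, 3)))      # skewed (lognormal) columns
# Box-Cox transform each column at its maximum-likelihood lambda
Xt = np.column_stack([stats.boxcox(X[:, j])[0] for j in range(X.shape[1])])
Xt = (Xt - Xt.mean(axis=0)) / Xt.std(axis=0)   # standardize
# PCA on the transformed data: eigenvalues of the covariance matrix
eigvals = np.linalg.eigvalsh(np.cov(Xt, rowvar=False))[::-1]
skew_before = np.abs(stats.skew(X, axis=0))
skew_after = np.abs(stats.skew(Xt, axis=0))
```

The transform removes most of the skewness, so the subsequent PCA (which implicitly assumes elliptical scatter) summarizes the data far better than PCA on the raw columns would.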

  18. Research on Air Quality Evaluation based on Principal Component Analysis

    Science.gov (United States)

    Wang, Xing; Wang, Zilin; Guo, Min; Chen, Wei; Zhang, Huan

    2018-01-01

    Economic growth has led to environmental capacity decline and the deterioration of air quality. Air quality evaluation as a fundamental of environmental monitoring and air pollution control has become increasingly important. Based on the principal component analysis (PCA), this paper evaluates the air quality of a large city in Beijing-Tianjin-Hebei Area in recent 10 years and identifies influencing factors, in order to provide reference to air quality management and air pollution control.

  19. WRKY Transcription Factors: Key Components in Abscisic Acid Signaling

    Science.gov (United States)

    2011-01-01

    ...networks that take inputs from numerous stimuli and that they are involved in mediating responses to numerous phytohormones, including salicylic acid... jasmonic acid, ABA and GA. These roles in multiple signalling pathways may in turn partly explain the pleiotropic effects commonly seen when TF genes are... Review article: WRKY transcription factors: key components in abscisic acid signalling. Deena L. Rushton, Prateek Tripathi, Roel C. Rabara, Jun Lin...

  20. Stackable Form-Factor Peripheral Component Interconnect Device and Assembly

    Science.gov (United States)

    Somervill, Kevin M. (Inventor); Ng, Tak-kwong (Inventor); Torres-Pomales, Wilfredo (Inventor); Malekpour, Mahyar R. (Inventor)

    2013-01-01

    A stackable form-factor Peripheral Component Interconnect (PCI) device can be configured as a host controller or a master/target for use on a PCI assembly. PCI device may comprise a multiple-input switch coupled to a PCI bus, a multiplexor coupled to the switch, and a reconfigurable device coupled to one of the switch and multiplexor. The PCI device is configured to support functionality from power-up, and either control function or add-in card function.

  1. PRINCIPAL COMPONENT ANALYSIS (PCA DAN APLIKASINYA DENGAN SPSS

    Directory of Open Access Journals (Sweden)

    Hermita Bus Umar

    2009-03-01

    PCA (Principal Component Analysis) comprises statistical techniques applied to a single set of variables when the researcher is interested in discovering which variables in the set form coherent subsets that are relatively independent of one another. Variables that are correlated with one another but largely independent of other subsets of variables are combined into factors. The goal of PCA is to determine to what extent each variable is explained by each dimension. Steps in PCA include selecting and measuring a set of variables, preparing the correlation matrix, extracting a set of factors from the correlation matrix, rotating the factors to increase interpretability, and interpreting the result.

  2. Long- and Short-Run Components of Factor Betas: Implications for Equity Pricing

    DEFF Research Database (Denmark)

    Asgharian, Hossein; Christiansen, Charlotte; Hou, Ai Jun

    We suggest a bivariate component GARCH model that simultaneously obtains factor betas' long- and short-run components. We apply this new model to industry portfolios using market, small-minus-big, and high-minus-low portfolios as risk factors and find that the cross-sectional average and dispersion of the betas' short-run component increase in bad states of the economy. Our analysis of the risk premium highlights the importance of decomposing risk across horizons: the risk premium associated with the short-run market beta is significantly positive. This is robust to the portfolio-set choice.

  3. Gene set analysis using variance component tests.

    Science.gov (United States)

    Huang, Yen-Tsung; Lin, Xihong

    2013-06-28

    Gene set analyses have become increasingly important in genomic research, as many complex diseases are contributed jointly by alterations of numerous genes. Genes often coordinate together as a functional repertoire, e.g., a biological pathway/network, and are highly correlated. However, most of the existing gene set analysis methods do not fully account for the correlation among the genes. Here we propose to exploit this important feature of a gene set to improve statistical power in gene set analyses. We propose to model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects obtained by assuming a common distribution for the regression coefficients in the multivariate linear regression model, and calculate the p-values using permutation and a scaled chi-square approximation. We show using simulations that the type I error is protected under different choices of working covariance matrices and that power improves as the working covariance approaches the true covariance. The global test is a special case of TEGS in which the correlation among genes in a gene set is ignored. Using both simulated data and a published diabetes dataset, we show that our test outperforms the commonly used approaches, the global test and gene set enrichment analysis (GSEA). In summary, we develop a gene set analysis method (TEGS) under the multivariate regression framework, which directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and the global test, in both simulations and a diabetes microarray dataset.
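TEGS computes p-values by permutation. The mechanics of such a test can be sketched with a simplified statistic (sum of squared per-gene group-mean differences, an illustrative stand-in for the variance-component statistic, on invented data):

```python
import numpy as np

def gene_set_perm_test(E, y, n_perm=1999, seed=0):
    """Permutation p-value for the effect of a binary label y on an
    expression matrix E (samples x genes). The statistic is a simple
    stand-in for the TEGS variance-component statistic."""
    rng = np.random.default_rng(seed)
    def stat(lab):
        d = E[lab == 1].mean(axis=0) - E[lab == 0].mean(axis=0)
        return float(d @ d)               # sum of squared mean differences
    obs = stat(y)
    perm = np.array([stat(rng.permutation(y)) for _ in range(n_perm)])
    # add-one correction so the p-value is never exactly zero
    return (1 + np.sum(perm >= obs)) / (1 + n_perm)

rng = np.random.default_rng(1)
n, g = 40, 10
y = np.array([0] * 20 + [1] * 20)
E_null = rng.normal(size=(n, g))          # no group effect
E_alt = E_null.copy()
E_alt[y == 1] += 0.8                      # shared shift across the gene set
p_null = gene_set_perm_test(E_null, y)
p_alt = gene_set_perm_test(E_alt, y)
```

A genuine set-level shift produces a small p-value while the null data does not, which is the behaviour the permutation calibration in TEGS guarantees.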

  4. Thermogravimetric analysis of combustible waste components

    DEFF Research Database (Denmark)

    Munther, Anette; Wu, Hao; Glarborg, Peter

    In order to gain fundamental knowledge about the co-combustion of coal and waste derived fuels, the pyrolytic behaviors of coal, four typical waste components and their mixtures have been studied by a simultaneous thermal analyzer (STA). The investigated waste components were wood, paper, polypro...

  5. Analysis of failed nuclear plant components

    Science.gov (United States)

    Diercks, D. R.

    1993-12-01

    Argonne National Laboratory has conducted analyses of failed components from nuclear power-generating stations since 1974. The considerations involved in working with and analyzing radioactive components are reviewed here, and the decontamination of these components is discussed. Analyses of four failed components from nuclear plants are then described to illustrate the kinds of failures seen in service. The failures discussed are (1) intergranular stress-corrosion cracking of core spray injection piping in a boiling water reactor, (2) failure of canopy seal welds in adapter tube assemblies in the control rod drive head of a pressurized water reactor, (3) thermal fatigue of a recirculation pump shaft in a boiling water reactor, and (4) failure of pump seal wear rings by nickel leaching in a boiling water reactor.

  6. Analysis of failed nuclear plant components

    International Nuclear Information System (INIS)

    Diercks, D.R.

    1993-01-01

    Argonne National Laboratory has conducted analyses of failed components from nuclear power-generating stations since 1974. The considerations involved in working with and analyzing radioactive components are reviewed here, and the decontamination of these components is discussed. Analyses of four failed components from nuclear plants are then described to illustrate the kinds of failures seen in service. The failures discussed are (1) intergranular stress-corrosion cracking of core spray injection piping in a boiling water reactor, (2) failure of canopy seal welds in adapter tube assemblies in the control rod drive head of a pressurized water reactor, (3) thermal fatigue of a recirculation pump shaft in a boiling water reactor, and (4) failure of pump seal wear rings by nickel leaching in a boiling water reactor

  7. Analysis of failed nuclear plant components

    International Nuclear Information System (INIS)

    Diercks, D.R.

    1992-07-01

    Argonne National Laboratory has conducted analyses of failed components from nuclear power generating stations since 1974. The considerations involved in working with and analyzing radioactive components are reviewed here, and the decontamination of these components is discussed. Analyses of four failed components from nuclear plants are then described to illustrate the kinds of failures seen in service. The failures discussed are (a) intergranular stress corrosion cracking of core spray injection piping in a boiling water reactor, (b) failure of canopy seal welds in adapter tube assemblies in the control rod drive head of a pressurized water reactor, (c) thermal fatigue of a recirculation pump shaft in a boiling water reactor, and (d) failure of pump seal wear rings by nickel leaching in a boiling water reactor

  8. A radiographic analysis of implant component misfit.

    LENUS (Irish Health Repository)

    Sharkey, Seamus

    2011-07-01

    Radiographs are commonly used to assess the fit of implant components, but there is no clear agreement on the amount of misfit that can be detected by this method. This study investigated the effect of gap size and the relative angle at which a radiograph was taken on the detection of component misfit. Different types of implant connections (internal or external) and radiographic modalities (film or digital) were assessed.

  9. Factor Analysis Using "R"

    Directory of Open Access Journals (Sweden)

    A. Alexander Beaujean

    2013-02-01

    R (R Development Core Team, 2011) is a very powerful tool for analyzing data that is gaining in popularity due to its cost (it's free) and flexibility (it's open-source). This article gives a general introduction to using R (i.e., loading the program, using functions, importing data). Then, using data from Canivez, Konold, Collins, and Wilson (2009), this article walks the user through how to use the program to conduct factor analysis, from both an exploratory and a confirmatory approach.

  10. Lifetime analysis of fusion-reactor components

    International Nuclear Information System (INIS)

    Mattas, R.F.

    1983-01-01

    A one-dimensional computer code has been developed to examine the lifetime of first-wall and impurity-control components. The code incorporates the operating and design parameters, the material characteristics, and the appropriate failure criteria for the individual components. The major emphasis of the modelling effort has been to calculate the temperature-stress-strain-radiation effects history of a component so that the synergistic effects between sputtering erosion, swelling, creep, fatigue, and crack growth can be examined. The general forms of the property equations are the same for all materials in order to provide the greatest flexibility for materials selection in the code. The code is capable of determining the behavior of a plate, composed of either a single or dual material structure, that is either totally constrained or constrained from bending but not from expansion. The code has been utilized to analyze the first walls for FED/INTOR and DEMO

  11. Fast and accurate methods of independent component analysis: A survey

    Czech Academy of Sciences Publication Activity Database

    Tichavský, Petr; Koldovský, Zbyněk

    2011-01-01

    Roč. 47, č. 3 (2011), s. 426-438 ISSN 0023-5954 R&D Projects: GA MŠk 1M0572; GA ČR GA102/09/1278 Institutional research plan: CEZ:AV0Z10750506 Keywords : Blind source separation * artifact removal * electroencephalogram * audio signal processing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.454, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/tichavsky-fast and accurate methods of independent component analysis a survey.pdf

  12. Nonlinear Principal Component Analysis Using Strong Tracking Filter

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The paper analyzes the problem of blind source separation (BSS) based on the nonlinear principal component analysis (NPCA) criterion. An adaptive strong tracking filter (STF) based algorithm was developed, which is immune to system model mismatches. Simulations demonstrate that the algorithm converges quickly and has satisfactory steady-state accuracy. The Kalman filtering algorithm and the recursive least-squares-type algorithm are shown to be special cases of the STF algorithm. Since the forgetting factor is adaptively updated by adjustment of the Kalman gain, the STF scheme provides more powerful tracking capability than the Kalman filtering and recursive least-squares algorithms.
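
    The STF algorithm itself is not reproduced in this record, but the family of adaptive PCA rules it generalizes can be illustrated with the simplest member, Oja's rule, which tracks the first principal direction from a stream of samples. This is a hedged sketch on invented data, not the authors' method:

    ```python
    import numpy as np

    def oja_pca(X, lr=0.01, n_epochs=20):
        """Online estimate of the first principal direction with Oja's rule:
        w <- w + lr * y * (x - y * w), where y = w . x.
        The rule approximately self-normalizes ||w|| to 1."""
        rng = np.random.default_rng(1)
        w = rng.normal(size=X.shape[1])
        w /= np.linalg.norm(w)
        for _ in range(n_epochs):
            for x in X:
                y = w @ x
                w += lr * y * (x - y * w)
        return w / np.linalg.norm(w)

    # data stretched along the 45-degree direction
    rng = np.random.default_rng(2)
    Z = rng.normal(size=(1000, 2)) * np.array([2.0, 0.2])
    th = np.pi / 4
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    X = Z @ R.T
    w = oja_pca(X)
    print(np.round(np.abs(w), 2))  # close to [0.71, 0.71]
    ```

    Recursive least-squares variants replace the fixed learning rate with a gain derived from a forgetting factor, which is the knob the STF scheme adapts automatically.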

  13. Generalized structured component analysis a component-based approach to structural equation modeling

    CERN Document Server

    Hwang, Heungsun

    2014-01-01

    Winner of the 2015 Sugiyama Meiko Award (Publication Award) of the Behaviormetric Society of Japan Developed by the authors, generalized structured component analysis is an alternative to two longstanding approaches to structural equation modeling: covariance structure analysis and partial least squares path modeling. Generalized structured component analysis allows researchers to evaluate the adequacy of a model as a whole, compare a model to alternative specifications, and conduct complex analyses in a straightforward manner. Generalized Structured Component Analysis: A Component-Based Approach to Structural Equation Modeling provides a detailed account of this novel statistical methodology and its various extensions. The authors present the theoretical underpinnings of generalized structured component analysis and demonstrate how it can be applied to various empirical examples. The book enables quantitative methodologists, applied researchers, and practitioners to grasp the basic concepts behind this new a...

  14. Principal component analysis of psoriasis lesions images

    DEFF Research Database (Denmark)

    Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær

    2003-01-01

    A set of RGB images of psoriasis lesions is used. By visual examination of these images, there seems to be no common pattern that could be used to find and align the lesions within and between sessions. It is expected that the principal components of the original images could be useful during future...

  15. A component analysis of positive behaviour support plans.

    Science.gov (United States)

    McClean, Brian; Grey, Ian

    2012-09-01

    Positive behaviour support (PBS) emphasises multi-component interventions by natural intervention agents to help people overcome challenging behaviours. This paper investigates which components are most effective and which factors might mediate effectiveness. Sixty-one staff working with individuals with intellectual disability and challenging behaviours completed longitudinal competency-based training in PBS. Each staff participant conducted a functional assessment and developed and implemented a PBS plan for one prioritised individual. A total of 1,272 interventions were available for analysis. Measures of challenging behaviour were taken at baseline, after 6 months, and at an average of 26 months follow-up. There was a significant reduction in the frequency, management difficulty, and episodic severity of challenging behaviour over the duration of the study. Escape was identified by staff as the most common function, accounting for 77% of challenging behaviours. The most commonly implemented components of intervention were setting event changes and quality-of-life-based interventions. Only treatment acceptability was found to be related to decreases in behavioural frequency. No single intervention component was found to have a greater association with reductions in challenging behaviour.

  16. Components of WWER engineering factors for peaking factors: status and trends

    International Nuclear Information System (INIS)

    Tsyganov, S.V.

    2010-01-01

    One of the topics for discussion at the special working group 'Elaboration of the methodology for calculating the core design engineering factors' is the problem of engineering factor components. The list of components corresponds to the phenomena that are taken into account by the engineering factor, and a better understanding of these phenomena is an important stage in developing a unified methodology. This paper presents a brief overview of the components of the engineering factor for VVER core peaking factors as they appear in the Kurchatov Institute methodology. The evolution of some components towards less conservative values is observed, and the author makes some assumptions about further progress in component assessment. The engineering factors that ensure observance of design limits at normal operation should cover, with a set probability, the uncertainty connected with the core design process. To define the value of these factors, it is necessary to determine the influence of the underlying uncertainties on the reactor parameter of interest. In practice, this consists of identifying all possible sources of uncertainty, estimating the influence of each, and from these defining the total influence of all uncertainties. An important stage of the factor calculation technique is the definition of the list of influencing uncertainties. All characteristics of a VVER core are obviously known with some uncertainty, owing to manufacturing tolerances, measurement errors, etc. However, only a subset of them has an essential influence on safety-related parameters; when forming the list, only those characteristics whose influence on the corresponding parameter is essential are selected. (Author)

  17. Fractographic analysis of fractured dental implant components

    Directory of Open Access Journals (Sweden)

    Chih-Ling Chang

    2013-03-01

    Conclusion: To avoid implant fracture, certain underlying mechanical risk factors should be noted, such as a habit of bruxism, bridgework with a cantilever design, or two implants installed in a line in the posterior mandible.

  18. Multiple factor analysis by example using R

    CERN Document Server

    Pagès, Jérôme

    2014-01-01

    Multiple factor analysis (MFA) enables users to analyze tables of individuals and variables in which the variables are structured into quantitative, qualitative, or mixed groups. Written by the co-developer of this methodology, Multiple Factor Analysis by Example Using R brings together the theoretical and methodological aspects of MFA. It also includes examples of applications and details of how to implement MFA using an R package (FactoMineR).The first two chapters cover the basic factorial analysis methods of principal component analysis (PCA) and multiple correspondence analysis (MCA). The

  19. Incremental Tensor Principal Component Analysis for Handwritten Digit Recognition

    Directory of Open Access Journals (Sweden)

    Chang Liu

    2014-01-01

    Full Text Available To overcome the shortcomings of traditional dimensionality reduction algorithms, an incremental tensor principal component analysis (ITPCA) algorithm based on an updated-SVD technique is proposed in this paper. This paper proves the relationship between PCA, 2DPCA, MPCA, and the graph embedding framework theoretically and derives the incremental learning procedure for adding single samples and multiple samples in detail. The experiments on handwritten digit recognition have demonstrated that ITPCA achieves better recognition performance than vector-based principal component analysis (PCA), incremental principal component analysis (IPCA), and multilinear principal component analysis (MPCA) algorithms. At the same time, ITPCA also has lower time and space complexity.
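
    The updated-SVD idea underlying incremental PCA can be sketched compactly for the matrix (non-tensor) case: keep only the singular values and right singular vectors of the data seen so far, and fold in a new batch of rows by decomposing a small stacked matrix. This is an illustrative NumPy sketch on synthetic data (assuming pre-centered data; it is not the paper's ITPCA algorithm):

    ```python
    import numpy as np

    def svd_update(s, Vt, B, k):
        """Fold new rows B into a dataset whose rank-k SVD right factors
        are (s, Vt). Only s and V are tracked -- enough to recover the
        principal components of centered data."""
        K = np.vstack([np.diag(s) @ Vt, B])
        _, s_new, Vt_new = np.linalg.svd(K, full_matrices=False)
        return s_new[:k], Vt_new[:k]

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5)) @ np.diag([3, 2, 1, 0.5, 0.1])
    X -= X.mean(axis=0)                     # assume data pre-centered

    # batch SVD on the first half, then fold in the second half
    _, s, Vt = np.linalg.svd(X[:100], full_matrices=False)
    s, Vt = svd_update(s[:2], Vt[:2], X[100:], k=2)

    # compare against the full-batch principal components
    _, s_full, Vt_full = np.linalg.svd(X, full_matrices=False)
    print(np.abs(np.diag(Vt @ Vt_full[:2].T)))  # entries near 1.0
    ```

    Truncating to rank k before each update is what makes the procedure incremental; the price is a small approximation error relative to the full-batch SVD.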

  20. Identifying the Component Structure of Satisfaction Scales by Nonlinear Principal Components Analysis

    NARCIS (Netherlands)

    Manisera, M.; Kooij, A.J. van der; Dusseldorp, E.

    2010-01-01

    The component structure of 14 Likert-type items measuring different aspects of job satisfaction was investigated using nonlinear Principal Components Analysis (NLPCA). NLPCA allows for analyzing these items at an ordinal or interval level. The participants were 2066 workers from five types of social

  1. Columbia River Component Data Gap Analysis

    Energy Technology Data Exchange (ETDEWEB)

    L. C. Hulstrom

    2007-10-23

    This Data Gap Analysis report documents the results of a study conducted by Washington Closure Hanford (WCH) to compile and review the currently available surface water and sediment data for the Columbia River near and downstream of the Hanford Site. This Data Gap Analysis study was conducted to review the adequacy of the existing surface water and sediment data set from the Columbia River, with specific reference to the use of the data in future site characterization and screening level risk assessments.

  2. Use of Sparse Principal Component Analysis (SPCA) for Fault Detection

    DEFF Research Database (Denmark)

    Gajjar, Shriram; Kulahci, Murat; Palazoglu, Ahmet

    2016-01-01

    Principal component analysis (PCA) has been widely used for data dimension reduction and process fault detection. However, interpreting the principal components and the outcomes of PCA-based monitoring techniques is a challenging task since each principal component is a linear combination of the ...
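
    The PCA-based process monitoring this record refers to is usually built around two statistics: Hotelling's T² inside the retained principal-component subspace and the squared prediction error (SPE/Q) in the residual subspace. A minimal sketch (invented process data, not the paper's SPCA method):

    ```python
    import numpy as np

    def pca_monitor(X_train, k):
        """Fit PCA on normal-operation data and return a scorer giving the
        Hotelling T^2 (within-model) and SPE/Q (residual) statistics."""
        mu = X_train.mean(axis=0)
        Xc = X_train - mu
        _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        P = Vt[:k].T                                # retained loadings
        lam = (s[:k] ** 2) / (len(X_train) - 1)     # component variances
        def score(x):
            xc = x - mu
            t = P.T @ xc                            # scores in PC subspace
            resid = xc - P @ t                      # residual-subspace part
            return np.sum(t ** 2 / lam), resid @ resid
        return score

    rng = np.random.default_rng(3)
    W = rng.normal(size=(2, 4))                     # two latent drivers
    X_train = rng.normal(size=(500, 2)) @ W + 0.1 * rng.normal(size=(500, 4))
    score = pca_monitor(X_train, k=2)

    x_ok = rng.normal(size=2) @ W + 0.1 * rng.normal(size=4)
    x_fault = x_ok + np.array([0.0, 0.0, 3.0, 0.0])  # bias fault, sensor 3
    t2_ok, spe_ok = score(x_ok)
    t2_fault, spe_fault = score(x_fault)
    print(spe_fault > spe_ok)   # the fault inflates the residual statistic
    ```

    A sparse variant constrains each loading vector in P to involve few variables, which is what makes the resulting components easier to interpret when a fault alarm must be traced back to specific sensors.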

  3. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.jp; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro [Department of Radiation Oncology and Image-applied Therapy, Kyoto University, 54 Shogoin-Kawaharacho, Sakyo, Kyoto 606-8507 (Japan)

    2016-09-15

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
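
    The one-factor random effect model described here can be written y_ij = mu + b_i + e_ij (patient i, fraction j), with the systematic SD estimated from the between- and within-patient mean squares. The following sketch (with invented simulation values, not the authors' data) shows the ANOVA estimators and why the conventional estimate, the SD of per-patient means, overestimates the systematic component:

    ```python
    import numpy as np

    def variance_components(y):
        """ANOVA estimators for the one-factor random effect model
        y[i, j] = mu + b_i + e_ij (i: patient, j: fraction).
        Returns (systematic SD, random SD)."""
        a, n = y.shape
        group_means = y.mean(axis=1)
        msw = ((y - group_means[:, None]) ** 2).sum() / (a * (n - 1))
        msb = n * ((group_means - y.mean()) ** 2).sum() / (a - 1)
        var_sys = max((msb - msw) / n, 0.0)   # clip negative estimates
        return np.sqrt(var_sys), np.sqrt(msw)

    rng = np.random.default_rng(4)
    a, n = 200, 3                               # 200 patients, 3 fractions
    true_sys, true_rand = 2.0, 1.0
    y = (true_sys * rng.normal(size=(a, 1))     # per-patient (systematic)
         + true_rand * rng.normal(size=(a, n))) # per-fraction (random)

    sig_sys, sig_rand = variance_components(y)
    naive_sys = y.mean(axis=1).std(ddof=1)      # conventional estimate
    print(round(sig_sys, 1), round(sig_rand, 1))  # near 2.0 and 1.0
    print(naive_sys > sig_sys)                  # conventional overestimates
    ```

    The naive estimate has expected squared value sigma_sys^2 + sigma_rand^2 / n, so the overestimation is worst when n (fractions per patient) is small, matching the note's remark about hypofractionated settings.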

  4. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    International Nuclear Information System (INIS)

    Matsuo, Yukinori; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-01-01

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.

  5. Mechanical factors affecting reliability of pressure components (fatigue, cracking)

    International Nuclear Information System (INIS)

    Lebey, J.; Garnier, C.; Roche, R.; Barrachin, B.

    1978-01-01

    The reliability of a pressure component can be seriously affected by the formation and development of cracks. The experimental studies presented in this paper are devoted to three different aspects of crack propagation phenomena which have been relatively little described. In close connection with safety analyses of PWR, the authors study the influence of the environment by carrying out fatigue tests with samples bathed in hot pressurized water. Ferritic, austenitic and Incoloy 800 steels were used and the results are presented in the form of fatigue curves in the oligocyclic region. The second part of the paper relates to crack initiation criteria in ductile steels weakened by notches. The CT samples used make it possible to study almost all types of fracture (ductile, intermediate and brittle). The use of two criteria based on the load limit and on the toughness of the material constitutes a practical way of evaluating crack propagation conditions. A series of tests carried out on notched spherical vessels of different sizes shows that large vessels are relatively brittle; fast unstable fracture is observed as size increases. Crack growth rate in PWR primary circuits (3/6 steel) is studied on piping elements (0.25 scale) subjected to cyclic stress variations (285°C and with pressure varying between 1 and 160 bar in each cycle). By calculating the stress intensity factor, correlation with results obtained in the laboratory on CT samples is possible. (author)

  6. Interpretation of organic components from Positive Matrix Factorization of aerosol mass spectrometric data

    Directory of Open Access Journals (Sweden)

    I. M. Ulbrich

    2009-05-01

    Full Text Available The organic aerosol (OA) dataset from an Aerodyne Aerosol Mass Spectrometer (Q-AMS) collected at the Pittsburgh Air Quality Study (PAQS) in September 2002 was analyzed with Positive Matrix Factorization (PMF). Three components – hydrocarbon-like OA (HOA), a highly-oxygenated OA (OOA-1) that correlates well with sulfate, and a less-oxygenated, semi-volatile OA (OOA-2) that correlates well with nitrate and chloride – are identified and interpreted as primary combustion emissions, aged SOA, and semivolatile, less aged SOA, respectively. The complexity of interpreting the PMF solutions of unit mass resolution (UMR) AMS data is illustrated by a detailed analysis of the solutions as a function of number of components and rotational forcing. A public web-based database of AMS spectra has been created to aid this type of analysis. Realistic synthetic data is also used to characterize the behavior of PMF for choosing the best number of factors, and evaluating the rotations of non-unique solutions. The ambient and synthetic data indicate that the variation of the PMF quality of fit parameter (Q, a normalized chi-squared metric) vs. number of factors in the solution is useful to identify the minimum number of factors, but more detailed analysis and interpretation are needed to choose the best number of factors. The maximum value of the rotational matrix is not useful for determining the best number of factors. In synthetic datasets, factors are "split" into two or more components when solving for more factors than were used in the input. Elements of the "splitting" behavior are observed in solutions of real datasets with several factors. Significant structure remains in the residual of the real dataset after physically-meaningful factors have been assigned and an unrealistic number of factors would be required to explain the remaining variance. This residual structure appears to be due to variability in the spectra of the components
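
    PMF factors a non-negative data matrix (here, time series x mass spectra) into non-negative contributions and profiles, with residuals weighted by measurement uncertainty. Dropping the uncertainty weighting for brevity, the core idea can be illustrated with plain non-negative matrix factorization via Lee-Seung multiplicative updates; the synthetic "two source profiles" data below is invented for the example and is not the PAQS dataset:

    ```python
    import numpy as np

    def nmf(X, k, n_iter=500, eps=1e-9):
        """Lee-Seung multiplicative updates for X ~ G @ F with G, F >= 0.
        (PMF additionally weights each residual by its measurement
        uncertainty; that weighting is omitted here.)"""
        rng = np.random.default_rng(0)
        m, n = X.shape
        G = rng.random((m, k))
        F = rng.random((k, n))
        for _ in range(n_iter):
            F *= (G.T @ X) / (G.T @ G @ F + eps)
            G *= (X @ F.T) / (G @ (F @ F.T) + eps)
        return G, F

    # synthetic "time series x spectra" matrix from 2 source profiles
    rng = np.random.default_rng(1)
    profiles = rng.random((2, 20))            # factor spectra
    contribs = rng.random((300, 2))           # time-varying contributions
    X = contribs @ profiles
    G, F = nmf(X, k=2)
    rel_err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
    print(round(rel_err, 3))                  # small residual for k=2
    ```

    Solving with k larger than the true number of sources reproduces the "splitting" behavior the abstract describes: the extra factors carve one true profile into near-duplicates rather than finding new structure.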

  7. Projection and analysis of nuclear components

    International Nuclear Information System (INIS)

    Heeschen, U.

    1980-01-01

    The classification and the types of analysis carried out in pipings for quality control and safety of nuclear power plants, are presented. The operation and emergency conditions with emphasis of possible simplifications of calculations are described. (author/M.C.K.) [pt

  8. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Mixed ICA/PCA is of potential interest in any field where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.

  9. Extrinsic Factors as Component Positions to Bone and Intrinsic Factors Affecting Postoperative Rotational Limb Alignment in Total Knee Arthroplasty.

    Science.gov (United States)

    Mochizuki, Tomoharu; Sato, Takashi; Tanifuji, Osamu; Watanabe, Satoshi; Kobayashi, Koichi; Endo, Naoto

    2018-02-13

    This study aimed to identify the factors affecting postoperative rotational limb alignment of the tibia relative to the femur. We hypothesized that not only component positions but also several intrinsic factors were associated with postoperative rotational limb alignment. This study included 99 knees (90 women and 9 men) with a mean age of 77 ± 6 years. A three-dimensional (3D) assessment system was applied under weight-bearing conditions to biplanar long-leg radiographs using a 3D-to-2D image registration technique. The evaluation parameters were (1) component position; (2) preoperative and postoperative coronal, sagittal, and rotational limb alignment; (3) preoperative bony deformity, including femoral torsion, condylar twist angle, and tibial torsion; and (4) preoperative and postoperative range of motion (ROM). In multiple linear regression analysis using a stepwise procedure, postoperative rotational limb alignment was associated with the following: (1) rotation of the component position (tibia: β = 0.371, P intrinsic factors, such as preoperative rotational limb alignment, ROM, and tibial torsion, affected postoperative rotational limb alignment. Provided that the component positions are correct, the intrinsic factors that surgeons can control should be attended to; in particular, ROM should be improved as far as possible to achieve better postoperative rotational limb alignment. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. Nonparametric inference in nonlinear principal components analysis : exploration and beyond

    NARCIS (Netherlands)

    Linting, Mariëlle

    2007-01-01

    In the social and behavioral sciences, data sets often do not meet the assumptions of traditional analysis methods. Therefore, nonlinear alternatives to traditional methods have been developed. This thesis starts with a didactic discussion of nonlinear principal components analysis (NLPCA),

  11. Principal component analysis networks and algorithms

    CERN Document Server

    Kong, Xiangyu; Duan, Zhansheng

    2017-01-01

    This book not only provides a comprehensive introduction to neural-based PCA methods in control science, but also presents many novel PCA algorithms and their extensions and generalizations, e.g., dual purpose, coupled PCA, GED, neural based SVD algorithms, etc. It also discusses in detail various analysis methods for the convergence, stabilizing, self-stabilizing property of algorithms, and introduces the deterministic discrete-time systems method to analyze the convergence of PCA/MCA algorithms. Readers should be familiar with numerical analysis and the fundamentals of statistics, such as the basics of least squares and stochastic algorithms. Although it focuses on neural networks, the book only presents their learning law, which is simply an iterative algorithm. Therefore, no a priori knowledge of neural networks is required. This book will be of interest and serve as a reference source to researchers and students in applied mathematics, statistics, engineering, and other related fields.

  12. Blind source separation dependent component analysis

    CERN Document Server

    Xiang, Yong; Yang, Zuyuan

    2015-01-01

    This book provides readers a complete and self-contained set of knowledge about dependent source separation, including the latest development in this field. The book gives an overview on blind source separation where three promising blind separation techniques that can tackle mutually correlated sources are presented. The book further focuses on the non-negativity based methods, the time-frequency analysis based methods, and the pre-coding based methods, respectively.

  13. Kernel principal component analysis for change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Morton, J.C.

    2008-01-01

    region acquired at two different time points. If change over time does not dominate the scene, the projection of the original two bands onto the second eigenvector will show change over time. In this paper a kernel version of PCA is used to carry out the analysis. Unlike ordinary PCA, kernel PCA with a Gaussian kernel successfully finds the change observations in a case where nonlinearities are introduced artificially.
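
    The kernel PCA projection used for change detection can be sketched in a few lines: build a Gaussian-kernel Gram matrix over the two-band pixel vectors, double-center it, eigendecompose, and read off scores for the leading components. The two-band toy data below is invented for the example (it is not the paper's imagery):

    ```python
    import numpy as np

    def kernel_pca_scores(X, gamma, k=2):
        """Project observations onto the leading Gaussian-kernel
        principal components (standard kernel PCA on the Gram matrix)."""
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-gamma * sq)
        n = len(X)
        J = np.eye(n) - np.ones((n, n)) / n
        Kc = J @ K @ J                       # double-center the Gram matrix
        eigvals, eigvecs = np.linalg.eigh(Kc)
        order = np.argsort(eigvals)[::-1][:k]
        alphas = eigvecs[:, order] / np.sqrt(np.maximum(eigvals[order], 1e-12))
        return Kc @ alphas                   # scores of the training points

    # two "bands" (time 1, time 2): most pixels unchanged, a few changed
    rng = np.random.default_rng(5)
    t1 = rng.normal(size=300)
    no_change = np.column_stack(
        [t1[:280], t1[:280] + 0.05 * rng.normal(size=280)])
    change = np.column_stack([t1[280:], t1[280:] + 2.0])  # shift at time 2
    X = np.vstack([no_change, change])
    scores = kernel_pca_scores(X, gamma=0.5)
    # changed pixels lie off the no-change diagonal and should stand out
    # on one of the leading kernel components
    print(scores.shape)  # (300, 2)
    ```

    Because the Gram matrix is double-centered, each score column sums to zero, mirroring the zero-mean property of ordinary PCA scores.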

  14. Analysis of tangible and intangible hotel service quality components

    Directory of Open Access Journals (Sweden)

    Marić Dražen

    2016-01-01

    Full Text Available The issue of service quality is one of the essential areas of marketing theory and practice, as high quality can lead to customer satisfaction and loyalty, i.e. successful business results. It is vital for any company, especially in the services sector, to understand and grasp consumers' expectations and perceptions pertaining to the broad range of factors affecting consumers' evaluation of services, their satisfaction and loyalty. Hospitality is a service sector where the significance of these elements grows exponentially. The aim of this study is to identify the significance of individual quality components in the hospitality industry. The questionnaire used for gathering data comprised 19 tangible and 14 intangible attributes of service quality, which the respondents rated on a five-point scale. The analysis also identified the factorial structure of the tangible and intangible elements of hotel service. The paper aims to contribute to the existing literature by pointing to the significance of tangible and intangible components of service quality. A very small number of studies conducted in hospitality and hotel management identify the sub-factors within these two dimensions of service quality. The paper also provides useful managerial implications. The obtained results help managers in hospitality to establish the service offers that consumers find the most important when choosing a given hotel.

  15. Real Time Engineering Analysis Based on a Generative Component Implementation

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Klitgaard, Jens

    2007-01-01

    The present paper outlines the idea of a conceptual design tool with real time engineering analysis which can be used in the early conceptual design phase. The tool is based on a parametric approach using Generative Components with embedded structural analysis. Each of these components uses the g...

  16. Factors affecting construction performance: exploratory factor analysis

    Science.gov (United States)

    Soewin, E.; Chinda, T.

    2018-04-01

    The present work attempts to develop a multidimensional performance evaluation framework for a construction company by considering all relevant measures of performance. Based on previous studies, this study hypothesizes nine key factors, with a total of 57 associated items. The hypothesized factors, with their associated items, are then used to develop a questionnaire survey to gather data. Exploratory factor analysis (EFA) applied to the collected data yielded 10 factors with 57 items affecting construction performance. The findings reveal ten key performance factors (KPIs), namely: 1) Time, 2) Cost, 3) Quality, 4) Safety & Health, 5) Internal Stakeholder, 6) External Stakeholder, 7) Client Satisfaction, 8) Financial Performance, 9) Environment, and 10) Information, Technology & Innovation. The analysis helps to develop a multi-dimensional performance evaluation framework for effective measurement of construction performance. The 10 key performance factors can be broadly categorized into economic, social, environmental, and technology aspects. It is important that a multi-dimensional performance evaluation framework include all key factors affecting the construction performance of a company, so that management can plan and implement an effective performance development plan matching the mission and vision of the company.

  17. How Many Separable Sources? Model Selection In Independent Components Analysis

    Science.gov (United States)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  18. Nonlinear Process Fault Diagnosis Based on Serial Principal Component Analysis.

    Science.gov (United States)

    Deng, Xiaogang; Tian, Xuemin; Chen, Sheng; Harris, Chris J

    2018-03-01

    Many industrial processes contain both linear and nonlinear parts, and kernel principal component analysis (KPCA), widely used in nonlinear process monitoring, may not offer the most effective means for dealing with these nonlinear processes. This paper proposes a new hybrid linear-nonlinear statistical modeling approach for nonlinear process monitoring by closely integrating linear principal component analysis (PCA) and nonlinear KPCA using a serial model structure, which we refer to as serial PCA (SPCA). Specifically, PCA is first applied to extract PCs as linear features, and to decompose the data into the PC subspace and residual subspace (RS). Then, KPCA is performed in the RS to extract the nonlinear PCs as nonlinear features. Two monitoring statistics are constructed for fault detection, based on both the linear and nonlinear features extracted by the proposed SPCA. To effectively perform fault identification after a fault is detected, an SPCA similarity factor method is built for fault recognition, which fuses both the linear and nonlinear features. Unlike PCA and KPCA, the proposed method takes into account both linear and nonlinear PCs simultaneously, and therefore, it can better exploit the underlying process's structure to enhance fault diagnosis performance. Two case studies involving a simulated nonlinear process and the benchmark Tennessee Eastman process demonstrate that the proposed SPCA approach is more effective than the existing state-of-the-art approach based on KPCA alone, in terms of nonlinear process fault detection and identification.

  19. Problems of stress analysis of fuelling machine head components

    International Nuclear Information System (INIS)

    Mathur, D.D.

    1975-01-01

    The problems of stress analysis of fuelling machine head components are discussed. To fulfil their functional requirements, the components must have certain shapes for which the stress problems cannot be matched to a catalogue of pre-determined solutions. In several areas, complex systems of loading due to hydrostatic pressure, weight, moments and temperature gradients, coupled with the intricate shapes of the components, make it difficult to arrive at satisfactory solutions. In particular, the analysis requirements of the magazine housing, end cover, gravloc clamps and centre support are highlighted. An experimental stress analysis programme together with a theoretical finite element analysis is perhaps the answer. (author)

  20. Investigation of effective factors of transient thermal stress of the MONJU-System components

    Energy Technology Data Exchange (ETDEWEB)

    Inoue, Masaaki; Hirayama, Hiroshi; Kimura, Kimitaka; Jinbo, M. [Toshiba Corp., Kawasaki, Kanagawa (Japan)

    1999-03-01

    Transient thermal stress of each system component in the fast breeder reactor is an uncertain factor in its structural design. The temperature distribution in a system component changes over a wide range in time and in space. A unified evaluation technique of thermal, hydraulic, and structural analysis, which includes thermal striping, temperature stratification, transient thermal stress and the integrity of the system components, is required for the optimum design of the fast reactor plant. Thermal boundary conditions should be set up by both the transient thermal stress analysis and the structural integrity evaluation of each system component. Reasonable thermal boundary conditions for the design of MONJU and a demonstration fast reactor are investigated. The temperature distribution analysis models and the thermal boundary conditions on the Y-piece structural parts of each system component, such as the reactor vessel, intermediate heat exchanger, primary main circulation pump, steam generator, superheater and upper structure of the reactor core, are illustrated in the report. (M. Suetake)

  1. Component reliability analysis for development of component reliability DB of Korean standard NPPs

    International Nuclear Information System (INIS)

    Choi, S. Y.; Han, S. H.; Kim, S. H.

    2002-01-01

    Reliability data for Korean NPPs that reflect plant-specific characteristics are necessary for PSA and Risk Informed Applications. We have performed a project to develop a component reliability DB and to calculate component reliability measures such as failure rate and unavailability. We have collected the component operation data and failure/repair data of Korean standard NPPs, and analyzed the failure data with a data analysis method developed to suit the domestic data situation. We have then compared the reliability results with generic data for foreign NPPs.
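
    A minimal sketch of the kind of quantity such a reliability DB stores: a constant failure rate estimated from a failure count and cumulative operating time, with a chi-square upper confidence bound. The counts and hours below are invented for illustration, not Korean-plant data.

```python
import numpy as np
from scipy.stats import chi2

def failure_rate_estimate(n_failures, exposure_hours, conf=0.95):
    """Point estimate and upper confidence bound for a constant failure rate.
    Uses the standard chi-square bound for Poisson-distributed failure counts."""
    lam = n_failures / exposure_hours
    upper = chi2.ppf(conf, 2 * n_failures + 2) / (2 * exposure_hours)
    return lam, upper

# hypothetical component: 3 failures over 100,000 component-hours
lam, upper = failure_rate_estimate(3, 1.0e5)
```

    A generic-data comparison then amounts to checking whether the foreign-plant rate falls inside such plant-specific bounds.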

  2. Scalable group level probabilistic sparse factor analysis

    DEFF Research Database (Denmark)

    Hinrich, Jesper Løve; Nielsen, Søren Føns Vind; Riis, Nicolai Andre Brogaard

    2017-01-01

    Many data-driven approaches exist to extract neural representations of functional magnetic resonance imaging (fMRI) data, but most of them lack a proper probabilistic formulation. We propose a scalable group level probabilistic sparse factor analysis (psFA) allowing spatially sparse maps, component...... pruning using automatic relevance determination (ARD) and subject specific heteroscedastic spatial noise modeling. For task-based and resting state fMRI, we show that the sparsity constraint gives rise to components similar to those obtained by group independent component analysis. The noise modeling...... shows that noise is reduced in areas typically associated with activation by the experimental design. The psFA model identifies sparse components and the probabilistic setting provides a natural way to handle parameter uncertainties. The variational Bayesian framework easily extends to more complex...

  3. Principal Component Analysis of Body Measurements In Three ...

    African Journals Online (AJOL)

    This study was conducted to explore the relationship among body measurements in 3 strains of broiler chickens (Arbor Acre, Marshal and Ross) using principal component analysis, with a view to identifying those components that define body conformation in broilers. A total of 180 birds were used, 60 per strain.
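
    The analysis can be sketched with synthetic stand-ins for the 180 birds' measurements (the study's real data are not available): several correlated traits driven by one latent "body size" factor, with PCA expected to load them onto a dominant first component.

```python
import numpy as np

rng = np.random.default_rng(0)
# 180 birds; 5 hypothetical traits (weight, breast girth, shank length, ...)
# all driven by a shared latent size factor plus measurement noise
size = rng.normal(0, 1, 180)
X = np.column_stack([size + rng.normal(0, 0.3, 180) for _ in range(5)])

# PCA via SVD of the centered data matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)   # variance fraction per component
```

    With a single shared factor, the first component absorbs most of the variance and can be read as overall body conformation.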

  4. Tomato sorting using independent component analysis on spectral images

    NARCIS (Netherlands)

    Polder, G.; Heijden, van der G.W.A.M.; Young, I.T.

    2003-01-01

    Independent Component Analysis is one of the most widely used methods for blind source separation. In this paper we use this technique to estimate the most important compounds which play a role in the ripening of tomatoes. Spectral images of tomatoes were analyzed. Two main independent components

  5. Analysis of European Union Economy in Terms of GDP Components

    Directory of Open Access Journals (Sweden)

    Simona VINEREAN

    2013-12-01

    Full Text Available The impact of crises on national economies has been a subject of analysis and interest for a wide variety of research studies. Thus, starting from the GDP composition, the present research presents an analysis of the impact, on European economies at the EU level, of the events that followed the crisis of 2007–2008. Firstly, the research highlighted the existence of two groups of countries in the European Union in 2012, namely segments that were compiled in relation to the structure of the GDP’s components. In the second stage of the research, a factor analysis was performed on the resulting segments, which showed that the economies of cluster A are based more on personal consumption compared to the economies of cluster B, while in terms of government consumption the situation is reversed. Thus, between the two groups of countries, a different approach regarding the role of fiscal policy in the economy can be noted, with a greater emphasis on savings in cluster B. Moreover, besides the two groups of countries, Ireland and Luxembourg stood out because these two countries did not fit into either of the resulting segments; their economies are based, to a large extent, on the positive balance of the external balance.

  6. Key components of financial-analysis education for clinical nurses.

    Science.gov (United States)

    Lim, Ji Young; Noh, Wonjung

    2015-09-01

    In this study, we identified key components of financial-analysis education for clinical nurses. We used a literature review, focus group discussions, and a content validity index survey to develop the key components. First, a wide range of references were reviewed, and 55 financial-analysis education components were gathered. Second, two focus group discussions were performed; the participants were 11 nurses who had worked for more than 3 years in a hospital, and nine components were agreed upon. Third, 12 professionals, including professors, nurse executives, nurse managers, and an accountant, participated in the content validity index survey. Finally, six key components of financial-analysis education were selected. These key components were as follows: understanding the need for financial analysis, introduction to financial analysis, reading and implementing balance sheets, reading and implementing income statements, understanding the concepts of financial ratios, and interpretation and practice of financial ratio analysis. The results of this study will be used to develop an education program to increase financial-management competency among clinical nurses. © 2015 Wiley Publishing Asia Pty Ltd.
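
    The last two components cover financial ratios; a tiny illustration of the calculations involved (all figures below are made up, not from the study):

```python
# Two textbook financial ratios of the kind such a curriculum would teach.
# Balance-sheet and income-statement figures are invented for illustration.
current_assets = 120_000.0
current_liabilities = 80_000.0
net_income = 15_000.0
revenue = 250_000.0

current_ratio = current_assets / current_liabilities   # liquidity measure
profit_margin = net_income / revenue                   # profitability measure
```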

  7. Factor analysis of multivariate data

    Digital Repository Service at National Institute of Oceanography (India)

    Fernandes, A.A.; Mahadevan, R.

    A brief introduction to factor analysis is presented. A FORTRAN program, which can perform Q-mode and R-mode factor analysis and the singular value decomposition of a given data matrix, is presented in Appendix B. This computer program uses...

  8. Dynamic Modal Analysis of Vertical Machining Centre Components

    OpenAIRE

    Anayet U. Patwari; Waleed F. Faris; A. K. M. Nurul Amin; S. K. Loh

    2009-01-01

    The paper presents a systematic procedure and details of the use of experimental and analytical modal analysis techniques for the structural dynamic evaluation of a vertical machining centre. The main results deal with assessment of the mode shapes of the different components of the vertical machining centre. A simplified experimental modal analysis of the different components of the milling machine was carried out. The model of the different machine tool structures is made by design software...

  9. Variety of factors involved in selecting PV components

    International Nuclear Information System (INIS)

    Wiener, M.

    1996-01-01

    Although solar electricity has been used for over ten years in oil and gas applications, there still seems to be some confusion concerning the selection of components for solar electric systems. This paper reviews the design and selection of materials for solar arrays, batteries, and controls and cables. It also provides information on determining expected service life and overall system integration

  10. Extracting functional components of neural dynamics with Independent Component Analysis and inverse Current Source Density.

    Science.gov (United States)

    Lęski, Szymon; Kublik, Ewa; Swiejkowski, Daniel A; Wróbel, Andrzej; Wójcik, Daniel K

    2010-12-01

    Local field potentials have good temporal resolution but are blurred due to the slow spatial decay of the electric field. For simultaneous recordings on regular grids one can efficiently reconstruct the current sources (CSD) using the inverse Current Source Density method (iCSD). It is possible to decompose the resultant spatiotemporal information about the current dynamics into functional components using Independent Component Analysis (ICA). We show, on test data modeling recordings of evoked potentials on a grid of 4 × 5 × 7 points, that meaningful results are obtained with spatial ICA decomposition of reconstructed CSD. The components obtained through decomposition of CSD are better defined and allow easier physiological interpretation than the results of similar analysis of the corresponding evoked potentials in the thalamus. We show that spatiotemporal ICA decompositions can perform better for certain types of sources, but this does not seem to be the case for the experimental data studied. Having found the appropriate approach to decomposing neural dynamics into functional components, we use the technique to study the somatosensory evoked potentials recorded on a grid spanning a large part of the forebrain. We discuss two example components associated with the first waves of activation of the somatosensory thalamus. We show that the proposed method brings up new, more detailed information on the time and spatial location of specific activity conveyed through various parts of the somatosensory thalamus in the rat.

  11. A Principal Component Analysis of 39 Scientific Impact Measures

    Science.gov (United States)

    Bollen, Johan; Van de Sompel, Herbert

    2009-01-01

    Background The impact of scientific publications has traditionally been expressed in terms of citation counts. However, scientific activity has moved online over the past decade. To better capture scientific impact in the digital era, a variety of new impact measures has been proposed on the basis of social network analysis and usage log data. Here we investigate how these new measures relate to each other, and how accurately and completely they express scientific impact. Methodology We performed a principal component analysis of the rankings produced by 39 existing and proposed measures of scholarly impact that were calculated on the basis of both citation and usage log data. Conclusions Our results indicate that the notion of scientific impact is a multi-dimensional construct that can not be adequately measured by any single indicator, although some measures are more suitable than others. The commonly used citation Impact Factor is not positioned at the core of this construct, but at its periphery, and should thus be used with caution. PMID:19562078

  12. A principal component analysis of 39 scientific impact measures.

    Directory of Open Access Journals (Sweden)

    Johan Bollen

    Full Text Available BACKGROUND: The impact of scientific publications has traditionally been expressed in terms of citation counts. However, scientific activity has moved online over the past decade. To better capture scientific impact in the digital era, a variety of new impact measures has been proposed on the basis of social network analysis and usage log data. Here we investigate how these new measures relate to each other, and how accurately and completely they express scientific impact. METHODOLOGY: We performed a principal component analysis of the rankings produced by 39 existing and proposed measures of scholarly impact that were calculated on the basis of both citation and usage log data. CONCLUSIONS: Our results indicate that the notion of scientific impact is a multi-dimensional construct that can not be adequately measured by any single indicator, although some measures are more suitable than others. The commonly used citation Impact Factor is not positioned at the core of this construct, but at its periphery, and should thus be used with caution.

  13. A survival analysis on critical components of nuclear power plants

    International Nuclear Information System (INIS)

    Durbec, V.; Pitner, P.; Riffard, T.

    1995-06-01

    Some tubes of heat exchangers of nuclear power plants may be affected by Primary Water Stress Corrosion Cracking (PWSCC) in highly stressed areas. These defects can shorten the lifetime of the component and lead to its replacement. In order to reduce the risk of cracking, a preventive remedial operation called shot peening was applied on the French reactors between 1985 and 1988. To assess and investigate the effects of shot peening, a statistical analysis was carried out on the tube degradation results obtained from in-service inspections that are regularly conducted using non-destructive tests. The statistical method used is based on the Cox proportional hazards model, a powerful tool in the analysis of survival data, implemented in PROC PHREG, recently available in SAS/STAT. This technique has a number of major advantages, including the ability to deal with censored failure-time data and with the complication of time-dependent covariables. The paper focuses on the modelling and a presentation of the results given by SAS. They provide estimates of how the relative risk of degradation changes after peening and indicate for which values of the prognostic factors analyzed the treatment is likely to be most beneficial. (authors). 2 refs., 3 figs., 6 tabs
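
    The core of the Cox model is the partial likelihood, which the SAS procedure maximizes internally. A self-contained toy version for a single binary covariate (shot-peened or not) can be sketched in a few lines; the data are simulated, not the EDF inspection results, and the maximization is a simple grid search rather than Newton's method.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
peened = rng.integers(0, 2, n)            # 1 = hypothetical shot-peened tube
beta_true = -1.0                          # peening multiplies hazard by exp(-1)
times = rng.exponential(1.0 / np.exp(beta_true * peened))
events = np.ones(n, dtype=bool)           # no censoring in this toy example

def cox_partial_loglik(beta, t, x, d):
    """Breslow partial log-likelihood for one covariate (no tie handling)."""
    order = np.argsort(t)
    t, x, d = t[order], x[order], d[order]
    ll = 0.0
    for i in range(len(t)):
        if d[i]:                          # event: compare against the risk set
            ll += beta * x[i] - np.log(np.exp(beta * x[i:]).sum())
    return ll

grid = np.linspace(-3.0, 1.0, 401)
beta_hat = grid[np.argmax([cox_partial_loglik(b, times, peened, events)
                           for b in grid])]
```

    A negative estimated coefficient corresponds to the relative-risk reduction after peening that the paper reports.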

  14. Analytical static structure factor for a two-component system ...

    Indian Academy of Sciences (India)

    Marwan Al-Raeei

    2018-03-29

    Mar 29, 2018 ... be useful in studying biomolecular fluids and other soft matter fluids. Keywords: Ornstein–Zernike ...; partial structure factor; isothermal compressibility; soft matter. PACS No. 05.20.Jj.

  15. System diagnostics using qualitative analysis and component functional classification

    International Nuclear Information System (INIS)

    Reifman, J.; Wei, T.Y.C.

    1993-01-01

    A method for detecting and identifying faulty component candidates during off-normal operations of nuclear power plants involves the qualitative analysis of macroscopic imbalances in the conservation equations of mass, energy and momentum in thermal-hydraulic control volumes associated with one or more plant components, together with the functional classification of components. The qualitative analysis of mass and energy is performed through the associated equations of state, while imbalances in momentum are obtained by tracking mass flow rates; these are incorporated into a first knowledge base. The plant components are functionally classified, according to their type, as sources or sinks of mass, energy and momentum, depending upon which of the three balance equations is most strongly affected by a faulty component; this classification is incorporated into a second knowledge base. Information describing the connections among the components of the system forms a third knowledge base. The method is particularly adapted for use in a diagnostic expert system to detect and identify faulty component candidates in the presence of component failures and is not limited to use in a nuclear power plant, but may be used with virtually any type of thermal-hydraulic operating system. 5 figures
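
    The macroscopic-imbalance idea reduces, for a single control volume, to checking a conservation residual against instrument tolerance. A minimal sketch, with invented flow names and numbers (not from the paper):

```python
# Toy mass balance over one thermal-hydraulic control volume:
# a residual beyond instrument tolerance flags a faulty component candidate.
inflow = {"feed_pump": 50.0}            # kg/s, hypothetical source
outflow = {"turbine_line": 46.0}        # kg/s, hypothetical sink
d_inventory_dt = 0.5                    # kg/s, measured change in stored mass

residual = sum(inflow.values()) - sum(outflow.values()) - d_inventory_dt
suspect_fault = residual > 1.0          # tolerance threshold, illustrative
```

    The actual method performs this kind of qualitative check for mass, energy and momentum simultaneously and then uses the source/sink classification to narrow down which component is at fault.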

  16. Multistage principal component analysis based method for abdominal ECG decomposition

    International Nuclear Information System (INIS)

    Petrolis, Robertas; Krisciukaitis, Algimantas; Gintautas, Vladas

    2015-01-01

    A reflection of fetal heart electrical activity is present in registered abdominal ECG signals. However, this signal component has noticeably less energy than concurrent signals, especially the maternal ECG, so the traditionally recommended independent component analysis fails to separate the two ECG signals. Multistage principal component analysis (PCA) is proposed for step-by-step extraction of abdominal ECG signal components. Truncated representation and subsequent subtraction of the cardio cycles of the maternal ECG are the first steps. The energy of the fetal ECG component then becomes comparable to, or even exceeds, the energy of the other components in the remaining signal. Second-stage PCA concentrates the energy of the sought signal in one principal component, assuring its maximal amplitude regardless of the orientation of the fetus in multilead recordings. Third-stage PCA is performed on signal excerpts representing detected fetal heart beats, in order to obtain a truncated representation reconstructing their shape for further analysis. The algorithm was tested with PhysioNet Challenge 2013 signals and signals recorded in the Department of Obstetrics and Gynecology, Lithuanian University of Health Sciences. The results of our method on the PhysioNet Challenge 2013 open data set were: average scores of 341.503 bpm² and 32.81 ms. (paper)
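
    The first stage, truncated representation and subtraction of the maternal cycles, can be sketched on stylized data (a Gaussian "QRS-like" peak with amplitude variation, not real ECG): stack aligned cycles, keep a rank-1 PCA reconstruction, subtract.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 200)
maternal = np.exp(-((t - 0.5) ** 2) / 0.002)      # stylized QRS-like peak
# 50 aligned cardio cycles with amplitude jitter and additive noise
cycles = np.array([maternal * rng.normal(1.0, 0.05) + rng.normal(0, 0.02, 200)
                   for _ in range(50)])

# Stage 1: truncated (rank-1) PCA representation of the maternal cycles
mean = cycles.mean(axis=0)
U, s, Vt = np.linalg.svd(cycles - mean, full_matrices=False)
rank1 = mean + (U[:, :1] * s[:1]) @ Vt[:1]
residual = cycles - rank1                          # maternal ECG removed
```

    On real abdominal recordings the residual is where the fetal component becomes comparable in energy, ready for the second-stage PCA.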

  17. 7 CFR 1000.50 - Class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    Class prices per hundredweight of milk containing 3.5 percent butterfat, component prices, and advanced pricing factors shall be as follows. The prices and pricing factors described...

  18. Sparse Principal Component Analysis in Medical Shape Modeling

    DEFF Research Database (Denmark)

    Sjöstrand, Karl; Stegmann, Mikkel Bille; Larsen, Rasmus

    2006-01-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims...... analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of sufficiently small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA...

  19. Efficacy of the Principal Components Analysis Techniques Using ...

    African Journals Online (AJOL)

    Second, the paper reports results of principal components analysis after the artificial data were submitted to three commonly used procedures; scree plot, Kaiser rule, and modified Horn's parallel analysis, and demonstrate the pedagogical utility of using artificial data in teaching advanced quantitative concepts. The results ...
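
    Of the three retention procedures, Horn's parallel analysis is the least familiar; a compact sketch on artificial data with exactly one common factor (the paper's own artificial data are not reproduced here):

```python
import numpy as np

def parallel_analysis(X, n_iter=200, seed=0):
    """Horn's parallel analysis: retain components whose correlation-matrix
    eigenvalues exceed the mean eigenvalues of same-sized random data."""
    rng = np.random.default_rng(seed)
    n, p = X.shape

    def eigvals(M):
        return np.sort(np.linalg.eigvalsh(np.corrcoef(M, rowvar=False)))[::-1]

    obs = eigvals(X)
    rand = np.mean([eigvals(rng.normal(size=(n, p))) for _ in range(n_iter)],
                   axis=0)
    return int(np.sum(obs > rand))

# artificial data: 6 variables sharing exactly one common factor
rng = np.random.default_rng(1)
factor = rng.normal(size=(300, 1))
X = factor @ np.ones((1, 6)) + rng.normal(0, 1.0, size=(300, 6))
n_keep = parallel_analysis(X)
```

    Unlike the Kaiser rule, which compares eigenvalues to the fixed cutoff 1, parallel analysis calibrates the cutoff to sampling noise for the given sample size.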

  20. Principal Component Clustering Approach to Teaching Quality Discriminant Analysis

    Science.gov (United States)

    Xian, Sidong; Xia, Haibo; Yin, Yubo; Zhai, Zhansheng; Shang, Yan

    2016-01-01

    Teaching quality is the lifeline of higher education, and many universities have made effective achievements in evaluating it. In this paper, we establish a Students' Evaluation of Teaching (SET) discriminant analysis model and algorithm based on principal component clustering analysis. Additionally, we classify the SET…

  1. Independent component analysis based filtering for penumbral imaging

    International Nuclear Information System (INIS)

    Chen Yenwei; Han Xianhua; Nozaki, Shinya

    2004-01-01

    We propose a filtering method based on independent component analysis (ICA) for Poisson noise reduction. In the proposed method, the image is first transformed to the ICA domain and the noise components are then removed by soft thresholding (shrinkage). The proposed filter, which is used as a preprocessing step for the reconstruction, has been successfully applied to penumbral imaging. Both simulation results and experimental results show that the reconstructed image is dramatically improved in comparison to that obtained without the noise-removing filter
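
    The shrinkage step itself is one line; a toy sketch on invented ICA-domain coefficients (a few strong source terms in small noise), not the paper's penumbral images:

```python
import numpy as np

def soft_threshold(coeffs, thr):
    """Shrinkage used after transforming to the ICA domain: coefficients
    smaller than thr (mostly noise) are zeroed, larger ones shrunk by thr."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)

# toy ICA-domain coefficients: sparse "signal" terms in dense small noise
rng = np.random.default_rng(7)
coeffs = rng.normal(0, 0.2, 100)
coeffs[::10] += 5.0                       # 10 strong source components
denoised = soft_threshold(coeffs, 0.6)    # threshold at 3x the noise sd
```

    After shrinkage the coefficients are transformed back to the image domain before reconstruction.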

  2. Numerical analysis of magnetoelastic coupled buckling of fusion reactor components

    International Nuclear Information System (INIS)

    Demachi, K.; Yoshida, Y.; Miya, K.

    1994-01-01

    For a tokamak fusion reactor, it is one of the most important subjects to establish a structural design in which the components can withstand the strong magnetic forces induced by plasma disruption. A number of magnetostructural analyses of fusion reactor components have been done recently. However, in these studies the structural behavior was calculated based on small-deformation theory, where nonlinearity was neglected. It is known, however, that some kinds of structures easily exhibit geometrical nonlinearity. In this paper, the deflection and the magnetoelastic buckling load of fusion reactor components during plasma disruption were calculated

  3. Computer compensation for NMR quantitative analysis of trace components

    International Nuclear Information System (INIS)

    Nakayama, T.; Fujiwara, Y.

    1981-01-01

    A computer program has been written that determines trace components and separates overlapping components in multicomponent NMR spectra. This program uses the Lorentzian curve as a theoretical curve for the NMR spectra. The coefficients of the Lorentzian are determined by the method of least squares. Systematic errors such as baseline/phase distortion are compensated and random errors are smoothed by taking moving averages, so that these processes contribute substantially to decreasing the accumulation time of spectral data. The accuracy of quantitative analysis of trace components has been improved by two significant figures. This program was applied to determining the abundance of 13C and the saponification degree of PVA
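
    The least-squares fit of a Lorentzian line shape can be sketched in a few lines (a toy single-peak spectrum with synthetic noise, not the original FORTRAN program or its data):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, x0, gamma):
    """Lorentzian line shape used as the theoretical NMR peak model."""
    return amp * gamma**2 / ((x - x0) ** 2 + gamma**2)

# synthetic single-peak spectrum: amplitude 1.0, centre 0.3, half-width 0.5
rng = np.random.default_rng(3)
x = np.linspace(-5, 5, 500)
y = lorentzian(x, 1.0, 0.3, 0.5) + rng.normal(0, 0.01, x.size)

popt, _ = curve_fit(lorentzian, x, y, p0=[0.8, 0.0, 1.0])
```

    Overlapping components are handled the same way by fitting a sum of such Lorentzians, with the peak areas giving the quantitative abundances.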

  4. Multi-component separation and analysis of bat echolocation calls.

    Science.gov (United States)

    DiCecco, John; Gaudette, Jason E; Simmons, James A

    2013-01-01

    The vast majority of animal vocalizations contain multiple frequency modulated (FM) components with varying amounts of non-linear modulation and harmonic instability. This is especially true of biosonar sounds where precise time-frequency templates are essential for neural information processing of echoes. Understanding the dynamic waveform design by bats and other echolocating animals may help to improve the efficacy of man-made sonar through biomimetic design. Bats are known to adapt their call structure based on the echolocation task, proximity to nearby objects, and density of acoustic clutter. To interpret the significance of these changes, a method was developed for component separation and analysis of biosonar waveforms. Techniques for imaging in the time-frequency plane are typically limited due to the uncertainty principle and interference cross terms. This problem is addressed by extending the use of the fractional Fourier transform to isolate each non-linear component for separate analysis. Once separated, empirical mode decomposition can be used to further examine each component. The Hilbert transform may then successfully extract detailed time-frequency information from each isolated component. This multi-component analysis method is applied to the sonar signals of four species of bats recorded in-flight by radiotelemetry along with a comparison of other common time-frequency representations.
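
    The final extraction step, the Hilbert transform pulling instantaneous frequency from an isolated FM component, can be sketched on a synthetic bat-like downward chirp (the parameters are invented, not recorded calls):

```python
import numpy as np
from scipy.signal import hilbert

fs = 10000.0
t = np.arange(0, 0.01, 1 / fs)            # 10 ms toy FM sweep
f0, f1 = 4000.0, 2000.0                   # downward chirp, bat-like
phase = 2 * np.pi * (f0 * t + (f1 - f0) / (2 * t[-1]) * t**2)
sig = np.cos(phase)

# analytic signal -> unwrapped phase -> instantaneous frequency
analytic = hilbert(sig)
inst_phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)
```

    In the paper's pipeline this step is applied per component after fractional-Fourier separation and empirical mode decomposition, so each track is free of cross-term interference.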

  5. Principal Components Analysis of Job Burnout and Coping ...

    African Journals Online (AJOL)

    The key component structure of job burnout comprised feelings of disgust, insomnia, headaches, weight loss or gain, feelings of omniscience, pain of unexplained origin, hopelessness, agitation and workaholism, while the factor structure of coping strategies comprised developing a realistic self-picture, retaining hope, asking for help ...

  6. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    Science.gov (United States)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems in the static as well as the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses an approach which is a wavelet decomposition based principal component analysis for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. The term face recognition stands for identifying a person from his facial gestures; it resembles factor analysis in some sense, i.e. extraction of the principal components of an image. Principal component analysis is subject to some drawbacks, mainly poor discriminatory power and, in particular, the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial gestures in the space and time domains, where frequency and time are used interchangeably. From the experimental results, it is envisaged that this face recognition method has achieved a significant percentage improvement in recognition rate as well as better computational efficiency.
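
    The combination can be sketched end-to-end with a hand-rolled one-level Haar transform (to stay self-contained) feeding an eigenface-style PCA; the "faces" below are random toy images, not a face database, and the Haar level/component counts are illustrative choices.

```python
import numpy as np

def haar_level1(img):
    """One level of a 2-D Haar wavelet transform; only the LL
    (approximation) band is kept as the reduced feature image."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0        # rows: pairwise average
    return (a[0::2, :] + a[1::2, :]) / 2.0         # cols: pairwise average

rng = np.random.default_rng(4)
faces = rng.normal(size=(20, 16, 16))              # toy stand-ins for images
feats = np.array([haar_level1(f).ravel() for f in faces])   # 20 x 64

# PCA ("eigenfaces") on the wavelet approximation coefficients
feats = feats - feats.mean(axis=0)
U, s, Vt = np.linalg.svd(feats, full_matrices=False)
proj = feats @ Vt[:5].T                            # 5-D face signature
```

    The wavelet step quarters the dimensionality before the SVD, which is exactly where the claimed reduction in eigenvector computation comes from.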

  7. Condition monitoring with Mean field independent components analysis

    DEFF Research Database (Denmark)

    Pontoppidan, Niels Henrik; Sigurdsson, Sigurdur; Larsen, Jan

    2005-01-01

    We discuss condition monitoring based on mean field independent components analysis of acoustic emission energy signals. Within this framework it is possible to formulate a generative model that explains the sources, their mixing and also the noise statistics of the observed signals. By using...... a novelty approach we may detect unseen faulty signals as indeed faulty with high precision, even though the model learns only from normal signals. This is done by evaluating the likelihood that the model generated the signals and adapting a simple threshold for decision. Acoustic emission energy signals...... from a large diesel engine is used to demonstrate this approach. The results show that mean field independent components analysis gives a better detection of fault compared to principal components analysis, while at the same time selecting a more compact model...

  8. Independent component analysis for automatic note extraction from musical trills

    Science.gov (United States)

    Brown, Judith C.; Smaragdis, Paris

    2004-05-01

    The method of principal component analysis, which is based on second-order statistics (or linear independence), has long been used for redundancy reduction of audio data. The more recent technique of independent component analysis, enforcing much stricter statistical criteria based on higher-order statistical independence, is introduced and shown to be far superior in separating independent musical sources. This theory has been applied to piano trills and a database of trill rates was assembled from experiments with a computer-driven piano, recordings of a professional pianist, and commercially available compact disks. The method of independent component analysis has thus been shown to be an outstanding, effective means of automatically extracting interesting musical information from a sea of redundant data.

  9. Signal-dependent independent component analysis by tunable mother wavelets

    International Nuclear Information System (INIS)

    Seo, Kyung Ho

    2006-02-01

    The objective of this study is to improve standard independent component analysis when applied to real-world signals. Independent component analysis starts from the assumption that signals from different physical sources are statistically independent. But real-world signals such as EEG, ECG, MEG, and fMRI signals are not perfectly statistically independent. By definition, standard independent component analysis algorithms are not able to estimate statistically dependent sources, that is, when the assumption of independence does not hold. Therefore some preprocessing stage is needed before independent component analysis. This paper started from the simple intuition that source signals wavelet-transformed by a 'well-tuned' mother wavelet will be simplified sufficiently that the source separation will show better results. The tuning process between the source signal and a tunable mother wavelet was executed by the correlation coefficient method. The gamma component of a raw EEG signal was set as the target signal, and the wavelet transform was executed with the tuned mother wavelet and standard mother wavelets. Simulation results for these wavelets were shown
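
    The correlation-coefficient tuning step can be sketched with a real-valued Morlet-like wavelet whose centre frequency is swept to maximize correlation with a target burst; the target here is a synthetic stand-in for a gamma-band segment, not the study's EEG data.

```python
import numpy as np

def morlet(t, w=5.0):
    """Real-valued Morlet-like mother wavelet with tunable centre frequency w."""
    return np.exp(-t**2 / 2) * np.cos(w * t)

rng = np.random.default_rng(6)
t = np.linspace(-4, 4, 400)
# toy "gamma-band" target: a w=7 burst plus noise
source = morlet(t, w=7.0) + rng.normal(0, 0.1, t.size)

# tune w by maximizing the correlation coefficient with the target signal
ws = np.linspace(3, 12, 91)
corrs = [np.corrcoef(source, morlet(t, w))[0, 1] for w in ws]
w_best = ws[int(np.argmax(corrs))]
```

    The transform built from the best-matching mother wavelet is then used as the preprocessing stage ahead of ICA.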

  10. Soluble vascular endothelial growth factor in various blood transfusion components

    DEFF Research Database (Denmark)

    Nielsen, Hans Jørgen; Werther, K; Mynster, T

    1999-01-01

    of sVEGF was determined in nonfiltered and prestorage white cell-reduced whole blood (WB), buffy coat-depleted saline-adenine-glucose-mannitol (SAGM) blood, platelet-rich plasma (PRP), and buffy coat-derived platelet (BCP) pools obtained from volunteer, healthy blood donors. As a control, total content......-123) ng per mL in lysed cells. In SAGM blood, the median total sVEGF content was 25.3 (3.3-48.4) ng per unit in nonfiltered units and undetectable in white cell-reduced units. Median total sVEGF content was 29.2 (24.8-124.9) ng per unit in nonfiltered PRP and 28.7 (24.5-118.6) ng per unit in white cell......-reduced PRP. The sVEGF accumulated significantly in WB, SAGM blood, and BCP pools, depending on the storage time. CONCLUSION: The sVEGF (isotype 165) appears to be present in various blood transfusion components, depending on storage time....

  11. Automatic ECG analysis using principal component analysis and wavelet transformation

    OpenAIRE

    Khawaja, Antoun

    2007-01-01

    The main objective of this book is to analyse and detect small changes in ECG waves and complexes that indicate cardiac diseases and disorders. Detecting predisposition to Torsade de Pointes (TDP) by analysing the beat-to-beat variability in T-wave morphology is the main core of this work. The second main topic is detecting small changes in the QRS complex and predicting future QRS complexes of patients. Moreover, the last main topic is clustering similar ECG components into different groups.

  12. Comparison of common components analysis with principal components analysis and independent components analysis: Application to SPME-GC-MS volatolomic signatures.

    Science.gov (United States)

    Bouhlel, Jihéne; Jouan-Rimbaud Bouveresse, Delphine; Abouelkaram, Said; Baéza, Elisabeth; Jondreville, Catherine; Travel, Angélique; Ratel, Jérémy; Engel, Erwan; Rutledge, Douglas N

    2018-02-01

    The aim of this work is to compare a novel exploratory chemometrics method, Common Components Analysis (CCA), with Principal Components Analysis (PCA) and Independent Components Analysis (ICA). CCA consists in adapting the multi-block statistical method known as Common Components and Specific Weights Analysis (CCSWA or ComDim) by applying it to a single data matrix, with one variable per block. As an application, the three methods were applied to SPME-GC-MS volatolomic signatures of livers in an attempt to reveal volatile organic compounds (VOCs) markers of chicken exposure to different types of micropollutants. An application of CCA to the initial SPME-GC-MS data revealed a drift in the sample Scores along CC2, as a function of injection order, probably resulting from time-related evolution in the instrument. This drift was eliminated by orthogonalization of the data set with respect to CC2, and the resulting data are used as the orthogonalized data input into each of the three methods. Since the first step in CCA is to norm-scale all the variables, preliminary data scaling has no effect on the results, so that CCA was applied only to orthogonalized SPME-GC-MS data, while, PCA and ICA were applied to the "orthogonalized", "orthogonalized and Pareto-scaled", and "orthogonalized and autoscaled" data. The comparison showed that PCA results were highly dependent on the scaling of variables, contrary to ICA where the data scaling did not have a strong influence. Nevertheless, for both PCA and ICA the clearest separations of exposed groups were obtained after autoscaling of variables. The main part of this work was to compare the CCA results using the orthogonalized data with those obtained with PCA and ICA applied to orthogonalized and autoscaled variables. The clearest separations of exposed chicken groups were obtained by CCA. CCA Loadings also clearly identified the variables contributing most to the Common Components giving separations. The PCA Loadings did not
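
    The scaling sensitivity that the comparison turns on is easy to demonstrate: with one high-variance but uninformative variable, raw PCA lets that variable dominate the first component, while autoscaling removes the bias. The data below are synthetic stand-ins, not the SPME-GC-MS signatures.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100
small = rng.normal(0, 1, (n, 3))       # low-variance variables (toy "VOCs")
big = rng.normal(0, 100, (n, 1))       # one high-variance, uninformative one
X = np.hstack([small, big])

def pc1_loadings(M):
    """Absolute loadings of the first principal component of centered M."""
    M = M - M.mean(axis=0)
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    return np.abs(Vt[0])

raw = pc1_loadings(X)                              # PC1 chases the big variable
auto = pc1_loadings((X - X.mean(0)) / X.std(0))    # autoscaling removes the bias
```

    This is why PCA results in the study depended strongly on variable scaling, whereas CCA's built-in norm-scaling makes any preliminary scaling irrelevant.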

  13. First course in factor analysis

    CERN Document Server

    Comrey, Andrew L

    2013-01-01

    The goal of this book is to foster a basic understanding of factor analytic techniques so that readers can use them in their own research and critically evaluate their use by other researchers. Both the underlying theory and correct application are emphasized. The theory is presented through the mathematical basis of the most common factor analytic models and several methods used in factor analysis. On the application side, considerable attention is given to the extraction problem, the rotation problem, and the interpretation of factor analytic results. Hence, readers are given a background of

  14. Fatigue Reliability Analysis of Wind Turbine Cast Components

    DEFF Research Database (Denmark)

    Rafsanjani, Hesam Mirzaei; Sørensen, John Dalsgaard; Fæster, Søren

    2017-01-01

    .) and to quantify the relevant uncertainties using available fatigue tests. Illustrative results are presented as obtained by statistical analysis of a large set of fatigue data for casted test components typically used for wind turbines. Furthermore, the SN curves (fatigue life curves based on applied stress......The fatigue life of wind turbine cast components, such as the main shaft in a drivetrain, is generally determined by defects from the casting process. These defects may reduce the fatigue life and they are generally distributed randomly in components. The foundries, cutting facilities and test...... facilities can affect the verification of properties by testing. Hence, it is important to have a tool to identify which foundry, cutting and/or test facility produces components which, based on the relevant uncertainties, have the largest expected fatigue life or, alternatively, have the largest reliability...
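
The SN (stress-life) curves mentioned above are commonly fitted in log-log coordinates; a minimal sketch under an assumed Basquin-type law N = C·S^(−m), using synthetic noise-free data rather than the fatigue tests analysed in the paper:

```python
import numpy as np

# Synthetic S-N data following Basquin's law N = C * S**(-m)
S = np.array([100., 150., 200., 300.])   # stress amplitudes (MPa)
C_true, m_true = 1e12, 3.0
N = C_true * S**(-m_true)                # cycles to failure

# Linear fit in log-log space: log N = log C - m * log S
slope, logC = np.polyfit(np.log(S), np.log(N), 1)
m_est = -slope
print(round(m_est, 3))  # → 3.0
```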

  15. Chemistry of the aqueous medium - Determining factor of corrosion in carbon steel components of secondary circuit

    International Nuclear Information System (INIS)

    Radulescu, M.; Pirvan, I.; Dinu, A.; Velciu, L.

    2003-01-01

    The interplay of chemistry of aqueous medium and corrosion processes followed by deposition and/or release of corrosion products determines both formation and growth of superficial films as well as the kinetics of ion release from materials into the aqueous medium. Material corrosion in the secondary circuit of a NPP can be minimized by choosing the materials of the components and by a rigorous inspection of the chemistry of aqueous agent. The chemical inspection helps in minimizing: - the corrosion of the components immersed in feedwater and vapor and of Steam Generator components; - 'dirtying' of the systems particularly of the surfaces implied in heat transfer; - the amount of insoluble chemical species resulting in corrosion process and carried along the circuit; - the corrosion of secondary circuit components during revisions or outages. An important role among the chemical parameters of the fluids circulated in NPP tubing appears to be the pH. In CANDU reactors it must be kept within the range of 8.7 to 9.4 by treating the medium with volatile amines (morpholine and cyclohexylamine). A plot is presented giving the corrosion rate of carbon steels as a function of the pH of the medium. Besides, the oxygen concentration dissolved in the aqueous medium must be maintained under 5 μg per water kg. Other factors determining the corrosion rates are also discussed. The paper gives the results of the experiments done with various materials, solutions and analysis methods

  16. Independent component analysis in non-hypothesis driven metabolomics

    DEFF Research Database (Denmark)

    Li, Xiang; Hansen, Jakob; Zhao, Xinjie

    2012-01-01

    In a non-hypothesis driven metabolomics approach plasma samples collected at six different time points (before, during and after an exercise bout) were analyzed by gas chromatography-time of flight mass spectrometry (GC-TOF MS). Since independent component analysis (ICA) does not need a priori...... information on the investigated process and moreover can separate statistically independent source signals with non-Gaussian distribution, we aimed to elucidate the analytical power of ICA for the metabolic pattern analysis and the identification of key metabolites in this exercise study. A novel approach...... based on descriptive statistics was established to optimize ICA model. In the GC-TOF MS data set the number of principal components after whitening and the number of independent components of ICA were optimized and systematically selected by descriptive statistics. The elucidated dominating independent...
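
The whitening step that precedes ICA can be sketched as follows; this is generic PCA whitening of synthetic data, not the descriptive-statistics selection procedure of the study. After projecting onto k principal components and rescaling, the retained scores have identity covariance.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 8)) @ rng.normal(size=(8, 8))  # correlated variables
X -= X.mean(axis=0)

# PCA whitening: project onto principal components and scale to unit variance
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 5                                     # number of retained components
Z = U[:, :k] * np.sqrt(X.shape[0] - 1)    # whitened scores

cov = Z.T @ Z / (X.shape[0] - 1)
print(bool(np.allclose(cov, np.eye(k))))  # → True
```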

  17. Analysis methods for structure reliability of piping components

    International Nuclear Information System (INIS)

    Schimpfke, T.; Grebner, H.; Sievers, J.

    2004-01-01

    In the frame of the German reactor safety research program of the Federal Ministry of Economics and Labour (BMWA) GRS has started to develop an analysis code named PROST (PRObabilistic STructure analysis) for estimating the leak and break probabilities of piping systems in nuclear power plants. The long-term objective of this development is to provide failure probabilities of passive components for probabilistic safety analysis of nuclear power plants. Up to now the code can be used for calculating fatigue problems. The paper mentions the main capabilities and theoretical background of the present PROST development and presents some of the results of a benchmark analysis in the frame of the European project NURBIM (Nuclear Risk Based Inspection Methodologies for Passive Components). (orig.)

  18. 7 CFR 1131.53 - Announcement of class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Announcement of class prices, component prices, and advanced pricing factors. 1131.53 Section 1131.53 Agriculture Regulations of the Department of Agriculture... class prices, component prices, and advanced pricing factors. See § 1000.53. ...

  19. 7 CFR 1005.53 - Announcement of class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Announcement of class prices, component prices, and advanced pricing factors. 1005.53 Section 1005.53 Agriculture Regulations of the Department of Agriculture... class prices, component prices, and advanced pricing factors. See § 1000.53. ...

  20. 7 CFR 1124.53 - Announcement of class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Announcement of class prices, component prices, and advanced pricing factors. 1124.53 Section 1124.53 Agriculture Regulations of the Department of Agriculture... Announcement of class prices, component prices, and advanced pricing factors. See § 1000.53. ...

  1. 7 CFR 1126.53 - Announcement of class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Announcement of class prices, component prices, and advanced pricing factors. 1126.53 Section 1126.53 Agriculture Regulations of the Department of Agriculture... class prices, component prices, and advanced pricing factors. See § 1000.53. ...

  2. 7 CFR 1032.53 - Announcement of class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Announcement of class prices, component prices, and advanced pricing factors. 1032.53 Section 1032.53 Agriculture Regulations of the Department of Agriculture... class prices, component prices, and advanced pricing factors. See § 1000.53. ...

  3. 7 CFR 1030.53 - Announcement of class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Announcement of class prices, component prices, and advanced pricing factors. 1030.53 Section 1030.53 Agriculture Regulations of the Department of Agriculture... of class prices, component prices, and advanced pricing factors. See § 1000.53. ...

  4. 7 CFR 1033.53 - Announcement of class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Announcement of class prices, component prices, and advanced pricing factors. 1033.53 Section 1033.53 Agriculture Regulations of the Department of Agriculture... class prices, component prices, and advanced pricing factors. See § 1000.53. ...

  5. 7 CFR 1001.53 - Announcement of class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Announcement of class prices, component prices, and advanced pricing factors. 1001.53 Section 1001.53 Agriculture Regulations of the Department of Agriculture... class prices, component prices, and advanced pricing factors. See § 1000.53. ...

  6. 7 CFR 1007.53 - Announcement of class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Announcement of class prices, component prices, and advanced pricing factors. 1007.53 Section 1007.53 Agriculture Regulations of the Department of Agriculture... class prices, component prices, and advanced pricing factors. See § 1000.53. ...

  7. 7 CFR 1006.53 - Announcement of class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Announcement of class prices, component prices, and advanced pricing factors. 1006.53 Section 1006.53 Agriculture Regulations of the Department of Agriculture... class prices, component prices, and advanced pricing factors. See § 1000.53. ...

  8. WRKY transcription factors: key components in abscisic acid signalling.

    Science.gov (United States)

    Rushton, Deena L; Tripathi, Prateek; Rabara, Roel C; Lin, Jun; Ringler, Patricia; Boken, Ashley K; Langum, Tanner J; Smidt, Lucas; Boomsma, Darius D; Emme, Nicholas J; Chen, Xianfeng; Finer, John J; Shen, Qingxi J; Rushton, Paul J

    2012-01-01

    WRKY transcription factors (TFs) are key regulators of many plant processes, including the responses to biotic and abiotic stresses, senescence, seed dormancy and seed germination. For over 15 years, limited evidence has been available suggesting that WRKY TFs may play roles in regulating plant responses to the phytohormone abscisic acid (ABA), notably some WRKY TFs are ABA-inducible repressors of seed germination. However, the roles of WRKY TFs in other aspects of ABA signalling, and the mechanisms involved, have remained unclear. Recent significant progress in ABA research has now placed specific WRKY TFs firmly in ABA-responsive signalling pathways, where they act at multiple levels. In Arabidopsis, WRKY TFs appear to act downstream of at least two ABA receptors: the cytoplasmic PYR/PYL/RCAR-protein phosphatase 2C-ABA complex and the chloroplast envelope-located ABAR-ABA complex. In vivo and in vitro promoter-binding studies show that the target genes for WRKY TFs that are involved in ABA signalling include well-known ABA-responsive genes such as ABF2, ABF4, ABI4, ABI5, MYB2, DREB1a, DREB2a and RAB18. Additional well-characterized stress-inducible genes such as RD29A and COR47 are also found in signalling pathways downstream of WRKY TFs. These new insights also reveal that some WRKY TFs are positive regulators of ABA-mediated stomatal closure and hence drought responses. Conversely, many WRKY TFs are negative regulators of seed germination, and controlling seed germination appears a common function of a subset of WRKY TFs in flowering plants. Taken together, these new data demonstrate that WRKY TFs are key nodes in ABA-responsive signalling networks. © 2011 The Authors. Plant Biotechnology Journal © 2011 Society for Experimental Biology, Association of Applied Biologists and Blackwell Publishing Ltd.

  9. The analysis of multivariate group differences using common principal components

    NARCIS (Netherlands)

    Bechger, T.M.; Blanca, M.J.; Maris, G.

    2014-01-01

    Although it is simple to determine whether multivariate group differences are statistically significant or not, such differences are often difficult to interpret. This article is about common principal components analysis as a tool for the exploratory investigation of multivariate group differences

  10. Principal Component Analysis: Most Favourite Tool in Chemometrics

    Indian Academy of Sciences (India)

    Abstract. Principal component analysis (PCA) is the most commonly used chemometric technique. It is an unsupervised pattern recognition technique. PCA has found applications in chemistry, biology, medicine and economics. The present work attempts to understand how PCA works and how we can interpret its results.
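
A minimal illustration of how PCA works, on synthetic data: the covariance matrix is eigendecomposed, scores are obtained by projection, and the eigenvalues give the fraction of variance each component explains.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 4))
X = X @ np.array([[1, .8, 0, 0], [0, 1, .5, 0], [0, 0, 1, .3], [0, 0, 0, 1.]])
Xc = X - X.mean(axis=0)                      # centre the data

cov = np.cov(Xc, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)         # eigh returns ascending order
order = np.argsort(eigval)[::-1]             # sort descending
eigval, eigvec = eigval[order], eigvec[:, order]

scores = Xc @ eigvec                         # principal component scores
explained = eigval / eigval.sum()            # variance explained per component
print(bool(np.isclose(explained.sum(), 1.0)))  # → True
```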

  11. Scalable Robust Principal Component Analysis Using Grassmann Averages

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi

    2016-01-01

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortu...

  12. Reliability Analysis of Fatigue Fracture of Wind Turbine Drivetrain Components

    DEFF Research Database (Denmark)

    Berzonskis, Arvydas; Sørensen, John Dalsgaard

    2016-01-01

    in the volume of the casted ductile iron main shaft, on the reliability of the component. The probabilistic reliability analysis conducted is based on fracture mechanics models. Additionally, the utilization of the probabilistic reliability for operation and maintenance planning and quality control is discussed....

  13. Principal component analysis of image gradient orientations for face recognition

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    We introduce the notion of Principal Component Analysis (PCA) of image gradient orientations. As image data are typically noisy, and the noise is substantially different from Gaussian, traditional PCA of pixel intensities very often fails to reliably estimate the low-dimensional subspace of a given data

  14. Adaptive tools in virtual environments: Independent component analysis for multimedia

    DEFF Research Database (Denmark)

    Kolenda, Thomas

    2002-01-01

    The thesis investigates the role of independent component analysis in the setting of virtual environments, with the purpose of finding properties that reflect human context. A general framework for performing unsupervised classification with ICA is presented in extension to the latent semantic in...... were compared to investigate computational differences and separation results. The ICA properties were finally implemented in a chat room analysis tool and briefly investigated for visualization of search engines results....

  15. Lithuanian Population Aging Factors Analysis

    Directory of Open Access Journals (Sweden)

    Agnė Garlauskaitė

    2015-05-01

    Full Text Available The aim of this article is to identify the factors that determine the aging of Lithuania's population and to assess their influence. The analysis consists of two main parts: the first describes population aging and its characteristics in theoretical terms; the second assesses the trends and demographic factors that influence population aging and analyses the determinants of the aging of the population of Lithuania. The article concludes that the decline in the birth rate and the increase in the number of emigrants relative to immigrants have the greatest impact on population aging, so considerable attention should be paid to the management of these demographic processes.

  16. PEMBUATAN PERANGKAT LUNAK PENGENALAN WAJAH MENGGUNAKAN PRINCIPAL COMPONENTS ANALYSIS

    Directory of Open Access Journals (Sweden)

    Kartika Gunadi

    2001-01-01

    Full Text Available Face recognition is an important research area, and today many applications implement it. Through the development of techniques like Principal Components Analysis (PCA), computers can now outperform humans in many face recognition tasks, particularly those in which large databases of faces must be searched. Principal Components Analysis was used to reduce the facial image dimension into fewer variables, which are easier to observe and handle. Those variables were then fed into an artificial neural network using the backpropagation method to recognise the given facial image. The test results show that PCA can provide high face recognition accuracy. For the training faces, a correct identification of 100% could be obtained. From some of the network combinations that have been tested, a best average correct identification of 91.11% could be obtained for the test faces, while the worst average result is 46.67% correct identification.
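
The PCA (eigenface) reduction described above can be sketched as follows; for brevity the sketch classifies with a nearest-neighbour rule in the reduced space instead of the backpropagation network used in the paper, and the "faces" are synthetic 64-pixel vectors.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy "face" dataset: 2 subjects x 5 images of 64 pixels each (synthetic data)
faces = np.vstack([rng.normal(loc=i, scale=0.3, size=(5, 64)) for i in (0., 2.)])
labels = np.array([0] * 5 + [1] * 5)

mean_face = faces.mean(axis=0)
A = faces - mean_face
U, s, Vt = np.linalg.svd(A, full_matrices=False)
eigenfaces = Vt[:3]                 # keep 3 principal components
proj = A @ eigenfaces.T             # low-dimensional representation

# Classify a new image of subject 1 by nearest neighbour in eigenface space
test = rng.normal(loc=2., scale=0.3, size=64)
q = (test - mean_face) @ eigenfaces.T
pred = labels[np.argmin(np.linalg.norm(proj - q, axis=1))]
print(int(pred))  # → 1
```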

  17. Reliability analysis and component functional allocations for the ESF multi-loop controller design

    International Nuclear Information System (INIS)

    Hur, Seop; Kim, D.H.; Choi, J.K.; Park, J.C.; Seong, S.H.; Lee, D.Y.

    2006-01-01

    This paper deals with the reliability analysis and component functional allocations needed to ensure enhanced system reliability and availability. In the Engineered Safety Features, functionally dependent components are controlled by a multi-loop controller. The system reliability of the Engineered Safety Features-Component Control System, especially of the multi-loop controller, which is changed compared to conventional controllers, is an important factor for the Probabilistic Safety Assessment in the nuclear field. To evaluate the multi-loop controller's failure rate in the k-out-of-m redundant system, the binomial process is used. In addition, the component functional allocation is performed to tolerate a single multi-loop controller failure without the loss of vital operation within the constraints of the piping and component configuration, and to ensure that mechanically redundant components remain functional. (author)
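
The binomial evaluation of a k-out-of-m redundant system mentioned above can be sketched directly; the per-controller failure probability below is illustrative, not taken from the paper.

```python
from math import comb

def k_out_of_m_reliability(k, m, p_fail):
    """Probability that at least k of m identical controllers work,
    given per-controller failure probability p_fail (independent failures)."""
    p_work = 1.0 - p_fail
    return sum(comb(m, i) * p_work**i * p_fail**(m - i) for i in range(k, m + 1))

# 2-out-of-3 redundant multi-loop controllers, 1% failure probability each
r = k_out_of_m_reliability(2, 3, 0.01)
print(round(r, 6))  # → 0.999702
```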

  18. Thermal Analysis of Fermilab Mu2e Beamstop and Structural Analysis of Beamline Components

    Energy Technology Data Exchange (ETDEWEB)

    Narug, Colin S. [Northern Illinois U.

    2018-01-01

    The Mu2e project at Fermi National Accelerator Laboratory aims to observe the unique conversion of muons to electrons. The success or failure of the experiment to observe this conversion will further the understanding of the standard model of physics. Using the particle accelerator, protons will be accelerated and sent to the Mu2e experiment, which will separate the muons from the beam. The muons will then be observed to determine their momentum and the particle interactions that occur. At the end of the Detector Solenoid, the internal components will need to absorb the remaining particles of the experiment using polymer absorbers. Because the internal structure of the beamline is in a vacuum, the heat transfer mechanisms that can disperse the energy generated by the particle absorption are limited to conduction and radiation. To determine the extent to which the absorbers will heat up over one year of operation, a transient thermal finite element analysis has been performed on the Muon Beam Stop. The levels of energy absorption were adjusted to determine the thermal limit for the current design. Structural finite element analysis has also been performed to determine the safety factors of the Axial Coupler, which connects and moves segments of the beamline. The safety factor of the trunnion of the Instrument Feed Through Bulk Head has also been determined for when it is supporting the Muon Beam Stop. The results of the analysis further refine the design of the beamline components prior to testing, fabrication, and installation.

  19. Principal Component Analysis - A Powerful Tool in Computing Marketing Information

    Directory of Open Access Journals (Sweden)

    Constantin C.

    2014-12-01

    Full Text Available This paper presents instrumental research on a powerful multivariate data analysis method which can be used by researchers to obtain valuable information for decision makers who need to solve the marketing problems a company faces. The literature stresses the need to avoid the multicollinearity phenomenon in multivariate analysis and the features of Principal Component Analysis (PCA) in reducing the number of variables that could be correlated with each other to a small number of principal components that are uncorrelated. In this respect, the paper presents step-by-step the process of applying PCA in marketing research when we use a large number of variables that are naturally collinear.
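
The multicollinearity point can be demonstrated numerically: principal component scores of centred data are mutually uncorrelated by construction. The variables below are synthetic stand-ins for collinear marketing indicators.

```python
import numpy as np

rng = np.random.default_rng(4)
# Two highly collinear variables plus an independent one (synthetic data)
x1 = rng.normal(size=200)
x2 = 0.9 * x1 + rng.normal(scale=0.2, size=200)
x3 = rng.normal(size=200)
X = np.column_stack([x1, x2, x3])
Xc = X - X.mean(axis=0)

_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                       # principal component scores

corr = np.corrcoef(scores, rowvar=False)
off_diag = corr - np.diag(np.diag(corr))  # correlations between components
print(bool(np.allclose(off_diag, 0.0, atol=1e-8)))  # → True
```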

  20. Experimental modal analysis of components of the LHC experiments

    CERN Document Server

    Guinchard, M; Catinaccio, A; Kershaw, K; Onnela, A

    2007-01-01

    Experimental modal analysis of components of the LHC experiments is performed with the purpose of determining their fundamental frequencies, their damping and the mode shapes of light and fragile detector components. This process makes it possible to confirm or replace Finite Element analysis in the case of complex structures (with cables and substructure coupling). It helps solve structural mechanical problems to improve the operational stability and determine the acceleration specifications for transport operations. This paper describes the hardware and software equipment used to perform a modal analysis on particular structures such as a particle detector, and the method of curve fitting used to extract the results of the measurements. It also presents the main results obtained for the LHC experiments.
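
A standard ingredient of such modal testing is estimating damping from a free-decay response; a minimal log-decrement sketch on a simulated single-mode decay (illustrative parameters, not LHC measurements):

```python
import numpy as np

# Log-decrement damping estimate from a simulated free-decay response
fn, zeta = 12.0, 0.02                       # assumed natural frequency (Hz), damping ratio
wn = 2 * np.pi * fn
wd = wn * np.sqrt(1 - zeta**2)              # damped natural frequency
Td = 2 * np.pi / wd                         # damped period

decay = lambda t: np.exp(-zeta * wn * t) * np.cos(wd * t)
x0, x1 = decay(0.0), decay(Td)              # two successive peaks

delta = np.log(x0 / x1)                     # logarithmic decrement
zeta_est = delta / np.sqrt(4 * np.pi**2 + delta**2)
print(round(float(zeta_est), 4))  # → 0.02
```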

  1. Trimming of mammalian transcriptional networks using network component analysis

    Directory of Open Access Journals (Sweden)

    Liao James C

    2010-10-01

    Full Text Available Abstract Background Network Component Analysis (NCA) has been used to deduce the activities of transcription factors (TFs) from gene expression data and the TF-gene binding relationship. However, the TF-gene interaction varies across environmental conditions and tissues, and such information is rarely available and cannot be predicted simply by motif analysis. Thus, it is beneficial to identify key TF-gene interactions under the experimental condition based on transcriptome data. Such information would be useful in identifying key regulatory pathways and gene markers of TFs in further studies. Results We developed an algorithm to trim network connectivity such that the important regulatory interactions between the TFs and the genes were retained and the regulatory signals were deduced. Theoretical studies demonstrated that the regulatory signals were accurately reconstructed even in the case where only three independent transcriptome datasets were available. At least 80% of the main target genes were correctly predicted in the extreme condition of high noise level and small number of datasets. Our algorithm was tested with transcriptome data taken from mice under rapamycin treatment. The initial network topology from the literature contains 70 TFs, 778 genes, and 1423 edges between the TFs and genes. Our method retained 1074 edges (i.e. 75% of the original edge number) and identified 17 TFs as being significantly perturbed under the experimental condition. Twelve of these TFs are involved in MAPK signaling or myeloid leukemia pathways defined in the KEGG database, or are known to physically interact with each other. Additionally, four of these TFs, which are Hif1a, Cebpb, Nfkb1, and Atf1, are known targets of rapamycin. Furthermore, the trimmed network was able to predict Eno1 as an important target of Hif1a; this key interaction could not be detected without trimming the regulatory network. Conclusions The advantage of our new algorithm ...

  2. APPLICATION OF PRINCIPAL COMPONENT ANALYSIS TO RELAXOGRAPHIC IMAGES

    International Nuclear Information System (INIS)

    STOYANOVA, R.S.; OCHS, M.F.; BROWN, T.R.; ROONEY, W.D.; LI, X.; LEE, J.H.; SPRINGER, C.S.

    1999-01-01

    Standard analysis methods for processing inversion recovery MR images have traditionally used single-pixel techniques. In these techniques each pixel is independently fit to an exponential recovery, and spatial correlations in the data set are ignored. By analyzing the image as a complete dataset, improved error analysis and automatic segmentation can be achieved. Here, the authors apply principal component analysis (PCA) to a series of relaxographic images. This procedure decomposes the 3-dimensional data set into three separate images and corresponding recovery times. They attribute the three images to the spatial representations of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) content.
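
The idea of analysing the image series as one dataset can be sketched with synthetic data: if the pixel-by-inversion-time matrix is generated by three tissue recovery curves, its singular value spectrum reveals exactly three significant components.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic inversion-recovery series: 3 "tissues" with distinct T1 recovery curves
TI = np.linspace(0.05, 3.0, 12)                      # inversion times (s)
T1 = {'GM': 1.2, 'WM': 0.8, 'CSF': 3.0}              # assumed T1 values (s)
curves = np.array([1 - 2 * np.exp(-TI / t1) for t1 in T1.values()])

maps = rng.uniform(size=(3, 400))                    # spatial maps over 400 pixels
data = maps.T @ curves                               # pixels x inversion-times

s = np.linalg.svd(data, compute_uv=False)
n_sig = int(np.sum(s > 1e-8 * s[0]))                 # numerically significant components
print(n_sig)  # → 3
```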

  3. Sparse logistic principal components analysis for binary data

    KAUST Repository

    Lee, Seokho

    2010-09-01

    We develop a new principal components analysis (PCA) type dimension reduction method for binary data. Different from the standard PCA, which is defined on the observed data, the proposed PCA is defined on the logit transform of the success probabilities of the binary observations. Sparsity is introduced to the principal component (PC) loading vectors for enhanced interpretability and more stable extraction of the principal components. Our sparse PCA is formulated as solving an optimization problem with a criterion function motivated from a penalized Bernoulli likelihood. A Majorization-Minimization algorithm is developed to efficiently solve the optimization problem. The effectiveness of the proposed sparse logistic PCA method is illustrated by application to a single nucleotide polymorphism data set and a simulation study. © Institute of Mathematical Statistics, 2010.

  4. Components of Program for Analysis of Spectra and Their Testing

    Directory of Open Access Journals (Sweden)

    Ivan Taufer

    2013-11-01

    Full Text Available The spectral analysis of aqueous solutions of multi-component mixtures is used for identification and distinguishing of individual components in the mixture and subsequent determination of protonation constants and absorptivities of differently protonated particles in the solution in steady state (Meloun and Havel 1985), (Leggett 1985). Apart from that, also determined are the distribution diagrams, i.e. concentration proportions of the individual components at different pH values. The spectra are measured with various concentrations of the basic components (one or several polyvalent weak acids or bases) and various pH values within the chosen range of wavelengths. The obtained absorbance response area has to be analyzed by non-linear regression using specialized algorithms. These algorithms have to meet certain requirements concerning the possibility of calculations and the level of outputs. A typical example is the SQUAD(84) program, which was gradually modified and extended, see, e.g., (Meloun et al. 1986), (Meloun et al. 2012).

  5. Exploring Technostress: Results of a Large Sample Factor Analysis

    OpenAIRE

    Jonušauskas, Steponas; Raišienė, Agota Giedrė

    2016-01-01

    With reference to the results of a large sample factor analysis, the article aims to propose a frame for examining technostress in a population. A survey and principal component analysis of a sample consisting of 1013 individuals who use ICT in their everyday work were implemented in the research. 13 factors combine 68 questions and explain 59.13 per cent of the answer dispersion. Based on the factor analysis, the questionnaire was reframed and prepared to reasonably analyze the respondents' an...

  6. Factor Analysis for Clustered Observations.

    Science.gov (United States)

    Longford, N. T.; Muthen, B. O.

    1992-01-01

    A two-level model for factor analysis is defined, and formulas for a scoring algorithm for this model are derived. A simple noniterative method based on decomposition of total sums of the squares and cross-products is discussed and illustrated with simulated data and data from the Second International Mathematics Study. (SLD)

  7. Transforming Rubrics Using Factor Analysis

    Science.gov (United States)

    Baryla, Ed; Shelley, Gary; Trainor, William

    2012-01-01

    Student learning and program effectiveness is often assessed using rubrics. While much time and effort may go into their creation, it is equally important to assess how effective and efficient the rubrics actually are in terms of measuring competencies over a number of criteria. This study demonstrates the use of common factor analysis to identify…

  8. Optimization benefits analysis in production process of fabrication components

    Science.gov (United States)

    Prasetyani, R.; Rafsanjani, A. Y.; Rimantho, D.

    2017-12-01

    The determination of an optimal number of product combinations is important. The main problem at the part and service department of PT. United Tractors Pandu Engineering (shortened to PT. UTPE) is the optimization of the combination of fabrication component products (known as Liner Plate), which influences the profit the company will obtain. The Liner Plate is a fabrication component that serves as a protector of the core structure of heavy duty attachments, such as the HD Vessel, HD Bucket, HD Shovel, and HD Blade. The graph of liner plate sales from January to December 2016 fluctuated, and there is no direct conclusion about the optimization of production of such fabrication components. The optimal product combination can be achieved by calculating and plotting the amount of production output and input appropriately. The method used in this study is linear programming with primal, dual, and sensitivity analysis, using QM software for Windows to obtain the optimal fabrication components. With the optimal combination of components, PT. UTPE gains a profit increase of Rp. 105,285,000.00, for a total of Rp. 3,046,525,000.00 per month, with a total production combination of 71 units per product variant per month.
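
A product-mix optimization of this kind can be sketched with a tiny two-product example; the constraint and profit figures below are invented for illustration, and the problem is solved by brute-force enumeration rather than the QM software used in the study.

```python
# Hypothetical two-product mix: maximize profit subject to machine-hour
# and material limits (illustrative numbers, not PT. UTPE's actual data).
best = (0, 0, 0)                     # (profit, units of A, units of B)
for x in range(0, 101):              # units of liner plate A
    for y in range(0, 101):          # units of liner plate B
        if 2 * x + 3 * y <= 180 and 4 * x + 2 * y <= 200:  # hours, material
            profit = 30 * x + 40 * y
            if profit > best[0]:
                best = (profit, x, y)

print(best)  # → (2500, 30, 40)
```

The optimum sits at the intersection of the two binding constraints, exactly where a primal simplex solution would place it.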

  9. Multi-spectrometer calibration transfer based on independent component analysis.

    Science.gov (United States)

    Liu, Yan; Xu, Hao; Xia, Zhenzhen; Gong, Zhiyong

    2018-02-26

    Calibration transfer is indispensable for practical applications of near infrared (NIR) spectroscopy due to the need for precise and consistent measurements across different spectrometers. In this work, a method for multi-spectrometer calibration transfer is described based on independent component analysis (ICA). A spectral matrix is first obtained by aligning the spectra measured on different spectrometers. Then, by using independent component analysis, the aligned spectral matrix is decomposed into the mixing matrix and the independent components of different spectrometers. The differing measurements between spectrometers can then be standardized by correcting the coefficients within the independent components. Two NIR datasets of corn and edible oil samples measured with three and four spectrometers, respectively, were used to test the reliability of this method. The results of both datasets reveal that spectral measurements from different spectrometers can be transferred simultaneously, and that partial least squares (PLS) models built with measurements from one spectrometer can correctly predict spectra transferred from another.
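
    The transfer idea can be sketched on synthetic data. Below, PCA (via SVD) is used as a simplified stand-in for the ICA decomposition described in the record, and the slave instrument's bias is modeled as a simple gain and offset; all data are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectra": 20 samples x 50 wavelengths on a master instrument;
# the slave instrument adds a gain and an offset.
master = np.cumsum(rng.normal(size=(20, 50)), axis=1)
slave = 1.1 * master + 0.3                  # instrument-to-instrument bias

# Align the measurements of the same samples from both devices and
# decompose jointly (PCA here as a simplified stand-in for ICA).
aligned = np.vstack([master, slave])
mean = aligned.mean(axis=0)
U, S, Vt = np.linalg.svd(aligned - mean, full_matrices=False)
scores = U * S                              # per-spectrum coefficients

# Correct the slave coefficients so each component's statistics match the
# master block, then rebuild the transferred spectra.
m_sc, s_sc = scores[:20], scores[20:]
s_std = np.where(s_sc.std(axis=0) < 1e-8, 1.0, s_sc.std(axis=0))
corrected = ((s_sc - s_sc.mean(axis=0)) / s_std
             * m_sc.std(axis=0) + m_sc.mean(axis=0))
transferred = corrected @ Vt + mean

before = np.abs(slave - master).mean()      # error without transfer
after = np.abs(transferred - master).mean() # error after transfer
```

    For a purely affine instrument difference this correction is exact, since each slave coefficient is an affine function of the corresponding master coefficient.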

  10. Group-wise ANOVA simultaneous component analysis for designed omics experiments

    NARCIS (Netherlands)

    Saccenti, Edoardo; Smilde, Age K.; Camacho, José

    2018-01-01

    Introduction: Modern omics experiments pertain not only to the measurement of many variables but also follow complex experimental designs where many factors are manipulated at the same time. This data can be conveniently analyzed using multivariate tools like ANOVA-simultaneous component analysis

  11. Probabilistic structural analysis methods for select space propulsion system components

    Science.gov (United States)

    Millwater, H. R.; Cruse, T. A.

    1989-01-01

    The Probabilistic Structural Analysis Methods (PSAM) project developed at the Southwest Research Institute integrates state-of-the-art structural analysis techniques with probability theory for the design and analysis of complex large-scale engineering structures. An advanced, efficient software system (NESSUS) capable of performing complex probabilistic analysis has been developed. NESSUS contains a number of software components to perform probabilistic analysis of structures. These components include: an expert system, a probabilistic finite element code, a probabilistic boundary element code and a fast probability integrator. The NESSUS software system is shown. An expert system is included to capture and utilize PSAM knowledge and experience. NESSUS/EXPERT is an interactive menu-driven expert system that provides information to assist in the use of the probabilistic finite element code NESSUS/FEM and the fast probability integrator (FPI). The expert system menu structure is summarized. The NESSUS system contains a state-of-the-art nonlinear probabilistic finite element code, NESSUS/FEM, to determine the structural response and sensitivities. A broad range of analysis capabilities and an extensive element library are present.

  12. Determining the optimal number of independent components for reproducible transcriptomic data analysis.

    Science.gov (United States)

    Kairov, Ulykbek; Cantini, Laura; Greco, Alessandro; Molkenov, Askhat; Czerwinska, Urszula; Barillot, Emmanuel; Zinovyev, Andrei

    2017-09-11

    Independent Component Analysis (ICA) is a method that models gene expression data as an action of a set of statistically independent hidden factors. The output of ICA depends on a fundamental parameter: the number of components (factors) to compute. The optimal choice of this parameter, related to determining the effective data dimension, remains an open question in the application of blind source separation techniques to transcriptomic data. Here we address the question of optimizing the number of statistically independent components in the analysis of transcriptomic data for reproducibility of the components in multiple runs of ICA (within the same or within varying effective dimensions) and in multiple independent datasets. To this end, we introduce ranking of independent components based on their stability in multiple ICA computation runs and define a distinguished number of components (Most Stable Transcriptome Dimension, MSTD) corresponding to the point of the qualitative change of the stability profile. Based on a large body of data, we demonstrate that a sufficient number of dimensions is required for biological interpretability of the ICA decomposition and that the most stable components with ranks below MSTD have more chances to be reproduced in independent studies compared to the less stable ones. At the same time, we show that a transcriptomics dataset can be reduced to a relatively high number of dimensions without losing the interpretability of ICA, even though higher dimensions give rise to components driven by small gene sets. We suggest a protocol of ICA application to transcriptomics data with a possibility of prioritizing components with respect to their reproducibility that strengthens the biological interpretation. Computing too few components (much less than MSTD) is not optimal for interpretability of the results. The components ranked within MSTD range have more chances to be reproduced in independent studies.
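
    The stability-ranking idea behind MSTD can be illustrated on a small synthetic example. Here PCA on bootstrap resamples of the samples stands in for the repeated ICA runs used in the paper, and the data are simulated with three strong hidden factors; everything below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# 200 genes x 30 samples: 3 well-separated hidden factors plus noise.
sources = rng.normal(size=(200, 3)) * np.array([5.0, 3.0, 1.5])
X = sources @ rng.normal(size=(3, 30)) + 0.1 * rng.normal(size=(200, 30))

def run_decomposition(X, k, seed):
    # One "run": PCA on a bootstrap resample of the samples -- a
    # simplified stand-in for one randomly initialized ICA run.
    idx = np.random.default_rng(seed).integers(0, X.shape[1], X.shape[1])
    Xb = X[:, idx]
    U, _, _ = np.linalg.svd(Xb - Xb.mean(axis=0), full_matrices=False)
    return U[:, :k]                       # gene-space components

def stability_profile(X, k, n_runs=6):
    runs = [run_decomposition(X, k, s) for s in range(n_runs)]
    ref = runs[0]
    profile = []
    for j in range(k):
        # average best-match |correlation| of component j across runs
        cors = [max(abs(np.corrcoef(ref[:, j], r[:, i])[0, 1])
                    for i in range(k)) for r in runs[1:]]
        profile.append(float(np.mean(cors)))
    return profile

profile = stability_profile(X, k=6)
# Components within the true dimension reproduce across runs; the extra
# components, capturing noise, are markedly less stable.
```

    In the MSTD procedure the qualitative drop in such a stability profile marks the distinguished number of components.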

  13. Fetal source extraction from magnetocardiographic recordings by dependent component analysis

    Energy Technology Data Exchange (ETDEWEB)

    Araujo, Draulio B de [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Barros, Allan Kardec [Department of Electrical Engineering, Federal University of Maranhao, Sao Luis, Maranhao (Brazil); Estombelo-Montesco, Carlos [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Zhao, Hui [Department of Medical Physics, University of Wisconsin, Madison, WI (United States); Filho, A C Roque da Silva [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Baffa, Oswaldo [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Wakai, Ronald [Department of Medical Physics, University of Wisconsin, Madison, WI (United States); Ohnishi, Noboru [Department of Information Engineering, Nagoya University (Japan)

    2005-10-07

    Fetal magnetocardiography (fMCG) has been extensively reported in the literature as a non-invasive, prenatal technique that can be used to monitor various functions of the fetal heart. However, fMCG signals often have low signal-to-noise ratio (SNR) and are contaminated by strong interference from the mother's magnetocardiogram signal. A promising, efficient tool for extracting signals, even under low SNR conditions, is blind source separation (BSS), or independent component analysis (ICA). Herein we propose an algorithm based on a variation of ICA, where the signal of interest is extracted using a time delay obtained from an autocorrelation analysis. We model the system using autoregression, and identify the signal component of interest from the poles of the autocorrelation function. We show that the method is effective in removing the maternal signal, and is computationally efficient. We also compare our results to more established ICA methods, such as FastICA.

  14. Robust LOD scores for variance component-based linkage analysis.

    Science.gov (United States)

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  15. Constrained independent component analysis approach to nonobtrusive pulse rate measurements

    Science.gov (United States)

    Tsouri, Gill R.; Kyal, Survi; Dianat, Sohail; Mestha, Lalit K.

    2012-07-01

    Nonobtrusive pulse rate measurement using a webcam is considered. We demonstrate how state-of-the-art algorithms based on independent component analysis suffer from a sorting problem which hinders their performance, and propose a novel algorithm based on constrained independent component analysis to improve performance. We show how the proposed algorithm extracts a photoplethysmography signal and resolves the sorting problem. In addition, we perform a comparative study between the proposed algorithm and state-of-the-art algorithms over 45 video streams using a finger probe oximeter for reference measurements. The proposed algorithm provides improved accuracy: the root mean square error is decreased from 20.6 and 9.5 beats per minute (bpm) for existing algorithms to 3.5 bpm for the proposed algorithm. An error of 3.5 bpm is within the inaccuracy expected from the reference measurements. This implies that the proposed algorithm provided performance of equal accuracy to the finger probe oximeter.

  16. Determining the number of components in principal components analysis: A comparison of statistical, crossvalidation and approximated methods

    NARCIS (Netherlands)

    Saccenti, E.; Camacho, J.

    2015-01-01

    Principal component analysis is one of the most commonly used multivariate tools to describe and summarize data. Determining the optimal number of components in a principal component model is a fundamental problem in many fields of application. In this paper we compare the performance of several

  17. 7 CFR 1000.53 - Announcement of class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... advanced pricing factors. 1000.53 Section 1000.53 Agriculture Regulations of the Department of Agriculture..., component prices, and advanced pricing factors. (a) On or before the 5th day of the month, the market... administrator for each Federal milk marketing order shall announce the following prices and pricing factors for...

  18. Experimental comparison between total calibration factors and components calibration factors of reference dosemeters used in secondary standard laboratory dosemeters

    International Nuclear Information System (INIS)

    Silva, T.A. da.

    1981-06-01

    A quantitative comparison of component calibration factors with the corresponding overall calibration factor was used to evaluate the adopted component calibration procedure in regard to parasitic elements. Judgement of significance is based upon the experimental uncertainty of a well established procedure for determination of the overall calibration factor. The experimental results obtained for different ionization chambers and different electrometers demonstrate that for one type of electrometer the parasitic elements have no influence on its sensitivity considering the experimental uncertainty of the calibration procedures. In this case the adopted procedure for determination of component calibration factors is considered to be equivalent to the procedure of determination of the overall calibration factor and thus might be used as a strong quality control measure in routine calibration. (Author) [pt

  19. Efficient training of multilayer perceptrons using principal component analysis

    International Nuclear Information System (INIS)

    Bunzmann, Christoph; Urbanczik, Robert; Biehl, Michael

    2005-01-01

    A training algorithm for multilayer perceptrons is discussed and studied in detail, which relates to the technique of principal component analysis. The latter is performed with respect to a correlation matrix computed from the example inputs and their target outputs. Typical properties of the training procedure are investigated by means of a statistical physics analysis in models of learning regression and classification tasks. We demonstrate that the procedure requires by far fewer examples for good generalization than traditional online training. For networks with a large number of hidden units we derive the training prescription which achieves, within our model, the optimal generalization behavior
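
    A minimal sketch of the underlying idea: for a toy two-hidden-unit teacher with quadratic activation (chosen here because the moments have a closed form; the paper treats more general setups), the eigenvectors of a correlation matrix built from the inputs weighted by the target output recover the span of the hidden-unit weight vectors. The teacher, data, and sizes below are all invented:

```python
import numpy as np

rng = np.random.default_rng(2)

d, n = 20, 20000
W = np.zeros((2, d))
W[0, 0] = 1.0                          # teacher weight vectors
W[1, 1] = 1.0                          # (orthonormal, for simplicity)
X = rng.normal(size=(n, d))            # example inputs
y = np.sum((X @ W.T) ** 2, axis=1)     # teacher outputs (quadratic units)

# Output-weighted input correlation matrix: for this teacher,
# E[ y * (x x^T - I) ] = 2 * sum_k w_k w_k^T, so its leading
# eigenvectors span the hidden-unit weight space.
M = (X.T * y) @ X / n - y.mean() * np.eye(d)
vals, vecs = np.linalg.eigh(M)
span = vecs[:, -2:]                    # estimated hidden-unit subspace

overlap = np.linalg.norm(span.T @ W.T, axis=0)   # ~1 if recovered
```

    The point of the construction is that a single eigendecomposition of a second-moment matrix replaces many gradient steps, which is why far fewer examples are needed than in traditional online training.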

  20. Factor Structure of Indices of the Second Derivative of the Finger Photoplethysmogram with Metabolic Components and Other Cardiovascular Risk Indicators

    Directory of Open Access Journals (Sweden)

    Tomoyuki Kawada

    2013-02-01

    Full Text Available Background: The second derivative of the finger photoplethysmogram (SDPTG) is an indicator of arterial stiffness. The present study was conducted to clarify the factor structure of indices of the SDPTG in combination with components of the metabolic syndrome (MetS), to elucidate the significance of the SDPTG among various cardiovascular risk factors. Methods: The SDPTG was determined in the second forefinger of the left hand in 1,055 male workers (mean age, 44.2±6.4 years). Among the 4 waves of the SDPTG components, the ratios of the height of the "a" wave to that of the "b" and "d" waves were expressed as b/a and d/a, and used as SDPTG indices for the analysis. Results: Principal axis factoring analysis was conducted using age, SDPTG indices, components of MetS, and the serum levels of C-reactive protein (CRP) and uric acid. Three factors were extracted, and the SDPTG indices were categorized in combination with age as the third factor. Metabolic components and the SDPTG indices were categorized independently. These three factors explained 44.4% of the total variation. Multiple logistic regression analysis revealed age, d/a, serum uric acid, serum CRP, and regular exercise as independent determinants of the risk of MetS. The odds ratios (95% confidence intervals) were 1.08 (1.04 to 1.11), 0.10 (0.01 to 0.73), 1.24 (1.06 to 1.44), 3.59 (2.37 to 5.42), and 0.48 (0.28 to 0.82), respectively. Conclusion: The SDPTG indices were categorized in combination with age, and they differed in characteristics from components of MetS or inflammatory markers. In addition, this cross-sectional study also revealed a decrease of the d/a ratio as a risk factor for the development of MetS.

  1. The search for putative unifying genetic factors for components of the metabolic syndrome

    DEFF Research Database (Denmark)

    Sjögren, M; Lyssenko, V; Jonsson, Anna Elisabet

    2008-01-01

    The metabolic syndrome is a cluster of factors contributing to increased risk of cardiovascular disease and type 2 diabetes, but unifying mechanisms have not been identified. Our aim was to study whether common variations in 17 genes previously associated with type 2 diabetes or components of the metabolic syndrome, and variants in nine genes with inconsistent association with at least two components of the metabolic syndrome, would also predict future development of components of the metabolic syndrome, individually or in combination.

  2. Dynamic analysis of the radiolysis of binary component system

    International Nuclear Information System (INIS)

    Katayama, M.; Trumbore, C.N.

    1975-01-01

    Dynamic analysis was performed on a variety of combinations of components in the radiolysis of binary systems, taking the hydrogen-producing reaction with hydrocarbon RH 2 as an example. A definite rule, useful for revealing the reaction mechanism, could be established from this analysis. The combinations were as follows: 1) components A and B do not interact but serve only as diluents; 2) A is a diluent and B is a radical captor; 3) both A and B are radical captors; 4-1) A is a diluent and B decomposes after receiving the excitation energy of A; 4-2) A is a diluent and B does not decompose after receiving the excitation energy of A; 5-1) A is a radical captor and B decomposes after receiving the excitation energy of A; 5-2) A is a radical captor and B does not decompose after receiving the excitation energy of A; 6-1) both A and B decompose after receiving the excitation energy of the partner component; and 6-2) neither A nor B decomposes after receiving the excitation energy of the partner component. According to the dynamic analysis of the above nine combinations, it can be pointed out that if excitation transfer participates, phenomena apparently similar to radical capture are presented. It is desirable to measure the yield of radicals experimentally with a system in which excitation transfer need not be given much consideration. An isotope-substituted mixture is conceived as one such system. This analytical method was applied to systems containing cyclopentanone, such as the cyclopentanone-cyclohexane system. (Iwakiri, K.)

  3. Representation for dialect recognition using topographic independent component analysis

    Science.gov (United States)

    Wei, Qu

    2004-10-01

    In dialect speech recognition, the feature of tone in one dialect is subject to changes in pitch frequency as well as in the length of tone. It is beneficial for recognition if a representation can be derived to account for the frequency and length changes of tone in an effective and meaningful way. In this paper, we propose a method for learning such a representation from a set of unlabeled speech sentences containing dialect features that vary in pitch frequency and tone length. Topographic independent component analysis (TICA) is applied for the unsupervised learning to produce an emergent result that is a topographic matrix made up of basis components. The dialect speech is topographic in the following sense: the basis components, as the units of the speech, are ordered in the feature matrix such that components of one dialect are grouped along one axis and changes in time windows are accounted for along the other axis. This provides a meaningful set of basis vectors that may be used to construct dialect subspaces for dialect speech recognition.

  4. Probabilistic methods in nuclear power plant component ageing analysis

    International Nuclear Information System (INIS)

    Simola, K.

    1992-03-01

    Nuclear power plant ageing research aims to ensure that plant safety and reliability are maintained at a desired level throughout the designed, and possibly extended, lifetime. In ageing studies, the reliability of components, systems and structures is evaluated taking into account the possible time-dependent decrease in reliability. The results of the analyses can be used in the evaluation of the remaining lifetime of components and in the development of preventive maintenance, testing and replacement programmes. The report discusses the use of probabilistic models in evaluating the ageing of nuclear power plant components. The principles of nuclear power plant ageing studies are described and examples of ageing management programmes in foreign countries are given. The use of time-dependent probabilistic models to evaluate the ageing of various components and structures is described, and the application of the models is demonstrated with two case studies. In the case study of motor-operated closing valves, the analyses are based on failure data obtained from a power plant. In the second example, environmentally assisted crack growth is modelled with a computer code developed in the United States, and the applicability of the model is evaluated on the basis of operating experience

  5. Development of component failure data for seismic risk analysis

    International Nuclear Information System (INIS)

    Fray, R.R.; Moulia, T.A.

    1981-01-01

    This paper describes the quantification and utilization of seismic failure data used in the Diablo Canyon Seismic Risk Study. A single variable representation of earthquake severity that uses peak horizontal ground acceleration to characterize earthquake severity was employed. The use of a multiple variable representation would allow direct consideration of vertical accelerations and the spectral nature of earthquakes but would have added such complexity that the study would not have been feasible. Vertical accelerations and spectral nature were indirectly considered because component failure data were derived from design analyses, qualification tests and engineering judgment that did include such considerations. Two types of functions were used to describe component failure probabilities. Ramp functions were used for components, such as piping and structures, qualified by stress analysis. 'Anchor points' for ramp functions were selected by assuming a zero probability of failure at code allowable stress levels and unity probability of failure at ultimate stress levels. The accelerations corresponding to allowable and ultimate stress levels were determined by conservatively assuming a linear relationship between seismic stress and ground acceleration. Step functions were used for components, such as mechanical and electrical equipment, qualified by testing. Anchor points for step functions were selected by assuming a unity probability of failure above the qualification acceleration. (orig./HP)

  6. Integrative sparse principal component analysis of gene expression data.

    Science.gov (United States)

    Liu, Mengque; Fan, Xinyan; Fang, Kuangnan; Zhang, Qingzhao; Ma, Shuangge

    2017-12-01

    In the analysis of gene expression data, dimension reduction techniques have been extensively adopted. The most popular one is perhaps the PCA (principal component analysis). To generate more reliable and more interpretable results, the SPCA (sparse PCA) technique has been developed. With the "small sample size, high dimensionality" characteristic of gene expression data, the analysis results generated from a single dataset are often unsatisfactory. Under contexts other than dimension reduction, integrative analysis techniques, which jointly analyze the raw data of multiple independent datasets, have been developed and shown to outperform "classic" meta-analysis and other multidatasets techniques and single-dataset analysis. In this study, we conduct integrative analysis by developing the iSPCA (integrative SPCA) method. iSPCA achieves the selection and estimation of sparse loadings using a group penalty. To take advantage of the similarity across datasets and generate more accurate results, we further impose contrasted penalties. Different penalties are proposed to accommodate different data conditions. Extensive simulations show that iSPCA outperforms the alternatives under a wide spectrum of settings. The analysis of breast cancer and pancreatic cancer data further shows iSPCA's satisfactory performance. © 2017 WILEY PERIODICALS, INC.

  7. Factors behind international relocation and changes in production geography in the European automobile components industry

    OpenAIRE

    Jesús F. Lampón; Santiago Lago-Peñas

    2013-01-01

    This article analyses business strategies in the automobile sector to determine the key factors behind production relocation processes in automobile components suppliers. These factors help explain changes in production geography in the sector not only in terms of location advantages but also from a perspective of corporate strategies and decision-making mechanisms within firms. The results obtained from an empirical study in Spain during the period 2001-2008 show how the components sector h...

  8. Source Signals Separation and Reconstruction Following Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    WANG Cheng

    2014-02-01

    Full Text Available For the problem of separating and reconstructing source signals from observed signals, the physical significance of the blind source separation model and of independent component analysis is not very clear, and the solution is not unique. To address these disadvantages, a new linear, instantaneous mixing model and a novel method for separating and reconstructing source signals from observed signals based on principal component analysis (PCA) are put forward. The assumption of this new model is that the source signals are statistically uncorrelated rather than independent, which differs from the traditional blind source separation model. A one-to-one relationship between the linear, instantaneous mixing matrix of the new model and the linear compound matrix of PCA, and a one-to-one relationship between the uncorrelated source signals and the principal components, are demonstrated using the concepts of the linear separation matrix and the uncorrelatedness of source signals. Based on this theoretical link, the source signal separation and reconstruction problem is then transformed into PCA of the observed signals. The theoretical derivation and numerical simulation results show that, despite Gaussian measurement noise, both the waveform and the amplitude information of uncorrelated source signals can be separated and reconstructed by PCA when the linear mixing matrix is column-orthogonal and normalized; only the waveform information can be separated and reconstructed when the linear mixing matrix is column-orthogonal but not normalized; and uncorrelated source signals cannot be separated and reconstructed by PCA when the mixing matrix is not column-orthogonal.
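
    The column-orthonormal case can be sketched directly: two uncorrelated synthetic sources, mixed by an orthonormal matrix, are recovered by PCA with both waveform and amplitude intact (up to sign and ordering). The signals below are invented for illustration:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
s1 = np.sin(2 * np.pi * 5 * t)                 # source 1, variance 0.5
s2 = np.sign(np.sin(2 * np.pi * 13 * t))       # source 2, variance 1.0
S = np.column_stack([s1, s2])                  # uncorrelated sources
A = np.array([[0.6, -0.8],
              [0.8, 0.6]])                     # column-orthonormal mixing
X = S @ A.T                                    # observed signals

# PCA: eigendecomposition of the observation covariance. Because the
# mixing matrix is orthonormal, the eigenvectors equal its columns, so
# the principal component scores are the sources themselves.
vals, vecs = np.linalg.eigh(np.cov(X.T))
recovered = X @ vecs                           # principal component scores
```

    Because the source variances differ (0.5 vs 1.0), the eigenvalues are distinct and each principal component matches exactly one source; with equal variances the ordering would be ambiguous.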

  9. Prestudy - Development of trend analysis of component failure

    International Nuclear Information System (INIS)

    Poern, K.

    1995-04-01

    The Bayesian trend analysis model that has been used for the computation of initiating event intensities (I-book) is based on the number of events that have occurred during consecutive time intervals. The model itself is a Poisson process with time-dependent intensity. For the analysis of ageing it is often more relevant to use times between failures for a given component as input, where by 'time' is meant the quantity that best characterizes the age of the component (calendar time, operating time, number of activations, etc). Therefore, it has been considered necessary to extend the model and the computer code to allow trend analysis of times between events, and also of several sequences of times between events. This report describes this model extension as well as an application to an introductory ageing analysis of centrifugal pumps defined in Table 5 of the T-book. The application in turn directs attention to the need for further development of both the trend model and the data base. Figs
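
    For times-between-failures input, a standard classical counterpart to such trend analysis is the Laplace test, which checks whether failures cluster late in the observation window (ageing) or early (improvement). The failure times below are made up for illustration; this is not the report's Bayesian model:

```python
import math

def laplace_u(failure_times, t_end):
    """Laplace trend statistic: approximately N(0,1) under a constant
    failure intensity (homogeneous Poisson process); clearly positive
    values indicate ageing, clearly negative values improvement."""
    n = len(failure_times)
    t_bar = sum(failure_times) / n
    return (t_bar - t_end / 2.0) / (t_end * math.sqrt(1.0 / (12.0 * n)))

u_ageing = laplace_u([100, 400, 700, 850, 930, 980], 1000)  # late cluster
u_steady = laplace_u([150, 320, 510, 660, 840], 1000)       # even spread
```

    A Bayesian extension like the one in the report would instead place a prior on the intensity's trend parameter, but the statistic above captures the same question: are the inter-failure times shrinking with age?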

  10. BANK CAPITAL AND MACROECONOMIC SHOCKS: A PRINCIPAL COMPONENTS ANALYSIS AND VECTOR ERROR CORRECTION MODEL

    Directory of Open Access Journals (Sweden)

    Christian NZENGUE PEGNET

    2011-07-01

    Full Text Available The recent financial turmoil has clearly highlighted the potential role of financial factors in the amplification of macroeconomic developments and stressed the importance of analyzing the relationship between banks’ balance sheets and economic activity. This paper assesses the impact of the bank capital channel in the transmission of shocks in Europe on the basis of banks' balance sheet data. The empirical analysis is carried out through a Principal Component Analysis and in a Vector Error Correction Model.

  11. Aeromagnetic Compensation Algorithm Based on Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Peilin Wu

    2018-01-01

    Full Text Available Aeromagnetic exploration is an important exploration method in geophysics. The data are typically measured by an optically pumped magnetometer mounted on an aircraft. But any aircraft produces significant levels of magnetic interference. Therefore, aeromagnetic compensation is important in aeromagnetic exploration. However, multicollinearity in the aeromagnetic compensation model degrades the performance of the compensation. To address this issue, a novel aeromagnetic compensation method based on principal component analysis is proposed. Using the algorithm, the correlation in the feature matrix is eliminated and the principal components are used to construct the hyperplane that compensates the platform-generated magnetic fields. The algorithm was tested using a helicopter, and the obtained improvement ratio is 9.86. The compensation quality is almost the same as, or slightly better than, that of ridge regression. The validity of the proposed method was experimentally demonstrated.
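
    The multicollinearity fix can be sketched as principal-component regression: regress the measured field on the leading principal components of the (nearly collinear) feature matrix instead of on the raw features. The features and coefficients below are hypothetical, not the actual aeromagnetic compensation terms:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 500
base = rng.normal(size=n)                    # common maneuver signal
# Three nearly collinear feature columns (hypothetical maneuver terms):
F = np.column_stack([base,
                     base + 0.01 * rng.normal(size=n),
                     base + 0.01 * rng.normal(size=n)])
interference = F @ np.array([1.0, 2.0, -0.5])
measured = interference + 0.05 * rng.normal(size=n)  # plus sensor noise

# PCA of the feature matrix: drop near-zero-variance directions, which
# is exactly where the multicollinearity lives.
Fc = F - F.mean(axis=0)
U, S, Vt = np.linalg.svd(Fc, full_matrices=False)
keep = S > 1e-2 * S[0]
T = Fc @ Vt[keep].T                          # de-correlated scores
coef, *_ = np.linalg.lstsq(T, measured - measured.mean(), rcond=None)
compensated = measured - measured.mean() - T @ coef  # residual field

improvement_ratio = measured.std() / compensated.std()
```

    Regressing on the scores `T` instead of `F` keeps the fit well-conditioned, which is the same role ridge regression plays in the comparison above.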

  12. Fast principal component analysis for stacking seismic data

    Science.gov (United States)

    Wu, Juan; Bai, Min

    2018-04-01

    Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and enhance the principal components to a great extent. Traditional average-based seismic stacking methods cannot obtain optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data without being sensitive to noise level. Considering the computational bottleneck of the classic PCA algorithm in processing massive seismic data, we propose an efficient PCA algorithm to make the proposed method readily applicable for industrial applications. Two numerically designed examples and one real seismic dataset are used to demonstrate the performance of the presented method.
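
    A toy sketch of PCA stacking: the first left singular vector of a gather of repeated noisy traces serves as the stack. The wavelet and noise model are synthetic and only illustrate the mechanism, not the paper's performance claims or its fast PCA variant:

```python
import numpy as np

rng = np.random.default_rng(4)

nt, ntr = 400, 30                                  # samples x traces
t = np.arange(nt)
wavelet = np.exp(-((t - 200) / 15.0) ** 2) * np.cos(0.3 * (t - 200))
sigma = rng.uniform(0.1, 0.3, ntr)                 # per-trace noise level
gather = wavelet[:, None] + sigma * rng.normal(size=(nt, ntr))

mean_stack = gather.mean(axis=1)                   # conventional stack

# PCA stack: rank-1 approximation of the gather along the time axis.
U, S, Vt = np.linalg.svd(gather, full_matrices=False)
pca_stack = U[:, 0] * (S[0] * Vt[0].mean())        # rescaled to amplitude
if np.corrcoef(pca_stack, wavelet)[0, 1] < 0:      # fix SVD sign ambiguity
    pca_stack = -pca_stack
```

    The right singular vector `Vt[0]` acts as a set of trace weights, which is what makes the PCA stack a weighted rather than a plain average.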

  13. Demixed principal component analysis of neural population data.

    Science.gov (United States)

    Kobak, Dmitry; Brendel, Wieland; Constantinidis, Christos; Feierstein, Claudia E; Kepecs, Adam; Mainen, Zachary F; Qi, Xue-Lian; Romo, Ranulfo; Uchida, Naoshige; Machens, Christian K

    2016-04-12

    Neurons in higher cortical areas, such as the prefrontal cortex, are often tuned to a variety of sensory and motor variables, and are therefore said to display mixed selectivity. This complexity of single neuron responses can obscure what information these areas represent and how it is represented. Here we demonstrate the advantages of a new dimensionality reduction technique, demixed principal component analysis (dPCA), that decomposes population activity into a few components. In addition to systematically capturing the majority of the variance of the data, dPCA also exposes the dependence of the neural representation on task parameters such as stimuli, decisions, or rewards. To illustrate our method we reanalyze population data from four datasets comprising different species, different cortical areas and different experimental tasks. In each case, dPCA provides a concise way of visualizing the data that summarizes the task-dependent features of the population response in a single figure.

  14. Progress Towards Improved Analysis of TES X-ray Data Using Principal Component Analysis

    Science.gov (United States)

    Busch, S. E.; Adams, J. S.; Bandler, S. R.; Chervenak, J. A.; Eckart, M. E.; Finkbeiner, F. M.; Fixsen, D. J.; Kelley, R. L.; Kilbourne, C. A.; Lee, S.-J.

    2015-01-01

    The traditional method of applying a digital optimal filter to measure X-ray pulses from transition-edge sensor (TES) devices does not achieve the best energy resolution when the signals have a highly non-linear response to energy, or the noise is non-stationary during the pulse. We present an implementation of a method to analyze X-ray data from TESs, which is based upon principal component analysis (PCA). Our method separates the X-ray signal pulse into orthogonal components that have the largest variance. We typically recover pulse height, arrival time, differences in pulse shape, and the variation of pulse height with detector temperature. These components can then be combined to form a representation of pulse energy. An added value of this method is that by reporting information on more descriptive parameters (as opposed to a single number representing energy), we generate a much more complete picture of the pulse received. Here we report on progress in developing this technique for future implementation on X-ray telescopes. We used an 55Fe source to characterize Mo/Au TESs. On the same dataset, the PCA method recovers a spectral resolution that is better by a factor of two than achievable with digital optimal filters.
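
    The pulse-decomposition step can be sketched on synthetic data: PCA of a set of detector pulses yields a leading component whose score tracks pulse height, a stand-in for energy. The pulse shape and noise below are invented, not real TES traces:

```python
import numpy as np

rng = np.random.default_rng(5)

nt, npulse = 300, 200
t = np.arange(nt)
template = np.exp(-t / 60.0) - np.exp(-t / 6.0)    # fast-rise, slow-decay
heights = rng.uniform(0.8, 1.2, npulse)            # proxy for X-ray energy
pulses = (heights[:, None] * template
          + 0.02 * rng.normal(size=(npulse, nt)))  # noisy pulse records

# PCA: the leading component of the centered pulse set captures the
# pulse-height variation; its per-pulse score is an energy estimator.
U, S, Vt = np.linalg.svd(pulses - pulses.mean(axis=0), full_matrices=False)
energy_score = U[:, 0] * S[0]
if np.corrcoef(energy_score, heights)[0, 1] < 0:   # fix SVD sign ambiguity
    energy_score = -energy_score
```

    In the full method, further components would capture arrival-time and shape variations, and several scores would be combined into the final energy estimate.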

  15. Crude oil price analysis and forecasting based on variational mode decomposition and independent component analysis

    Science.gov (United States)

    E, Jianwei; Bao, Yanling; Ye, Jimin

    2017-10-01

    As one of the most vital energy resources in the world, crude oil plays a significant role in the international economic market, and the fluctuation of its price has attracted both academic and commercial attention. Many methods exist for forecasting the trend of the crude oil price, but traditional models have failed to predict it accurately. Motivated by this, a hybrid method is proposed in this paper that combines variational mode decomposition (VMD), independent component analysis (ICA) and the autoregressive integrated moving average (ARIMA) model, called VMD-ICA-ARIMA. The purpose of this study is to analyze the factors influencing the crude oil price and to predict its future values. The major steps are as follows: firstly, the VMD model is applied to the original signal (the crude oil price) to decompose it adaptively into mode functions. Secondly, independent components are separated by ICA, and how these independent components affect the crude oil price is analyzed. Finally, the crude oil price is forecast with the ARIMA model; the forecast trend shows that the crude oil price declines periodically. Compared with the benchmark ARIMA and EEMD-ICA-ARIMA models, VMD-ICA-ARIMA forecasts the crude oil price more accurately.
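
    Of the three stages, the ICA separation step is the easiest to illustrate in isolation (a minimal sketch assuming scikit-learn is available; the VMD decomposition and the ARIMA forecast are omitted, and the signals are synthetic stand-ins, not oil prices):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 8, 2000)

# Two synthetic source signals standing in for decomposed price modes
# (in the paper these come out of the VMD step; these are illustrative).
s1 = np.sign(np.sin(3 * t))                             # square-like cycle
s2 = np.sin(5 * t) + 0.1 * rng.standard_normal(2000)    # noisy oscillation
S = np.c_[s1, s2]

# Observed series are linear mixtures of the sources, as ICA assumes.
A = np.array([[1.0, 0.5], [0.4, 1.0]])
X = S @ A.T

S_hat = FastICA(n_components=2, random_state=0).fit_transform(X)

# Each recovered component should match one true source up to sign and
# permutation, the two indeterminacies ICA cannot resolve.
C = np.abs(np.corrcoef(S.T, S_hat.T))[:2, 2:]
print(C.max(axis=1))
```

    The separated components would then be inspected for their influence on the price series, and each forecast with ARIMA.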

  16. A Principal Components Analysis of the Rathus Assertiveness Schedule.

    Science.gov (United States)

    Law, H. G.; And Others

    1979-01-01

    Investigated the adequacy of the Rathus Assertiveness Schedule (RAS) as a global measure of assertiveness. Analysis indicated that the RAS does not provide a unidimensional index of assertiveness, but rather measures a number of factors including situation-specific assertive behavior, aggressiveness, and a more general assertiveness. (Author)

  17. Heritable patterns of tooth decay in the permanent dentition: principal components and factor analyses.

    Science.gov (United States)

    Shaffer, John R; Feingold, Eleanor; Wang, Xiaojing; Cuenco, Karen T; Weeks, Daniel E; DeSensi, Rebecca S; Polk, Deborah E; Wendell, Steve; Weyant, Robert J; Crout, Richard; McNeil, Daniel W; Marazita, Mary L

    2012-03-09

    Dental caries is the result of a complex interplay among environmental, behavioral, and genetic factors, with distinct patterns of decay likely due to specific etiologies. Therefore, global measures of decay, such as the DMFS index, may not be optimal for identifying risk factors that manifest as specific decay patterns, especially if the risk factors such as genetic susceptibility loci have small individual effects. We used two methods to extract patterns of decay from surface-level caries data in order to generate novel phenotypes with which to explore the genetic regulation of caries. The 128 tooth surfaces of the permanent dentition were scored as carious or not by intra-oral examination for 1,068 participants aged 18 to 75 years from 664 biological families. Principal components analysis (PCA) and factor analysis (FA), two methods of identifying underlying patterns without a priori surface classifications, were applied to our data. The three strongest caries patterns identified by PCA recaptured variation represented by DMFS index (correlation, r = 0.97), pit and fissure surface caries (r = 0.95), and smooth surface caries (r = 0.89). However, together, these three patterns explained only 37% of the variability in the data, indicating that a priori caries measures are insufficient for fully quantifying caries variation. In comparison, the first pattern identified by FA was strongly correlated with pit and fissure surface caries (r = 0.81), but other identified patterns, including a second pattern representing caries of the maxillary incisors, were not representative of any previously defined caries indices. Some patterns identified by PCA and FA were heritable (h(2) = 30-65%, p = 0.043-0.006), whereas other patterns were not, indicating both genetic and non-genetic etiologies of individual decay patterns. This study demonstrates the use of decay patterns as novel phenotypes to assist in understanding the multifactorial nature of dental caries.

  18. Multigroup Moderation Test in Generalized Structured Component Analysis

    Directory of Open Access Journals (Sweden)

    Angga Dwi Mulyanto

    2016-05-01

    Full Text Available Generalized Structured Component Analysis (GSCA) is an alternative method for structural modeling using alternating least squares. GSCA can be used for complex analyses, including multigroup analysis. GSCA can be run with a free software package called GeSCA, but GeSCA provides no multigroup moderation test for comparing effects between groups. In this research we propose using the T test from PLS for testing multigroup moderation in GSCA. The T test only requires the sample size, the estimated path coefficient, and the standard error of each group, all of which are already available in the GeSCA output, and the formula is simple, so the analysis does not take the user long.
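
    The kind of T test the authors describe can be sketched as below (our reading of the pooled-standard-error formula commonly used for PLS multigroup comparison; the abstract does not reproduce the exact formula, and all numbers here are invented):

```python
import math

def multigroup_t(b1, se1, n1, b2, se2, n2):
    """t statistic for the difference of a path coefficient between two
    groups, using pooled standard errors (the parametric form commonly
    used for PLS multigroup comparison). Inputs are exactly what GeSCA
    reports per group: estimate, standard error, and sample size."""
    df = n1 + n2 - 2
    pooled_se = math.sqrt(
        ((n1 - 1) ** 2 / df) * se1 ** 2 + ((n2 - 1) ** 2 / df) * se2 ** 2
    ) * math.sqrt(1.0 / n1 + 1.0 / n2)
    return (b1 - b2) / pooled_se, df

# Hypothetical two-group comparison (numbers invented for illustration).
t_stat, df = multigroup_t(b1=0.45, se1=0.08, n1=120, b2=0.20, se2=0.10, n2=150)
print(f"t = {t_stat:.3f}, df = {df}")
```

    The resulting t is compared against a t distribution with n1 + n2 - 2 degrees of freedom; a Welch-style variant, t = (b1 - b2)/sqrt(se1^2 + se2^2), is also common when group variances differ strongly.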

  19. Demographic, socioeconomic, and behavioral factors affecting patterns of tooth decay in the permanent dentition: principal components and factor analyses.

    Science.gov (United States)

    Shaffer, John R; Polk, Deborah E; Feingold, Eleanor; Wang, Xiaojing; Cuenco, Karen T; Weeks, Daniel E; DeSensi, Rebecca S; Weyant, Robert J; Crout, Richard; McNeil, Daniel W; Marazita, Mary L

    2013-08-01

    Dental caries of the permanent dentition is a multifactorial disease resulting from the complex interplay of endogenous and environmental risk factors. The disease is not easily quantitated due to the innumerable possible combinations of carious lesions across individual tooth surfaces of the permanent dentition. Global measures of decay, such as the DMFS index (which was developed for surveillance applications), may not be optimal for studying the epidemiology of dental caries because they ignore the distinct patterns of decay across the dentition. We hypothesize that specific risk factors may manifest their effects on specific tooth surfaces leading to patterns of decay that can be identified and studied. In this study, we utilized two statistical methods of extracting patterns of decay from surface-level caries data to create novel phenotypes with which to study the risk factors affecting dental caries. Intra-oral dental examinations were performed on 1068 participants aged 18-75 years to assess dental caries. The 128 tooth surfaces of the permanent dentition were scored as carious or not and used as input for principal components analysis (PCA) and factor analysis (FA), two methods of identifying underlying patterns without a priori knowledge of the patterns. Demographic (age, sex, birth year, race/ethnicity, and educational attainment), anthropometric (height, body mass index, waist circumference), endogenous (saliva flow), and environmental (tooth brushing frequency, home water source, and home water fluoride) risk factors were tested for association with the caries patterns identified by PCA and FA, as well as DMFS, for comparison. The ten strongest patterns (i.e. those that explain the most variation in the data set) extracted by PCA and FA were considered. The three strongest patterns identified by PCA reflected (i) global extent of decay (i.e. comparable to DMFS index), (ii) pit and fissure surface caries and (iii) smooth surface caries, respectively. 

  20. Determination of the optimal number of components in independent components analysis.

    Science.gov (United States)

    Kassouf, Amine; Jouan-Rimbaud Bouveresse, Delphine; Rutledge, Douglas N

    2018-03-01

    Independent components analysis (ICA) may be considered as one of the most established blind source separation techniques for the treatment of complex data sets in analytical chemistry. Like other similar methods, the determination of the optimal number of latent variables, in this case, independent components (ICs), is a crucial step before any modeling. Therefore, validation methods are required in order to decide about the optimal number of ICs to be used in the computation of the final model. In this paper, three new validation methods are formally presented. The first one, called Random_ICA, is a generalization of the ICA_by_blocks method. Its specificity resides in the random way of splitting the initial data matrix into two blocks, and then repeating this procedure several times, giving a broader perspective for the selection of the optimal number of ICs. The second method, called KMO_ICA_Residuals is based on the computation of the Kaiser-Meyer-Olkin (KMO) index of the transposed residual matrices obtained after progressive extraction of ICs. The third method, called ICA_corr_y, helps to select the optimal number of ICs by computing the correlations between calculated proportions and known physico-chemical information about samples, generally concentrations, or between a source signal known to be present in the mixture and the signals extracted by ICA. These three methods were tested using varied simulated and experimental data sets and compared, when necessary, to ICA_by_blocks. Results were relevant and in line with expected ones, proving the reliability of the three proposed methods. Copyright © 2017 Elsevier B.V. All rights reserved.
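
    The split-half logic behind Random_ICA can be illustrated on a toy mixture (a schematic reading of the idea, not the authors' implementation; scikit-learn's FastICA is assumed, and the data and thresholds are ours):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)

# Toy mixture: 3 non-Gaussian sources observed through 6 noisy channels.
t = np.linspace(0, 40, 400)
S = np.c_[np.sin(t), np.sign(np.sin(0.43 * t)), rng.laplace(size=400)]
X = S @ rng.normal(size=(3, 6)) + 0.05 * rng.standard_normal((400, 6))

def split_half_similarity(X, n_ics, rng):
    """Random_ICA-style check: split the rows at random into two blocks,
    fit ICA on each block, and measure how well the extracted components
    match across blocks (reliable ICs should reappear in both)."""
    idx = rng.permutation(len(X))
    half = len(X) // 2
    ica_a = FastICA(n_components=n_ics, random_state=0, max_iter=1000).fit(X[idx[:half]])
    ica_b = FastICA(n_components=n_ics, random_state=0, max_iter=1000).fit(X[idx[half:]])
    # Unmix the full data with both models and match components by correlation.
    corr = np.abs(np.corrcoef(ica_a.transform(X).T, ica_b.transform(X).T))
    best_match = corr[:n_ics, n_ics:].max(axis=1)
    return best_match.min()          # worst-case reproducibility

for k in (2, 3, 4):
    print(k, round(split_half_similarity(X, k, rng), 3))
```

    With the true number of sources (here 3) the worst best-match correlation stays high; asking for more ICs than sources tends to yield components that do not replicate across random splits, which is the selection signal the repeated random splitting broadens into a distribution.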

  1. Fluvial facies reservoir productivity prediction method based on principal component analysis and artificial neural network

    Directory of Open Access Journals (Sweden)

    Pengyu Gao

    2016-03-01

    Full Text Available It is difficult to forecast well productivity because of the complexity of vertical and horizontal developments in fluvial facies reservoirs. This paper proposes a method based on principal component analysis and an artificial neural network to predict the well productivity of fluvial facies reservoirs. The method summarizes the statistical reservoir factors and engineering factors that affect well productivity, extracts information by applying principal component analysis, and exploits the neural network's ability to approximate arbitrary functions to realize an accurate and efficient prediction of fluvial facies reservoir well productivity. This method provides an effective way to forecast the productivity of fluvial facies reservoirs, which is affected by multiple factors and complex mechanisms. The study results show that this method is a practical, effective, accurate and indirect productivity forecasting method suitable for field application.

  2. Robustness analysis of bogie suspension components Pareto optimised values

    Science.gov (United States)

    Mousavi Bideleh, Seyed Milad

    2017-08-01

    The bogie suspension system of high speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, a robustness analysis of the bogie dynamic response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of the bogie suspension components is chosen for the analysis. The longitudinal and lateral primary stiffnesses, the longitudinal and vertical secondary stiffnesses, and the yaw damping are considered as the five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve the computational efficiency. The results showed that the dynamic response of the vehicle with wear/comfort Pareto optimised values of the bogie suspension is robust against uncertainties in the design parameters, and the probability of failure is small for parameter uncertainties with COVs up to 0.1.
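
    The lognormal perturbation of a design parameter with a prescribed COV can be written down directly (numpy sketch; the stiffness value is a hypothetical placeholder, not a number from the paper's vehicle model):

```python
import numpy as np

rng = np.random.default_rng(3)

def lognormal_around(nominal, cov, size, rng):
    """Sample a design parameter around its nominal (Pareto optimised)
    value from a lognormal distribution with a given coefficient of
    variation, keeping the mean of the samples at `nominal`."""
    sigma2 = np.log(1.0 + cov**2)            # lognormal shape parameter from COV
    mu = np.log(nominal) - 0.5 * sigma2      # shift so that E[X] = nominal
    return rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=size)

# Hypothetical primary stiffness of 1.0 MN/m perturbed with a 10% COV.
samples = lognormal_around(1.0e6, 0.10, 100_000, rng)
print(f"mean = {samples.mean():.3e}, empirical COV = {samples.std() / samples.mean():.4f}")
```

    Each sampled parameter set would then be run through the vehicle model, with the maximum entropy and fractional moment machinery summarising the resulting response distributions; that part is specific to the authors' 50-DOF model and is not sketched here.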

  3. Sparse principal component analysis in medical shape modeling

    Science.gov (United States)

    Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus

    2006-03-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
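
    The simple-thresholding baseline that the article compares SPCA against fits in a few lines (numpy only; data, threshold, and dimensions invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data: 100 samples, 10 variables; only the first three variables carry
# a common effect, which a sparse loading should isolate (numbers invented).
X = 0.1 * rng.standard_normal((100, 10))
effect = rng.standard_normal(100)
X[:, :3] += np.outer(effect, [1.0, 0.8, 0.6])
X -= X.mean(axis=0)

# First ordinary PCA loading, obtained via SVD ...
U, S, Vt = np.linalg.svd(X, full_matrices=False)
loading = Vt[0]

# ... sparsified by simple thresholding: zero small loadings, renormalise.
sparse = np.where(np.abs(loading) > 0.2, loading, 0.0)
sparse /= np.linalg.norm(sparse)
print("variables retained:", np.nonzero(sparse)[0])
```

    Dedicated SPCA algorithms, like the one reviewed in the article, build sparsity into the optimisation itself rather than truncating a dense loading afterwards; the thresholding above is only the baseline.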

  4. Protein structure similarity from principle component correlation analysis

    Directory of Open Access Journals (Sweden)

    Chou James

    2006-01-01

    Full Text Available Abstract Background Owing to rapid expansion of protein structure databases in recent years, methods of structure comparison are becoming increasingly effective and important in revealing novel information on functional properties of proteins and their roles in the grand scheme of evolutionary biology. Currently, the structural similarity between two proteins is measured by the root-mean-square-deviation (RMSD in their best-superimposed atomic coordinates. RMSD is the golden rule of measuring structural similarity when the structures are nearly identical; it, however, fails to detect the higher order topological similarities in proteins evolved into different shapes. We propose new algorithms for extracting geometrical invariants of proteins that can be effectively used to identify homologous protein structures or topologies in order to quantify both close and remote structural similarities. Results We measure structural similarity between proteins by correlating the principle components of their secondary structure interaction matrix. In our approach, the Principle Component Correlation (PCC analysis, a symmetric interaction matrix for a protein structure is constructed with relationship parameters between secondary elements that can take the form of distance, orientation, or other relevant structural invariants. When using a distance-based construction in the presence or absence of encoded N to C terminal sense, there are strong correlations between the principle components of interaction matrices of structurally or topologically similar proteins. Conclusion The PCC method is extensively tested for protein structures that belong to the same topological class but are significantly different by RMSD measure. The PCC analysis can also differentiate proteins having similar shapes but different topological arrangements. Additionally, we demonstrate that when using two independently defined interaction matrices, comparison of their maximum

  5. Principal component analysis of FDG PET in amnestic MCI

    International Nuclear Information System (INIS)

    Nobili, Flavio; Girtler, Nicola; Brugnolo, Andrea; Dessi, Barbara; Rodriguez, Guido; Salmaso, Dario; Morbelli, Silvia; Piccardo, Arnoldo; Larsson, Stig A.; Pagani, Marco

    2008-01-01

    The purpose of the study is to evaluate the combined accuracy of episodic memory performance and 18 F-FDG PET in identifying patients with amnestic mild cognitive impairment (aMCI) converting to Alzheimer's disease (AD), aMCI non-converters, and controls. Thirty-three patients with aMCI and 15 controls (CTR) were followed up for a mean of 21 months. Eleven patients developed AD (MCI/AD) and 22 remained with aMCI (MCI/MCI). 18 F-FDG PET volumetric regions of interest underwent principal component analysis (PCA) that identified 12 principal components (PC), expressed by coarse component scores (CCS). Discriminant analysis was performed using the significant PCs and episodic memory scores. PCA highlighted relative hypometabolism in PC5, including bilateral posterior cingulate and left temporal pole, and in PC7, including the bilateral orbitofrontal cortex, both in MCI/MCI and MCI/AD vs CTR. PC5 itself plus PC12, including the left lateral frontal cortex (LFC: BAs 44, 45, 46, 47), were significantly different between MCI/AD and MCI/MCI. By a three-group discriminant analysis, CTR were more accurately identified by PET-CCS + delayed recall score (100%), MCI/MCI by PET-CCS + either immediate or delayed recall scores (91%), while MCI/AD was identified by PET-CCS alone (82%). PET increased by 25% the correct allocations achieved by memory scores, while memory scores increased by 15% the correct allocations achieved by PET. Combining memory performance and 18 F-FDG PET yielded a higher accuracy than each single tool in identifying CTR and MCI/MCI. The PC containing bilateral posterior cingulate and left temporal pole was the hallmark of MCI/MCI patients, while the PC including the left LFC was the hallmark of conversion to AD. (orig.)

  6. Principal component analysis of FDG PET in amnestic MCI

    Energy Technology Data Exchange (ETDEWEB)

    Nobili, Flavio; Girtler, Nicola; Brugnolo, Andrea; Dessi, Barbara; Rodriguez, Guido [University of Genoa, Clinical Neurophysiology, Department of Endocrinological and Medical Sciences, Genoa (Italy); S. Martino Hospital, Alzheimer Evaluation Unit, Genoa (Italy); S. Martino Hospital, Head-Neck Department, Genoa (Italy); Salmaso, Dario [CNR, Institute of Cognitive Sciences and Technologies, Rome (Italy); CNR, Institute of Cognitive Sciences and Technologies, Padua (Italy); Morbelli, Silvia [University of Genoa, Nuclear Medicine Unit, Department of Internal Medicine, Genoa (Italy); Piccardo, Arnoldo [Galliera Hospital, Nuclear Medicine Unit, Department of Imaging Diagnostics, Genoa (Italy); Larsson, Stig A. [Karolinska Hospital, Department of Nuclear Medicine, Stockholm (Sweden); Pagani, Marco [CNR, Institute of Cognitive Sciences and Technologies, Rome (Italy); CNR, Institute of Cognitive Sciences and Technologies, Padua (Italy); Karolinska Hospital, Department of Nuclear Medicine, Stockholm (Sweden)

    2008-12-15

    The purpose of the study is to evaluate the combined accuracy of episodic memory performance and {sup 18}F-FDG PET in identifying patients with amnestic mild cognitive impairment (aMCI) converting to Alzheimer's disease (AD), aMCI non-converters, and controls. Thirty-three patients with aMCI and 15 controls (CTR) were followed up for a mean of 21 months. Eleven patients developed AD (MCI/AD) and 22 remained with aMCI (MCI/MCI). {sup 18}F-FDG PET volumetric regions of interest underwent principal component analysis (PCA) that identified 12 principal components (PC), expressed by coarse component scores (CCS). Discriminant analysis was performed using the significant PCs and episodic memory scores. PCA highlighted relative hypometabolism in PC5, including bilateral posterior cingulate and left temporal pole, and in PC7, including the bilateral orbitofrontal cortex, both in MCI/MCI and MCI/AD vs CTR. PC5 itself plus PC12, including the left lateral frontal cortex (LFC: BAs 44, 45, 46, 47), were significantly different between MCI/AD and MCI/MCI. By a three-group discriminant analysis, CTR were more accurately identified by PET-CCS + delayed recall score (100%), MCI/MCI by PET-CCS + either immediate or delayed recall scores (91%), while MCI/AD was identified by PET-CCS alone (82%). PET increased by 25% the correct allocations achieved by memory scores, while memory scores increased by 15% the correct allocations achieved by PET. Combining memory performance and {sup 18}F-FDG PET yielded a higher accuracy than each single tool in identifying CTR and MCI/MCI. The PC containing bilateral posterior cingulate and left temporal pole was the hallmark of MCI/MCI patients, while the PC including the left LFC was the hallmark of conversion to AD. (orig.)

  7. Using the Cluster Analysis and the Principal Component Analysis in Evaluating the Quality of a Destination

    Directory of Open Access Journals (Sweden)

    Ida Vajčnerová

    2016-01-01

    Full Text Available The objective of this paper is to explore the possibilities of evaluating the quality of a tourist destination by means of principal component analysis (PCA) and cluster analysis. In the paper both types of analysis are compared on the basis of the results they provide, with the aim of identifying the advantages and limits of each method and providing methodological suggestions for their further use in tourism research. The analysis is based on primary data from a survey of customers’ satisfaction with the key quality factors of a destination. The output of the two statistical methods is a set of groups or clusters of quality factors that are similar in terms of respondents’ evaluations, which facilitates the evaluation of the quality of tourist destinations. The results show that both tested methods can be used. The paper was elaborated in the frame of a wider research project aimed at developing a methodology for the quality evaluation of tourist destinations, especially in the context of customer satisfaction and loyalty.

  8. Nuclear analysis techniques as a component of thermoluminescence dating

    Energy Technology Data Exchange (ETDEWEB)

    Prescott, J.R.; Hutton, J.T.; Habermehl, M.A. [Adelaide Univ., SA (Australia); Van Moort, J. [Tasmania Univ., Sandy Bay, TAS (Australia)

    1996-12-31

    In luminescence dating, an age is found by first measuring dose accumulated since the event being dated, then dividing by the annual dose rate. Analyses of minor and trace elements performed by nuclear techniques have long formed an essential component of dating. Results from some Australian sites are reported to illustrate the application of nuclear techniques of analysis in this context. In particular, a variety of methods for finding dose rates are compared, an example of a site where radioactive disequilibrium is significant and a brief summary is given of a problem which was not resolved by nuclear techniques. 5 refs., 2 tabs.

  9. Principal Component Analysis Based Measure of Structural Holes

    Science.gov (United States)

    Deng, Shiguo; Zhang, Wenqing; Yang, Huijie

    2013-02-01

    Based upon principal component analysis, a new measure called the compressibility coefficient is proposed to evaluate structural holes in networks. This measure incorporates a new effect from identical patterns in networks. It is found that the compressibility coefficient for Watts-Strogatz small-world networks increases monotonically with the rewiring probability and saturates to that of the corresponding shuffled networks, while the compressibility coefficient for extended Barabasi-Albert scale-free networks decreases monotonically with the preferential effect and is significantly large compared with that of the corresponding shuffled networks. This measure is helpful in diverse research fields for evaluating the global efficiency of networks.

  10. Fetal ECG extraction using independent component analysis by Jade approach

    Science.gov (United States)

    Giraldo-Guzmán, Jader; Contreras-Ortiz, Sonia H.; Lasprilla, Gloria Isabel Bautista; Kotas, Marian

    2017-11-01

    Fetal ECG monitoring is a useful method to assess the health of the fetus and detect abnormal conditions. In this paper we propose an approach to extract the fetal ECG from abdominal and chest signals using independent component analysis based on the joint approximate diagonalization of eigenmatrices (JADE) approach. The JADE approach avoids redundancy, which reduces matrix dimensions and computational costs. The signals were filtered with a high-pass filter to eliminate low-frequency noise. Several levels of decomposition were tested until the fetal ECG was recognized in one of the separated source outputs. The proposed method shows fast and good performance.

  11. Nuclear analysis techniques as a component of thermoluminescence dating

    Energy Technology Data Exchange (ETDEWEB)

    Prescott, J R; Hutton, J T; Habermehl, M A [Adelaide Univ., SA (Australia); Van Moort, J [Tasmania Univ., Sandy Bay, TAS (Australia)

    1997-12-31

    In luminescence dating, an age is found by first measuring dose accumulated since the event being dated, then dividing by the annual dose rate. Analyses of minor and trace elements performed by nuclear techniques have long formed an essential component of dating. Results from some Australian sites are reported to illustrate the application of nuclear techniques of analysis in this context. In particular, a variety of methods for finding dose rates are compared, an example of a site where radioactive disequilibrium is significant and a brief summary is given of a problem which was not resolved by nuclear techniques. 5 refs., 2 tabs.

  12. Structure analysis of active components of traditional Chinese medicines

    DEFF Research Database (Denmark)

    Zhang, Wei; Sun, Qinglei; Liu, Jianhua

    2013-01-01

    Traditional Chinese Medicines (TCMs) have been widely used for healing of different health problems for thousands of years. They have been used as therapeutic, complementary and alternative medicines. TCMs usually consist of dozens to hundreds of various compounds, which are extracted from raw herbal sources by aqueous or alcoholic solvents. Therefore, it is difficult to correlate the pharmaceutical effect to a specific lead compound in the TCMs. A detailed analysis of the various components in TCMs has been a great challenge for modern analytical techniques in recent decades. In this chapter...

  13. Advances in independent component analysis and learning machines

    CERN Document Server

    Bingham, Ella; Laaksonen, Jorma; Lampinen, Jouko

    2015-01-01

    In honour of Professor Erkki Oja, one of the pioneers of Independent Component Analysis (ICA), this book reviews key advances in the theory and application of ICA, as well as its influence on signal processing, pattern recognition, machine learning, and data mining. Examples of topics which have developed from the advances of ICA, which are covered in the book are: A unifying probabilistic model for PCA and ICA Optimization methods for matrix decompositions Insights into the FastICA algorithmUnsupervised deep learning Machine vision and image retrieval A review of developments in the t

  14. 7 CFR 1033.50 - Class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Class prices, component prices, and advanced pricing factors. 1033.50 Section 1033.50 Agriculture Regulations of the Department of Agriculture (Continued..., and advanced pricing factors. See § 1000.50. ...

  15. 7 CFR 1005.50 - Class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Class prices, component prices, and advanced pricing factors. 1005.50 Section 1005.50 Agriculture Regulations of the Department of Agriculture (Continued..., and advanced pricing factors. See § 1000.50. ...

  16. 7 CFR 1001.50 - Class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Class prices, component prices, and advanced pricing factors. 1001.50 Section 1001.50 Agriculture Regulations of the Department of Agriculture (Continued..., and advanced pricing factors. See § 1000.50. ...

  17. 7 CFR 1006.50 - Class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Class prices, component prices, and advanced pricing factors. 1006.50 Section 1006.50 Agriculture Regulations of the Department of Agriculture (Continued..., and advanced pricing factors. See § 1000.50. ...

  18. 7 CFR 1126.50 - Class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Class prices, component prices, and advanced pricing factors. 1126.50 Section 1126.50 Agriculture Regulations of the Department of Agriculture (Continued..., and advanced pricing factors. See § 1000.50. ...

  19. 7 CFR 1124.50 - Class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Class prices, component prices, and advanced pricing factors. 1124.50 Section 1124.50 Agriculture Regulations of the Department of Agriculture (Continued... prices, and advanced pricing factors. See § 1000.50. ...

  20. 7 CFR 1030.50 - Class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Class prices, component prices, and advanced pricing factors. 1030.50 Section 1030.50 Agriculture Regulations of the Department of Agriculture (Continued... prices, and advanced pricing factors. See § 1000.50. ...

  1. 7 CFR 1032.50 - Class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Class prices, component prices, and advanced pricing factors. 1032.50 Section 1032.50 Agriculture Regulations of the Department of Agriculture (Continued..., and advanced pricing factors. See § 1000.50. ...

  2. 7 CFR 1131.50 - Class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Class prices, component prices, and advanced pricing factors. 1131.50 Section 1131.50 Agriculture Regulations of the Department of Agriculture (Continued..., and advanced pricing factors. See § 1000.50. ...

  3. 7 CFR 1007.50 - Class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Class prices, component prices, and advanced pricing factors. 1007.50 Section 1007.50 Agriculture Regulations of the Department of Agriculture (Continued..., and advanced pricing factors. See § 1000.50. ...

  4. Exploratory factor analysis in Rehabilitation Psychology: a content analysis.

    Science.gov (United States)

    Roberson, Richard B; Elliott, Timothy R; Chang, Jessica E; Hill, Jessica N

    2014-11-01

    Our objective was to examine the use and quality of exploratory factor analysis (EFA) in articles published in Rehabilitation Psychology. Trained raters examined 66 separate exploratory factor analyses in 47 articles published between 1999 and April 2014. The raters recorded the aim of the EFAs, the distributional statistics, sample size, factor retention method(s), extraction and rotation method(s), and whether the pattern coefficients, structure coefficients, and the matrix of association were reported. The primary use of the EFAs was scale development, but the most widely used extraction and rotation method was principal component analysis with varimax rotation. When determining how many factors to retain, multiple methods (e.g., scree plot, parallel analysis) were used most often. Many articles did not report enough information to allow for the duplication of their results. EFA relies on authors' choices (e.g., factor retention rules, extraction and rotation methods), and few articles adhered to all of the best practices. The current findings are compared to other empirical investigations into the use of EFA in published research. Recommendations for improving EFA reporting practices in rehabilitation psychology research are provided.

  5. Development of motion image prediction method using principal component analysis

    International Nuclear Information System (INIS)

    Chhatkuli, Ritu Bhusal; Demachi, Kazuyuki; Kawai, Masaki; Sakakibara, Hiroshi; Kamiaka, Kazuma

    2012-01-01

    Respiratory motion limits the accuracy of the area irradiated during lung cancer radiation therapy. Many methods have been introduced to minimize the irradiation of healthy tissue caused by lung tumor motion. The purpose of this research is to develop an algorithm that improves image-guided radiation therapy by predicting motion images. We predict the motion images using principal component analysis (PCA) and the multi-channel singular spectral analysis (MSSA) method. The images/movies were successfully predicted and verified using the developed algorithm. With the proposed prediction method it is possible to forecast the tumor images over the next breathing period. Implementing this method in real time is believed to be significant for higher-level tumor tracking, including the detection of sudden abdominal changes during radiation therapy. (author)

  6. Fast grasping of unknown objects using principal component analysis

    Science.gov (United States)

    Lei, Qujiang; Chen, Guangming; Wisse, Martijn

    2017-09-01

    Fast grasping of unknown objects has a crucial impact on the efficiency of robot manipulation, especially in unfamiliar environments. In order to accelerate the grasping of unknown objects, principal component analysis is utilized to direct the grasping process. In particular, a single-view partial point cloud is constructed and grasp candidates are allocated along the principal axis. Force balance optimization is employed to analyze possible graspable areas. The obtained graspable area with the minimal resultant force is the best zone for the final grasping execution. It is shown that an unknown object can be grasped more quickly provided that the principal axis is determined from the single-view partial point cloud. To cope with grasp uncertainty, robot motion is used to obtain a new viewpoint. Virtual exploration and experimental tests are carried out to verify this fast grasping algorithm. Both simulation and experimental tests demonstrated excellent performance in grasping a series of unknown objects. To minimize grasping uncertainty, the robot hardware's two 3D cameras can be used to complete the partial point cloud. As a result, grasping reliability is greatly enhanced. This research therefore has practical significance for increasing grasping speed, and thus robot efficiency, in unpredictable environments.
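
The principal-axis step at the core of the method can be sketched in a few lines. The snippet below is a hedged illustration using a synthetic elongated point cloud in place of a real single-view depth capture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical single-view partial point cloud of an elongated object:
# points scattered along the x-axis with small lateral spread.
cloud = rng.normal(size=(500, 3)) * np.array([10.0, 1.0, 1.0])

# PCA of the point cloud: the eigenvector of the covariance matrix with
# the largest eigenvalue is the principal axis along which grasp
# candidates would be allocated.
centered = cloud - cloud.mean(axis=0)
cov = centered.T @ centered / len(cloud)
eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
principal_axis = eigvecs[:, -1]            # largest-eigenvalue direction

print(np.round(np.abs(principal_axis), 2))
```

For this cloud the recovered axis aligns with the elongated x direction, which is where grasp candidates would be placed.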

  7. Independent component analysis classification of laser induced breakdown spectroscopy spectra

    International Nuclear Information System (INIS)

    Forni, Olivier; Maurice, Sylvestre; Gasnault, Olivier; Wiens, Roger C.; Cousin, Agnès; Clegg, Samuel M.; Sirven, Jean-Baptiste; Lasue, Jérémie

    2013-01-01

    The ChemCam instrument on board the Mars Science Laboratory (MSL) rover uses the laser-induced breakdown spectroscopy (LIBS) technique to remotely analyze Martian rocks. It retrieves spectra up to a distance of seven meters in order to quantitatively analyze the sampled rocks. Like any field application, on-site measurements by LIBS are altered by diverse matrix effects which induce signal variations that are specific to the nature of the sample. Qualitative aspects remain to be studied, particularly LIBS sample identification to determine which samples are of interest for further analysis by ChemCam and other rover instruments. This can be performed with the help of different chemometric methods that model the spectra variance in order to identify the rock from its spectrum. In this paper we test independent component analysis (ICA) rock classification by remote LIBS. We show that using measures of distance in ICA space, namely the Manhattan and the Mahalanobis distance, we can efficiently classify spectra of an unknown rock. The Mahalanobis distance gives better overall performance and is easier to manage than the Manhattan distance, for which the determination of the cut-off distance is not easy. However, these two techniques are complementary, and their analytical performance will improve over the course of MSL operations as the quantity of available Martian spectra grows. The analysis accuracy and performance will benefit from a combination of the two approaches. - Highlights: • We use a novel independent component analysis method to classify LIBS spectra. • We demonstrate the usefulness of ICA. • We report the performance of the ICA classification. • We compare it to other classical classification schemes.
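
As a hedged illustration of the distance-in-ICA-space idea, the following sketch classifies a score vector by its Mahalanobis distance to two synthetic classes; the class layout and the three-dimensional score space are invented for the example, not taken from the ChemCam data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical scores of two rock classes in a 3-D ICA component space.
class_a = rng.normal(loc=[2.0, 0.0, 0.0], scale=0.5, size=(100, 3))
class_b = rng.normal(loc=[-2.0, 1.0, 0.0], scale=0.5, size=(100, 3))

def mahalanobis(x, samples):
    """Mahalanobis distance from x to the cloud of class samples."""
    mu = samples.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(samples, rowvar=False))
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Classify an unknown spectrum's ICA scores by the nearest class.
unknown = np.array([1.8, -0.1, 0.2])
label = "A" if mahalanobis(unknown, class_a) < mahalanobis(unknown, class_b) else "B"
print(label)  # "A"
```

Unlike the Manhattan distance, the Mahalanobis distance accounts for the covariance of each class, which is one reason it needs no hand-tuned cut-off here.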

  8. Principle of maximum entropy for reliability analysis in the design of machine components

    Science.gov (United States)

    Zhang, Yimin

    2018-03-01

    We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.
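
One well-known consequence of the PME is that when only a mean and a variance are available, the least-biased density is the normal PDF. The sketch below uses that special case on a synthetic strength-minus-load state function; all the numbers are assumptions of this example, not the paper's components:

```python
import math
import random

random.seed(3)

# Hypothetical state function g = strength - load for a machine component.
# Only the first two moments of g are assumed known; under the PME the
# least-biased density matching a mean and a variance is the normal PDF.
samples = [random.gauss(60.0, 8.0) - random.gauss(45.0, 6.0) for _ in range(100000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)

# Failure probability P(g <= 0) from the max-entropy (normal) model.
z = -mean / math.sqrt(var)
p_fail = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Empirical failure rate for comparison.
p_emp = sum(1 for s in samples if s <= 0) / len(samples)
print(round(p_fail, 4), round(p_emp, 4))
```

With higher moments available, the PME density gains exponential-polynomial terms beyond the Gaussian, which is what the paper exploits for non-normal parameters.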

  9. Physicochemical properties of different corn varieties by principal components analysis and cluster analysis

    International Nuclear Information System (INIS)

    Zeng, J.; Li, G.; Sun, J.

    2013-01-01

    Principal components analysis and cluster analysis were used to investigate the properties of different corn varieties. The chemical compositions and some properties of corn flour processed by dry milling were determined. The results showed that the chemical compositions and physicochemical properties differed significantly among the twenty-six corn varieties. Corn flour quality was described by five principal components from the principal component analysis; the contribution of starch pasting properties was the most important, accounting for 48.90%. The twenty-six corn varieties could be classified into four groups by cluster analysis. The consistency between principal components analysis and cluster analysis indicated that multivariate analyses were feasible in the study of corn variety properties. (author)
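
A minimal sketch of the PCA-then-cluster workflow, on synthetic data standing in for the corn flour measurements; the two-group structure and the simple 2-means loop are assumptions of this example:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical composition table: two groups of corn varieties measured
# on six correlated properties (e.g. starch, protein, pasting indices).
group1 = rng.normal(0.0, 1.0, size=(13, 6)) + np.array([3, 3, 3, 0, 0, 0])
group2 = rng.normal(0.0, 1.0, size=(13, 6)) - np.array([3, 3, 3, 0, 0, 0])
X = np.vstack([group1, group2])

# PCA via SVD of the standardized data.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ Vt.T[:, :2]              # project onto the first two PCs
explained = S**2 / np.sum(S**2)       # variance contribution of each PC

# Simple 2-means clustering on the PC scores.
centers = scores[[0, -1]]
for _ in range(20):
    labels = np.argmin(((scores[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([scores[labels == k].mean(axis=0) for k in range(2)])

print(labels)
```

Clustering on the retained PC scores rather than the raw table is what makes the two analyses mutually consistent when the leading components carry the group structure.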

  10. PRINCIPAL COMPONENT ANALYSIS STUDIES OF TURBULENCE IN OPTICALLY THICK GAS

    Energy Technology Data Exchange (ETDEWEB)

    Correia, C.; Medeiros, J. R. De [Departamento de Física Teórica e Experimental, Universidade Federal do Rio Grande do Norte, 59072-970, Natal (Brazil); Lazarian, A. [Astronomy Department, University of Wisconsin, Madison, 475 N. Charter St., WI 53711 (United States); Burkhart, B. [Harvard-Smithsonian Center for Astrophysics, 60 Garden St, MS-20, Cambridge, MA 02138 (United States); Pogosyan, D., E-mail: caioftc@dfte.ufrn.br [Canadian Institute for Theoretical Astrophysics, University of Toronto, Toronto, ON (Canada)

    2016-02-20

    In this work we investigate the sensitivity of principal component analysis (PCA) to the velocity power spectrum in high-opacity regimes of the interstellar medium (ISM). For our analysis we use synthetic position–position–velocity (PPV) cubes of fractional Brownian motion and magnetohydrodynamics (MHD) simulations, post-processed to include radiative transfer effects from CO. We find that PCA analysis is very different from the tools based on the traditional power spectrum of PPV data cubes. Our major finding is that PCA is also sensitive to the phase information of PPV cubes and this allows PCA to detect the changes of the underlying velocity and density spectra at high opacities, where the spectral analysis of the maps provides the universal −3 spectrum in accordance with the predictions of the Lazarian and Pogosyan theory. This makes PCA a potentially valuable tool for studies of turbulence at high opacities, provided that proper gauging of the PCA index is made. However, we found the latter to not be easy, as the PCA results change in an irregular way for data with high sonic Mach numbers. This is in contrast to synthetic Brownian noise data used for velocity and density fields that show monotonic PCA behavior. We attribute this difference to the PCA's sensitivity to Fourier phase information.

  11. PRINCIPAL COMPONENT ANALYSIS STUDIES OF TURBULENCE IN OPTICALLY THICK GAS

    International Nuclear Information System (INIS)

    Correia, C.; Medeiros, J. R. De; Lazarian, A.; Burkhart, B.; Pogosyan, D.

    2016-01-01

    In this work we investigate the sensitivity of principal component analysis (PCA) to the velocity power spectrum in high-opacity regimes of the interstellar medium (ISM). For our analysis we use synthetic position–position–velocity (PPV) cubes of fractional Brownian motion and magnetohydrodynamics (MHD) simulations, post-processed to include radiative transfer effects from CO. We find that PCA analysis is very different from the tools based on the traditional power spectrum of PPV data cubes. Our major finding is that PCA is also sensitive to the phase information of PPV cubes and this allows PCA to detect the changes of the underlying velocity and density spectra at high opacities, where the spectral analysis of the maps provides the universal −3 spectrum in accordance with the predictions of the Lazarian and Pogosyan theory. This makes PCA a potentially valuable tool for studies of turbulence at high opacities, provided that proper gauging of the PCA index is made. However, we found the latter to not be easy, as the PCA results change in an irregular way for data with high sonic Mach numbers. This is in contrast to synthetic Brownian noise data used for velocity and density fields that show monotonic PCA behavior. We attribute this difference to the PCA's sensitivity to Fourier phase information

  12. Failure cause analysis and improvement for magnetic component cabinet

    International Nuclear Information System (INIS)

    Ge Bing

    1999-01-01

    The magnetic component cabinet is an important thermal control device fitted in nuclear power plants. Because it uses a self-saturation amplifier as its primary component, the magnetic component cabinet has inherent limitations. To increase operational safety at the nuclear power plant, the author describes a new scheme. So that the magnetic component cabinet can be replaced, a new type of component cabinet has been developed in which integrated circuits replace the magnetic components of every functional part. The author analyzes the overall failure causes of the magnetic component cabinet and the measures adopted.

  13. Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.

    Science.gov (United States)

    Gupta, Rajarshi

    2016-05-01

    Electrocardiogram (ECG) compression finds wide application in various patient monitoring purposes. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality-aware compression method for single-lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, two independent quality criteria, namely, bit rate control (BRC) or error control (EC) criteria were set to select optimal principal components, eigenvectors and their quantization level to achieve the desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT Arrhythmia data and 60 normal and 30 sets of diagnostic ECG data from the PTB Diagnostic ECG database (ptbdb), all at 1 kHz sampling. For BRC with a CR threshold of 40, an average Compression Ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV respectively were obtained. For EC with an upper limit of 5 % PRDN and 0.1 mV MAE, the average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV respectively were obtained. For mitdb data 117, the reconstruction quality could be preserved up to a CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality-controlled ECG compression.
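
The error-control idea of retaining principal components until a distortion target is met can be sketched as follows. The synthetic "beats" and the plain PRD metric are illustrative stand-ins, not the paper's pre-processing, PRDN variant, or delta/Huffman encoder:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical beat matrix: 200 aligned beats of 120 samples each,
# built from a common template plus low-amplitude noise.
t = np.linspace(0, 1, 120)
template = np.exp(-((t - 0.5) ** 2) / 0.002)          # QRS-like spike
beats = template[None, :] * rng.uniform(0.8, 1.2, (200, 1)) \
        + 0.02 * rng.normal(size=(200, 120))

# PCA decomposition of the beat ensemble.
mean_beat = beats.mean(axis=0)
centered = beats - mean_beat
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

def reconstruct(k):
    """Keep only the first k principal components (error-control style)."""
    coeffs = centered @ Vt[:k].T
    return mean_beat + coeffs @ Vt[:k]

def prd(orig, recon):
    """Percentage root-mean-square difference, a common ECG quality metric."""
    return 100.0 * np.linalg.norm(orig - recon) / np.linalg.norm(orig)

# More retained components -> lower reconstruction error.
print([round(prd(beats, reconstruct(k)), 2) for k in (1, 5, 20)])
```

An EC-style loop would simply increase k until the PRD and MAE targets are met; a BRC-style loop would instead stop when the bit budget is exhausted.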

  14. Multi-component controllers in reactor physics optimality analysis

    International Nuclear Information System (INIS)

    Aldemir, T.

    1978-01-01

    An algorithm is developed for the optimality analysis of thermal reactor assemblies with multi-component control vectors. The neutronics of the system under consideration is assumed to be described by the two-group diffusion equations and constraints are imposed upon the state and control variables. It is shown that if the problem is such that the differential and algebraic equations describing the system can be cast into a linear form via a change of variables, the optimal control components are piecewise constant functions and the global optimal controller can be determined by investigating the properties of the influence functions. Two specific problems are solved utilizing this approach. A thermal reactor consisting of fuel, burnable poison and moderator is found to yield maximal power when the assembly consists of two poison zones and the power density is constant throughout the assembly. It is shown that certain variational relations have to be considered to maintain the activeness of the system equations as differential constraints. The problem of determining the maximum initial breeding ratio for a thermal reactor is solved by treating the fertile and fissile material absorption densities as controllers. The optimal core configurations are found to consist of three fuel zones for a bare assembly and two fuel zones for a reflected assembly. The optimum fissile material density is determined to be inversely proportional to the thermal flux

  15. Constrained Null Space Component Analysis for Semiblind Source Separation Problem.

    Science.gov (United States)

    Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn

    2018-02-01

    The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on the sources or on how the sources are mixed. The constrained independent component analysis (ICA) approach has been studied to impose constraints on the famous ICA framework. We introduced an alternative approach based on the null space component analysis (NCA) framework and referred to it as the c-NCA approach. We also presented the c-NCA algorithm, which uses signal-dependent semidefinite operators, a bilinear mapping, as signatures for operator design in the c-NCA approach. Theoretically, we showed that the source estimation of the c-NCA algorithm converges, with a convergence rate dependent on the decay of the sequence obtained by applying the estimated operators on the corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization method, and thus it can take advantage of solvers developed in the optimization community for solving the BSS problem. As examples, we demonstrated that electroencephalogram interference rejection problems can be solved by the c-NCA with proximal splitting algorithms by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.

  16. Autonomous learning in gesture recognition by using lobe component analysis

    Science.gov (United States)

    Lu, Jian; Weng, Juyang

    2007-02-01

    Gesture recognition is a new human-machine interface method implemented by pattern recognition (PR). In order to assure robot safety when gestures are used in robot control, the interface must be implemented reliably and accurately. As in other PR applications, 1) feature selection (or model establishment) and 2) training from samples largely determine the performance of gesture recognition. For 1), a simple model with 6 feature points at the shoulders, elbows, and hands is established. The gestures to be recognized are restricted to still arm gestures, and the movement of the arms is not considered. These restrictions reduce misrecognition and are not unreasonable in practice. For 2), a new biological network method, called lobe component analysis (LCA), is used in unsupervised learning. Lobe components, corresponding to high concentrations of probability in the neuronal input, are orientation-selective cells that follow the Hebbian rule and lateral inhibition. Due to the advantage of the LCA method of balancing learning between global and local features, large numbers of samples can be used efficiently in learning.
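
LCA itself is specialized, but its Hebbian flavor is shared by classic Hebbian component learners. As a hedged stand-in (not the LCA algorithm), the sketch below uses Oja's rule, a simple Hebbian update known to converge to the first principal component, on synthetic correlated input:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical 2-D input stream whose dominant variance direction is
# along (1, 1): samples drawn with correlated coordinates.
cov = np.array([[3.0, 2.0], [2.0, 3.0]])
data = rng.multivariate_normal([0, 0], cov, size=5000)

# Oja's rule: a Hebbian update with implicit weight normalization that
# converges to the first principal component of the input.
w = rng.normal(size=2)
eta = 0.01
for x in data:
    y = w @ x                     # neuron output (Hebbian pre * post)
    w += eta * y * (x - y * w)    # Hebbian term minus normalizing decay

w /= np.linalg.norm(w)
print(np.round(np.abs(w), 2))  # close to (0.71, 0.71)
```

Unlike this single-unit rule, LCA maintains many competing units via lateral inhibition, so different units specialize on different high-probability input regions.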

  17. Improvement of retinal blood vessel detection using morphological component analysis.

    Science.gov (United States)

    Imani, Elaheh; Javidi, Malihe; Pourreza, Hamid-Reza

    2015-03-01

    Detection and quantitative measurement of variations in the retinal blood vessels can help diagnose several diseases, including diabetic retinopathy. Intrinsic characteristics of abnormal retinal images make blood vessel detection difficult. The major problem with traditional vessel segmentation algorithms is producing false positive vessels in the presence of diabetic retinopathy lesions. To overcome this problem, a novel scheme for extracting retinal blood vessels based on the morphological component analysis (MCA) algorithm is presented in this paper. MCA was developed based on sparse representation of signals. This algorithm assumes that each signal is a linear combination of several morphologically distinct components. In the proposed method, the MCA algorithm with appropriate transforms is adopted to separate vessels and lesions from each other. Afterwards, the Morlet Wavelet Transform is applied to enhance the retinal vessels. The final vessel map is obtained by adaptive thresholding. The performance of the proposed method is measured on the publicly available DRIVE and STARE datasets and compared with several state-of-the-art methods. An accuracy of 0.9523 and 0.9590 has been respectively achieved on the DRIVE and STARE datasets, which is not only greater than most methods, but also superior to the second human observer's performance. The results show that the proposed method can achieve improved detection in abnormal retinal images and decrease false positive vessels in pathological regions compared to other methods. The robustness of the method in the presence of noise is also shown via experimental results.

  18. Principal component analysis of 1/fα noise

    International Nuclear Information System (INIS)

    Gao, J.B.; Cao Yinhe; Lee, J.-M.

    2003-01-01

    Principal component analysis (PCA) is a popular data analysis method. One of the motivations for using PCA in practice is to reduce the dimension of the original data by projecting the raw data onto a few dominant eigenvectors with large variance (energy). Due to the ubiquity of 1/fα noise in science and engineering, in this Letter we study the prototypical stochastic model for 1/fα processes, the fractional Brownian motion (fBm) process, using PCA, and find that the eigenvalues from PCA of fBm processes follow a power law, with the exponent being the key parameter defining the fBm processes. We also study random-walk-type processes constructed from DNA sequences, and find that the eigenvalue spectrum from PCA of those random-walk processes also follows power-law relations, with the exponent characterizing the correlation structures of the DNA sequence. In fact, it is observed that PCA can automatically remove linear trends induced by patchiness in the DNA sequence; hence, PCA has a similar capability to detrended fluctuation analysis. Implications of the power-law distributed eigenvalue spectrum are discussed
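
The power-law eigenvalue spectrum is easy to reproduce for the α = 2 case (ordinary Brownian motion). The sketch below is illustrative; the path count, length, and fitting range are assumptions of this example, and the fitted exponent over the first few modes comes out somewhat steeper than the asymptotic −2:

```python
import numpy as np

rng = np.random.default_rng(7)

# Ensemble of ordinary Brownian motion paths (the alpha = 2 case of
# 1/f^alpha noise): cumulative sums of white noise.
n_paths, n_steps = 2000, 256
paths = np.cumsum(rng.normal(size=(n_paths, n_steps)), axis=1)

# PCA eigenvalue spectrum of the ensemble, largest first.
cov = np.cov(paths, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]

# For Brownian motion the eigenvalues decay as a power law; estimate the
# exponent by a log-log fit over the first ten modes.
k = np.arange(1, 11)
slope, _ = np.polyfit(np.log(k), np.log(eigvals[:10]), 1)
print(round(slope, 2))
```

Repeating this with fBm paths of different Hurst exponents would shift the fitted slope, which is the dependence the Letter reports.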

  19. Surface composition of biomedical components by ion beam analysis

    International Nuclear Information System (INIS)

    Kenny, M.J.; Wielunski, L.S.; Baxter, G.R.

    1991-01-01

    Materials used for replacement body parts must satisfy a number of requirements, such as biocompatibility and the mechanical ability to handle the task with regard to strength, wear and durability. When using a CVD-coated carbon fibre reinforced carbon ball, the surface must be ion implanted with a uniform dose of nitrogen ions in order to make it wear resistant. The mechanism by which the wear resistance is improved is one of radiation damage, and the required dose of about 10^16 cm^-2 can have a tolerance of about 20%. Implanting a spherical surface requires manipulation of the sample within the beam and a control system (either computer or manually operated) to enable a uniform dose all the way from polar to equatorial regions on the surface. A manipulator has been designed and built for this purpose. In order to establish whether the dose is uniform, nuclear reaction analysis using the reaction 14N(d,α)12C is an ideal method of profiling. By taking measurements at a number of points on the surface, the uniformity of the nitrogen dose can be ascertained. It is concluded that both Rutherford backscattering and nuclear reaction analysis can be used for rapid analysis of the surface composition of carbon-based materials used for replacement body components. 2 refs., 2 figs

  20. Recursive Principal Components Analysis Using Eigenvector Matrix Perturbation

    Directory of Open Access Journals (Sweden)

    Deniz Erdogmus

    2004-10-01

    Principal components analysis is an important and well-studied subject in statistics and signal processing. The literature has an abundance of algorithms for solving this problem, and most of these algorithms can be grouped into one of the following three approaches: adaptation based on Hebbian updates and deflation, optimization of a second-order statistical criterion (like reconstruction error or output variance), and fixed-point update rules with deflation. In this paper, we take a completely different approach that avoids deflation and the optimization of a cost function using gradients. The proposed method updates the eigenvector and eigenvalue matrices simultaneously with every new sample such that the estimates approximately track their true values as would be calculated from the current sample estimate of the data covariance matrix. The performance of this algorithm is compared with that of traditional methods like Sanger's rule and APEX, as well as a structurally similar matrix perturbation-based method.
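
A simplified recursive scheme in the same spirit, updating a running covariance with each sample and re-extracting the eigenvectors so the estimates track the batch values, can be sketched as follows. This is a naive stand-in for illustration, not the paper's perturbation-based simultaneous update:

```python
import numpy as np

rng = np.random.default_rng(8)

# Stream of 2000 samples whose dominant direction is along (3, 1, 0).
direction = np.array([3.0, 1.0, 0.0]) / np.sqrt(10.0)
stream = (rng.normal(size=(2000, 1)) * 3.0) * direction \
         + 0.3 * rng.normal(size=(2000, 3))

# Recursive estimate: fold every new sample into a running mean of the
# outer products, then re-extract the eigen-decomposition, so the
# estimates track the values implied by the current covariance estimate.
cov = np.zeros((3, 3))
for n, x in enumerate(stream, start=1):
    cov += (np.outer(x, x) - cov) / n        # running mean of x x^T
    eigvals, eigvecs = np.linalg.eigh(cov)   # recompute each step
top_recursive = eigvecs[:, -1]

# Batch reference computed from the full data at once.
top_batch = np.linalg.eigh(stream.T @ stream / len(stream))[1][:, -1]
print(round(abs(float(top_recursive @ top_batch)), 3))
```

The appeal of the perturbation approach in the paper is precisely to avoid the full eigen-decomposition done here at every step.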

  1. Preliminary study of soil permeability properties using principal component analysis

    Science.gov (United States)

    Yulianti, M.; Sudriani, Y.; Rustini, H. A.

    2018-02-01

    Soil permeability measurement is undoubtedly important in carrying out soil-water research such as rainfall-runoff modelling, irrigation water distribution systems, etc. It is also known that acquiring reliable soil permeability data is rather laborious, time-consuming, and costly. Therefore, it is desirable to develop a prediction model. Several studies of empirical equations for predicting permeability have been undertaken by many researchers. These studies derived the models from areas whose soil characteristics differ from Indonesian soil, which suggests a possibility that these permeability models are site-specific. The purpose of this study is to identify which soil parameters correspond strongly to soil permeability and to propose a preliminary model for permeability prediction. Principal component analysis (PCA) was applied to 16 parameters analysed from 37 sites consisting of 91 samples obtained from the Batanghari Watershed. Findings indicated five variables that have a strong correlation with soil permeability, and we recommend a preliminary permeability model with potential for further development.
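
Identifying which variables correspond strongly to a component is usually read off the loadings, i.e. the eigenvectors scaled by the square roots of the eigenvalues. The sketch below is illustrative, with synthetic "soil" variables; the variable grouping and the 0.6 cutoff are assumptions of this example:

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical soil table: 91 samples, 6 parameters; the first three
# (say sand fraction, porosity, organic content) share a latent driver.
latent = rng.normal(size=(91, 1))
X = np.hstack([latent + 0.3 * rng.normal(size=(91, 3)),
               rng.normal(size=(91, 3))])

# Loadings: correlations between the original variables and the PCs,
# computed from the correlation matrix's eigen-decomposition.
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
loadings = eigvecs[:, order] * np.sqrt(eigvals[order])

# Variables with strong loadings on PC1 are the ones PCA flags as
# jointly informative (here, the first three).
strong = np.where(np.abs(loadings[:, 0]) > 0.6)[0]
print(strong)
```

A preliminary prediction model would then regress the measured permeability on the retained PC scores or on the flagged variables.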

  2. Principal Component Analysis for Normal-Distribution-Valued Symbolic Data.

    Science.gov (United States)

    Wang, Huiwen; Chen, Meiling; Shi, Xiaojun; Li, Nan

    2016-02-01

    This paper puts forward a new approach to principal component analysis (PCA) for normal-distribution-valued symbolic data, which has a vast potential of applications in the economic and management field. We derive a full set of numerical characteristics and the variance-covariance structure for such data, which forms the foundation for our analytical PCA approach. Our approach is able to use more of the variance information in the original data than the prevailing representative-type approach in the literature, which uses only centers, vertices, etc. The paper also provides an accurate approach to constructing the observations in a PC space based on the linear additivity property of the normal distribution. The effectiveness of the proposed method is illustrated by simulated numerical experiments. Finally, our method is applied to explain the puzzle of the risk-return tradeoff in China's stock market.

  3. Iris recognition based on robust principal component analysis

    Science.gov (United States)

    Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong

    2014-11-01

    Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieved competitive performance in both recognition accuracy and computational efficiency.

  4. Size distribution measurements and chemical analysis of aerosol components

    Energy Technology Data Exchange (ETDEWEB)

    Pakkanen, T.A.

    1995-12-31

    The principal aims of this work were to improve the existing methods for size distribution measurements and to draw conclusions about atmospheric and in-stack aerosol chemistry and physics by utilizing the size distributions of the various aerosol components measured. A sample dissolution with dilute nitric acid in an ultrasonic bath and subsequent graphite furnace atomic absorption spectrometric analysis was found to result in low blank values and good recoveries for several elements in atmospheric fine particle size fractions below 2 μm of equivalent aerodynamic particle diameter (EAD). Furthermore, it turned out that a substantial amount of the analytes associated with insoluble material could be recovered, since suspensions were formed. The size distribution measurements of in-stack combustion aerosols indicated two-modal size distributions for most components measured. The existence of the fine particle mode suggests that a substantial fraction of such elements with two-modal size distributions may vaporize and nucleate during the combustion process. In southern Norway, size distributions of atmospheric aerosol components usually exhibited one or two fine particle modes and one or two coarse particle modes. Atmospheric relative humidity values higher than 80% resulted in a significant increase of the mass median diameters of the droplet mode. Important local and/or regional sources of As, Br, I, K, Mn, Pb, Sb, Si and Zn were found to exist in southern Norway. The existence of these sources was reflected in the corresponding size distributions determined, and was utilized in the development of a source identification method based on size distribution data. On the Finnish south coast, atmospheric coarse particle nitrate was found to be formed mostly through an atmospheric reaction of nitric acid with existing coarse particle sea salt, but reactions and/or adsorption of nitric acid with soil-derived particles also occurred. Chloride was depleted when acidic species reacted

  5. The influence of iliotibial band syndrome history on running biomechanics examined via principal components analysis.

    Science.gov (United States)

    Foch, Eric; Milner, Clare E

    2014-01-03

    Iliotibial band syndrome (ITBS) is a common knee overuse injury among female runners. Atypical discrete trunk and lower extremity biomechanics during running may be associated with the etiology of ITBS. Examining discrete data points limits the interpretation of a waveform to a single value. Characterizing entire kinematic and kinetic waveforms may provide additional insight into biomechanical factors associated with ITBS. Therefore, the purpose of this cross-sectional investigation was to determine whether female runners with previous ITBS exhibited differences in kinematics and kinetics compared to controls using a principal components analysis (PCA) approach. Forty participants comprised two groups: previous ITBS and controls. Principal component scores were retained for the first three principal components and were analyzed using independent t-tests. The retained principal components accounted for 93-99% of the total variance within each waveform. Runners with previous ITBS exhibited low principal component one scores for the frontal plane hip angle. Principal component one accounted for the overall magnitude of hip adduction, which indicated that runners with previous ITBS assumed less hip adduction throughout stance. No differences in the remaining retained principal component scores for the waveforms were detected between groups. A smaller hip adduction angle throughout the stance phase of running may be a compensatory strategy to limit iliotibial band strain. This running strategy may have persisted after ITBS symptoms subsided.
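
The waveform-PCA-plus-group-comparison pipeline can be sketched on synthetic stance-phase curves. The waveform shape, group sizes, and effect size below are invented for illustration, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)

# Hypothetical stance-phase hip-adduction waveforms (101 time points)
# for 20 controls and 20 runners with previous ITBS, the latter with a
# smaller overall adduction magnitude.
t = np.linspace(0, 1, 101)
shape = np.sin(np.pi * t)
controls = 10.0 * shape[None, :] + rng.normal(0, 1.0, (20, 101))
itbs = 7.0 * shape[None, :] + rng.normal(0, 1.0, (20, 101))
X = np.vstack([controls, itbs])

# PCA of the waveform ensemble; PC1 captures the overall magnitude, so
# each runner's whole curve is summarized by a single retained score.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1_scores = Xc @ Vt[0]

# Independent t-test on the retained PC scores, one value per runner.
t_stat, p_val = stats.ttest_ind(pc1_scores[:20], pc1_scores[20:])
print(round(p_val, 6))
```

Testing retained PC scores rather than a single discrete point is what lets the comparison use the whole waveform.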

  6. Variational Bayesian Learning for Wavelet Independent Component Analysis

    Science.gov (United States)

    Roussos, E.; Roberts, S.; Daubechies, I.

    2005-11-01

    In an exploratory approach to data analysis, it is often useful to consider the observations as generated from a set of latent generators or "sources" via a generally unknown mapping. For the noisy overcomplete case, where we have more sources than observations, the problem becomes extremely ill-posed. Solutions to such inverse problems can, in many cases, be achieved by incorporating prior knowledge about the problem, captured in the form of constraints. This setting is a natural candidate for the application of the Bayesian methodology, allowing us to incorporate "soft" constraints in a natural manner. The work described in this paper is mainly driven by problems in functional magnetic resonance imaging of the brain, for the neuro-scientific goal of extracting relevant "maps" from the data. This can be stated as a "blind" source separation problem. Recent experiments in the field of neuroscience show that these maps are sparse, in some appropriate sense. The separation problem can be solved by independent component analysis (ICA), viewed as a technique for seeking sparse components, assuming appropriate distributions for the sources. We derive a hybrid wavelet-ICA model, transforming the signals into a domain where the modeling assumption of sparsity of the coefficients with respect to a dictionary is natural. We follow a graphical modeling formalism, viewing ICA as a probabilistic generative model. We use hierarchical source and mixing models and apply Bayesian inference to the problem. This allows us to perform model selection in order to infer the complexity of the representation, as well as automatic denoising. Since exact inference and learning in such a model is intractable, we follow a variational Bayesian mean-field approach in the conjugate-exponential family of distributions, for efficient unsupervised learning in multi-dimensional settings. The performance of the proposed algorithm is demonstrated on some representative experiments.

  7. A Principal Component Analysis of Skills and Competencies Required of Quantity Surveyors: Nigerian Perspective

    OpenAIRE

    Oluwasuji Dada, Joshua

    2014-01-01

    The purpose of this paper is to examine the intrinsic relationships among sets of quantity surveyors' skill and competence variables with a view to reducing them to principal components. The research adopts factor analysis as a data reduction technique. A structured questionnaire was administered among major stakeholders in the Nigerian construction industry. The respondents were asked to rate, on a 5-point Likert scale, the skills and competencies re...


  8. A two-component generalized extreme value distribution for precipitation frequency analysis

    Czech Academy of Sciences Publication Activity Database

    Rulfová, Zuzana; Buishand, A.; Roth, M.; Kyselý, Jan

    2016-01-01

    Roč. 534, March (2016), s. 659-668 ISSN 0022-1694 R&D Projects: GA ČR(CZ) GA14-18675S Institutional support: RVO:68378289 Keywords : precipitation extremes * two-component extreme value distribution * regional frequency analysis * convective precipitation * stratiform precipitation * Central Europe Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 3.483, year: 2016 http://www.sciencedirect.com/science/article/pii/S0022169416000500
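    The record above uses a two-component (mixture) generalized extreme value distribution for precipitation frequency analysis. A minimal numpy sketch of such a mixture distribution function follows; the parameter values used in the test are purely illustrative, not the fitted regional values from the paper:

```python
import numpy as np

def gev_cdf(x, mu, sigma, xi):
    """GEV distribution function F(x) = exp(-(1 + xi*(x - mu)/sigma)^(-1/xi)),
    written for xi != 0; values outside the support are clamped."""
    t = np.maximum(1.0 + xi * (x - mu) / sigma, 1e-12)
    return np.exp(-t ** (-1.0 / xi))

def two_component_gev_cdf(x, p, params1, params2):
    """Mixture of two GEV components (e.g. stratiform and convective
    precipitation), weighted by the mixing probability p."""
    return p * gev_cdf(x, *params1) + (1.0 - p) * gev_cdf(x, *params2)
```

Since both components are valid CDFs, the mixture is again a monotone distribution function, which is what a frequency analysis inverts to obtain return levels.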

  9. THE STUDY OF THE CHARACTERIZATION INDICES OF FABRICS BY PRINCIPAL COMPONENT ANALYSIS METHOD

    OpenAIRE

    HRISTIAN Liliana; OSTAFE Maria Magdalena; BORDEIANU Demetra Lacramioara; APOSTOL Laura Liliana

    2017-01-01

    The paper set out to rank worsted fabric types for the manufacture of outerwear products by their characterization indices, using the mathematical model of Principal Component Analysis (PCA). A number of variables have a certain influence on the quality of fabrics, but some of these variables are more important than others, so it is useful to identify those variables in order to better understand the factors that can improve fabric quality. A s...

  10. Principal component analysis reveals gender-specific predictors of cardiometabolic risk in 6th graders

    Directory of Open Access Journals (Sweden)

    Peterson Mark D

    2012-11-01

    Full Text Available Abstract Background The purpose of this study was to determine the sex-specific pattern of pediatric cardiometabolic risk with principal component analysis, using several biological, behavioral and parental variables in a large cohort (n = 2866) of 6th grade students. Methods Cardiometabolic risk components included waist circumference, fasting glucose, blood pressure, plasma triglycerides levels and HDL-cholesterol. Principal components analysis was used to determine the pattern of risk clustering and to derive a continuous aggregate score (MetScore). Stratified risk components and MetScore were analyzed for association with age, body mass index (BMI), cardiorespiratory fitness (CRF), physical activity (PA), and parental factors. Results In both boys and girls, BMI and CRF were associated with multiple risk components, and overall MetScore. Maternal smoking was associated with multiple risk components in girls and boys, as well as MetScore in boys, even after controlling for children’s BMI. Paternal family history of early cardiovascular disease (CVD) and parental age were associated with increased blood pressure and MetScore for girls. Children’s PA levels, maternal history of early CVD, and paternal BMI were also indicative for various risk components, but not MetScore. Conclusions Several biological and behavioral factors were independently associated with children’s cardiometabolic disease risk, and thus represent a unique gender-specific risk profile. These data serve to bolster the independent contribution of CRF, PA, and family-oriented healthy lifestyles for improving children’s health.

  11. Analysis of contaminants on electronic components by reflectance FTIR spectroscopy

    International Nuclear Information System (INIS)

    Griffith, G.W.

    1982-09-01

    The analysis of electronic component contaminants by infrared spectroscopy is often a difficult process. Most of the contaminants are very small, which necessitates the use of microsampling techniques. Beam condensers provide the required sensitivity, but most require that the sample be removed from the substrate before analysis; since removal can be difficult and time-consuming, this is usually an undesirable approach. Micro-ATR work can also be exasperating, owing to the difficulty of positioning the sample at the correct place under the ATR plate in order to record a spectrum. This paper describes a modified reflection beam condenser which has been adapted to a Nicolet 7199 FTIR. The sample beam is directed onto the sample surface and reflected from the substrate back to the detector. A micropositioning XYZ stage and a close-focusing telescope are used to position the contaminant directly under the infrared beam. With this device it is possible to analyze contaminants on 1 mm wide leads surrounded by an epoxy matrix. Typical spectra of contaminants found on small circuit boards are included.

  12. Cnn Based Retinal Image Upscaling Using Zero Component Analysis

    Science.gov (United States)

    Nasonov, A.; Chesnakov, K.; Krylov, A.

    2017-05-01

    The aim of the paper is to obtain high quality of image upscaling for noisy images that are typical in medical image processing. A new training scenario for convolutional neural network based image upscaling method is proposed. Its main idea is a novel dataset preparation method for deep learning. The dataset contains pairs of noisy low-resolution images and corresponding noiseless high-resolution images. To achieve better results at edges and textured areas, Zero Component Analysis is applied to these images. The upscaling results are compared with other state-of-the-art methods like DCCI, SI-3 and SRCNN on noisy medical ophthalmological images. Objective evaluation of the results confirms the high quality of the proposed method. Visual analysis shows that fine details and structures like blood vessels are preserved, noise level is reduced and no artifacts or non-existing details are added. These properties are essential in retinal diagnosis establishment, so the proposed algorithm is recommended to be used in real medical applications.
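    The Zero Component Analysis step mentioned above is ZCA whitening, which decorrelates pixel values while keeping the result close to the original image space. A minimal numpy sketch (the `eps` regularizer and data shapes are assumptions, not the paper's exact setup):

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA whitening of X (n_samples, n_features), e.g. flattened patches.

    W = E diag(1/sqrt(d + eps)) E^T is symmetric, which is what keeps the
    whitened data as close as possible to the original image space."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (X.shape[0] - 1)
    d, E = np.linalg.eigh(cov)
    W = E @ np.diag(1.0 / np.sqrt(d + eps)) @ E.T
    return Xc @ W, W
```

Unlike PCA whitening, the symmetric transform does not rotate the data into an arbitrary eigenbasis, so whitened patches still look like (contrast-normalized) images.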

  13. Principal components analysis of an evaluation of the hemiplegic subject based on the Bobath approach.

    Science.gov (United States)

    Corriveau, H; Arsenault, A B; Dutil, E; Lepage, Y

    1992-01-01

    An evaluation based on the Bobath approach to treatment has previously been developed and partially validated. The purpose of the present study was to verify the content validity of this evaluation using the statistical approach known as principal components analysis. Thirty-eight hemiplegic subjects participated in the study. Scores on each of six parameters (sensorium, active movements, muscle tone, reflex activity, postural reactions, and pain) were analyzed on three occasions across a 2-month period. Each analysis produced three factors that together contained 70% of the variation in the data set. The first component mainly reflected variations in mobility, the second mainly variations in muscle tone, and the third mainly variations in sensorium and pain. The results of this exploratory analysis highlight the fact that some of the parameters are not only important but also interrelated. These results seem to partially support the conceptual framework underlying the Bobath approach to treatment.

  14. Combined approach based on principal component analysis and canonical discriminant analysis for investigating hyperspectral plant response

    Directory of Open Access Journals (Sweden)

    Anna Maria Stellacci

    2012-07-01

    Full Text Available Hyperspectral (HS) data represent an extremely powerful means for rapidly detecting crop stress and thus aiding the rational management of natural resources in agriculture. However, the large volume of data poses a challenge for processing and for extracting crucial information. Multivariate statistical techniques can play a key role in the analysis of HS data, as they may allow both the elimination of redundant information and the identification of synthetic indices which maximize differences among levels of stress. In this paper we propose an integrated approach, based on the combined use of Principal Component Analysis (PCA) and Canonical Discriminant Analysis (CDA), to investigate HS plant response and discriminate plant status. The approach was preliminarily evaluated on a data set collected on durum wheat plants grown under different nitrogen (N) stress levels. Hyperspectral measurements were performed at anthesis with a high-resolution field spectroradiometer, ASD FieldSpec HandHeld, covering the 325-1075 nm region. Reflectance data were first restricted to the interval 510-1000 nm and then divided into five bands of the electromagnetic spectrum [green: 510-580 nm; yellow: 581-630 nm; red: 631-690 nm; red-edge: 705-770 nm; near-infrared (NIR): 771-1000 nm]. PCA was applied to each spectral interval. CDA was performed on the extracted components to identify the factors maximizing the differences among plants fertilised with increasing N rates. Within the green, yellow and red intervals only the first principal component (PC) had an eigenvalue greater than 1 and explained more than 95% of the total variance; within the red-edge and NIR ranges, the first two PCs had eigenvalues higher than 1. Two canonical variables explained cumulatively more than 81% of the total variance, and the first was able to discriminate wheat plants fertilised differently, as confirmed also by the significant correlation with aboveground biomass and grain yield parameters. The combined
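    The PCA-then-CDA pipeline described above can be sketched in a simplified two-class form: PCA scores are fed to a Fisher discriminant, which is the two-class special case of CDA. All names, shapes and the synthetic data in the test are illustrative, not the paper's wheat data:

```python
import numpy as np

def pca_scores(X, k):
    """Scores of X (n_samples, n_features) on its first k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def fisher_direction(Z, y):
    """Two-class Fisher discriminant direction on PCA scores Z (labels y in {0,1})."""
    m0, m1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
    Sw = np.cov(Z[y == 0].T) + np.cov(Z[y == 1].T)   # within-class scatter
    w = np.linalg.solve(np.atleast_2d(Sw), m1 - m0)
    return w / np.linalg.norm(w)
```

With more than two classes (several N rates), the same idea generalizes to solving the eigenproblem of within-class against between-class scatter, yielding the canonical variables the paper reports.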

  15. Failure characteristic analysis of a component on standby state

    International Nuclear Information System (INIS)

    Shin, Sungmin; Kang, Hyungook

    2013-01-01

    For some components, periodic operation accelerates aging effects and thereby increases component unavailability; for other components, operation-induced aging can be ignored, so frequent operation decreases unavailability. To reach the optimum unavailability, the operating period and method should therefore be chosen with the failure characteristics of each component in mind. Component failure information is given according to the main causes of failure as they evolve over time, but the proper inspection interval should be decided considering both time-dependent and time-independent causes together. According to this study, a gradually shortening inspection interval yields lower component unavailability than a fixed period.

  16. Nuclear plant components: mechanical analysis and lifetime evaluation

    International Nuclear Information System (INIS)

    Chator, T.

    1993-09-01

    This paper concerns the methodology adopted by the Research and Development Division to handle mechanical problems found in structures and machines. These often very complex studies (3-D structures, complex loadings, nonlinear behavior laws) usually call for advanced tools and calculation means. To carry out such studies, the R and D Division is developing a software package that handles very complex thermo-mechanical analyses using the Finite Element Method. It enables static, dynamic and elasto-plastic problems to be analysed, as well as contact problems, and damage and lifetime of structures to be evaluated. The paper is illustrated by actual industrial case examples, the major ones being: 1. Analysis of a new impeller/shaft assembly of a primary coolant pump. The 3D mesh is subjected simultaneously to thermal load, pressure, hydraulic, centrifugal and axial forces and clamping of studs, with contacts between shaft/impeller and nut bearing side/shaft bearing side. For this study, we have developed a new method to handle the clamping of studs: the stud elongation value is given to the software, which automatically computes the distortions between the structures in contact and then the final position of the bearing areas (using an iterative nonlinear algorithm of modified Newton-Raphson type). 2. Analysis of the stress intensity factor of a crack. The 3D mesh (representing the crack) is subjected simultaneously to axial and radial forces. In this case, we use the Theta method to calculate the energy release rate in order to determine the stress intensity factors. (authors). 7 figs., 1 tab., 3 refs

  17. Reliability Analysis of Fatigue Failure of Cast Components for Wind Turbines

    Directory of Open Access Journals (Sweden)

    Hesam Mirzaei Rafsanjani

    2015-04-01

    Full Text Available Fatigue failure is one of the main failure modes for wind turbine drivetrain components made of cast iron. The wind turbine drivetrain consists of a variety of heavily loaded components, like the main shaft, the main bearings, the gearbox and the generator. The failure of each component will lead to substantial economic losses such as cost of lost energy production and cost of repairs. During the design lifetime, the drivetrain components are exposed to variable loads from winds and waves and other sources of loads that are uncertain and have to be modeled as stochastic variables. The types of loads are different for offshore and onshore wind turbines. Moreover, uncertainties about the fatigue strength play an important role in modeling and assessment of the reliability of the components. In this paper, a generic stochastic model for fatigue failure of cast iron components based on fatigue test data and a limit state equation for fatigue failure based on the SN-curve approach and Miner’s rule is presented. The statistical analysis of the fatigue data is performed using the Maximum Likelihood Method which also gives an estimate of the statistical uncertainties. Finally, illustrative examples are presented with reliability analyses depending on various stochastic models and partial safety factors.
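    The SN-curve/Miner's-rule limit state mentioned above rests on a simple damage accumulation computation. A sketch with an illustrative Basquin-type SN curve follows; the constants K and m are placeholders, not the fitted values from the paper's Maximum Likelihood analysis:

```python
import numpy as np

def cycles_to_failure(stress_range, K=1e12, m=3.0):
    """Basquin-type SN curve N = K * S^(-m); K and m are placeholder values."""
    return K * np.asarray(stress_range, float) ** (-m)

def miner_damage(stress_ranges, cycle_counts, K=1e12, m=3.0):
    """Palmgren-Miner linear damage sum D = sum_i n_i / N_i (failure at D >= 1)."""
    N = cycles_to_failure(stress_ranges, K, m)
    return float(np.sum(np.asarray(cycle_counts, float) / N))
```

In a reliability analysis, K, m and the load spectrum become stochastic variables, and the limit state g = 1 - D is evaluated with partial safety factors or a probabilistic method.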

  18. SNIa detection in the SNLS photometric analysis using Morphological Component Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Möller, A.; Ruhlmann-Kleider, V.; Neveu, J.; Palanque-Delabrouille, N. [Irfu, SPP, CEA Saclay, F-91191 Gif sur Yvette cedex (France); Lanusse, F.; Starck, J.-L., E-mail: anais.moller@cea.fr, E-mail: vanina.ruhlmann-kleider@cea.fr, E-mail: francois.lanusse@cea.fr, E-mail: jeremy.neveu@cea.fr, E-mail: nathalie.palanque-delabrouille@cea.fr, E-mail: jstarck@cea.fr [Laboratoire AIM, UMR CEA-CNRS-Paris 7, Irfu, SAp, CEA Saclay, F-91191 Gif sur Yvette cedex (France)

    2015-04-01

    Detection of supernovae (SNe) and, more generally, of transient events in large surveys can provide numerous false detections. In the case of a deferred processing of survey images, this implies reconstructing complete light curves for all detections, requiring sizable processing time and resources. Optimizing the detection of transient events is thus an important issue for both present and future surveys. We present here the optimization done in the SuperNova Legacy Survey (SNLS) for the 5-year data deferred photometric analysis. In this analysis, detections are derived from stacks of subtracted images with one stack per lunation. The 3-year analysis provided 300,000 detections dominated by signals of bright objects that were not perfectly subtracted. Allowing these artifacts to be detected leads not only to a waste of resources but also to possible signal coordinate contamination. We developed a subtracted image stack treatment to reduce the number of non SN-like events using morphological component analysis. This technique exploits the morphological diversity of objects to be detected to extract the signal of interest. At the level of our subtraction stacks, SN-like events are rather circular objects while most spurious detections exhibit different shapes. A two-step procedure was necessary to have a proper evaluation of the noise in the subtracted image stacks and thus a reliable signal extraction. We also set up a new detection strategy to obtain coordinates with good resolution for the extracted signal. SNIa Monte-Carlo (MC) generated images were used to study detection efficiency and coordinate resolution. When tested on SNLS 3-year data this procedure decreases the number of detections by a factor of two, while losing only 10% of SN-like events, almost all faint ones. MC results show that SNIa detection efficiency is equivalent to that of the original method for bright events, while the coordinate resolution is improved.

  19. Major component analysis of dynamic networks of physiologic organ interactions

    International Nuclear Information System (INIS)

    Liu, Kang K L; Ma, Qianli D Y; Ivanov, Plamen Ch; Bartsch, Ronny P

    2015-01-01

    The human organism is a complex network of interconnected organ systems, where the behavior of one system affects the dynamics of other systems. Identifying and quantifying dynamical networks of diverse physiologic systems under varied conditions is a challenge due to the complexity in the output dynamics of the individual systems and the transient and nonlinear characteristics of their coupling. We introduce a novel computational method based on the concept of time delay stability and major component analysis to investigate how organ systems interact as a network to coordinate their functions. We analyze a large database of continuously recorded multi-channel physiologic signals from healthy young subjects during night-time sleep. We identify a network of dynamic interactions between key physiologic systems in the human organism. Further, we find that each physiologic state is characterized by a distinct network structure with different relative contribution from individual organ systems to the global network dynamics. Specifically, we observe a gradual decrease in the strength of coupling of heart and respiration to the rest of the network with transition from wake to deep sleep, and in contrast, an increased relative contribution to network dynamics from chin and leg muscle tone and eye movement, demonstrating a robust association between network topology and physiologic function. (paper)

  20. Sensor Failure Detection of FASSIP System using Principal Component Analysis

    Science.gov (United States)

    Sudarno; Juarsa, Mulya; Santosa, Kussigit; Deswandri; Sunaryo, Geni Rina

    2018-02-01

    In the Fukushima Daiichi nuclear reactor accident in Japan, the damage to the core and pressure vessel was caused by the failure of the active cooling system (the diesel generators were inundated by the tsunami). Research on passive cooling systems for nuclear power plants is therefore performed to improve the safety aspects of nuclear reactors. The FASSIP system (Passive System Simulation Facility) is an installation used to study the characteristics of passive cooling systems at nuclear power plants. The accuracy of the sensor measurements of the FASSIP system is essential, because they are the basis for determining the characteristics of a passive cooling system. In this research, a sensor failure detection method for the FASSIP system is developed, so that indications of sensor failure can be detected early. The method used is Principal Component Analysis (PCA) to reduce the dimension of the sensor data, with the Squared Prediction Error (SPE) and Hotelling's T2 statistic as criteria for detecting sensor failure indications. The results show that the PCA method is capable of detecting a failure at any sensor.
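    The PCA monitoring scheme described above can be sketched as follows: fit PCA on normal-operation data, then score each new sample by Hotelling's T2 (variation inside the retained subspace) and SPE (residual outside it). Control limits and the FASSIP specifics are omitted; all names are illustrative:

```python
import numpy as np

def fit_pca_monitor(X, k):
    """Fit a PCA monitoring model on normal-operation data X (n_samples, n_sensors)."""
    mu, sd = X.mean(axis=0), X.std(axis=0)
    Z = (X - mu) / sd
    _, s, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:k].T                            # retained loadings
    lam = (s[:k] ** 2) / (X.shape[0] - 1)   # retained eigenvalues
    return {"mu": mu, "sd": sd, "P": P, "lam": lam}

def monitor_stats(model, x):
    """Hotelling T^2 and SPE (Q statistic) for one new sample x."""
    z = (x - model["mu"]) / model["sd"]
    t = model["P"].T @ z                    # scores in the PCA subspace
    T2 = float(np.sum(t ** 2 / model["lam"]))
    residual = z - model["P"] @ t
    SPE = float(residual @ residual)
    return T2, SPE
```

A sensor that breaks the learned correlation structure inflates SPE even when its individual reading is in range, which is what makes the scheme useful for early fault indication.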

  1. A meta-analysis of executive components of working memory.

    Science.gov (United States)

    Nee, Derek Evan; Brown, Joshua W; Askren, Mary K; Berman, Marc G; Demiralp, Emre; Krawitz, Adam; Jonides, John

    2013-02-01

    Working memory (WM) enables the online maintenance and manipulation of information and is central to intelligent cognitive functioning. Much research has investigated executive processes of WM in order to understand the operations that make WM "work." However, there is yet little consensus regarding how executive processes of WM are organized. Here, we used quantitative meta-analysis to summarize data from 36 experiments that examined executive processes of WM. Experiments were categorized into 4 component functions central to WM: protecting WM from external distraction (distractor resistance), preventing irrelevant memories from intruding into WM (intrusion resistance), shifting attention within WM (shifting), and updating the contents of WM (updating). Data were also sorted by content (verbal, spatial, object). Meta-analytic results suggested that rather than dissociating into distinct functions, 2 separate frontal regions were recruited across diverse executive demands. One region was located dorsally in the caudal superior frontal sulcus and was especially sensitive to spatial content. The other was located laterally in the midlateral prefrontal cortex and showed sensitivity to nonspatial content. We propose that dorsal-"where"/ventral-"what" frameworks that have been applied to WM maintenance also apply to executive processes of WM. Hence, WM can largely be simplified to a dual selection model.

  2. Principal Component Analysis of Process Datasets with Missing Values

    Directory of Open Access Journals (Sweden)

    Kristen A. Severson

    2017-07-01

    Full Text Available Datasets with missing values arising from causes such as sensor failure, inconsistent sampling rates, and merging data from different systems are common in the process industry. Missing data are typically handled during data pre-processing, but can also be handled during model building. This article considers missing data within the context of principal component analysis (PCA), a method originally developed for complete data that has widespread industrial application in multivariate statistical process control. Due to the prevalence of missing data and the success of PCA for handling complete data, several PCA algorithms that can act on incomplete data have been proposed. Here, algorithms for applying PCA to datasets with missing values are reviewed. A case study is presented to demonstrate the performance of the algorithms, and suggestions are made with respect to choosing which algorithm is most appropriate for particular settings. An alternating algorithm based on the singular value decomposition achieved the best results in the majority of test cases involving process datasets.
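    One alternating SVD scheme of the kind reviewed above repeatedly fills the missing entries from a low-rank reconstruction. A minimal numpy sketch, not necessarily the article's best-performing variant:

```python
import numpy as np

def pca_impute(X, k, n_iter=100):
    """Alternate between (1) a rank-k SVD reconstruction of the current
    estimate and (2) restoring the observed entries. NaN marks missing data."""
    X = np.array(X, float)
    mask = np.isnan(X)
    X_hat = np.where(mask, np.nanmean(X, axis=0), X)  # init: column means
    for _ in range(n_iter):
        mu = X_hat.mean(axis=0)
        U, s, Vt = np.linalg.svd(X_hat - mu, full_matrices=False)
        low_rank = (U[:, :k] * s[:k]) @ Vt[:k] + mu
        X_hat[mask] = low_rank[mask]                  # keep observed entries
    return X_hat
```

Once the iteration converges, an ordinary PCA of the completed matrix gives the loadings and scores used for monitoring.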

  3. Finite element elastic-plastic analysis of LMFBR components

    International Nuclear Information System (INIS)

    Levy, A.; Pifko, A.; Armen, H. Jr.

    1978-01-01

    The present effort involves the development of computationally efficient finite element methods for accurately predicting the isothermal elastic-plastic three-dimensional response of thick and thin shell structures subjected to mechanical and thermal loads. This work will be used as the basis for further development of analytical tools to be used to verify the structural integrity of liquid metal fast breeder reactor (LMFBR) components. The methods presented here have been implemented into the three-dimensional solid element module (HEX) of the Grumman PLANS finite element program. These methods include the use of optimal stress points as well as a variable number of stress points within an element. This allows monitoring the stress history at many points within an element and hence provides an accurate representation of the elastic-plastic boundary using a minimum number of degrees of freedom. Also included is an improved thermal stress analysis capability in which the temperature variation and corresponding thermal strain variation are represented by the same functional form as the displacement variation. Various problems are used to demonstrate these improved capabilities. (Auth.)

  4. Learning Algorithms for Audio and Video Processing: Independent Component Analysis and Support Vector Machine Based Approaches

    National Research Council Canada - National Science Library

    Qi, Yuan

    2000-01-01

    In this thesis, we propose two new machine learning schemes, a subband-based Independent Component Analysis scheme and a hybrid Independent Component Analysis/Support Vector Machine scheme, and apply...

  5. Component Analysis of Bee Venom from June to September

    Directory of Open Access Journals (Sweden)

    Ki Rok Kwon

    2007-06-01

    Full Text Available Objectives: The aim of this study was to observe variation in Bee Venom content across the collection period. Methods: Melittin content of Bee Venom was analyzed by HPLC against a melittin standard. Results: Melittin content was 478.97 mg/g in June, 493.89 mg/g in July, 468.18 mg/g in August and 482.15 mg/g in September, so the change in melittin content from June to September was not significant. Conclusion: From these results, we conclude, with caution, that collection time is not an important factor for the quality control of Bee Venom over the period from June to September.

  6. THE STUDY OF THE CHARACTERIZATION INDICES OF FABRICS BY PRINCIPAL COMPONENT ANALYSIS METHOD

    Directory of Open Access Journals (Sweden)

    HRISTIAN Liliana

    2017-05-01

    Full Text Available The paper set out to rank worsted fabric types for the manufacture of outerwear products by their characterization indices, using the mathematical model of Principal Component Analysis (PCA). A number of variables have a certain influence on the quality of fabrics, but some of these variables are more important than others, so it is useful to identify those variables in order to better understand the factors that can improve fabric quality. One solution to this problem is to apply a method of factorial analysis, the so-called Principal Component Analysis, with the final goal of establishing and analyzing those variables which significantly influence the internal structure of combed wool fabrics according to weave type. Applying PCA yields a small number of linear combinations (principal components) of the original variables describing the internal structure of the fabrics, which retain as much information as possible from the original variables. Data analysis is an important first step in decision making, allowing identification of the causes that lead to decision-making situations: it transforms the initial data in order to extract useful information and to facilitate reaching conclusions. The process of data analysis can be defined as a sequence of steps aimed at formulating hypotheses, collecting and validating primary information, constructing the mathematical model describing the phenomenon, and drawing conclusions about the behavior of this model.
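    The component-retention rule used in studies like this one, keeping only components whose eigenvalue of the correlation matrix exceeds 1 (the Kaiser criterion), can be sketched as:

```python
import numpy as np

def kaiser_components(X):
    """PCA on the correlation matrix of X (n_samples, n_variables);
    keep components with eigenvalue > 1 (Kaiser criterion)."""
    R = np.corrcoef(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1]          # sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals > 1.0
    explained = eigvals / eigvals.sum()        # proportion of total variance
    return eigvals[keep], eigvecs[:, keep], explained[keep]
```

Working on the correlation (rather than covariance) matrix puts fabric variables with very different units, such as mass per unit area and thread count, on an equal footing before reduction.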

  7. A factor analysis to detect factors influencing building national brand

    Directory of Open Access Journals (Sweden)

    Naser Azad

    Full Text Available Developing a national brand is one of the most important issues in branding. In this study, we use factor analysis to detect the most important factors in building a national brand. The sample was drawn from two major automakers in Iran, Iran Khodro and Saipa. The questionnaire was designed on a Likert scale and distributed among 235 experts. Cronbach's alpha is 0.84, well above the minimum desirable limit of 0.70. The implementation of factor analysis provides six factors, including “cultural image of customers”, “exciting characteristics”, “competitive pricing strategies”, “perception image” and “previous perceptions”.
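    The Cronbach's alpha reliability check reported above is a one-line formula over the item-score matrix. A sketch:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

Values near 1 indicate that the Likert items move together (internal consistency); uncorrelated items drive alpha toward 0.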

  8. Analysis of Minor Component Segregation in Ternary Powder Mixtures

    Directory of Open Access Journals (Sweden)

    Asachi Maryam

    2017-01-01

    Full Text Available In many powder handling operations, inhomogeneity in powder mixtures caused by segregation can have a significant adverse impact on the quality as well as the economics of production. Segregation of a minor component consisting of a highly active substance can have serious deleterious effects; an example is the segregation of enzyme granules in detergent powders. In this study, the effects of particle properties and bulk cohesion on the segregation tendency of the minor component are analysed. The minor component is made sticky, while not adversely affecting the flowability of the samples. The extent of segregation is evaluated by image processing of photographic records taken from the front face of the heap after the pouring process. The optimum average sieve cut size of the components, for which segregation can be reduced, is reported. It is also shown that the extent of segregation is significantly reduced by applying a thin layer of liquid to the surfaces of the minor component, promoting an ordered mixture.

  9. The Infinitesimal Jackknife with Exploratory Factor Analysis

    Science.gov (United States)

    Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.

    2012-01-01

    The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…

  10. Assessment of genetic divergence in tomato through agglomerative hierarchical clustering and principal component analysis

    International Nuclear Information System (INIS)

    Iqbal, Q.; Saleem, M.Y.; Hameed, A.; Asghar, M.

    2014-01-01

    For the improvement of qualitative and quantitative traits, the existence of variability is of prime importance in plant breeding. Data on different morphological and reproductive traits of 47 tomato genotypes were analyzed by correlation, agglomerative hierarchical clustering and principal component analysis (PCA) to select genotypes and traits for a future breeding program. Correlation analysis revealed a significant positive association between yield and yield components such as fruit diameter, single fruit weight and number of fruits per plant. Principal component (PC) analysis depicted the first three PCs, each with an eigenvalue higher than 1, contributing 81.72% of the total variability for the different traits. PC-I showed positive factor loadings for all traits except number of fruits per plant; the contributions of single fruit weight and fruit diameter were highest in PC-I. Cluster analysis grouped all genotypes into five divergent clusters. The genotypes in cluster-II and cluster-V exhibited uniform maturity and higher yield. The D2 statistics confirmed the highest distance between cluster-III and cluster-V, while maximum similarity was observed between cluster-II and cluster-III. It is therefore suggested that crosses between genotypes of cluster-II and cluster-V with those of cluster-I and cluster-III may exhibit heterosis in F1 for hybrid breeding and allow selection of superior genotypes in succeeding generations for the cross-breeding programme. (author)
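    The D2 statistic used above to compare clusters is the Mahalanobis distance between cluster centroids under a pooled within-cluster covariance. A minimal sketch (function names are illustrative):

```python
import numpy as np

def pooled_covariance(groups):
    """Pooled within-group covariance from a list of (n_i, p) arrays."""
    dfs = [g.shape[0] - 1 for g in groups]
    covs = [np.cov(g.T) for g in groups]
    return sum(d * c for d, c in zip(dfs, covs)) / sum(dfs)

def mahalanobis_d2(mean_a, mean_b, pooled_cov):
    """Mahalanobis D^2 between two cluster centroids: diff^T S^-1 diff."""
    diff = np.asarray(mean_a, float) - np.asarray(mean_b, float)
    return float(diff @ np.linalg.solve(np.asarray(pooled_cov, float), diff))
```

With an identity covariance, D^2 reduces to the squared Euclidean distance; correlated traits shrink distances along directions of shared variation.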

  11. Development of safety factors to be used for evaluation of cracked nuclear components

    International Nuclear Information System (INIS)

    Brickstad, B.; Bergman, M.

    1996-10-01

    A modified concept for safety evaluation is introduced which separately accounts for the failure mechanisms of fracture and plastic collapse. For application to nuclear components, a set of safety factors is also proposed that retains the safety margins expressed in ASME Sections III and XI. By performing comparative studies of the acceptance levels for surface cracks in pipes and a pressure vessel, it is shown that some of the anomalies connected with the old safety procedures are removed. It is the authors' belief that the outlined safety evaluation procedure is capable of treating cracks in a consistent way and that the procedure, together with the proposed safety factors, fulfils the basic safety requirements for nuclear components. It is hoped that a probabilistic safety assessment procedure can be developed in Sweden in the near future, enabling a systematic treatment of the uncertainties in the data involved. 14 refs

  12. Prevalence, associated factors and heritabilities of metabolic syndrome and its individual components in African Americans: the Jackson Heart Study.

    Science.gov (United States)

    Khan, Rumana J; Gebreab, Samson Y; Sims, Mario; Riestra, Pia; Xu, Ruihua; Davis, Sharon K

    2015-11-01

    Both environmental and genetic factors play important roles in the development of metabolic syndrome (MetS). Studies of its associated factors and genetic contribution in African Americans (AA) are sparse. Our aim was to report the prevalence, associated factors and heritability estimates of MetS and its components in AA men and women. Data for this cross-sectional study come from the large community-based Jackson Heart Study (JHS). We analysed a total of 5227 participants, of whom 1636 from 281 families were part of a family study subset of JHS. Participants were classified as having MetS according to the Adult Treatment Panel III criteria. Multiple logistic regression analysis was performed to isolate independently associated factors of MetS (n=5227). Heritability was estimated from the family study subset using variance component methods (n=1636). About 27% of men and 40% of women had MetS. For men, factors associated with MetS were older age, lower physical activity, higher body mass index, and higher homocysteine and adiponectin levels (p … metabolism playing a central role in the development of MetS and encourage additional efforts to identify the underlying susceptibility genes for this syndrome in AA.

  13. Principal component analysis of cardiovascular risk traits in three generations cohort among Indian Punjabi population

    Directory of Open Access Journals (Sweden)

    Badaruddoza

    2015-09-01

    Full Text Available The current study focused on determining significant cardiovascular risk factors through principal component factor analysis (PCFA) among three generations, on 1827 individuals including 911 males (378 from offspring, 439 from parental and 94 from grand-parental generations) and 916 females (261 from offspring, 515 from parental and 140 from grandparental generations). The study performed PCFA with orthogonal rotation to reduce 12 inter-correlated variables into groups of independent factors. The factors identified numbered 2 for male grandparents, 3 each for male offspring, female parents and female grandparents, 4 for male parents and 5 for female offspring. This data reduction method identified factors that explained 72%, 84%, 79%, 69%, 70% and 73% of the variation in the original quantitative traits for male and female offspring, male and female parents, and male and female grandparents, respectively. Factor 1, accounting for the largest portion of variation, was strongly loaded with traits related to obesity (body mass index (BMI), waist circumference (WC), waist to hip ratio (WHR), and thickness of skinfolds) among all generations and both sexes; obesity has been known to be an independent predictor of cardiovascular morbidity and mortality. The next largest components, factor 2 and factor 3, reflected blood pressure phenotypes for almost all generations; however, in the male offspring generation factor 2 was loaded with blood pressure phenotypes as well as obesity. This study not only confirmed but also extended prior work by developing a cumulative risk scale from factor scores. To date, such a cumulative and extensive scale has not been used in any Indian study with individuals of three generations. These findings highlight the importance of a global approach for assessing the risk and the need for studies that elucidate how these different cardiovascular risk factors

  14. Principal component analysis of cardiovascular risk traits in three generations cohort among Indian Punjabi population.

    Science.gov (United States)

    Badaruddoza; Kumar, Raman; Kaur, Manpreet

    2015-09-01

    The current study focused on determining significant cardiovascular risk factors through principal component factor analysis (PCFA) among three generations, on 1827 individuals including 911 males (378 from offspring, 439 from parental and 94 from grand-parental generations) and 916 females (261 from offspring, 515 from parental and 140 from grandparental generations). The study performed PCFA with orthogonal rotation to reduce 12 inter-correlated variables into groups of independent factors. The factors identified numbered 2 for male grandparents, 3 each for male offspring, female parents and female grandparents, 4 for male parents and 5 for female offspring. This data reduction method identified factors that explained 72%, 84%, 79%, 69%, 70% and 73% of the variation in the original quantitative traits for male and female offspring, male and female parents, and male and female grandparents, respectively. Factor 1, accounting for the largest portion of variation, was strongly loaded with traits related to obesity (body mass index (BMI), waist circumference (WC), waist to hip ratio (WHR), and thickness of skinfolds) among all generations and both sexes; obesity has been known to be an independent predictor of cardiovascular morbidity and mortality. The next largest components, factor 2 and factor 3, reflected blood pressure phenotypes for almost all generations; however, in the male offspring generation factor 2 was loaded with blood pressure phenotypes as well as obesity. This study not only confirmed but also extended prior work by developing a cumulative risk scale from factor scores. To date, such a cumulative and extensive scale has not been used in any Indian study with individuals of three generations. These findings highlight the importance of a global approach for assessing the risk and the need for studies that elucidate how these different cardiovascular risk factors interact with
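
A minimal sketch of the PCFA workflow described above: extract principal components from the correlation matrix, retain those with eigenvalue > 1, and apply an orthogonal varimax rotation. The data here are synthetic stand-ins for the 12 inter-correlated traits, and the textbook varimax iteration is implemented directly in NumPy.

```python
import numpy as np

def varimax(loadings, tol=1e-8, max_iter=100):
    # Textbook varimax: iterate an orthogonal rotation that maximizes
    # the variance of squared loadings within each factor column.
    p, k = loadings.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - L * (L ** 2).sum(axis=0) / p))
        R = u @ vt
        var_new = s.sum()
        if var_new - var_old < tol:
            break
        var_old = var_new
    return loadings @ R

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 12))              # synthetic stand-in for 12 traits
corr = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
k = int((eigvals[order] > 1.0).sum())       # factors retained (eigenvalue > 1)
raw = eigvecs[:, order[:k]] * np.sqrt(eigvals[order[:k]])
rotated = varimax(raw)                      # orthogonally rotated loadings
```

Since the rotation is orthogonal, each variable's communality (row sum of squared loadings) is unchanged; only the distribution of loadings across factors becomes easier to interpret.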

  15. Determination of the usage factor of components after cyclic loading using high-resolution microstructural investigations

    International Nuclear Information System (INIS)

    Seibold, A.; Scheibe, A.; Assmann, H.D.

    1989-01-01

    The usage factor can be derived from the quantification of the structure changes and the allocation of the microstructural state to the fatigue curves of the component materials. Using the example of the low-alloy fine-grain structural steel 20 Mn Mo Ni 5 5 (annealed structure), the relationship between microstructure and the number of load cycles is shown in the form of a calibration curve. By high-resolution structural investigation, the usage factor can be determined to n = N/N_B ≅ 0.5 under a given vibration stress. Only a small-volume sample is required for the electron microscope examination. (orig./DG) [de

  16. A 'cost-effective' probabilistic model to select the dominant factors affecting the variation of the component failure rate

    International Nuclear Information System (INIS)

    Kirchsteiger, C.

    1992-11-01

    Within the framework of a Probabilistic Safety Assessment (PSA), the component failure rate λ is a key parameter in the sense that the study of its behavior gives the essential information for estimating the current values as well as the trends in the failure probabilities of interest. Since there is an infinite variety of possible underlying factors which might cause changes in λ (e.g. operating time, maintenance practices, component environment, etc.), an 'importance ranking' process of these factors is considered most desirable to prioritize research efforts. To be 'cost-effective', the modeling effort must be small, i.e. essentially involving no estimation of additional parameters other than λ. In this paper, using a multivariate data analysis technique and various statistical measures, such a 'cost-effective' screening process has been developed. Dominant factors affecting the failure rate of any components of interest can easily be identified and the appropriateness of current research plans (e.g. on the necessity of performing aging studies) can be validated. (author)

  17. Organizational Design Analysis of Fleet Readiness Center Southwest Components Department

    National Research Council Canada - National Science Library

    Montes, Jose F

    2007-01-01

    .... The purpose of this MBA Project is to analyze the proposed organizational design elements of the FRCSW Components Department that resulted from the integration of the Naval Aviation Depot at North Island (NADEP N.I...

  18. The Socioeconomic Factors and the Indigenous Component of Tuberculosis in Amazonas

    Science.gov (United States)

    2016-01-01

    Despite the availability of tuberculosis prevention and control services throughout Amazonas, high rates of morbidity and mortality from tuberculosis remain in the region. Knowledge of the social determinants of tuberculosis in Amazonas is important for the establishment of public policies and the planning of effective preventive and control measures for the disease. To analyze the relationship of the spatial distribution of the incidence of tuberculosis in municipalities and regions of Amazonas to the socioeconomic factors and indigenous tuberculosis component, from 2007 to 2013. An ecological study was conducted based on secondary data from the epidemiological surveillance of tuberculosis. A linear regression model was used to analyze the relationship of the annual incidence of tuberculosis to the socioeconomic factors, performance indicators of health services, and indigenous tuberculosis component. The distribution of the incidence of tuberculosis in the municipalities of Amazonas was positively associated with the Gini index and the population attributable fraction of tuberculosis in the indigenous peoples, but negatively associated with the proportion of the poor and the unemployment rate. The spatial distribution of tuberculosis in the different regions of Amazonas was heterogeneous and closely related with the socioeconomic factors and indigenous component of tuberculosis. PMID:27362428

  19. Principal variance component analysis of crop composition data: a case study on herbicide-tolerant cotton.

    Science.gov (United States)

    Harrison, Jay M; Howard, Delia; Malven, Marianne; Halls, Steven C; Culler, Angela H; Harrigan, George G; Wolfinger, Russell D

    2013-07-03

    Compositional studies on genetically modified (GM) and non-GM crops have consistently demonstrated that their respective levels of key nutrients and antinutrients are remarkably similar and that other factors such as germplasm and environment contribute more to compositional variability than transgenic breeding. We propose that graphical and statistical approaches that can provide meaningful evaluations of the relative impact of different factors to compositional variability may offer advantages over traditional frequentist testing. A case study on the novel application of principal variance component analysis (PVCA) in a compositional assessment of herbicide-tolerant GM cotton is presented. Results of the traditional analysis of variance approach confirmed the compositional equivalence of the GM and non-GM cotton. The multivariate approach of PVCA provided further information on the impact of location and germplasm on compositional variability relative to GM.

  20. Investigating product development strategy in beverage industry using factor analysis

    Directory of Open Access Journals (Sweden)

    Naser Azad

    2013-03-01

    Full Text Available Selecting a product development strategy that is associated with the company's current service or product innovation, based on customers' needs and a changing environment, plays an important role in increasing demand, market share, sales and profits. It is therefore important to extract the effective variables associated with product development to improve firms' performance measurement. This paper investigates important factors influencing product development strategies using factor analysis. The proposed model investigates 36 variables and, using factor analysis, extracts the six most influential factors: partnership, intelligence information, introducing strategy, differentiation, research and development strategy and market survey. The first factor, partnership, includes sub-factors such as product development partnership, partnership with foreign firms, customers' perception of competitors' products, customer involvement in product development, inter-agency coordination, a customer-oriented approach to innovation and transmission of product development change, of which inter-agency coordination is considered the most important. Internal strengths are the most influential components of the second factor, intelligence information. The third factor, introducing strategy, includes four sub-criteria, of which consumer buying behavior is the most influential. Differentiation is the next important factor, with five components, of which knowledge and expertise in product innovation is the most important. Research and development strategy has four sub-criteria, in which reducing the product development cycle plays the most influential role. Finally, market survey strategy is the last important factor, with three components, in which finding new markets plays the most important role.

  1. Analysis of the frequency components of X-ray images

    International Nuclear Information System (INIS)

    Matsuo, Satoru; Komizu, Mitsuru; Kida, Tetsuo; Noma, Kazuo; Hashimoto, Keiji; Onishi, Hideo; Masuda, Kazutaka

    1997-01-01

    We examined the relation between the frequency components of X-ray images of the chest and phalanges and their read sizes for digitizing. Images of the chest and phalanges were radiographed using three types of screens and films, and the noise images at background density were digitized with a drum scanner at varying read sizes. The frequency components of these images were evaluated by applying a two-dimensional Fourier transform to obtain the power spectrum and signal-to-noise ratio (SNR). After varying the cut-off frequency of a low-pass filter applied to the power spectrum, we also examined the frequency components of the images in terms of the normalized mean square error (NMSE) between the inverse-Fourier-transformed image and the original image. Results showed that the frequency components extended to 2.0 cycles/mm for the chest image and 6.0 cycles/mm for the phalanges. Therefore, it is necessary to collect data using read sizes of 200 μm and 50 μm for the chest and phalangeal images, respectively, in order to digitize these images without loss of their frequency components. (author)

  2. Exploring Technostress: Results of a Large Sample Factor Analysis

    Directory of Open Access Journals (Sweden)

    Steponas Jonušauskas

    2016-06-01

    Full Text Available With reference to the results of a large-sample factor analysis, the article aims to propose a frame for examining technostress in a population. A survey and principal component analysis of a sample consisting of 1013 individuals who use ICT in their everyday work were implemented in the research. Thirteen factors combine 68 questions and explain 59.13 per cent of the variance in answers. Based on the factor analysis, the questionnaire was reframed and prepared to analyze the respondents' answers, revealing technostress causes and consequences as well as technostress prevalence in the population in a statistically validated pattern. The key elements of technostress identified by the factor analysis can serve for the construction of technostress measurement scales in further research.

  3. Variability of indoor and outdoor VOC measurements: An analysis using variance components

    International Nuclear Information System (INIS)

    Jia, Chunrong; Batterman, Stuart A.; Relyea, George E.

    2012-01-01

    This study examines concentrations of volatile organic compounds (VOCs) measured inside and outside of 162 residences in southeast Michigan, U.S.A. Nested analyses apportioned four sources of variation: city, residence, season, and measurement uncertainty. Indoor measurements were dominated by seasonal and residence effects, accounting for 50 and 31%, respectively, of the total variance. Contributions from measurement uncertainty (<20%) and city effects (<10%) were small. For outdoor measurements, season, city and measurement variation accounted for 43, 29 and 27% of variance, respectively, while residence location had negligible impact (<2%). These results show that, to obtain representative estimates of indoor concentrations, measurements in multiple seasons are required. In contrast, outdoor VOC concentrations can use multi-seasonal measurements at centralized locations. Error models showed that uncertainties at low concentrations might obscure effects of other factors. Variance component analyses can be used to interpret existing measurements, design effective exposure studies, and determine whether the instrumentation and protocols are satisfactory. - Highlights: ► The variability of VOC measurements was partitioned using nested analysis. ► Indoor VOCs were primarily controlled by seasonal and residence effects. ► Outdoor VOC levels were homogeneous within neighborhoods. ► Measurement uncertainty was high for many outdoor VOCs. ► Variance component analysis is useful for designing effective sampling programs.
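
The apportionment idea above (splitting total variance into residence, season, and measurement components) can be illustrated with a method-of-moments sketch on a balanced synthetic design. The effect sizes and layout below are invented, not the Michigan data, and the design is simplified to one measurement per residence-season cell.

```python
import numpy as np

# Synthetic log-concentration grid: residence and season random effects
# plus measurement error, on a balanced residences x seasons design.
rng = np.random.default_rng(3)
R, S = 40, 4
res_eff = rng.normal(0, 1.0, size=(R, 1))    # residence effects
sea_eff = rng.normal(0, 1.5, size=(1, S))    # season effects (dominant)
err = rng.normal(0, 0.5, size=(R, S))        # measurement uncertainty
y = 2.0 + res_eff + sea_eff + err

grand = y.mean()
msa = S * np.var(y.mean(axis=1), ddof=1)     # residence mean square
msb = R * np.var(y.mean(axis=0), ddof=1)     # season mean square
resid = y - y.mean(axis=1, keepdims=True) - y.mean(axis=0, keepdims=True) + grand
mse = (resid ** 2).sum() / ((R - 1) * (S - 1))   # error mean square

var_res = max((msa - mse) / S, 0.0)          # method-of-moments components
var_sea = max((msb - mse) / R, 0.0)
comps = np.array([var_res, var_sea, mse])
shares = 100 * comps / comps.sum()           # % of total variance per source
print(dict(zip(["residence", "season", "error"], shares.round(1))))
```

With only a few seasons the season component is estimated from few levels and is therefore noisy, which is exactly why multi-seasonal sampling matters in such designs.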

  4. Abstract Interfaces for Data Analysis Component Architecture for Data Analysis Tools

    CERN Document Server

    Barrand, G; Dönszelmann, M; Johnson, A; Pfeiffer, A

    2001-01-01

    The fast turnover of software technologies, in particular in the domain of interactivity (covering user interface and visualisation), makes it difficult for a small group of people to produce complete and polished software-tools before the underlying technologies make them obsolete. At the HepVis '99 workshop, a working group has been formed to improve the production of software tools for data analysis in HENP. Beside promoting a distributed development organisation, one goal of the group is to systematically design a set of abstract interfaces based on using modern OO analysis and OO design techniques. An initial domain analysis has come up with several categories (components) found in typical data analysis tools: Histograms, Ntuples, Functions, Vectors, Fitter, Plotter, Analyzer and Controller. Special emphasis was put on reducing the couplings between the categories to a minimum, thus optimising re-use and maintainability of any component individually. The interfaces have been defined in Java and C++ and i...

  5. Analysis of Bernstein's factorization circuit

    NARCIS (Netherlands)

    Lenstra, A.K.; Shamir, A.; Tomlinson, J.; Tromer, E.; Zheng, Y.

    2002-01-01

    In [1], Bernstein proposed a circuit-based implementation of the matrix step of the number field sieve factorization algorithm. These circuits offer an asymptotic cost reduction under the measure "construction cost x run time". We evaluate the cost of these circuits, in agreement with [1], but argue

  6. Exploring functional data analysis and wavelet principal component analysis on ecstasy (MDMA) wastewater data

    Directory of Open Access Journals (Sweden)

    Stefania Salvatore

    2016-07-01

    Full Text Available Abstract Background Wastewater-based epidemiology (WBE) is a novel approach in drug use epidemiology which aims to monitor the extent of use of various drugs in a community. In this study, we investigate functional principal component analysis (FPCA) as a tool for analysing WBE data and compare it to traditional principal component analysis (PCA) and to wavelet principal component analysis (WPCA), which is more flexible temporally. Methods We analysed temporal wastewater data from 42 European cities collected daily over one week in March 2013. The main temporal features of ecstasy (MDMA) were extracted using FPCA with both Fourier and B-spline basis functions and three different smoothing parameters, along with PCA and WPCA with different mother wavelets and shrinkage rules. The stability of FPCA was explored through bootstrapping and analysis of sensitivity to missing data. Results The first three principal components (PCs), functional principal components (FPCs) and wavelet principal components (WPCs) explained 87.5-99.6 % of the temporal variation between cities, depending on the choice of basis and smoothing. The extracted temporal features from PCA, FPCA and WPCA were consistent. FPCA using a Fourier basis and common-optimal smoothing was the most stable and least sensitive to missing data. Conclusion FPCA is a flexible and analytically tractable method for analysing temporal changes in wastewater data, and is robust to missing data. WPCA did not reveal any rapid temporal changes in the data not captured by FPCA. Overall the results suggest FPCA with Fourier basis functions and a common-optimal smoothing parameter as the most accurate approach when analysing WBE data.
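
A compact way to see the FPCA-with-Fourier-basis idea: smooth each city's 7-day series onto a small Fourier basis, then run PCA on the basis coefficients; eigenvectors mapped back through the basis are functional principal components. Everything below is synthetic (42 invented city series), and the basis is deliberately tiny; real FPCA implementations handle smoothing-parameter selection properly.

```python
import numpy as np

rng = np.random.default_rng(4)
n_cities, n_days = 42, 7
t = np.arange(n_days)
weekend = np.exp(-0.5 * ((t - 5.5) / 1.0) ** 2)          # weekend-bump shape
Y = (rng.gamma(2.0, 1.0, size=(n_cities, 1)) * weekend   # city-scaled signal
     + rng.normal(0, 0.2, size=(n_cities, n_days)))      # measurement noise

# Fourier basis: constant plus one sine/cosine pair with a weekly period.
B = np.column_stack([np.ones(n_days),
                     np.sin(2 * np.pi * t / n_days),
                     np.cos(2 * np.pi * t / n_days)])
C, *_ = np.linalg.lstsq(B, Y.T, rcond=None)              # smooth: basis coefficients
C = C.T                                                  # cities x 3 coefficients

Cc = C - C.mean(axis=0)
cov = Cc.T @ Cc / (n_cities - 1)                         # PCA on coefficients
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
explained = eigvals[order] / eigvals.sum()               # variance fractions
fpc1 = B @ eigvecs[:, order[0]]                          # first FPC as a curve
```

Here the dominant FPC captures the shared weekly shape, with cities differing mainly in amplitude, which mirrors how FPCA summarizes between-city temporal variation.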

  7. Independent component analysis of edge information for face recognition

    CERN Document Server

    Karande, Kailash Jagannath

    2013-01-01

    The book presents research work on face recognition using edge information as features for face recognition with ICA algorithms. The independent components are extracted from edge information and used with classifiers to match facial images for recognition. In their study, the authors explored Canny and LOG edge detectors as standard edge detection methods. An Oriented Laplacian of Gaussian (OLOG) method is explored to extract edge information at different orientations of the Laplacian pyramid. A multiscale wavelet model for edge detection is also propos

  8. Eliminating the Influence of Harmonic Components in Operational Modal Analysis

    DEFF Research Database (Denmark)

    Jacobsen, Niels-Jørgen; Andersen, Palle; Brincker, Rune

    2007-01-01

    structures, in contrast, are subject inherently to deterministic forces due to the rotating parts in the machinery. These forces are seen as harmonic components in the responses, and their influence should be eliminated before extracting the modes in their vicinity. This paper describes a new method based...... on the well-known Enhanced Frequency Domain Decomposition (EFDD) technique for eliminating these harmonic components in the modal parameter extraction process. For assessing the quality of the method, various experiments were carried out where the results were compared with those obtained with pure stochastic...

  9. Analysis Components of the Digital Consumer Behavior in Romania

    Directory of Open Access Journals (Sweden)

    Cristian Bogdan Onete

    2016-08-01

    Full Text Available This article investigates Romanian consumer behavior in the context of the evolution of online shopping. Given that online stores are a profitable business model in the area of electronic commerce, and because the relationship between the Romanian digital consumer and the decision to purchase products or services on the Internet has not been sufficiently explored, this study aims to identify specific features of the new type of consumer and to examine the level of online shopping in Romania. Therefore a documentary study was carried out with statistical data regarding the volume and the number of online shopping transactions in Romania during 2010-2014, the types of products and services that Romanians search the Internet for, and the demographics of these people. In addition, to study online consumer behavior more closely and to interpret the detailed secondary data provided, an exploratory study was performed as a structured questionnaire with five closed questions on: the distribution of individuals according to the gender category they belong to (male or female); the decision to purchase products/services in the virtual environment in the past year; the source of the goods/services purchased (Romanian or foreign sites); the factors that determined the consumers to buy products from foreign sites; and the categories of products purchased through online transactions from foreign merchants. The questionnaire was distributed electronically via Facebook social network users and the data collected were processed directly in the official Facebook app for creating and interpreting responses to surveys. The results of this research, correlated with the official data, reveal the following characteristics of the digital consumer in Romania: an atypical European consumer, interested more in online purchases from abroad, influenced by the quality and price of the purchase.
This paper assumed a careful analysis of the online acquisitions phenomenon and also

  10. Abstract interfaces for data analysis - component architecture for data analysis tools

    International Nuclear Information System (INIS)

    Barrand, G.; Binko, P.; Doenszelmann, M.; Pfeiffer, A.; Johnson, A.

    2001-01-01

    The fast turnover of software technologies, in particular in the domain of interactivity (covering user interface and visualisation), makes it difficult for a small group of people to produce complete and polished software-tools before the underlying technologies make them obsolete. At the HepVis'99 workshop, a working group has been formed to improve the production of software tools for data analysis in HENP. Beside promoting a distributed development organisation, one goal of the group is to systematically design a set of abstract interfaces based on using modern OO analysis and OO design techniques. An initial domain analysis has come up with several categories (components) found in typical data analysis tools: Histograms, Ntuples, Functions, Vectors, Fitter, Plotter, analyzer and Controller. Special emphasis was put on reducing the couplings between the categories to a minimum, thus optimising re-use and maintainability of any component individually. The interfaces have been defined in Java and C++ and implementations exist in the form of libraries and tools using C++ (Anaphe/Lizard, OpenScientist) and Java (Java Analysis Studio). A special implementation aims at accessing the Java libraries (through their Abstract Interfaces) from C++. The authors give an overview of the architecture and design of the various components for data analysis as discussed in AIDA

  11. A stable systemic risk ranking in China's banking sector: Based on principal component analysis

    Science.gov (United States)

    Fang, Libing; Xiao, Binqing; Yu, Honghai; You, Qixing

    2018-02-01

    In this paper, we compare five popular systemic risk rankings and apply a principal component analysis (PCA) model to provide a stable systemic risk ranking for the Chinese banking sector. Our empirical results indicate that the five methods suggest vastly different systemic risk rankings for the same bank, while the combined systemic risk measure based on PCA provides a reliable ranking. Furthermore, according to the factor loadings of the first component, the PCA combined ranking is mainly based on fundamentals rather than market price data. We clearly find that price-based rankings are not as practical as fundamentals-based ones. The PCA combined ranking directly shows the systemic risk contribution of each bank for banking supervision purposes and helps banks prepare for and cope with financial crises in advance.
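
The combination step (collapsing several standardized risk measures into one score via the first principal component) can be sketched as follows. The banks, measures, and data are all invented, purely to show the mechanics of a PCA-combined ranking.

```python
import numpy as np

rng = np.random.default_rng(5)
n_banks, n_measures = 16, 5
common = rng.normal(size=(n_banks, 1))                   # shared risk signal
M = common + rng.normal(0, 0.6, size=(n_banks, n_measures))  # 5 noisy measures

Z = (M - M.mean(axis=0)) / M.std(axis=0, ddof=1)         # standardize measures
eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
pc1 = eigvecs[:, np.argmax(eigvals)]                     # first-component loadings
pc1 = pc1 if pc1.sum() >= 0 else -pc1                    # orient: high = riskier
score = Z @ pc1                                          # combined risk score
ranking = np.argsort(-score)                             # riskiest bank first
```

Inspecting `pc1` plays the role of the loadings argument in the abstract: whichever input measures load heavily on the first component dominate the combined ranking.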

  12. Design and analysis of automobile components using industrial procedures

    Science.gov (United States)

    Kedar, B.; Ashok, B.; Rastogi, Nisha; Shetty, Siddhanth

    2017-11-01

    Today’s automobiles depend upon mechanical systems that are crucial for aiding the movement and safety features of the vehicle. Various safety systems, such as the Antilock Braking System (ABS) and passenger restraint systems, have been developed to ensure that in the event of a collision, be it head-on or any other type, the safety of the passenger is ensured. Manufacturers also want their customers to have a good driving experience and thus aim to improve the handling and drivability of the vehicle. Electronic systems such as cruise control and active suspension systems are designed to ensure passenger comfort. Finally, to ensure optimal and safe driving, the various components of a vehicle must be manufactured using the latest state-of-the-art processes and must be tested and inspected with utmost care, so that any defective component can be caught right at the beginning of the supply chain. Therefore, processes which can improve the lifetime of their respective components are in high demand, and much research and development is done on these processes. With a solid research base, these processes can be used in a much more versatile manner for different components, made of different materials and under different input conditions. This will help increase the profitability of the process and also upgrade its value to the industry.

  13. Analysis of soft rock mineral components and roadway failure mechanism

    Institute of Scientific and Technical Information of China (English)

    陈杰

    2001-01-01

    The mineral components and microstructure of soft rock sampled from the roadway floor in Xiagou pit are determined by X-ray diffraction and scanning electron microscopy. Combined with tests of the expansion and water-softening properties of the soft rock, the roadway failure mechanism is analyzed, and a reasonable repair supporting principle for the roadway is put forward.

  14. Analysis Of The Executive Components Of The Farmer Field School ...

    African Journals Online (AJOL)

    The purpose of this study was to investigate the executive components of the Farmer Field School (FFS) project in Uromieh county of West Azerbaijan Province, Iran. All the members and non-members (as control group) of FFS pilots in Uromieh county (N= 98) were included in the study. Data were collected by use of ...

  15. Phenolic components, antioxidant activity, and mineral analysis of ...

    African Journals Online (AJOL)

    In addition to being consumed as food, caper (Capparis spinosa L.) fruits are also used in folk medicine to treat inflammatory disorders, such as rheumatism. C. spinosa L. is rich in phenolic compounds, making it increasingly popular because of its components' potential benefits to human health. We analyzed a number of ...

  16. Determinants of Return on Assets in Romania: A Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Sorana Vatavu

    2015-03-01

    Full Text Available This paper examines the impact of capital structure, as well as its determinants, on the financial performance of Romanian companies listed on the Bucharest Stock Exchange. The analysis is based on cross-sectional regressions and factor analysis, and it refers to a ten-year period (2003-2012). Return on assets (ROA) is the performance proxy, while the capital structure indicator is the debt ratio. Regression results indicate that Romanian companies register higher returns when they operate with limited borrowings. Among the capital structure determinants, tangibility and business risk have a negative impact on ROA, but the level of taxation has a positive effect, showing that companies manage their assets more efficiently during times of higher fiscal pressure. Performance is sustained by sales turnover, but not significantly influenced by high levels of liquidity. Periods of unstable economic conditions, reflected by high inflation rates and the current financial crisis, have a strong negative impact on corporate performance. Based on the regression results, three factors were constructed through the method of iterated principal component factors: the first incorporates debt and size, as an indicator of consumption; the second integrates the influence of tangibility and liquidity, marking investment potential; and the third is an indicator of assessed risk, integrating the volatility of earnings with the level of taxation. ROA is significantly influenced by these three factors, regardless of the regression method used. The consumption factor has a negative impact on performance, while the investment and risk factors positively influence ROA.

  17. Analysis of safety culture components based on site interviews

    International Nuclear Information System (INIS)

    Ueno, Akira; Nagano, Yuko; Matsuura, Shojiro

    2002-01-01

    Safety culture in an organization is influenced by many factors, such as employee morale, the safety policy of top management, and the questioning attitude among site staff. This paper first analyzes key factors of safety culture on the basis of site interviews. It then presents a safety culture composite model and its applicability in various contexts. (author)

  18. Human reliability in non-destructive inspections of nuclear power plant components: modeling and analysis

    International Nuclear Information System (INIS)

    Vasconcelos, Vanderley de; Soares, Wellington Antonio; Marques, Raíssa Oliveira; Silva Júnior, Silvério Ferreira da; Raso, Amanda Laureano

    2017-01-01

    Non-destructive inspection (NDI) is one of the key elements in ensuring the quality of engineering systems and their safe use. NDI is a very complex task during which inspectors have to rely on their sensory, perceptual, cognitive, and motor skills. It requires high vigilance, since it is often carried out on large components, over long periods of time, in hostile environments and restricted workplaces. A successful NDI requires careful planning, the choice of appropriate NDI methods and inspection procedures, and qualified, trained inspection personnel. A failure of NDI to detect critical defects in safety-related components of nuclear power plants, for instance, may lead to catastrophic consequences for workers, the public and the environment. Therefore, ensuring that NDI methods are reliable and capable of detecting all critical defects is of utmost importance. Despite the increased use of automation in NDI, human inspectors, and thus human factors, still play an important role in NDI reliability. Human reliability is the probability of humans conducting specific tasks with satisfactory performance. Many techniques are suitable for modeling and analyzing human reliability in NDI of nuclear power plant components; among them, Failure Modes and Effects Analysis (FMEA) and the Technique for Human Error Rate Prediction (THERP) can be highlighted. The application of these techniques is illustrated with an example of qualitative and quantitative studies to improve typical NDI of pipe segments of a core cooling system of a nuclear power plant by acting on human factors issues. (author)
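The FMEA part of such a study typically ranks failure modes by a risk priority number, RPN = severity x occurrence x detection. A minimal sketch, with hypothetical failure modes and ratings rather than the paper's actual NDI data:

```python
# Illustrative FMEA risk ranking. Each entry is
# (description, severity, occurrence, detection), all rated 1-10;
# the modes and ratings below are invented for the example.
failure_modes = [
    ("Inspector misses crack indication", 9, 4, 7),
    ("Probe coupling loss", 5, 6, 3),
    ("Procedure step skipped", 7, 3, 5),
]

# RPN = S x O x D; higher RPN means the mode deserves attention first
ranked = sorted(
    ((desc, s * o * d) for desc, s, o, d in failure_modes),
    key=lambda x: x[1],
    reverse=True,
)
for desc, rpn in ranked:
    print(f"RPN {rpn:4d}  {desc}")
```

Acting on human factors issues then means reducing the occurrence or improving the detection rating of the highest-RPN modes and recomputing.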

  19. Human reliability in non-destructive inspections of nuclear power plant components: modeling and analysis

    Energy Technology Data Exchange (ETDEWEB)

    Vasconcelos, Vanderley de; Soares, Wellington Antonio; Marques, Raíssa Oliveira; Silva Júnior, Silvério Ferreira da; Raso, Amanda Laureano, E-mail: vasconv@cdtn.br, E-mail: soaresw@cdtn.br, E-mail: raissaomarques@gmail.com, E-mail: silvasf@cdtn.br, E-mail: amandaraso@hotmail.com [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2017-07-01

    Non-destructive inspection (NDI) is one of the key elements in ensuring the quality of engineering systems and their safe use. NDI is a very complex task during which inspectors have to rely on their sensory, perceptual, cognitive, and motor skills. It requires high vigilance, since it is often carried out on large components, over long periods of time, in hostile environments and restricted workplaces. A successful NDI requires careful planning, the choice of appropriate NDI methods and inspection procedures, and qualified, trained inspection personnel. A failure of NDI to detect critical defects in safety-related components of nuclear power plants, for instance, may lead to catastrophic consequences for workers, the public and the environment. Therefore, ensuring that NDI methods are reliable and capable of detecting all critical defects is of utmost importance. Despite the increased use of automation in NDI, human inspectors, and thus human factors, still play an important role in NDI reliability. Human reliability is the probability of humans conducting specific tasks with satisfactory performance. Many techniques are suitable for modeling and analyzing human reliability in NDI of nuclear power plant components; among them, Failure Modes and Effects Analysis (FMEA) and the Technique for Human Error Rate Prediction (THERP) can be highlighted. The application of these techniques is illustrated with an example of qualitative and quantitative studies to improve typical NDI of pipe segments of a core cooling system of a nuclear power plant by acting on human factors issues. (author)

  20. Using containment analysis to improve component cooling water heat exchanger limits

    International Nuclear Information System (INIS)

    Da Silva, H.C.; Tajbakhsh, A.

    1995-01-01

    The Comanche Peak Steam Electric Station design requires that exit temperatures from the Component Cooling Water Heat Exchanger remain below 330.37 K during the Emergency Core Cooling System recirculation stage following a hypothetical Loss of Coolant Accident (LOCA). Measurements indicated a higher than expected combination of (a) a high fouling factor in the Component Cooling Water Heat Exchanger and (b) high ultimate heat sink temperatures, which might lead to temperatures in excess of the 330.37 K limit if a LOCA were to occur, so TU Electric adjusted key flow rates in the Component Cooling Water network. This solution could only be implemented with improvements to the containment analysis methodology of record. The new method builds upon the CONTEMPT-LT/028 code by (a) coupling the long-term post-LOCA thermohydraulics with a more detailed analytical model of the complex Component Cooling Water Heat Exchanger network and (b) changing the way mass and energy releases are calculated after core reflood and steam generator energy is dumped to the containment. In addition, a simple code to calculate normal cooldowns was developed to confirm that RHR design bases were met with the improved limits.

  1. Analysis and test of insulated components for rotary engine

    Science.gov (United States)

    Badgley, Patrick R.; Doup, Douglas; Kamo, Roy

    1989-01-01

    The direct-injection stratified-charge (DISC) rotary engine, while attractive for aviation applications due to its light weight, multifuel capability, and potentially low fuel consumption, has until now required a bulky and heavy liquid-cooling system. NASA-Lewis has undertaken the development of a thermodynamically superior adiabatic rotary engine that obviates the cooling system by employing state-of-the-art thermal barrier coatings to thermally insulate engine components. The thermal barrier coating material for the cast aluminum, stainless steel, and ductile cast iron components was plasma-sprayed zirconia. DISC engine tests indicate effective thermal-barrier-based heat loss reduction, but call for better matching of the coefficients of thermal expansion of the materials and better tribological properties in the coatings used.

  2. Study of Seasonal Variation in Groundwater Quality of Sagar City (India by Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Hemant Pathak

    2011-01-01

    Full Text Available Groundwater is one of the major sources of drinking water in Sagar city (India). In this study, 15 sampling stations were selected for investigation of 14 chemical parameters. The work was carried out during different months of the pre-monsoon, monsoon and post-monsoon seasons from June 2009 to June 2010. Multivariate statistics such as principal component and cluster analysis were applied to the datasets to investigate seasonal variations in groundwater quality. Principal axis factoring was used to observe the mode of association of parameters and their interrelationships for evaluating water quality. Average values of BOD, COD, ammonia and iron were high during the entire study period. Elevated values of BOD and ammonia in the monsoon, slightly higher values of BOD in the post-monsoon, and elevated BOD, ammonia and iron in the pre-monsoon period reflected a temporal effect on groundwater. Results of principal component analysis showed that all the parameters contribute equally and significantly to groundwater quality variations. Factor 1 and factor 2 revealed that the DO value deteriorates due to organic load (BOD/ammonia) in different seasons. Hierarchical cluster analysis grouped the 15 stations into four clusters in the monsoon and five clusters each in the post-monsoon and pre-monsoon with similar water quality features. The clustered groups at monsoon, post-monsoon and pre-monsoon each contained one station exhibiting significant spatial variation in physicochemical composition, attributed to anthropogenic nitrogenous species as fallout from modernization activities. The study indicated that the groundwater is sufficiently well oxygenated and nutrient-rich at the study sites.
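The core of such a study is a stations-by-parameters matrix decomposed by PCA. A minimal sketch with synthetic stand-in values (15 stations, a few of the named parameters), not the Sagar city measurements:

```python
import numpy as np

# Synthetic water-quality matrix: 15 sampling stations x 5 parameters.
# COD is made to track BOD so PCA has a correlated pair to find.
rng = np.random.default_rng(1)
params = ["BOD", "COD", "ammonia", "iron", "DO"]
data = rng.normal(0, 1, (15, len(params)))
data[:, 1] = data[:, 0] + rng.normal(0, 0.2, 15)

# Standardize, decompose the covariance matrix, sort components by variance
Z = (data - data.mean(0)) / data.std(0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

explained = eigvals / eigvals.sum()   # fraction of variance per component
scores = Z @ eigvecs                  # station scores on each component
print("variance explained per PC:", np.round(explained, 3))
```

The component loadings (columns of `eigvecs`) show which parameters drive each factor, which is how a factor is read as, say, an "organic load" axis.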

  3. COMPONENTS OF THE UNEMPLOYMENT ANALYSIS IN CONTEMPORARY ECONOMIES

    Directory of Open Access Journals (Sweden)

    Ion Enea-SMARANDACHE

    2010-03-01

    Full Text Available Unemployment is a permanent phenomenon in most countries of the world, in both advanced and developing economies, and its implications and consequences are increasingly complex, so that, in practice, the fight against unemployment becomes a fundamental objective of economic policy. In this context, the authors set out to identify the essential components of unemployment analysis, with the aim of identifying countermeasures and instruments.

  4. Analysis of Femtosecond Timing Noise and Stability in Microwave Components

    International Nuclear Information System (INIS)

    2011-01-01

    To probe chemical dynamics, X-ray pump-probe experiments trigger a change in a sample with an optical laser pulse, followed by an X-ray probe. At the Linac Coherent Light Source (LCLS), timing differences between the optical pulse and the X-ray probe have been measured with an accuracy as low as 50 femtoseconds. This sets a lower bound on the number of frames one can arrange over a time scale to recreate a 'movie' of the chemical reaction. The timing system is based on phase measurements of signals corresponding to the two laser pulses; these measurements are made using a double-balanced mixer for detection. To increase the accuracy of the system, this paper studies parameters affecting mixer-based phase detection systems, such as signal input power, noise levels, and temperature drift, and the effect these parameters have on components such as the mixers, splitters, amplifiers, and phase shifters. Noise data taken with a spectrum analyzer show that splitters based on ferrite cores perform with less noise than strip-line splitters. The data also show that noise in specific mixers does not correspond with the changes in sensitivity per input power level. Temperature drift is seen to range between 1 and 27 fs/°C for all of the components tested. Results show that components using more metallic conductor tend to exhibit more noise as well as more temperature drift. The scale of these effects is large enough that specific care should be given when choosing components and designing the housing of high-precision microwave mixing systems for use in detection systems such as the LCLS. With these improvements, the timing accuracy can be improved beyond what is currently possible.
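The link between mixer phase error and timing error is the standard conversion dt = dphi / (2*pi*f). A quick sketch; the frequency used is an assumed S-band reference for illustration, not necessarily the LCLS system's actual operating frequency:

```python
import math

def phase_to_timing_fs(dphi_rad: float, f_hz: float) -> float:
    """Timing error in femtoseconds for a given phase error (radians)
    on a carrier of frequency f_hz: dt = dphi / (2*pi*f)."""
    return dphi_rad / (2 * math.pi * f_hz) * 1e15

# 1 mrad of phase error at an assumed 2.856 GHz reference
dt = phase_to_timing_fs(1e-3, 2.856e9)
print(f"{dt:.1f} fs")
```

This is why small phase drifts (including the 1-27 fs/°C temperature drifts quoted above) matter at the tens-of-femtoseconds accuracy level.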

  5. Analysis of the Components of Economic Potential of Agricultural Enterprises

    OpenAIRE

    Vyacheslav Skobara; Volodymyr Podkopaev

    2014-01-01

    Problems of enterprise efficiency are increasingly associated with the use of the economic potential of the company. This article addresses the structural components of the economic potential of an agricultural enterprise, and the development and substantiation of a model of economic potential that takes into account the peculiarities of agricultural production. Based on the study of various approaches to the structure of potential, definitions are established for the production, labour, financial and man...

  6. Pursuing an ecological component for the Effect Factor in LCIA methods

    DEFF Research Database (Denmark)

    Cosme, Nuno Miguel Dias; Bjørn, Anders; Rosenbaum, Ralph K.

    have also been altered by past impacts. Model frameworks are usually built on stability, linearity of causality and expectation of a safe return to stable states if the stressor is minimised. However, the command-and-control paradigm has resulted in the erosion of natural resources and species...... EC50-based) or 1 (assuming that continuous stress affects reproduction rate), but these are all based on biological/physiological responses and do not add a true ecological component to the impact. Such factor simply changes the HC50 by 1 or 0.3 log units. A stressor with equal intensity in two...

  7. Detailed Structural Analysis of Critical Wendelstein 7-X Magnet System Components

    International Nuclear Information System (INIS)

    Egorov, K.

    2006-01-01

    The Wendelstein 7-X (W7-X) stellarator experiment is presently under construction and assembly in Greifswald, Germany. The goal of the experiment is to verify that the stellarator magnetic confinement concept is a viable option for a fusion reactor. The complex W7-X magnet system requires a multi-level approach to structural analysis, for which two types of finite element models are used: firstly, global models having reasonably coarse meshes with a number of simplifications and assumptions, and secondly, local models with detailed meshes of critical regions and elements. The widely known sub-modelling technique, with boundary conditions extracted from the global models, is one approach to local analysis with high assessment efficiency. In particular, the winding pack (WP) of the magnet coils is simulated in the global model as a homogeneous orthotropic material with effective mechanical characteristics representing its real composite structure. This assumption allows assessing the whole magnet system in terms of general structural factors such as forces and moments on the support elements, displacements of the main components, and deformation and stress in the coil casings. In a second step, local models with a detailed description of the more critical WP zones are considered in order to analyze their internal components, such as conductor jackets and turn insulation. This paper provides an overview of local analyses of several critical W7-X magnet system components, with particular attention to the coil winding packs. (author)

  8. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis

    Science.gov (United States)

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Due to advancements in sensor technology, growing volumes of large medical image data make it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. At the same time, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing irrelevant features are sparse clustering algorithms using a lasso-type penalty to select features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which are difficult to determine in practice. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms current sparse clustering algorithms in image cluster analysis. PMID:26196383

  9. Using Factor Analysis to Identify Topic Preferences Within MBA Courses

    Directory of Open Access Journals (Sweden)

    Earl Chrysler

    2003-02-01

    Full Text Available This study demonstrates the role of a principal components factor analysis in conducting a gap analysis of the desired characteristics of business alumni. Typically, gap analyses merely compare the emphases that should be given to areas of inquiry with perceptions of the actual emphases. As a result, the focus is upon depth of coverage. A neglected area in need of investigation is the breadth of topic dimensions and the differences between the normative (should offer) and the descriptive (actually offer). The implications of factor structures, as well as traditional gap analyses, are developed and discussed in the context of outcomes assessment.

  10. Quantitative descriptive analysis and principal component analysis for sensory characterization of Indian milk product cham-cham.

    Science.gov (United States)

    Puri, Ritika; Khamrui, Kaushik; Khetra, Yogesh; Malhotra, Ravinder; Devraja, H C

    2016-02-01

    Promising development and expansion in the market of cham-cham, a traditional Indian dairy product, is expected in the coming future with the organized production of this milk product by some large dairies. The objective of this study was to document the extent of variation in sensory properties of market samples of cham-cham collected from four different locations known for their excellence in cham-cham production, and to find out the attributes that govern much of the variation in sensory scores of this product, using quantitative descriptive analysis (QDA) and principal component analysis (PCA). QDA revealed statistically significant differences in sensory attributes of cham-cham among the market samples. PCA identified four significant principal components that accounted for 72.4 % of the variation in the sensory data. Factor scores on each of the four principal components, which primarily correspond to sweetness/shape/dryness of interior, surface appearance/surface dryness, rancidity and firmness attributes, specify the location of each market sample along each of the axes in 3-D graphs. These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring the attributes of cham-cham that contribute most to its sensory acceptability.

  11. Probabilistic structural analysis of aerospace components using NESSUS

    Science.gov (United States)

    Shiao, Michael C.; Nagpal, Vinod K.; Chamis, Christos C.

    1988-01-01

    Probabilistic structural analysis of a Space Shuttle main engine turbopump blade is conducted using the computer code NESSUS (Numerical Evaluation of Stochastic Structures Under Stress). The goal of the analysis is to derive probabilistic characteristics of blade response given probabilistic descriptions of uncertainties in blade geometry, material properties, and temperature and pressure distributions. Probability densities are derived for critical blade responses. Risk assessment and failure life analysis are conducted assuming different failure models.
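The idea behind this kind of analysis (propagate uncertainty in geometry, material, and loading through a response model to obtain response densities and a failure probability) can be illustrated with a plain Monte Carlo sketch. The stress model and distributions below are simplified placeholders, not the NESSUS turbopump blade model:

```python
import numpy as np

# Uncertain inputs: load, cross-sectional area (geometry), and material
# strength. All distributions are illustrative assumptions.
rng = np.random.default_rng(42)
n = 100_000
load = rng.normal(1000.0, 100.0, n)      # N
area = rng.normal(1.0e-4, 5.0e-6, n)     # m^2
strength = rng.normal(15.0e6, 1.0e6, n)  # Pa

# Simple axial-stress response model; failure when stress exceeds strength
stress = load / area
p_fail = np.mean(stress > strength)
print(f"estimated failure probability: {p_fail:.4f}")
```

The sampled `stress` array is the empirical response density; production tools like NESSUS use far more efficient reliability methods than brute-force sampling, but the probabilistic framing is the same.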

  12. Compressive Online Robust Principal Component Analysis with Multiple Prior Information

    DEFF Research Database (Denmark)

    Van Luong, Huynh; Deligiannis, Nikos; Seiler, Jürgen

    low-rank components. Unlike conventional batch RPCA, which processes all the data directly, our method considers a small set of measurements taken per data vector (frame). Moreover, our method incorporates multiple prior information signals, namely previously reconstructed frames, to improve the separation...... and thereafter, update the prior information for the next frame. Using experiments on synthetic data, we evaluate the separation performance of the proposed algorithm. In addition, we apply the proposed algorithm to online video foreground and background separation from compressive measurements. The results show...

  13. Seismic fragility analysis of structural components for HFBR facilities

    International Nuclear Information System (INIS)

    Park, Y.J.; Hofmayer, C.H.

    1992-01-01

    This paper presents a summary of recently completed seismic fragility analyses of the HFBR facilities. Based on a detailed review of past PRA studies, various refinements were made regarding the strength and ductility evaluation of structural components. Available laboratory test data were analysed to evaluate the formulations used to predict the ultimate strength and deformation capacities of steel, reinforced concrete and masonry structures. The biases and uncertainties were evaluated within the framework of the fragility evaluation methods widely accepted in the nuclear industry. A few examples of fragility calculations are also included to illustrate the use of the presented formulations.

  14. The ethical component of professional competence in nursing: an analysis.

    Science.gov (United States)

    Paganini, Maria Cristina; Yoshikawa Egry, Emiko

    2011-07-01

    The purpose of this article is to initiate a philosophical discussion about the ethical component of professional competence in nursing from the perspective of Brazilian nurses. Specifically, this article discusses professional competence in nursing practice in the Brazilian health context, based on two different conceptual frameworks. The first framework is derived from the idealistic and traditional approach, while the second views professional competence through the lens of historical and dialectical materialism theory. The philosophical analyses show that the idealistic view of professional competence differs greatly from practice. Combining nursing professional competence with philosophical perspectives becomes a challenge when ideals are opposed by the reality and implications of everyday nursing practice.

  15. Analysis and Classification of Acoustic Emission Signals During Wood Drying Using the Principal Component Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Ho Yang [Korea Research Institute of Standards and Science, Daejeon (Korea, Republic of); Kim, Ki Bok [Chungnam National University, Daejeon (Korea, Republic of)

    2003-06-15

    In this study, acoustic emission (AE) signals due to surface cracking and moisture movement in flat-sawn boards of oak (Quercus variabilis) during drying under ambient conditions were analyzed and classified using principal component analysis. The AE signals corresponding to surface cracking showed higher peak amplitude and peak frequency, and shorter rise time, than those corresponding to moisture movement. To reduce the multicollinearity among AE features and to extract the significant AE parameters, correlation analysis was performed. Over 99% of the variance of the AE parameters could be accounted for by the first through fourth principal components. The classification feasibility and success rate were investigated for two statistical classifiers, one having six independent variables (AE parameters) and one having six principal components. As a result, the statistical classifier using AE parameters showed a success rate of 70.0%, while the statistical classifier using principal components showed a success rate of 87.5%, considerably higher than that of the classifier using AE parameters.

  16. Analysis and Classification of Acoustic Emission Signals During Wood Drying Using the Principal Component Analysis

    International Nuclear Information System (INIS)

    Kang, Ho Yang; Kim, Ki Bok

    2003-01-01

    In this study, acoustic emission (AE) signals due to surface cracking and moisture movement in flat-sawn boards of oak (Quercus variabilis) during drying under ambient conditions were analyzed and classified using principal component analysis. The AE signals corresponding to surface cracking showed higher peak amplitude and peak frequency, and shorter rise time, than those corresponding to moisture movement. To reduce the multicollinearity among AE features and to extract the significant AE parameters, correlation analysis was performed. Over 99% of the variance of the AE parameters could be accounted for by the first through fourth principal components. The classification feasibility and success rate were investigated for two statistical classifiers, one having six independent variables (AE parameters) and one having six principal components. As a result, the statistical classifier using AE parameters showed a success rate of 70.0%, while the statistical classifier using principal components showed a success rate of 87.5%, considerably higher than that of the classifier using AE parameters.
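The pipeline described in this record (AE feature vectors reduced by PCA, then classified into cracking vs. moisture events) can be sketched as below. The feature values are synthetic stand-ins for the oak-drying measurements, and a nearest-centroid rule stands in for the study's statistical classifier:

```python
import numpy as np

# Synthetic AE features (peak amplitude, peak frequency, rise time):
# cracking events have higher amplitude/frequency and shorter rise time.
rng = np.random.default_rng(7)
crack = rng.normal([80, 300, 5], [5, 20, 1], (50, 3))
moist = rng.normal([40, 100, 20], [5, 20, 3], (50, 3))
X = np.vstack([crack, moist])
y = np.array([0] * 50 + [1] * 50)   # 0 = cracking, 1 = moisture

# PCA on standardized features via SVD; keep the first two components
Z = (X - X.mean(0)) / X.std(0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
pcs = Z @ Vt[:2].T

# Nearest-centroid classification in the PC space
centroids = np.array([pcs[y == k].mean(0) for k in (0, 1)])
pred = np.argmin(((pcs[:, None, :] - centroids) ** 2).sum(-1), axis=1)
print("success rate:", (pred == y).mean())
```

Working in PC space removes the multicollinearity among the raw AE features, which is the study's stated reason for the PCA step.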

  17. The Blame Game: Performance Analysis of Speaker Diarization System Components

    NARCIS (Netherlands)

    Huijbregts, M.A.H.; Wooters, Chuck

    2007-01-01

    In this paper we discuss the performance analysis of a speaker diarization system similar to the system that was submitted by ICSI to the NIST RT06s evaluation benchmark. The analysis, which is based on a series of oracle experiments, provides a good understanding of the performance of each system component.

  18. Application of principal component analysis to ecodiversity assessment of postglacial landscape (on the example of Debnica Kaszubska commune, Middle Pomerania)

    Science.gov (United States)

    Wojciechowski, Adam

    2017-04-01

    In order to assess ecodiversity, understood as a comprehensive natural landscape factor (Jedicke 2001), it is necessary to apply research methods which recognize the environment in a holistic way. Principal component analysis may be considered one such method, as it allows the main factors determining landscape diversity to be distinguished on the one hand, and enables regularities shaping the relationships between various elements of the environment under study to be discovered on the other. The procedure adopted to assess ecodiversity with the use of principal component analysis involves: a) determining and selecting appropriate factors of the assessed environmental qualities (hypsometric, geological, hydrographic, plant, and others); b) calculating the absolute value of individual qualities for the basic areas under analysis (e.g. river length, forest area, altitude differences, etc.); c) principal component analysis and obtaining factor maps (maps of selected components); d) generating a resultant, detailed map and isolating several classes of ecodiversity. An assessment of ecodiversity with the use of principal component analysis was conducted in a test area of 299.67 km2 in Debnica Kaszubska commune. The whole commune is situated in the Weichselian glaciation area, with high hypsometric and morphological diversity as well as high geo- and biodiversity. The analysis was based on topographical maps of the commune area at a scale of 1:25000 and maps of forest habitats. Nine factors reflecting basic environmental elements were calculated: maximum height (m), minimum height (m), average height (m), length of watercourses (km), area of water reservoirs (m2), total forest area (ha), coniferous forest habitat area (ha), deciduous forest habitat area (ha), and alder habitat area (ha). The values of the individual factors were analysed for 358 grid cells of 1 km2. Based on the principal component analysis, four major factors affecting commune ecodiversity...
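Steps a)-d) above can be sketched numerically: per-grid-cell environmental factors, principal component extraction, and binning of a resultant score into ecodiversity classes. The 358 cells and nine factors follow the record, but the values and the quartile-based class split are illustrative placeholders:

```python
import numpy as np

# Synthetic stand-in for the 358 grid cells x 9 environmental factors
# (heights, watercourse length, forest areas, etc.)
rng = np.random.default_rng(3)
n_cells, n_factors = 358, 9
X = rng.gamma(2.0, 1.0, (n_cells, n_factors))

# Standardize and extract the principal components (steps a-c)
Z = (X - X.mean(0)) / X.std(0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
pc1 = Z @ eigvecs[:, -1]   # score on the largest component

# Step d: split the resultant score into four ecodiversity classes
classes = np.digitize(pc1, np.quantile(pc1, [0.25, 0.5, 0.75]))
print(np.bincount(classes))   # number of cells per class
```

Mapping `classes` back onto the grid produces the resultant ecodiversity map described in step d).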

  19. Multi-component joint analysis of surface waves

    Czech Academy of Sciences Publication Activity Database

    Dal Moro, Giancarlo; Moura, R.M.M.; Moustafa, S.S.R.

    2015-01-01

    Roč. 119, AUG (2015), s. 128-138 ISSN 0926-9851 Institutional support: RVO:67985891 Keywords : surface waves * surface wave dispersion * seismic data acquisition * seismic data inversion * velocity spectrum Subject RIV: DB - Geology ; Mineralogy Impact factor: 1.355, year: 2015

  20. Analysis of Moisture Content in Beetroot using Fourier Transform Infrared Spectroscopy and by Principal Component Analysis.

    Science.gov (United States)

    Nesakumar, Noel; Baskar, Chanthini; Kesavan, Srinivasan; Rayappan, John Bosco Balaguru; Alwarappan, Subbiah

    2018-05-22

    The moisture content of beetroot varies during long-term cold storage. In this work, we propose a strategy to identify the moisture content and age of beetroot using principal component analysis coupled with Fourier transform infrared (FTIR) spectroscopy. Frequent FTIR measurements were recorded directly from the beetroot sample surface over a period of 34 days to analyse its moisture content, employing attenuated total reflectance in the spectral ranges of 2614-4000 and 1465-1853 cm-1 with a spectral resolution of 8 cm-1. In order to estimate the transmittance peak height (Tp) and the area under the transmittance curve [Formula: see text] over the spectral ranges of 2614-4000 and 1465-1853 cm-1, a Gaussian curve fitting algorithm was applied to the FTIR data. Principal component and nonlinear regression analyses were utilized for FTIR data analysis. Score plots over the ranges of 2614-4000 and 1465-1853 cm-1 allowed beetroot quality discrimination. Beetroot quality predictive models were developed by employing a biphasic dose-response function. Validation experiments confirmed that the accuracy of the beetroot quality predictive model reached 97.5%. This work shows that FTIR spectroscopy in combination with principal component analysis and beetroot quality predictive models could serve as an effective tool for discriminating the moisture content of beetroot samples in fresh, half-spoiled and completely spoiled stages and for providing status alerts.
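The peak-quantification step (fit a Gaussian to a transmittance band, then report peak height and area under the curve) can be sketched as below. The band is a synthetic stand-in for an FTIR window such as 1465-1853 cm-1; for a clean Gaussian, log(T) is quadratic in wavenumber, so a polynomial fit recovers the parameters without an iterative optimizer:

```python
import numpy as np

# Synthetic transmittance band: a single Gaussian peak in a spectral window
wavenumber = np.linspace(1465, 1853, 200)
true_h, true_c, true_w = 0.8, 1650.0, 40.0
T = true_h * np.exp(-((wavenumber - true_c) ** 2) / (2 * true_w ** 2))

# log(T) = c2*x^2 + c1*x + c0 is quadratic, so polyfit gives the Gaussian
a, b, c = np.polyfit(wavenumber, np.log(T), 2)
width = np.sqrt(-1.0 / (2.0 * a))            # sigma
center = -b / (2.0 * a)                      # peak position
height = np.exp(c - b ** 2 / (4.0 * a))      # peak height T_p
area = height * width * np.sqrt(2.0 * np.pi) # area under the Gaussian
print(round(height, 3), round(center, 1), round(width, 1))
```

On noisy measured spectra one would instead use an iterative least-squares fit, but the extracted quantities (peak height and area) are the same features fed into the PCA here.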

  1. Automatic scatter detection in fluorescence landscapes by means of spherical principal component analysis

    DEFF Research Database (Denmark)

    Kotwa, Ewelina Katarzyna; Jørgensen, Bo Munk; Brockhoff, Per B.

    2013-01-01

    In this paper, we introduce a new method, based on spherical principal component analysis (S-PCA), for the identification of Rayleigh and Raman scatters in fluorescence excitation-emission data. These scatters should be found and eliminated as a pre-step before fitting parallel factor analysis...... models to the data, in order to avoid model degeneracies. The work is inspired by and based on previous research, in which scatter removal was automatic (based on a robust version of PCA called ROBPCA) and required no visual data inspection but appeared to be computationally intensive. To overcome...... this drawback, we implement the fast S-PCA in the scatter identification routine. Moreover, an additional pattern interpolation step that complements the method, based on robust regression, is applied. In this way, substantial time savings are gained, and the user's engagement is restricted to a minimum...

  2. Dynamic analysis and qualification test of nuclear components

    International Nuclear Information System (INIS)

    Kim, B.K.; Lee, C.H.; Park, S.H.; Kim, Y.M.; Kim, B.S.; Kim, I.G.; Chung, C.W.; Kim, Y.M.

    1981-01-01

    This report covers studies on the dynamic characteristics of the Wolsung fuel rod and on the dynamic balancing of rotating machinery, carried out to evaluate the performance of nuclear reactor components. The study of the dynamic characteristics of the Wolsung fuel rod was conducted by both experimental and theoretical methods. Forced vibration testing of an actual Wolsung fuel rod using sine-sweep and sine-dwell excitation was performed to determine the dynamic and nonlinear characteristics of the fuel rod. The data obtained from the test were used to analyze the nonlinear impact characteristics of the fuel rod, which has a motion-constraint stop at its center. The parameters varied in the test were the input force level of the exciter, the clearance gap between the fuel rod and the motion constraints, and the excitation frequencies. Test results were in good agreement with the analytical results.

  3. Dissolution And Analysis Of Yellowcake Components For Fingerprinting UOC Sources

    International Nuclear Information System (INIS)

    Hexel, Cole R.; Bostick, Debra A.; Kennedy, Angel K.; Begovich, John M.; Carter, Joel A.

    2012-01-01

    There are a number of chemical and physical parameters that might be used to help elucidate the ore body from which uranium ore concentrate (UOC) was derived. It is the variation in the concentration and isotopic composition of these components that can provide information as to the identity of the ore body from which the UOC was mined and the type of subsequent processing that has been undertaken. Oak Ridge National Laboratory (ORNL) in collaboration with Lawrence Livermore and Los Alamos National Laboratories is surveying ore characteristics of yellowcake samples from known geologic origin. The data sets are being incorporated into a national database to help in sourcing interdicted material, as well as aid in safeguards and nonproliferation activities. Geologic age and attributes from chemical processing are site-specific. Isotopic abundances of lead, neodymium, and strontium provide insight into the provenance of geologic location of ore material. Variations in lead isotopes are due to the radioactive decay of uranium in the ore. Likewise, neodymium isotopic abundances are skewed due to the radiogenic decay of samarium. Rubidium decay similarly alters the isotopic signature of strontium isotopic composition in ores. This paper will discuss the chemical processing of yellowcake performed at ORNL. Variations in lead, neodymium, and strontium isotopic abundances are being analyzed in UOC from two geologic sources. Chemical separation and instrumental protocols will be summarized. The data will be correlated with chemical signatures (such as elemental composition, uranium, carbon, and nitrogen isotopic content) to demonstrate the utility of principal component and cluster analyses to aid in the determination of UOC provenance.

  4. Proof of fatigue strength of nuclear components part II: Numerical fatigue analysis for transient stratification loading considering environmental effects

    International Nuclear Information System (INIS)

    Krätschmer, D.; Roos, E.; Schuler, X.; Herter, K.-H.

    2012-01-01

    For the construction, design and operation of nuclear components and systems the appropriate technical codes and standards provide detailed analysis procedures which guarantee a reliable behaviour of the structural components throughout the specified lifetime. Especially for cyclic stress evaluation the different codes and standards provide different fatigue analysis procedures to be performed considering the various mechanical and thermal loading histories and geometric complexities of the components. To consider the effects of light water reactor coolant environments, new design curves for austenitic stainless steels and for low alloy steels have been presented in report NUREG/CR-6909. For the use of these new design curves an environmental fatigue correction factor for incorporating environmental effects has to be calculated and applied. This environmental correction factor has been applied to a fatigue analysis of a nozzle with transient stratification loads derived from in-service monitoring. The results are compared with calculated usage factors based on design curves that do not take environmental effects into particular account. - Highlights: ► We model a nozzle for fatigue analysis under mechanical and thermal loading conditions. ► A simplified as well as a general elastic–plastic fatigue analysis considering environmental effects is performed. ► The influence of different factors on the calculated environmental factor Fen is shown. ► The presented numerical evaluation methodology allows consideration of all relevant parameters to assess lifetime.

  5. [Principal component analysis and cluster analysis of inorganic elements in sea cucumber Apostichopus japonicus].

    Science.gov (United States)

    Liu, Xiao-Fang; Xue, Chang-Hu; Wang, Yu-Ming; Li, Zhao-Jie; Xue, Yong; Xu, Jie

    2011-11-01

    The present study investigates the feasibility of multi-element analysis for determining the geographical origin of the sea cucumber Apostichopus japonicus, and selects effective tracers for assessing its geographical origin. The contents of the elements Al, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, As, Se, Mo, Cd, Hg and Pb in sea cucumber Apostichopus japonicus samples from seven geographical origins were determined by ICP-MS. The results were used to develop an element database. Cluster analysis (CA) and principal component analysis (PCA) were applied to differentiate the geographical origins. Three principal components, which accounted for over 89% of the total variance, were extracted from the standardized data. The results of Q-type cluster analysis showed that the 26 samples could be clustered reasonably into five groups, and the classification results were significantly associated with the marine distribution of the samples. CA and PCA were effective methods for element analysis of sea cucumber Apostichopus japonicus samples, and the mineral element contents of the samples were good chemical descriptors for differentiating their geographical origins.
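    The PCA step described above (standardizing element concentrations, then extracting the components that capture most of the variance) can be sketched with NumPy. The 26 × 15 concentration matrix below is synthetic stand-in data, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 26 samples x 15 elements (Al, V, Cr, ... as in the study);
# synthetic stand-in concentrations, with two "origins" shifted apart.
X = rng.normal(size=(26, 15))
X[:13] += 2.0

# Standardize each element, then do PCA via the correlation matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()   # proportion of variance per component
scores = Z @ eigvecs[:, :3]           # sample scores on the first 3 PCs
print("variance explained by 3 PCs:", round(float(explained[:3].sum()), 3))
```

    The `scores` matrix is what a Q-type cluster analysis would then group into origin clusters.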

  6. Kernel Principal Component Analysis of Microarray Data. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Haghighi, F.

    2003-11-06

    Given the limitations in the quantity and quality of gene expression array data, the focus of the research was shifted to development of statistical and computational tools for evaluation and detection of disease susceptibility mutations within a large set of individuals or even an entire population. The diseases that are of particular interest are those with complex etiology, involving interaction of multiple genes and/or environmental factors.

  7. Multivariate analysis of remote LIBS spectra using partial least squares, principal component analysis, and related techniques

    Energy Technology Data Exchange (ETDEWEB)

    Clegg, Samuel M [Los Alamos National Laboratory; Barefield, James E [Los Alamos National Laboratory; Wiens, Roger C [Los Alamos National Laboratory; Sklute, Elizabeth [MT HOLYOKE COLLEGE; Dyare, Melinda D [MT HOLYOKE COLLEGE

    2008-01-01

    Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by chemical matrix effects. These matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, so that a series of calibration standards similar to the unknown can be employed. In this paper, three Multivariate Analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model with which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.
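    A minimal PLS1 calibration of the kind used for LIBS spectra can be sketched as follows. The NIPALS implementation and the synthetic "spectra" are illustrative assumptions, not the authors' code:

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Minimal NIPALS PLS1: coefficients B such that
    (X - X.mean(0)) @ B approximates y - y.mean()."""
    Xk = X - X.mean(axis=0)
    yk = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xk.T @ yk                 # weight vector with max covariance
        w /= np.linalg.norm(w)
        t = Xk @ w                    # scores
        p = Xk.T @ t / (t @ t)        # X loadings
        q = (yk @ t) / (t @ t)        # y loading
        Xk = Xk - np.outer(t, p)      # deflate
        yk = yk - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.solve(P.T @ W, Q)

rng = np.random.default_rng(1)
spectra = rng.normal(size=(18, 200))          # 18 synthetic rock "spectra"
conc = 3.0 * spectra[:, 10] + spectra[:, 50]  # synthetic concentrations

B = pls1_fit(spectra, conc, n_comp=4)
pred = (spectra - spectra.mean(axis=0)) @ B + conc.mean()
print("correlation:", round(float(np.corrcoef(pred, conc)[0, 1]), 4))
```

    In practice one would cross-validate `n_comp` against held-out standards rather than fix it.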

  8. Topochemical Analysis of Cell Wall Components by TOF-SIMS.

    Science.gov (United States)

    Aoki, Dan; Fukushima, Kazuhiko

    2017-01-01

    Time-of-flight secondary ion mass spectrometry (TOF-SIMS) is a recently developed analytical tool and a type of imaging mass spectrometry. TOF-SIMS provides mass spectral information with a lateral resolution on the order of submicrons and has widespread applicability. It is sometimes described as a surface analysis method that requires no sample pretreatment; however, several points need to be taken into account to utilize the capabilities of TOF-SIMS fully. In this chapter, we introduce methods for TOF-SIMS sample treatment, as well as basic knowledge for the analysis of TOF-SIMS spectral and image data from wood samples.

  9. A Novel Double Cluster and Principal Component Analysis-Based Optimization Method for the Orbit Design of Earth Observation Satellites

    Directory of Open Access Journals (Sweden)

    Yunfeng Dong

    2017-01-01

    The weighted sum and genetic algorithm-based hybrid method (WSGA-based HM), which has been applied to multiobjective orbit optimizations, is negatively influenced by human factors through the artificial choice of the weight coefficients in the weighted sum method and by the slow convergence of GA. To address these two problems, a cluster and principal component analysis-based optimization method (CPC-based OM) is proposed, in which many candidate orbits are generated at random until the optimal orbit is obtained using a data mining method, namely cluster analysis based on principal components. Then, a second cluster analysis of the orbital elements is introduced into CPC-based OM to improve the convergence, yielding a novel double cluster and principal component analysis-based optimization method (DCPC-based OM). In DCPC-based OM, the cluster analysis based on principal components has the advantage of reducing human influence, and the cluster analysis based on the six orbital elements reduces the search space to effectively accelerate convergence. The test results from a multiobjective numerical benchmark function and the orbit design results for an Earth observation satellite show that DCPC-based OM converges more efficiently than WSGA-based HM and, to some degree, reduces the influence of the human factors present in WSGA-based HM.

  10. Failure analysis of storage tank component in LNG regasification unit using fault tree analysis method (FTA)

    Science.gov (United States)

    Mulyana, Cukup; Muhammad, Fajar; Saad, Aswad H.; Mariah, Riveli, Nowo

    2017-03-01

    The storage tank is the most critical component in an LNG regasification terminal. It carries a risk of failure and accidents with impacts on human health and the environment. Risk assessment is conducted to detect and reduce the risk of failure in the storage tank. The aim of this research is to determine and calculate the probability of failure in the LNG regasification unit. In this case, the failure is caused by a Boiling Liquid Expanding Vapor Explosion (BLEVE) or a jet fire in the LNG storage tank component. The failure probability can be determined using Fault Tree Analysis (FTA). In addition, the heat radiation generated is calculated. Fault trees for BLEVE and jet fire on the storage tank component were constructed, yielding a failure probability of 5.63 × 10⁻¹⁹ for BLEVE and 9.57 × 10⁻³ for jet fire. The failure probability for jet fire is high enough that it needs to be reduced by customizing the PID scheme of the LNG regasification unit in pipeline number 1312 and unit 1. The failure probability after this customization is 4.22 × 10⁻⁶.
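    The gate arithmetic behind an FTA probability estimate (assuming independent basic events) reduces to products and complements. The event structure and probabilities below are hypothetical, not taken from the study:

```python
def or_gate(*probs):
    """P(any event occurs) for independent basic events: 1 - prod(1 - p_i)."""
    out = 1.0
    for p in probs:
        out *= 1.0 - p
    return 1.0 - out

def and_gate(*probs):
    """P(all events occur) for independent basic events: prod(p_i)."""
    out = 1.0
    for p in probs:
        out *= p
    return out

# Hypothetical top event: a jet fire requires a leak (either of two
# causes) AND an ignition source; the numbers are illustrative only.
leak = or_gate(1e-3, 5e-4)
jet_fire = and_gate(leak, 0.1)
print(f"P(jet fire) = {jet_fire:.3e}")
```

    Real fault trees nest these gates to arbitrary depth, with minimal cut sets read off the tree before the arithmetic is applied.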

  11. Multilevel component analysis of time-resolved metabolic fingerprinting data

    NARCIS (Netherlands)

    Jansen, J.J.; Hoefsloot, H.C.J.; Greef, J. van der; Timmerman, M.E.; Smilde, A.K.

    2005-01-01

    Genomics-based technologies in systems biology have gained a lot of popularity in recent years. These technologies generate large amounts of data. To obtain information from this data, multivariate data analysis methods are required. Many of the datasets generated in genomics are multilevel

  12. Analysis of technological, institutional and socioeconomic factors ...

    African Journals Online (AJOL)

    Analysis of technological, institutional and socioeconomic factors that influences poor reading culture among secondary school students in Nigeria. ... Proliferation and availability of smart phones, chatting culture and social media were identified as technological factors influencing poor reading culture among secondary ...

  13. Assessing the effect of oil price on world food prices: Application of principal component analysis

    International Nuclear Information System (INIS)

    Esmaeili, Abdoulkarim; Shokoohi, Zainab

    2011-01-01

    The objective of this paper is to investigate the co-movement of food prices and macroeconomic indices, especially the oil price, by principal component analysis, to further understand the influence of macroeconomic variables on food prices. We examined the prices of seven major food products: eggs, meat, milk, oilseeds, rice, sugar and wheat. The macroeconomic variables studied were crude oil prices, consumer price indexes, food production indexes and GDP around the world between 1961 and 2005. We used the Scree test and the proportion-of-variance method to determine the optimal number of common factors. The correlation coefficient between the extracted principal component and the macroeconomic variables ranges from 0.87 for world GDP to 0.36 for the consumer price index. We find that the food production index has the greatest influence among the macroeconomic variables and that the oil price index influences the food production index. Consequently, crude oil prices have an indirect effect on food prices. - Research Highlights: → We investigate the co-movement of food prices and macroeconomic indices. → The crude oil price has an indirect effect on world GDP via its impact on the food production index. → The food production index is the source of causation for the CPI, and GDP is affected by the CPI. → The results confirm an indirect relationship between the oil price and the food price principal component.
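    The proportion-of-variance criterion mentioned above picks the smallest number of components whose cumulative eigenvalue share reaches a chosen threshold. A sketch with made-up eigenvalues:

```python
import numpy as np

def n_components_by_variance(eigvals, threshold=0.8):
    """Smallest k whose cumulative proportion of variance >= threshold."""
    vals = np.sort(np.asarray(eigvals))[::-1]
    frac = np.cumsum(vals) / vals.sum()
    return int(np.searchsorted(frac, threshold) + 1)

# Hypothetical eigenvalues of a correlation matrix of the price series.
eigvals = [4.2, 1.5, 0.9, 0.6, 0.4, 0.25, 0.15]
print(n_components_by_variance(eigvals, threshold=0.8))  # prints 3
```

    The Scree test instead looks for the "elbow" where sorted eigenvalues level off; the two criteria need not agree, which is why the authors use both.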

  14. Improved application of independent component analysis to functional magnetic resonance imaging study via linear projection techniques.

    Science.gov (United States)

    Long, Zhiying; Chen, Kewei; Wu, Xia; Reiman, Eric; Peng, Danling; Yao, Li

    2009-02-01

    Spatial independent component analysis (sICA) has been widely used to analyze functional magnetic resonance imaging (fMRI) data. The well-accepted implicit assumption is the spatial statistical independence of the intrinsic sources identified by sICA, which makes sICA difficult to apply to data containing interdependent sources and confounding factors. This interdependency can arise, for instance, from fMRI studies investigating two tasks in a single session. In this study, we introduced a linear projection approach and considered its use as a tool to separate task-related components from two-task fMRI data. The robustness and feasibility of the method are substantiated through simulations on computer-generated data and on real resting-state fMRI data. Both the simulated and the real two-task fMRI experiments demonstrated that sICA in combination with the projection method succeeded in separating spatially dependent components and had better detection power than a purely model-based method when estimating activation induced by each task as well as by both tasks.

  15. Interoperability Assets for Patient Summary Components: A Gap Analysis.

    Science.gov (United States)

    Heitmann, Kai U; Cangioli, Giorgio; Melgara, Marcello; Chronaki, Catherine

    2018-01-01

    The International Patient Summary (IPS) standards aim to define the specifications for a minimal and non-exhaustive patient summary, which is specialty-agnostic and condition-independent, but still clinically relevant. Meanwhile, health systems are developing and implementing their own variations of a patient summary, while the eHealth Digital Services Infrastructure (eHDSI) initiative is deploying patient summary services across countries in Europe. In the spirit of co-creation, flexible governance, and continuous alignment advocated by eStandards, the Trillium-II initiative promotes adoption of the patient summary by engaging standards organizations and interoperability practitioners in a community of practice for digital health to share best practices, tools, data, specifications, and experiences. This paper compares operational aspects of patient summaries in 14 case studies in Europe, the United States, and across the world, focusing on how patient summary components are used in practice, to promote alignment and a joint understanding that will improve the quality of standards and lower the costs of interoperability.

  16. Study on determination of durability analysis process and fatigue damage parameter for rubber component

    International Nuclear Information System (INIS)

    Moon, Seong In; Cho, Il Je; Woo, Chang Su; Kim, Wan Doo

    2011-01-01

    Rubber components, which have been widely used in the automotive industry as anti-vibration components for many years, are subjected to fluctuating loads, often failing due to the nucleation and growth of defects or cracks. To prevent such failures, it is necessary to understand the fatigue failure mechanism of rubber materials and to evaluate the fatigue life of rubber components. The objective of this study is to develop a durability analysis process for vulcanized rubber components that can predict fatigue life at the initial product design step. A method for determining the nonlinear material constants for FE analysis is proposed. Also, to investigate the applicability of the commonly used damage parameters, fatigue tests and corresponding finite element analyses were carried out, and normal and shear strain were proposed as the fatigue damage parameters for rubber components. Fatigue analysis for automotive rubber components was performed, and the durability analysis process was reviewed.

  17. Evaluation of Parallel Analysis Methods for Determining the Number of Factors

    Science.gov (United States)

    Crawford, Aaron V.; Green, Samuel B.; Levy, Roy; Lo, Wen-Juo; Scott, Lietta; Svetina, Dubravka; Thompson, Marilyn S.

    2010-01-01

    Population and sample simulation approaches were used to compare the performance of parallel analysis using principal component analysis (PA-PCA) and parallel analysis using principal axis factoring (PA-PAF) to identify the number of underlying factors. Additionally, the accuracies of the mean eigenvalue and the 95th percentile eigenvalue criteria…
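    The parallel-analysis procedure compared above can be sketched in NumPy: retain the leading components whose observed correlation-matrix eigenvalues exceed the chosen percentile of eigenvalues from random data of the same shape. The two-factor dataset below is synthetic:

```python
import numpy as np

def parallel_analysis(X, n_sims=200, percentile=95, seed=0):
    """PA-PCA: count leading correlation-matrix eigenvalues exceeding
    the given percentile of eigenvalues from same-shape random data."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        R = rng.normal(size=(n, p))
        sims[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(R, rowvar=False)))[::-1]
    ref = np.percentile(sims, percentile, axis=0)
    k = 0
    for o, r in zip(obs, ref):   # stop at the first eigenvalue below its reference
        if o <= r:
            break
        k += 1
    return k

rng = np.random.default_rng(1)
# Synthetic data: two factors, each driving four of eight variables.
F = rng.normal(size=(300, 2))
L = np.array([[1.0, 1, 1, 1, 0, 0, 0, 0],
              [0.0, 0, 0, 0, 1, 1, 1, 1]])
X = F @ L + 0.5 * rng.normal(size=(300, 8))
print(parallel_analysis(X))  # retains the two real factors
```

    The PA-PAF variant studied in the paper differs only in extracting eigenvalues from a reduced correlation matrix with communality estimates on the diagonal.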

  18. An application of principal component analysis to the clavicle and clavicle fixation devices.

    Science.gov (United States)

    Daruwalla, Zubin J; Courtis, Patrick; Fitzpatrick, Clare; Fitzpatrick, David; Mullett, Hannan

    2010-03-26

    Principal component analysis (PCA) enables the building of statistical shape models of bones and joints. This has been used in conjunction with computer assisted surgery in the past. However, PCA of the clavicle has not been performed. Using PCA, we present a novel method that examines the major modes of size and three-dimensional shape variation in male and female clavicles and suggests a method of grouping the clavicle into size and shape categories. Twenty-one high-resolution computerized tomography scans of the clavicle were reconstructed and analyzed using a specifically developed statistical software package. After performing statistical shape analysis, PCA was applied to study the factors that account for anatomical variation. The first principal component, representing size, accounted for 70.5 percent of anatomical variation. The addition of a further three principal components accounted for almost 87 percent. Using statistical shape analysis, clavicles in males have a greater lateral depth and are longer, wider and thicker than in females. However, the sternal angle in females is larger than in males. PCA confirmed these differences between genders but also noted that men exhibit greater variance and classified clavicles into five morphological groups. This unique approach is the first that standardizes a clavicular orientation. It provides information that is useful to both the biomedical engineer and the clinician. Other applications include implant design with regard to modifying current or designing future clavicle fixation devices. Our findings support the need for further development of clavicle fixation devices and the questioning of whether gender-specific devices are necessary.

  19. An application of principal component analysis to the clavicle and clavicle fixation devices

    Directory of Open Access Journals (Sweden)

    Fitzpatrick David

    2010-03-01

    Background: Principal component analysis (PCA) enables the building of statistical shape models of bones and joints. This has been used in conjunction with computer assisted surgery in the past. However, PCA of the clavicle has not been performed. Using PCA, we present a novel method that examines the major modes of size and three-dimensional shape variation in male and female clavicles and suggests a method of grouping the clavicle into size and shape categories. Materials and methods: Twenty-one high-resolution computerized tomography scans of the clavicle were reconstructed and analyzed using a specifically developed statistical software package. After performing statistical shape analysis, PCA was applied to study the factors that account for anatomical variation. Results: The first principal component, representing size, accounted for 70.5 percent of anatomical variation. The addition of a further three principal components accounted for almost 87 percent. Using statistical shape analysis, clavicles in males have a greater lateral depth and are longer, wider and thicker than in females. However, the sternal angle in females is larger than in males. PCA confirmed these differences between genders but also noted that men exhibit greater variance and classified clavicles into five morphological groups. Discussion and conclusions: This unique approach is the first that standardizes a clavicular orientation. It provides information that is useful to both the biomedical engineer and the clinician. Other applications include implant design with regard to modifying current or designing future clavicle fixation devices. Our findings support the need for further development of clavicle fixation devices and the questioning of whether gender-specific devices are necessary.

  20. PV System Component Fault and Failure Compilation and Analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Klise, Geoffrey Taylor; Lavrova, Olga; Gooding, Renee Lynne

    2018-02-01

    This report describes data collection and analysis of solar photovoltaic (PV) equipment events, which consist of faults and failures that occur during the normal operation of a distributed PV system or PV power plant. We present summary statistics from locations where maintenance data is being collected at various intervals, as well as reliability statistics gathered from that data, consisting of fault/failure distributions and repair distributions for a wide range of PV equipment types.

  1. INTEGRATION OF SYSTEM COMPONENTS AND UNCERTAINTY ANALYSIS - HANFORD EXAMPLES

    International Nuclear Information System (INIS)

    Wood, M.I.

    2009-01-01

    • Deterministic 'One Off' analyses as the basis for evaluating sensitivity and uncertainty relative to the reference case. • Spatial coverage identical to the reference case. • Two types of analysis assumptions: minimax parameter values around reference case conditions, and 'What If' cases that change the reference case condition and associated parameter values. • No conclusions about the likelihood of the estimated result other than a qualitative expectation that the actual outcome should tend toward the reference case estimate.

  2. Fluoride in the Serra Geral Aquifer System: Source Evaluation Using Stable Isotopes and Principal Component Analysis

    OpenAIRE

    Nanni, Arthur Schmidt; Roisenberg, Ari; de Hollanda, Maria Helena Bezerra Maia; Marimon, Maria Paula Casagrande; Viero, Antonio Pedro; Scheibe, Luiz Fernando

    2013-01-01

    Groundwater with anomalous fluoride content and water mixture patterns were studied in the fractured Serra Geral Aquifer System, a basaltic to rhyolitic geological unit, using a principal component analysis interpretation of groundwater chemical data from 309 deep wells distributed in the Rio Grande do Sul State, Southern Brazil. A four-component model that explains 81% of the total variance in the Principal Component Analysis is suggested. Six hydrochemical groups were identified. δ18O and δ...

  3. Nominal Performance Biosphere Dose Conversion Factor Analysis

    International Nuclear Information System (INIS)

    Wasiolek, M.

    2000-01-01

    The purpose of this report was to document the process leading to development of the Biosphere Dose Conversion Factors (BDCFs) for the postclosure nominal performance of the potential repository at Yucca Mountain. BDCF calculations concerned twenty-four radionuclides. This selection included sixteen radionuclides that may be significant nominal performance dose contributors during the compliance period of up to 10,000 years, five additional radionuclides of importance for up to 1 million years postclosure, and three relatively short-lived radionuclides important for the human intrusion scenario. Consideration of radionuclide buildup in soil caused by previous irrigation with contaminated groundwater was taken into account in the BDCF development. The effect of climate evolution, from the current arid conditions to a wetter and cooler climate, on the BDCF values was evaluated. The analysis included consideration of the contributions of different exposure pathways to the BDCFs. Calculations of nominal performance BDCFs used the GENII-S computer code in a series of probabilistic realizations to propagate the uncertainties of input parameters into the output. BDCFs for the nominal performance, when combined with the concentrations of radionuclides in groundwater, allow calculation of potential radiation doses to the receptor of interest. Calculated estimates of radionuclide concentration in groundwater result from the saturated zone modeling. The integration of the biosphere modeling results (BDCFs) with the outcomes of the other component models is accomplished in the Total System Performance Assessment (TSPA) to calculate doses to the receptor of interest from radionuclides postulated to be released to the environment from the potential repository at Yucca Mountain.
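    The final dose calculation described above is, in outline, a sum of BDCF × groundwater-concentration products over radionuclides. The nuclides, BDCF values and concentrations below are purely illustrative placeholders, not values from the analysis:

```python
# Sketch of the dose estimate: dose = sum over radionuclides of
# BDCF_i * concentration_i. The BDCF values (Sv/yr per Bq/m^3) and
# groundwater concentrations (Bq/m^3) are hypothetical placeholders.
bdcf = {"Tc-99": 1.1e-9, "I-129": 4.6e-8, "Np-237": 8.3e-7}
conc = {"Tc-99": 500.0, "I-129": 20.0, "Np-237": 0.5}

dose = sum(bdcf[n] * conc[n] for n in bdcf)
print(f"annual dose ~ {dose:.3e} Sv/yr")
```

    In the actual TSPA the concentrations come from the saturated zone model and the BDCFs are probabilistic distributions rather than point values.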

  4. Disruptive Event Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2000-12-28

    The purpose of this report was to document the process leading to, and the results of, development of radionuclide-, exposure scenario-, and ash thickness-specific Biosphere Dose Conversion Factors (BDCFs) for the postulated postclosure extrusive igneous event (volcanic eruption) at Yucca Mountain. BDCF calculations were done for seventeen radionuclides. The selection of radionuclides included those that may be significant dose contributors during the compliance period of up to 10,000 years, as well as radionuclides of importance for up to 1 million years postclosure. The approach documented in this report takes into account human exposure during three different phases at the time of, and after, volcanic eruption. Calculations of disruptive event BDCFs used the GENII-S computer code in a series of probabilistic realizations to propagate the uncertainties of input parameters into the output. The pathway analysis included consideration of the contributions of different exposure pathways to the BDCFs. BDCFs for volcanic eruption, when combined with the concentration of radioactivity deposited by eruption on the soil surface, allow calculation of potential radiation doses to the receptor of interest. Calculation of radioactivity deposition is outside the scope of this report and so is the transport of contaminated ash from the volcano to the location of the receptor. The integration of the biosphere modeling results (BDCFs) with the outcomes of the other component models is accomplished in the Total System Performance Assessment (TSPA), in which doses are calculated to the receptor of interest from radionuclides postulated to be released to the environment from the potential repository at Yucca Mountain.

  5. Principal Component Analysis of AIRS and CrIS Data

    Science.gov (United States)

    Aumann, H. H.; Manning, Evan

    2015-01-01

    Synthetic eigenvectors (EVs) used for the statistical analysis of the PC reconstruction residuals of large ensembles of data are a novel tool for the analysis of data from hyperspectral infrared sounders such as the Atmospheric Infrared Sounder (AIRS) on EOS Aqua and the Cross-track Infrared Sounder (CrIS) on the SUOMI polar orbiting satellites. Unlike empirical EVs, which are derived from the observed spectra, the synthetic EVs are derived from a large ensemble of spectra calculated under the assumption that, given a state of the atmosphere, the spectra produced by the instrument can be accurately calculated. The synthetic EVs are then used to reconstruct the observed spectra. The analysis of the differences between the observed spectra and the reconstructed spectra for Simultaneous Nadir Overpasses of tropical oceans reveals unexpected differences at more than the 200 mK level under relatively clear conditions, particularly in the mid-wave water vapor channels of CrIS. The repeatability of these differences using independently trained synthetic EVs and results from different years appears to rule out inconsistencies in the radiative transfer algorithm or the data simulation. The reasons for these discrepancies are under evaluation.

  6. PWSCC Growth Assessment Model Considering Stress Triaxiality Factor for Primary Alloy 600 Components

    Directory of Open Access Journals (Sweden)

    Jong-Sung Kim

    2016-08-01

    We propose a primary water stress corrosion cracking (PWSCC) initiation model of Alloy 600 that considers the stress triaxiality factor for application in finite element analysis. We investigated the correlation between stress triaxiality effects and PWSCC growth behavior in cold-worked Alloy 600 steam generator tubes, and identified an additional stress triaxiality factor that can be added to Garud's PWSCC initiation model. By applying the proposed PWSCC initiation model considering the stress triaxiality factor, PWSCC growth simulations based on the macroscopic phenomenological damage mechanics approach were carried out on the PWSCC growth tests of various cold-worked Alloy 600 steam generator tubes and compact tension specimens. The PWSCC growth behavior predicted by the finite element analysis is in good agreement with the experimental results.

  7. The transcription factor Mlc promotes Vibrio cholerae biofilm formation through repression of phosphotransferase system components.

    Science.gov (United States)

    Pickering, Bradley S; Lopilato, Jane E; Smith, Daniel R; Watnick, Paula I

    2014-07-01

    The phosphoenolpyruvate phosphotransferase system (PTS) is a multicomponent signal transduction cascade that regulates diverse aspects of bacterial cellular physiology in response to the availability of high-energy sugars in the environment. Many PTS components are repressed at the transcriptional level when the substrates they transport are not available. In Escherichia coli, the transcription factor Mlc (for makes large colonies) represses transcription of the genes encoding enzyme I (EI), histidine protein (HPr), and the glucose-specific enzyme IIBC (EIIBC(Glc)) in defined media that lack PTS substrates. When glucose is present, the unphosphorylated form of EIIBC(Glc) sequesters Mlc to the cell membrane, preventing its interaction with DNA. Very little is known about Vibrio cholerae Mlc. We found that V. cholerae Mlc activates biofilm formation in LB broth but not in defined medium supplemented with either pyruvate or glucose. Therefore, we questioned whether V. cholerae Mlc functions differently than E. coli Mlc. Here we have shown that, like E. coli Mlc, V. cholerae Mlc represses transcription of PTS components in both defined medium and LB broth and that E. coli Mlc is able to rescue the biofilm defect of a V. cholerae Δmlc mutant. Furthermore, we provide evidence that Mlc indirectly activates transcription of the vps genes by repressing expression of EI. Because activation of the vps genes by Mlc occurs under only a subset of the conditions in which repression of PTS components is observed, we conclude that additional inputs present in LB broth are required for activation of vps gene transcription by Mlc. Copyright © 2014, American Society for Microbiology. All Rights Reserved.

  8. Hand function evaluation: a factor analysis study.

    Science.gov (United States)

    Jarus, T; Poremba, R

    1993-05-01

    The purpose of this study was to investigate hand function evaluations. Factor analysis with varimax rotation was used to assess the fundamental characteristics of the items included in the Jebsen Hand Function Test and the Smith Hand Function Evaluation. The study sample consisted of 144 subjects without disabilities and 22 subjects with Colles fracture. Results suggest a four-factor solution: Factor I--pinch movement; Factor II--grasp; Factor III--target accuracy; and Factor IV--activities of daily living. These categories differentiated the subjects without Colles fracture from the subjects with Colles fracture. A hand function evaluation consisting of these four factors would be useful. Such an evaluation, suitable for current clinical purposes, is provided.

  9. Component Analysis of Long-Lag, Wide-Pulse Gamma-Ray Burst ...

    Indian Academy of Sciences (India)

    Principal Component Analysis of Long-Lag, Wide-Pulse Gamma-Ray Burst Data. Zhao-Yang Peng & Wen-Shuai Liu, Department of Physics, Yunnan Normal University, Kunming 650500, China. E-mail: pzy@ynao.ac.cn. Abstract: We have carried out a Principal Component Analysis (PCA) of the temporal and spectral ...

  10. PREVALENCE OF METABOLIC SYNDROME IN YOUNG MEXICANS: A SENSITIVITY ANALYSIS ON ITS COMPONENTS.

    Science.gov (United States)

    Murguía-Romero, Miguel; Jiménez-Flores, J Rafael; Sigrist-Flores, Santiago C; Tapia-Pancardo, Diana C; Jiménez-Ramos, Arnulfo; Méndez-Cruz, A René; Villalobos-Molina, Rafael

    2015-07-28

    Obesity is a worldwide epidemic, and the high prevalence of type II diabetes (DM2) and cardiovascular disease (CVD) is in great part a consequence of that epidemic. Metabolic syndrome (MetS) is a useful tool to estimate the risk of a young population evolving to DM2 and CVD. Our aims were to estimate the MetS prevalence in young Mexicans, and to evaluate each parameter as an independent indicator through a sensitivity analysis. The prevalence of MetS was estimated in 6 063 young people of the México City metropolitan area. A sensitivity analysis was conducted to estimate the performance of each of the components of MetS as an indicator of the presence of MetS itself. Five statistics of the sensitivity analysis were calculated for each MetS component and the other parameters included: sensitivity, specificity, positive predictive value or precision, negative predictive value, and accuracy. The prevalence of MetS in the young Mexican population was estimated to be 13.4%. Waist circumference presented the highest sensitivity (96.8% women; 90.0% men), blood pressure presented the highest specificity for women (97.7%) and glucose for men (91.0%). When all five statistics are considered, triglycerides is the component with the highest values, showing a value of 75% or more in four of them. Differences by sex are detected in the averages of all components of MetS in young people without alterations. Young Mexicans are highly prone to acquire MetS: 71% have at least one and up to five MetS parameters altered, and 13.4% of them have MetS. Of all five components of MetS, waist circumference presented the highest sensitivity as a predictor of MetS, and triglycerides is the best parameter if a single factor is to be taken as the sole predictor of MetS in the young Mexican population; triglycerides is also the parameter with the highest accuracy. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.

  11. Mapping brain activity in gradient-echo functional MRI using principal component analysis

    Science.gov (United States)

    Khosla, Deepak; Singh, Manbir; Don, Manuel

    1997-05-01

    The detection of sites of brain activation in functional MRI has been a topic of immense research interest and many techniques have been proposed to this end. Recently, principal component analysis (PCA) has been applied to extract the activated regions and their time course of activation. This method is based on the assumption that the activation is orthogonal to other signal variations such as brain motion, physiological oscillations and other uncorrelated noises. A distinct advantage of this method is that it does not require any knowledge of the time course of the true stimulus paradigm. This technique is well suited to EPI image sequences where the sampling rate is high enough to capture the effects of physiological oscillations. In this work, we propose and apply two methods that are based on PCA to conventional gradient-echo images and investigate their usefulness as tools to extract reliable information on brain activation. The first method is a conventional technique where a single image sequence with alternating on and off stages is subjected to a principal component analysis. The second method is a PCA-based approach called the common spatial factor analysis technique (CSF). As the name suggests, this method relies on common spatial factors between the above fMRI image sequence and a background fMRI. We have applied these methods to identify active brain areas during visual stimulation and motor tasks. The results from these methods are compared to those obtained by using the standard cross-correlation technique. We found good agreement in the areas identified as active across all three techniques. The results suggest that PCA and CSF methods have good potential in detecting the true stimulus correlated changes in the presence of other interfering signals.
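
    The core idea of the PCA approach described above can be sketched on synthetic data. Everything below (the block paradigm, the 40x100 data matrix, the amplitudes) is illustrative and not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "fMRI" data: 40 time points x 100 voxels.
# A block on/off paradigm drives 10 "active" voxels on top of noise.
paradigm = np.tile([0.0] * 5 + [1.0] * 5, 4)        # 40 time points
data = rng.normal(0.0, 1.0, size=(40, 100))
data[:, :10] += 3.0 * paradigm[:, None]             # activation signal

# PCA: centre each voxel's time course, then take the SVD.
centred = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)

time_course = U[:, 0] * s[0]     # temporal expression of component 1
spatial_map = Vt[0]              # voxel loadings of component 1

# The first component's time course tracks the paradigm (up to an
# arbitrary sign) even though the paradigm was never used in the fit.
r = np.corrcoef(time_course, paradigm)[0, 1]
```

The key property the abstract highlights is visible here: the activation time course is recovered without any reference to the true stimulus paradigm.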

  12. Path Analysis of Grain Yield and Yield Components and Some Agronomic Traits in Bread Wheat

    Directory of Open Access Journals (Sweden)

    Mohsen Janmohammadi

    2014-01-01

    Full Text Available Development of new bread wheat cultivars needs efficient tools to monitor trait association in a breeding program. This investigation aimed to characterize grain yield components and some agronomic traits related to bread wheat grain yield. The efficiency of a breeding program depends mainly on the direction of the correlation between different traits and the relative importance of each component involved in contributing to grain yield. Correlation and path analysis were carried out in 56 bread wheat genotypes grown under field conditions of Maragheh, Iran. Observations were recorded on 18 wheat traits, and correlation coefficient analysis revealed that grain yield was positively correlated with stem diameter, spike length, floret number, spikelet number, grain diameter, grain length and 1000 seed weight. According to the variance inflation factor (VIF) and tolerance as multicollinearity statistics, there are inconsistent relationships among the variables, and all traits could be considered as first-order variables (Model I) with grain yield as the response variable, due to the low multicollinearity of all measured traits. In the path coefficient analysis, grain yield represented the dependent variable and the spikelet number and 1000 seed weight traits were the independent ones. Our results indicated that the number of spikelets per spike, leaf width and 1000 seed weight, followed by grain length, grain diameter and grain number per spike, were the traits related to higher grain yield. The above-mentioned traits, along with their indirect causal factors, should be considered simultaneously as effective selection criteria for developing high-yielding genotypes, because of their direct positive contribution to grain yield.

  13. Fault feature extraction method based on local mean decomposition Shannon entropy and improved kernel principal component analysis model

    Directory of Open Access Journals (Sweden)

    Jinlu Sheng

    2016-07-01

    Full Text Available To effectively extract the typical features of a bearing, a new method relating local mean decomposition, Shannon entropy and an improved kernel principal component analysis model is proposed. First, features are extracted by a time–frequency domain method, local mean decomposition, and the Shannon entropy is used to process the original separated product functions, so as to obtain the original features. However, the extracted features still contain superfluous information, so a nonlinear multi-feature processing technique, kernel principal component analysis, is introduced to fuse the features. The kernel principal component analysis is improved by a weight factor. The extracted features were input into a Morlet wavelet kernel support vector machine to obtain a classification model of the bearing's running state, by which the running state was identified. Both test cases and actual cases were analyzed.
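
    The Shannon-entropy feature extraction step mentioned in this abstract can be sketched as a simple histogram-based entropy; the bin count and the signals below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def shannon_entropy(signal, bins=16):
    """Histogram-based Shannon entropy (in bits) of a 1-D signal,
    a common scalar feature in condition monitoring."""
    hist, _ = np.histogram(signal, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                     # 0 * log(0) is treated as 0
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(4)
flat = shannon_entropy(np.ones(1000))            # degenerate signal
noisy = shannon_entropy(rng.normal(size=4000))   # broad histogram
```

A constant signal yields zero entropy, while an irregular signal approaches the upper bound log2(bins); such scalars are what would then be fed into the kernel PCA fusion stage.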

  14. Blind Component Separation in Wavelet Space: Application to CMB Analysis

    Directory of Open Access Journals (Sweden)

    J. Delabrouille

    2005-09-01

    Full Text Available It is a recurrent issue in astronomical data analysis that observations are incomplete maps with missing patches or intentionally masked parts. In addition, many astrophysical emissions are nonstationary processes over the sky. All these effects impair data processing techniques which work in the Fourier domain. Spectral matching ICA (SMICA is a source separation method based on spectral matching in Fourier space designed for the separation of diffuse astrophysical emissions in cosmic microwave background observations. This paper proposes an extension of SMICA to the wavelet domain and demonstrates the effectiveness of wavelet-based statistics for dealing with gaps in the data.

  15. A factor analysis to find critical success factors in retail brand

    Directory of Open Access Journals (Sweden)

    Naser Azad

    2013-03-01

    Full Text Available The present exploratory study aims to find critical components of retail brand among some retail stores. The study seeks to build a brand name at the retail level and looks to find important factors affecting it. Customer behavior is largely influenced when the first retail customer experience is formed. These factors have direct impacts on customer experience and satisfaction in the retail industry. The proposed study performs an empirical investigation on two well-known retail stores located in the city of Tehran, Iran. Using a sample of 265 regular customers, the study applies factor analysis and extracts four main factors, including related brand, product benefits, customer welfare strategy and corporate profits, from the existing 31 factors in the literature.

  16. Importance Analysis of In-Service Testing Components for Ulchin Unit 3

    International Nuclear Information System (INIS)

    Dae-Il Kan; Kil-Yoo Kim; Jae-Joo Ha

    2002-01-01

    We performed an importance analysis of In-Service Testing (IST) components for Ulchin Unit 3 using the integrated evaluation method for categorizing component safety significance developed in this study. The importance analysis using the developed method is initiated by ranking the component importance using quantitative PSA information. The importance analysis of the IST components not modeled in the PSA is performed through the engineering judgment, based on the expertise of PSA, and the quantitative and qualitative information for the IST components. The PSA scope for importance analysis includes not only Level 1 and 2 internal PSA but also Level 1 external and shutdown/low power operation PSA. The importance analysis results of valves show that 167 (26.55%) of the 629 IST valves are HSSCs and 462 (73.45%) are LSSCs. Those of pumps also show that 28 (70%) of the 40 IST pumps are HSSCs and 12 (30%) are LSSCs. (authors)

  17. Experimental investigation of the factors influencing the polymer-polymer bond strength during two-component injection moulding

    DEFF Research Database (Denmark)

    Islam, Aminul; Hansen, Hans Nørgaard; Bondo, Martin

    2010-01-01

    Two-component injection moulding is a commercially important manufacturing process and a key technology for combining different material properties in a single plastic product. It is also one of the most industrially adaptive process chains for manufacturing so-called moulded interconnect devices (MIDs). Many fascinating applications of two-component or multi-component polymer parts are restricted due to the weak interfacial adhesion of the polymers. A thorough understanding of the factors that influence the bond strength of polymers is necessary for multi-component polymer processing. This paper investigates the effects of the process conditions and geometrical factors on the bond strength of two-component polymer parts and identifies the factors which can effectively control the adhesion between two polymers. The effects of environmental conditions on the bond strength are also investigated.

  18. Modeling and Analysis of Component Faults and Reliability

    DEFF Research Database (Denmark)

    Le Guilly, Thibaut; Olsen, Petur; Ravn, Anders Peter

    2016-01-01

    This chapter presents a process to design and validate models of reactive systems in the form of communicating timed automata. The models are extended with faults associated with probabilities of occurrence. This enables a fault tree analysis of the system using minimal cut sets that are automatically generated. The stochastic information on the faults is used to estimate the reliability of the fault-affected system. The reliability is given with respect to properties of the system state space. We illustrate the process on a concrete example using the Uppaal model checker for validating the ideal system model and the fault modeling. Then the statistical version of the tool, UppaalSMC, is used to find reliability estimates.

  19. Integrating human factors into process hazard analysis

    International Nuclear Information System (INIS)

    Kariuki, S.G.; Loewe, K.

    2007-01-01

    A comprehensive process hazard analysis (PHA) needs to address human factors. This paper describes an approach that systematically identifies human error in process design and the human factors that influence its production and propagation. It is deductive in nature and therefore considers human error as a top event. The combinations of different factors that may lead to this top event are analysed. It is qualitative in nature and is used in combination with other PHA methods. The method has the advantage that it does not treat operator error as the sole contributor to human failure within a system, but rather as a combination of all underlying factors.

  20. Data-Parallel Mesh Connected Components Labeling and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, Cyrus; Childs, Hank; Gaither, Kelly

    2011-04-10

    We present a data-parallel algorithm for identifying and labeling the connected sub-meshes within a domain-decomposed 3D mesh. The identification task is challenging in a distributed-memory parallel setting because connectivity is transitive and the cells composing each sub-mesh may span many or all processors. Our algorithm employs a multi-stage application of the Union-find algorithm and a spatial partitioning scheme to efficiently merge information across processors and produce a global labeling of connected sub-meshes. Marking each vertex with its corresponding sub-mesh label allows us to isolate mesh features based on topology, enabling new analysis capabilities. We briefly discuss two specific applications of the algorithm and present results from a weak scaling study. We demonstrate the algorithm at concurrency levels up to 2197 cores and analyze meshes containing up to 68 billion cells.
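
    The Union-find core of the labeling algorithm described above can be sketched in a few lines; the tiny "mesh" below is a hypothetical stand-in for the domain-decomposed 3D mesh, not the authors' implementation:

```python
# Union-find with path compression and union by size, the building
# block for merging sub-mesh labels across domain fragments.
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

# Label connected sub-meshes of a toy 7-cell mesh: each edge says two
# cells are connected; cells 3 and 6 are isolated.
edges = [(0, 1), (1, 2), (4, 5)]
uf = UnionFind(7)
for a, b in edges:
    uf.union(a, b)

labels = [uf.find(i) for i in range(7)]
n_components = len(set(labels))
```

In the distributed setting, each processor runs this locally and a second merge stage unions the labels of cells shared across processor boundaries to produce the global labeling.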

  1. A Bayesian Analysis of Unobserved Component Models Using Ox

    Directory of Open Access Journals (Sweden)

    Charles S. Bos

    2011-05-01

    Full Text Available This article details a Bayesian analysis of the Nile river flow data, using a similar state space model as other articles in this volume. For this data set, Metropolis-Hastings and Gibbs sampling algorithms are implemented in the programming language Ox. These Markov chain Monte Carlo methods only provide output conditioned upon the full data set. For filtered output, conditioning only on past observations, the particle filter is introduced. The sampling methods are flexible, and this advantage is used to extend the model to incorporate a stochastic volatility process. The volatility changes both in the Nile data and also in daily S&P 500 return data are investigated. The posterior density of parameters and states is found to provide information on which elements of the model are easily identifiable, and which elements are estimated with less precision.
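
    A random-walk Metropolis sampler of the kind this abstract describes can be sketched for a single level parameter; the data, proposal scale and flat prior below are illustrative assumptions, not the article's Ox implementation or the actual Nile series:

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(900.0, 100.0, size=100)   # stand-in for annual flow levels

# Random-walk Metropolis for the level mu, with sigma fixed at its
# sample value and a flat prior on mu (a deliberately minimal sketch).
sigma = data.std()

def log_post(mu):
    return -0.5 * np.sum((data - mu) ** 2) / sigma ** 2

mu = data.mean()
draws = []
for _ in range(5000):
    prop = mu + rng.normal(0.0, 20.0)
    # Accept with probability min(1, posterior ratio)
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop
    draws.append(mu)

posterior = np.array(draws[1000:])          # discard burn-in
```

The posterior draws concentrate around the sample mean with spread roughly sigma/sqrt(n), which is the kind of parameter-identifiability information the article reads off the posterior density.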

  2. Determination of inorganic component in plastics by neutron activation analysis

    International Nuclear Information System (INIS)

    Mateus, Sandra Fonseca; Saiki, Mitiko

    1995-01-01

    In order to identify possible sources of heavy metals in municipal solid waste incinerator ashes, plastic materials originating mainly from household waste were analyzed using the instrumental neutron activation analysis method. Plastic samples and synthetic standards of the elements were irradiated at the IEA-R1 nuclear reactor for 8 h under a thermal neutron flux of about 10^13 n cm^-2 s^-1. After an adequate decay time, counting was carried out using a hyperpure Ge detector, and the concentrations of the elements As, Ba, Br, Cd, Co, Cr, Fe, Sb, Sc, Se, Sn, Ti and Zn were determined. For some samples, not all of these elements were detected. Moreover, the range of concentrations determined in samples of similar type and colour varied from a few ppb to the percent level. In general, coloured and opaque plastic samples presented higher concentrations of the elements than transparent and milky plastics. The precision of the results was also evaluated. (author). 3 refs., 2 tabs

  3. Competition analysis on the operating system market using principal component analysis

    Directory of Open Access Journals (Sweden)

    Brătucu, G.

    2011-01-01

    Full Text Available The operating system market has evolved greatly. The largest software producer in the world, Microsoft, dominates the operating systems segment. With three operating systems (Windows XP, Windows Vista and Windows 7), the company held a market share of 87.54% in January 2011. Over time, open source operating systems have begun to penetrate the market, strongly affecting other manufacturers. Companies such as Apple Inc. and Google Inc. have penetrated the operating system market. This paper aims to compare the best-selling operating systems on the market in terms of their defining characteristics. For this purpose, the principal component analysis method was used.

  4. Nonlinear Denoising and Analysis of Neuroimages With Kernel Principal Component Analysis and Pre-Image Estimation

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Abrahamsen, Trine Julie; Madsen, Kristoffer Hougaard

    2012-01-01

    We investigate the use of kernel principal component analysis (PCA) and the inverse problem known as pre-image estimation in neuroimaging: i) We explore kernel PCA and pre-image estimation as a means for image denoising as part of the image preprocessing pipeline. Evaluation of the denoising procedure is performed within a data-driven split-half evaluation framework. ii) We introduce manifold navigation for exploration of a nonlinear data manifold, and illustrate how pre-image estimation can be used to generate brain maps in the continuum between experimentally defined brain states/classes.

  5. Factors Influencing the Concentration of Certain Liposoluble Components in Cow and Goat Milk: A Review

    Directory of Open Access Journals (Sweden)

    Anamaria COZMA

    2014-09-01

    Full Text Available Milk fat contains a large number of fatty acids (FA) and other liposoluble components that exhibit various effects on human health. The present article reviews some of the factors affecting FA, vitamin A and cholesterol concentrations in milk from dairy cow and goat. Milk fat composition is linked to many factors, both intrinsic (animal species, breed, lactation stage) and extrinsic (environmental). The effect of animal species on milk fat composition is important, as reflected by higher concentrations of short- and medium-chain FA, vitamin A and cholesterol in goat than in cow milk. In a given ruminant species, the effects linked to breed are significant but limited, and they can only be achieved over long periods of time. The lactation stage has an important effect on milk FA composition, mainly linked to body fat mobilisation in early lactation, but it only lasts a few weeks each year. Furthermore, changes in feeding have a marked influence on milk fat composition. Changing the forages in the diet of ruminants, pasture in particular, or supplementing lipids to the diet, represents an efficient means to modify milk fat composition by decreasing saturated FA and cholesterol, and increasing cis-9,trans-11-CLA and vitamin A. Nutrition therefore constitutes a natural strategy to rapidly modulate milk FA, vitamin A and cholesterol composition, with the overall aim of improving the long-term health of consumers.

  6. Independent component analysis of dynamic contrast-enhanced computed tomography images

    Energy Technology Data Exchange (ETDEWEB)

    Koh, T S [School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Ave, Singapore 639798 (Singapore); Yang, X [School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Ave, Singapore 639798 (Singapore); Bisdas, S [Department of Diagnostic and Interventional Radiology, Johann Wolfgang Goethe University Hospital, Theodor-Stern-Kai 7, D-60590 Frankfurt (Germany); Lim, C C T [Department of Neuroradiology, National Neuroscience Institute, 11 Jalan Tan Tock Seng, Singapore 308433 (Singapore)

    2006-10-07

    Independent component analysis (ICA) was applied on dynamic contrast-enhanced computed tomography images of cerebral tumours to extract spatial component maps of the underlying vascular structures, which correspond to different haemodynamic phases as depicted by the passage of the contrast medium. The locations of arteries, veins and tumours can be separately identified on these spatial component maps. As the contrast enhancement behaviour of the cerebral tumour differs from the normal tissues, ICA yields a tumour component map that reveals the location and extent of the tumour. Tumour outlines can be generated using the tumour component maps, with relatively simple segmentation methods. (note)
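
    The blind separation at the heart of this approach can be sketched with a minimal symmetric FastICA on synthetic mixtures; the two sources, the mixing matrix and the tanh nonlinearity below are illustrative assumptions, not the imaging pipeline of the note:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 8, 2000)
s1 = np.sign(np.sin(3 * t))              # square wave (non-Gaussian)
s2 = (t * 2) % 2 - 1                     # sawtooth (non-Gaussian)
S = np.vstack([s1, s2])                  # "haemodynamic phases"
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # unknown mixing
X = A @ S                                # observed mixtures

# Whiten the observations.
Xc = X - X.mean(axis=1, keepdims=True)
cov = Xc @ Xc.T / Xc.shape[1]
d, E = np.linalg.eigh(cov)
Xw = E @ np.diag(d ** -0.5) @ E.T @ Xc

# Symmetric FastICA iteration with tanh contrast.
W = rng.normal(size=(2, 2))
for _ in range(200):
    WX = W @ Xw
    g = np.tanh(WX)
    g_prime = 1.0 - g ** 2
    W = g @ Xw.T / Xw.shape[1] - np.diag(g_prime.mean(axis=1)) @ W
    u, _, vt = np.linalg.svd(W)          # symmetric decorrelation:
    W = u @ vt                           # W <- (W W^T)^{-1/2} W

recovered = W @ Xw
# Each recovered row should match one source up to sign and permutation.
match = np.abs(np.corrcoef(np.vstack([S, recovered]))[:2, 2:])
```

In the imaging setting the rows of the data matrix are pixel time courses rather than 1-D waveforms, but the separation principle is the same.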

  7. Utilização da técnica por componentes principais (acp e fator de iluminação, no mapeamento da cultura do café em relevo montanhoso Coffee crop mapping using principal component analysis and illumination factor for complex relief

    Directory of Open Access Journals (Sweden)

    Rubens A. C. Lamparelli

    2011-06-01

    Full Text Available The objective of this work was to evaluate the information obtained from Landsat/TM5 satellite images, using Principal Component Analysis (PCA) and an Illumination Factor derived from a Digital Elevation Model computed from ASTER images, for mapping coffee areas in mountainous terrain. The three images used were corrected for atmospheric effects and covered the crop cycle. The principal components were calculated and the first two, which contained 94% of the information, were chosen for defining the samples. The samples resulting from the PCA were used in a supervised classification, whose result was compared with a conventional classification and a conventional multitemporal classification. Classification accuracy was assessed through the Overall Accuracy and the Kappa Coefficient, using a mask of the region's coffee area as reference. The results showed that the PCA technique was effective in establishing illumination classes, as well as in choosing the samples, although these did not represent the effectively classified area. Consequently, the classifications were more accurate, especially the one that considered all pixels of each image classified individually by the PCA method, confirming the importance of the multitemporal aspect.

  8. Decoding the auditory brain with canonical component analysis.

    Science.gov (United States)

    de Cheveigné, Alain; Wong, Daniel D E; Di Liberto, Giovanni M; Hjortkjær, Jens; Slaney, Malcolm; Lalor, Edmund

    2018-05-15

    The relation between a stimulus and the evoked brain response can shed light on perceptual processes within the brain. Signals derived from this relation can also be harnessed to control external devices for Brain Computer Interface (BCI) applications. While the classic event-related potential (ERP) is appropriate for isolated stimuli, more sophisticated "decoding" strategies are needed to address continuous stimuli such as speech, music or environmental sounds. Here we describe an approach based on Canonical Correlation Analysis (CCA) that finds the optimal transform to apply to both the stimulus and the response to reveal correlations between the two. Compared to prior methods based on forward or backward models for stimulus-response mapping, CCA finds significantly higher correlation scores, thus providing increased sensitivity to relatively small effects, and supports classifier schemes that yield higher classification scores. CCA strips the brain response of variance unrelated to the stimulus, and the stimulus representation of variance that does not affect the response, and thus improves observations of the relation between stimulus and response. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
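
    A minimal CCA of the kind described above can be sketched via QR decompositions of the two data blocks; the synthetic "stimulus" and "response" matrices sharing one latent signal are illustrative assumptions, not the authors' EEG data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
latent = rng.normal(size=n)                  # shared stimulus-driven signal
# Stimulus features (3 columns) and response channels (4 columns), each a
# random mixture of the latent signal and an independent nuisance signal.
X = np.column_stack([latent, rng.normal(size=n)]) @ rng.normal(size=(2, 3))
Y = np.column_stack([latent, rng.normal(size=n)]) @ rng.normal(size=(2, 4))
X += 0.05 * rng.normal(size=X.shape)
Y += 0.05 * rng.normal(size=Y.shape)

# CCA: orthonormalise each centred block, then SVD the cross-product.
Xc = X - X.mean(axis=0)
Yc = Y - Y.mean(axis=0)
Qx, Rx = np.linalg.qr(Xc)
Qy, Ry = np.linalg.qr(Yc)
U, s, Vt = np.linalg.svd(Qx.T @ Qy)
a = np.linalg.solve(Rx, U[:, 0])             # stimulus-side weights
b = np.linalg.solve(Ry, Vt[0])               # response-side weights
cc = s[0]                                    # first canonical correlation
corr = np.corrcoef(Xc @ a, Yc @ b)[0, 1]
```

The singular values of `Qx.T @ Qy` are exactly the canonical correlations, so `corr` between the two canonical variates reproduces `cc`; the variance unrelated to the shared signal is stripped away, which is the sensitivity gain the abstract reports.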

  9. Using principal component analysis for selecting network behavioral anomaly metrics

    Science.gov (United States)

    Gregorio-de Souza, Ian; Berk, Vincent; Barsamian, Alex

    2010-04-01

    This work addresses new approaches to behavioral analysis of networks and hosts for the purposes of security monitoring and anomaly detection. Most commonly used approaches simply implement anomaly detectors for one, or a few, simple metrics, and those metrics can exhibit unacceptable false alarm rates. For instance, the anomaly score of network communication is defined as the reciprocal of the likelihood that a given host uses a particular protocol (or destination); this definition may result in an unrealistically high threshold for alerting to avoid being flooded by false positives. We demonstrate that selecting and adapting the metrics and thresholds, on a host-by-host or protocol-by-protocol basis, can be done by established multivariate analyses such as PCA. We show how to determine one or more metrics, for each network host, that record the highest available amount of information regarding the baseline behavior and show relevant deviances reliably. We describe the methodology used to pick from a large selection of available metrics, and illustrate a method for comparing the resulting classifiers. Using our approach we are able to reduce the resources required to properly identify misbehaving hosts, protocols, or networks, by dedicating system resources to only those metrics that actually matter in detecting network deviations.
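
    One common way to turn a PCA baseline into an anomaly score is the residual to the principal subspace; the per-host metrics and thresholds below are hypothetical, and this sketch is a generic illustration rather than the authors' system:

```python
import numpy as np

rng = np.random.default_rng(6)
# Rows = observations of three per-host metrics (e.g. flows, bytes,
# distinct ports); under normal behaviour the columns are correlated.
base = rng.normal(size=(200, 1))
normal = np.hstack([base, 2 * base, -base]) + 0.1 * rng.normal(size=(200, 3))

# Fit the baseline subspace: here one principal component suffices.
mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
principal = Vt[:1]                       # (1, 3) baseline direction

def residual(x):
    """Distance from an observation to the baseline PCA subspace."""
    z = x - mean
    return float(np.linalg.norm(z - z @ principal.T @ principal))

typical = residual(np.array([1.0, 2.0, -1.0]))    # follows the pattern
anomalous = residual(np.array([1.0, -2.0, 1.0]))  # violates the correlation
```

An observation that respects the learned metric correlations scores near zero, while one that breaks them scores high, so a single residual threshold can replace many per-metric thresholds.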

  10. Functional principal component analysis of glomerular filtration rate curves after kidney transplant.

    Science.gov (United States)

    Dong, Jianghu J; Wang, Liangliang; Gill, Jagbir; Cao, Jiguo

    2017-01-01

    This article is motivated by some longitudinal clinical data of kidney transplant recipients, where kidney function progression is recorded as the estimated glomerular filtration rates at multiple time points post kidney transplantation. We propose to use the functional principal component analysis method to explore the major source of variations of glomerular filtration rate curves. We find that the estimated functional principal component scores can be used to cluster glomerular filtration rate curves. Ordering functional principal component scores can detect abnormal glomerular filtration rate curves. Finally, functional principal component analysis can effectively estimate missing glomerular filtration rate values and predict future glomerular filtration rate values.
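
    On a regular time grid, the functional PCA described above reduces to an eigen-decomposition of the sample covariance of the discretised curves; the two synthetic trajectory groups below are illustrative, not the kidney-transplant data:

```python
import numpy as np

rng = np.random.default_rng(3)
tgrid = np.linspace(0, 1, 20)            # follow-up times (rescaled)

# Two groups of synthetic GFR-like trajectories: stable vs declining.
stable = 60 + rng.normal(0, 2, (15, 20))
decline = 60 - 30 * tgrid + rng.normal(0, 2, (15, 20))
curves = np.vstack([stable, decline])

mean_curve = curves.mean(axis=0)
centred = curves - mean_curve
# Sample covariance of the discretised curves and its eigenfunctions.
cov = centred.T @ centred / (len(curves) - 1)
evals, evecs = np.linalg.eigh(cov)       # ascending eigenvalues
phi1 = evecs[:, -1]                      # first (discretised) eigenfunction
scores = centred @ phi1                  # first FPC scores, one per patient

explained = evals[-1] / evals.sum()      # variance captured by FPC 1
gap = abs(scores[:15].mean() - scores[15:].mean())
```

The first component captures the decline-vs-stable mode of variation, so its scores cleanly separate the two trajectory groups, mirroring the clustering and outlier-detection uses described in the abstract.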

  11. Summary of component reliability data for probabilistic safety analysis of Korean standard nuclear power plant

    International Nuclear Information System (INIS)

    Choi, S. Y.; Han, S. H.

    2004-01-01

    Reliability data for Korean NPPs that reflect plant-specific characteristics are necessary for the PSA of Korean nuclear power plants. We have performed a study to develop a component reliability DB and S/W for component reliability analysis. Based on this system, we collected component operation data and failure/repair data during plant operation up to 1998/2000 for YGN 3,4/UCN 3,4, respectively. Recently, we upgraded the database by collecting additional data up to 2002 for Korean standard nuclear power plants, and performed component reliability analysis and Bayesian analysis again. In this paper, we present a summary of the component reliability data for the probabilistic safety analysis of the Korean standard nuclear power plant and describe the plant-specific characteristics compared to the generic data

  12. Analysis of Factors Influencing Labour Supplied to Non-Farm Sub ...

    African Journals Online (AJOL)

    acer

    regression analysis reveals that educational level had a negative coefficient, while occupation had a positive coefficient ... component of the rural economy, its role in ... economic factors influencing labour ... Textbooks, Government publications.

  13. Analysis of Economic Factors Affecting Stock Market

    OpenAIRE

    Xie, Linyin

    2010-01-01

    This dissertation concentrates on an analysis of economic factors affecting the Chinese stock market, by examining the relationship between the stock market index and economic factors. Six economic variables are examined: industrial production, money supply 1, money supply 2, exchange rate, long-term government bond yield and real estate total value. The stock market comprises fixed-interest stocks and equity shares. In this dissertation, the stock market is restricted to the equity market. The stock price in thi...

  14. Research on criticality analysis method of CNC machine tools components under fault rate correlation

    Science.gov (United States)

    Gui-xiang, Shen; Xian-zhuo, Zhao; Zhang, Ying-zhi; Chen-yu, Han

    2018-02-01

    In order to determine the key components of CNC machine tools under fault rate correlation, a system component criticality analysis method is proposed. Based on fault mechanism analysis, the component fault relation is determined, and an adjacency matrix is introduced to describe it. Then, the fault structure relation is made hierarchical by using the interpretive structural model (ISM). Assuming that the impact of a fault obeys a Markov process, the fault association matrix is described and transformed, and the Pagerank algorithm is used to determine the relative influence values; combined with the component fault rate under time correlation, a comprehensive fault rate can be obtained. Based on the fault mode frequency and fault influence, the criticality of the components under fault rate correlation is determined, and the key components are identified to provide a correct basis for formulating reliability assurance measures. Finally, taking machining centres as an example, the effectiveness of the method is verified.
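
    The Pagerank step on a fault-propagation graph can be sketched with plain power iteration; the four-component adjacency matrix below is hypothetical, chosen only to illustrate the relative-influence computation:

```python
import numpy as np

# Hypothetical fault-propagation graph: adjacency[i, j] = 1 means a
# fault in component i can propagate to component j.
adjacency = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
], dtype=float)

def pagerank(A, damping=0.85, tol=1e-10):
    """Power-iteration PageRank over a propagation graph (simple
    sketch; dangling nodes are only guarded against division by zero)."""
    n = A.shape[0]
    out = A.sum(axis=1, keepdims=True)
    out[out == 0] = 1.0
    P = A / out                          # row-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - damping) / n + damping * (r @ P)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

influence = pagerank(adjacency)          # relative influence values
```

Component 2, which receives propagated faults from both components 0 and 1, ends up with the highest influence value; in the paper's scheme these values would then be combined with the time-dependent fault rates to rank criticality.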

  15. Experimental investigation of the factors influencing the polymer-polymer bond strength during two component injection moulding

    DEFF Research Database (Denmark)

    Islam, Mohammad Aminul; Hansen, Hans Nørgaard; Tang, Peter Torben

    2007-01-01

    Two component injection moulding is a commercially important manufacturing process and a key technology for Moulded Interconnect Devices (MIDs). Many fascinating applications of two component or multi component polymer parts are restricted due to the weak interfacial adhesion of the polymers...... effectively control the adhesion between two polymers. The effects of environmental conditions on the bond strength after moulding are also investigated. The material selections and environmental conditions were chosen based on the suitability of MID production, but the results and discussion presented....... A thorough understanding of the factors that influence the bond strength of polymers is necessary for multi component polymer processing. This paper investigates the effects of the process and material parameters on the bond strength of two component polymer parts and identifies the factors which can...

  16. Factors associated with inadequate receipt of components and use of antenatal care services in Nigeria: a population-based study.

    Science.gov (United States)

    Agho, Kingsley E; Ezeh, Osita K; Ogbo, Felix A; Enoma, Anthony I; Raynes-Greenow, Camille

    2018-05-01

    Antenatal care (ANC) is an essential intervention to improve maternal and child health. In Nigeria, no population-based studies have investigated predictors of poor receipt of components and uptake of ANC at the national level to inform targeted maternal health initiatives. This study aimed to examine factors associated with inadequate receipt of components and use of ANC in Nigeria. The study used information on 20 405 singleton live-born infants of the mothers from the 2013 Nigeria Demographic and Health Survey. Multivariable logistic regression analyses that adjusted for cluster and survey weights were used to determine potential factors associated with inadequate receipt of components and use of ANC. The prevalence of underutilization and inadequate components of ANC were 47.5% (95% CI: 45.2 to 49.9) and 92.6% (95% CI: 91.8 to 93.2), respectively. Common risk factors for underutilization and inadequate components of ANC in Nigeria included residence in rural areas, no maternal education, maternal unemployment, long distance to health facilities and less maternal exposure to the media. Other risk factors for underutilization of ANC were home births and low household wealth. The study suggests that underutilization and inadequate receipt of the components of ANC were associated with amenable factors in Nigeria. Subsidized maternal services and well-guided health educational messages or financial support from the government will help to improve uptake of ANC services.

  17. PASCAL, Probabilistic Fracture Mechanics Analysis of Structural Components in Aging LWR

    International Nuclear Information System (INIS)

    Shibata, Katsuyuki; Onizawa, Kunio; Li, Yinsheng; Kato, Daisuke

    2005-01-01

    A - Description of program or function: PASCAL (PFM analysis of Structural Components in Aging LWR) is a PFM (Probabilistic Fracture Mechanics) code for evaluating the failure probability of aged pressure components. PASCAL has been developed as part of JAERI's research program on the aging and structural integrity of LWR components, in order to respond to the increasing need for probabilistic methodology in the regulation and inspection of nuclear components, with the objective of providing a rational tool for evaluating the reliability and integrity of structural components. In order to improve the accuracy and reliability of the analysis code, some new fracture mechanics models and computational techniques are introduced, considering recent progress in the state of the art and in PC performance. Thus, new analysis models and original methodologies were introduced in PASCAL, such as an elastic-plastic fracture criterion based on the R6 method and a new crack extension model for semi-elliptical crack evaluation. Moreover, a function to evaluate the effect of embrittlement recovery by annealing of an irradiated RPV is also included, based on USNRC R.G. 1.162 (1996). The code has been verified through various failure analysis results and the international PTS round-robin analysis ICAS, which was organized by Principal Working Group 3 of OECD/NEA/CSNI. To attain high usability, PASCAL Ver.1 provides an exclusive FEM pre-processor, Pre-PASCAL, for generating the input load transient data, a GUI system for generating the input data for the PASCAL main solver, and a post-processor for the output data. - Pre-PASCAL: Pre-PASCAL is an exclusive 3-D FEM pre-processor for generating the input transient data, provided with 3 RPV mesh models and two simple specimen mesh models, i.e. CT and CCP. Almost the same input data format as that of the PASCAL main processor is used. Output data of temperature and stress distribution

  18. SPSS and SAS programs for determining the number of components using parallel analysis and velicer's MAP test.

    Science.gov (United States)

    O'Connor, B P

    2000-08-01

    Popular statistical software packages do not have the proper procedures for determining the number of components in factor and principal components analyses. Parallel analysis and Velicer's minimum average partial (MAP) test are validated procedures, recommended widely by statisticians. However, many researchers continue to use alternative, simpler, but flawed procedures, such as the eigenvalues-greater-than-one rule. Use of the proper procedures might be increased if these procedures could be conducted within familiar software environments. This paper describes brief and efficient programs for using SPSS and SAS to conduct parallel analyses and the MAP test.
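The record above describes parallel analysis without showing the computation. A minimal NumPy sketch of Horn's parallel analysis (not the paper's SPSS/SAS programs; the toy data and seeds are hypothetical) compares observed eigenvalues against a percentile of eigenvalues from random data of the same shape:

```python
import numpy as np

def parallel_analysis(data, n_iter=100, quantile=95, seed=0):
    """Horn's parallel analysis: retain components whose correlation-matrix
    eigenvalues exceed the given percentile of eigenvalues obtained from
    random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand = np.empty((n_iter, p))
    for i in range(n_iter):
        r = rng.standard_normal((n, p))
        rand[i] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    threshold = np.percentile(rand, quantile, axis=0)
    return int(np.sum(obs > threshold))

# Toy data: two underlying factors behind six observed variables
rng = np.random.default_rng(1)
factors = rng.standard_normal((500, 2))
loadings = np.array([[1.0, 1.0, 1.0, 0.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]])
observed = factors @ loadings + 0.3 * rng.standard_normal((500, 6))
n_components = parallel_analysis(observed)   # retains 2 for this construction
```

Unlike the eigenvalues-greater-than-one rule criticized in the abstract, the retention threshold here adapts to the sample size and number of variables.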

  19. Automotive Exterior Noise Optimization Using Grey Relational Analysis Coupled with Principal Component Analysis

    Science.gov (United States)

    Chen, Shuming; Wang, Dengfeng; Liu, Bo

    This paper investigates the optimization design of the thickness of the sound package of a passenger automobile. The major performance indexes selected to evaluate the process are the SPL of the exterior noise and the weight of the sound package; the corresponding parameters are the thickness of the glass wool with aluminum foil for the first layer, the thickness of the glass fiber for the second layer, and the thickness of the PE foam for the third layer. Since the process fundamentally involves multiple performance characteristics, grey relational analysis, which uses the grey relational grade as the performance index, is employed to determine the optimal combination of layer thicknesses for the designed sound package. Additionally, in order to evaluate the weighting values corresponding to the various performance characteristics, principal component analysis is used to represent their relative importance properly and objectively. The results of the confirmation experiments show that grey relational analysis coupled with principal component analysis can successfully be applied to find the optimal combination of thicknesses for each layer of the sound package material. Therefore, the presented method can be an effective tool to reduce vehicle exterior noise and lower the weight of the sound package. It should also be helpful for other applications in the automotive industry, such as at First Automobile Works in China, Changan Automobile in China, etc.

  20. Multiobjective Optimization of ELID Grinding Process Using Grey Relational Analysis Coupled with Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    S. Prabhu

    2014-06-01

    Carbon nanotube (CNT) mixed grinding wheels have been used in the electrolytic in-process dressing (ELID) grinding process to analyze the surface characteristics of AISI D2 tool steel. The CNT grinding wheel has excellent thermal conductivity and good mechanical properties, which are used to improve the surface finish of the workpiece. Multiobjective optimization by grey relational analysis coupled with principal component analysis has been used to optimize the process parameters of the ELID grinding process. Based on the Taguchi design of experiments, an L9 orthogonal array was chosen for the experiments. The confirmation experiment verifies that the proposed grey-based Taguchi method is able to find the optimal process parameters for the multiple quality characteristics of surface roughness and metal removal rate. Analysis of variance (ANOVA) has been used to verify and validate the model. An empirical model for the prediction of the output parameters has been developed using regression analysis, and the results with and without the CNT grinding wheel in the ELID grinding process were compared.
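The grey relational grade used as the performance index in the two records above can be sketched generically (this is the standard formulation, not either paper's exact setup; both papers derive the criterion weights from PCA, whereas equal weights are used by default here):

```python
import numpy as np

def grey_relational_grade(responses, weights=None, zeta=0.5, larger_better=None):
    """responses: (n_runs, n_criteria) experimental results.
    Grey relational coefficient: (d_min + zeta*d_max) / (d_ik + zeta*d_max),
    where d_ik is the deviation of run i, criterion k from the ideal sequence."""
    x = np.asarray(responses, float)
    n, m = x.shape
    if larger_better is None:
        larger_better = [True] * m
    norm = np.empty_like(x)
    for j in range(m):                       # normalize each criterion to [0, 1]
        lo, hi = x[:, j].min(), x[:, j].max()
        norm[:, j] = (x[:, j] - lo) / (hi - lo) if larger_better[j] \
                     else (hi - x[:, j]) / (hi - lo)
    delta = 1.0 - norm                       # deviation from the ideal sequence
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    if weights is None:
        weights = np.full(m, 1.0 / m)        # PCA-derived weights in the papers
    return coeff @ weights                   # grade per experimental run

# Three hypothetical runs; criteria: surface roughness (smaller-the-better)
# and metal removal rate (larger-the-better)
runs = [[0.8, 12.0], [0.5, 10.0], [0.6, 15.0]]
grades = grey_relational_grade(runs, larger_better=[False, True])
best = int(np.argmax(grades))                # run with the highest grade
```

The run with the largest grade is the best compromise across all quality characteristics, which is how the single-index Taguchi analysis proceeds in both records.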

  1. Bridge Diagnosis by Using Nonlinear Independent Component Analysis and Displacement Analysis

    Science.gov (United States)

    Zheng, Juanqing; Yeh, Yichun; Ogai, Harutoshi

    A daily diagnosis system for bridge monitoring and maintenance is developed based on wireless sensors, signal processing, structural analysis, and displacement analysis. The vibration acceleration data of a bridge are first collected through the wireless sensor network. Nonlinear independent component analysis (ICA) and spectral analysis are used to extract the vibration frequencies of the bridge. After that, the vibration displacement is calculated through a band-pass filter and Simpson's rule, and the vibration model is obtained to diagnose the bridge. Since linear ICA algorithms work efficiently only in linear mixing environments, a nonlinear ICA model, which is more complicated, is more practical for bridge diagnosis systems. In this paper, we first use the post-nonlinear method to transform the signal data, then perform linear separation by FastICA, and calculate the vibration displacement of the bridge. The processed data can be used to understand phenomena like corrosion and cracking, and to evaluate the health condition of the bridge. We apply this system to the Nakajima Bridge in Yahata, Kitakyushu, Japan.
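The frequency-then-displacement step can be illustrated with a simplified sketch (a synthetic sinusoid, not the paper's bridge data or its band-pass/Simpson's-rule pipeline): for a steady sinusoidal vibration, the displacement amplitude follows from the acceleration amplitude as x = a / (2πf)² once the dominant frequency is identified from the spectrum:

```python
import numpy as np

fs = 200.0                       # sampling rate [Hz] (assumed)
t = np.arange(0, 10, 1 / fs)
f0, disp_amp = 2.5, 0.001        # true vibration frequency [Hz], displacement [m]
# acceleration of x(t) = disp_amp * sin(2*pi*f0*t)
accel = -(2 * np.pi * f0) ** 2 * disp_amp * np.sin(2 * np.pi * f0 * t)

spec = np.abs(np.fft.rfft(accel))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
f_est = freqs[np.argmax(spec)]               # dominant vibration frequency
a_amp = 2 * spec.max() / len(t)              # acceleration amplitude from the peak
x_amp = a_amp / (2 * np.pi * f_est) ** 2     # recovered displacement amplitude
```

For a real record the spectrum would contain several modes and noise, which is why the paper separates sources with ICA and band-pass filters each mode before integrating.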

  2. Information technology portfolio in supply chain management using factor analysis

    Directory of Open Access Journals (Sweden)

    Ahmad Jaafarnejad

    2013-11-01

    The adoption of information technology (IT) along with supply chain management (SCM) has become increasingly a necessity among most businesses. This enhances supply chain (SC) performance and helps companies achieve organizational competitiveness. IT systems capture and analyze information and enable management to make decisions by considering a global scope across the entire SC. This paper reviews the existing literature on IT in SCM and considers the pertinent criteria. Using principal component analysis (PCA) within factor analysis (FA), a number of related criteria are divided into smaller groups. Finally, SC managers can develop an IT portfolio in SCM using the mean values of the few extracted components on the relevance–emergency matrix. A numerical example is provided to explain the details of the proposed method.

  3. Basic characteristics of plasma rich in growth factors (PRGF): blood cell components and biological effects.

    Science.gov (United States)

    Nishiyama, Kazuhiko; Okudera, Toshimitsu; Watanabe, Taisuke; Isobe, Kazushige; Suzuki, Masashi; Masuki, Hideo; Okudera, Hajime; Uematsu, Kohya; Nakata, Koh; Kawase, Tomoyuki

    2016-11-01

    Platelet-rich plasma (PRP) is widely used in regenerative medicine because of its high concentrations of various growth factors and platelets. However, the distribution of blood cell components has not been investigated in either PRP or other PRP derivatives. In this study, we focused on plasma rich in growth factors (PRGF), a PRP derivative, and analyzed the distributions of platelets and white blood cells (WBCs). Peripheral blood samples were collected from healthy volunteers (N = 14) and centrifuged to prepare PRGF and PRP. Blood cells were counted using an automated hematology analyzer. The effects of PRP and PRGF preparations on cell proliferation were determined using human periosteal cells. In the PRGF preparations, both red blood cells and WBCs were almost completely eliminated, and platelets were concentrated by 2.84-fold, whereas in the PRP preparations, both platelets and WBCs were similarly concentrated by 8.79- and 5.51-fold, respectively. Platelet counts in the PRGF preparations were positively correlated with platelet counts in the whole blood samples, while the platelet concentration rate was negatively correlated with red blood cell counts in the whole blood samples. In contrast, platelet counts and concentration rates in the PRP preparations were significantly influenced by WBC counts in whole blood samples. The PRP preparations, but not the PRGF preparations, significantly suppressed cell growth at higher doses in vitro. Therefore, these results suggest that PRGF preparations can clearly be distinguished from PRP preparations by both inclusion of WBCs and dose-dependent stimulation of periosteal cell proliferation in vitro.

  4. Thermal and Structural Analysis of Beamline Components in the Mu2e Experiment

    International Nuclear Information System (INIS)

    Martin, Luke Daniel

    2016-01-01

    Fermi National Accelerator Laboratory will be conducting the high energy particle physics experiment Muons to Electrons (Mu2e). In this experiment, physicists will attempt to witness and understand an ultra-rare process which is the conversion of a muon into the lighter mass electron, without creating additional neutrinos. The experiment is conducted by first generating a proton beam which will be collided into a target within the production solenoid (PS). This creates a high-intensity muon beam which passes through a transport solenoid (TS) and into the detector solenoid (DS). In the detector solenoid the muons will be stopped in an aluminum target and a series of detectors will measure the electrons produced. These components have been named the DS train since they are coupled and travel on a rail system when being inserted or extracted from the DS. To facilitate the installation and removal of the DS train, a set of external stands and a support stand for the instrumentation feed-through bulkhead (IFB) have been designed. Full analysis of safety factors and performance of these two designs has been completed. The detector solenoid itself will need to be maintained to a temperature of 22°C ± 10°C. This will minimize thermal strain and ensure the accurate position of the components is maintained to the tolerance of 2 mm. To reduce the thermal gradient, a passive heating system has been developed and reported.

  5. IMPROVED SEARCH OF PRINCIPAL COMPONENT ANALYSIS DATABASES FOR SPECTRO-POLARIMETRIC INVERSION

    International Nuclear Information System (INIS)

    Casini, R.; Lites, B. W.; Ramos, A. Asensio; Ariste, A. López

    2013-01-01

    We describe a simple technique for the acceleration of spectro-polarimetric inversions based on principal component analysis (PCA) of Stokes profiles. This technique involves indexing the database models based on the sign of the projections (PCA coefficients) of the first few relevant orders of principal components of the four Stokes parameters. In this way, each model in the database can be attributed a distinctive binary number of 4n bits, where n is the number of PCA orders used for the indexing. Each of these binary numbers (indices) identifies a group of "compatible" models for the inversion of a given set of observed Stokes profiles sharing the same index. The complete set of binary numbers so constructed evidently determines a partition of the database. The search of the database for the PCA inversion of spectro-polarimetric data can profit greatly from this indexing. In practical cases it becomes possible to approach the ideal acceleration factor of 2^(4n) as compared to the systematic search of a non-indexed database for a traditional PCA inversion. This indexing method relies on the existence of a physical meaning in the sign of the PCA coefficients of a model. For this reason, the presence of model ambiguities and of spectro-polarimetric noise in the observations limits in practice the number n of relevant PCA orders that can be used for the indexing.
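The sign-based indexing described above can be sketched in NumPy (a toy database of random PCA coefficients stands in for the actual model grid; the shapes and sizes are assumptions for illustration):

```python
import numpy as np

def sign_index(pca_coeffs):
    """pca_coeffs: array of shape (4, n) -- the first n PCA coefficients for
    each of the four Stokes parameters. Returns a 4*n-bit integer index."""
    bits = (np.asarray(pca_coeffs).ravel() >= 0).astype(int)
    return int("".join(map(str, bits)), 2)

# Partition a toy database of models by index, then look up candidates.
rng = np.random.default_rng(0)
n = 2                                    # PCA orders used for the indexing
database = rng.standard_normal((1000, 4, n))
partition = {}
for i, model in enumerate(database):
    partition.setdefault(sign_index(model), []).append(i)

obs = database[42]                       # pretend this is an observation
candidates = partition[sign_index(obs)]  # only "compatible" models are searched
```

With 4n = 8 sign bits the database splits into at most 256 buckets, so only roughly 1/256 of the models need to be compared against the observation, which is the source of the near-2^(4n) speed-up claimed in the abstract.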

  6. Communication failure: basic components, contributing factors, and the call for structure.

    Science.gov (United States)

    Dayton, Elizabeth; Henriksen, Kerm

    2007-01-01

    Communication is a taken-for-granted human activity that is recognized as important once it has failed. Communication failures are a major contributor to adverse events in health care. The components and processes of communication converge in an intricate manner, creating opportunities for misunderstanding along the way. When a patient's safety is at risk, providers should speak up (that is, initiate a message) to draw attention to the situation before harm is caused. They should also clearly explain (encode) and understand (decode) each other's diagnosis and recommendations to ensure well coordinated delivery of care. Beyond basic dyadic communication exchanges, an intricate web of individual, group, and organizational factors--more specifically, cognitive workload, implicit assumptions, authority gradients, diffusion of responsibility, and transitions of care--complicate communication. More structured and explicitly designed forms of communication have been recommended to reduce ambiguity, enhance clarity, and send an unequivocal signal, when needed, that a different action is required. Read-backs, Situation-Background-Assessment-Recommendation, critical assertions, briefings, and debriefings are seeing increasing use in health care. CODA: Although structured forms of communication have good potential to enhance clarity, they are not fail-safe. Providers need to be sensitive to unexpected consequences regarding their use.

  7. COMPARING INDEPENDENT COMPONENT ANALYSIS WITH PRINCIPLE COMPONENT ANALYSIS IN DETECTING ALTERATIONS OF PORPHYRY COPPER DEPOSIT (CASE STUDY: ARDESTAN AREA, CENTRAL IRAN

    Directory of Open Access Journals (Sweden)

    S. Mahmoudishadi

    2017-09-01

    The image processing techniques in the transform domain are employed as analysis tools for enhancing the detection of mineral deposits. The process of decomposing the image into important components increases the probability of mineral extraction. In this study, the performance of Principal Component Analysis (PCA) and Independent Component Analysis (ICA) has been evaluated for the visible and near-infrared (VNIR) and shortwave infrared (SWIR) subsystems of ASTER data. Ardestan is located in part of the Central Iranian Volcanic Belt, which hosts many well-known porphyry copper deposits. This research investigated the propylitic and argillic alteration zones and the outer mineralogy zone in part of the Ardestan region. The two approaches were applied to discriminate alteration zones from igneous bedrock using the major absorptions of indicator minerals from the alteration and mineralogy zones in the spectral range of the ASTER bands. Specialized PC components (PC2, PC3 and PC6) were used to identify pyrite and the argillic and propylitic zones, distinguishing them from igneous bedrock in an RGB color composite image. According to the eigenvalues, components 2, 3 and 6 account for 4.26%, 0.9% and 0.09% of the total variance of the data for the Ardestan scene, respectively. For the purpose of discriminating the alteration and mineralogy zones of the porphyry copper deposit from bedrock, those percentages of the data are more accurately separated in the ICA independent components IC2, IC3 and IC6 than in the noisy bands of PCA. The results of the ICA method also conform to the locations of the lithological units of the Ardestan region.

  8. Comparing Independent Component Analysis with Principle Component Analysis in Detecting Alterations of Porphyry Copper Deposit (case Study: Ardestan Area, Central Iran)

    Science.gov (United States)

    Mahmoudishadi, S.; Malian, A.; Hosseinali, F.

    2017-09-01

    The image processing techniques in the transform domain are employed as analysis tools for enhancing the detection of mineral deposits. The process of decomposing the image into important components increases the probability of mineral extraction. In this study, the performance of Principal Component Analysis (PCA) and Independent Component Analysis (ICA) has been evaluated for the visible and near-infrared (VNIR) and shortwave infrared (SWIR) subsystems of ASTER data. Ardestan is located in part of the Central Iranian Volcanic Belt, which hosts many well-known porphyry copper deposits. This research investigated the propylitic and argillic alteration zones and the outer mineralogy zone in part of the Ardestan region. The two approaches were applied to discriminate alteration zones from igneous bedrock using the major absorptions of indicator minerals from the alteration and mineralogy zones in the spectral range of the ASTER bands. Specialized PC components (PC2, PC3 and PC6) were used to identify pyrite and the argillic and propylitic zones, distinguishing them from igneous bedrock in an RGB color composite image. According to the eigenvalues, components 2, 3 and 6 account for 4.26%, 0.9% and 0.09% of the total variance of the data for the Ardestan scene, respectively. For the purpose of discriminating the alteration and mineralogy zones of the porphyry copper deposit from bedrock, those percentages of the data are more accurately separated in the ICA independent components IC2, IC3 and IC6 than in the noisy bands of PCA. The results of the ICA method also conform to the locations of the lithological units of the Ardestan region.

  9. Selective principal component regression analysis of fluorescence hyperspectral image to assess aflatoxin contamination in corn

    Science.gov (United States)

    Selective principal component regression analysis (SPCR) uses a subset of the original image bands for principal component transformation and regression. For optimal band selection before the transformation, this paper used genetic algorithms (GA). In this case, the GA process used the regression co...

  10. Estimation of compound distribution in spectral images of tomatoes using independent component analysis

    NARCIS (Netherlands)

    Polder, G.; Heijden, van der G.W.A.M.

    2003-01-01

    Independent Component Analysis (ICA) is one of the most widely used methods for blind source separation. In this paper we use this technique to estimate the important compounds which play a role in the ripening of tomatoes. Spectral images of tomatoes were analyzed. Two main independent components

  11. A Note on McDonald's Generalization of Principal Components Analysis

    Science.gov (United States)

    Shine, Lester C., II

    1972-01-01

    It is shown that McDonald's generalization of Classical Principal Components Analysis to groups of variables maximally channels the total variance of the original variables through the groups of variables acting as groups. An equation is obtained for determining the vectors of correlations of the L2 components with the original variables.…

  12. Understanding Oral Reading Fluency among Adults with Low Literacy: Dominance Analysis of Contributing Component Skills

    Science.gov (United States)

    Mellard, Daryl F.; Anthony, Jason L.; Woods, Kari L.

    2012-01-01

    This study extends the literature on the component skills involved in oral reading fluency. Dominance analysis was applied to assess the relative importance of seven reading-related component skills in the prediction of the oral reading fluency of 272 adult literacy learners. The best predictors of oral reading fluency when text difficulty was…

  13. Probabilistic Structural Analysis Methods for select space propulsion system components (PSAM). Volume 2: Literature surveys of critical Space Shuttle main engine components

    Science.gov (United States)

    Rajagopal, K. R.

    1992-01-01

    The technical effort and computer code development is summarized. Several formulations for Probabilistic Finite Element Analysis (PFEA) are described with emphasis on the selected formulation. The strategies being implemented in the first-version computer code to perform linear, elastic PFEA is described. The results of a series of select Space Shuttle Main Engine (SSME) component surveys are presented. These results identify the critical components and provide the information necessary for probabilistic structural analysis. Volume 2 is a summary of critical SSME components.

  14. Factor Economic Analysis at Forestry Enterprises

    Directory of Open Access Journals (Sweden)

    M.Yu. Chik

    2018-03-01

    The article substantiates the importance of economic analysis based on a review of the scientific works of domestic and foreign scholars. The influence of factors on the change in the cost of harvesting timber products is calculated by cost item. The influence of factors on the change in costs per 1 UAH of sold products is determined using the full cost of products sold. Variable and fixed costs and their distribution are separated, which influences the calculation of the impact of factors on cost changes per 1 UAH of sold products. The paper summarizes the overall results of calculating the influence of factors on cost changes per 1 UAH of sold products. Based on the results of the analysis, a list of reserves for reducing the cost of production at forestry enterprises is proposed. The main sources of reserves for reducing the prime cost of forest products at forestry enterprises are investigated on the basis of the conducted factor analysis.

  15. An SPSS R-Menu for Ordinal Factor Analysis

    Directory of Open Access Journals (Sweden)

    Mario Basto

    2012-01-01

    Exploratory factor analysis is a widely used statistical technique in the social sciences. It attempts to identify underlying factors that explain the pattern of correlations within a set of observed variables. A statistical software package is needed to perform the calculations. However, there are some limitations with popular statistical software packages, like SPSS. The R programming language is a free software environment for statistical and graphical computing. It offers many packages written by contributors from all over the world, and programming resources that allow it to overcome the dialog limitations of SPSS. This paper offers an SPSS dialog written in the R programming language, with the help of some packages, so that researchers with little or no knowledge of programming, or those who are accustomed to making their calculations based on statistical dialogs, have more options when applying factor analysis to their data and hence can adopt a better approach when dealing with ordinal, Likert-type data.

  16. Delaunay algorithm and principal component analysis for 3D visualization of mitochondrial DNA nucleoids by Biplane FPALM/dSTORM

    Czech Academy of Sciences Publication Activity Database

    Alán, Lukáš; Špaček, Tomáš; Ježek, Petr

    2016-01-01

    Roč. 45, č. 5 (2016), s. 443-461 ISSN 0175-7571 R&D Projects: GA ČR(CZ) GA13-02033S; GA MŠk(CZ) ED1.1.00/02.0109 Institutional support: RVO:67985823 Keywords : 3D object segmentation * Delaunay algorithm * principal component analysis * 3D super-resolution microscopy * nucleoids * mitochondrial DNA replication Subject RIV: BO - Biophysics Impact factor: 1.472, year: 2016

  17. Cost analysis of small hydroelectric power plants components and preliminary estimation of global cost

    International Nuclear Information System (INIS)

    Basta, C.; Olive, W.J.; Antunes, J.S.

    1990-01-01

    An analysis of the cost of each component of small hydroelectric power plants is presented, taking into account the real costs of these projects. A global equation is also presented which allows a preliminary estimate of the cost of each construction. (author)

  18. Assessment of oil weathering by gas chromatography-mass spectrometry, time warping and principal component analysis

    DEFF Research Database (Denmark)

    Malmquist, Linus M.V.; Olsen, Rasmus R.; Hansen, Asger B.

    2007-01-01

    weathering state and to distinguish between various weathering processes is investigated and discussed. The method is based on comprehensive and objective chromatographic data processing followed by principal component analysis (PCA) of concatenated sections of gas chromatography–mass spectrometry...

  19. Northeast Puerto Rico and Culebra Island Principle Component Analysis - NOAA TIFF Image

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This GeoTiff is a representation of seafloor topography in Northeast Puerto Rico derived from a bathymetry model with a principle component analysis (PCA). The area...

  20. Principal component and spatial correlation analysis of spectroscopic-imaging data in scanning probe microscopy

    International Nuclear Information System (INIS)

    Jesse, Stephen; Kalinin, Sergei V

    2009-01-01

    An approach for the analysis of multi-dimensional, spectroscopic-imaging data based on principal component analysis (PCA) is explored. PCA selects and ranks relevant response components based on variance within the data. It is shown that for examples with small relative variations between spectra, the first few PCA components closely coincide with results obtained using model fitting, and this is achieved at rates approximately four orders of magnitude faster. For cases with strong response variations, PCA allows an effective approach to rapidly process, de-noise, and compress data. The prospects for PCA combined with correlation function analysis of component maps as a universal tool for data analysis and representation in microscopy are discussed.
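The de-noise/compress use of PCA described above can be illustrated with a small synthetic sketch (hypothetical spectroscopic-imaging data; the number of retained components k is fixed by hand here, whereas in practice it would follow from the variance ranking):

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_ch = 400, 64
# Two smooth "physical" spectral responses mixed across pixels, plus noise
ch = np.linspace(0, 1, n_ch)
responses = np.vstack([np.sin(2 * np.pi * ch), np.exp(-4 * ch)])
mixing = rng.random((n_pix, 2))
clean = mixing @ responses
data = clean + 0.1 * rng.standard_normal((n_pix, n_ch))

# PCA via SVD of the mean-centered data; keep the k dominant components
mean = data.mean(axis=0)
u, s, vt = np.linalg.svd(data - mean, full_matrices=False)
k = 2
denoised = mean + (u[:, :k] * s[:k]) @ vt[:k]

err_noisy = np.linalg.norm(data - clean)
err_denoised = np.linalg.norm(denoised - clean)   # smaller: noise variance discarded
```

Storing only the k component maps and loadings also compresses the data set from n_pix × n_ch values to roughly k × (n_pix + n_ch), which is the compression aspect the abstract refers to.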

  1. Reviewing the musical component of rhythm of "poetry" and the factors influencing it

    Directory of Open Access Journals (Sweden)

    Ma’sumeh Ma’dankan

    2017-04-01

    ‘Rhythm’ is the most important component in the music of poetry. In this paper, in addition to defining rhythm, we study the related components which most influence the music of poetry. ‘Rhythm’ is the first common factor among the different arts, especially music and poetry, and poetry has always been accompanied by rhythm. A short and complete definition of ‘rhythm’ is: “Rhythm is the balance resulting from the sequence of letters or beats at certain limited times.” The most important factors affecting rhythm are: Proportion of syllables in prosodic rhythms: every syllable carries its own musical load in a prosodic rhythm; it is clear that if one kind of syllable predominates in a rhythm, it will clearly impose its own special character. Sequence of syllables in prosodic rhythms: the succession of long or short syllables, because of their special vocal effect, strongly shapes the musical quality of the rhythm. Use of long syllables: the more long syllables a verse contains, the heavier its rhythm will be, because the number of syllables per verse decreases while their temporal duration increases. Conformity of word endings with element endings (spaces between words with spaces between elements): the conformity of the ends of words and elements, through the repeated sequence of an element, strongly reinforces the effect of that prosodic element in the mind of the listener; these constant, repeated scansions make the poem rhythmic and enrich its music. Creating an accidental or secondary rhythm: one way to innovate and go beyond the natural music of a rhythm is to create a special rhythm other than the main prosodic rhythm of the poem by arranging the words in a particular order, so that it conforms to another scansion of the same prosodic rhythm. Using regular spaces between words other than the spaces between elements: sometimes the poet, without using a different scansion of the

  2. Capital Cost Optimization for Prefabrication: A Factor Analysis Evaluation Model

    Directory of Open Access Journals (Sweden)

    Hong Xue

    2018-01-01

    Full Text Available High capital cost is a significant hindrance to the promotion of prefabrication. In order to optimize cost management and reduce capital cost, this study aims to explore the latent factors and a factor analysis evaluation model. Semi-structured interviews were conducted to explore potential variables, and a questionnaire survey was then employed to collect professionals’ views on their effects. After data collection, exploratory factor analysis was adopted to explore the latent factors. Seven latent factors were identified, including “Management Index”, “Construction Dissipation Index”, “Productivity Index”, “Design Efficiency Index”, “Transport Dissipation Index”, “Material increment Index” and “Depreciation amortization Index”. With these latent factors, a factor analysis evaluation model (FAEM), divided into a factor analysis model (FAM) and a comprehensive evaluation model (CEM), was established. The FAM was used to explore the effect of observed variables on the high capital cost of prefabrication, while the CEM was used to evaluate the comprehensive cost management level of prefabrication projects. Case studies were conducted to verify the models. The results revealed that collaborative management had a positive effect on the capital cost of prefabrication. Material increment costs and labor costs had significant impacts on production cost. This study demonstrated the potential of on-site management and standardized design to reduce capital cost. Hence, collaborative management is necessary for the cost management of prefabrication. Innovation and detailed design are needed to improve cost performance. New forms of precast component factories can be explored to reduce transportation cost. Meanwhile, targeted strategies can be adopted for different prefabrication projects. The findings optimized the capital cost and improved the cost performance by providing an evaluation and optimization model, which helps managers to

  3. Characterization of virulence factor regulation by SrrAB, a two-component system in Staphylococcus aureus.

    Science.gov (United States)

    Pragman, Alexa A; Yarwood, Jeremy M; Tripp, Timothy J; Schlievert, Patrick M

    2004-04-01

    Workers in our laboratory have previously identified the staphylococcal respiratory response AB (SrrAB), a Staphylococcus aureus two-component system that acts in the global regulation of virulence factors. This system down-regulates production of agr RNAIII, protein A, and toxic shock syndrome toxin 1 (TSST-1), particularly under low-oxygen conditions. In this study we investigated the localization and membrane orientation of SrrA and SrrB, transcription of the srrAB operon, the DNA-binding properties of SrrA, and the effect of SrrAB expression on S. aureus virulence. We found that SrrA is localized to the S. aureus cytoplasm, while SrrB is localized to the membrane and is properly oriented to function as a histidine kinase. srrAB has one transcriptional start site which results in either an srrA transcript or a full-length srrAB transcript; srrB must be cotranscribed with srrA. Gel shift assays of the agr P2, agr P3, protein A (spa), TSST-1 (tst), and srr promoters revealed SrrA binding at each of these promoters. Analysis of SrrAB-overexpressing strains by using the rabbit model of bacterial endocarditis demonstrated that overexpression of SrrAB decreased the virulence of the organisms compared to the virulence of isogenic strains that do not overexpress SrrAB. We concluded that SrrAB is properly localized and oriented to function as a two-component system. Overexpression of SrrAB, which represses agr RNAIII, TSST-1, and protein A in vitro, decreases virulence in the rabbit endocarditis model. Repression of these virulence factors is likely due to a direct interaction between SrrA and the agr, tst, and spa promoters.

  4. Performance analysis of a Principal Component Analysis ensemble classifier for Emotiv headset P300 spellers.

    Science.gov (United States)

    Elsawy, Amr S; Eldawlatly, Seif; Taher, Mohamed; Aly, Gamal M

    2014-01-01

    The current trend to use Brain-Computer Interfaces (BCIs) with mobile devices mandates the development of efficient EEG data processing methods. In this paper, we demonstrate the performance of a Principal Component Analysis (PCA) ensemble classifier for P300-based spellers. We recorded EEG data from multiple subjects using the Emotiv neuroheadset in the context of a classical oddball P300 speller paradigm. We compare the performance of the proposed ensemble classifier to the performance of traditional feature extraction and classifier methods. Our results demonstrate the capability of the PCA ensemble classifier to classify P300 data recorded using the Emotiv neuroheadset with an average accuracy of 86.29% on cross-validation data. In addition, offline testing of the recorded data reveals an average classification accuracy of 73.3% that is significantly higher than that achieved using traditional methods. Finally, we demonstrate the effect of the parameters of the P300 speller paradigm on the performance of the method.
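The PCA feature-extraction stage of such a speller pipeline can be sketched as follows. The EEG epochs here are synthetic (the paper used Emotiv recordings), and the downstream ensemble classifier is omitted; only the dimensionality-reduction step is shown.

```python
import numpy as np

# Sketch of PCA feature extraction for epoch classification.
# Synthetic data stands in for recorded EEG epochs.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))        # 200 epochs x 64 features
X[:100] += 2.0                        # crude stand-in for "target" epochs

Xc = X - X.mean(axis=0)               # centre each feature
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10
scores = Xc @ Vt[:k].T                # top-k principal-component scores

explained = float((s[:k] ** 2).sum() / (s ** 2).sum())
```

The `scores` matrix, rather than the raw 64-dimensional epochs, would then be fed to each member of the classifier ensemble.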

  5. Factor C*, the specific initiation component of the mouse RNA polymerase I holoenzyme, is inactivated early in the transcription process.

    OpenAIRE

    Brun, R P; Ryan, K; Sollner-Webb, B

    1994-01-01

    Factor C* is the component of the RNA polymerase I holoenzyme (factor C) that allows specific transcriptional initiation on a factor D (SL1)- and UBF-activated rRNA gene promoter. The in vitro transcriptional capacity of a preincubated rDNA promoter complex becomes exhausted very rapidly upon initiation of transcription. This is due to the rapid depletion of C* activity. In contrast, C* activity is not unstable in the absence of transcription, even in the presence of nucleoside triphosphates ...

  6. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the system of relationships and interdependence between factors), which differ in each economic sector, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the factors underlying average labour productivity in agriculture, forestry and fishing. The analysis takes into account data concerning the economically active population and the gross value added in agriculture, forestry and fishing in Romania during 2008-2011. The breakdown of average labour productivity by the factors affecting it is carried out by means of the u-substitution method.

  7. Power Transformer Differential Protection Based on Neural Network Principal Component Analysis, Harmonic Restraint and Park's Plots

    OpenAIRE

    Tripathy, Manoj

    2012-01-01

    This paper describes a new approach for power transformer differential protection which is based on the wave-shape recognition technique. An algorithm based on neural network principal component analysis (NNPCA) with back-propagation learning is proposed for digital differential protection of power transformer. The principal component analysis is used to preprocess the data from power system in order to eliminate redundant information and enhance hidden pattern of differential current to disc...

  8. Sensitivity analysis on the component cooling system of the Angra 1 NPP

    International Nuclear Information System (INIS)

    Castro Silva, Luiz Euripedes Massiere de

    1995-01-01

    The component cooling system has been studied within the scope of the Probabilistic Safety Analysis of the Angra I NPP in order to ensure that the proposed model represents the actual functioning of the system and its availability aspects as closely as possible. To that end, a sensitivity analysis was performed on the equivalence between the operating modes of the component cooling system, and its results show the fitness of the model. (author). 4 refs, 3 figs, 3 tabs

  9. Predicting Insolvency : A comparison between discriminant analysis and logistic regression using principal components

    OpenAIRE

    Geroukis, Asterios; Brorson, Erik

    2014-01-01

    In this study, we compare the two statistical techniques logistic regression and discriminant analysis to see how well they classify companies based on clusters – made from the solvency ratio – using principal components as independent variables. The principal components are constructed from different financial ratios. We use cluster analysis to find groups with low, medium and high solvency ratios among 1200 different companies listed on the NASDAQ stock market and use this as an a priori definition of ...
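The pipeline described in the abstract can be sketched on simulated data: "financial ratios" driven by a hypothetical latent solvency factor, reduced to principal components, then classified with Fisher's linear discriminant (the logistic-regression alternative would be fit on the same scores). All names, loadings, and class labels here are invented, not the study's data.

```python
import numpy as np

# Simulate ratios driven by one latent solvency factor plus noise.
rng = np.random.default_rng(1)
n = 300
f = rng.normal(size=n)                                  # latent solvency factor
loadings = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2])
ratios = np.outer(f, loadings) + 0.5 * rng.normal(size=(n, 8))
labels = f > 0                                          # solvent vs. insolvent

Z = (ratios - ratios.mean(0)) / ratios.std(0)           # standardise ratios
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
pcs = Z @ Vt[:3].T                                      # first 3 components

# Fisher discriminant on the component scores:
# w = pooled-covariance^{-1} (mu1 - mu0), threshold at the midpoint.
mu0, mu1 = pcs[~labels].mean(0), pcs[labels].mean(0)
Sw = np.cov(pcs[~labels], rowvar=False) + np.cov(pcs[labels], rowvar=False)
w = np.linalg.solve(Sw, mu1 - mu0)
pred = pcs @ w > w @ (mu0 + mu1) / 2
accuracy = float((pred == labels).mean())
```

Because the first principal component tracks the latent factor, the discriminant separates the two simulated groups well.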

  10. Root cause analysis in support of reliability enhancement of engineering components

    International Nuclear Information System (INIS)

    Kumar, Sachin; Mishra, Vivek; Joshi, N.S.; Varde, P.V.

    2014-01-01

    Reliability based methods have been widely used for the safety assessment of plant systems, structures and components. These methods provide a quantitative estimation of system reliability but do not give insight into the failure mechanism. Understanding the failure mechanism is essential to avoid the recurrence of events and to enhance system reliability. Root cause analysis provides a tool for gaining detailed insight into the causes of component failure, with particular attention to the identification of faults in component design, operation, surveillance, maintenance, training, procedures and policies which must be improved to prevent the repetition of incidents. Root cause analysis also helps in developing Probabilistic Safety Analysis models. A probabilistic precursor study complements the root cause analysis approach in event analysis by focusing on how an event might have developed adversely. This paper discusses root cause analysis methodologies and their application in specific case studies for the enhancement of system reliability. (author)

  11. Time-domain ultra-wideband radar, sensor and components theory, analysis and design

    CERN Document Server

    Nguyen, Cam

    2014-01-01

    This book presents the theory, analysis, and design of ultra-wideband (UWB) radar and sensor systems (in short, UWB systems) and their components. UWB systems find numerous applications in the military, security, civilian, commercial and medicine fields. This book addresses five main topics of UWB systems: System Analysis, Transmitter Design, Receiver Design, Antenna Design and System Integration and Test. The developments of a practical UWB system and its components using microwave integrated circuits, as well as various measurements, are included in detail to demonstrate the theory, analysis and design technique. Essentially, this book will enable the reader to design their own UWB systems and components. In the System Analysis chapter, the UWB principle of operation as well as the power budget analysis and range resolution analysis are presented. In the UWB Transmitter Design chapter, the design, fabrication and measurement of impulse and monocycle pulse generators are covered. The UWB Receiver Design cha...

  12. Parallel factor analysis PARAFAC of process affected water

    Energy Technology Data Exchange (ETDEWEB)

    Ewanchuk, A.M.; Ulrich, A.C.; Sego, D. [Alberta Univ., Edmonton, AB (Canada). Dept. of Civil and Environmental Engineering; Alostaz, M. [Thurber Engineering Ltd., Calgary, AB (Canada)

    2010-07-01

    A parallel factor analysis (PARAFAC) of oil sands process-affected water was presented. Naphthenic acids (NA) are traditionally described as monobasic carboxylic acids. Research has indicated that oil sands NA do not fit classical definitions of NA. Oil sands organic acids have toxic and corrosive properties. When analyzed by fluorescence technology, oil sands process-affected water displays a characteristic peak at 290 nm excitation and approximately 346 nm emission. In this study, PARAFAC was used to decompose process-affected water multi-way data into components representing analytes, chemical compounds, and groups of compounds. Water samples from various oil sands operations were analyzed in order to obtain excitation-emission matrices (EEMs). The EEMs were then arranged into a large matrix in order of decreasing process-affected water content for PARAFAC. The data were divided into 5 components. A comparison with commercially prepared NA samples suggested that oil sands NA are fundamentally different. Further research is needed to determine what each of the 5 components represents. tabs., figs.
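A minimal alternating-least-squares PARAFAC (CP decomposition) can be sketched on a synthetic three-way array standing in for the sample × excitation × emission data. The rank-2 structure and all values below are invented (the study extracted 5 components from real EEMs).

```python
import numpy as np

# Build a noiseless rank-2 three-way tensor from known factors.
rng = np.random.default_rng(2)
I, J, K, R = 20, 15, 10, 2
A0, B0, C0 = [np.abs(rng.normal(size=(d, R))) for d in (I, J, K)]
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)

def khatri_rao(X, Y):
    # column-wise Kronecker product
    return (X[:, None, :] * Y[None, :, :]).reshape(-1, X.shape[1])

# Random initial factors, then ALS sweeps over the three modes.
A = np.abs(rng.normal(size=(I, R)))
B = np.abs(rng.normal(size=(J, R)))
C = np.abs(rng.normal(size=(K, R)))
for _ in range(500):
    A = T.reshape(I, J * K) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
    B = T.transpose(1, 0, 2).reshape(J, I * K) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
    C = T.transpose(2, 0, 1).reshape(K, I * J) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))

T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = float(np.linalg.norm(T - T_hat) / np.linalg.norm(T))
```

Each recovered column of A, B, and C would correspond to one PARAFAC component (e.g. the excitation and emission profile of one compound group).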

  13. Comparison of cluster and principal component analysis techniques to derive dietary patterns in Irish adults.

    Science.gov (United States)

    Hearty, Aine P; Gibney, Michael J

    2009-02-01

    The aims of the present study were to examine and compare dietary patterns in adults using cluster and factor analyses and to examine the format of the dietary variables on the pattern solutions (i.e. expressed as grams/day (g/d) of each food group or as the percentage contribution to total energy intake). Food intake data were derived from the North/South Ireland Food Consumption Survey 1997-9, which was a randomised cross-sectional study of 7 d recorded food and nutrient intakes of a representative sample of 1379 Irish adults aged 18-64 years. Cluster analysis was performed using the k-means algorithm and principal component analysis (PCA) was used to extract dietary factors. Food data were reduced to thirty-three food groups. For cluster analysis, the most suitable format of the food-group variable was found to be the percentage contribution to energy intake, which produced six clusters: 'Traditional Irish'; 'Continental'; 'Unhealthy foods'; 'Light-meal foods & low-fat milk'; 'Healthy foods'; 'Wholemeal bread & desserts'. For PCA, food groups in the format of g/d were found to be the most suitable format, and this revealed four dietary patterns: 'Unhealthy foods & high alcohol'; 'Traditional Irish'; 'Healthy foods'; 'Sweet convenience foods & low alcohol'. In summary, cluster and PCA identified similar dietary patterns when presented with the same dataset. However, the two dietary pattern methods required a different format of the food-group variable, and the most appropriate format of the input variable should be considered in future studies.
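The two pattern methods can be illustrated on toy "food group" data with three planted dietary patterns (the study itself used 33 food groups from 1379 adults; everything here is simulated).

```python
import numpy as np

# Three planted dietary patterns, 50 subjects each, 6 food groups.
rng = np.random.default_rng(3)
centers = rng.uniform(0, 10, size=(3, 6))
X = np.vstack([c + rng.normal(size=(50, 6)) for c in centers])

def kmeans(X, k, iters=50):
    # Lloyd's algorithm with data points as initial centroids.
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centroids[None]) ** 2).sum(-1)
        assign = d.argmin(1)
        centroids = np.array([X[assign == j].mean(0) if (assign == j).any()
                              else centroids[j] for j in range(k)])
    return assign, centroids

assign, centroids = kmeans(X, 3)        # cluster-analysis "dietary patterns"

# PCA on the same matrix: "dietary factors" are the leading loadings.
Xc = X - X.mean(0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
loadings2 = Vt[:2]
explained2 = float((s[:2] ** 2).sum() / (s ** 2).sum())
```

Cluster analysis assigns each subject to exactly one pattern, whereas PCA gives every subject a continuous score on each factor, which is why the two methods need not use the same input format.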

  14. Nominal Performance Biosphere Dose Conversion Factor Analysis

    International Nuclear Information System (INIS)

    M. Wasiolek

    2004-01-01

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standard. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA-LA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN conceptual model and mathematical model in detail. The input parameter reports, shown to the right of the ''Biosphere Model Report'' in Figure 1-1, contain a detailed description of the model input parameters, their development, and the relationship between the parameters and specific features, events, and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. The objectives of this analysis are to develop BDCFs for the groundwater exposure scenario for the three climate states considered in the TSPA-LA, as well as conversion factors for evaluating compliance with the groundwater protection standard. The BDCFs will be used in performance assessment for calculating all-pathway annual doses for a given concentration of radionuclides in groundwater. The conversion factors will be used for calculating gross alpha particle activity in groundwater and the annual dose

  15. Development of computational methods of design by analysis for pressure vessel components

    International Nuclear Information System (INIS)

    Bao Shiyi; Zhou Yu; He Shuyan; Wu Honglin

    2005-01-01

    Stress classification is not only one of the key steps when a pressure vessel component is designed by analysis, but also a difficulty which has always puzzled engineers and designers. At present, for calculating and categorizing the stress field of pressure vessel components, several computational methods of design by analysis have been developed and applied, such as Stress Equivalent Linearization, the Two-Step Approach, the Primary Structure method, the Elastic Compensation method, the GLOSS R-Node method and so on. Moreover, the ASME code also gives an inelastic method of design by analysis for limiting gross plastic deformation only. When pressure vessel components are designed by analysis, there are sometimes huge differences between the results obtained with the different calculation and analysis methods mentioned above. This is the main reason that limits wide application of the design by analysis approach. Recently, a new approach, presented in the new proposal of a European Standard, CEN's unfired pressure vessel standard EN 13445-3, tries to avoid the problems of stress classification by directly analyzing the various failure mechanisms of pressure vessel structures based on elastic-plastic theory. In this paper, some of the stress classification methods mentioned above are described briefly, and the computational methods cited in the European pressure vessel standard, such as the Deviatoric Map and nonlinear analysis methods (plastic analysis and limit analysis), are outlined. Furthermore, the characteristics of the computational methods of design by analysis are summarized to aid in selecting the proper computational method when designing pressure vessel components by analysis. (authors)

  16. Housing price forecastability: A factor analysis

    DEFF Research Database (Denmark)

    Møller, Stig Vinther; Bork, Lasse

    2017-01-01

    We examine U.S. housing price forecastability using principal component analysis (PCA), partial least squares (PLS), and sparse PLS (SPLS). We incorporate information from a large panel of 128 economic time series and show that macroeconomic fundamentals have strong predictive power for future movements in housing prices. We find that (S)PLS models systematically dominate PCA models. (S)PLS models also generate significant out-of-sample predictive power over and above the predictive power contained in the price-rent ratio, autoregressive benchmarks, and regression models based on small datasets.
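The contrast between the two factor-forecast ideas can be sketched on simulated data: a target driven by a few latent "macro" factors, predicted from principal components versus a single partial-least-squares component. The data and factor structure below are invented, not the 128-series panel.

```python
import numpy as np

# Simulate a predictor panel driven by 3 latent factors, plus a target.
rng = np.random.default_rng(4)
n, p = 200, 30
F = rng.normal(size=(n, 3))                         # latent factors
X = F @ rng.normal(size=(3, p)) + rng.normal(size=(n, p))
y = F @ np.array([1.0, -0.5, 0.25]) + 0.3 * rng.normal(size=n)
Xc, yc = X - X.mean(0), y - y.mean()

# PCA: regress the target on the top-3 principal-component scores.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
T_pca = Xc @ Vt[:3].T
beta = np.linalg.lstsq(T_pca, yc, rcond=None)[0]
r2_pca = float(1 - ((yc - T_pca @ beta) ** 2).sum() / (yc ** 2).sum())

# PLS: one component chosen for its covariance with the target
# (a single NIPALS-style step).
w = Xc.T @ yc
w /= np.linalg.norm(w)
t = Xc @ w
r2_pls = float(1 - ((yc - t * (t @ yc) / (t @ t)) ** 2).sum() / (yc ** 2).sum())
```

The key design difference mirrors the paper's comparison: PCA extracts components to explain the predictors, while PLS extracts them to covary with the target.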

  17. Principal component analysis of tomato genotypes based on some morphological and biochemical quality indicators

    Directory of Open Access Journals (Sweden)

    Glogovac Svetlana

    2012-01-01

    Full Text Available This study investigates the variability of tomato genotypes based on morphological and biochemical fruit traits. The experimental material is part of the tomato genetic collection of the Institute of Field and Vegetable Crops in Novi Sad, Serbia. Genotypes were analyzed for fruit mass, locule number, index of fruit shape, fruit colour, dry matter content, total sugars, total acidity, lycopene and vitamin C. Minimum, maximum and average values and the main indicators of variability (CV and σ) were calculated. Principal component analysis was performed to determine the structure of the variability. Four principal components, which contribute 93.75% of the total variability, were selected for analysis. The first principal component is defined by vitamin C, locule number and index of fruit shape. The second component is determined by dry matter content and total acidity, the third by lycopene, fruit mass and fruit colour. Total sugars contributed most to the fourth component.

  18. The role of damage analysis in the assessment of service-exposed components

    International Nuclear Information System (INIS)

    Bendick, W.; Muesch, H.; Weber, H.

    1987-01-01

    Components in power stations are subjected to service conditions under which creep processes take place, limiting the component's lifetime by material exhaustion. To ensure safe and economic plant operation it is necessary to obtain information about the grade of exhaustion of single components as well as of the whole plant. A comprehensive lifetime assessment requires complete knowledge of the service parameters, the component's deformation behavior, and the change in material properties caused by long-time exposure to high service temperatures. A basis of evaluation is given by: 1) determination of material exhaustion by calculation, 2) investigation of the material properties, and 3) damage analysis. The purpose of this report is to show the role which damage analysis can play in the assessment of service-exposed components. As an example the test results of a damaged pipe bend will be discussed. (orig./MM)

  19. Principal Component Analysis of Working Memory Variables during Child and Adolescent Development.

    Science.gov (United States)

    Barriga-Paulino, Catarina I; Rodríguez-Martínez, Elena I; Rojas-Benjumea, María Ángeles; Gómez, Carlos M

    2016-10-03

    Correlation and Principal Component Analysis (PCA) of behavioral measures from two experimental tasks (Delayed Match-to-Sample and Oddball), and standard scores from a neuropsychological test battery (Working Memory Test Battery for Children), were performed on data from participants between 6-18 years old. In the correlation analysis (p < .05) and the PCA (components with eigenvalues > 1), the scores of the first extracted component were significantly correlated (p < .05) with most behavioral measures, suggesting some commonalities in the processes of age-related change in the measured variables. The results suggest that this first component would be related to age but also to individual differences during the cognitive maturation process across the childhood and adolescence stages. The fourth component would represent the speed-accuracy trade-off phenomenon, as it presents loadings with different signs for reaction times and errors.

  20. Reliability Analysis of 6-Component Star Markov Repairable System with Spatial Dependence

    Directory of Open Access Journals (Sweden)

    Liying Wang

    2017-01-01

    Full Text Available Star repairable systems with spatial dependence consist of a center component and several peripheral components. The peripheral components are arranged around the center component, and the performance of each component depends on its spatial “neighbors.” A vector Markov process is adopted to describe the performance of the system. The state space and transition rate matrix corresponding to the 6-component star Markov repairable system with spatial dependence are presented via the probability analysis method. Several reliability indices, such as the availability and the probabilities of visiting the safety, degradation, alert, and failed state sets, are obtained by the Laplace transform method, and a numerical example is provided to illustrate the results.
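The kind of computation behind such indices can be sketched generically: steady-state probabilities of a small continuous-time Markov repairable model. The toy 3-state chain below (safety, alert, failed, with repairs) and its rates are invented for illustration, not the paper's 6-component star system.

```python
import numpy as np

# Generator matrix of a toy repairable chain: safety <-> alert <-> failed.
lam1, lam2, mu1, mu2 = 0.1, 0.2, 1.0, 0.5
Q = np.array([
    [-lam1,          lam1,   0.0],   # safety
    [  mu1, -(mu1 + lam2),  lam2],   # alert
    [  0.0,           mu2,  -mu2],   # failed
])

# Solve pi Q = 0 together with sum(pi) = 1 via least squares on the
# stacked (consistent) linear system.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]
availability = float(pi[0] + pi[1])   # system is up unless failed
```

For this birth-death chain the balance equations give pi1 = (lam1/mu1) pi0 and pi2 = (lam2/mu2) pi1, so the steady-state availability is 1.1/1.14 ≈ 0.965; transient indices like those in the paper would instead use Laplace transforms of the same generator.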

  1. Estimation of Leakage Ratio Using Principal Component Analysis and Artificial Neural Network in Water Distribution Systems

    Directory of Open Access Journals (Sweden)

    Dongwoo Jang

    2018-03-01

    Full Text Available Leaks in a water distribution network (WDS) constitute losses of water supply caused by pipeline failure, operational loss, and physical factors. This has raised the need for studies on the factors affecting the leakage ratio and on the estimation of leakage volume in a water supply system. In this study, principal component analysis (PCA) and an artificial neural network (ANN) were used to estimate the volume of water leakage in a WDS. For the study, six main effective parameters were selected, and standardized data were obtained through the Z-score method. The PCA-ANN model was devised and the leakage ratio was estimated. An accuracy assessment was performed to compare the measured leakage ratio to that of the simulated model. The results showed that the PCA-ANN method was more accurate for estimating the leakage ratio than a single ANN simulation. In addition, the estimation results differed according to the number of neurons in the ANN model’s hidden layers. In this study, an ANN with multiple hidden layers of 12–12 neurons was found to be the best configuration for estimating the leakage ratio. This study suggests approaches to improve the accuracy of leakage ratio estimation, as well as a scientific approach toward the sustainable management of water distribution systems.
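A rough sketch of the PCA-ANN idea: Z-score the six parameters, keep the leading principal components, and fit a small one-hidden-layer network on the scores by gradient descent. The data, network size, and target function below are invented for illustration.

```python
import numpy as np

# Synthetic stand-in for the six leakage-related parameters and target.
rng = np.random.default_rng(5)
n, p = 300, 6
X = rng.normal(size=(n, p))
y = np.tanh(X[:, 0] - 0.5 * X[:, 1]) + 0.1 * rng.normal(size=n)

Z = (X - X.mean(0)) / X.std(0)                   # Z-score standardisation
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
T = Z @ Vt[:4].T                                 # leading 4 component scores

# One-hidden-layer network trained by plain batch gradient descent.
W1 = 0.5 * rng.normal(size=(4, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.normal(size=8);      b2 = 0.0

def mse():
    H = np.tanh(T @ W1 + b1)
    return float(((H @ W2 + b2 - y) ** 2).mean())

loss0 = mse()
lr = 0.05
for _ in range(500):
    H = np.tanh(T @ W1 + b1)
    g = 2 * (H @ W2 + b2 - y) / n                # d(mse)/d(prediction)
    gH = np.outer(g, W2) * (1 - H ** 2)          # back-prop through tanh
    W2 -= lr * (H.T @ g); b2 -= lr * g.sum()
    W1 -= lr * (T.T @ gH); b1 -= lr * gH.sum(0)
loss1 = mse()
```

Feeding decorrelated component scores rather than raw parameters into the network is the design choice the paper credits for the accuracy gain over a single ANN.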

  2. Physics Metacognition Inventory Part II: Confirmatory factor analysis and Rasch analysis

    Science.gov (United States)

    Taasoobshirazi, Gita; Bailey, MarLynn; Farley, John

    2015-11-01

    The Physics Metacognition Inventory was developed to measure physics students' metacognition for problem solving. In one of our earlier studies, an exploratory factor analysis provided evidence of preliminary construct validity, revealing six components of students' metacognition when solving physics problems including knowledge of cognition, planning, monitoring, evaluation, debugging, and information management. The college students' scores on the inventory were found to be reliable and related to students' physics motivation and physics grade. However, the results of the exploratory factor analysis indicated that the questionnaire could be revised to improve its construct validity. The goal of this study was to revise the questionnaire and establish its construct validity through a confirmatory factor analysis. In addition, a Rasch analysis was applied to the data to better understand the psychometric properties of the inventory and to further evaluate the construct validity. Results indicated that the final, revised inventory is a valid, reliable, and efficient tool for assessing student metacognition for physics problem solving.

  3. Failure trend analysis for safety related components of Korean standard NPPs

    International Nuclear Information System (INIS)

    Choi, Sun Yeong; Han, Sang Hoon

    2005-01-01

    Component reliability data that reflect plant-specific characteristics are necessarily required for the PSA of Korean nuclear power plants. We have performed a project to develop the component reliability database (KIND, Korea Integrated Nuclear Reliability Database) and S/W for database management and component reliability analysis. Based on this system, we have collected component operation data and failure/repair data from the start of plant operation to 2002 for the YGN 3, 4 and UCN 3, 4 plants. Recently, we provided the component failure rate data for the UCN 3, 4 standard PSA model from KIND. We evaluated the components that have high-ranking failure rates with the component reliability data from the start of plant operation to 1998 and 2000 for YGN 3, 4 and UCN 3, 4 respectively. We also identified the failure modes that occurred frequently. In this study, we analyze the component failure trend and perform a site comparison based on the generic data by using the component reliability data, which have been extended to 2002 for UCN 3, 4 and YGN 3, 4 respectively. We focus on major safety-related rotating components such as pumps, EDGs, etc.

  4. Personality, tobacco consumption, physical inactivity, obesity markers, and metabolic components as risk factors for cardiovascular disease in the general population.

    Science.gov (United States)

    Pocnet, Cornelia; Antonietti, Jean-Philippe; Strippoli, Marie-Pierre F; Glaus, Jennifer; Rossier, Jérôme; Preisig, Martin

    2017-09-01

    The aim of this study was to investigate the relationship between personality traits, tobacco consumption, physical inactivity, obesity markers and metabolic components as cardiovascular risk factors (CVRFs). A total of 2543 participants from the general population (CoLaus|PsyCoLaus) had provided complete information on physical health and unhealthy behaviors and completed the Revised NEO Five-Factor Inventory. Our results show a strong cross-correlation between obesity markers and metabolic components, suggesting that their combination could represent an important CVRF. Moreover, socio-demographic characteristics, tobacco consumption, and physical inactivity were associated with both the obesity marker and metabolic component latent traits. The conscientiousness personality trait was significantly associated with obesity markers, but played a modest role: higher conscientiousness was associated with a lower level of obesity indicators. However, no link between personality and metabolic components was found. In sum, our data suggest that health-related behaviours have a greater effect on the development of cardiovascular diseases than personality traits.

  5. Factor analysis for exercise stress radionuclide ventriculography

    International Nuclear Information System (INIS)

    Hirota, Kazuyoshi; Yasuda, Mitsutaka; Oku, Hisao; Ikuno, Yoshiyasu; Takeuchi, Kazuhide; Takeda, Tadanao; Ochi, Hironobu

    1987-01-01

    Using factor analysis, a new image processing technique in exercise stress radionuclide ventriculography, changes in factors associated with exercise were evaluated in 14 patients with angina pectoris or old myocardial infarction. The patients were imaged in the left anterior oblique projection, and three factor images were presented on a color coded scale. Abnormal factors (AF) were observed in 6 patients before exercise, 13 during exercise, and 4 after exercise. In 7 patients, the occurrence of AF was associated with exercise. Five of them became free from AF after exercise. Three patients showing AF before exercise had aggravation of AF during exercise. Overall, the occurrence or aggravation of AF was associated with exercise in ten (71 %) of the patients. The other three patients, however, had disappearance of AF during exercise. In the last patient, no AF was observed throughout the study. In view of the high incidence of AF associated with exercise, factor analysis may have potential for evaluating cardiac reserve from the viewpoint of left ventricular wall motion abnormality. (Namekawa, K.)

  6. Comparison of multipoint linkage analyses for quantitative traits in the CEPH data: parametric LOD scores, variance components LOD scores, and Bayes factors.

    Science.gov (United States)

    Sung, Yun Ju; Di, Yanming; Fu, Audrey Q; Rothstein, Joseph H; Sieh, Weiva; Tong, Liping; Thompson, Elizabeth A; Wijsman, Ellen M

    2007-01-01

    We performed multipoint linkage analyses with multiple programs and models for several gene expression traits in the Centre d'Etude du Polymorphisme Humain families. All analyses provided consistent results for both peak location and shape. Variance-components (VC) analysis gave wider peaks and Bayes factors gave fewer peaks. Among programs from the MORGAN package, lm_multiple performed better than lm_markers, resulting in less Markov-chain Monte Carlo (MCMC) variability between runs, and the program lm_twoqtl provided higher LOD scores by also including either a polygenic component or an additional quantitative trait locus.

  7. Correction factor for hair analysis by PIXE

    International Nuclear Information System (INIS)

    Montenegro, E.C.; Baptista, G.B.; Castro Faria, L.V. de; Paschoa, A.S.

    1980-01-01

    The application of the Particle Induced X-ray Emission (PIXE) technique to analyse quantitatively the elemental composition of hair specimens brings about some difficulties in the interpretation of the data. The present paper proposes a correction factor to account for the effects of the energy loss of the incident particle with penetration depth, and of X-ray self-absorption, when a particular geometrical distribution of elements in hair is assumed for calculational purposes. The correction factor has been applied to the analysis of the Zn, Cu and Ca contents of hair as a function of the energy of the incident particle. (orig.)

  8. Boolean Factor Analysis by Attractor Neural Network

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Muraviev, I. P.; Polyakov, P.Y.

    2007-01-01

    Roč. 18, č. 3 (2007), s. 698-707 ISSN 1045-9227 R&D Projects: GA AV ČR 1ET100300419; GA ČR GA201/05/0079 Institutional research plan: CEZ:AV0Z10300504 Keywords : recurrent neural network * Hopfield-like neural network * associative memory * unsupervised learning * neural network architecture * neural network application * statistics * Boolean factor analysis * dimensionality reduction * features clustering * concepts search * information retrieval Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.769, year: 2007

  9. Correction factor for hair analysis by PIXE

    International Nuclear Information System (INIS)

    Montenegro, E.C.; Baptista, G.B.; Castro Faria, L.V. de; Paschoa, A.S.

    1979-06-01

    The application of the Particle Induced X-ray Emission (PIXE) technique to analyse quantitatively the elemental composition of hair specimens brings about some difficulties in the interpretation of the data. The present paper proposes a correction factor to account for the effects of energy loss of the incident particle with penetration depth, and of X-ray self-absorption, when a particular geometrical distribution of elements in hair is assumed for calculational purposes. The correction factor has been applied to the analysis of the Zn, Cu and Ca contents of hair as a function of the energy of the incident particle. (Author)

  10. Nominal Performance Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M.A. Wasiolek

    2003-07-25

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standard. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA model. The ''Biosphere Model Report'' (BSC 2003 [DIRS 164186]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports (BSC 2003 [DIRS 160964]; BSC 2003 [DIRS 160965]; BSC 2003 [DIRS 160976]; BSC 2003 [DIRS 161239]; BSC 2003 [DIRS 161241]) contain detailed description of the model input parameters. This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. The objectives of this analysis are to develop BDCFs and conversion factors for the TSPA. The BDCFs will be used in performance assessment for calculating annual doses for a given concentration of radionuclides in groundwater. The conversion factors will be used for calculating gross alpha particle activity in groundwater and the annual dose from beta- and photon-emitting radionuclides.

  11. Identification of components of fibroadenoma in cytology preparations using texture analysis: a morphometric study.

    Science.gov (United States)

    Singh, S; Gupta, R

    2012-06-01

    To evaluate the utility of image analysis using textural parameters obtained from a co-occurrence matrix in differentiating the three components of fibroadenoma of the breast in fine needle aspirate smears. Sixty cases of histologically proven fibroadenoma were included in this study. Of these, 40 cases were used as a training set and 20 cases were taken as a test set for the discriminant analysis. Digital images were acquired from cytological preparations of all the cases, and three components of fibroadenoma (namely, monolayered cell clusters, stromal fragments and background with bare nuclei) were selected for image analysis. A co-occurrence matrix was generated and a texture parameter vector (sum mean, energy, entropy, contrast, cluster tendency and homogeneity) was calculated for each pixel. The percentage of pixels correctly classified to a component of fibroadenoma on discriminant analysis was noted. The textural parameters, when considered in isolation, showed considerable overlap in their values for the three cytological components of fibroadenoma. However, the stepwise discriminant analysis revealed that all six textural parameters contributed significantly to the discriminant functions. Discriminant analysis using all six parameters showed that the numbers of pixels correctly classified in the training and test sets were 96.7% and 93.0%, respectively. Textural analysis using a co-occurrence matrix appears to be useful in differentiating the three cytological components of fibroadenoma. These results could further be utilized in developing algorithms for image segmentation and automated diagnosis, but need to be confirmed in further studies. © 2011 Blackwell Publishing Ltd.
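The co-occurrence-matrix texture parameters named above (energy, entropy, contrast, homogeneity) can be illustrated with a small sketch; the function names and toy images below are mine, not the authors' pipeline:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset, normalized to probabilities."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_features(p):
    """Energy, entropy, contrast, homogeneity from a normalized co-occurrence matrix."""
    i, j = np.indices(p.shape)
    nz = p[p > 0]
    return {
        "energy": float((p ** 2).sum()),
        "entropy": float(-(nz * np.log2(nz)).sum()),
        "contrast": float((p * (i - j) ** 2).sum()),
        "homogeneity": float((p / (1.0 + np.abs(i - j))).sum()),
    }

rng = np.random.default_rng(0)
flat = np.zeros((32, 32), dtype=int)        # uniform region: high energy, zero entropy
noisy = rng.integers(0, 8, size=(32, 32))   # random region: low energy, high entropy
f_flat = texture_features(glcm(flat, 8))
f_noisy = texture_features(glcm(noisy, 8))
```

A uniform region concentrates all co-occurrence probability in one cell, which is what makes these features discriminative between homogeneous cell clusters and noisy background.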

  12. Automatic flow analysis of digital subtraction angiography using independent component analysis in patients with carotid stenosis.

    Directory of Open Access Journals (Sweden)

    Han-Jui Lee

    Full Text Available Current time-density curve analysis of digital subtraction angiography (DSA) provides intravascular flow information but requires manual vasculature selection. We developed an angiographic marker that represents cerebral perfusion by using automatic independent component analysis. We retrospectively analyzed the data of 44 patients with unilateral carotid stenosis higher than 70% according to North American Symptomatic Carotid Endarterectomy Trial criteria. For all patients, magnetic resonance perfusion (MRP) was performed one day before DSA. Fixed contrast injection protocols and DSA acquisition parameters were used before stenting. The cerebral circulation time (CCT) was defined as the difference in the time to peak between the parietal vein and the cavernous internal carotid artery in a lateral angiogram. Both anterior-posterior and lateral DSA views were processed using independent component analysis, and the capillary angiogram was extracted automatically. The full width at half maximum of the time-density curve in the capillary phase in the anterior-posterior and lateral DSA views was defined as the angiographic mean transit time (aMTT), i.e., aMTTAP and aMTTLat. The correlations between the degree of stenosis, CCT, aMTTAP and aMTTLat, and MRP parameters were evaluated. The degree of stenosis showed no correlation with CCT, aMTTAP, aMTTLat, or any MRP parameter. CCT showed a strong correlation with aMTTAP (r = 0.67) and aMTTLat (r = 0.72). Among the MRP parameters, CCT showed only a moderate correlation with MTT (r = 0.67) and Tmax (r = 0.40). aMTTAP showed a moderate correlation with Tmax (r = 0.42) and a strong correlation with MTT (r = 0.77). aMTTLat also showed similar correlations with Tmax (r = 0.59) and MTT (r = 0.73). Apart from vascular anatomy, aMTT estimates brain parenchyma hemodynamics from DSA and is concordant with MRP. This process is completely automatic and provides immediate measurement of quantitative peritherapeutic brain parenchyma
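The full-width-at-half-maximum measurement that defines aMTT can be sketched as follows; the gamma-variate test curve and the function name are illustrative assumptions, not the study's code:

```python
import numpy as np

def fwhm(t, y):
    """Full width at half maximum of a single-peaked time-density curve,
    with linear interpolation at the two half-maximum crossings."""
    y = np.asarray(y, dtype=float)
    half = y.max() / 2.0
    k = int(np.argmax(y))
    # walk left from the peak until the curve drops below half maximum
    i = k
    while i > 0 and y[i - 1] >= half:
        i -= 1
    t_left = np.interp(half, [y[i - 1], y[i]], [t[i - 1], t[i]]) if i > 0 else t[0]
    # walk right from the peak
    j = k
    while j < len(y) - 1 and y[j + 1] >= half:
        j += 1
    t_right = np.interp(half, [y[j + 1], y[j]], [t[j + 1], t[j]]) if j < len(y) - 1 else t[-1]
    return t_right - t_left

# synthetic bolus-passage curve shaped like a gamma variate, peaking at t = 4
t = np.linspace(0, 20, 2001)
y = (t ** 2) * np.exp(-t / 2.0)
width = fwhm(t, y)
```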

  13. Performance assessment of air quality monitoring networks using principal component analysis and cluster analysis

    International Nuclear Information System (INIS)

    Lu, Wei-Zhen; He, Hong-Di; Dong, Li-yun

    2011-01-01

    This study aims to evaluate the performance of two statistical methods, principal component analysis and cluster analysis, for the management of the air quality monitoring network of Hong Kong and the reduction of associated expenses. The specific objectives include: (i) to identify city areas with similar air pollution behavior; and (ii) to locate emission sources. The statistical methods were applied to the mass concentrations of sulphur dioxide (SO2), respirable suspended particulates (RSP) and nitrogen dioxide (NO2), collected in the monitoring network of Hong Kong from January 2001 to December 2007. The results demonstrate that, for each pollutant, the monitoring stations are grouped into different classes based on their air pollution behaviors. Monitoring stations located in nearby areas are characterized by the same specific air pollution characteristics, suggesting that the air quality monitoring system can be managed more effectively. Redundant equipment could be transferred to other monitoring stations, allowing further enlargement of the monitored area. Additionally, the existence of different air pollution behaviors in the monitoring network is explained by the variability of wind directions across the region. The results imply that the air quality problem in Hong Kong is not only a local problem arising mainly from street-level pollution, but also a regional problem originating from the Pearl River Delta region. (author)
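The combination of PCA and clustering for grouping stations with similar pollution behavior can be sketched like this; the synthetic station data and the toy k-means are my own, not the paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 200
urban = rng.normal(0, 1, T)   # shared pollution signal for one group of stations
rural = rng.normal(0, 1, T)   # shared signal for the other group
# 6 "stations": 3 dominated by each regional signal, plus local noise
X = np.array([urban + 0.2 * rng.normal(0, 1, T) for _ in range(3)] +
              [rural + 0.2 * rng.normal(0, 1, T) for _ in range(3)])

# PCA on the station-by-time matrix: center over time, take leading components
Xc = X - X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * s[:2]     # station coordinates in principal-component space

def kmeans2(pts, iters=50):
    """Tiny 2-means with deterministic initialization at the first and last point."""
    centers = pts[[0, -1]].astype(float)
    for _ in range(iters):
        lab = np.argmin(((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for c in range(2):
            if np.any(lab == c):
                centers[c] = pts[lab == c].mean(axis=0)
    return lab

labels = kmeans2(scores)      # stations with similar behavior share a label
```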

  14. Setting Component Priorities in Protecting NPPs against Cyber-Attacks Using Reliability Analysis Techniques

    International Nuclear Information System (INIS)

    Choi, Moon Kyoung; Seong, Poong Hyun; Son, Han Seong

    2017-01-01

    The digitalization of infrastructure makes systems vulnerable to cyber threats and hybrid attacks. According to ICS-CERT reports, the number of vulnerabilities in ICS industries is increasing rapidly over time. Digital I&C systems have been developed and installed in nuclear power plants, and owing to their installation, cyber security concerns are increasing in the nuclear industry. However, there are too many critical digital assets to be inspected in digitalized NPPs. In order to reduce the inefficiency of regulation in nuclear facilities, the critical components that are directly related to an accident are identified by using reliability analysis techniques. Target initiating events are selected, and their headings are analyzed through event tree analysis with respect to whether they can be affected by cyber-attacks. Among the headings, those whose failure due to a cyber-attack can proceed directly to core damage are finally selected as the targets for deriving the minimal cut-sets. We analyze the fault trees and derive the minimal cut-sets. In the original PSA, the probability of the cut-sets is important, but the probability is not important in terms of the cyber security of NPPs; the important factor is the number of basic events constituting the minimal cut-sets, which is proportional to vulnerability.

  15. Trajectory modeling of gestational weight: A functional principal component analysis approach.

    Directory of Open Access Journals (Sweden)

    Menglu Che

    Full Text Available Suboptimal gestational weight gain (GWG), which is linked to increased risk of adverse outcomes for a pregnant woman and her infant, is prevalent. In the study of a large cohort of Canadian pregnant women, our goals are to estimate the individual weight growth trajectory using sparsely collected bodyweight data, and to identify the factors affecting the weight change during pregnancy, such as prepregnancy body mass index (BMI), dietary intakes and physical activity. The first goal was achieved through functional principal component analysis (FPCA) by conditional expectation. For the second goal, we used linear regression with the total weight gain as the response variable. The trajectory modeling through FPCA had a significantly smaller root mean square error (RMSE) and improved adaptability compared with classic nonlinear mixed-effect models, demonstrating a novel tool that can be used to facilitate real-time monitoring and interventions of GWG. Our regression analysis showed that prepregnancy BMI had a high predictive value for the weight changes during pregnancy, which agrees with the published weight gain guideline.

  16. Nominal Performance Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M.A. Wasiolek

    2005-04-28

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standards. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop BDCFs, which are input parameters for the TSPA-LA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the ''Biosphere Model Report'' in Figure 1-1, contain detailed description of the model input parameters, their development, and the relationship between the parameters and specific features events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. This analysis receives direct input from the outputs of the ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) and the five analyses that develop parameter values for the biosphere model (BSC 2005 [DIRS 172827]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the ''Biosphere Dose Conversion Factor Importance and Sensitivity Analysis

  17. Nominal Performance Biosphere Dose Conversion Factor Analysis

    International Nuclear Information System (INIS)

    M.A. Wasiolek

    2005-01-01

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standards. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop BDCFs, which are input parameters for the TSPA-LA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the ''Biosphere Model Report'' in Figure 1-1, contain detailed description of the model input parameters, their development, and the relationship between the parameters and specific features events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. This analysis receives direct input from the outputs of the ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) and the five analyses that develop parameter values for the biosphere model (BSC 2005 [DIRS 172827]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the ''Biosphere Dose Conversion Factor Importance and Sensitivity Analysis'' (Figure 1-1). The objectives of this analysis are to develop BDCFs for the

  18. Signal extraction and wave field separation in tunnel seismic prediction by independent component analysis

    Science.gov (United States)

    Yue, Y.; Jiang, T.; Zhou, Q.

    2017-12-01

    In order to ensure the rationality and safety of tunnel excavation, advanced geological prediction has become an indispensable step in tunneling. However, the extraction of the signal and the separation of P and S waves directly influence the accuracy of geological prediction. Generally, the raw data collected by the TSP system are of low quality because of the numerous disturbance factors in tunnel projects, such as power interference and machine vibration interference. It is difficult for the traditional method (band-pass filtering) to remove interference effectively while causing little loss to the signal. With the power interference, machine vibration interference and the signal as original variables and the x, y, z components as observation signals, each component is a linear combination of the original variables, which satisfies the applicability conditions of independent component analysis (ICA). We performed finite-difference simulations of elastic wave propagation to synthesize a tunnel seismic reflection record. The ICA method was adopted to process the three-component data, and the results show that the extracted signal estimates and the true signals are highly correlated (the correlation coefficient is above 0.93). In addition, the interference estimates separated by ICA and the interference signals are also highly correlated, with a correlation coefficient above 0.99. Thus, the simulation results show that ICA is an ideal method for extracting high-quality data from mixed signals. For the separation of P and S waves, conventional separation techniques are based on the physical characteristics of wave propagation, which require knowledge of the near-surface P- and S-wave velocities and density, whereas the ICA approach is based entirely on statistical differences between P and S waves, and this statistical technique does not require a priori information. 
The concrete results of the wave field separation will be presented in
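A minimal sketch of blind separation in the spirit described above, using a hand-rolled symmetric FastICA on a two-channel synthetic mixture; the sources, mixing matrix, and contrast function are illustrative assumptions, not the study's setup:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
t = np.linspace(0, 8, n)
s1 = np.sin(2 * np.pi * 5 * t)               # harmonic "seismic signal"
s2 = np.sign(np.sin(2 * np.pi * 50 * t))     # square wave standing in for power interference
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                   # unknown mixing into two observed channels
X = A @ S

# whitening: zero mean, identity covariance
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(Xc @ Xc.T / n)
Z = E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ Xc

# symmetric FastICA with the tanh contrast function
W = rng.normal(size=(2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    W = (G @ Z.T) / n - (1 - G ** 2).mean(axis=1, keepdims=True) * W
    U, _, Vt = np.linalg.svd(W)
    W = U @ Vt                               # decorrelate: W <- (W W^T)^(-1/2) W
S_hat = W @ Z                                # recovered sources (order and sign arbitrary)

def best_corr(src, est):
    """Best absolute correlation between a true source and any recovered row."""
    return max(abs(np.corrcoef(src, row)[0, 1]) for row in est)
```

ICA recovers the sources only up to permutation, sign, and scale, which is why recovery is checked by correlation rather than by direct comparison.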

  19. Reliability analysis of nuclear component cooling water system using semi-Markov process model

    International Nuclear Information System (INIS)

    Veeramany, Arun; Pandey, Mahesh D.

    2011-01-01

    Research highlights: → A semi-Markov process (SMP) model is used to evaluate the system failure probability of the nuclear component cooling water (NCCW) system. → SMP is used because it can solve a reliability block diagram with a mixture of redundant repairable and non-repairable components. → The primary objective is to demonstrate that SMP can consider a Weibull failure time distribution for components while a Markov model cannot. → Result: the variability in component failure time is directly proportional to the NCCW system failure probability. → The result can be utilized as an initiating event probability in probabilistic safety assessment projects. - Abstract: A reliability analysis of the nuclear component cooling water (NCCW) system is carried out. A semi-Markov process model is used in the analysis because it can solve a reliability block diagram with a mixture of repairable and non-repairable components. With Markov models it is only possible to assume an exponential profile for component failure times. An advantage of the proposed model is the ability to assume a Weibull distribution for the failure time of components. In an attempt to reduce the number of states in the model, it is shown that the poly-Weibull distribution arises. The objective of the paper is to determine the system failure probability under these assumptions. Monte Carlo simulation is used to validate the model result. This result can be utilized as an initiating event probability in probabilistic safety assessment projects.
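The validation idea, Monte Carlo simulation of a redundant system with Weibull component lifetimes checked against a closed form, can be sketched for a non-repairable 2-out-of-3 system; this is a deliberate simplification of the NCCW model, and all parameters are mine:

```python
import numpy as np

rng = np.random.default_rng(7)
beta, eta, t_miss = 2.0, 10.0, 8.0      # Weibull shape, scale, and mission time
N = 200_000

# Monte Carlo: sample 3 redundant component lifetimes per trial;
# a 2-out-of-3 system fails when the 2nd component fails
life = eta * rng.weibull(beta, size=(N, 3))
p_mc = ((life <= t_miss).sum(axis=1) >= 2).mean()

# closed form for comparison: component reliability R, then 2-out-of-3 unreliability
R = np.exp(-(t_miss / eta) ** beta)
p_exact = 1.0 - (3 * R**2 * (1 - R) + R**3)
```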

  20. Confirmatory factor analysis using Microsoft Excel.

    Science.gov (United States)

    Miles, Jeremy N V

    2005-11-01

    This article presents a method for using Microsoft (MS) Excel for confirmatory factor analysis (CFA). CFA is often seen as an impenetrable technique, and thus, when it is taught, there is frequently little explanation of the mechanisms or underlying calculations. The aim of this article is to demonstrate that this is not the case; it is relatively straightforward to produce a spreadsheet in MS Excel that can carry out simple CFA. It is possible, with few or no programming skills, to effectively program a CFA analysis and, thus, to gain insight into the workings of the procedure.
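The spreadsheet computation the article describes amounts to minimizing the ML discrepancy between the sample covariance S and the implied covariance Sigma = Lambda Lambda' + Theta. A sketch of the same calculation in Python rather than Excel; the simulated data, starting values, and optimizer choice are my assumptions:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 2000, 4
lam_true = np.array([0.8, 0.7, 0.6, 0.5])    # one-factor loadings
theta_true = 1.0 - lam_true ** 2             # unique variances (unit item variances)

f = rng.normal(size=n)                       # latent factor scores
X = np.outer(f, lam_true) + rng.normal(size=(n, p)) * np.sqrt(theta_true)
S = np.cov(X, rowvar=False)                  # sample covariance matrix

def ml_discrepancy(params):
    """ML fit function log|Sigma| + tr(S Sigma^-1), constants dropped."""
    lam, theta = params[:p], params[p:]
    if np.any(theta <= 1e-6):
        return 1e6                           # keep unique variances positive
    Sigma = np.outer(lam, lam) + np.diag(theta)
    sign, logdet = np.linalg.slogdet(Sigma)
    if sign <= 0:
        return 1e6
    return logdet + np.trace(S @ np.linalg.inv(Sigma))

x0 = np.full(2 * p, 0.5)
res = minimize(ml_discrepancy, x0, method="Nelder-Mead",
               options={"maxiter": 20000, "maxfev": 20000,
                        "xatol": 1e-9, "fatol": 1e-12})
lam_hat = np.abs(res.x[:p])                  # the factor's sign is not identified
```

This is essentially what a Solver-driven Excel sheet does: cells hold Lambda and Theta, a formula cell holds the discrepancy, and the optimizer adjusts the parameter cells.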

  1. A kernel version of spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis
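The implicit feature-space transformation described above can be sketched with an RBF-kernel PCA on data where linear PCA fails (two concentric rings); the kernel width, test data, and helper names are my assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
ang = rng.uniform(0, 2 * np.pi, 2 * n)
r = np.concatenate([np.full(n, 1.0), np.full(n, 4.0)]) + rng.normal(0, 0.05, 2 * n)
X = np.column_stack([r * np.cos(ang), r * np.sin(ang)])   # inner ring, then outer ring

def kernel_pca(X, n_comp=2, gamma=0.5):
    """RBF-kernel PCA: build the kernel matrix, center it in feature space,
    and eigendecompose to get projections onto the leading components."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    N = len(X)
    J = np.ones((N, N)) / N
    Kc = K - J @ K - K @ J + J @ K @ J           # centering in feature space
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_comp]
    return vecs[:, order] * np.sqrt(np.abs(vals[order]))

Z = kernel_pca(X)

def separation(v):
    """Fisher-style between-ring separation along one coordinate."""
    a, b = v[:n], v[n:]
    return abs(a.mean() - b.mean()) / (a.std() + b.std() + 1e-12)
```

No linear direction in the raw coordinates separates the rings, but the first kernel component does, which is the point of the implicit nonlinear mapping.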

  2. Application of SWAT99.2 to sensitivity analysis of water balance components in unique plots in a hilly region

    Directory of Open Access Journals (Sweden)

    Jun-feng Dai

    2017-07-01

    Full Text Available Although many sensitivity analyses using the soil and water assessment tool (SWAT) in a complex watershed have been conducted, little attention has been paid to the application potential of the model in unique plots. In addition, sensitivity analysis of percolation and evapotranspiration with SWAT has seldom been undertaken. In this study, SWAT99.2 was calibrated to simulate water balance components for unique plots in Southern China from 2000 to 2001, which included surface runoff, percolation, and evapotranspiration. Twenty-one parameters classified into four categories, including meteorological conditions, topographical characteristics, soil properties, and vegetation attributes, were used for sensitivity analysis through one-at-a-time (OAT) sampling to identify the factor that contributed most to the variance in water balance components. The results were shown to be different for different plots, with parameter sensitivity indices and ranks varying for different water balance components. Water balance components in the broad-leaved forest and natural grass plots were most sensitive to meteorological conditions, less sensitive to vegetation attributes and soil properties, and least sensitive to topographical characteristics. Compared to those in the natural grass plot, water balance components in the broad-leaved forest plot demonstrated higher sensitivity to the maximum stomatal conductance (GSI) and maximum leaf area index (BLAI).

  3. Discriminatory components retracing strategy for monitoring the preparation procedure of Chinese patent medicines by fingerprint and chemometric analysis.

    Directory of Open Access Journals (Sweden)

    Shuai Yao

    Full Text Available Chinese patent medicines (CPMs), generally prepared from several traditional Chinese medicines (TCMs) in accordance with a specific process, are the typical delivery form of TCMs in Asia. To date, quality control of CPMs has typically focused on the evaluation of the final products using fingerprint techniques and multi-component quantification, but rarely on monitoring the whole preparation process, which is considered more important for ensuring the quality of CPMs. In this study, a novel and effective "retracing" strategy based on HPLC fingerprints and chemometric analysis was proposed, with Shenkang injection (SKI) serving as an example, to achieve quality control of the whole preparation process. The chemical fingerprints were established initially and then analyzed by similarity analysis, principal component analysis (PCA) and partial least squares-discriminant analysis (PLS-DA) to evaluate the quality and to explore discriminatory components. As a result, the holistic inconsistencies of ninety-three batches of SKIs were identified, and five discriminatory components, including emodic acid, gallic acid, caffeic acid, chrysophanol-O-glucoside, and p-coumaroyl-O-galloyl-glucose, were labeled as the representative targets to explain the retracing strategy. Through analysis of the variation of these targets in the corresponding semi-products (ninety-three batches), intermediates (thirty-three batches), and the raw materials, successively, the origins of the discriminatory components were determined and some crucial influencing factors were proposed, including the raw materials, the coextraction temperature, the sterilizing conditions, and so on. Meanwhile, a reference fingerprint was established and subsequently applied to the guidance of manufacturing. 
It was suggested that the production process should be standardized by taking the concentration of the discriminatory components as the diagnostic marker to ensure stable and consistent quality for multi

  4. MULTI-COMPONENT ANALYSIS OF POSITION-VELOCITY CUBES OF THE HH 34 JET

    International Nuclear Information System (INIS)

    Rodríguez-González, A.; Esquivel, A.; Raga, A. C.; Cantó, J.; Curiel, S.; Riera, A.; Beck, T. L.

    2012-01-01

    We present an analysis of Hα spectra of the HH 34 jet with two-dimensional spectral resolution. We carry out multi-Gaussian fits to the spatially resolved line profiles and derive maps of the intensity, radial velocity, and velocity width of each of the components. We find that close to the outflow source we have three components: a high (negative) radial velocity component with a well-collimated, jet-like morphology; an intermediate velocity component with a broader morphology; and a positive radial velocity component with a non-collimated morphology and large linewidth. We suggest that this positive velocity component is associated with jet emission scattered in stationary dust present in the circumstellar environment. Farther away from the outflow source, we find only two components (a high, negative radial velocity component, which has a narrower spatial distribution than an intermediate velocity component). The fitting procedure was carried out with the new AGA-V1 code, which is available online and is described in detail in this paper.
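The multi-Gaussian fitting of spatially resolved line profiles can be sketched as below. This is not the AGA-V1 code; the synthetic Hα profile, component parameters, and least-squares fitter are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def three_gaussians(v, a1, c1, w1, a2, c2, w2, a3, c3, w3):
    """Sum of three Gaussian components: amplitude, center (radial velocity), width."""
    g = lambda a, c, w: a * np.exp(-0.5 * ((v - c) / w) ** 2)
    return g(a1, c1, w1) + g(a2, c2, w2) + g(a3, c3, w3)

v = np.linspace(-400, 200, 600)                 # radial velocity axis (km/s)
rng = np.random.default_rng(5)
true = (3.0, -300.0, 20.0,                      # high negative-velocity, narrow (jet-like)
        1.5, -120.0, 50.0,                      # intermediate velocity, broader
        0.8,   40.0, 80.0)                      # positive velocity, wide (scattered light)
profile = three_gaussians(v, *true) + rng.normal(0, 0.05, v.size)

p0 = (2.5, -280.0, 30.0, 1.0, -100.0, 60.0, 0.5, 30.0, 60.0)  # rough starting guesses
popt, pcov = curve_fit(three_gaussians, v, profile, p0=p0)
centers = popt[1::3]                            # fitted radial velocities of the components
```

Repeating such a fit at every spatial position is what yields the maps of intensity, radial velocity, and linewidth per component.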

  5. Reliability Analysis of Load-Sharing K-out-of-N System Considering Component Degradation

    Directory of Open Access Journals (Sweden)

    Chunbo Yang

    2015-01-01

    Full Text Available The K-out-of-N configuration is a typical form of redundancy technique to improve system reliability, where at least K of the N components must work for successful operation of the system. When the components are degraded, more components are needed to meet the system requirement, which means that the value of K has to increase. Current reliability analysis methods overestimate the reliability, because using a constant K ignores the degradation effect. In a load-sharing system with degrading components, the workload shared on each surviving component will increase after a random component failure, resulting in a higher failure rate and an increased performance degradation rate. This paper proposes a method combining a tampered failure rate model with a performance degradation model to analyze the reliability of a load-sharing K-out-of-N system with degrading components. The proposed method considers the value of K as a variable which is derived by the performance degradation model. Also, the load-sharing effect is evaluated by the tampered failure rate model. A Monte Carlo simulation procedure is used to estimate the discrete probability distribution of K. The case of a solar panel is studied in this paper, and the result shows that the reliability considering component degradation is less than that ignoring component degradation.
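The tampered-failure-rate idea, survivors failing faster as they pick up load, can be sketched with a small Monte Carlo; the hazard model and parameters are my assumptions, not the paper's degradation model, and K is held fixed here for brevity:

```python
import math
import numpy as np

rng = np.random.default_rng(11)

def system_lifetime(N=4, K=2, lam=0.1, alpha=1.0):
    """Time until fewer than K of N components survive. Each survivor's hazard is
    lam * (N / n_alive)**alpha: a simple tampered failure rate in which the shared
    load, and hence the failure rate, grows as components fail (alpha=0 disables it)."""
    t, alive = 0.0, N
    while alive >= K:
        rate_each = lam * (N / alive) ** alpha
        t += rng.exponential(1.0 / (alive * rate_each))  # memoryless time to next failure
        alive -= 1
    return t

M = 50_000
mission = 5.0
p_shared = np.mean([system_lifetime(alpha=1.0) <= mission for _ in range(M)])
p_fixed  = np.mean([system_lifetime(alpha=0.0) <= mission for _ in range(M)])

# with alpha = 1 the total hazard stays N*lam, so the system lifetime is
# Erlang(N - K + 1, N*lam); closed form for a sanity check
k_fail, rate = 4 - 2 + 1, 4 * 0.1
p_exact = 1.0 - math.exp(-rate * mission) * sum(
    (rate * mission) ** i / math.factorial(i) for i in range(k_fail))
```

As expected, ignoring the load-sharing acceleration (alpha = 0) understates the failure probability, the same direction of error the abstract attributes to analyses that ignore degradation.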

  6. Is low self-esteem a risk factor for depression among adolescents? an analytical study with interventional component

    Directory of Open Access Journals (Sweden)

    Jayanthi P, Rajamanickam Rajkumar

    2014-07-01

    Full Text Available Background: Self-esteem is an important factor helping persons deal with life stressors. It is an important determinant of psychological well-being that is particularly problematic during the adolescent life stage. Low self-esteem might contribute to depression through both interpersonal and intrapersonal pathways. Many theories of depression postulate that low self-esteem is a defining feature of depression. Aims: Self-esteem in adolescents has been associated with a number of risk and protective factors in previous studies. This study examined the relationship between low self-esteem and depression among adolescents. Methods: This study used a case-control (retrospective) design. A sample of 1120 adolescents, aged 14-17 years, was selected for the study. Screening was done using the MINI-KID, and the level of depression was assessed using the Beck depression inventory. Self-esteem was measured by the Rosenberg self-esteem scale. Odds ratios and multivariate logistic regression were used to examine the relation between self-esteem and socio-demographic variables. Results: The odds ratio analysis revealed that adolescents who had low self-esteem were found to have 3.7 times (95% CI = 1.9-6.9, p-value 0.001) more risk of developing depression than adolescents who had high self-esteem. Conclusions: The findings imply that low self-esteem is a risk factor for depression among adolescents. Adolescents with low self-esteem have to be identified early, and prompt interventions will prevent future psychiatric illnesses. As an intervention addressing the educational component, a pamphlet was distributed to the adolescents, parents and teachers. A concept programme called "Self Esteem Education & Development (SEED)" is planned, starting from the high-school level.
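The odds-ratio calculation with a 95% confidence interval can be sketched as follows; the 2x2 counts are hypothetical, chosen only so the point estimate lands near the reported 3.7, so the resulting interval does not reproduce the study's 1.9-6.9:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% CI (Woolf's log method) for a 2x2 table:
    a/c = cases with/without the exposure, b/d = controls with/without it."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# hypothetical counts: 74 low-self-esteem cases, 40 low-self-esteem controls,
# 100 high-self-esteem cases, 200 high-self-esteem controls
or_est, lo, hi = odds_ratio_ci(74, 40, 100, 200)
```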

  7. Morphological evaluation of common bean diversity in Bosnia and Herzegovina using the discriminant analysis of principal components (DAPC) multivariate method

    Directory of Open Access Journals (Sweden)

    Grahić Jasmin

    2013-01-01

    Full Text Available In order to analyze morphological characteristics of locally cultivated common bean landraces from Bosnia and Herzegovina (B&H), thirteen quantitative and qualitative traits of 40 P. vulgaris accessions, collected from four geographical regions (Northwest B&H, Northeast B&H, Central B&H and Sarajevo) and maintained at the Gene bank of the Faculty of Agriculture and Food Sciences in Sarajevo, were examined. Principal component analysis (PCA) showed that the proportion of variance retained in the first two principal components was 54.35%. The first principal component had high contributing factor loadings from seed width, seed height and seed weight, whilst the second principal component had high contributing factor loadings from seeds per pod and pod length. The PCA plot, based on the first two principal components, displayed a high level of variability among the analyzed material. The discriminant analysis of principal components (DAPC) created 3 discriminant functions (DFs), whereby the first two discriminant functions accounted for 90.4% of the variance retained. Based on the retained DFs, DAPC provided group membership probabilities which showed that 70% of the accessions examined were correctly classified between the geographically defined groups. Based on taxonomic distance, the 40 common bean accessions analyzed in this study formed two major clusters, whereas two accessions, Acc304 and Acc307, did not group in either of them. Acc360 and Acc362, as well as Acc324 and Acc371, displayed a high level of similarity and are probably the same landrace. The present diversity of Bosnia and Herzegovina's common bean landraces could be useful in future breeding programs.

  8. DISRUPTIVE EVENT BIOSPHERE DOSE CONVERSION FACTOR ANALYSIS

    International Nuclear Information System (INIS)

    M.A. Wasiolek

    2005-01-01

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the volcanic ash exposure scenario, and the development of dose factors for calculating inhalation dose during volcanic eruption. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA model. The Biosphere Model Report (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed descriptions of the model input parameters, their development and the relationship between the parameters and specific features, events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the volcanic ash exposure scenario. This analysis receives direct input from the outputs of the Biosphere Model Report (BSC 2004 [DIRS 169460]) and from the five analyses that develop parameter values for the biosphere model (BSC 2005 [DIRS 172827]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; and BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the Biosphere Dose Conversion Factor Importance and Sensitivity Analysis (Figure 1-1). The objective of this analysis was to develop the BDCFs for the volcanic ash exposure scenario and the dose factors for calculating inhalation dose during a volcanic eruption.

  9. Common cause failure data collection and analysis for safety-related components of TRIGA SSR-14MW Pitesti, Romania

    International Nuclear Information System (INIS)

    Radu, G.; Mladin, D.

    2003-01-01

    This paper presents a study performed on the set of common cause failures (CCF) of safety-related components of the research reactor TRIGA SSR-14 MW Pitesti. The data collected cover a period of 20 years, from 1979 to 2000. The sources of data are Shift Supervisor Reports, Work Authorizations, and Reactor Log Books. The collected events are analyzed by failure mode and degree of failure, with qualitative analysis of root causes, coupling factors and corrective actions, and quantitative analysis of the CCF events. The objective of this work is to develop qualitative insights into the nature of the reported events and to build a site-specific common cause events database. (author)

  10. Integration of independent component analysis with near-infrared spectroscopy for analysis of bioactive components in the medicinal plant Gentiana scabra Bunge

    Directory of Open Access Journals (Sweden)

    Yung-Kun Chuang

    2014-09-01

    Full Text Available Independent component (IC) analysis was applied to near-infrared spectroscopy for analysis of gentiopicroside and swertiamarin, the two bioactive components of Gentiana scabra Bunge. ICs highly correlated with the two bioactive components were selected for the analysis of tissue cultures, shoots and roots, which were found to distribute at three different positions within the domain [two-dimensional (2D) and 3D] constructed by the ICs. This setup could be used for quantitative determination of the respective contents of gentiopicroside and swertiamarin within the plants. For gentiopicroside, the spectral calibration model based on the second-derivative spectra produced the best results in the wavelength ranges of 600–700 nm, 1600–1700 nm, and 2000–2300 nm (correlation coefficient of calibration = 0.847, standard error of calibration = 0.865%, and standard error of validation = 0.909%). For swertiamarin, the spectral calibration model based on the first-derivative spectra produced the best results in the wavelength ranges of 600–800 nm and 2200–2300 nm (correlation coefficient of calibration = 0.948, standard error of calibration = 0.168%, and standard error of validation = 0.216%). Both models showed satisfactory predictability. This study successfully established qualitative and quantitative correlations for gentiopicroside and swertiamarin with near-infrared spectra, enabling rapid and accurate inspection of the bioactive components of G. scabra Bunge at different growth stages.
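The calibration step described above (derivative spectra in selected wavelength windows regressed against analyte content) can be sketched as follows. The sketch uses a simple finite difference in place of a smoothing derivative filter, and the spectra and concentrations are synthetic, not G. scabra data:

```python
import numpy as np

# Synthetic spectra: absorbance grows linearly with analyte concentration
rng = np.random.default_rng(1)
n_samples, n_wl = 30, 100
conc = rng.uniform(0.5, 5.0, n_samples)            # analyte %, hypothetical
spectra = (conc[:, None] * np.linspace(0.2, 1.0, n_wl)[None, :]
           + rng.normal(scale=0.02, size=(n_samples, n_wl)))

deriv = np.diff(spectra, axis=1)                   # first-derivative spectra
band = deriv[:, 60:80]                             # a selected wavelength window

# Least-squares calibration: conc ~ band @ beta + intercept
A = np.hstack([band, np.ones((n_samples, 1))])
beta, *_ = np.linalg.lstsq(A, conc, rcond=None)
pred = A @ beta
r = np.corrcoef(pred, conc)[0, 1]                  # correlation coefficient of calibration
sec = np.sqrt(((pred - conc) ** 2).sum() / (n_samples - band.shape[1] - 1))
print(f"r = {r:.3f}, SEC = {sec:.3f}")
```

The reported correlation coefficient of calibration and SEC in the abstract correspond to `r` and `sec` here, computed on calibration samples; validation would repeat the prediction step on held-out spectra.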

  11. The derivative assay--an analysis of two fast components of DNA rejoining kinetics

    International Nuclear Information System (INIS)

    Sandstroem, B.E.

    1989-01-01

    The DNA rejoining kinetics of human U-118 MG cells were studied after gamma-irradiation with 4 Gy. The sealing rate of the induced DNA strand breaks was analyzed with a modification of the DNA unwinding technique: rather than just monitoring the number of existing breaks at each time of analysis, the velocity at which the rejoining process proceeded was determined. Two apparent first-order components of single-strand break repair could be identified during the 25 min of analysis, with half-times of 1.9 and 16 min, respectively.
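For a first-order component, the half-time and the rate constant are interchangeable via t1/2 = ln 2 / k. A quick check of the two half-times quoted above (the rate constants are back-calculated here, not given in the abstract):

```python
import math

# First-order kinetics: N(t) = N0 * exp(-k t), so t_1/2 = ln 2 / k.
for t_half in (1.9, 16.0):             # minutes, from the abstract
    k = math.log(2) / t_half           # first-order rate constant (1/min)
    remaining = math.exp(-k * t_half)  # fraction of breaks left after one half-time
    print(f"t1/2 = {t_half:>4} min, k = {k:.3f}/min, remaining = {remaining:.2f}")
```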

  12. Visualizing solvent mediated phase transformation behavior of carbamazepine polymorphs by principal component analysis

    DEFF Research Database (Denmark)

    Tian, Fang; Rades, Thomas; Sandler, Niklas

    2008-01-01

    The purpose of this research is to gain a greater insight into the hydrate formation processes of different carbamazepine (CBZ) anhydrate forms in aqueous suspension, where principal component analysis (PCA) was applied for data analysis. The capability of PCA to visualize and to reveal simplified...

  13. Identification of Counterfeit Alcoholic Beverages Using Cluster Analysis in Principal-Component Space

    Science.gov (United States)

    Khodasevich, M. A.; Sinitsyn, G. V.; Gres'ko, M. A.; Dolya, V. M.; Rogovaya, M. V.; Kazberuk, A. V.

    2017-07-01

    A study of 153 brands of commercial vodka products showed that counterfeit samples could be identified by introducing a unified additive at the minimum concentration acceptable for instrumental detection and multivariate analysis of UV-Vis transmission spectra. Counterfeit products were detected with 100% probability by using hierarchical cluster analysis or the C-means method in two-dimensional principal-component space.
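The C-means step above refers to fuzzy c-means clustering applied to samples plotted in two-dimensional principal-component space. A minimal sketch of the algorithm, with two synthetic point clouds standing in for authentic and counterfeit samples (the paper's spectra are not used):

```python
import numpy as np

# Two well-separated synthetic clouds in 2-D PC space
rng = np.random.default_rng(2)
pc_scores = np.vstack([rng.normal([0.0, 0.0], 0.3, (20, 2)),
                       rng.normal([3.0, 3.0], 0.3, (20, 2))])

def fuzzy_c_means(X, c=2, m=2.0, iters=50):
    """Standard fuzzy c-means: alternate membership and center updates."""
    # Deterministic start, picked knowing the synthetic layout (one point per cloud)
    centers = X[[0, -1]].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1))
        u = inv / inv.sum(axis=1, keepdims=True)   # memberships, rows sum to 1
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return centers, u

centers, u = fuzzy_c_means(pc_scores)
labels = u.argmax(axis=1)                          # hard assignment from memberships
print("cluster sizes:", np.bincount(labels, minlength=2))
```

With well-separated clusters the fuzzy memberships saturate near 0/1 and the hard assignment recovers the two groups, which is the regime in which the paper reports 100% detection.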

  14. A review of the reliability analysis of LPRS including the components repairs

    International Nuclear Information System (INIS)

    Oliveira, L.F.S. de; Fleming, P.V.; Frutuoso e Melo, P.F.F.; Tayt-Sohn, L.C.

    1983-01-01

    The reliability analysis of the low pressure recirculation system in its long-term recirculation phase before 24 h is presented. The possibility of repairing the components outside the containment is included. A general review of the analysis of the short-term recirculation phase is also given. (author) [pt

  15. Priority of VHS Development Based in Potential Area using Principal Component Analysis

    Science.gov (United States)

    Meirawan, D.; Ana, A.; Saripudin, S.

    2018-02-01

    The current condition of VHS (vocational high schools) is still inadequate in quality, quantity and relevance. The purpose of this research is to analyse the development of VHS based on regional potential using principal component analysis (PCA) in Bandung, Indonesia. The study applied descriptive analysis with component-based reduction of secondary data, using PCA in the Minitab statistical software. The results indicate that the areas with the lowest scores are the priorities for constructing new VHS, with programs of majors matched to the development of regional potential. Based on the PCA scores, the main priority for VHS development in Bandung is Saguling, which has the lowest PCA value (416.92) in area 1, followed by Cihampelas with the lowest PCA value in region 2 and Padalarang with the lowest PCA value.

  16. Application of principal component analysis (PCA) as a sensory assessment tool for fermented food products.

    Science.gov (United States)

    Ghosh, Debasree; Chattopadhyay, Parimal

    2012-06-01

    The objective of the work was to use quantitative descriptive analysis (QDA) to describe the sensory attributes of fermented food products prepared with the incorporation of lactic cultures. Panellists were selected and trained to evaluate various attributes, especially color and appearance, body texture, flavor, overall acceptability and acidity, of fermented food products such as cow milk curd, soymilk curd, idli, sauerkraut and probiotic ice cream. Principal component analysis (PCA) identified six significant principal components that accounted for more than 90% of the variance in the sensory attribute data. Overall product quality was modelled as a function of the principal components using multiple least squares regression (R² = 0.8). The results from PCA were further analyzed by analysis of variance (ANOVA). These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring the fermented food product attributes that are important for consumer acceptability.
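The modelling step above (regress overall quality on PCA scores by multiple least squares and report R²) can be sketched as follows. The panel scores are synthetic, and the attribute names in the comment are only illustrative:

```python
import numpy as np

# Synthetic sensory data: 40 tastings x 6 attributes (e.g. color, texture, ...)
rng = np.random.default_rng(3)
n = 40
attributes = rng.normal(size=(n, 6))
quality = (attributes @ np.array([0.6, 0.4, 0.3, 0.2, 0.1, 0.05])
           + rng.normal(scale=0.3, size=n))

# PCA on centered attributes
Xc = attributes - attributes.mean(axis=0)
vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
pcs = Xc @ vecs[:, np.argsort(vals)[::-1]]   # all six PC scores

# Multiple least squares regression of quality on the PC scores
A = np.hstack([pcs, np.ones((n, 1))])
beta, *_ = np.linalg.lstsq(A, quality, rcond=None)
resid = quality - A @ beta
r2 = 1 - (resid ** 2).sum() / ((quality - quality.mean()) ** 2).sum()
print(f"R^2 = {r2:.2f}")
```

The R² printed here plays the same role as the paper's R² = 0.8, though its value depends entirely on the synthetic data.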

  17. Analysis of mineral phases in coal utilizing factor analysis

    International Nuclear Information System (INIS)

    Roscoe, B.A.; Hopke, P.K.

    1982-01-01

    The mineral phase inclusions of coal are discussed, and their contribution to a coal sample is determined using several techniques. Neutron activation analysis in conjunction with coal washability studies has produced some information on the general trends of elemental variation in the mineral phases, and these results have been enhanced by the use of various statistical techniques. Target transformation factor analysis (TTFA) is specifically discussed and shown to be able to produce elemental profiles of the mineral phases in coal. A data set consisting of physically fractionated coal samples was generated; these samples were analyzed by neutron activation analysis and their elemental concentrations examined using TTFA. Information concerning the mineral phases in coal can thus be acquired from factor analysis even with limited data. Additional data may permit the resolution of additional mineral phases as well as refinement of those already identified.
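The core of the target transformation idea is to decompose the elemental data matrix into factors and then test candidate source profiles ("targets") by projecting them onto the factor space: a target that the projection reproduces is consistent with the data. A minimal sketch, where the element profiles and mixing matrix are synthetic, not the coal washability measurements:

```python
import numpy as np

# Hypothetical 4-element profiles for two mineral phases
pyrite = np.array([0.8, 0.1, 0.05, 0.05])
clay = np.array([0.1, 0.5, 0.3, 0.1])

# 20 samples, each a mixture of the two phases (samples x elements)
rng = np.random.default_rng(5)
mix = rng.uniform(size=(20, 2))
data = mix @ np.vstack([pyrite, clay])

# Factor decomposition by SVD; keep the two significant factors
U, s, Vt = np.linalg.svd(data, full_matrices=False)
V = Vt[:2].T

def target_test(profile):
    """Misfit between a candidate profile and its projection onto the factors."""
    reproduced = V @ (V.T @ profile)
    return np.linalg.norm(profile - reproduced)

print("pyrite misfit:", round(target_test(pyrite), 4))        # near zero
print("flat-profile misfit:", round(target_test(np.full(4, 0.25)), 4))
```

A true source profile lies in the factor space and gives a near-zero misfit, while an arbitrary profile does not, which is how TTFA screens candidate mineral phases.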

  18. A Beginner’s Guide to Factor Analysis: Focusing on Exploratory Factor Analysis

    Directory of Open Access Journals (Sweden)

    An Gie Yong

    2013-10-01

    Full Text Available The following paper discusses exploratory factor analysis, giving an overview of the statistical technique and how it is used in various research designs and applications. A basic outline of how the technique works, its criteria, and its main assumptions is given, along with when it should be used. Mathematical theories are explored to enlighten students on how exploratory factor analysis works, an example of how to run an exploratory factor analysis in SPSS is given, and finally a section on how to write up the results is provided. This will allow readers to develop a better understanding of when to employ factor analysis and how to interpret the tables and graphs in the output.

  19. Fault Diagnosis Method Based on Information Entropy and Relative Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Xiaoming Xu

    2017-01-01

    Full Text Available In traditional principal component analysis (PCA), because the differing dimensions (units) of the variables in a system are neglected, the selected principal components (PCs) often fail to be representative. While relative-transformation PCA can solve this problem, it is not easy to calculate the weight for each characteristic variable. To address this, this paper proposes a fault diagnosis method based on information entropy and relative principal component analysis. Firstly, the algorithm calculates the information entropy for each characteristic variable in the original dataset based on the information gain algorithm. Secondly, it standardizes every variable's dimension in the dataset. Then, according to the information entropy, it allocates a weight to each standardized characteristic variable. Finally, it uses the established relative-principal-components model for fault diagnosis. Simulation experiments based on the Tennessee Eastman process and Wine datasets demonstrate the feasibility and effectiveness of the new method.
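The weighting idea above can be sketched with the generic entropy-weight method: each variable's information entropy is computed from its normalized values, and less "informative" (higher-entropy) variables receive smaller weights before the PCA step. Details of the paper's information-gain variant may differ, and the process data here are synthetic:

```python
import numpy as np

# Synthetic process data: 50 samples x 4 characteristic variables
rng = np.random.default_rng(4)
X = rng.uniform(0.1, 1.0, size=(50, 4))

# Information entropy per variable from column-wise proportions,
# normalized by log(n) so entropy lies in (0, 1]
p = X / X.sum(axis=0)
entropy = -(p * np.log(p)).sum(axis=0) / np.log(len(X))
weights = (1 - entropy) / (1 - entropy).sum()   # weights sum to 1

# Standardize each variable's dimension, then apply the entropy weights
Z = (X - X.mean(axis=0)) / X.std(axis=0)
Zw = Z * weights                                # ready for the PCA step
print("entropy:", entropy.round(3), "weights:", weights.round(3))
```

`Zw` is what the relative-principal-components model would then decompose; variables that discriminate more between samples carry more weight in the decomposition.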

  20. Path and correlation analysis of perennial ryegrass (Lolium perenne L.) seed yield components

    DEFF Research Database (Denmark)

    Abel, Simon; Gislum, René; Boelt, Birte

    2017-01-01

    Maximum perennial ryegrass seed production potential is substantially greater than harvested yields, with harvested yields representing only 20% of calculated potential. Similar to wheat, maize and other agriculturally important crops, seed yield is highly dependent on a number of interacting seed yield components. This research was performed to apply and describe path analysis of perennial ryegrass seed yield components in relation to harvested seed yields. Utilising extensive yield components, which included subdividing reproductive inflorescences into five size categories, path analysis was undertaken assuming a unidirectional causal-admissible relationship between seed yield components and harvested seed yield in six commercial seed production fields. Both spikelets per inflorescence and florets per spikelet had a significant direct effect on seed yield; however, total…
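In path analysis of this kind, the path (direct) coefficients are the standardized partial regression coefficients of yield on its components, and each component's correlation with yield decomposes into a direct effect plus indirect effects through the other components: r_iy = p_i + sum over j != i of r_ij * p_j. A minimal sketch with synthetic data (not the ryegrass field measurements; the two components named are borrowed from the abstract for illustration):

```python
import numpy as np

# Synthetic yield components: spikelets per inflorescence, florets per spikelet
rng = np.random.default_rng(6)
n = 60
spikelets = rng.normal(size=n)
florets = 0.4 * spikelets + rng.normal(size=n)          # intercorrelated components
yield_ = 0.5 * spikelets + 0.3 * florets + rng.normal(scale=0.5, size=n)

# Standardize everything so regression coefficients become path coefficients
Z = np.column_stack([spikelets, florets, yield_])
Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)
X, y = Z[:, :2], Z[:, 2]

p, *_ = np.linalg.lstsq(X, y, rcond=None)   # path (direct) coefficients
R = np.corrcoef(X, rowvar=False)            # component intercorrelations
r_xy = X.T @ y / n                          # correlations with yield
indirect = R @ p - p                        # indirect effects via other components
print("direct:", p.round(3), "indirect:", indirect.round(3))
```

The decomposition is exact: direct plus indirect effects reproduce each component's total correlation with yield, which is what lets path analysis apportion harvested-yield variation among interacting components.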