WorldWideScience

Sample records for component factor analysis

  1. Clinical usefulness of physiological components obtained by factor analysis

    International Nuclear Information System (INIS)

    Ohtake, Eiji; Murata, Hajime; Matsuda, Hirofumi; Yokoyama, Masao; Toyama, Hinako; Satoh, Tomohiko.

    1989-01-01

    The clinical usefulness of physiological components obtained by factor analysis was assessed in 99mTc-DTPA renography. Once definite physiological components had been obtained, other dynamic data could be analyzed with them. In this paper, dynamic renal function after ESWL (Extracorporeal Shock Wave Lithotripsy) treatment was examined using physiological components from the kidney before ESWL and/or from a normal kidney. The change in renal function could easily be evaluated by this method. The usefulness of the new analysis using physiological components is summarized as follows: 1) the change in a dynamic function could be assessed quantitatively through the change in the contribution ratio; 2) the change in a diseased condition could be evaluated morphologically through the change in the functional image. (author)

  2. Personality disorders in substance abusers: Validation of the DIP-Q through principal components factor analysis and canonical correlation analysis

    Directory of Open Access Journals (Sweden)

    Hesse Morten

    2005-05-01

    Background: Personality disorders are common in substance abusers. Self-report questionnaires that can aid in the assessment of personality disorders are commonly used, but are rarely validated. Methods: The Danish DIP-Q as a measure of co-morbid personality disorders in substance abusers was validated through principal components factor analysis and canonical correlation analysis. A 4-component structure was constructed based on 238 protocols, representing antagonism, neuroticism, introversion and conscientiousness. The structure was compared with (a) a 4-factor solution from the DIP-Q in a sample of Swedish drug and alcohol abusers (N = 133), and (b) a consensus 4-component solution based on a meta-analysis of published correlation matrices of dimensional personality disorder scales. Results: The 4-factor model of personality was congruent across the Danish and Swedish samples, and showed good congruence with the consensus model. A canonical correlation analysis was conducted on a subset of the Danish sample with staff ratings of pathology. Three factors that correlated highly between the two variable sets were found. These were highly similar to the first three factors from the principal components analysis: antagonism, neuroticism and introversion. Conclusion: The findings support the validity of the DIP-Q as a measure of DSM-IV personality disorders in substance abusers.
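
    The cross-sample congruence reported above can be made concrete with a short sketch. The snippet below is not the authors' code; the loading matrices are random placeholders standing in for the Danish and Swedish DIP-Q solutions, and it simply computes Tucker's congruence coefficient, a standard statistic for judging whether factors replicate across samples.

```python
# Sketch only: Tucker's congruence coefficient between two factor solutions.
import numpy as np

def tucker_congruence(a, b):
    """Congruence coefficient between two loading vectors (columns)."""
    return np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b))

rng = np.random.default_rng(0)
loadings_dk = rng.normal(size=(10, 4))                     # 10 scales x 4 components (hypothetical)
loadings_se = loadings_dk + 0.1 * rng.normal(size=(10, 4)) # a perturbed "second sample"

# Factor-by-factor congruence; values above roughly 0.85-0.95 are usually read
# as fair-to-good replication of the same factor.
for k in range(loadings_dk.shape[1]):
    phi = tucker_congruence(loadings_dk[:, k], loadings_se[:, k])
    print(f"factor {k + 1}: congruence = {phi:.3f}")
```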

  3. Influencing Factors of Catering and Food Service Industry Based on Principal Component Analysis

    OpenAIRE

    Zi Tang

    2014-01-01

    Scientific analysis of influencing factors is of great importance for the healthy development of catering and food service industry. This study attempts to present a set of critical indicators for evaluating the contribution of influencing factors to catering and food service industry in the particular context of Harbin City, Northeast China. Ten indicators that correlate closely with catering and food service industry were identified and performed by the principal component analysis method u...

  4. Towards automatic analysis of dynamic radionuclide studies using principal-components factor analysis

    International Nuclear Information System (INIS)

    Nigran, K.S.; Barber, D.C.

    1985-01-01

    A method is proposed for automatic analysis of dynamic radionuclide studies using the mathematical technique of principal-components factor analysis. This method is considered as a possible alternative to the conventional manual regions-of-interest method widely used. The method emphasises the importance of introducing a priori information into the analysis about the physiology of at least one of the functional structures in a study. Information is added by using suitable mathematical models to describe the underlying physiological processes. A single physiological factor is extracted representing the particular dynamic structure of interest. Two spaces 'study space, S' and 'theory space, T' are defined in the formation of the concept of intersection of spaces. A one-dimensional intersection space is computed. An example from a dynamic 99mTc-DTPA kidney study is used to demonstrate the principle inherent in the method proposed. The method requires no correction for the blood background activity, necessary when processing by the manual method. The careful isolation of the kidney by means of region of interest is not required. The method is therefore less prone to operator influence and can be automated. (author)

  5. Principal component regression analysis with SPSS.

    Science.gov (United States)

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces the indices used to diagnose multicollinearity, the basic principle of principal component regression, and the method for determining the 'best' equation. An example is used to describe how to perform principal component regression analysis with SPSS 10.0, including all calculation steps of the principal component regression and all operations of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity, and with SPSS it yields a simplified, faster and accurate statistical analysis.
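
    The SPSS workflow described in the abstract can be mirrored in a few lines of Python. The sketch below is illustrative only (simulated, nearly collinear predictors; not the paper's data or output) and shows why regressing on a reduced set of principal component scores removes the multicollinearity.

```python
# Principal component regression (PCR) sketch with hypothetical data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)      # nearly collinear with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = 2.0 * x1 + 0.5 * x3 + rng.normal(scale=0.5, size=n)

# Standardize, keep the leading components, and regress y on the scores.
# Dropping the small-variance components is what removes the multicollinearity.
pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
pcr.fit(X, y)
print("R^2 on training data:", round(pcr.score(X, y), 3))
print("coefficients in component space:", pcr.named_steps["linearregression"].coef_)
```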

  6. A feasibility study on age-related factors of wrist pulse using principal component analysis.

    Science.gov (United States)

    Jang-Han Bae; Young Ju Jeon; Sanghun Lee; Jaeuk U Kim

    2016-08-01

    Various analysis methods for examining wrist pulse characteristics are needed for accurate pulse diagnosis. In this feasibility study, principal component analysis (PCA) was performed to observe age-related factors of the wrist pulse from various analysis parameters. Forty subjects in their 20s and 40s participated, and their wrist pulse signal and respiration signal were acquired with a pulse tonometric device. After pre-processing of the signals, twenty analysis parameters regarded as values reflecting pulse characteristics were calculated and PCA was performed. As a result, the complex parameters could be reduced to a lower dimension, and age-related factors of the wrist pulse were observed by combining new analysis parameters derived from PCA. These results demonstrate that PCA can be a useful tool for analyzing the wrist pulse signal.

  7. Clustering of metabolic and cardiovascular risk factors in the polycystic ovary syndrome: a principal component analysis.

    Science.gov (United States)

    Stuckey, Bronwyn G A; Opie, Nicole; Cussons, Andrea J; Watts, Gerald F; Burke, Valerie

    2014-08-01

    Polycystic ovary syndrome (PCOS) is a prevalent condition with heterogeneity of clinical features and cardiovascular risk factors that implies multiple aetiological factors and possible outcomes. To reduce a set of correlated variables to a smaller number of uncorrelated and interpretable factors that may delineate subgroups within PCOS or suggest pathogenetic mechanisms. We used principal component analysis (PCA) to examine the endocrine and cardiometabolic variables associated with PCOS defined by the National Institutes of Health (NIH) criteria. Data were retrieved from the database of a single clinical endocrinologist. We included women with PCOS (N = 378) who were not taking the oral contraceptive pill or other sex hormones, lipid lowering medication, metformin or other medication that could influence the variables of interest. PCA was performed retaining those factors with eigenvalues of at least 1.0. Varimax rotation was used to produce interpretable factors. We identified three principal components. In component 1, the dominant variables were homeostatic model assessment (HOMA) index, body mass index (BMI), high density lipoprotein (HDL) cholesterol and sex hormone binding globulin (SHBG); in component 2, systolic blood pressure, low density lipoprotein (LDL) cholesterol and triglycerides; in component 3, total testosterone and LH/FSH ratio. These components explained 37%, 13% and 11% of the variance in the PCOS cohort respectively. Multiple correlated variables from patients with PCOS can be reduced to three uncorrelated components characterised by insulin resistance, dyslipidaemia/hypertension or hyperandrogenaemia. Clustering of risk factors is consistent with different pathogenetic pathways within PCOS and/or differing cardiometabolic outcomes. Copyright © 2014 Elsevier Inc. All rights reserved.
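
    As a rough illustration of the analysis strategy described above (retain components with eigenvalues of at least 1.0, then apply varimax rotation), the sketch below runs the same steps on a random stand-in data matrix; it is not the authors' code or data.

```python
# PCA with Kaiser criterion (eigenvalue >= 1) and varimax rotation, on fake data.
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Classic varimax rotation of a loading matrix (variables x components)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0)))
        )
        rotation = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):
            break
        d = d_new
    return loadings @ rotation

rng = np.random.default_rng(2)
X = rng.normal(size=(378, 9))                      # hypothetical stand-in data
Z = (X - X.mean(axis=0)) / X.std(axis=0)           # standardize the variables
corr = np.corrcoef(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

keep = eigvals >= 1.0                              # Kaiser criterion used in the study
loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
rotated_loadings = varimax(loadings)               # interpretable rotated loadings
print("components retained:", keep.sum())
print("variance explained:", np.round(eigvals[keep] / eigvals.sum(), 2))
```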

  8. PRINCIPAL COMPONENT ANALYSIS OF FACTORS DETERMINING PHOSPHATE ROCK DISSOLUTION ON ACID SOILS

    Directory of Open Access Journals (Sweden)

    Yusdar Hilman

    2016-10-01

    Many of the agricultural soils in Indonesia are acidic and low in both total and available phosphorus, which severely limits their potential for crop production. These problems can be corrected by application of chemical fertilizers. However, these fertilizers are expensive, and cheaper alternatives such as phosphate rock (PR) have been considered. Several soil factors may influence the dissolution of PR in soils, including both chemical and physical properties. The study aimed to identify PR dissolution factors and evaluate their relative magnitude. The experiment was conducted at the Soil Chemical Laboratory, Universiti Putra Malaysia, and the Indonesian Center for Agricultural Land Resources Research and Development from January to April 2002. Principal component analysis (PCA) was used to characterize acid soils in an incubation system in terms of a number of factors that may affect PR dissolution. The three major factors selected were soil texture, soil acidity, and fertilization. Using the scores of individual factors as independent variables, stepwise regression analysis was performed to derive a PR dissolution function. The factors influencing PR dissolution, in order of importance, were soil texture, soil acidity, then fertilization. Soil texture factors, including clay content and organic C, and soil acidity factors such as P retention capacity interacted positively with P dissolution and promoted PR dissolution effectively. Soil texture factors such as sand and silt content, and soil acidity factors such as pH and exchangeable Ca, decreased PR dissolution.

  9. Clinical usefulness of physiological components obtained by factor analysis. Application to 99mTc-DTPA renography

    Energy Technology Data Exchange (ETDEWEB)

    Ohtake, Eiji; Murata, Hajime; Matsuda, Hirofumi; Yokoyama, Masao; Toyama, Hinako; Satoh, Tomohiko.

    1989-01-01

    The clinical usefulness of physiological components obtained by factor analysis was assessed in 99mTc-DTPA renography. Once definite physiological components had been obtained, other dynamic data could be analyzed with them. In this paper, dynamic renal function after ESWL (Extracorporeal Shock Wave Lithotripsy) treatment was examined using physiological components from the kidney before ESWL and/or from a normal kidney. The change in renal function could easily be evaluated by this method. The usefulness of the new analysis using physiological components is summarized as follows: (1) the change in a dynamic function could be assessed quantitatively through the change in the contribution ratio; (2) the change in a diseased condition could be evaluated morphologically through the change in the functional image.

  10. Driven Factors Analysis of China’s Irrigation Water Use Efficiency by Stepwise Regression and Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Renfu Jia

    2016-01-01

    This paper introduces an integrated approach to finding the major factors influencing the efficiency of irrigation water use in China. It combines multiple stepwise regression (MSR) and principal component analysis (PCA) to obtain more realistic results. In real-world case studies, classical linear regression models often involve too many explanatory variables, and the linear correlation among variables cannot be eliminated. Linearly correlated variables invalidate the results of the factor analysis. To overcome this issue and reduce the number of variables, the PCA technique was combined with MSR. On this basis, the status of irrigation water use in China was analyzed to find the five major factors that have significant impacts on irrigation water use efficiency. To illustrate the performance of the proposed approach, calculations based on real data were conducted and the results are presented in this paper.
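
    A minimal sketch of the paper's general recipe, PCA to decorrelate the explanatory variables followed by stepwise selection of a regression model, is given below. It is not the authors' implementation; the data are random, and scikit-learn's forward SequentialFeatureSelector stands in for classical multiple stepwise regression.

```python
# PCA scores as decorrelated predictors, then forward selection of a linear model.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 8))                               # hypothetical regional indicators
y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.normal(size=30)    # stand-in efficiency measure

scores = PCA(n_components=5).fit_transform(StandardScaler().fit_transform(X))

# Forward selection over the (uncorrelated) component scores.
selector = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=3, direction="forward"
)
selector.fit(scores, y)
chosen = selector.get_support()
model = LinearRegression().fit(scores[:, chosen], y)
print("selected components:", np.where(chosen)[0])
print("R^2:", round(model.score(scores[:, chosen], y), 3))
```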

  11. Factors affecting medication adherence in community-managed patients with hypertension based on the principal component analysis: evidence from Xinjiang, China

    Directory of Open Access Journals (Sweden)

    Zhang YJ

    2018-05-01

    Yuji Zhang,* Xiaoju Li,* Lu Mao, Mei Zhang, Ke Li, Yinxia Zheng, Wangfei Cui, Hongpo Yin, Yanli He, Mingxia Jing; Department of Public Health, Shihezi University School of Medicine, Shihezi, Xinjiang, China. *These authors contributed equally to this work. Purpose: The analysis of factors affecting nonadherence to antihypertensive medications is important in the control of blood pressure among patients with hypertension. The purpose of this study was to assess the relationship between these factors and medication adherence in Xinjiang community-managed patients with hypertension based on principal component analysis. Patients and methods: A total of 1,916 community-managed patients with hypertension, selected randomly through multi-stage sampling, participated in the survey. Self-designed questionnaires were used to classify the participants as either adherent or nonadherent to their medication regimen. A principal component analysis was used in order to eliminate the correlation between factors. Factors related to nonadherence were analyzed by using a χ2-test and a binary logistic regression model. Results: This study extracted nine common factors, with a cumulative variance contribution rate of 63.6%. Further analysis revealed that the following variables were significantly related to nonadherence: severity of disease, community management, diabetes, and taking traditional medications. Conclusion: Community management plays an important role in improving patients' medication-taking behavior. Regular instruction on the medication regimen and better community-level management services have the potential to reduce nonadherence. Mildly hypertensive patients should be monitored by community health care providers. Keywords: hypertension, medication adherence, factors, principal component analysis, community management, China

  12. Clustering of leptin and physical activity with components of metabolic syndrome in Iranian population: an exploratory factor analysis.

    Science.gov (United States)

    Esteghamati, Alireza; Zandieh, Ali; Khalilzadeh, Omid; Morteza, Afsaneh; Meysamie, Alipasha; Nakhjavani, Manouchehr; Gouya, Mohammad Mehdi

    2010-10-01

    Metabolic syndrome (MetS), manifested by insulin resistance, dyslipidemia, central obesity, and hypertension, is conceived to be associated with hyperleptinemia and physical activity. The aim of this study was to elucidate the factors underlying components of MetS and also to test the suitability of leptin and physical activity as additional components of this syndrome. Data from individuals without a history of diabetes mellitus, aged 25-64 years, from the third national surveillance of risk factors of non-communicable diseases (SuRFNCD-2007), were analyzed. Performing factor analysis on waist circumference, homeostasis model assessment of insulin resistance, systolic blood pressure, triglycerides (TG) and high-density lipoprotein cholesterol (HDL-C) led to the extraction of two factors which explained around 59.0% of the total variance in both genders. When TG and HDL-C were replaced by the TG to HDL-C ratio, a single factor was obtained. In contrast to physical activity, addition of leptin was consistent with the one-factor structure of MetS and improved the ability of the suggested models to identify obesity (BMI ≥ 30 kg/m2). Physical activity loaded on the first identified factor. Our study shows that a one-factor underlying structure of MetS is also plausible and that the inclusion of leptin does not interfere with this structure. Further, this study suggests that physical activity influences MetS components via modulation of the main underlying pathophysiologic pathway of this syndrome.

  13. Factors affecting medication adherence in community-managed patients with hypertension based on the principal component analysis: evidence from Xinjiang, China.

    Science.gov (United States)

    Zhang, Yuji; Li, Xiaoju; Mao, Lu; Zhang, Mei; Li, Ke; Zheng, Yinxia; Cui, Wangfei; Yin, Hongpo; He, Yanli; Jing, Mingxia

    2018-01-01

    The analysis of factors affecting nonadherence to antihypertensive medications is important in the control of blood pressure among patients with hypertension. The purpose of this study was to assess the relationship between these factors and medication adherence in Xinjiang community-managed patients with hypertension based on principal component analysis. A total of 1,916 community-managed patients with hypertension, selected randomly through multi-stage sampling, participated in the survey. Self-designed questionnaires were used to classify the participants as either adherent or nonadherent to their medication regimen. A principal component analysis was used in order to eliminate the correlation between factors. Factors related to nonadherence were analyzed by using a χ2-test and a binary logistic regression model. This study extracted nine common factors, with a cumulative variance contribution rate of 63.6%. Further analysis revealed that the following variables were significantly related to nonadherence: severity of disease, community management, diabetes, and taking traditional medications. Community management plays an important role in improving patients' medication-taking behavior. Regular instruction on the medication regimen and better community-level management services have the potential to reduce nonadherence. Mildly hypertensive patients should be monitored by community health care providers.
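
    The pipeline described in this abstract (principal components extracted from correlated questionnaire items, then a binary logistic model for nonadherence) can be sketched as follows; the questionnaire data, the 25-item count, and the outcome rule are fabricated for illustration.

```python
# PCA to decorrelate questionnaire items, then logistic regression on the scores.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
n_patients, n_items = 1916, 25              # sizes echo the abstract; data are fake
items = rng.normal(size=(n_patients, n_items))
nonadherent = (items[:, 0] - items[:, 5] + rng.normal(size=n_patients) > 0.8).astype(int)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=9),                    # nine common factors, as in the study
    LogisticRegression(max_iter=1000),
)
model.fit(items, nonadherent)
odds_ratios = np.exp(model.named_steps["logisticregression"].coef_.ravel())
print("odds ratio per extracted component:", np.round(odds_ratios, 2))
```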

  14. Factor structure underlying components of allostatic load.

    Directory of Open Access Journals (Sweden)

    Jeanne M McCaffery

    Allostatic load is a commonly used metric of health risk based on the hypothesis that recurrent exposure to environmental demands (e.g., stress) engenders a progressive dysregulation of multiple physiological systems. Prominent indicators of response to environmental challenges, such as stress-related hormones, sympatho-vagal balance, or inflammatory cytokines, comprise primary allostatic mediators. Secondary mediators reflect ensuing biological alterations that accumulate over time and confer risk for clinical disease but overlap substantially with a second metric of health risk, the metabolic syndrome. Whether allostatic load mediators covary and thus warrant treatment as a unitary construct remains to be established and, in particular, the relation of allostatic load parameters to the metabolic syndrome requires elucidation. Here, we employ confirmatory factor analysis to test: (1) whether a single common factor underlies variation in physiological systems associated with allostatic load; and (2) whether allostatic load parameters continue to load on a single common factor if a second factor representing the metabolic syndrome is also modeled. Participants were 645 adults from Allegheny County, PA (30-54 years old, 82% non-Hispanic white, 52% female) who were free of confounding medications. Model fitting supported a single, second-order factor underlying variance in the allostatic load components available in this study (metabolic, inflammatory and vagal measures). Further, this common factor reflecting covariation among allostatic load components persisted when a latent factor representing metabolic syndrome facets was conjointly modeled. Overall, this study provides novel evidence that the modeled allostatic load components do share common variance as hypothesized. Moreover, the common variance suggests the existence of statistical coherence above and beyond that attributable to the metabolic syndrome.

  15. EXAFS and principal component analysis : a new shell game

    International Nuclear Information System (INIS)

    Wasserman, S.

    1998-01-01

    The use of principal component (factor) analysis in the analysis of EXAFS spectra is described. The components derived from EXAFS spectra share mathematical properties with the original spectra. As a result, the abstract components can be analyzed using standard EXAFS methodology to yield the bond distances and other coordination parameters. The number of components that must be analyzed is usually less than the number of original spectra. The method is demonstrated using a series of spectra from aqueous solutions of uranyl ions.
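
    The following sketch illustrates the general idea of using PCA to count the abstract components in a set of spectra; the "spectra" are synthetic two-component mixtures, not EXAFS data from the cited work.

```python
# Estimating the number of independent species contributing to a spectral series.
import numpy as np

energy = np.linspace(0, 10, 400)
ref_a = np.sin(3 * energy) * np.exp(-0.2 * energy)   # made-up reference signal A
ref_b = np.cos(5 * energy) * np.exp(-0.1 * energy)   # made-up reference signal B

rng = np.random.default_rng(5)
fractions = rng.uniform(size=8)                      # 8 "solution spectra", 2 species
spectra = np.outer(fractions, ref_a) + np.outer(1 - fractions, ref_b)
spectra += 0.01 * rng.normal(size=spectra.shape)     # measurement noise

# The singular values drop sharply after the true number of components (2 here).
_, singular_values, _ = np.linalg.svd(spectra - spectra.mean(axis=0), full_matrices=False)
variance_share = singular_values ** 2 / np.sum(singular_values ** 2)
print("variance explained by each abstract component:", np.round(variance_share[:4], 4))
```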

  16. Interpretation of organic components from Positive Matrix Factorization of aerosol mass spectrometric data

    Directory of Open Access Journals (Sweden)

    I. M. Ulbrich

    2009-05-01

    The organic aerosol (OA) dataset from an Aerodyne Aerosol Mass Spectrometer (Q-AMS) collected at the Pittsburgh Air Quality Study (PAQS) in September 2002 was analyzed with Positive Matrix Factorization (PMF). Three components – hydrocarbon-like organic aerosol (HOA), a highly-oxygenated OA (OOA-1) that correlates well with sulfate, and a less-oxygenated, semi-volatile OA (OOA-2) that correlates well with nitrate and chloride – are identified and interpreted as primary combustion emissions, aged SOA, and semivolatile, less aged SOA, respectively. The complexity of interpreting the PMF solutions of unit mass resolution (UMR) AMS data is illustrated by a detailed analysis of the solutions as a function of number of components and rotational forcing. A public web-based database of AMS spectra has been created to aid this type of analysis. Realistic synthetic data are also used to characterize the behavior of PMF for choosing the best number of factors and evaluating the rotations of non-unique solutions. The ambient and synthetic data indicate that the variation of the PMF quality-of-fit parameter (Q, a normalized chi-squared metric) vs. number of factors in the solution is useful to identify the minimum number of factors, but more detailed analysis and interpretation are needed to choose the best number of factors. The maximum value of the rotational matrix is not useful for determining the best number of factors. In synthetic datasets, factors are "split" into two or more components when solving for more factors than were used in the input. Elements of the "splitting" behavior are observed in solutions of real datasets with several factors. Significant structure remains in the residual of the real dataset after physically-meaningful factors have been assigned, and an unrealistic number of factors would be required to explain the remaining variance. This residual structure appears to be due to variability in the spectra of the components
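
    For orientation only, the sketch below factorizes a synthetic (time x m/z) matrix with scikit-learn's unweighted non-negative matrix factorization; this is a rough analogue of, not a substitute for, the error-weighted PMF used in the study.

```python
# Unweighted NMF as a stand-in for PMF: factorize a synthetic OA-like matrix
# into non-negative factor time series and factor "mass spectra".
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(6)
true_profiles = rng.random((3, 300))                 # 3 factor spectra (fake)
true_timeseries = rng.random((500, 3))               # 3 factor time series (fake)
X = true_timeseries @ true_profiles + 0.01 * rng.random((500, 300))

model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
timeseries = model.fit_transform(X)                  # factor contributions vs. time
profiles = model.components_                         # factor "mass spectra"

residual = X - timeseries @ profiles
print("relative residual:", np.linalg.norm(residual) / np.linalg.norm(X))
```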

  17. Group-wise Principal Component Analysis for Exploratory Data Analysis

    NARCIS (Netherlands)

    Camacho, J.; Rodriquez-Gomez, Rafael A.; Saccenti, E.

    2017-01-01

    In this paper, we propose a new framework for matrix factorization based on Principal Component Analysis (PCA) where sparsity is imposed. The structure to impose sparsity is defined in terms of groups of correlated variables found in correlation matrices or maps. The framework is based on three new

  18. Long- and Short-Run Components of Factor Betas: Implications for Equity Pricing

    DEFF Research Database (Denmark)

    Asgharian, Hossein; Christiansen, Charlotte; Hou, Ai Jun

    We suggest a bivariate component GARCH model that simultaneously obtains factor betas' long- and short-run components. We apply this new model to industry portfolios using market, small-minus-big, and high-minus-low portfolios as risk factors and find that the cross-sectional average and dispersion of the betas' short-run component increase in bad states of the economy. Our analysis of the risk premium highlights the importance of decomposing risk across horizons: the risk premium associated with the short-run market beta is significantly positive. This is robust to the portfolio-set choice.

  19. An easy guide to factor analysis

    CERN Document Server

    Kline, Paul

    2014-01-01

    Factor analysis is a statistical technique widely used in psychology and the social sciences. With the advent of powerful computers, factor analysis and other multivariate methods are now available to many more people. An Easy Guide to Factor Analysis presents and explains factor analysis as clearly and simply as possible. The author, Paul Kline, carefully defines all statistical terms and demonstrates step-by-step how to work out a simple example of principal components analysis and rotation. He further explains other methods of factor analysis, including confirmatory and path analysis, a

  20. Latent physiological factors of complex human diseases revealed by independent component analysis of clinarrays

    Directory of Open Access Journals (Sweden)

    Chen David P

    2010-10-01

    Background: Diagnosis and treatment of patients in the clinical setting is often driven by known symptomatic factors that distinguish one particular condition from another. Treatment based on noticeable symptoms, however, is limited to the types of clinical biomarkers collected, and is prone to overlooking dysfunctions in physiological factors not easily evident to medical practitioners. We used a vector-based representation of patient clinical biomarkers, or clinarrays, to search for latent physiological factors that underlie human diseases directly from clinical laboratory data. Knowledge of these factors could be used to improve assessment of disease severity and help to refine strategies for diagnosis and monitoring disease progression. Results: Applying Independent Component Analysis on clinarrays built from patient laboratory measurements revealed both known and novel concomitant physiological factors for asthma, types 1 and 2 diabetes, cystic fibrosis, and Duchenne muscular dystrophy. Serum sodium was found to be the most significant factor for both type 1 and type 2 diabetes, and was also significant in asthma. TSH3, a measure of thyroid function, and blood urea nitrogen, indicative of kidney function, were factors unique to type 1 diabetes relative to type 2 diabetes. Platelet count was significant across all the diseases analyzed. Conclusions: The results demonstrate that large-scale analyses of clinical biomarkers using unsupervised methods can offer novel insights into the pathophysiological basis of human disease, and suggest novel clinical utility of established laboratory measurements.
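
    A hedged sketch of the approach, using scikit-learn's FastICA on a simulated laboratory-value matrix, is shown below; the test names, component count, and data are all fabricated and serve only to illustrate how components can be mapped back to the laboratory measurements that dominate them.

```python
# ICA on a simulated "clinarray" matrix (patients x laboratory tests).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
n_patients, n_tests = 300, 12
test_names = [f"lab_test_{i}" for i in range(n_tests)]        # placeholder names
sources = rng.laplace(size=(n_patients, 4))                   # latent physiological factors
mixing = rng.normal(size=(4, n_tests))
clinarrays = sources @ mixing + 0.1 * rng.normal(size=(n_patients, n_tests))

ica = FastICA(n_components=4, random_state=0, max_iter=1000)
ica.fit(clinarrays)
# mixing_ has shape (n_tests, n_components); list the tests that load most
# strongly on each recovered component.
for k, column in enumerate(ica.mixing_.T):
    top = np.argsort(np.abs(column))[::-1][:3]
    print(f"component {k}: dominated by", [test_names[i] for i in top])
```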

  1. Components of WWER engineering factors for peaking factors: status and trends

    International Nuclear Information System (INIS)

    Tsyganov, S.V.

    2010-01-01

    One of the topics for discussion in the special working group 'Elaboration of the methodology for calculating the core design engineering factors' is the problem of engineering factor components. The list of components corresponds to the phenomena that are taken into account by the engineering factor, and a better understanding of these phenomena is an important stage in developing a unified methodology. This paper presents a brief overview of the components of the engineering factor for VVER core peaking factors as they appear in the Kurchatov Institute methodology. An evolution of some components towards less conservative values is observed, and the author makes some assumptions about further progress in component assessment. The engineering factors that ensure observance of design limits at normal operation should cover, with a set probability, the uncertainty connected with the core design process. To define the value of the factors, it is necessary to determine the influence of these uncertainties on the reactor parameter under investigation. The usual practice is to identify all possible sources of uncertainty, estimate the influence of each of them, and on that basis determine the total influence of all uncertainties. An important stage of the factor-calculation technique is the definition of the list of influencing uncertainties. All characteristics of a VVER core are obviously known only with some uncertainty, owing to manufacturing tolerances, measurement errors, etc.; however, only a part of them essentially influence the safety-related parameters. When forming the list, only those characteristics whose influence on the corresponding parameter is essential are selected. (Author)

  2. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.jp; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro [Department of Radiation Oncology and Image-applied Therapy, Kyoto University, 54 Shogoin-Kawaharacho, Sakyo, Kyoto 606-8507 (Japan)

    2016-09-15

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
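
    The balanced one-factor random-effects computation referred to in the note can be written out explicitly. The sketch below uses simulated setup errors (in mm) and the standard ANOVA estimators; it is not the authors' code, and the confidence-interval machinery discussed in the note is omitted.

```python
# ANOVA variance-component estimates of systematic and random setup error.
import numpy as np

rng = np.random.default_rng(8)
n_patients, n_fractions = 20, 5
systematic_true, random_true = 2.0, 1.5            # mm, used only to simulate data
patient_means = rng.normal(0.0, systematic_true, size=n_patients)
errors = patient_means[:, None] + rng.normal(0.0, random_true, size=(n_patients, n_fractions))

grand_mean = errors.mean()
ms_between = n_fractions * np.sum((errors.mean(axis=1) - grand_mean) ** 2) / (n_patients - 1)
ms_within = np.sum((errors - errors.mean(axis=1, keepdims=True)) ** 2) / (n_patients * (n_fractions - 1))

random_var = ms_within                                               # within-patient variance
systematic_var = max((ms_between - ms_within) / n_fractions, 0.0)    # ANOVA estimator
print("estimated random SD (mm):", round(np.sqrt(random_var), 2))
print("estimated systematic SD (mm):", round(np.sqrt(systematic_var), 2))
```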

  3. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    International Nuclear Information System (INIS)

    Matsuo, Yukinori; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-01-01

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.

  4. Sleep spindle and K-complex detection using tunable Q-factor wavelet transform and morphological component analysis

    Directory of Open Access Journals (Sweden)

    Tarek eLajnef

    2015-07-01

    A novel framework for the joint detection of sleep spindles and K-complex events, two hallmarks of sleep stage S2, is proposed. Sleep electroencephalography (EEG) signals are split into oscillatory (spindle) and transient (K-complex) components. This decomposition is conveniently achieved by applying morphological component analysis (MCA) to a sparse representation of EEG segments obtained by the recently introduced discrete tunable Q-factor wavelet transform (TQWT). Tuning the Q-factor provides a convenient and elegant tool to naturally decompose the signal into an oscillatory and a transient component. The actual detection step relies on thresholding (i) the transient component to reveal K-complexes and (ii) the time-frequency representation of the oscillatory component to identify sleep spindles. Optimal thresholds are derived from ROC-like curves (sensitivity versus FDR) on training sets and the performance of the method is assessed on test data sets. We assessed the performance of our method using full-night sleep EEG data we collected from 14 participants. In comparison to visual scoring (Expert 1), the proposed method detected spindles with a sensitivity of 83.18% and a false discovery rate (FDR) of 39%, while K-complexes were detected with a sensitivity of 81.57% and an FDR of 29.54%. Similar performances were obtained when using a second expert as benchmark. In addition, when the TQWT and MCA steps were excluded from the pipeline the detection sensitivities dropped down to 70% for spindles and to 76.97% for K-complexes, while the FDR rose up to 43.62% and 49.09% respectively. Finally, we also evaluated the performance of the proposed method on a set of publicly available sleep EEG recordings. Overall, the results we obtained suggest that the TQWT-MCA method may be a valuable alternative to existing spindle and K-complex detection methods. Paths for improvements and further validations with large-scale standard open-access benchmarking data sets are

  5. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    Science.gov (United States)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems in both static and real-time settings, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses an approach that combines wavelet decomposition with principal component analysis for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. Face recognition means identifying a person from facial appearance, and it resembles factor analysis in the sense that the principal components of an image are extracted. Principal component analysis has some drawbacks, mainly poor discriminatory power and, in particular, the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images in both the space and time domains. The experimental results suggest that this face recognition method achieves a significant improvement in recognition rate as well as better computational efficiency.
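
    A minimal sketch of the two-stage idea, assuming PyWavelets and scikit-learn are available, is given below: the 2-D wavelet approximation sub-band supplies compact features, and PCA builds the reduced representation used for nearest-neighbour matching. The "face images" are random arrays, so the accuracy printed is meaningful only as a smoke test of the pipeline.

```python
# Wavelet decomposition + PCA + 1-nearest-neighbour matching on fake "faces".
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(9)
images = rng.random((40, 64, 64))                 # 40 hypothetical face images
labels = np.repeat(np.arange(10), 4)              # 10 subjects, 4 images each

def wavelet_features(img):
    approx, _details = pywt.dwt2(img, "haar")     # keep the low-frequency sub-band
    return approx.ravel()

features = np.array([wavelet_features(img) for img in images])
scores = PCA(n_components=20).fit_transform(features)

classifier = KNeighborsClassifier(n_neighbors=1)
classifier.fit(scores, labels)
print("training accuracy (illustrative only):", classifier.score(scores, labels))
```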

  6. Classification of peacock feather reflectance using principal component analysis similarity factors from multispectral imaging data.

    Science.gov (United States)

    Medina, José M; Díaz, José A; Vukusic, Pete

    2015-04-20

    Iridescent structural colors in biology exhibit sophisticated spatially-varying reflectance properties that depend on both the illumination and viewing angles. The classification of such spectral and spatial information in iridescent structurally colored surfaces is important to elucidate the functional role of irregularity and to improve understanding of color pattern formation at different length scales. In this study, we propose a non-invasive method for the spectral classification of spatial reflectance patterns at the micron scale based on the multispectral imaging technique and the principal component analysis similarity factor (PCASF). We demonstrate the effectiveness of this approach and its component methods by detailing its use in the study of the angle-dependent reflectance properties of Pavo cristatus (the common peacock) feathers, a species of peafowl very well known to exhibit bright and saturated iridescent colors. We show that multispectral reflectance imaging and PCASF approaches can be used as effective tools for spectral recognition of iridescent patterns in the visible spectrum and provide meaningful information for spectral classification of the irregularity of the microstructure in iridescent plumage.
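
    The PCA similarity factor itself can be sketched compactly, under the assumption that each reflectance pattern is summarized by the subspace spanned by its first k principal components; the spectral data below are synthetic placeholders rather than peacock-feather measurements.

```python
# PCA similarity factor (PCASF) between two multispectral patterns.
import numpy as np

def pca_subspace(X, k):
    """Return the first k principal directions (right singular vectors) of X."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[:k].T                                   # shape: (n_bands, k)

def pca_similarity_factor(X1, X2, k=3):
    """Krzanowski-style similarity: mean squared cosine between the two subspaces."""
    U1, U2 = pca_subspace(X1, k), pca_subspace(X2, k)
    return np.sum((U1.T @ U2) ** 2) / k               # 1.0 = identical subspaces

rng = np.random.default_rng(10)
pattern_a = rng.normal(size=(200, 16))                # 200 pixels x 16 spectral bands
pattern_b = pattern_a + 0.2 * rng.normal(size=(200, 16))
pattern_c = rng.normal(size=(200, 16))
print("similar patterns:   ", round(pca_similarity_factor(pattern_a, pattern_b), 3))
print("unrelated patterns: ", round(pca_similarity_factor(pattern_a, pattern_c), 3))
```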

  7. Towards Cognitive Component Analysis

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Ahrendt, Peter; Larsen, Jan

    2005-01-01

    Cognitive component analysis (COCA) is here defined as the process of unsupervised grouping of data such that the ensuing group structure is well-aligned with that resulting from human cognitive activity. We have earlier demonstrated that independent components analysis is relevant for representing...

  8. Functional Principal Components Analysis of Shanghai Stock Exchange 50 Index

    Directory of Open Access Journals (Sweden)

    Zhiliang Wang

    2014-01-01

    The main purpose of this paper is to explore the principal components of the Shanghai stock exchange 50 index by means of functional principal component analysis (FPCA). Functional data analysis (FDA) deals with random variables (or processes) with realizations in a smooth functional space. One of the most popular FDA techniques is functional principal component analysis, which was introduced for the statistical analysis of a set of financial time series from an explorative point of view. FPCA is the functional analogue of the well-known dimension reduction technique in multivariate statistical analysis, searching for linear transformations of the random vector with maximal variance. In this paper, we studied the monthly return volatility of the Shanghai stock exchange 50 index (SSE50). Using FPCA to reduce the dimension to a finite level, we extracted the most significant components of the data and some relevant statistical features of the related datasets. The calculated results show that regarding the samples as random functions is rational. Compared with ordinary principal component analysis, FPCA can solve the problem of different dimensions in the samples, and FPCA is a convenient approach to extract the main variance factors.

  9. Investigation of effective factors of transient thermal stress of the MONJU-System components

    Energy Technology Data Exchange (ETDEWEB)

    Inoue, Masaaki; Hirayama, Hiroshi; Kimura, Kimitaka; Jinbo, M. [Toshiba Corp., Kawasaki, Kanagawa (Japan)

    1999-03-01

    Transient thermal stress of each system component in the fast breeder reactor is an uncertain factor in its structural design. The temperature distribution in a system component changes over a wide range in time and in space. A unified evaluation technique of thermal, hydraulic, and structural analysis, which includes thermal striping, temperature stratification, transient thermal stress and the integrity of the system components, is required for the optimum design of the fast reactor plant. Thermal boundary conditions should be set up for both the transient thermal stress analysis and the structural integrity evaluation of each system component. Reasonable thermal boundary conditions for the design of the MONJU and a demonstration fast reactor are investigated. The temperature distribution analysis models and the thermal boundary conditions for the Y-piece structural parts of each system component, such as the reactor vessel, intermediate heat exchanger, primary main circulation pump, steam generator, superheater and upper structure of the reactor core, are illustrated in the report. (M. Suetake)

  10. Analysis of factors controlling soil phosphorus loss with surface runoff in Huihe National Nature Reserve by principal component and path analysis methods.

    Science.gov (United States)

    He, Jing; Su, Derong; Lv, Shihai; Diao, Zhaoyan; Bu, He; Wo, Qiang

    2018-01-01

    Phosphorus (P) loss with surface runoff accounts for P input to, and acceleration of eutrophication of, freshwater. Many studies have focused on factors affecting P loss with surface runoff from soils, but rarely on the relationships among these factors. In the present study, a rainfall simulation of P loss with surface runoff was conducted in Huihe National Nature Reserve, in Hulunbeier grassland, China, and the relationships between P loss with surface runoff, soil properties, and rainfall conditions were examined. Principal component analysis and path analysis were used to analyze the direct and indirect effects on P loss with surface runoff. The results showed that P loss with surface runoff was closely correlated with soil electrical conductivity, soil pH, soil Olsen P, soil total nitrogen (TN), soil total phosphorus (TP), and soil organic carbon (SOC). The main driving factors which influenced P loss with surface runoff were soil TN, soil pH, soil Olsen P, and soil water content. Path analysis and determination coefficient analysis indicated that the standard multiple regression equation for P loss with surface runoff and each main factor was Y = 7.429 - 0.439 soil TN - 6.834 soil pH + 1.721 soil Olsen-P + 0.183 soil water content (r = 0.487). The effect of the physical and chemical properties of undisturbed soils on P loss with surface runoff was discussed, and soil water content and soil Olsen P were strongly positive influences on P loss with surface runoff.

  11. ANOVA-principal component analysis and ANOVA-simultaneous component analysis: a comparison.

    NARCIS (Netherlands)

    Zwanenburg, G.; Hoefsloot, H.C.J.; Westerhuis, J.A.; Jansen, J.J.; Smilde, A.K.

    2011-01-01

    ANOVA-simultaneous component analysis (ASCA) is a recently developed tool to analyze multivariate data. In this paper, we enhance the explorative capability of ASCA by introducing a projection of the observations on the principal component subspace to visualize the variation among the measurements.

  12. Common Factor Analysis Versus Principal Component Analysis: Choice for Symptom Cluster Research

    Directory of Open Access Journals (Sweden)

    Hee-Ju Kim, PhD, RN

    2008-03-01

    Conclusion: If the study purpose is to explain correlations among variables and to examine the structure of the data (as is usual for most cases in symptom cluster research), CFA provides a more accurate result. If the purpose of a study is to summarize data with a smaller number of variables, PCA is the choice. PCA can also be used as an initial step in CFA because it provides information regarding the maximum number and nature of factors. In using factor analysis for symptom cluster research, several issues need to be considered, including subjectivity of solution, sample size, symptom selection, and level of measurement.

  13. Fault Localization for Synchrophasor Data using Kernel Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    CHEN, R.

    2017-11-01

    In this paper, based on Kernel Principal Component Analysis (KPCA) of Phasor Measurement Unit (PMU) data, a nonlinear method is proposed for fault location in complex power systems. Resorting to the scaling factor, the derivative for a polynomial kernel is obtained. Then, the contribution of each variable to the T2 statistic is derived to determine whether a bus is the fault component. Compared to previous Principal Component Analysis (PCA)-based methods, the novel version can cope with strongly nonlinear behavior and provides precise identification of the fault location. Computer simulations are conducted to demonstrate the improved performance of the proposed method in recognizing the fault component and evaluating its propagation across the system.
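
    A hedged sketch of the monitoring side of this approach is shown below: a polynomial-kernel KPCA model is fitted to "normal" PMU-like measurements and a T2 statistic is computed for a new sample from the kernel principal component scores. The per-variable contribution analysis derived in the paper is not reproduced; the data, component count, and control limit are synthetic.

```python
# KPCA-based T^2 monitoring statistic on simulated measurements.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(11)
normal_data = rng.normal(size=(400, 10))                   # 10 hypothetical bus measurements
faulty_sample = rng.normal(size=(1, 10))
faulty_sample[0, 3] += 6.0                                 # large deviation on one "bus"

kpca = KernelPCA(n_components=5, kernel="poly", degree=2)
train_scores = kpca.fit_transform(normal_data)
score_var = train_scores.var(axis=0)                       # variance of each kernel PC

def t2(sample):
    s = kpca.transform(sample)
    return float(np.sum(s ** 2 / score_var, axis=1)[0])

# Empirical 99% control limit from the training scores.
threshold = np.percentile(np.sum(train_scores ** 2 / score_var, axis=1), 99)
print("T^2 of faulty sample:", round(t2(faulty_sample), 1), "limit:", round(threshold, 1))
```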

  14. Cognitive Component Analysis

    DEFF Research Database (Denmark)

    Feng, Ling

    2008-01-01

    This dissertation concerns the investigation of the consistency of statistical regularities in a signaling ecology and human cognition, while inferring appropriate actions for a speech-based perceptual task. It is based on unsupervised Independent Component Analysis providing a rich spectrum ... of audio contexts along with pattern recognition methods to map components to known contexts. It also involves looking for the right representations for auditory inputs, i.e. the data analytic processing pipelines invoked by human brains. The main ideas refer to Cognitive Component Analysis, defined ... as the process of unsupervised grouping of generic data such that the ensuing group structure is well-aligned with that resulting from human cognitive activity. Its hypothesis runs ecologically: features which are essentially independent in a context defined ensemble, can be efficiently coded as sparse...

  15. Independent component analysis: recent advances

    OpenAIRE

    Hyvärinen, Aapo

    2013-01-01

    Independent component analysis is a probabilistic method for learning a linear transform of a random vector. The goal is to find components that are maximally independent and non-Gaussian (non-normal). Its fundamental difference to classical multi-variate statistical methods is in the assumption of non-Gaussianity, which enables the identification of original, underlying components, in contrast to classical methods. The basic theory of independent component analysis was mainly developed in th...

  16. Application of principal component and factor analyses in electron spectroscopy

    International Nuclear Information System (INIS)

    Siuda, R.; Balcerowska, G.

    1998-01-01

    Fundamentals of two methods taken from multivariate analysis, known as principal component analysis (PCA) and factor analysis (FA), are presented. Both methods are well known in chemometrics. Since 1979, when application of the methods to electron spectroscopy was reported for the first time, they have become more and more popular in different branches of electron spectroscopy. The paper presents examples of standard applications of the methods in Auger electron spectroscopy (AES), X-ray photoelectron spectroscopy (XPS), and electron energy loss spectroscopy (EELS). The advantages of applying the methods, their potential, as well as their limitations are pointed out. (author)

  17. Using network component analysis to dissect regulatory networks mediated by transcription factors in yeast.

    Directory of Open Access Journals (Sweden)

    Chun Ye

    2009-03-01

    Understanding the relationship between genetic variation and gene expression is a central question in genetics. With the availability of data from high-throughput technologies such as ChIP-Chip, expression, and genotyping arrays, we can begin not only to identify associations but to understand how genetic variations perturb the underlying transcription regulatory networks to induce differential gene expression. In this study, we describe a simple model of transcription regulation where the expression of a gene is completely characterized by two properties: the concentrations and promoter affinities of active transcription factors. We devise a method that extends Network Component Analysis (NCA) to determine how genetic variations in the form of single nucleotide polymorphisms (SNPs) perturb these two properties. Applying our method to a segregating population of Saccharomyces cerevisiae, we found statistically significant examples of trans-acting SNPs located in regulatory hotspots that perturb transcription factor concentrations and affinities for target promoters to cause global differential expression, and cis-acting genetic variations that perturb the promoter affinities of transcription factors on a single gene to cause local differential expression. Although many genetic variations linked to gene expression have been identified, it is not clear how they perturb the underlying regulatory networks that govern gene expression. Our work begins to fill this void by showing that many genetic variations affect the concentrations of active transcription factors in a cell and their affinities for target promoters. Understanding the effects of these perturbations can help us to paint a more complete picture of the complex landscape of transcription regulation. The software package implementing the algorithms discussed in this work is available as a MATLAB package upon request.

  18. Principal component analysis of socioeconomic factors and their association with malaria in children from the Ashanti Region, Ghana.

    Science.gov (United States)

    Krefis, Anne Caroline; Schwarz, Norbert Georg; Nkrumah, Bernard; Acquah, Samuel; Loag, Wibke; Sarpong, Nimako; Adu-Sarkodie, Yaw; Ranft, Ulrich; May, Jürgen

    2010-07-13

    The socioeconomic and sociodemographic situation are important components for the design and assessment of malaria control measures. In malaria endemic areas, however, valid classification of socioeconomic factors is difficult due to the lack of standardized tax and income data. The objective of this study was to quantify household socioeconomic levels by applying principal component analysis (PCA) to a set of indicator variables and to use the resulting classification scheme in a multivariate analysis of children <15 years of age presenting with and without malaria to the outpatient department of a rural hospital. In total, 1,496 children presenting to the hospital were examined for malaria parasites and interviewed with a standardized questionnaire. The information from eleven indicators of the family's housing situation was reduced by PCA to a socioeconomic score, which was then classified into three socioeconomic strata (poor, average and rich). Their influence on the occurrence of malaria was analysed together with malaria risk co-factors, such as sex, parents' educational and ethnic background, number of children living in a household, applied malaria protection measures, place of residence, and age of the child and the mother. The multivariate regression analysis demonstrated that the proportion of children with malaria decreased with increasing socioeconomic status as classified by PCA (p<0.05). Other independent factors for malaria risk were the use of malaria protection measures (p<0.05), the place of residence (p<0.05), and the age of the child (p<0.05). The socioeconomic situation is significantly associated with malaria even in holoendemic rural areas where economic differences are not much pronounced. Valid classification of the socioeconomic level is crucial to be considered as a confounder in intervention trials and in the planning of malaria control measures.

  19. Multiscale principal component analysis

    International Nuclear Information System (INIS)

    Akinduko, A A; Gorban, A N

    2014-01-01

    Principal component analysis (PCA) is an important tool in exploring data. The conventional approach to PCA leads to a solution which favours the structures with large variances. This is sensitive to outliers and could obfuscate interesting underlying structures. One of the equivalent definitions of PCA is that it seeks the subspaces that maximize the sum of squared pairwise distances between data projections. This definition opens up more flexibility in the analysis of principal components which is useful in enhancing PCA. In this paper we introduce scales into PCA by maximizing only the sum of pairwise distances between projections for pairs of datapoints with distances within a chosen interval of values [l,u]. The resulting principal component decompositions in Multiscale PCA depend on point (l,u) on the plane and for each point we define projectors onto principal components. Cluster analysis of these projectors reveals the structures in the data at various scales. Each structure is described by the eigenvectors at the medoid point of the cluster which represent the structure. We also use the distortion of projections as a criterion for choosing an appropriate scale especially for data with outliers. This method was tested on both artificial distribution of data and real data. For data with multiscale structures, the method was able to reveal the different structures of the data and also to reduce the effect of outliers in the principal component analysis
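
    The scale-restricted objective described above lends itself to a compact sketch (not the authors' code): only pairs of points whose mutual distance falls inside the chosen interval [l, u] contribute to the pairwise-distance sum, and the principal directions are the leading eigenvectors of the resulting matrix. With l = 0 and u unbounded, the result coincides with ordinary PCA.

```python
# Scale-restricted PCA: keep only point pairs whose distance lies in [l, u].
import numpy as np

def multiscale_pca(X, l, u, n_components=2):
    diffs = X[:, None, :] - X[None, :, :]                 # all pairwise differences
    dists = np.linalg.norm(diffs, axis=2)
    mask = (dists >= l) & (dists <= u)
    selected = diffs[mask]
    C = selected.T @ selected                             # sum of (xi - xj)(xi - xj)^T
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, order]

rng = np.random.default_rng(12)
X = np.vstack([rng.normal(size=(100, 3)), rng.normal(loc=8.0, size=(100, 3))])
global_axes = multiscale_pca(X, l=0.0, u=np.inf)          # ordinary PCA-like directions
local_axes = multiscale_pca(X, l=0.0, u=3.0)              # within-cluster scale only
cosine = np.clip(abs(global_axes[:, 0] @ local_axes[:, 0]), 0.0, 1.0)
print("angle between leading axes (deg):", round(np.degrees(np.arccos(cosine)), 1))
```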

  20. Evaluation of chemical transport model predictions of primary organic aerosol for air masses classified by particle-component-based factor analysis

    OpenAIRE

    C. A. Stroud; M. D. Moran; P. A. Makar; S. Gong; W. Gong; J. Zhang; J. G. Slowik; J. P. D. Abbatt; G. Lu; J. R. Brook; C. Mihele; Q. Li; D. Sills; K. B. Strawbridge; M. L. McGuire

    2012-01-01

    Observations from the 2007 Border Air Quality and Meteorology Study (BAQS-Met 2007) in Southern Ontario, Canada, were used to evaluate predictions of primary organic aerosol (POA) and two other carbonaceous species, black carbon (BC) and carbon monoxide (CO), made for this summertime period by Environment Canada's AURAMS regional chemical transport model. Particle component-based factor analysis was applied to aerosol mass spectrometer measurements made at one urban site (Windsor, ON) and two...

  1. Extrinsic Factors as Component Positions to Bone and Intrinsic Factors Affecting Postoperative Rotational Limb Alignment in Total Knee Arthroplasty.

    Science.gov (United States)

    Mochizuki, Tomoharu; Sato, Takashi; Tanifuji, Osamu; Watanabe, Satoshi; Kobayashi, Koichi; Endo, Naoto

    2018-02-13

    This study aimed to identify the factors affecting postoperative rotational limb alignment of the tibia relative to the femur. We hypothesized that not only component positions but also several intrinsic factors were associated with postoperative rotational limb alignment. This study included 99 knees (90 women and 9 men) with a mean age of 77 ± 6 years. A three-dimensional (3D) assessment system was applied under weight-bearing conditions to biplanar long-leg radiographs using a 3D-to-2D image registration technique. The evaluation parameters were (1) component position; (2) preoperative and postoperative coronal, sagittal, and rotational limb alignment; (3) preoperative bony deformity, including femoral torsion, condylar twist angle, and tibial torsion; and (4) preoperative and postoperative range of motion (ROM). In multiple linear regression analysis using a stepwise procedure, postoperative rotational limb alignment was associated with (1) rotation of the component position (tibia: β = 0.371) and (2) intrinsic factors, such as preoperative rotational limb alignment, ROM, and tibial torsion. On the premise of correct component positions, the intrinsic factors that can be controlled by surgeons should be attended to. In particular, ROM needs to be improved as much as possible to acquire better postoperative rotational limb alignment. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. Generalized structured component analysis a component-based approach to structural equation modeling

    CERN Document Server

    Hwang, Heungsun

    2014-01-01

    Winner of the 2015 Sugiyama Meiko Award (Publication Award) of the Behaviormetric Society of Japan Developed by the authors, generalized structured component analysis is an alternative to two longstanding approaches to structural equation modeling: covariance structure analysis and partial least squares path modeling. Generalized structured component analysis allows researchers to evaluate the adequacy of a model as a whole, compare a model to alternative specifications, and conduct complex analyses in a straightforward manner. Generalized Structured Component Analysis: A Component-Based Approach to Structural Equation Modeling provides a detailed account of this novel statistical methodology and its various extensions. The authors present the theoretical underpinnings of generalized structured component analysis and demonstrate how it can be applied to various empirical examples. The book enables quantitative methodologists, applied researchers, and practitioners to grasp the basic concepts behind this new a...

  3. Group-wise ANOVA simultaneous component analysis for designed omics experiments

    NARCIS (Netherlands)

    Saccenti, Edoardo; Smilde, Age K.; Camacho, José

    2018-01-01

    Introduction: Modern omics experiments pertain not only to the measurement of many variables but also follow complex experimental designs where many factors are manipulated at the same time. This data can be conveniently analyzed using multivariate tools like ANOVA-simultaneous component analysis

  4. Multiple factor analysis by example using R

    CERN Document Server

    Pagès, Jérôme

    2014-01-01

    Multiple factor analysis (MFA) enables users to analyze tables of individuals and variables in which the variables are structured into quantitative, qualitative, or mixed groups. Written by the co-developer of this methodology, Multiple Factor Analysis by Example Using R brings together the theoretical and methodological aspects of MFA. It also includes examples of applications and details of how to implement MFA using an R package (FactoMineR).The first two chapters cover the basic factorial analysis methods of principal component analysis (PCA) and multiple correspondence analysis (MCA). The

  5. Analysis of factors controlling sediment phosphorus flux potential of wetlands in Hulun Buir grassland by principal component and path analysis method.

    Science.gov (United States)

    He, Jing; Su, Derong; Lv, Shihai; Diao, Zhaoyan; Ye, Shengxing; Zheng, Zhirong

    2017-11-08

    Phosphorus (P) flux potential can predict the trend of phosphorus release from wetland sediments to water and provide scientific parameters for further monitoring and management of phosphorus flux from wetland sediments to overlying water. Many studies have focused on factors affecting sediment P flux potential at the sediment-water interface, but rarely on the relationships among these factors. In the present study, experiments on sediment P flux potential at the sediment-water interface were conducted in six wetlands in Hulun Buir grassland, China, and the relationships among sediment P flux potential, sediment physical properties, and sediment chemical characteristics were examined. Principal component analysis and path analysis were used to examine these data in terms of correlation coefficients and direct and indirect effects on sediment P flux potential at the sediment-water interface. Results indicated that the major factors affecting sediment P flux potential were the amount of organophosphate-degrading bacteria in sediment, Ca-P content, and total phosphorus concentration. The factors directly influencing sediment P flux potential were sediment Ca-P content, Olsen-P content, SOC content, and sediment Al-P content. The factors indirectly influencing sediment P flux potential at the sediment-water interface were sediment Olsen-P content, sediment SOC content, sediment Ca-P content, and sediment Al-P content. The standard multiple regression describing the relationship between sediment P flux potential at the sediment-water interface and its major effect factors was Y = 5.849 - 1.025X1 - 1.995X2 + 0.188X3 - 0.282X4 (r = 0.9298, p < 0.01, n = 96), where Y is sediment P flux potential at the sediment-water interface, X1 is sediment Ca-P content, X2 is sediment Olsen-P content, X3 is sediment SOC content, and X4 is sediment Al-P content. Therefore, future research will focus on these sediment properties to analyze the
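    The reported equation invites a quick numeric check. Below is a minimal Python sketch, not from the study, that evaluates the fitted regression for hypothetical sediment values; the function name and all input numbers are illustrative assumptions.

```python
# A minimal sketch (not the authors' code): plugging hypothetical sediment
# measurements into the regression reported above. The input values below
# are illustrative placeholders, not data from the study.

def p_flux_potential(ca_p, olsen_p, soc, al_p):
    """Y = 5.849 - 1.025*X1 - 1.995*X2 + 0.188*X3 - 0.282*X4,
    with X1 = Ca-P, X2 = Olsen-P, X3 = SOC, X4 = Al-P (units as in the study)."""
    return 5.849 - 1.025 * ca_p - 1.995 * olsen_p + 0.188 * soc - 0.282 * al_p

# Hypothetical example values for the four predictors.
print(round(p_flux_potential(ca_p=1.2, olsen_p=0.8, soc=10.0, al_p=0.5), 3))
```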

  6. On Bayesian Principal Component Analysis

    Czech Academy of Sciences Publication Activity Database

    Šmídl, Václav; Quinn, A.

    2007-01-01

    Roč. 51, č. 9 (2007), s. 4101-4123 ISSN 0167-9473 R&D Projects: GA MŠk(CZ) 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : Principal component analysis ( PCA ) * Variational bayes (VB) * von-Mises–Fisher distribution Subject RIV: BC - Control Systems Theory Impact factor: 1.029, year: 2007 http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V8V-4MYD60N-6&_user=10&_coverDate=05%2F15%2F2007&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=b8ea629d48df926fe18f9e5724c9003a

  7. Multiview Bayesian Correlated Component Analysis

    DEFF Research Database (Denmark)

    Kamronn, Simon Due; Poulsen, Andreas Trier; Hansen, Lars Kai

    2015-01-01

    are identical. Here we propose a hierarchical probabilistic model that can infer the level of universality in such multiview data, from completely unrelated representations, corresponding to canonical correlation analysis, to identical representations as in correlated component analysis. This new model, which...... we denote Bayesian correlated component analysis, evaluates favorably against three relevant algorithms in simulated data. A well-established benchmark EEG data set is used to further validate the new model and infer the variability of spatial representations across multiple subjects....

  8. PRINCIPAL COMPONENT ANALYSIS (PCA) AND ITS APPLICATION WITH SPSS

    Directory of Open Access Journals (Sweden)

    Hermita Bus Umar

    2009-03-01

    Full Text Available PCA (Principal Component Analysis) comprises statistical techniques applied to a single set of variables when the researcher is interested in discovering which variables in the set form coherent subsets that are relatively independent of one another. Variables that are correlated with one another but largely independent of other subsets of variables are combined into factors. The goal of PCA is to determine to what extent each variable is explained by each dimension. The steps in PCA include selecting and measuring a set of variables, preparing the correlation matrix, extracting a set of factors from the correlation matrix, rotating the factors to increase interpretability, and interpreting the results.
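    As a rough illustration of the steps listed above, the following NumPy sketch prepares a correlation matrix, extracts components, and computes loadings and explained variance on a random placeholder data matrix; it is not tied to the SPSS workflow of the article, and the rotation step is omitted.

```python
# Sketch of the PCA steps listed above (standardize, build the correlation
# matrix, extract components, inspect loadings), using NumPy on toy data.
# The data matrix below is a random placeholder for a table of observations
# (rows) by measured variables (columns).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))                 # 100 observations, 6 variables

# 1. Standardize the variables and prepare the correlation matrix.
R = np.corrcoef(X, rowvar=False)

# 2. Extract components: eigenvalues/eigenvectors of the correlation matrix.
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]             # largest eigenvalue first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# 3. Loadings (eigenvectors scaled by the square root of the eigenvalue) and
#    the share of total variance explained by each component.
loadings = eigvecs * np.sqrt(eigvals)
explained = eigvals / eigvals.sum()
print(np.round(explained, 3))
```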

  9. A component analysis of positive behaviour support plans.

    Science.gov (United States)

    McClean, Brian; Grey, Ian

    2012-09-01

    Positive behaviour support (PBS) emphasises multi-component interventions by natural intervention agents to help people overcome challenging behaviours. This paper investigates which components are most effective and which factors might mediate effectiveness. Sixty-one staff working with individuals with intellectual disability and challenging behaviours completed longitudinal competency-based training in PBS. Each staff participant conducted a functional assessment and developed and implemented a PBS plan for one prioritised individual. A total of 1,272 interventions were available for analysis. Measures of challenging behaviour were taken at baseline, after 6 months, and at an average of 26 months follow-up. There was a significant reduction in the frequency, management difficulty, and episodic severity of challenging behaviour over the duration of the study. Escape was identified by staff as the most common function, accounting for 77% of challenging behaviours. The most commonly implemented components of intervention were setting event changes and quality-of-life-based interventions. Only treatment acceptability was found to be related to decreases in behavioural frequency. No single intervention component was found to have a greater association with reductions in challenging behaviour.

  10. Exploring Technostress: Results of a Large Sample Factor Analysis

    OpenAIRE

    Jonušauskas, Steponas; Raišienė, Agota Giedrė

    2016-01-01

    With reference to the results of a large sample factor analysis, the article aims to propose a frame for examining technostress in a population. A survey and principal component analysis of a sample consisting of 1013 individuals who use ICT in their everyday work were implemented in the research. Thirteen factors, combining 68 questions, explain 59.13 per cent of the dispersion in the answers. Based on the factor analysis, the questionnaire was reframed and prepared to reasonably analyze the respondents’ an...

  11. Scalable group level probabilistic sparse factor analysis

    DEFF Research Database (Denmark)

    Hinrich, Jesper Løve; Nielsen, Søren Føns Vind; Riis, Nicolai Andre Brogaard

    2017-01-01

    Many data-driven approaches exist to extract neural representations of functional magnetic resonance imaging (fMRI) data, but most of them lack a proper probabilistic formulation. We propose a scalable group level probabilistic sparse factor analysis (psFA) allowing spatially sparse maps, component...... pruning using automatic relevance determination (ARD) and subject specific heteroscedastic spatial noise modeling. For task-based and resting state fMRI, we show that the sparsity constraint gives rise to components similar to those obtained by group independent component analysis. The noise modeling...... shows that noise is reduced in areas typically associated with activation by the experimental design. The psFA model identifies sparse components and the probabilistic setting provides a natural way to handle parameter uncertainties. The variational Bayesian framework easily extends to more complex...

  12. Determining the optimal number of independent components for reproducible transcriptomic data analysis.

    Science.gov (United States)

    Kairov, Ulykbek; Cantini, Laura; Greco, Alessandro; Molkenov, Askhat; Czerwinska, Urszula; Barillot, Emmanuel; Zinovyev, Andrei

    2017-09-11

    Independent Component Analysis (ICA) is a method that models gene expression data as an action of a set of statistically independent hidden factors. The output of ICA depends on a fundamental parameter: the number of components (factors) to compute. The optimal choice of this parameter, related to determining the effective data dimension, remains an open question in the application of blind source separation techniques to transcriptomic data. Here we address the question of optimizing the number of statistically independent components in the analysis of transcriptomic data for reproducibility of the components in multiple runs of ICA (within the same or within varying effective dimensions) and in multiple independent datasets. To this end, we introduce ranking of independent components based on their stability in multiple ICA computation runs and define a distinguished number of components (Most Stable Transcriptome Dimension, MSTD) corresponding to the point of the qualitative change of the stability profile. Based on a large body of data, we demonstrate that a sufficient number of dimensions is required for biological interpretability of the ICA decomposition and that the most stable components with ranks below MSTD have more chances to be reproduced in independent studies compared to the less stable ones. At the same time, we show that a transcriptomics dataset can be reduced to a relatively high number of dimensions without losing the interpretability of ICA, even though higher dimensions give rise to components driven by small gene sets. We suggest a protocol of ICA application to transcriptomics data with a possibility of prioritizing components with respect to their reproducibility that strengthens the biological interpretation. Computing too few components (much less than MSTD) is not optimal for interpretability of the results. The components ranked within MSTD range have more chances to be reproduced in independent studies.
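    The stability idea can be illustrated with a much-simplified sketch: run FastICA with several random seeds and score each component of a reference run by its best absolute correlation with components from the other runs. This is a toy version under stated assumptions, not the authors' MSTD procedure, and the data below merely stand in for a transcriptomic matrix.

```python
# Simplified illustration of stability ranking across multiple ICA runs
# (not the authors' implementation). Toy non-Gaussian sources are mixed
# into 50 "genes"; each component of a reference run is scored by its best
# absolute correlation with the components of the other runs.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
S = rng.laplace(size=(200, 5))                 # non-Gaussian "hidden factors"
A = rng.normal(size=(5, 50))                   # random mixing into 50 observed variables
X = S @ A + 0.1 * rng.normal(size=(200, 50))

n_comp, n_runs = 5, 8
runs = [FastICA(n_components=n_comp, random_state=seed, max_iter=1000).fit_transform(X)
        for seed in range(n_runs)]

reference = runs[0]
stability = []
for k in range(n_comp):
    best_matches = []
    for other in runs[1:]:
        corrs = [abs(np.corrcoef(reference[:, k], other[:, j])[0, 1]) for j in range(n_comp)]
        best_matches.append(max(corrs))        # best match of component k in this run
    stability.append(float(np.mean(best_matches)))

print(np.argsort(stability)[::-1], np.round(stability, 2))   # most stable components first
```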

  13. Comparison of common components analysis with principal components analysis and independent components analysis: Application to SPME-GC-MS volatolomic signatures.

    Science.gov (United States)

    Bouhlel, Jihéne; Jouan-Rimbaud Bouveresse, Delphine; Abouelkaram, Said; Baéza, Elisabeth; Jondreville, Catherine; Travel, Angélique; Ratel, Jérémy; Engel, Erwan; Rutledge, Douglas N

    2018-02-01

    The aim of this work is to compare a novel exploratory chemometrics method, Common Components Analysis (CCA), with Principal Components Analysis (PCA) and Independent Components Analysis (ICA). CCA consists in adapting the multi-block statistical method known as Common Components and Specific Weights Analysis (CCSWA or ComDim) by applying it to a single data matrix, with one variable per block. As an application, the three methods were applied to SPME-GC-MS volatolomic signatures of livers in an attempt to reveal volatile organic compound (VOC) markers of chicken exposure to different types of micropollutants. An application of CCA to the initial SPME-GC-MS data revealed a drift in the sample Scores along CC2, as a function of injection order, probably resulting from time-related evolution in the instrument. This drift was eliminated by orthogonalization of the data set with respect to CC2, and the resulting data were used as the orthogonalized data input into each of the three methods. Since the first step in CCA is to norm-scale all the variables, preliminary data scaling has no effect on the results, so that CCA was applied only to the orthogonalized SPME-GC-MS data, while PCA and ICA were applied to the "orthogonalized", "orthogonalized and Pareto-scaled", and "orthogonalized and autoscaled" data. The comparison showed that the PCA results were highly dependent on the scaling of variables, contrary to ICA, where the data scaling did not have a strong influence. Nevertheless, for both PCA and ICA the clearest separations of exposed groups were obtained after autoscaling of variables. The main part of this work was to compare the CCA results using the orthogonalized data with those obtained with PCA and ICA applied to orthogonalized and autoscaled variables. The clearest separations of exposed chicken groups were obtained by CCA. CCA Loadings also clearly identified the variables contributing most to the Common Components giving separations. The PCA Loadings did not
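    For readers unfamiliar with the two scalings compared above, here is a minimal sketch of autoscaling and Pareto scaling followed by PCA; the toy peak-intensity table and the component count are placeholders, not the SPME-GC-MS data.

```python
# Minimal sketch of the variable scalings compared above (autoscaling vs.
# Pareto scaling) followed by PCA, on a toy peak-intensity table.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = rng.lognormal(mean=0.0, sigma=1.0, size=(60, 20))    # 60 samples, 20 "VOC" variables

def autoscale(x):
    # Center each variable and divide by its standard deviation.
    return (x - x.mean(axis=0)) / x.std(axis=0, ddof=1)

def pareto_scale(x):
    # Center each variable and divide by the square root of its standard deviation.
    return (x - x.mean(axis=0)) / np.sqrt(x.std(axis=0, ddof=1))

for name, scaled in (("autoscaled", autoscale(X)), ("Pareto-scaled", pareto_scale(X))):
    pca = PCA(n_components=2).fit(scaled)
    print(name, "explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
```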

  14. Model reduction by weighted Component Cost Analysis

    Science.gov (United States)

    Kim, Jae H.; Skelton, Robert E.

    1990-01-01

    Component Cost Analysis considers any given system driven by a white noise process as an interconnection of different components, and assigns a metric called 'component cost' to each component. These component costs measure the contribution of each component to a predefined quadratic cost function. A reduced-order model of the given system may be obtained by deleting those components that have the smallest component costs. The theory of Component Cost Analysis is extended to include finite-bandwidth colored noises. The results also apply when actuators have dynamics of their own. Closed-form analytical expressions of component costs are also derived for a mechanical system described by its modal data. This is very useful to compute the modal costs of very high order systems. A numerical example for MINIMAST system is presented.

  15. Factor Structure of Indices of the Second Derivative of the Finger Photoplethysmogram with Metabolic Components and Other Cardiovascular Risk Indicators

    Directory of Open Access Journals (Sweden)

    Tomoyuki Kawada

    2013-02-01

    Full Text Available Background: The second derivative of the finger photoplethysmogram (SDPTG) is an indicator of arterial stiffness. The present study was conducted to clarify the factor structure of indices of the SDPTG in combination with components of the metabolic syndrome (MetS), to elucidate the significance of the SDPTG among various cardiovascular risk factors. Methods: The SDPTG was determined in the second forefinger of the left hand in 1,055 male workers (mean age, 44.2±6.4 years). Among the 4 waves of the SDPTG components, the ratios of the height of the "a" wave to that of the "b" and "d" waves were expressed as b/a and d/a, and used as SDPTG indices for the analysis. Results: Principal axis factoring analysis was conducted using age, SDPTG indices, components of MetS, and the serum levels of C-reactive protein (CRP) and uric acid. Three factors were extracted, and the SDPTG indices were categorized in combination with age as the third factor. Metabolic components and the SDPTG indices were independently categorized. These three factors explained 44.4% of the total variation. Multiple logistic regression analysis revealed age, d/a, serum uric acid, serum CRP, and regular exercise as independent determinants of the risk of MetS. The odds ratios (95% confidence intervals) were 1.08 (1.04 to 1.11), 0.10 (0.01 to 0.73), 1.24 (1.06 to 1.44), 3.59 (2.37 to 5.42), and 0.48 (0.28 to 0.82), respectively. Conclusion: The SDPTG indices were categorized in combination with age, and they differed in characteristics from components of MetS or inflammatory markers. In addition, this cross-sectional study also revealed a decrease in d/a as a risk factor for the development of MetS.

  16. Application of principal component analysis to ecodiversity assessment of postglacial landscape (on the example of Debnica Kaszubska commune, Middle Pomerania)

    Science.gov (United States)

    Wojciechowski, Adam

    2017-04-01

    In order to assess ecodiversity understood as a comprehensive natural landscape factor (Jedicke 2001), it is necessary to apply research methods which recognize the environment in a holistic way. Principal component analysis may be considered one such method, as it allows the main factors determining landscape diversity to be distinguished on the one hand, and enables regularities shaping the relationships between various elements of the environment under study to be discovered on the other hand. The procedure adopted to assess ecodiversity with the use of principal component analysis involves: a) determining and selecting appropriate factors of the assessed environment qualities (hypsometric, geological, hydrographic, plant, and others); b) calculating the absolute value of individual qualities for the basic areas under analysis (e.g. river length, forest area, altitude differences, etc.); c) principal components analysis and obtaining factor maps (maps of selected components); d) generating a resultant, detailed map and isolating several classes of ecodiversity. An assessment of ecodiversity with the use of principal component analysis was conducted in a test area of 299.67 km2 in Debnica Kaszubska commune. The whole commune is situated in the Weichselian glaciation area of high hypsometric and morphological diversity as well as high geo- and biodiversity. The analysis was based on topographical maps of the commune area at a scale of 1:25000 and maps of forest habitats. Consequently, nine factors reflecting basic environment elements were calculated: maximum height (m), minimum height (m), average height (m), the length of watercourses (km), the area of water reservoirs (m2), total forest area (ha), coniferous forest habitat area (ha), deciduous forest habitat area (ha), and alder habitat area (ha). The values for individual factors were analysed for 358 grid cells of 1 km2. Based on the principal components analysis, four major factors affecting commune ecodiversity

  17. 7 CFR 1000.50 - Class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Class prices, component prices, and advanced pricing... advanced pricing factors. Class prices per hundredweight of milk containing 3.5 percent butterfat, component prices, and advanced pricing factors shall be as follows. The prices and pricing factors described...

  18. Parallel factor analysis PARAFAC of process affected water

    Energy Technology Data Exchange (ETDEWEB)

    Ewanchuk, A.M.; Ulrich, A.C.; Sego, D. [Alberta Univ., Edmonton, AB (Canada). Dept. of Civil and Environmental Engineering; Alostaz, M. [Thurber Engineering Ltd., Calgary, AB (Canada)

    2010-07-01

    A parallel factor analysis (PARAFAC) of oil sands process-affected water was presented. Naphthenic acids (NA) are traditionally described as monobasic carboxylic acids. Research has indicated that oil sands NA do not fit classical definitions of NA. Oil sands organic acids have toxic and corrosive properties. When analyzed by fluorescence technology, oil sands process-affected water displays a characteristic peak at 290 nm excitation and approximately 346 nm emission. In this study, a parallel factor analysis (PARAFAC) was used to decompose process-affected water multi-way data into components representing analytes, chemical compounds, and groups of compounds. Water samples from various oil sands operations were analyzed in order to obtain EEMs. The EEMs were then arranged into a large matrix in decreasing process-affected water content for PARAFAC. Data were divided into 5 components. A comparison with commercially prepared NA samples suggested that oil sands NA is fundamentally different. Further research is needed to determine what each of the 5 components represent. tabs., figs.

  19. COPD phenotype description using principal components analysis

    DEFF Research Database (Denmark)

    Roy, Kay; Smith, Jacky; Kolsum, Umme

    2009-01-01

    BACKGROUND: Airway inflammation in COPD can be measured using biomarkers such as induced sputum and Fe(NO). This study set out to explore the heterogeneity of COPD using biomarkers of airway and systemic inflammation and pulmonary function by principal components analysis (PCA). SUBJECTS...... AND METHODS: In 127 COPD patients (mean FEV1 61%), pulmonary function, Fe(NO), plasma CRP and TNF-alpha, sputum differential cell counts and sputum IL8 (pg/ml) were measured. Principal components analysis as well as multivariate analysis was performed. RESULTS: PCA identified four main components (% variance...... associations between the variables within components 1 and 2. CONCLUSION: COPD is a multi dimensional disease. Unrelated components of disease were identified, including neutrophilic airway inflammation which was associated with systemic inflammation, and sputum eosinophils which were related to increased Fe...

  20. Evaluation of Parallel Analysis Methods for Determining the Number of Factors

    Science.gov (United States)

    Crawford, Aaron V.; Green, Samuel B.; Levy, Roy; Lo, Wen-Juo; Scott, Lietta; Svetina, Dubravka; Thompson, Marilyn S.

    2010-01-01

    Population and sample simulation approaches were used to compare the performance of parallel analysis using principal component analysis (PA-PCA) and parallel analysis using principal axis factoring (PA-PAF) to identify the number of underlying factors. Additionally, the accuracies of the mean eigenvalue and the 95th percentile eigenvalue criteria…
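    A compact sketch of the PA-PCA variant may help: observed correlation-matrix eigenvalues are compared with the mean and the 95th percentile of eigenvalues obtained from random data of the same size. The toy data below, containing two latent factors, are an assumption for demonstration only.

```python
# Sketch of the PCA-eigenvalue variant of parallel analysis (PA-PCA):
# observed eigenvalues are compared against the mean and 95th percentile
# eigenvalues from random data of the same dimensions.
import numpy as np

def parallel_analysis(X, n_random=200, percentile=95, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    rand = np.empty((n_random, p))
    for i in range(n_random):
        R = np.corrcoef(rng.normal(size=(n, p)), rowvar=False)
        rand[i] = np.sort(np.linalg.eigvalsh(R))[::-1]

    def n_retained(criterion):
        exceeds = obs > criterion              # keep leading eigenvalues that beat the criterion
        return p if exceeds.all() else int(np.argmin(exceeds))

    return n_retained(rand.mean(axis=0)), n_retained(np.percentile(rand, percentile, axis=0))

rng = np.random.default_rng(3)
F = rng.normal(size=(300, 2))                  # two latent factors
X = F @ rng.normal(size=(2, 10)) + rng.normal(size=(300, 10))
print(parallel_analysis(X))                    # (mean criterion, 95th percentile criterion)
```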

  1. Component evaluation testing and analysis algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    Hart, Darren M.; Merchant, Bion John

    2011-10-01

    The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.

  2. Exploratory factor analysis in Rehabilitation Psychology: a content analysis.

    Science.gov (United States)

    Roberson, Richard B; Elliott, Timothy R; Chang, Jessica E; Hill, Jessica N

    2014-11-01

    Our objective was to examine the use and quality of exploratory factor analysis (EFA) in articles published in Rehabilitation Psychology. Trained raters examined 66 separate exploratory factor analyses in 47 articles published between 1999 and April 2014. The raters recorded the aim of the EFAs, the distributional statistics, sample size, factor retention method(s), extraction and rotation method(s), and whether the pattern coefficients, structure coefficients, and the matrix of association were reported. The primary use of the EFAs was scale development, but the most widely used extraction and rotation method was principal component analysis, with varimax rotation. When determining how many factors to retain, multiple methods (e.g., scree plot, parallel analysis) were used most often. Many articles did not report enough information to allow for the duplication of their results. EFA relies on authors' choices (e.g., factor retention rules, extraction and rotation methods), and few articles adhered to all of the best practices. The current findings are compared to other empirical investigations into the use of EFA in published research. Recommendations for improving EFA reporting practices in rehabilitation psychology research are provided.

  3. Improvement of Binary Analysis Components in Automated Malware Analysis Framework

    Science.gov (United States)

    2017-02-21

    AFRL-AFOSR-JP-TR-2017-0018, Improvement of Binary Analysis Components in Automated Malware Analysis Framework, Keiji Takeda, Keio University. Final report covering 26 May 2015 to 25 Nov 2016. The framework is intended to analyze malicious software (malware) with minimum human interaction; the system autonomously analyzes malware samples by analyzing malware binary programs.

  4. Factor analysis of the Hamilton Depression Rating Scale in Parkinson's disease.

    Science.gov (United States)

    Broen, M P G; Moonen, A J H; Kuijf, M L; Dujardin, K; Marsh, L; Richard, I H; Starkstein, S E; Martinez-Martin, P; Leentjens, A F G

    2015-02-01

    Several studies have validated the Hamilton Depression Rating Scale (HAMD) in patients with Parkinson's disease (PD), and reported adequate reliability and construct validity. However, the factorial validity of the HAMD has not yet been investigated. The aim of our analysis was to explore the factor structure of the HAMD in a large sample of PD patients. A principal component analysis of the 17-item HAMD was performed on data of 341 PD patients, available from a previous cross sectional study on anxiety. An eigenvalue ≥1 was used to determine the number of factors. Factor loadings ≥0.4 in combination with oblique rotations were used to identify which variables made up the factors. Kaiser-Meyer-Olkin measure (KMO), Cronbach's alpha, Bartlett's test, communality, percentage of non-redundant residuals and the component correlation matrix were computed to assess factor validity. KMO verified the sample's adequacy for factor analysis and Cronbach's alpha indicated a good internal consistency of the total scale. Six factors had eigenvalues ≥1 and together explained 59.19% of the variance. The number of items per factor varied from 1 to 6. Inter-item correlations within each component were low. There was a high percentage of non-redundant residuals and low communality. This analysis demonstrates that the factorial validity of the HAMD in PD is unsatisfactory. This implies that the scale is not appropriate for studying specific symptom domains of depression based on factorial structure in a PD population. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Exploring Technostress: Results of a Large Sample Factor Analysis

    Directory of Open Access Journals (Sweden)

    Steponas Jonušauskas

    2016-06-01

    Full Text Available With reference to the results of a large sample factor analysis, the article aims to propose a frame for examining technostress in a population. A survey and principal component analysis of a sample consisting of 1013 individuals who use ICT in their everyday work were implemented in the research. Thirteen factors, combining 68 questions, explain 59.13 per cent of the dispersion in the answers. Based on the factor analysis, the questionnaire was reframed and prepared to reasonably analyze the respondents’ answers, revealing technostress causes and consequences as well as technostress prevalence in the population in a statistically validated pattern. The key elements of technostress identified by the factor analysis can serve for the construction of technostress measurement scales in further research.

  6. Bayesian Sensitivity Analysis of a Nonlinear Dynamic Factor Analysis Model with Nonparametric Prior and Possible Nonignorable Missingness.

    Science.gov (United States)

    Tang, Niansheng; Chow, Sy-Miin; Ibrahim, Joseph G; Zhu, Hongtu

    2017-12-01

    Many psychological concepts are unobserved and usually represented as latent factors apprehended through multiple observed indicators. When multiple-subject multivariate time series data are available, dynamic factor analysis models with random effects offer one way of modeling patterns of within- and between-person variations by combining factor analysis and time series analysis at the factor level. Using the Dirichlet process (DP) as a nonparametric prior for individual-specific time series parameters further allows the distributional forms of these parameters to deviate from commonly imposed (e.g., normal or other symmetric) functional forms, arising as a result of these parameters' restricted ranges. Given the complexity of such models, a thorough sensitivity analysis is critical but computationally prohibitive. We propose a Bayesian local influence method that allows for simultaneous sensitivity analysis of multiple modeling components within a single fitting of the model of choice. Five illustrations and an empirical example are provided to demonstrate the utility of the proposed approach in facilitating the detection of outlying cases and common sources of misspecification in dynamic factor analysis models, as well as identification of modeling components that are sensitive to changes in the DP prior specification.

  7. Principal component analysis reveals gender-specific predictors of cardiometabolic risk in 6th graders

    Directory of Open Access Journals (Sweden)

    Peterson Mark D

    2012-11-01

    Full Text Available Abstract Background: The purpose of this study was to determine the sex-specific pattern of pediatric cardiometabolic risk with principal component analysis, using several biological, behavioral and parental variables in a large cohort (n = 2866) of 6th grade students. Methods: Cardiometabolic risk components included waist circumference, fasting glucose, blood pressure, plasma triglycerides levels and HDL-cholesterol. Principal components analysis was used to determine the pattern of risk clustering and to derive a continuous aggregate score (MetScore). Stratified risk components and MetScore were analyzed for association with age, body mass index (BMI), cardiorespiratory fitness (CRF), physical activity (PA), and parental factors. Results: In both boys and girls, BMI and CRF were associated with multiple risk components, and overall MetScore. Maternal smoking was associated with multiple risk components in girls and boys, as well as MetScore in boys, even after controlling for children’s BMI. Paternal family history of early cardiovascular disease (CVD) and parental age were associated with increased blood pressure and MetScore for girls. Children’s PA levels, maternal history of early CVD, and paternal BMI were also indicative for various risk components, but not MetScore. Conclusions: Several biological and behavioral factors were independently associated with children’s cardiometabolic disease risk, and thus represent a unique gender-specific risk profile. These data serve to bolster the independent contribution of CRF, PA, and family-oriented healthy lifestyles for improving children’s health.
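    The idea of a continuous aggregate score can be sketched as follows; this is an illustrative assumption about how such a score might be formed (component scores weighted by explained variance), not the study's actual scoring code, and the risk variables are random stand-ins.

```python
# Illustrative sketch of deriving a continuous aggregate risk score from
# principal components of standardized risk variables, in the spirit of
# the MetScore described above.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
# Toy stand-ins for waist circumference, fasting glucose, blood pressure,
# triglycerides and (sign-flipped) HDL-cholesterol.
risk = rng.normal(size=(500, 5))

Z = StandardScaler().fit_transform(risk)
pca = PCA().fit(Z)
scores = pca.transform(Z)

# One common convention: weight each component score by its explained
# variance ratio and sum, giving a single continuous score per child.
met_score = scores @ pca.explained_variance_ratio_
print(np.round(met_score[:5], 2))
```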

  8. Fluvial facies reservoir productivity prediction method based on principal component analysis and artificial neural network

    Directory of Open Access Journals (Sweden)

    Pengyu Gao

    2016-03-01

    Full Text Available It is difficult to forecast well productivity because of the complexity of vertical and horizontal developments in fluvial facies reservoirs. This paper proposes a method based on principal component analysis and an artificial neural network to predict the well productivity of fluvial facies reservoirs. The method summarizes the statistical reservoir factors and engineering factors that affect well productivity, extracts information by applying the principal component analysis method, and exploits the neural network's ability to approximate arbitrary functions to realize an accurate and efficient prediction of fluvial facies reservoir well productivity. This method provides an effective way of forecasting the productivity of fluvial facies reservoirs, which is affected by multiple factors and complex mechanisms. The study results show that this method is a practical, effective, accurate and indirect productivity forecasting method and is suitable for field application.
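    A hedged sketch of the two-stage idea, PCA for compression followed by a small neural network on the component scores, is given below; the data, the number of retained components, and the network size are placeholder assumptions rather than values from the paper.

```python
# Sketch of the two-stage pipeline described above: compress correlated
# reservoir and engineering factors with PCA, then fit a small neural
# network on the component scores.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
X = rng.normal(size=(150, 12))                              # 12 toy reservoir/engineering factors
y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=150)       # toy well-productivity target

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=5),                                    # keep the leading principal components
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X, y)
print(round(model.score(X, y), 3))                          # in-sample R^2 of the toy fit
```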

  9. Nonlinear Principal Component Analysis Using Strong Tracking Filter

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The paper analyzes the problem of blind source separation (BSS) based on the nonlinear principal component analysis (NPCA) criterion. An adaptive strong tracking filter (STF) based algorithm was developed, which is immune to system model mismatches. Simulations demonstrate that the algorithm converges quickly and has satisfactory steady-state accuracy. The Kalman filtering algorithm and the recursive least-squares type algorithm are shown to be special cases of the STF algorithm. Since the forgetting factor is adaptively updated by adjustment of the Kalman gain, the STF scheme provides more powerful tracking capability than the Kalman filtering algorithm and the recursive least-squares algorithm.

  10. A factor analysis to find critical success factors in retail brand

    Directory of Open Access Journals (Sweden)

    Naser Azad

    2013-03-01

    Full Text Available The present exploratory study aims to find the critical components of retail brands among some retail stores. The study seeks to build a brand name at the retail level and looks to find the important factors affecting it. Customer behavior is largely influenced when the first retail customer experience is formed. These factors have direct impacts on customer experience and satisfaction in the retail industry. The proposed study performs an empirical investigation on two well-known retail stores located in the city of Tehran, Iran. Using a sample of 265 people from regular customers, the study uses factor analysis and extracts four main factors, including related brand, product benefits, customer welfare strategy and corporate profits, from the existing 31 factors in the literature.

  11. Reliability analysis and component functional allocations for the ESF multi-loop controller design

    International Nuclear Information System (INIS)

    Hur, Seop; Kim, D.H.; Choi, J.K.; Park, J.C.; Seong, S.H.; Lee, D.Y.

    2006-01-01

    This paper deals with the reliability analysis and component functional allocations performed to ensure enhanced system reliability and availability. In the Engineered Safety Features, functionally dependent components are controlled by a multi-loop controller. The system reliability of the Engineered Safety Features-Component Control System, and especially of the multi-loop controller, which is changed compared with the conventional controllers, is an important factor for the Probabilistic Safety Assessment in the nuclear field. To evaluate the multi-loop controller's failure rate for the k-out-of-m redundant system, the binomial process is used. In addition, the component functional allocation is performed to tolerate a single multi-loop controller failure without the loss of vital operation within the constraints of the piping and component configuration, and to ensure that mechanically redundant components remain functional. (author)

  12. Bayesian Independent Component Analysis

    DEFF Research Database (Denmark)

    Winther, Ole; Petersen, Kaare Brandt

    2007-01-01

    In this paper we present an empirical Bayesian framework for independent component analysis. The framework provides estimates of the sources, the mixing matrix and the noise parameters, and is flexible with respect to choice of source prior and the number of sources and sensors. Inside the engine...

  13. Insight into the heterogeneous adsorption of humic acid fluorescent components on multi-walled carbon nanotubes by excitation-emission matrix and parallel factor analysis.

    Science.gov (United States)

    Yang, Chenghu; Liu, Yangzhi; Cen, Qiulin; Zhu, Yaxian; Zhang, Yong

    2018-02-01

    The heterogeneous adsorption behavior of commercial humic acid (HA) on pristine and functionalized multi-walled carbon nanotubes (MWCNTs) was investigated by fluorescence excitation-emission matrix and parallel factor (EEM-PARAFAC) analysis. The kinetics, isotherms, thermodynamics and mechanisms of adsorption of HA fluorescent components onto MWCNTs were the focus of the present study. Three humic-like fluorescent components were distinguished, including one carboxylic-like fluorophore, C1 (λex/λem = (250, 310) nm/428 nm), and two phenolic-like fluorophores, C2 (λex/λem = (300, 460) nm/552 nm) and C3 (λex/λem = (270, 375) nm/520 nm). The Lagergren pseudo-second-order model can be used to describe the adsorption kinetics of the HA fluorescent components. In addition, both the Freundlich and Langmuir models can suitably describe the adsorption of the HA fluorescent components onto MWCNTs, with significantly high correlation coefficients (R2 > 0.94). A difference in adsorption affinity (Kd) and in the degree of nonlinear adsorption of the HA fluorescent components onto the MWCNTs was clearly observed. The adsorption mechanism suggested that the π-π electron donor-acceptor (EDA) interaction played an important role in the interaction between the HA fluorescent components and the three MWCNTs. Furthermore, the values of the thermodynamic parameters, including the Gibbs free energy change (ΔG°), enthalpy change (ΔH°) and entropy change (ΔS°), showed that the adsorption of the HA fluorescent components on MWCNTs was spontaneous and exothermic. Copyright © 2017 Elsevier Inc. All rights reserved.
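    The Langmuir and Freundlich fits named above can be sketched with scipy's curve_fit on synthetic data; the concentrations and parameter values below are placeholders, not results from the study.

```python
# Minimal sketch (not the study's code) of fitting the Langmuir and
# Freundlich isotherm models with scipy.optimize.curve_fit on synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q_max, k_l):
    # Langmuir isotherm: q = q_max * K_L * c / (1 + K_L * c)
    return q_max * k_l * c / (1.0 + k_l * c)

def freundlich(c, k_f, n):
    # Freundlich isotherm: q = K_F * c^(1/n)
    return k_f * c ** (1.0 / n)

c = np.linspace(0.1, 10.0, 25)                                 # equilibrium concentration
rng = np.random.default_rng(6)
q_obs = langmuir(c, 8.0, 0.6) + rng.normal(0.0, 0.1, c.size)   # noisy synthetic uptake

p_lang, _ = curve_fit(langmuir, c, q_obs)
p_freu, _ = curve_fit(freundlich, c, q_obs)
print("Langmuir   q_max, K_L:", np.round(p_lang, 2))
print("Freundlich K_F, n    :", np.round(p_freu, 2))
```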

  14. Multichannel Signals Reconstruction Based on Tunable Q-Factor Wavelet Transform-Morphological Component Analysis and Sparse Bayesian Iteration for Rotating Machines

    Directory of Open Access Journals (Sweden)

    Qing Li

    2018-04-01

    Full Text Available High-speed remote transmission and large-capacity data storage are difficult issues in signal acquisition for rotating machine condition monitoring. To address these concerns, a novel multichannel signal reconstruction approach based on tunable Q-factor wavelet transform-morphological component analysis (TQWT-MCA) and a sparse Bayesian iteration algorithm combined with a step-impulse dictionary is proposed under the framework of compressed sensing (CS). To begin with, to prevent the loss of periodical impulses and to effectively separate periodical impulses from external noise and additive interference components, the TQWT-MCA method is introduced to divide the raw vibration signal into a low-resonance component (LRC, i.e., periodical impulses) and a high-resonance component (HRC); thus, the periodical impulses are preserved effectively. Then, according to the amplitude range of the generated LRC, the step-impulse dictionary atom is designed to match the physical structure of the periodical impulses. Furthermore, the periodical impulses and the HRC are reconstructed by the sparse Bayesian iteration combined with the step-impulse dictionary, respectively; finally, the reconstructed raw signals are obtained by adding the LRC and HRC, and the fidelity of the reconstructed signals is tested by envelope spectrum and error analysis, respectively. In this work, the proposed algorithm is applied to a simulated signal and to engineering multichannel signals of a gearbox with multiple faults. Experimental results demonstrate that the proposed approach significantly improves the reconstruction accuracy compared with state-of-the-art methods such as non-convex Lq (q = 0.5) regularization, spatiotemporal sparse Bayesian learning (SSBL) and the L1-norm, etc. Additionally, the processing time, i.e., the speed of storage and transmission, has increased dramatically; more importantly, the fault characteristics of the gearbox with multiple faults are detected and saved, i.e., the

  15. Physics Metacognition Inventory Part II: Confirmatory factor analysis and Rasch analysis

    Science.gov (United States)

    Taasoobshirazi, Gita; Bailey, MarLynn; Farley, John

    2015-11-01

    The Physics Metacognition Inventory was developed to measure physics students' metacognition for problem solving. In one of our earlier studies, an exploratory factor analysis provided evidence of preliminary construct validity, revealing six components of students' metacognition when solving physics problems including knowledge of cognition, planning, monitoring, evaluation, debugging, and information management. The college students' scores on the inventory were found to be reliable and related to students' physics motivation and physics grade. However, the results of the exploratory factor analysis indicated that the questionnaire could be revised to improve its construct validity. The goal of this study was to revise the questionnaire and establish its construct validity through a confirmatory factor analysis. In addition, a Rasch analysis was applied to the data to better understand the psychometric properties of the inventory and to further evaluate the construct validity. Results indicated that the final, revised inventory is a valid, reliable, and efficient tool for assessing student metacognition for physics problem solving.

  16. Research on Air Quality Evaluation based on Principal Component Analysis

    Science.gov (United States)

    Wang, Xing; Wang, Zilin; Guo, Min; Chen, Wei; Zhang, Huan

    2018-01-01

    Economic growth has led to environmental capacity decline and the deterioration of air quality. Air quality evaluation as a fundamental of environmental monitoring and air pollution control has become increasingly important. Based on the principal component analysis (PCA), this paper evaluates the air quality of a large city in Beijing-Tianjin-Hebei Area in recent 10 years and identifies influencing factors, in order to provide reference to air quality management and air pollution control.

  17. Key components of financial-analysis education for clinical nurses.

    Science.gov (United States)

    Lim, Ji Young; Noh, Wonjung

    2015-09-01

    In this study, we identified key components of financial-analysis education for clinical nurses. We used a literature review, focus group discussions, and a content validity index survey to develop the key components of financial-analysis education. First, a wide range of references were reviewed, and 55 financial-analysis education components were gathered. Second, two focus group discussions were performed; the participants were 11 nurses who had worked for more than 3 years in a hospital, and nine components were agreed upon. Third, 12 professionals, including professors, nurse executives, nurse managers, and an accountant, participated in the content validity index survey. Finally, six key components of financial-analysis education were selected. These key components were as follows: understanding the need for financial analysis, introduction to financial analysis, reading and implementing balance sheets, reading and implementing income statements, understanding the concepts of financial ratios, and interpretation and practice of financial ratio analysis. The results of this study will be used to develop an education program to increase financial-management competency among clinical nurses. © 2015 Wiley Publishing Asia Pty Ltd.

  18. Euler principal component analysis

    NARCIS (Netherlands)

    Liwicki, Stephan; Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    Principal Component Analysis (PCA) is perhaps the most prominent learning tool for dimensionality reduction in pattern recognition and computer vision. However, the ℓ2-norm employed by standard PCA is not robust to outliers. In this paper, we propose a kernel PCA method for fast and robust PCA,

  19. Quantitative descriptive analysis and principal component analysis for sensory characterization of Indian milk product cham-cham.

    Science.gov (United States)

    Puri, Ritika; Khamrui, Kaushik; Khetra, Yogesh; Malhotra, Ravinder; Devraja, H C

    2016-02-01

    Promising development and expansion in the market of cham-cham, a traditional Indian dairy product, is expected in the coming future with the organized production of this milk product by some large dairies. The objective of this study was to document the extent of variation in the sensory properties of market samples of cham-cham collected from four different locations known for their excellence in cham-cham production, and to find out the attributes that govern much of the variation in sensory scores of this product using quantitative descriptive analysis (QDA) and principal component analysis (PCA). QDA revealed significant differences in the sensory attributes of cham-cham among the market samples. PCA identified four significant principal components that accounted for 72.4 % of the variation in the sensory data. Factor scores of each of the four principal components, which primarily correspond to sweetness/shape/dryness of interior, surface appearance/surface dryness, rancid and firmness attributes, specify the location of each market sample along each of the axes in 3-D graphs. These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring the attributes of cham-cham that contribute most to its sensory acceptability.

  20. Identifying the Component Structure of Satisfaction Scales by Nonlinear Principal Components Analysis

    NARCIS (Netherlands)

    Manisera, M.; Kooij, A.J. van der; Dusseldorp, E.

    2010-01-01

    The component structure of 14 Likert-type items measuring different aspects of job satisfaction was investigated using nonlinear Principal Components Analysis (NLPCA). NLPCA allows for analyzing these items at an ordinal or interval level. The participants were 2066 workers from five types of social

  1. Constrained principal component analysis and related techniques

    CERN Document Server

    Takane, Yoshio

    2013-01-01

    In multivariate data analysis, regression techniques predict one set of variables from another while principal component analysis (PCA) finds a subspace of minimal dimensionality that captures the largest variability in the data. How can regression analysis and PCA be combined in a beneficial way? Why and when is it a good idea to combine them? What kind of benefits are we getting from them? Addressing these questions, Constrained Principal Component Analysis and Related Techniques shows how constrained PCA (CPCA) offers a unified framework for these approaches.The book begins with four concre

  2. Experimental comparison between total calibration factors and components calibration factors of reference dosemeters used in secondary standard laboratory dosemeters

    International Nuclear Information System (INIS)

    Silva, T.A. da.

    1981-06-01

    A quantitative comparison of component calibration factors with the corresponding overall calibration factor was used to evaluate the adopted component calibration procedure in regard to parasitic elements. Judgement of significance is based upon the experimental uncertainty of a well established procedure for determination of the overall calibration factor. The experimental results obtained for different ionization chambers and different electrometers demonstrate that for one type of electrometer the parasitic elements have no influence on its sensitivity considering the experimental uncertainty of the calibration procedures. In this case the adopted procedure for determination of component calibration factors is considered to be equivalent to the procedure of determination of the overall calibration factor and thus might be used as a strong quality control measure in routine calibration. (Author) [pt

  3. A two-component generalized extreme value distribution for precipitation frequency analysis

    Czech Academy of Sciences Publication Activity Database

    Rulfová, Zuzana; Buishand, A.; Roth, M.; Kyselý, Jan

    2016-01-01

    Roč. 534, March (2016), s. 659-668 ISSN 0022-1694 R&D Projects: GA ČR(CZ) GA14-18675S Institutional support: RVO:68378289 Keywords : precipitation extremes * two-component extreme value distribution * regional frequency analysis * convective precipitation * stratiform precipitation * Central Europe Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 3.483, year: 2016 http://www.sciencedirect.com/science/article/pii/S0022169416000500

  4. Analysis Method for Integrating Components of Product

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jun Ho [Inzest Co. Ltd, Seoul (Korea, Republic of); Lee, Kun Sang [Kookmin Univ., Seoul (Korea, Republic of)

    2017-04-15

    This paper presents some of the methods used to incorporate the parts constituting a product. A new relation function concept and its structure are introduced to analyze the relationships of component parts. This relation function has three types of information, which can be used to establish a relation function structure. The relation function structure of the analysis criteria was established to analyze and present the data. The priority components determined by the analysis criteria can be integrated. The analysis criteria were divided based on their number and orientation, as well as their direct or indirect characteristic feature. This paper presents a design algorithm for component integration. This algorithm was applied to actual products, and the components inside the product were integrated. Therefore, the proposed algorithm was used to conduct research to improve the brake discs for bicycles. As a result, an improved product similar to the related function structure was actually created.

  5. Analysis Method for Integrating Components of Product

    International Nuclear Information System (INIS)

    Choi, Jun Ho; Lee, Kun Sang

    2017-01-01

    This paper presents some of the methods used to incorporate the parts constituting a product. A new relation function concept and its structure are introduced to analyze the relationships of component parts. This relation function has three types of information, which can be used to establish a relation function structure. The relation function structure of the analysis criteria was established to analyze and present the data. The priority components determined by the analysis criteria can be integrated. The analysis criteria were divided based on their number and orientation, as well as their direct or indirect characteristic feature. This paper presents a design algorithm for component integration. This algorithm was applied to actual products, and the components inside the product were integrated. Therefore, the proposed algorithm was used to conduct research to improve the brake discs for bicycles. As a result, an improved product similar to the related function structure was actually created.

  6. Physicochemical properties of different corn varieties by principal components analysis and cluster analysis

    International Nuclear Information System (INIS)

    Zeng, J.; Li, G.; Sun, J.

    2013-01-01

    Principal components analysis and cluster analysis were used to investigate the properties of different corn varieties. The chemical compositions and some properties of corn flour processed by dry milling were determined. The results showed that the chemical compositions and physicochemical properties were significantly different among the twenty-six corn varieties. The quality of corn flour was associated with five principal components from the principal component analysis, and the contribution rate of the starch pasting properties was important, accounting for 48.90%. The twenty-six corn varieties could be classified into four groups by cluster analysis. The consistency between the principal components analysis and the cluster analysis indicated that multivariate analyses are feasible in the study of corn variety properties. (author)

  7. Principal components analysis in clinical studies.

    Science.gov (United States)

    Zhang, Zhongheng; Castelló, Adela

    2017-09-01

    In multivariate analysis, independent variables are usually correlated with each other, which can introduce multicollinearity into regression models. One approach to solving this problem is to apply principal components analysis (PCA) to these variables. This method uses an orthogonal transformation to represent sets of potentially correlated variables with principal components (PC) that are linearly uncorrelated. PCs are ordered so that the first PC has the largest possible variance, and only some components are selected to represent the correlated variables. As a result, the dimension of the variable space is reduced. This tutorial illustrates how to perform PCA in the R environment; the example is a simulated dataset in which two PCs are responsible for the majority of the variance in the data. Furthermore, the visualization of PCA is highlighted.
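    The approach described in this tutorial-style record can be mirrored outside R as principal component regression; the scikit-learn sketch below is an illustration on simulated, collinear data, not the tutorial's own code.

```python
# Sketch of principal component regression: replace correlated predictors
# with a few principal components before fitting the regression model.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
latent = rng.normal(size=(200, 2))                        # two true underlying dimensions
X = np.hstack([latent + 0.05 * rng.normal(size=(200, 2)) for _ in range(3)])   # 6 collinear predictors
y = latent @ np.array([1.5, -2.0]) + 0.1 * rng.normal(size=200)

pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
pcr.fit(X, y)
print(round(pcr.score(X, y), 3))                          # R^2 using only the two leading components
```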

  8. Fast and accurate methods of independent component analysis: A survey

    Czech Academy of Sciences Publication Activity Database

    Tichavský, Petr; Koldovský, Zbyněk

    2011-01-01

    Roč. 47, č. 3 (2011), s. 426-438 ISSN 0023-5954 R&D Projects: GA MŠk 1M0572; GA ČR GA102/09/1278 Institutional research plan: CEZ:AV0Z10750506 Keywords : Blind source separation * artifact removal * electroencephalogram * audio signal processing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.454, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/tichavsky-fast and accurate methods of independent component analysis a survey.pdf

  9. Thermal Analysis of Fermilab Mu2e Beamstop and Structural Analysis of Beamline Components

    Energy Technology Data Exchange (ETDEWEB)

    Narug, Colin S. [Northern Illinois U.

    2018-01-01

    The Mu2e project at Fermi National Accelerator Laboratory (Fermilab) aims to observe the unique conversion of muons to electrons. The success or failure of the experiment to observe this conversion will further the understanding of the standard model of physics. Using the particle accelerator, protons will be accelerated and sent to the Mu2e experiment, which will separate the muons from the beam. The muons will then be observed to determine their momentum and the particle interactions that occur. At the end of the Detector Solenoid, the internal components will need to absorb the remaining particles of the experiment using polymer absorbers. Because the internal structure of the beamline is in a vacuum, the heat transfer mechanisms that can disperse the energy generated by particle absorption are limited to conduction and radiation. To determine the extent to which the absorbers will heat up over one year of operation, a transient thermal finite element analysis has been performed on the Muon Beam Stop. The levels of energy absorption were adjusted to determine the thermal limit for the current design. Structural finite element analysis has also been performed to determine the safety factors of the Axial Coupler, which connects and moves segments of the beamline. The safety factor of the trunnion of the Instrument Feed Through Bulk Head has also been determined for when it is supporting the Muon Beam Stop. The results of the analysis further refine the design of the beamline components prior to testing, fabrication, and installation.

  10. A 'cost-effective' probabilistic model to select the dominant factors affecting the variation of the component failure rate

    International Nuclear Information System (INIS)

    Kirchsteiger, C.

    1992-11-01

    Within the framework of a Probabilistic Safety Assessment (PSA), the component failure rate λ is a key parameter in the sense that the study of its behavior gives the essential information for estimating the current values as well as the trends in the failure probabilities of interest. Since there is an infinite variety of possible underlying factors which might cause changes in λ (e.g. operating time, maintenance practices, component environment, etc.), an 'importance ranking' process of these factors is considered most desirable to prioritize research efforts. To be 'cost-effective', the modeling effort must be small, i.e. essentially involving no estimation of additional parameters other than λ. In this paper, using a multivariate data analysis technique and various statistical measures, such a 'cost-effective' screening process has been developed. Dominant factors affecting the failure rate of any components of interest can easily be identified and the appropriateness of current research plans (e.g. on the necessity of performing aging studies) can be validated. (author)

  11. Analysis of Factors Influencing Labour Supplied to Non-Farm Sub ...

    African Journals Online (AJOL)

    acer

    Regression analysis reveals that educational level had a negative coefficient, while occupation had a positive coefficient ... component of the rural economy, its role in ... economic factors influencing labour ...

  12. Incremental Tensor Principal Component Analysis for Handwritten Digit Recognition

    Directory of Open Access Journals (Sweden)

    Chang Liu

    2014-01-01

    Full Text Available To overcome the shortcomings of traditional dimensionality reduction algorithms, an incremental tensor principal component analysis (ITPCA) algorithm based on an updated-SVD technique is proposed in this paper. This paper proves the relationship between PCA, 2DPCA, MPCA, and the graph embedding framework theoretically and derives the incremental learning procedure for adding single samples and multiple samples in detail. The experiments on handwritten digit recognition have demonstrated that ITPCA achieves better recognition performance than vector-based principal component analysis (PCA), incremental principal component analysis (IPCA), and multilinear principal component analysis (MPCA) algorithms. At the same time, ITPCA also has lower time and space complexity.
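
    As a runnable point of reference, the sketch below applies scikit-learn's standard (vector-based) IncrementalPCA to the handwritten-digits dataset; it illustrates the incremental-update idea the abstract builds on, not the tensor-based ITPCA algorithm itself.

```python
# Sketch: incremental, vector-based PCA on handwritten digits (not the tensor ITPCA).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import IncrementalPCA

X, y = load_digits(return_X_y=True)      # 1797 samples, 64 features (8x8 images)

ipca = IncrementalPCA(n_components=16)
for batch in np.array_split(X, 9):       # feed the data in chunks, as new samples arrive
    ipca.partial_fit(batch)

Z = ipca.transform(X)                    # low-dimensional features for a downstream classifier
print(Z.shape, ipca.explained_variance_ratio_.sum())
```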

  13. How Many Separable Sources? Model Selection In Independent Components Analysis

    Science.gov (United States)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
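
    A minimal sketch of the underlying issue, assuming synthetic sources: FastICA is run on a mixture containing both non-Gaussian and Gaussian sources, and excess kurtosis is used to flag the near-Gaussian components that ICA alone cannot meaningfully separate. The paper's mixed ICA/PCA model and its cross-validated model selection are not reproduced here.

```python
# Flag the near-Gaussian part of an ICA solution via excess kurtosis (illustration only).
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n = 5000
S = np.column_stack([
    rng.laplace(size=n),                 # non-Gaussian source
    np.sign(rng.standard_normal(n)),     # non-Gaussian (binary) source
    rng.standard_normal(n),              # Gaussian source
    rng.standard_normal(n),              # Gaussian source
])
A = rng.standard_normal((4, 4))
X = S @ A.T                              # observed mixtures

S_hat = FastICA(n_components=4, random_state=0, max_iter=1000).fit_transform(X)

k = kurtosis(S_hat, axis=0)              # excess kurtosis is ~0 for Gaussian components
gaussian_like = np.abs(k) < 0.2
print("estimated Gaussian subspace dimension:", int(gaussian_like.sum()))
```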

  14. Structured Performance Analysis for Component Based Systems

    OpenAIRE

    Salmi , N.; Moreaux , Patrice; Ioualalen , M.

    2012-01-01

    International audience; The Component Based System (CBS) paradigm is now largely used to design software systems. In addition, performance and behavioural analysis remains a required step for the design and the construction of efficient systems. This is especially the case of CBS, which involve interconnected components running concurrent processes. This paper proposes a compositional method for modeling and structured performance analysis of CBS. Modeling is based on Stochastic Well-formed...

  15. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.

  16. Analysis of tangible and intangible hotel service quality components

    Directory of Open Access Journals (Sweden)

    Marić Dražen

    2016-01-01

    Full Text Available The issue of service quality is one of the essential areas of marketing theory and practice, as high quality can lead to customer satisfaction and loyalty, i.e. successful business results. It is vital for any company, especially in services sector, to understand and grasp the consumers' expectations and perceptions pertaining to the broad range of factors affecting consumers' evaluation of services, their satisfaction and loyalty. Hospitality is a service sector where the significance of these elements grows exponentially. The aim of this study is to identify the significance of individual quality components in hospitality industry. The questionnaire used for gathering data comprised 19 tangible and 14 intangible attributes of service quality, which the respondents rated on a five-degree scale. The analysis also identified the factorial structure of the tangible and intangible elements of hotel service. The paper aims to contribute to the existing literature by pointing to the significance of tangible and intangible components of service quality. A very small number of studies conducted in hospitality and hotel management identify the sub-factors within these two dimensions of service quality. The paper also provides useful managerial implications. The obtained results help managers in hospitality to establish the service offers that consumers find the most important when choosing a given hotel.

  17. A Novel Double Cluster and Principal Component Analysis-Based Optimization Method for the Orbit Design of Earth Observation Satellites

    Directory of Open Access Journals (Sweden)

    Yunfeng Dong

    2017-01-01

    Full Text Available The weighted sum and genetic algorithm-based hybrid method (WSGA-based HM), which has been applied to multiobjective orbit optimizations, is negatively influenced by human factors through the artificial choice of the weight coefficients in the weighted sum method and by the slow convergence of GA. To address these two problems, a cluster and principal component analysis-based optimization method (CPC-based OM) is proposed, in which many candidate orbits are gradually and randomly generated until the optimal orbit is obtained using a data mining method, that is, cluster analysis based on principal components. Then, a second cluster analysis of the orbital elements is introduced into CPC-based OM to improve the convergence, developing a novel double cluster and principal component analysis-based optimization method (DCPC-based OM). In DCPC-based OM, the cluster analysis based on principal components has the advantage of reducing human influence, and the cluster analysis based on the six orbital elements can reduce the search space to effectively accelerate convergence. The test results from a multiobjective numerical benchmark function and the orbit design results of an Earth observation satellite show that DCPC-based OM converges more efficiently than WSGA-based HM, and that DCPC-based OM, to some degree, reduces the influence of human factors present in WSGA-based HM.
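
    A conceptual sketch of the clustering-on-principal-components step, with placeholder objective functions instead of real orbit propagation: candidate element sets are scored, the objective matrix is reduced with PCA, and k-means keeps the best-scoring cluster as the seed pool for the next sampling round.

```python
# Cluster analysis on principal components of the objective values (toy objectives only).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
candidates = rng.uniform(0.0, 1.0, size=(500, 6))   # 6 normalized "orbital elements"

# toy multi-objective scores (lower is better); real use would call an orbit simulator
f1 = np.linalg.norm(candidates - 0.3, axis=1)
f2 = np.abs(candidates[:, 0] - candidates[:, 5])
objectives = np.column_stack([f1, f2])

scores_pc = PCA(n_components=2).fit_transform(objectives)
labels = KMeans(n_clusters=5, n_init=10, random_state=1).fit_predict(scores_pc)

best_cluster = min(range(5), key=lambda c: objectives[labels == c].sum(axis=1).mean())
seed_pool = candidates[labels == best_cluster]      # basis for the next sampling round
print(len(seed_pool), "candidates retained")
```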

  18. Multilevel sparse functional principal component analysis.

    Science.gov (United States)

    Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S

    2014-01-29

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations the proposed method is able to discover dominating modes of variations and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.

  19. Signal-dependent independent component analysis by tunable mother wavelets

    International Nuclear Information System (INIS)

    Seo, Kyung Ho

    2006-02-01

    The objective of this study is to improve standard independent component analysis when applied to real-world signals. Independent component analysis starts from the assumption that signals from different physical sources are statistically independent. But real-world signals such as EEG, ECG, MEG, and fMRI signals are not perfectly statistically independent. By definition, standard independent component analysis algorithms are not able to estimate statistically dependent sources, that is, when the assumption of independence does not hold. Therefore, some preprocessing stage is needed before independent component analysis. This paper starts from the simple intuition that source signals wavelet-transformed with a 'well-tuned' mother wavelet will be sufficiently simplified, so that the source separation will show better results. The tuning process between the source signal and a tunable mother wavelet was executed by the correlation coefficient method. The gamma component of a raw EEG signal was set as the target signal, and the wavelet transform was executed with the tuned mother wavelet and with standard mother wavelets. Simulation results obtained with these wavelets are shown.
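
    A small illustration of the correlation-based tuning idea on a synthetic EEG-like trace: the centre frequency of a Morlet-type mother wavelet is swept and the value whose wavelet response correlates best with the gamma-band target is kept. The paper's exact tuning procedure may differ; this only captures the intuition.

```python
# Correlation-based tuning of a Morlet-type mother wavelet (synthetic signal, illustration only).
import numpy as np

rng = np.random.default_rng(14)
fs, dur = 250, 4.0
t = np.arange(0, dur, 1 / fs)
gamma = np.sin(2 * np.pi * 40 * t) * (t > 1.5) * (t < 2.5)          # a gamma burst
eeg = gamma + 0.8 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)

def morlet_response(signal, f0, width=6.0):
    # magnitude of the convolution with a complex Morlet wavelet of centre frequency f0
    st = width / (2 * np.pi * f0)
    tw = np.arange(-3 * st, 3 * st, 1 / fs)
    wavelet = np.exp(2j * np.pi * f0 * tw) * np.exp(-tw**2 / (2 * st**2))
    return np.abs(np.convolve(signal, wavelet, mode="same"))

target = np.abs(gamma)
best = max(np.arange(20.0, 60.0, 2.0),
           key=lambda f0: np.corrcoef(morlet_response(eeg, f0), target)[0, 1])
print("tuned centre frequency:", best, "Hz")
```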

  20. Reliability Analysis of Fatigue Failure of Cast Components for Wind Turbines

    Directory of Open Access Journals (Sweden)

    Hesam Mirzaei Rafsanjani

    2015-04-01

    Full Text Available Fatigue failure is one of the main failure modes for wind turbine drivetrain components made of cast iron. The wind turbine drivetrain consists of a variety of heavily loaded components, like the main shaft, the main bearings, the gearbox and the generator. The failure of each component will lead to substantial economic losses such as cost of lost energy production and cost of repairs. During the design lifetime, the drivetrain components are exposed to variable loads from winds and waves and other sources of loads that are uncertain and have to be modeled as stochastic variables. The types of loads are different for offshore and onshore wind turbines. Moreover, uncertainties about the fatigue strength play an important role in modeling and assessment of the reliability of the components. In this paper, a generic stochastic model for fatigue failure of cast iron components based on fatigue test data and a limit state equation for fatigue failure based on the SN-curve approach and Miner’s rule is presented. The statistical analysis of the fatigue data is performed using the Maximum Likelihood Method which also gives an estimate of the statistical uncertainties. Finally, illustrative examples are presented with reliability analyses depending on various stochastic models and partial safety factors.
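
    A sketch of the ingredients named above, on synthetic data: a Basquin-type SN curve log10(N) = log10(K) − m·log10(S) is fitted by maximum likelihood assuming lognormal scatter, and Miner's rule is then evaluated by Monte Carlo for an invented load spectrum. None of the numbers correspond to the paper's cast-iron data.

```python
# Maximum-likelihood SN-curve fit plus Miner's-rule reliability, all with synthetic numbers.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)

# synthetic fatigue test data (stress range S in MPa, cycles to failure N)
S_test = rng.uniform(80, 300, 60)
true_logK, true_m, true_sigma = 12.0, 3.0, 0.2
logN_test = true_logK - true_m * np.log10(S_test) + rng.normal(0, true_sigma, 60)

def neg_log_lik(theta):
    logK, m, log_sigma = theta
    mu = logK - m * np.log10(S_test)
    return -norm.logpdf(logN_test, mu, np.exp(log_sigma)).sum()

res = minimize(neg_log_lik, x0=[10.0, 2.0, np.log(0.5)], method="Nelder-Mead")
logK_hat, m_hat, sigma_hat = res.x[0], res.x[1], np.exp(res.x[2])

# Miner's rule for a toy load spectrum, lumping the SN scatter into the intercept
spectrum_S = np.array([100.0, 150.0, 200.0])     # stress ranges [MPa]
spectrum_n = np.array([2e4, 2e3, 2e2])           # cycles per year at each range
years = 20
logK_mc = rng.normal(logK_hat, sigma_hat, 100_000)
N_allow = 10 ** (logK_mc[:, None] - m_hat * np.log10(spectrum_S))
damage = years * (spectrum_n / N_allow).sum(axis=1)
print("P(fatigue failure in 20 years) ~", (damage > 1.0).mean())
```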

  1. Proof of fatigue strength of nuclear components part II: Numerical fatigue analysis for transient stratification loading considering environmental effects

    International Nuclear Information System (INIS)

    Krätschmer, D.; Roos, E.; Schuler, X.; Herter, K.-H.

    2012-01-01

    For the construction, design and operation of nuclear components and systems the appropriate technical codes and standards provide detailed analysis procedures which guarantee a reliable behaviour of the structural components throughout the specified lifetime. Especially for cyclic stress evaluation the different codes and standards provide different fatigue analysis procedures to be performed considering the various mechanical and thermal loading histories and geometric complexities of the components. To consider effects of light water reactor coolant environments, new design curves included in report NUREG/CR-6909 for austenitic stainless steels and for low alloy steels have been presented. For the usage of these new design curves an environmental fatigue correction factor for incorporating environmental effects has to be calculated and used. The application of this environmental correction factor to a fatigue analysis of a nozzle with transient stratification loads, derived by in-service monitoring, has been performed. The results are used for comparison with calculated usage factors based on design curves that do not take environmental effects particularly into account. - Highlights: ► We model a nozzle for fatigue analysis under mechanical and thermal loading conditions. ► A simplified as well as a general elastic–plastic fatigue analysis considering environmental effects is performed. ► The influence of different factors in calculating the environmental factor Fen is shown. ► The presented numerical evaluation methodology allows the consideration of all relevant parameters to assess lifetime.

  2. Problems of stress analysis of fuelling machine head components

    International Nuclear Information System (INIS)

    Mathur, D.D.

    1975-01-01

    The problems of stress analysis of fuelling machine head components are discussed. To fulfil the functional requirements, the components are required to have certain shapes for which the stress problems cannot be matched to a catalogue of pre-determined solutions. Complex systems of loading due to hydrostatic pressure, weight, moments and temperature gradients, coupled with the intricate shapes of the components, make it difficult to arrive at satisfactory solutions in several areas. In particular, the analysis requirements of the magazine housing, end cover, gravloc clamps and centre support are highlighted. An experimental stress analysis programme together with a theoretical finite element analysis is perhaps the answer. (author)

  3. 7 CFR 1131.53 - Announcement of class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Announcement of class prices, component prices, and advanced pricing factors. 1131.53 Section 1131.53 Agriculture Regulations of the Department of Agriculture... class prices, component prices, and advanced pricing factors. See § 1000.53. ...

  4. 7 CFR 1005.53 - Announcement of class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Announcement of class prices, component prices, and advanced pricing factors. 1005.53 Section 1005.53 Agriculture Regulations of the Department of Agriculture... class prices, component prices, and advanced pricing factors. See § 1000.53. ...

  5. 7 CFR 1126.53 - Announcement of class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Announcement of class prices, component prices, and advanced pricing factors. 1126.53 Section 1126.53 Agriculture Regulations of the Department of Agriculture... class prices, component prices, and advanced pricing factors. See § 1000.53. ...

  6. 7 CFR 1032.53 - Announcement of class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Announcement of class prices, component prices, and advanced pricing factors. 1032.53 Section 1032.53 Agriculture Regulations of the Department of Agriculture... class prices, component prices, and advanced pricing factors. See § 1000.53. ...

  7. 7 CFR 1030.53 - Announcement of class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Announcement of class prices, component prices, and advanced pricing factors. 1030.53 Section 1030.53 Agriculture Regulations of the Department of Agriculture... of class prices, component prices, and advanced pricing factors. See § 1000.53. ...

  8. 7 CFR 1033.53 - Announcement of class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Announcement of class prices, component prices, and advanced pricing factors. 1033.53 Section 1033.53 Agriculture Regulations of the Department of Agriculture... class prices, component prices, and advanced pricing factors. See § 1000.53. ...

  9. 7 CFR 1001.53 - Announcement of class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Announcement of class prices, component prices, and advanced pricing factors. 1001.53 Section 1001.53 Agriculture Regulations of the Department of Agriculture... class prices, component prices, and advanced pricing factors. See § 1000.53. ...

  10. 7 CFR 1007.53 - Announcement of class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Announcement of class prices, component prices, and advanced pricing factors. 1007.53 Section 1007.53 Agriculture Regulations of the Department of Agriculture... class prices, component prices, and advanced pricing factors. See § 1000.53. ...

  11. 7 CFR 1006.53 - Announcement of class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Announcement of class prices, component prices, and advanced pricing factors. 1006.53 Section 1006.53 Agriculture Regulations of the Department of Agriculture... class prices, component prices, and advanced pricing factors. See § 1000.53. ...

  12. The integration of expert-defined importance factors to enrich Bayesian Fault Tree Analysis

    International Nuclear Information System (INIS)

    Darwish, Molham; Almouahed, Shaban; Lamotte, Florent de

    2017-01-01

    This paper proposes an analysis of a hybrid Bayesian-Importance model for system designers to improve the quality of services related to Active Assisted Living Systems. The proposed model is based on two factors: a failure probability measure of the different service components and an expert-defined degree of importance that each component holds for the success of the corresponding service. The proposed approach advocates the integration of expert-defined importance factors to enrich the Bayesian Fault Tree Analysis (FTA) approach. The evaluation of the proposed approach is conducted using the Fault Tree Analysis formalism, where the undesired state of a system is analyzed using Boolean logic mechanisms to combine a series of lower-level events.
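
    A minimal sketch of weighting a fault tree with expert importance factors; the way the weights enter (scaling each basic event's effective probability before the gate logic) is an illustrative assumption, not the combination rule of the paper, and the component names are made up.

```python
# Toy fault tree with expert importance weights (illustrative combination rule).
from itertools import product

# basic events: component -> (failure probability, expert importance in [0, 1])
basic = {
    "sensor":    (0.02, 0.9),
    "gateway":   (0.01, 0.6),
    "ui_client": (0.05, 0.3),
}

def weighted_p(name):
    p, w = basic[name]
    return p * w          # assumption: importance scales the effective contribution

# the service fails if the sensor fails OR both gateway and ui_client fail
def top_event_prob():
    total = 0.0
    names = list(basic)
    for states in product([0, 1], repeat=len(names)):   # enumerate basic-event outcomes
        s = dict(zip(names, states))
        fails = s["sensor"] or (s["gateway"] and s["ui_client"])
        if fails:
            pr = 1.0
            for name, x in s.items():
                pe = weighted_p(name)
                pr *= pe if x else (1.0 - pe)
            total += pr
    return total

print("weighted top-event probability:", round(top_event_prob(), 5))
```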

  13. The Socioeconomic Factors and the Indigenous Component of Tuberculosis in Amazonas

    Science.gov (United States)

    2016-01-01

    Despite the availability of tuberculosis prevention and control services throughout Amazonas, high rates of morbidity and mortality from tuberculosis remain in the region. Knowledge of the social determinants of tuberculosis in Amazonas is important for the establishment of public policies and the planning of effective preventive and control measures for the disease. The objective was to analyze the relationship of the spatial distribution of the incidence of tuberculosis in municipalities and regions of Amazonas to socioeconomic factors and the indigenous tuberculosis component, from 2007 to 2013. An ecological study was conducted based on secondary data from the epidemiological surveillance of tuberculosis. A linear regression model was used to analyze the relationship of the annual incidence of tuberculosis to socioeconomic factors, performance indicators of health services, and the indigenous tuberculosis component. The distribution of the incidence of tuberculosis in the municipalities of Amazonas was positively associated with the Gini index and the population attributable fraction of tuberculosis in the indigenous peoples, but negatively associated with the proportion of the poor and the unemployment rate. The spatial distribution of tuberculosis in the different regions of Amazonas was heterogeneous and closely related with the socioeconomic factors and indigenous component of tuberculosis. PMID:27362428

  14. 7 CFR 1124.53 - Announcement of class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Announcement of class prices, component prices, and advanced pricing factors. 1124.53 Section 1124.53 Agriculture Regulations of the Department of Agriculture... Announcement of class prices, component prices, and advanced pricing factors. See § 1000.53. ...

  15. Condition monitoring with Mean field independent components analysis

    DEFF Research Database (Denmark)

    Pontoppidan, Niels Henrik; Sigurdsson, Sigurdur; Larsen, Jan

    2005-01-01

    We discuss condition monitoring based on mean field independent components analysis of acoustic emission energy signals. Within this framework it is possible to formulate a generative model that explains the sources, their mixing and also the noise statistics of the observed signals. By using...... a novelty approach we may detect unseen faulty signals as indeed faulty with high precision, even though the model learns only from normal signals. This is done by evaluating the likelihood that the model generated the signals and adapting a simple threshold for decision. Acoustic emission energy signals...... from a large diesel engine is used to demonstrate this approach. The results show that mean field independent components analysis gives a better detection of fault compared to principal components analysis, while at the same time selecting a more compact model...

  16. Item-level factor analysis of the Self-Efficacy Scale.

    Science.gov (United States)

    Bunketorp Käll, Lina

    2014-03-01

    This study explores the internal structure of the Self-Efficacy Scale (SES) using item response analysis. The SES was previously translated into Swedish and modified to encompass all types of pain, not exclusively back pain. Data on perceived self-efficacy in 47 patients with subacute whiplash-associated disorders were derived from a previously conducted randomized-controlled trial. The item-level factor analysis was carried out using a six-step procedure. To further study the item inter-relationships and to determine the underlying structure empirically, the 20 items of the SES were also subjected to principal component analysis with varimax rotation. The analyses showed two underlying factors, named 'social activities' and 'physical activities', with seven items loading on each factor. The remaining six items of the SES appeared to measure somewhat different constructs and need to be analysed further.
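
    A short sketch of a two-factor solution with varimax rotation on simulated Likert-type items (the SES data themselves are not reproduced here), using scikit-learn's FactorAnalysis:

```python
# Two-factor model with varimax rotation on simulated "social" and "physical" efficacy items.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
n = 200
social = rng.normal(size=n)            # latent "social activities" efficacy
physical = rng.normal(size=n)          # latent "physical activities" efficacy

items = np.column_stack(
    [social + 0.5 * rng.normal(size=n) for _ in range(7)] +
    [physical + 0.5 * rng.normal(size=n) for _ in range(7)]
)

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
loadings = fa.components_.T            # items x factors
print(np.round(loadings, 2))           # items 1-7 load on one factor, items 8-14 on the other
```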

  17. Abstract interfaces for data analysis - component architecture for data analysis tools

    International Nuclear Information System (INIS)

    Barrand, G.; Binko, P.; Doenszelmann, M.; Pfeiffer, A.; Johnson, A.

    2001-01-01

    The fast turnover of software technologies, in particular in the domain of interactivity (covering user interface and visualisation), makes it difficult for a small group of people to produce complete and polished software-tools before the underlying technologies make them obsolete. At the HepVis'99 workshop, a working group has been formed to improve the production of software tools for data analysis in HENP. Beside promoting a distributed development organisation, one goal of the group is to systematically design a set of abstract interfaces based on using modern OO analysis and OO design techniques. An initial domain analysis has come up with several categories (components) found in typical data analysis tools: Histograms, Ntuples, Functions, Vectors, Fitter, Plotter, analyzer and Controller. Special emphasis was put on reducing the couplings between the categories to a minimum, thus optimising re-use and maintainability of any component individually. The interfaces have been defined in Java and C++ and implementations exist in the form of libraries and tools using C++ (Anaphe/Lizard, OpenScientist) and Java (Java Analysis Studio). A special implementation aims at accessing the Java libraries (through their Abstract Interfaces) from C++. The authors give an overview of the architecture and design of the various components for data analysis as discussed in AIDA

  18. Progress Towards Improved Analysis of TES X-ray Data Using Principal Component Analysis

    Science.gov (United States)

    Busch, S. E.; Adams, J. S.; Bandler, S. R.; Chervenak, J. A.; Eckart, M. E.; Finkbeiner, F. M.; Fixsen, D. J.; Kelley, R. L.; Kilbourne, C. A.; Lee, S.-J.

    2015-01-01

    The traditional method of applying a digital optimal filter to measure X-ray pulses from transition-edge sensor (TES) devices does not achieve the best energy resolution when the signals have a highly non-linear response to energy, or the noise is non-stationary during the pulse. We present an implementation of a method to analyze X-ray data from TESs, which is based upon principal component analysis (PCA). Our method separates the X-ray signal pulse into orthogonal components that have the largest variance. We typically recover pulse height, arrival time, differences in pulse shape, and the variation of pulse height with detector temperature. These components can then be combined to form a representation of pulse energy. An added value of this method is that by reporting information on more descriptive parameters (as opposed to a single number representing energy), we generate a much more complete picture of the pulse received. Here we report on progress in developing this technique for future implementation on X-ray telescopes. We used an 55Fe source to characterize Mo/Au TESs. On the same dataset, the PCA method recovers a spectral resolution that is better by a factor of two than achievable with digital optimal filters.
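
    A sketch of the PCA step on simulated pulse records; double-exponential pulses with amplitude spread, arrival jitter and white noise stand in for real TES traces.

```python
# PCA on simulated detector pulses: the leading scores describe height, timing and shape.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
t = np.arange(1024)

def pulse(amp, t0):
    s = np.zeros_like(t, dtype=float)
    m = t >= t0
    s[m] = amp * (np.exp(-(t[m] - t0) / 200.0) - np.exp(-(t[m] - t0) / 20.0))
    return s

amps = rng.normal(1.0, 0.05, 500)                 # ~5% energy spread
records = np.array([pulse(a, 100 + rng.integers(-3, 4)) for a in amps])
records += rng.normal(0, 0.01, records.shape)     # white noise

pca = PCA(n_components=4).fit(records)
scores = pca.transform(records)                   # per-pulse descriptors
# the leading score tracks pulse height; higher-order scores capture arrival time and
# shape changes and can be combined with it to form a linearized energy estimate
print(np.corrcoef(scores[:, 0], amps)[0, 1])
```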

  19. Component reliability analysis for development of component reliability DB of Korean standard NPPs

    International Nuclear Information System (INIS)

    Choi, S. Y.; Han, S. H.; Kim, S. H.

    2002-01-01

    The reliability data of Korean NPP that reflects the plant specific characteristics is necessary for PSA and Risk Informed Application. We have performed a project to develop the component reliability DB and calculate the component reliability such as failure rate and unavailability. We have collected the component operation data and failure/repair data of Korean standard NPPs. We have analyzed failure data by developing a data analysis method which incorporates the domestic data situation. And then we have compared the reliability results with the generic data for the foreign NPPs

  20. What Governs Lorentz Factors of Jet Components in Blazars?

    Indian Academy of Sciences (India)

    We use a sample of radio-loud Active Galactic Nuclei (AGNs) with measured black hole masses to explore the jet formation mechanisms in these sources. We find a significant correlation between black hole mass and the bulk Lorentz factor of the jet components for this sample, while no significant correlation is present ...

  1. Sparse Principal Component Analysis in Medical Shape Modeling

    DEFF Research Database (Denmark)

    Sjöstrand, Karl; Stegmann, Mikkel Bille; Larsen, Rasmus

    2006-01-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims...... analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of sufficiently small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA...
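
    A small sketch contrasting ordinary PCA loadings with sparse PCA loadings on toy data, using scikit-learn's SparsePCA; in shape modeling the sparsity localizes each deformation mode to a few landmarks.

```python
# Dense PCA loadings versus sparse PCA loadings on a toy "shape" dataset.
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(5)
n, p = 300, 20
latent = rng.normal(size=(n, 2))
W = np.zeros((2, p))
W[0, :5] = 1.0            # mode 1 only moves the first 5 "landmarks"
W[1, 10:15] = 1.0         # mode 2 only moves landmarks 11-15
X = latent @ W + 0.05 * rng.normal(size=(n, p))

dense = PCA(n_components=2).fit(X).components_
sparse = SparsePCA(n_components=2, alpha=1.0, random_state=5).fit(X).components_

print("nonzero loadings, PCA:  ", (np.abs(dense) > 1e-3).sum(axis=1))
print("nonzero loadings, SPCA: ", (np.abs(sparse) > 1e-3).sum(axis=1))
```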

  2. Non-negative factor analysis supporting the interpretation of elemental distribution images acquired by XRF

    International Nuclear Information System (INIS)

    Alfeld, Matthias; Falkenberg, Gerald; Wahabzada, Mirwaes; Bauckhage, Christian; Kersting, Kristian; Wellenreuther, Gerd

    2014-01-01

    Stacks of elemental distribution images acquired by XRF can be difficult to interpret if they contain high degrees of redundancy and components differing in their quantitative but not qualitative elemental composition. Factor analysis, mainly in the form of Principal Component Analysis (PCA), has been used to reduce the level of redundancy and highlight correlations. PCA, however, does not yield physically meaningful representations, as they often contain negative values. This limitation can be overcome by employing factor analysis that is restricted to non-negativity. In this paper we present the first application of the Python Matrix Factorization Module (pymf) on XRF data. This is done in a case study on the painting Saul and David from the studio of Rembrandt van Rijn. We show how the discrimination between two different Co containing compounds with minimum user intervention and a priori knowledge is supported by Non-Negative Matrix Factorization (NMF).
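
    A sketch of the same idea using scikit-learn's NMF in place of pymf (a stand-in, not the paper's code): each pixel's elemental intensities are factored into non-negative end-member signatures and abundance maps. The data below are synthetic.

```python
# Non-negative factorization of a synthetic XRF-like stack into signatures and abundance maps.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(6)
h, w, n_elements = 40, 60, 8
# two synthetic "pigments" with different elemental signatures
spectra = np.abs(rng.normal(size=(2, n_elements)))
abundance = np.abs(rng.normal(size=(h * w, 2)))
X = abundance @ spectra + 0.01 * np.abs(rng.normal(size=(h * w, n_elements)))

model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)              # pixel abundances (non-negative)
H = model.components_                   # end-member elemental signatures

maps = W.reshape(h, w, 2)               # one abundance map per recovered component
print(maps.shape, H.shape)
```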

  3. Using containment analysis to improve component cooling water heat exchanger limits

    International Nuclear Information System (INIS)

    Da Silva, H.C.; Tajbakhsh, A.

    1995-01-01

    The Comanche Peak Steam Electric Station design requires that exit temperatures from the Component Cooling Water Heat Exchanger remain below 330.37 K during the Emergency Core Cooling System recirculation stage, following a hypothetical Loss of Coolant Accident (LOCA). Due to measurements indicating a higher than expected combination of: (a) high fouling factor in the Component Cooling Water Heat Exchanger with (b) high ultimate heat sink temperatures, that might lead to temperatures in excess of the 330.37 K limit, if a LOCA were to occur, TUElectric adjusted key flow rates in the Component Cooling Water network. This solution could only be implemented with improvements to the containment analysis methodology of record. The new method builds upon the CONTEMPT-LT/028 code by: (a) coupling the long term post-LOCA thermohydraulics with a more detailed analytical model for the complex Component Cooling Water Heat Exchanger network and (b) changing the way mass and energy releases are calculated after core reflood and steam generator energy is dumped to the containment. In addition, a simple code to calculate normal cooldowns was developed to confirm RHR design bases were met with the improved limits

  4. Using the Cluster Analysis and the Principal Component Analysis in Evaluating the Quality of a Destination

    Directory of Open Access Journals (Sweden)

    Ida Vajčnerová

    2016-01-01

    Full Text Available The objective of the paper is to explore possibilities of evaluating the quality of a tourist destination by means of principal component analysis (PCA) and cluster analysis. In the paper both types of analysis are compared on the basis of the results they provide. The aim is to identify the advantages and limits of both methods and provide methodological suggestions for their further use in tourism research. The analysis is based on primary data from a customer satisfaction survey covering the key quality factors of a destination. The output of the two statistical methods is the creation of groups or clusters of quality factors that are similar in terms of respondents' evaluations, in order to facilitate the evaluation of the quality of tourist destinations. The results show that both tested methods can be used. The paper is elaborated in the frame of a wider research project aimed at developing a methodology for the quality evaluation of tourist destinations, especially in the context of customer satisfaction and loyalty.

  5. BANK CAPITAL AND MACROECONOMIC SHOCKS: A PRINCIPAL COMPONENTS ANALYSIS AND VECTOR ERROR CORRECTION MODEL

    Directory of Open Access Journals (Sweden)

    Christian NZENGUE PEGNET

    2011-07-01

    Full Text Available The recent financial turmoil has clearly highlighted the potential role of financial factors in the amplification of macroeconomic developments and stressed the importance of analyzing the relationship between banks' balance sheets and economic activity. This paper assesses the impact of the bank capital channel in the transmission of shocks in Europe on the basis of banks' balance sheet data. The empirical analysis is carried out through a Principal Component Analysis and a Vector Error Correction Model.

  6. Dynamic Modal Analysis of Vertical Machining Centre Components

    OpenAIRE

    Anayet U. Patwari; Waleed F. Faris; A. K. M. Nurul Amin; S. K. Loh

    2009-01-01

    The paper presents a systematic procedure and details of the use of experimental and analytical modal analysis techniques for the structural dynamic evaluation of a vertical machining centre. The main results deal with assessment of the mode shapes of the different components of the vertical machining centre. A simplified experimental modal analysis of the different components of the milling machine was carried out. The model of the different machine tool structures is made by design software...

  7. 7 CFR 1033.50 - Class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Class prices, component prices, and advanced pricing factors. 1033.50 Section 1033.50 Agriculture Regulations of the Department of Agriculture (Continued..., and advanced pricing factors. See § 1000.50. ...

  8. 7 CFR 1005.50 - Class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Class prices, component prices, and advanced pricing factors. 1005.50 Section 1005.50 Agriculture Regulations of the Department of Agriculture (Continued..., and advanced pricing factors. See § 1000.50. ...

  9. 7 CFR 1001.50 - Class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Class prices, component prices, and advanced pricing factors. 1001.50 Section 1001.50 Agriculture Regulations of the Department of Agriculture (Continued..., and advanced pricing factors. See § 1000.50. ...

  10. 7 CFR 1006.50 - Class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Class prices, component prices, and advanced pricing factors. 1006.50 Section 1006.50 Agriculture Regulations of the Department of Agriculture (Continued..., and advanced pricing factors. See § 1000.50. ...

  11. 7 CFR 1126.50 - Class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Class prices, component prices, and advanced pricing factors. 1126.50 Section 1126.50 Agriculture Regulations of the Department of Agriculture (Continued..., and advanced pricing factors. See § 1000.50. ...

  12. 7 CFR 1032.50 - Class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Class prices, component prices, and advanced pricing factors. 1032.50 Section 1032.50 Agriculture Regulations of the Department of Agriculture (Continued..., and advanced pricing factors. See § 1000.50. ...

  13. 7 CFR 1131.50 - Class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Class prices, component prices, and advanced pricing factors. 1131.50 Section 1131.50 Agriculture Regulations of the Department of Agriculture (Continued..., and advanced pricing factors. See § 1000.50. ...

  14. 7 CFR 1007.50 - Class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Class prices, component prices, and advanced pricing factors. 1007.50 Section 1007.50 Agriculture Regulations of the Department of Agriculture (Continued..., and advanced pricing factors. See § 1000.50. ...

  15. Application of factor analysis to the hydrogeochemical study of a coastal aquifer

    OpenAIRE

    Ruiz Beviá, Francisco; Gomis Yagües, Vicente; Blasco Alemany, Pilar

    1989-01-01

    The use of numerical values for the chemical components of waters from an aquifer as input data for factor analysis is shown to be sometimes more convenient than the use of the logarithms of these figures. Factor analysis was applied to the hydrogeochemical study of a coastal aquifer located in Javea, Alicante (Spain). A set of factors was found which explained the source of the ions in the water and even certain chemical processes which accompany the intrusion of seawater, such as the strong...

  16. 7 CFR 1124.50 - Class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Class prices, component prices, and advanced pricing factors. 1124.50 Section 1124.50 Agriculture Regulations of the Department of Agriculture (Continued... prices, and advanced pricing factors. See § 1000.50. ...

  17. 7 CFR 1030.50 - Class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Class prices, component prices, and advanced pricing factors. 1030.50 Section 1030.50 Agriculture Regulations of the Department of Agriculture (Continued... prices, and advanced pricing factors. See § 1000.50. ...

  18. Principle of maximum entropy for reliability analysis in the design of machine components

    Science.gov (United States)

    Zhang, Yimin

    2018-03-01

    We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.
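
    A sketch of the core PME step under stated assumptions: the first four moments of a toy limit-state function are matched by a density of the form p(x) ∝ exp(Σ λ_k x^k), found by minimizing the dual on a bounded grid, and the failure probability P(g < 0) is read off. This is a generic maximum-entropy fit, not the paper's full reliability workflow, and the strength/load distributions are invented.

```python
# Maximum-entropy density from the first four moments of a toy limit-state function.
import numpy as np
from scipy.optimize import minimize
from scipy.integrate import trapezoid

rng = np.random.default_rng(7)
# toy limit state g = R - S with a non-normal strength R and load S
g = rng.weibull(2.0, 200_000) * 60.0 - rng.lognormal(np.log(30.0), 0.25, 200_000)

mu, sd = g.mean(), g.std()
gs = (g - mu) / sd                                        # work on the standardized scale
mom = np.array([np.mean(gs**k) for k in range(1, 5)])     # first four moments

x = np.linspace(gs.min(), gs.max(), 4001)
powers = np.column_stack([x**k for k in range(1, 5)])

def dual_and_grad(lam):
    # dual of the PME problem: log Z(lambda) - lambda . moments, with its gradient
    expo = powers @ lam
    m = expo.max()
    w = np.exp(expo - m)
    z = trapezoid(w, x)
    p = w / z
    grad = np.array([trapezoid(p * x**k, x) for k in range(1, 5)]) - mom
    return m + np.log(z) - lam @ mom, grad

res = minimize(dual_and_grad, x0=np.array([0.0, -0.5, 0.0, 0.0]), jac=True, method="BFGS")
expo = powers @ res.x
p = np.exp(expo - expo.max())
p /= trapezoid(p, x)                                      # maximum-entropy density of g

threshold = -mu / sd                                      # g < 0 in standardized units
pf = trapezoid(np.where(x < threshold, p, 0.0), x)
print("PME estimate of P(g < 0):", pf, " Monte Carlo:", (g < 0).mean())
```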

  19. Mapping ash properties using principal components analysis

    Science.gov (United States)

    Pereira, Paulo; Brevik, Eric; Cerda, Artemi; Ubeda, Xavier; Novara, Agata; Francos, Marcos; Rodrigo-Comino, Jesus; Bogunovic, Igor; Khaledian, Yones

    2017-04-01

    In post-fire environments ash has important benefits for soils, such as protection and being a source of nutrients, crucial for vegetation recuperation (Jordan et al., 2016; Pereira et al., 2015a; 2016a,b). The thickness and distribution of ash are fundamental aspects for soil protection (Cerdà and Doerr, 2008; Pereira et al., 2015b), and the severity at which it was produced is important for the type and amount of elements that are released into the soil solution (Bodi et al., 2014). Ash is a very mobile material, and it is important where it will be deposited. Until the first rainfalls it is very mobile; afterwards it binds to the soil surface and is harder to erode. Mapping ash properties in the immediate period after a fire is complex, since the ash is constantly moving (Pereira et al., 2015b). However, it is an important task, since according to the amount and type of ash produced we can identify the degree of soil protection and the nutrients that will be dissolved. The objective of this work is to map ash properties (CaCO3, pH, and selected extractable elements) using a principal component analysis (PCA) in the immediate period after the fire. Four days after the fire we established a grid in a 9x27 m area and took ash samples every 3 meters, for a total of 40 sampling points (Pereira et al., 2017). The PCA identified 5 different factors. Factor 1 had high positive loadings in electrical conductivity, calcium, and magnesium and negative loadings in aluminum and iron, while Factor 2 had high positive loadings in total phosphorous and silica. Factor 3 showed high positive loadings in sodium and potassium, factor 4 high negative loadings in CaCO3 and pH, and factor 5 high loadings in sodium and potassium. The experimental variograms of the extracted factors showed that the Gaussian model was the most precise to model factor 1, the linear model for factor 2, and the wave hole effect for factors 3, 4 and 5. The maps produced confirm the patterns observed in the experimental variograms. Factor 1 and 2

  20. Probabilistic Principal Component Analysis for Metabolomic Data.

    LENUS (Irish Health Repository)

    Nyamundanda, Gift

    2010-11-23

    Background: Data from metabolomic studies are typically complex and high-dimensional. Principal component analysis (PCA) is currently the most widely used statistical technique for analyzing metabolomic data. However, PCA is limited by the fact that it is not based on a statistical model. Results: Here, probabilistic principal component analysis (PPCA), which addresses some of the limitations of PCA, is reviewed and extended. A novel extension of PPCA, called probabilistic principal component and covariates analysis (PPCCA), is introduced which provides a flexible approach to jointly model metabolomic data and additional covariate information. The use of a mixture of PPCA models for discovering the number of inherent groups in metabolomic data is demonstrated. The jackknife technique is employed to construct confidence intervals for estimated model parameters throughout. The optimal number of principal components is determined through the use of the Bayesian Information Criterion model selection tool, which is modified to address the high dimensionality of the data. Conclusions: The methods presented are illustrated through an application to metabolomic data sets. Jointly modeling metabolomic data and covariates was successfully achieved and has the potential to provide deeper insight into the underlying data structure. Examination of confidence intervals for the model parameters, such as loadings, allows for principled and clear interpretation of the underlying data structure. A software package called MetabolAnalyze, freely available through the R statistical software, has been developed to facilitate implementation of the presented methods in the metabolomics field.
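
    A hedged sketch of likelihood-based selection of the number of components, using scikit-learn's PCA score (the average log-likelihood under the Tipping and Bishop probabilistic PCA model) with cross-validation; the paper itself uses a modified BIC and the MetabolAnalyze R package, which are not reproduced here.

```python
# Choose the number of components by cross-validated probabilistic-PCA log-likelihood.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n, d, q_true = 120, 30, 3                       # e.g. 120 samples, 30 "metabolite" variables
W = rng.normal(size=(d, q_true))
X = rng.normal(size=(n, q_true)) @ W.T + 0.5 * rng.normal(size=(n, d))

scores = {q: cross_val_score(PCA(n_components=q), X, cv=5).mean() for q in range(1, 9)}
best_q = max(scores, key=scores.get)
print("cross-validated log-likelihood favours q =", best_q)
```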

  1. Confirmatory Factor Analysis of the Delirium Rating Scale Revised-98 (DRS-R98).

    Science.gov (United States)

    Thurber, Steven; Kishi, Yasuhiro; Trzepacz, Paula T; Franco, Jose G; Meagher, David J; Lee, Yanghyun; Kim, Jeong-Lan; Furlanetto, Leticia M; Negreiros, Daniel; Huang, Ming-Chyi; Chen, Chun-Hsin; Kean, Jacob; Leonard, Maeve

    2015-01-01

    Principal components analysis applied to the Delirium Rating Scale-Revised-98 contributes to understanding the delirium construct. Using a multisite pooled international delirium database, the authors applied confirmatory factor analysis to Delirium Rating Scale-Revised-98 scores from 859 adult patients evaluated by delirium experts (delirium, N=516; nondelirium, N=343). Confirmatory factor analysis found all diagnostic features and core symptoms (cognitive, language, thought process, sleep-wake cycle, motor retardation), except motor agitation, loaded onto factor 1. Motor agitation loaded onto factor 2 with noncore symptoms (delusions, affective lability, and perceptual disturbances). Factor 1 loading supports delirium as a single construct, but when accompanied by psychosis, motor agitation's role may not be solely as a circadian activity indicator.

  2. Multi-component separation and analysis of bat echolocation calls.

    Science.gov (United States)

    DiCecco, John; Gaudette, Jason E; Simmons, James A

    2013-01-01

    The vast majority of animal vocalizations contain multiple frequency modulated (FM) components with varying amounts of non-linear modulation and harmonic instability. This is especially true of biosonar sounds where precise time-frequency templates are essential for neural information processing of echoes. Understanding the dynamic waveform design by bats and other echolocating animals may help to improve the efficacy of man-made sonar through biomimetic design. Bats are known to adapt their call structure based on the echolocation task, proximity to nearby objects, and density of acoustic clutter. To interpret the significance of these changes, a method was developed for component separation and analysis of biosonar waveforms. Techniques for imaging in the time-frequency plane are typically limited due to the uncertainty principle and interference cross terms. This problem is addressed by extending the use of the fractional Fourier transform to isolate each non-linear component for separate analysis. Once separated, empirical mode decomposition can be used to further examine each component. The Hilbert transform may then successfully extract detailed time-frequency information from each isolated component. This multi-component analysis method is applied to the sonar signals of four species of bats recorded in-flight by radiotelemetry along with a comparison of other common time-frequency representations.
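
    A sketch of the final analysis stage only, on a synthetic chirp standing in for an isolated call component: the Hilbert transform yields the instantaneous-frequency track. The fractional Fourier and empirical mode decomposition separation stages are not reproduced here.

```python
# Instantaneous frequency of a synthetic FM sweep via the Hilbert transform.
import numpy as np
from scipy.signal import chirp, hilbert

fs = 250_000                                    # 250 kHz sampling, plausible for bat recordings
t = np.arange(0, 0.003, 1 / fs)                 # 3 ms call
x = chirp(t, f0=55_000, f1=25_000, t1=t[-1], method="hyperbolic")

analytic = hilbert(x)
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) / (2 * np.pi) * fs   # Hz, one value per sample interval
print(inst_freq[10], "->", inst_freq[-10])      # sweeps downward, as in an FM call component
```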

  3. Fault feature extraction method based on local mean decomposition Shannon entropy and improved kernel principal component analysis model

    Directory of Open Access Journals (Sweden)

    Jinlu Sheng

    2016-07-01

    Full Text Available To effectively extract the typical features of a bearing, a new method is proposed that relates the local mean decomposition Shannon entropy and an improved kernel principal component analysis model. First, the features are extracted by a time–frequency domain method, local mean decomposition, and the Shannon entropy is used to process the original separated product functions, so as to obtain the original features. However, the extracted features still contain superfluous information; the nonlinear multi-feature processing technique, kernel principal component analysis, is therefore introduced to fuse the features. The kernel principal component analysis is improved by a weight factor. The extracted characteristic features were input into a Morlet wavelet kernel support vector machine to obtain the bearing running state classification model, and the bearing running state was thereby identified. Both test and actual cases were analyzed.
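
    A sketch of the overall pipeline with stand-ins: Shannon entropy is computed on a few band-passed versions of each simulated vibration segment (in place of LMD product functions), fused with kernel PCA, and classified with an RBF-kernel SVM (in place of the weighted kernel PCA and Morlet-wavelet-kernel SVM of the paper).

```python
# Entropy features -> kernel PCA fusion -> SVM classification, on simulated vibration segments.
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
fs, n_seg, seg_len = 2000, 200, 1024

def segment(faulty):
    t = np.arange(seg_len) / fs
    x = 0.3 * rng.normal(size=seg_len)
    if faulty:                                   # periodic impacts typical of a bearing defect
        x += 0.8 * np.sin(2 * np.pi * 130 * t) * (np.sin(2 * np.pi * 13 * t) > 0.95)
    return x

def shannon_entropy(x, bins=64):
    hist, _ = np.histogram(x, bins=bins, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return -(p * np.log(p)).sum()

bands = [(20, 200), (200, 500), (500, 900)]      # stand-ins for the LMD product functions
def features(x):
    feats = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        feats.append(shannon_entropy(sosfiltfilt(sos, x)))
    return feats

y = rng.integers(0, 2, n_seg)
X = np.array([features(segment(label)) for label in y])

Z = KernelPCA(n_components=2, kernel="rbf", gamma=1.0).fit_transform(X)
Xtr, Xte, ytr, yte = train_test_split(Z, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print("hold-out accuracy:", clf.score(Xte, yte))
```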

  4. Analysis methods for structure reliability of piping components

    International Nuclear Information System (INIS)

    Schimpfke, T.; Grebner, H.; Sievers, J.

    2004-01-01

    In the frame of the German reactor safety research program of the Federal Ministry of Economics and Labour (BMWA) GRS has started to develop an analysis code named PROST (PRObabilistic STructure analysis) for estimating the leak and break probabilities of piping systems in nuclear power plants. The long-term objective of this development is to provide failure probabilities of passive components for probabilistic safety analysis of nuclear power plants. Up to now the code can be used for calculating fatigue problems. The paper mentions the main capabilities and theoretical background of the present PROST development and presents some of the results of a benchmark analysis in the frame of the European project NURBIM (Nuclear Risk Based Inspection Methodologies for Passive Components). (orig.)
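
    A generic probabilistic fracture-mechanics sketch, not the PROST code: Monte Carlo fatigue crack growth with the Paris law da/dN = C(ΔK)^m, ΔK = Δσ·Y·sqrt(πa), counting samples whose crack depth reaches the wall thickness as a leak. All parameter values are illustrative.

```python
# Monte Carlo Paris-law crack growth; stress in MPa, crack depth in m, dK in MPa*sqrt(m).
import numpy as np

rng = np.random.default_rng(10)
n_mc = 50_000
wall = 0.02                                    # wall thickness [m]
Y = 1.12                                       # geometry factor (assumed constant)
d_sigma = 150.0                                # stress range per cycle [MPa]
cycles_per_year, years = 10_000, 40

a = rng.lognormal(np.log(1e-3), 0.3, n_mc)     # initial crack depth [m]
C = rng.lognormal(np.log(5e-12), 0.4, n_mc)    # Paris coefficient [m/cycle per (MPa*sqrt(m))^m]
m = 3.0

leaked = np.zeros(n_mc, dtype=bool)
for _ in range(years):
    dK = d_sigma * Y * np.sqrt(np.pi * a)      # stress intensity factor range
    a = np.minimum(a + cycles_per_year * C * dK**m, 10 * wall)   # one "year" of growth per step
    leaked |= a >= wall

print("estimated 40-year leak probability:", leaked.mean())
```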

  5. COMPARING INDEPENDENT COMPONENT ANALYSIS WITH PRINCIPLE COMPONENT ANALYSIS IN DETECTING ALTERATIONS OF PORPHYRY COPPER DEPOSIT (CASE STUDY: ARDESTAN AREA, CENTRAL IRAN

    Directory of Open Access Journals (Sweden)

    S. Mahmoudishadi

    2017-09-01

    Full Text Available The image processing techniques in the transform domain are employed as analysis tools for enhancing the detection of mineral deposits. The process of decomposing the image into important components increases the probability of mineral extraction. In this study, the performance of Principal Component Analysis (PCA) and Independent Component Analysis (ICA) has been evaluated for the visible and near-infrared (VNIR) and shortwave infrared (SWIR) subsystems of ASTER data. Ardestan is located in part of the Central Iranian Volcanic Belt that hosts many well-known porphyry copper deposits. This research investigated the propylitic and argillic alteration zones and the outer mineralogy zone in part of the Ardestan region. The two mentioned approaches were applied to discriminate alteration zones from igneous bedrock using the major absorptions of indicator minerals from the alteration and mineralogy zones in the spectral range of the ASTER bands. Specialized PC components (PC2, PC3 and PC6) were used to identify pyrite and the argillic and propylitic zones and distinguish them from igneous bedrock in an RGB color composite image. According to the eigenvalues, components 2, 3 and 6 account for 4.26%, 0.9% and 0.09% of the total variance of the data for the Ardestan scene, respectively. For the purpose of discriminating the alteration and mineralogy zones of the porphyry copper deposit from bedrock, those percentages of the data are more accurately separated in the ICA independent components IC2, IC3 and IC6 than in the noisy bands of PCA. The results of the ICA method conform to the locations of lithological units of the Ardestan region as well.

  6. Comparing Independent Component Analysis with Principle Component Analysis in Detecting Alterations of Porphyry Copper Deposit (case Study: Ardestan Area, Central Iran)

    Science.gov (United States)

    Mahmoudishadi, S.; Malian, A.; Hosseinali, F.

    2017-09-01

    The image processing techniques in the transform domain are employed as analysis tools for enhancing the detection of mineral deposits. The process of decomposing the image into important components increases the probability of mineral extraction. In this study, the performance of Principal Component Analysis (PCA) and Independent Component Analysis (ICA) has been evaluated for the visible and near-infrared (VNIR) and shortwave infrared (SWIR) subsystems of ASTER data. Ardestan is located in part of the Central Iranian Volcanic Belt that hosts many well-known porphyry copper deposits. This research investigated the propylitic and argillic alteration zones and the outer mineralogy zone in part of the Ardestan region. The two mentioned approaches were applied to discriminate alteration zones from igneous bedrock using the major absorptions of indicator minerals from the alteration and mineralogy zones in the spectral range of the ASTER bands. Specialized PC components (PC2, PC3 and PC6) were used to identify pyrite and the argillic and propylitic zones and distinguish them from igneous bedrock in an RGB color composite image. According to the eigenvalues, components 2, 3 and 6 account for 4.26%, 0.9% and 0.09% of the total variance of the data for the Ardestan scene, respectively. For the purpose of discriminating the alteration and mineralogy zones of the porphyry copper deposit from bedrock, those percentages of the data are more accurately separated in the ICA independent components IC2, IC3 and IC6 than in the noisy bands of PCA. The results of the ICA method conform to the locations of lithological units of the Ardestan region as well.
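
    A sketch of the band-space decomposition step on a synthetic multispectral cube (real use would load the ASTER VNIR/SWIR bands): pixels are unfolded to a pixels-by-bands matrix, decomposed with PCA and FastICA, and each component is folded back into an image for comparison.

```python
# PCA and ICA component images from a synthetic 9-band scene with an "altered" zone.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(11)
rows, cols, bands = 100, 100, 9

yy, xx = np.mgrid[0:rows, 0:cols]
alteration = np.exp(-((yy - 30) ** 2 + (xx - 70) ** 2) / 200.0)    # a small "altered" zone
bedrock = 0.5 + 0.002 * xx

sig_alt = rng.uniform(0.2, 1.0, bands)          # synthetic spectral signatures
sig_rock = rng.uniform(0.2, 1.0, bands)
cube = (alteration[..., None] * sig_alt + bedrock[..., None] * sig_rock
        + 0.02 * rng.normal(size=(rows, cols, bands)))

X = cube.reshape(-1, bands)
pcs = PCA(n_components=3).fit_transform(X).reshape(rows, cols, 3)
ics = FastICA(n_components=3, random_state=0, max_iter=1000).fit_transform(X)
ics = ics.reshape(rows, cols, 3)
print(pcs.shape, ics.shape)                     # component images, e.g. for an RGB composite
```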

  7. System diagnostics using qualitative analysis and component functional classification

    International Nuclear Information System (INIS)

    Reifman, J.; Wei, T.Y.C.

    1993-01-01

    A method for detecting and identifying faulty component candidates during off-normal operations of nuclear power plants involves the qualitative analysis of macroscopic imbalances in the conservation equations of mass, energy and momentum in thermal-hydraulic control volumes associated with one or more plant components and the functional classification of components. The qualitative analysis of mass and energy is performed through the associated equations of state, while imbalances in momentum are obtained by tracking mass flow rates which are incorporated into a first knowledge base. The plant components are functionally classified, according to their type, as sources or sinks of mass, energy and momentum, depending upon which of the three balance equations is most strongly affected by a faulty component which is incorporated into a second knowledge base. Information describing the connections among the components of the system forms a third knowledge base. The method is particularly adapted for use in a diagnostic expert system to detect and identify faulty component candidates in the presence of component failures and is not limited to use in a nuclear power plant, but may be used with virtually any type of thermal-hydraulic operating system. 5 figures

  8. Variability of indoor and outdoor VOC measurements: An analysis using variance components

    International Nuclear Information System (INIS)

    Jia, Chunrong; Batterman, Stuart A.; Relyea, George E.

    2012-01-01

    This study examines concentrations of volatile organic compounds (VOCs) measured inside and outside of 162 residences in southeast Michigan, U.S.A. Nested analyses apportioned four sources of variation: city, residence, season, and measurement uncertainty. Indoor measurements were dominated by seasonal and residence effects, accounting for 50 and 31%, respectively, of the total variance. Contributions from measurement uncertainty (<20%) and city effects (<10%) were small. For outdoor measurements, season, city and measurement variation accounted for 43, 29 and 27% of variance, respectively, while residence location had negligible impact (<2%). These results show that, to obtain representative estimates of indoor concentrations, measurements in multiple seasons are required. In contrast, outdoor VOC concentrations can use multi-seasonal measurements at centralized locations. Error models showed that uncertainties at low concentrations might obscure effects of other factors. Variance component analyses can be used to interpret existing measurements, design effective exposure studies, and determine whether the instrumentation and protocols are satisfactory. - Highlights: ► The variability of VOC measurements was partitioned using nested analysis. ► Indoor VOCs were primarily controlled by seasonal and residence effects. ► Outdoor VOC levels were homogeneous within neighborhoods. ► Measurement uncertainty was high for many outdoor VOCs. ► Variance component analysis is useful for designing effective sampling programs. - Indoor VOC concentrations were primarily controlled by seasonal and residence effects; and outdoor concentrations were homogeneous within neighborhoods. Variance component analysis is a useful tool for designing effective sampling programs.
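
    A sketch of a variance-component split on simulated log-scale concentrations: a balanced one-way random-effects layout (residences as the grouping factor, repeated seasonal measurements within each) solved with the classical ANOVA expected-mean-square identities; the paper's nested model has more levels (city, residence, season, measurement).

```python
# One-way random-effects variance components from ANOVA mean squares (simulated data).
import numpy as np

rng = np.random.default_rng(12)
n_res, n_rep = 160, 4                          # residences x seasonal measurements
sigma_res, sigma_seas = 0.6, 0.9               # "true" standard deviations (log scale)

res_effect = rng.normal(0, sigma_res, n_res)
y = res_effect[:, None] + rng.normal(0, sigma_seas, (n_res, n_rep))

grand = y.mean()
msb = n_rep * ((y.mean(axis=1) - grand) ** 2).sum() / (n_res - 1)               # between residences
msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (n_res * (n_rep - 1))  # within residences

var_within = msw
var_between = max((msb - msw) / n_rep, 0.0)
total = var_between + var_within
print(f"residence: {100 * var_between / total:.0f}%  season+measurement: {100 * var_within / total:.0f}%")
```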

  9. Principal Component Analysis of Body Measurements In Three ...

    African Journals Online (AJOL)

    This study was conducted to explore the relationship among body measurements in 3 strains of broilers chicken (Arbor Acre, Marshal and Ross) using principal component analysis with the view of identifying those components that define body conformation in broilers. A total of 180 birds were used, 60 per strain.

  10. Crude oil price analysis and forecasting based on variational mode decomposition and independent component analysis

    Science.gov (United States)

    E, Jianwei; Bao, Yanling; Ye, Jimin

    2017-10-01

As one of the most vital energy resources in the world, crude oil plays a significant role in the international economic market. The fluctuation of the crude oil price has attracted academic and commercial attention. Many methods exist for forecasting the trend of the crude oil price, but traditional models have failed to predict it accurately. For this reason, a hybrid method is proposed in this paper that combines variational mode decomposition (VMD), independent component analysis (ICA) and the autoregressive integrated moving average (ARIMA), called VMD-ICA-ARIMA. The purpose of this study is to analyze the factors influencing the crude oil price and to predict its future value. The major steps are as follows: first, the VMD model is applied to the original signal (the crude oil price) to decompose the mode functions adaptively; second, independent components are separated by ICA, and how these independent components affect the crude oil price is analyzed; finally, the crude oil price is forecast with the ARIMA model, and the forecast trend demonstrates that the crude oil price declines periodically. Compared with the benchmark ARIMA and EEMD-ICA-ARIMA models, VMD-ICA-ARIMA forecasts the crude oil price more accurately.
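
    The ICA and ARIMA stages of such a hybrid pipeline can be sketched as below. The VMD step is assumed to have been run separately (for example with a third-party implementation such as vmdpy), so random walks stand in for the decomposed modes; the model orders and forecast horizon are arbitrary illustrative choices.

```python
# Hedged sketch: ICA on decomposed modes, per-component ARIMA forecasts,
# then reconstruction of the price forecast (modes are random placeholders).
import numpy as np
from sklearn.decomposition import FastICA
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
n_obs, n_modes = 300, 4
modes = rng.standard_normal((n_obs, n_modes)).cumsum(axis=0)   # placeholder "VMD" modes

# Separate statistically independent components from the modes.
ica = FastICA(n_components=n_modes, random_state=0)
sources = ica.fit_transform(modes)                 # shape (n_obs, n_modes)

# Fit one ARIMA model per independent component and forecast 10 steps ahead.
horizon = 10
component_forecasts = [ARIMA(sources[:, k], order=(1, 1, 1)).fit().forecast(horizon)
                       for k in range(n_modes)]
forecast_sources = np.column_stack(component_forecasts)

# Map the forecast components back to mode space and sum the modes,
# which (for VMD) approximately reconstructs the price series.
forecast_modes = ica.inverse_transform(forecast_sources)
price_forecast = forecast_modes.sum(axis=1)
print(price_forecast)
```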

  11. Study of Seasonal Variation in Groundwater Quality of Sagar City (India) by Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Hemant Pathak

    2011-01-01

Full Text Available Groundwater is one of the major sources of drinking water in Sagar city (India). In this study, 15 sampling stations were selected for investigation of 14 chemical parameters. The work was carried out during different months of the pre-monsoon, monsoon and post-monsoon seasons from June 2009 to June 2010. Multivariate statistics, namely principal component and cluster analysis, were applied to the datasets to investigate seasonal variations in groundwater quality. Principal axis factoring was used to observe the mode of association of the parameters and their interrelationships for evaluating water quality. Average values of BOD, COD, ammonia and iron were high during the entire study period. Elevated values of BOD and ammonia in the monsoon, slightly higher BOD in the post-monsoon, and elevated BOD, ammonia and iron in the pre-monsoon period reflected a temporal effect on the groundwater. Results of the principal component analysis showed that all the parameters contribute equally and significantly to groundwater quality variations. Factor 1 and factor 2 revealed that the DO value deteriorates due to organic load (BOD/ammonia) in different seasons. Hierarchical cluster analysis grouped the 15 stations into four clusters in the monsoon, five clusters in the post-monsoon and five clusters in the pre-monsoon with similar water quality features. In each season one clustered station exhibited significant spatial variation in physicochemical composition, attributable to anthropogenic nitrogenous species as fallout from modernization activities. The study indicated that the groundwater is sufficiently well oxygenated and nutrient-rich at the study sites.
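
    A minimal sketch of the PCA-plus-hierarchical-clustering workflow on a stations-by-parameters table is shown below, using scikit-learn and SciPy. The values are synthetic placeholders for the 14 measured parameters (BOD, COD, ammonia, iron, etc.), and the five-cluster cut is illustrative.

```python
# Hedged sketch: PCA and Ward hierarchical clustering of water-quality data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(7)
n_stations, n_params = 15, 14
X = rng.random((n_stations, n_params))          # placeholder for BOD, COD, NH3, Fe, ...

Xz = StandardScaler().fit_transform(X)          # put parameters on a common scale
pca = PCA().fit(Xz)
scores = pca.transform(Xz)
print("variance explained by PC1/PC2 (%):",
      np.round(100 * pca.explained_variance_ratio_[:2], 1))

# Ward linkage on the standardized data, cut into (say) five clusters
Z = linkage(Xz, method="ward")
labels = fcluster(Z, t=5, criterion="maxclust")
print("station cluster labels:", labels)
```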

  12. Fault tree analysis with multistate components

    International Nuclear Information System (INIS)

    Caldarola, L.

    1979-02-01

A general analytical theory has been developed which allows one to calculate the occurrence probability of the top event of a fault tree with multistate (more than two states) components. It is shown that, in order to correctly describe a system with multistate components, a special type of Boolean algebra is required. This is called 'Boolean algebra with restrictions on variables' and its basic rules are the same as those of traditional Boolean algebra, with some additional restrictions on the variables. These restrictions are extensively discussed in the paper. Important features of the method are the identification of the complete base and of the smallest irredundant base of a Boolean function, which does not necessarily need to be coherent. It is shown that the identification of the complete base of a Boolean function requires the application of some algorithms which are not used in today's computer programmes for fault tree analysis. The problem of statistical dependence among primary components is discussed. The paper includes a small demonstrative example to illustrate the method, which also includes statistically dependent components. (orig.) [de
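
    For intuition only, the toy sketch below computes a top-event probability for a system of multistate components by enumerating all state combinations. It assumes statistical independence (unlike the dependent-component case treated in the paper), and the components, states and structure function are invented for illustration.

```python
# Toy enumeration of a top-event probability with three-state components.
from itertools import product

# state probabilities per component: working / degraded / failed
components = {
    "pump":  {"working": 0.90, "degraded": 0.08, "failed": 0.02},
    "valve": {"working": 0.95, "degraded": 0.04, "failed": 0.01},
}

def top_event(state):
    """Top event: the pump has failed, or both items are degraded or worse."""
    bad = {"degraded", "failed"}
    return state["pump"] == "failed" or (state["pump"] in bad and state["valve"] in bad)

names = list(components)
p_top = 0.0
for combo in product(*(components[n] for n in names)):   # iterate over state names
    state = dict(zip(names, combo))
    prob = 1.0
    for n in names:
        prob *= components[n][state[n]]                   # independence assumed
    if top_event(state):
        p_top += prob

print(f"P(top event) = {p_top:.4f}")
```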

  13. THE STUDY OF THE CHARACTERIZATION INDICES OF FABRICS BY PRINCIPAL COMPONENT ANALYSIS METHOD

    Directory of Open Access Journals (Sweden)

    HRISTIAN Liliana

    2017-05-01

Full Text Available The paper set out to rank worsted fabric types for the manufacture of outerwear products by the characterization indices of the fabrics, using the mathematical model of Principal Component Analysis (PCA). There are a number of variables with a certain influence on the quality of fabrics, but some of these variables are more important than others, so it is useful to identify those variables for a better understanding of the factors which can lead to improved fabric quality. A solution to this problem can be the application of a method of factorial analysis, the so-called Principal Component Analysis, with the final goal of establishing and analyzing those variables which significantly influence the internal structure of combed wool fabrics according to armure (weave) type. By applying PCA, a small number of linear combinations (principal components) is obtained from a set of variables describing the internal structure of the fabrics, which can hold as much information as possible from the original variables. Data analysis is an important initial step in decision making, allowing identification of the causes that lead to decision-making situations. It is the action of transforming the initial data in order to extract useful information and to facilitate reaching conclusions. The process of data analysis can be defined as a sequence of steps aimed at formulating hypotheses, collecting and validating primary information, constructing the mathematical model describing the phenomenon, and reaching conclusions about the behavior of this model.

  14. Experimental investigation of the factors influencing the polymer-polymer bond strength during two-component injection moulding

    DEFF Research Database (Denmark)

    Islam, Aminul; Hansen, Hans Nørgaard; Bondo, Martin

    2010-01-01

Two-component injection moulding is a commercially important manufacturing process and a key technology for combining different material properties in a single plastic product. It is also one of the most industrially adaptive process chains for manufacturing so-called moulded interconnect devices (MIDs). Many fascinating applications of two-component or multi-component polymer parts are restricted due to the weak interfacial adhesion of the polymers. A thorough understanding of the factors that influence the bond strength of polymers is necessary for multi-component polymer processing. This paper investigates the effects of the process conditions and geometrical factors on the bond strength of two-component polymer parts and identifies the factors which can effectively control the adhesion between two polymers. The effects of environmental conditions on the bond strength are also investigated.

  15. Multistage principal component analysis based method for abdominal ECG decomposition

    International Nuclear Information System (INIS)

    Petrolis, Robertas; Krisciukaitis, Algimantas; Gintautas, Vladas

    2015-01-01

A reflection of fetal heart electrical activity is present in recorded abdominal ECG signals. However, this signal component has noticeably less energy than concurrent signals, especially the maternal ECG, so the traditionally recommended independent component analysis fails to separate these two ECG signals. Multistage principal component analysis (PCA) is proposed for step-by-step extraction of abdominal ECG signal components. Truncated representation and subsequent subtraction of the cardio cycles of the maternal ECG are the first steps. The energy of the fetal ECG component then becomes comparable to, or even exceeds, the energy of the other components in the remaining signal. Second-stage PCA concentrates the energy of the sought signal in one principal component, assuring its maximal amplitude regardless of the orientation of the fetus in multilead recordings. Third-stage PCA is performed on signal excerpts representing detected fetal heart beats in order to obtain their truncated representation, reconstructing their shape for further analysis. The algorithm was tested with PhysioNet Challenge 2013 signals and signals recorded in the Department of Obstetrics and Gynecology, Lithuanian University of Health Sciences. Results of our method in PhysioNet Challenge 2013 on the open data set were an average score of 341.503 bpm² and 32.81 ms. (paper)

  16. Determining the number of components in principal components analysis: A comparison of statistical, crossvalidation and approximated methods

    NARCIS (Netherlands)

    Saccenti, E.; Camacho, J.

    2015-01-01

    Principal component analysis is one of the most commonly used multivariate tools to describe and summarize data. Determining the optimal number of components in a principal component model is a fundamental problem in many fields of application. In this paper we compare the performance of several

  17. Principal component analysis for predicting transcription-factor binding motifs from array-derived data

    Directory of Open Access Journals (Sweden)

    Vincenti Matthew P

    2005-11-01

Full Text Available Abstract Background The responses to interleukin 1 (IL-1) in human chondrocytes constitute a complex regulatory mechanism, where multiple transcription factors interact combinatorially with transcription-factor binding motifs (TFBMs). In order to select a critical set of TFBMs from genomic DNA information and array-derived data, an efficient algorithm to solve a combinatorial optimization problem is required. Although computational approaches based on evolutionary algorithms are commonly employed, an analytical algorithm would be useful to predict TFBMs at nearly no computational cost and evaluate varying modelling conditions. Singular value decomposition (SVD) is a powerful method to derive primary components of a given matrix. Applying SVD to a promoter matrix defined from regulatory DNA sequences, we derived a novel method to predict the critical set of TFBMs. Results The promoter matrix was defined to establish a quantitative relationship between the IL-1-driven mRNA alteration and genomic DNA sequences of the IL-1 responsive genes. The matrix was decomposed with SVD, and the effects of 8 potential TFBMs (5'-CAGGC-3', 5'-CGCCC-3', 5'-CCGCC-3', 5'-ATGGG-3', 5'-GGGAA-3', 5'-CGTCC-3', 5'-AAAGG-3', and 5'-ACCCA-3') were predicted from a pool of 512 random DNA sequences. The prediction included matches to the core binding motifs of biologically known TFBMs such as AP2, SP1, EGR1, KROX, GC-BOX, ABI4, ETF, E2F, SRF, STAT, IK-1, PPARγ, STAF, ROAZ, and NFκB, and their significance was evaluated numerically using Monte Carlo simulation and a genetic algorithm. Conclusion The described SVD-based prediction is an analytical method to provide a set of potential TFBMs involved in transcriptional regulation. The results would be useful to analytically evaluate the contribution of individual DNA sequences.
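
    The SVD step can be illustrated schematically as below: decompose a genes-by-candidate-motif count matrix and rank motifs by their loadings on the leading singular vector. The Poisson counts and matrix dimensions are synthetic stand-ins, not the study's promoter matrix.

```python
# Hedged sketch: SVD of a genes x candidate-motif count matrix.
import numpy as np

rng = np.random.default_rng(3)
n_genes, n_motifs = 50, 512
promoter_matrix = rng.poisson(2.0, size=(n_genes, n_motifs)).astype(float)

# Centre columns so the decomposition reflects covariation across genes.
M = promoter_matrix - promoter_matrix.mean(axis=0)
U, s, Vt = np.linalg.svd(M, full_matrices=False)

# Rank candidate motifs by the magnitude of their loading on component 1.
loadings_pc1 = np.abs(Vt[0])
top = np.argsort(loadings_pc1)[::-1][:8]
print("top candidate motif indices:", top)
print("singular values (first 5):", np.round(s[:5], 2))
```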

  18. Importance Analysis of In-Service Testing Components for Ulchin Unit 3

    International Nuclear Information System (INIS)

    Dae-Il Kan; Kil-Yoo Kim; Jae-Joo Ha

    2002-01-01

We performed an importance analysis of In-Service Testing (IST) components for Ulchin Unit 3 using the integrated evaluation method for categorizing component safety significance developed in this study. The importance analysis using the developed method is initiated by ranking the component importance using quantitative PSA information. The importance analysis of the IST components not modeled in the PSA is performed through engineering judgment, based on PSA expertise and on the quantitative and qualitative information for the IST components. The PSA scope for importance analysis includes not only Level 1 and 2 internal PSA but also Level 1 external and shutdown/low power operation PSA. The importance analysis results for valves show that 167 (26.55%) of the 629 IST valves are HSSCs and 462 (73.45%) are LSSCs. Those for pumps show that 28 (70%) of the 40 IST pumps are HSSCs and 12 (30%) are LSSCs. (authors)

  19. Independent component analysis in non-hypothesis driven metabolomics

    DEFF Research Database (Denmark)

    Li, Xiang; Hansen, Jakob; Zhao, Xinjie

    2012-01-01

In a non-hypothesis driven metabolomics approach, plasma samples collected at six different time points (before, during and after an exercise bout) were analyzed by gas chromatography-time of flight mass spectrometry (GC-TOF MS). Since independent component analysis (ICA) does not need a priori information on the investigated process and moreover can separate statistically independent source signals with non-Gaussian distribution, we aimed to elucidate the analytical power of ICA for the metabolic pattern analysis and the identification of key metabolites in this exercise study. A novel approach based on descriptive statistics was established to optimize the ICA model. In the GC-TOF MS data set the number of principal components after whitening and the number of independent components of ICA were optimized and systematically selected by descriptive statistics. The elucidated dominating independent...

  20. Validity of a four-factor model underlying the physical fitness in adults with intellectual disabilities: a confirmatory factor analysis

    OpenAIRE

    Cuesta-Vargas, Antonio; Solera Martinez, M; Rodriguez Moya, Alejandro; Perez, Y; Martinez Vizcaino, V

    2011-01-01

Purpose: To use confirmatory factor analysis to test whether a four-factor model might explain the clustering of the components of physical fitness in adults with intellectual disabilities (FID). Relevance: Individuals with intellectual disabilities (ID) are significantly weaker than individuals without ID at all stages of life. These subjects might be particularly susceptible to loss of basic function because of poor physical fitness. Participants: We studied 267 adults with intellectual...

  1. Detailed Structural Analysis of Critical Wendelstein 7-X Magnet System Components

    International Nuclear Information System (INIS)

    Egorov, K.

    2006-01-01

    The Wendelstein 7-X (W7-X) stellarator experiment is presently under construction and assembly in Greifswald, Germany. The goal of the experiment is to verify that the stellarator magnetic confinement concept is a viable option for a fusion reactor. The complex W7-X magnet system requires a multi-level approach to structural analysis for which two types of finite element models are used: Firstly, global models having reasonably coarse meshes with a number of simplifications and assumptions, and secondly, local models with detailed meshes of critical regions and elements. Widely known sub-modelling technique with boundary conditions extracted from the global models is one of the approaches for local analysis with high assessment efficiency. In particular, the winding pack (WP) of the magnet coils is simulated in the global model as a homogeneous orthotropic material with effective mechanical characteristic representing its real composite structure. This assumption allows assessing the whole magnet system in terms of general structural factors like forces and moments on the support elements, displacements of the main components, deformation and stress in the coil casings, etc. In a second step local models with a detailed description of more critical WP zones are considered in order to analyze their internal components like conductor jackets, turn insulation, etc. This paper provides an overview of local analyses of several critical W7-X magnet system components with particular attention on the coil winding packs. (author)

  2. Assessment of genetic divergence in tomato through agglomerative hierarchical clustering and principal component analysis

    International Nuclear Information System (INIS)

    Iqbal, Q.; Saleem, M.Y.; Hameed, A.; Asghar, M.

    2014-01-01

For the improvement of qualitative and quantitative traits, the existence of variability is of prime importance in plant breeding. Data on different morphological and reproductive traits of 47 tomato genotypes were analyzed by correlation, agglomerative hierarchical clustering and principal component analysis (PCA) to select genotypes and traits for a future breeding program. Correlation analysis revealed a significant positive association between yield and yield components such as fruit diameter, single fruit weight and number of fruits per plant. Principal component (PC) analysis depicted the first three PCs, with eigenvalues higher than 1, contributing 81.72% of the total variability for the different traits. PC-1 showed positive factor loadings for all the traits except number of fruits per plant; the contribution of single fruit weight and fruit diameter was highest in PC-1. Cluster analysis grouped all genotypes into five divergent clusters. The genotypes in cluster-II and cluster-V exhibited uniform maturity and higher yield. The D² statistic confirmed the highest distance between cluster-III and cluster-V, while maximum similarity was observed between cluster-II and cluster-III. It is therefore suggested that crosses of genotypes from cluster-II and cluster-V with those from cluster-I and cluster-III may exhibit heterosis in F1 for hybrid breeding and for selection of superior genotypes in succeeding generations of a cross-breeding programme. (author)

  3. Principal component analysis of cardiovascular risk traits in three generations cohort among Indian Punjabi population

    Directory of Open Access Journals (Sweden)

    Badaruddoza

    2015-09-01

Full Text Available The current study focused to determine significant cardiovascular risk factors through principal component factor analysis (PCFA) among three generations on 1827 individuals in three generations including 911 males (378 from offspring, 439 from parental and 94 from grand-parental generations) and 916 females (261 from offspring, 515 from parental and 140 from grandparental generations). The study performed PCFA with orthogonal rotation to reduce 12 inter-correlated variables into groups of independent factors. The factors have been identified as 2 for male grandparents, 3 for male offspring, female parents and female grandparents each, 4 for male parents and 5 for female offspring. This data reduction method identified these factors that explained 72%, 84%, 79%, 69%, 70% and 73% for male and female offspring, male and female parents and male and female grandparents respectively, of the variations in original quantitative traits. The factor 1 accounting for the largest portion of variations was strongly loaded with factors related to obesity (body mass index (BMI), waist circumference (WC), waist to hip ratio (WHR), and thickness of skinfolds) among all generations with both sexes, which has been known to be an independent predictor for cardiovascular morbidity and mortality. The second largest components, factor 2 and factor 3 for almost all generations reflected traits of blood pressure phenotypes loaded, however, in male offspring generation it was observed that factor 2 was loaded with blood pressure phenotypes as well as obesity. This study not only confirmed but also extended prior work by developing a cumulative risk scale from factor scores. Till today, such a cumulative and extensive scale has not been used in any Indian studies with individuals of three generations. These findings and study highlight the importance of global approach for assessing the risk and need for studies that elucidate how these different cardiovascular risk factors

  4. Principal component analysis of cardiovascular risk traits in three generations cohort among Indian Punjabi population.

    Science.gov (United States)

    Badaruddoza; Kumar, Raman; Kaur, Manpreet

    2015-09-01

    The current study focused to determine significant cardiovascular risk factors through principal component factor analysis (PCFA) among three generations on 1827 individuals in three generations including 911 males (378 from offspring, 439 from parental and 94 from grand-parental generations) and 916 females (261 from offspring, 515 from parental and 140 from grandparental generations). The study performed PCFA with orthogonal rotation to reduce 12 inter-correlated variables into groups of independent factors. The factors have been identified as 2 for male grandparents, 3 for male offspring, female parents and female grandparents each, 4 for male parents and 5 for female offspring. This data reduction method identified these factors that explained 72%, 84%, 79%, 69%, 70% and 73% for male and female offspring, male and female parents and male and female grandparents respectively, of the variations in original quantitative traits. The factor 1 accounting for the largest portion of variations was strongly loaded with factors related to obesity (body mass index (BMI), waist circumference (WC), waist to hip ratio (WHR), and thickness of skinfolds) among all generations with both sexes, which has been known to be an independent predictor for cardiovascular morbidity and mortality. The second largest components, factor 2 and factor 3 for almost all generations reflected traits of blood pressure phenotypes loaded, however, in male offspring generation it was observed that factor 2 was loaded with blood pressure phenotypes as well as obesity. This study not only confirmed but also extended prior work by developing a cumulative risk scale from factor scores. Till today, such a cumulative and extensive scale has not been used in any Indian studies with individuals of three generations. These findings and study highlight the importance of global approach for assessing the risk and need for studies that elucidate how these different cardiovascular risk factors interact with
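
    A minimal sketch of factor analysis with orthogonal (varimax) rotation on a subjects-by-traits matrix is given below using scikit-learn; rotation='varimax' assumes a reasonably recent scikit-learn release (0.24 or later). The traits are synthetic stand-ins for BMI, waist circumference, blood-pressure phenotypes and so on, not the study's measurements.

```python
# Hedged sketch: factor analysis with varimax rotation on synthetic traits.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(11)
n_subjects, n_traits = 400, 12
latent = rng.standard_normal((n_subjects, 3))        # pretend obesity / BP / other axes
loadings = rng.standard_normal((3, n_traits))
X = latent @ loadings + 0.5 * rng.standard_normal((n_subjects, n_traits))

Xz = StandardScaler().fit_transform(X)
fa = FactorAnalysis(n_components=3, rotation="varimax").fit(Xz)

print("rotated loadings (traits x factors):")
print(np.round(fa.components_.T, 2))
scores = fa.transform(Xz)    # factor scores that could feed a cumulative risk scale
```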

  5. WRKY Transcription Factors: Key Components in Abscisic Acid Signaling

    Science.gov (United States)

    2011-01-01

WRKY transcription factors are components of signalling networks that take inputs from numerous stimuli and are involved in mediating responses to numerous phytohormones, including salicylic acid, jasmonic acid, ABA and GA. These roles in multiple signalling pathways may in turn partly explain the pleiotropic effects commonly seen when TF genes are ... (Review article by Deena L. Rushton, Prateek Tripathi, Roel C. Rabara, Jun Lin ...)

  6. Using Factor Analysis to Identify Topic Preferences Within MBA Courses

    Directory of Open Access Journals (Sweden)

    Earl Chrysler

    2003-02-01

Full Text Available This study demonstrates the role of a principal components factor analysis in conducting a gap analysis of the desired characteristics of business alumni. Typically, gap analyses merely compare the emphases that should be given to areas of inquiry with perceptions of actual emphases. As a result, the focus is upon depth of coverage. A neglected area in need of investigation is the breadth of topic dimensions and their differences between the normative (should offer) and the descriptive (actually offer). The implications of factor structures, as well as traditional gap analyses, are developed and discussed in the context of outcomes assessment.

  7. Factor Analysis of the Brazilian Version of UPPS Impulsive Behavior Scale

    Science.gov (United States)

    Sediyama, Cristina Y. N.; Moura, Ricardo; Garcia, Marina S.; da Silva, Antonio G.; Soraggi, Carolina; Neves, Fernando S.; Albuquerque, Maicon R.; Whiteside, Setephen P.; Malloy-Diniz, Leandro F.

    2017-01-01

Objective: To examine the internal consistency and factor structure of the Brazilian adaptation of the UPPS Impulsive Behavior Scale. Methods: UPPS is a self-report scale composed of 40 items assessing four factors of impulsivity: (a) urgency, (b) lack of premeditation; (c) lack of perseverance; (d) sensation seeking. In the present study, 384 participants (278 women and 106 men), recruited from schools, universities, leisure centers and workplaces, completed the UPPS scale. An exploratory factor analysis was performed using Varimax factor rotation and Kaiser Normalization, and we also conducted two confirmatory analyses to test the independence of the UPPS components found in the previous analysis. Results: Results showed a decrease in mean UPPS total scores with age; the youngest participants (below 30 years) scored significantly higher than the groups over 30 years. No gender difference was found. Cronbach’s alpha results indicated satisfactory values for all subscales, with similarly high values across subscales; confirmatory factor analysis indexes indicated a poor model fit. The results of the two exploratory factor analyses were satisfactory. Conclusion: Our results showed that the Portuguese version has the same four-factor structure as the original and previous translations of the UPPS. PMID:28484414

  8. Factor Analysis of the Brazilian Version of UPPS Impulsive Behavior Scale

    Directory of Open Access Journals (Sweden)

    Leandro F. Malloy-Diniz

    2017-04-01

Full Text Available Objective: To examine the internal consistency and factor structure of the Brazilian adaptation of the UPPS Impulsive Behavior Scale. Methods: UPPS is a self-report scale composed of 40 items assessing four factors of impulsivity: (a) urgency, (b) lack of premeditation; (c) lack of perseverance; (d) sensation seeking. In the present study, 384 participants (278 women and 106 men), recruited from schools, universities, leisure centers and workplaces, completed the UPPS scale. An exploratory factor analysis was performed using Varimax factor rotation and Kaiser Normalization, and we also conducted two confirmatory analyses to test the independence of the UPPS components found in the previous analysis. Results: Results showed a decrease in mean UPPS total scores with age; the youngest participants (below 30 years) scored significantly higher than the groups over 30 years. No gender difference was found. Cronbach’s alpha results indicated satisfactory values for all subscales, with similarly high values across subscales; confirmatory factor analysis indexes indicated a poor model fit. The results of the two exploratory factor analyses were satisfactory. Conclusion: Our results showed that the Portuguese version has the same four-factor structure as the original and previous translations of the UPPS.

  9. Factor Analysis of the Brazilian Version of UPPS Impulsive Behavior Scale.

    Science.gov (United States)

    Sediyama, Cristina Y N; Moura, Ricardo; Garcia, Marina S; da Silva, Antonio G; Soraggi, Carolina; Neves, Fernando S; Albuquerque, Maicon R; Whiteside, Setephen P; Malloy-Diniz, Leandro F

    2017-01-01

Objective: To examine the internal consistency and factor structure of the Brazilian adaptation of the UPPS Impulsive Behavior Scale. Methods: UPPS is a self-report scale composed of 40 items assessing four factors of impulsivity: (a) urgency, (b) lack of premeditation; (c) lack of perseverance; (d) sensation seeking. In the present study, 384 participants (278 women and 106 men), recruited from schools, universities, leisure centers and workplaces, completed the UPPS scale. An exploratory factor analysis was performed using Varimax factor rotation and Kaiser Normalization, and we also conducted two confirmatory analyses to test the independence of the UPPS components found in the previous analysis. Results: Results showed a decrease in mean UPPS total scores with age; the youngest participants (below 30 years) scored significantly higher than the groups over 30 years. No gender difference was found. Cronbach's alpha results indicated satisfactory values for all subscales, with similarly high values across subscales; confirmatory factor analysis indexes indicated a poor model fit. The results of the two exploratory factor analyses were satisfactory. Conclusion: Our results showed that the Portuguese version has the same four-factor structure as the original and previous translations of the UPPS.

  10. PASCAL, Probabilistic Fracture Mechanics Analysis of Structural Components in Aging LWR

    International Nuclear Information System (INIS)

    Shibata, Katsuyuki; Onizawa, Kunio; Li, Yinsheng; Kato, Daisuke

    2005-01-01

A - Description of program or function: PASCAL (PFM Analysis of Structural Components in Aging LWR) is a PFM (Probabilistic Fracture Mechanics) code for evaluating the failure probability of aged pressure components. PASCAL has been developed as part of JAERI's research program on aging and structural integrity of LWR components, in order to respond to the increasing need for probabilistic methodology in the regulation and inspection of nuclear components, with the objective of providing a rational tool for the evaluation of the reliability and integrity of structural components. In order to improve the accuracy and reliability of the analysis code, some new fracture mechanics models and computational techniques are introduced, considering the recent progress in the state of the art and the performance of PCs. New analysis models and original methodologies introduced in PASCAL include an elastic-plastic fracture criterion based on the R6 method and a new crack extension model for semi-elliptical crack evaluation. Moreover, a function to evaluate the effect of embrittlement recovery by annealing of an irradiated RPV is also introduced in the code, based on USNRC R.G. 1.162 (1996). The code has been verified through various failure analysis results and the international PTS round-robin analysis ICAS organized by Principal Working Group 3 of OECD/NEA/CSNI. In order to attain high usability, PASCAL Ver.1 provides, together with a GUI, an exclusive FEM pre-processor (Pre-PASCAL) for generating the input load transient data, a GUI system for generating the input data for the PASCAL main processor (the main solver), and a post-processor for the output data. - Pre-PASCAL: Pre-PASCAL is an exclusive 3-D FEM pre-processor for generating the input transient data, provided with 3 RPV mesh models and two simple specimen mesh models, i.e. CT and CCP. Almost the same input data format as that of the PASCAL main processor is used. Output data of temperature and stress distribution

  11. What Governs Lorentz Factors of Jet Components in Blazars? Xinwu ...

    Indian Academy of Sciences (India)

    Abstract. We use a sample of radio-loud Active Galactic Nuclei. (AGNs) with measured black hole masses to explore the jet formation mechanisms in these sources. We find a significant correlation between black hole mass and the bulk Lorentz factor of the jet components for this sample, while no significant correlation is ...

  12. The application of iterative transformation factor analysis to resolve multi-component EXAFS spectra of uranium(VI) complexes with acetic acid as a function of pH

    International Nuclear Information System (INIS)

    Robberg, A.; Reich, T.

    2002-01-01

Synchrotron-based EXAFS spectroscopy is a powerful technique to obtain structural information on radionuclide complexes in solution. Depending on the chemical conditions of the samples, several radionuclide species can coexist in the solution, as is often the case for environmentally related samples. All radionuclide species, which may have different near-neighbour environments, contribute to the measured EXAFS signal. In order to isolate the EXAFS spectra of the individual species (pure spectral components), it is necessary, in a first step, to measure a series of samples where their composition is changed by variation of one physico-chemical parameter (e.g. pH, concentration, etc.). For the spectral decomposition it is necessary that the EXAFS signal changes as a function of the chosen physico-chemical parameter. In a second step, the series of EXAFS spectra is analysed with eigenanalysis and Iterative Transformation Factor Analysis (ITFA). As a result of the ITFA one obtains: a) for each sample the relative concentration of the structurally distinguishable species and b) their corresponding pure spectral components. From the information obtained in a), one can construct a speciation diagram. The pure spectral components contain the structural information of the individual species, which can be extracted by conventional EXAFS analysis. To evaluate our ITFA algorithm for EXAFS analysis of mixtures, we prepared a series of eight solution samples of 0.05 M uranium(VI) and 1 M acetate (Ac) in the pH range of 0.1 to 4.5. From thermodynamic constants it is known that under these conditions up to four species can occur: uranyl hydrate, and the 1:1, 1:2 and 1:3 complexes of uranyl acetate. The uranium L_III-edge EXAFS spectra were measured at room temperature in transmission mode at the Rossendorf Beamline (ROBL) at the ESRF. The average bond length between uranium and the equatorial oxygen atoms (O_eq) increases from 2.40 to 2.46 angstrom with increasing pH. This increase

  13. Real Time Engineering Analysis Based on a Generative Component Implementation

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Klitgaard, Jens

    2007-01-01

    The present paper outlines the idea of a conceptual design tool with real time engineering analysis which can be used in the early conceptual design phase. The tool is based on a parametric approach using Generative Components with embedded structural analysis. Each of these components uses the g...

  14. Abstract Interfaces for Data Analysis Component Architecture for Data Analysis Tools

    CERN Document Server

    Barrand, G; Dönszelmann, M; Johnson, A; Pfeiffer, A

    2001-01-01

The fast turnover of software technologies, in particular in the domain of interactivity (covering user interface and visualisation), makes it difficult for a small group of people to produce complete and polished software tools before the underlying technologies make them obsolete. At the HepVis '99 workshop, a working group was formed to improve the production of software tools for data analysis in HENP. Besides promoting a distributed development organisation, one goal of the group is to systematically design a set of abstract interfaces using modern OO analysis and design techniques. An initial domain analysis has come up with several categories (components) found in typical data analysis tools: Histograms, Ntuples, Functions, Vectors, Fitter, Plotter, Analyzer and Controller. Special emphasis was put on reducing the couplings between the categories to a minimum, thus optimising re-use and maintainability of any component individually. The interfaces have been defined in Java and C++ and i...
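
    In the same spirit (though not the actual interfaces defined by the working group), the Python sketch below shows how small abstract interfaces keep the coupling between two such categories, a histogram and a plotter, to a minimum: the plotter depends only on the histogram interface, so either side can be replaced independently.

```python
# Illustrative abstract interfaces for loosely coupled analysis components.
from abc import ABC, abstractmethod
from typing import Sequence

class IHistogram(ABC):
    @abstractmethod
    def fill(self, value: float, weight: float = 1.0) -> None: ...
    @abstractmethod
    def bin_contents(self) -> Sequence[float]: ...

class IPlotter(ABC):
    @abstractmethod
    def plot(self, histogram: IHistogram) -> None: ...

class FixedBinHistogram(IHistogram):
    def __init__(self, n_bins: int, lo: float, hi: float) -> None:
        self._edges = [lo + i * (hi - lo) / n_bins for i in range(n_bins + 1)]
        self._counts = [0.0] * n_bins
    def fill(self, value: float, weight: float = 1.0) -> None:
        for i in range(len(self._counts)):
            if self._edges[i] <= value < self._edges[i + 1]:
                self._counts[i] += weight
                return
    def bin_contents(self) -> Sequence[float]:
        return list(self._counts)

class TextPlotter(IPlotter):
    def plot(self, histogram: IHistogram) -> None:
        for i, c in enumerate(histogram.bin_contents()):
            print(f"bin {i:2d} | {'#' * int(c)}")

h = FixedBinHistogram(5, 0.0, 1.0)
for x in (0.1, 0.15, 0.4, 0.45, 0.47, 0.9):
    h.fill(x)
TextPlotter().plot(h)   # the plotter only knows the IHistogram interface
```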

  15. Foundations of factor analysis

    CERN Document Server

    Mulaik, Stanley A

    2009-01-01

Introduction: Factor Analysis and Structural Theories; Brief History of Factor Analysis as a Linear Model; Example of Factor Analysis. Mathematical Foundations for Factor Analysis: Introduction; Scalar Algebra; Vectors; Matrix Algebra; Determinants; Treatment of Variables as Vectors; Maxima and Minima of Functions. Composite Variables and Linear Transformations: Introduction; Composite Variables; Unweighted Composite Variables; Differentially Weighted Composites; Matrix Equations; Multi

  16. Component Analysis of Long-Lag, Wide-Pulse Gamma-Ray Burst ...

    Indian Academy of Sciences (India)

    Principal Component Analysis of Long-Lag, Wide-Pulse Gamma-Ray. Burst Data. Zhao-Yang Peng. ∗. & Wen-Shuai Liu. Department of Physics, Yunnan Normal University, Kunming 650500, China. ∗ e-mail: pzy@ynao.ac.cn. Abstract. We have carried out a Principal Component Analysis (PCA) of the temporal and spectral ...

  17. Determinants of Return on Assets in Romania: A Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Sorana Vatavu

    2015-03-01

    Full Text Available This paper examines the impact of capital structure, as well as its determinants on the financial performance of Romanian companies listed on the Bucharest Stock Exchange. The analysis is based on cross sectional regressions and factor analysis, and it refers to a ten-year period (2003-2012. Return on assets (ROA is the performance proxy, while the capital structure indicator is debt ratio. Regression results indicate that Romanian companies register higher returns when they operate with limited borrowings. Among the capital structure determinants, tangibility and business risk have a negative impact on ROA, but the level of taxation has a positive effect, showing that companies manage their assets more efficiently during times of higher fiscal pressure. Performance is sustained by sales turnover, but not significantly influenced by high levels of liquidity. Periods of unstable economic conditions, reflected by high inflation rates and the current financial crisis, have a strong negative impact on corporate performance. Based on regression results, three factors were considered through the method of iterated principal component factors: the first one incorporates debt and size, as an indicator of consumption, the second one integrates the influence of tangibility and liquidity, marking the investment potential, and the third one is an indicator of assessed risk, integrating the volatility of earnings with the level of taxation. ROA is significantly influenced by these three factors, regardless the regression method used. The consumption factor has a negative impact on performance, while the investment and risk variables positively influence ROA.

  18. Information technology portfolio in supply chain management using factor analysis

    Directory of Open Access Journals (Sweden)

    Ahmad Jaafarnejad

    2013-11-01

Full Text Available The adoption of information technology (IT) along with supply chain management (SCM) has increasingly become a necessity among most businesses. This enhances supply chain (SC) performance and helps companies achieve organizational competitiveness. IT systems capture and analyze information and enable management to make decisions by considering a global scope across the entire SC. This paper reviews the existing literature on IT in SCM and considers pertinent criteria. Using principal component analysis (PCA) of factor analysis (FA), a number of related criteria are divided into smaller groups. Finally, SC managers can develop an IT portfolio in SCM using the mean values of a few extracted components on the relevance–emergency matrix. A numerical example is provided to explain details of the proposed method.

  19. Use of Sparse Principal Component Analysis (SPCA) for Fault Detection

    DEFF Research Database (Denmark)

    Gajjar, Shriram; Kulahci, Murat; Palazoglu, Ahmet

    2016-01-01

    Principal component analysis (PCA) has been widely used for data dimension reduction and process fault detection. However, interpreting the principal components and the outcomes of PCA-based monitoring techniques is a challenging task since each principal component is a linear combination of the ...
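
    The interpretability argument can be illustrated with scikit-learn's SparsePCA: sparse loadings involve only a few variables, so a drifting component points to a short list of candidate process variables. The data below are synthetic and the alpha value is an arbitrary illustrative choice.

```python
# Hedged sketch: dense PCA loadings vs sparse PCA loadings on synthetic data.
import numpy as np
from sklearn.decomposition import PCA, SparsePCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n_samples, n_vars = 500, 10
X = rng.standard_normal((n_samples, n_vars))
X[:, 0] += 2.0 * X[:, 1]          # a correlated pair the leading component should pick up

Xz = StandardScaler().fit_transform(X)
pca = PCA(n_components=3).fit(Xz)
spca = SparsePCA(n_components=3, alpha=1.0, random_state=0).fit(Xz)

print("dense PC1 loadings :", np.round(pca.components_[0], 2))
print("sparse PC1 loadings:", np.round(spca.components_[0], 2))
```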

  20. Integrating Data Transformation in Principal Components Analysis

    KAUST Repository

    Maadooliat, Mehdi; Huang, Jianhua Z.; Hu, Jianhua

    2015-01-01

    Principal component analysis (PCA) is a popular dimension reduction method to reduce the complexity and obtain the informative aspects of high-dimensional datasets. When the data distribution is skewed, data transformation is commonly used prior

  1. NEPR Principle Component Analysis - NOAA TIFF Image

    Data.gov (United States)

National Oceanic and Atmospheric Administration, Department of Commerce — This GeoTiff is a representation of seafloor topography in Northeast Puerto Rico derived from a bathymetry model with a principal component analysis (PCA). The area...

  2. Multi-Scale Factor Analysis of High-Dimensional Brain Signals

    KAUST Repository

    Ting, Chee-Ming

    2017-05-18

In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive spatio-temporal data defined over the complex networks into a finite set of regional clusters. To achieve further dimension reduction, we represent the signals in each cluster by a small number of latent factors. The correlation matrix for all nodes in the network is approximated by lower-dimensional sub-structures derived from the cluster-specific factors. To estimate regional connectivity between numerous nodes (within each cluster), we apply principal components analysis (PCA) to produce factors which are derived as the optimal reconstruction of the observed signals under the squared loss. Then, we estimate global connectivity (between clusters or sub-networks) based on the factors across regions using the RV-coefficient as the cross-dependence measure. This gives a reliable and computationally efficient multi-scale analysis of both regional and global dependencies of the large networks. The proposed novel approach is applied to estimate brain connectivity networks using functional magnetic resonance imaging (fMRI) data. Results on resting-state fMRI reveal interesting modular and hierarchical organization of human brain networks during rest.
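
    A much-simplified sketch of the two-stage idea, PCA factors within each cluster followed by the RV coefficient between cluster factor scores as a global dependence measure, is given below with synthetic signals; the cluster sizes and number of factors are arbitrary choices, not those of the MSFA model.

```python
# Hedged sketch: per-cluster PCA factors and an RV-coefficient between clusters.
import numpy as np
from sklearn.decomposition import PCA

def rv_coefficient(X, Y):
    """RV coefficient between two column-centred matrices with matching rows."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Sx, Sy = X @ X.T, Y @ Y.T
    return np.trace(Sx @ Sy) / np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))

rng = np.random.default_rng(9)
n_time = 300
cluster_a = rng.standard_normal((n_time, 40))        # e.g. 40 nodes in region A
cluster_b = rng.standard_normal((n_time, 55))        # e.g. 55 nodes in region B
cluster_b[:, :5] += cluster_a[:, :5]                 # induce some shared signal

factors_a = PCA(n_components=3).fit_transform(cluster_a)
factors_b = PCA(n_components=3).fit_transform(cluster_b)
print(f"RV(A factors, B factors) = {rv_coefficient(factors_a, factors_b):.3f}")
```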

  3. Using exploratory factor analysis in personality research: Best-practice recommendations

    Directory of Open Access Journals (Sweden)

    Sumaya Laher

    2010-11-01

Research purpose: This article presents more objective methods to determine the number of factors, most notably parallel analysis and Velicer’s minimum average partial (MAP). The benefits of rotation are also discussed. The article argues for more consistent use of Procrustes rotation and congruence coefficients in factor analytic studies. Motivation for the study: Exploratory factor analysis is often criticised for not being rigorous and objective enough in terms of the methods used to determine the number of factors, the rotations to be used and ultimately the validity of the factor structure. Research design, approach and method: The article adopts a theoretical stance to discuss the best-practice recommendations for factor analytic research in the field of psychology. Following this, an example located within personality assessment and using the NEO-PI-R specifically is presented. A total of 425 students at the University of the Witwatersrand completed the NEO-PI-R. These responses were subjected to a principal components analysis using varimax rotation. The rotated solution was subjected to a Procrustes rotation with Costa and McCrae’s (1992) matrix as the target matrix. Congruence coefficients were also computed. Main findings: The example indicates the use of the methods recommended in the article and demonstrates an objective way of determining the number of factors. It also provides an example of Procrustes rotation with coefficients of agreement as an indication of how factor analytic results may be presented more rigorously in local research. Practical/managerial implications: It is hoped that the recommendations in this article will have best-practice implications for both researchers and practitioners in the field who employ factor analysis regularly. Contribution/value-add: This article will prove useful to all researchers employing factor analysis and has the potential to set the trend for better use of factor analysis in the South African context.
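
    Horn's parallel analysis, one of the objective retention methods recommended above, can be sketched compactly: retain components whose observed eigenvalues exceed the mean eigenvalues obtained from random data of the same shape. The item responses below are simulated, not the NEO-PI-R data.

```python
# Hedged sketch: Horn's parallel analysis on simulated item responses.
import numpy as np

def parallel_analysis(X, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
    rand_eig = np.zeros((n_iter, p))
    for i in range(n_iter):
        R = rng.standard_normal((n, p))
        rand_eig[i] = np.linalg.eigvalsh(np.corrcoef(R, rowvar=False))[::-1]
    threshold = rand_eig.mean(axis=0)
    return int(np.sum(obs_eig > threshold)), obs_eig, threshold

rng = np.random.default_rng(1)
latent = rng.standard_normal((425, 5))                  # pretend five latent domains
items = latent @ rng.standard_normal((5, 30)) + rng.standard_normal((425, 30))
n_factors, obs, thr = parallel_analysis(items)
print("factors to retain:", n_factors)
```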

  4. A Principal Component Analysis of 39 Scientific Impact Measures

    Science.gov (United States)

    Bollen, Johan; Van de Sompel, Herbert

    2009-01-01

    Background The impact of scientific publications has traditionally been expressed in terms of citation counts. However, scientific activity has moved online over the past decade. To better capture scientific impact in the digital era, a variety of new impact measures has been proposed on the basis of social network analysis and usage log data. Here we investigate how these new measures relate to each other, and how accurately and completely they express scientific impact. Methodology We performed a principal component analysis of the rankings produced by 39 existing and proposed measures of scholarly impact that were calculated on the basis of both citation and usage log data. Conclusions Our results indicate that the notion of scientific impact is a multi-dimensional construct that can not be adequately measured by any single indicator, although some measures are more suitable than others. The commonly used citation Impact Factor is not positioned at the core of this construct, but at its periphery, and should thus be used with caution. PMID:19562078

  5. A principal component analysis of 39 scientific impact measures.

    Directory of Open Access Journals (Sweden)

    Johan Bollen

    Full Text Available BACKGROUND: The impact of scientific publications has traditionally been expressed in terms of citation counts. However, scientific activity has moved online over the past decade. To better capture scientific impact in the digital era, a variety of new impact measures has been proposed on the basis of social network analysis and usage log data. Here we investigate how these new measures relate to each other, and how accurately and completely they express scientific impact. METHODOLOGY: We performed a principal component analysis of the rankings produced by 39 existing and proposed measures of scholarly impact that were calculated on the basis of both citation and usage log data. CONCLUSIONS: Our results indicate that the notion of scientific impact is a multi-dimensional construct that can not be adequately measured by any single indicator, although some measures are more suitable than others. The commonly used citation Impact Factor is not positioned at the core of this construct, but at its periphery, and should thus be used with caution.

  6. Principal components analysis of an evaluation of the hemiplegic subject based on the Bobath approach.

    Science.gov (United States)

    Corriveau, H; Arsenault, A B; Dutil, E; Lepage, Y

    1992-01-01

An evaluation based on the Bobath approach to treatment has previously been developed and partially validated. The purpose of the present study was to verify the content validity of this evaluation with the use of a statistical approach known as principal components analysis. Thirty-eight hemiplegic subjects participated in the study. The scores on each of six parameters (sensorium, active movements, muscle tone, reflex activity, postural reactions, and pain) were analyzed on three occasions across a 2-month period. Each time this produced three factors that contained 70% of the variation in the data set. The first component mainly reflected variations in mobility, the second mainly variations in muscle tone, and the third mainly variations in sensorium and pain. The results of such an exploratory analysis highlight the fact that some of the parameters are not only important but also interrelated. These results seem to partially support the conceptual framework substantiating the Bobath approach to treatment.

  7. Principal-component analysis of two-particle azimuthal correlations in PbPb and pPb collisions at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Sirunyan, Albert M; et al.

    2017-08-23

For the first time a principal-component analysis is used to separate out different orthogonal modes of the two-particle correlation matrix from heavy ion collisions. The analysis uses data from sqrt(s[NN]) = 2.76 TeV PbPb and sqrt(s[NN]) = 5.02 TeV pPb collisions collected by the CMS experiment at the LHC. Two-particle azimuthal correlations have been extensively used to study hydrodynamic flow in heavy ion collisions. Recently it has been shown that the expected factorization of two-particle results into a product of the constituent single-particle anisotropies is broken. The new information provided by these modes may shed light on the breakdown of flow factorization in heavy ion collisions. The first two modes ("leading" and "subleading") of two-particle correlations are presented for elliptical and triangular anisotropies in PbPb and pPb collisions as a function of pt over a wide range of event activity. The leading mode is found to be essentially equivalent to the anisotropy harmonic previously extracted from two-particle correlation methods. The subleading mode represents a new experimental observable and is shown to account for a large fraction of the factorization breaking recently observed at high transverse momentum. The principal-component analysis technique has also been applied to multiplicity fluctuations. These also show a subleading mode. The connection of these new results to previous studies of factorization is discussed.

  8. Independent component analysis for automatic note extraction from musical trills

    Science.gov (United States)

    Brown, Judith C.; Smaragdis, Paris

    2004-05-01

    The method of principal component analysis, which is based on second-order statistics (or linear independence), has long been used for redundancy reduction of audio data. The more recent technique of independent component analysis, enforcing much stricter statistical criteria based on higher-order statistical independence, is introduced and shown to be far superior in separating independent musical sources. This theory has been applied to piano trills and a database of trill rates was assembled from experiments with a computer-driven piano, recordings of a professional pianist, and commercially available compact disks. The method of independent component analysis has thus been shown to be an outstanding, effective means of automatically extracting interesting musical information from a sea of redundant data.
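
    The contrast between second-order decorrelation and higher-order independence can be demonstrated with synthetic tones in place of piano recordings, as in the hedged sketch below: FastICA recovers the mixed sources, while PCA merely decorrelates the channels.

```python
# Hedged sketch: PCA vs FastICA on a two-channel mixture of synthetic "notes".
import numpy as np
from sklearn.decomposition import PCA, FastICA

fs, dur = 8000, 1.0
t = np.arange(int(fs * dur)) / fs
s1 = np.sin(2 * np.pi * 440 * t)                     # "note" 1
s2 = np.sign(np.sin(2 * np.pi * 523 * t))            # "note" 2 (non-Gaussian)
S = np.c_[s1, s2]
A = np.array([[1.0, 0.6], [0.7, 1.0]])               # mixing matrix
X = S @ A.T                                          # two-channel mixture

pca_est = PCA(n_components=2, whiten=True).fit_transform(X)
ica_est = FastICA(n_components=2, random_state=0).fit_transform(X)

def corr_with_sources(est):
    """Absolute correlations between estimated components and true sources."""
    return np.round(np.abs(np.corrcoef(est.T, S.T)[:2, 2:]), 2)

print("PCA |corr| with sources:\n", corr_with_sources(pca_est))
print("ICA |corr| with sources:\n", corr_with_sources(ica_est))
```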

  9. PCA: Principal Component Analysis for spectra modeling

    Science.gov (United States)

    Hurley, Peter D.; Oliver, Seb; Farrah, Duncan; Wang, Lingyu; Efstathiou, Andreas

    2012-07-01

The mid-infrared spectra of ultraluminous infrared galaxies (ULIRGs) contain a variety of spectral features that can be used as diagnostics to characterize the spectra. However, such diagnostics are biased by our prior prejudices on the origin of the features. Moreover, by using only part of the spectrum they do not utilize the full information content of the spectra. Blind statistical techniques such as principal component analysis (PCA) consider the whole spectrum, find correlated features and separate them out into distinct components. This code, written in IDL, classifies principal components of IRS spectra to define a new classification scheme using 5D Gaussian mixture modelling. The five PCs and the average spectra of the four classifications, used to classify objects, are made available with the code.
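
    A schematic Python analogue of that pipeline (the original code is written in IDL) is shown below: project spectra onto five principal components and cluster the scores with a Gaussian mixture model. The synthetic spectra, component counts and mixture size are illustrative assumptions only.

```python
# Hedged sketch: PCA scores of spectra clustered with a Gaussian mixture model.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
n_spectra, n_wavelengths = 120, 200
wl = np.linspace(5, 35, n_wavelengths)               # pretend mid-IR wavelength grid
centres = rng.choice([10.0, 18.0, 26.0], size=n_spectra)
spectra = (np.exp(-((wl - centres[:, None]) / 2.5) ** 2)
           + 0.05 * rng.standard_normal((n_spectra, n_wavelengths)))

scores = PCA(n_components=5).fit_transform(spectra)  # five PCs, as in the scheme above
gmm = GaussianMixture(n_components=4, random_state=0).fit(scores)
classes = gmm.predict(scores)
print("objects per class:", np.bincount(classes))
```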

  10. BUSINESS PROCESS MANAGEMENT SYSTEMS TECHNOLOGY COMPONENTS ANALYSIS

    Directory of Open Access Journals (Sweden)

    Andrea Giovanni Spelta

    2007-05-01

Full Text Available The information technology that supports the implementation of the business process management approach is called a Business Process Management System (BPMS). The main components of the BPMS solution framework are the process definition repository, process instances repository, transaction manager, connectors framework, process engine and middleware. In this paper we define and characterize the role and importance of the components of the BPMS framework. The research method adopted was the case study, through the analysis of the implementation of the BPMS solution in an insurance company called Chubb do Brasil. In the case study, the process "Manage Coinsured Events" is described and characterized, as well as the components of the BPMS solution adopted and implemented by Chubb do Brasil for managing this process.

  11. THE STUDY OF THE CHARACTERIZATION INDICES OF FABRICS BY PRINCIPAL COMPONENT ANALYSIS METHOD

    OpenAIRE

    HRISTIAN Liliana; OSTAFE Maria Magdalena; BORDEIANU Demetra Lacramioara; APOSTOL Laura Liliana

    2017-01-01

    The paper was pursued to prioritize the worsted fabrics type, for the manufacture of outerwear products by characterization indeces of fabrics, using the mathematical model of Principal Component Analysis (PCA). There are a number of variables with a certain influence on the quality of fabrics, but some of these variables are more important than others, so it is useful to identify those variables to a better understanding the factors which can lead the improving of the fabrics quality. A s...

  12. The search for putative unifying genetic factors for components of the metabolic syndrome

    DEFF Research Database (Denmark)

    Sjögren, M; Lyssenko, V; Jonsson, Anna Elisabet

    2008-01-01

The metabolic syndrome is a cluster of factors contributing to increased risk of cardiovascular disease and type 2 diabetes, but unifying mechanisms have not been identified. Our aim was to study whether common variations in 17 genes previously associated with type 2 diabetes or components of the metabolic syndrome, and variants in nine genes with inconsistent association with at least two components of the metabolic syndrome, would also predict future development of components of the metabolic syndrome, individually or in combination.

  13. Stackable Form-Factor Peripheral Component Interconnect Device and Assembly

    Science.gov (United States)

    Somervill, Kevin M. (Inventor); Ng, Tak-kwong (Inventor); Torres-Pomales, Wilfredo (Inventor); Malekpour, Mahyar R. (Inventor)

    2013-01-01

    A stackable form-factor Peripheral Component Interconnect (PCI) device can be configured as a host controller or a master/target for use on a PCI assembly. PCI device may comprise a multiple-input switch coupled to a PCI bus, a multiplexor coupled to the switch, and a reconfigurable device coupled to one of the switch and multiplexor. The PCI device is configured to support functionality from power-up, and either control function or add-in card function.

  14. Efficacy of the Principal Components Analysis Techniques Using ...

    African Journals Online (AJOL)

    Second, the paper reports results of principal components analysis after the artificial data were submitted to three commonly used procedures; scree plot, Kaiser rule, and modified Horn's parallel analysis, and demonstrate the pedagogical utility of using artificial data in teaching advanced quantitative concepts. The results ...

  15. A Principal Component Analysis of Skills and Competencies Required of Quantity Surveyors: Nigerian Perspective

    OpenAIRE

    Oluwasuji Dada, Joshua

    2014-01-01

The purpose of this paper is to examine the intrinsic relationships among sets of quantity surveyors’ skill and competence variables with a view to reducing them to principal components. The research adopts a data reduction approach using the factor analysis statistical technique. A structured questionnaire was administered among major stakeholders in the Nigerian construction industry. The respondents were asked to give ratings, on a 5-point Likert scale, on skills and competencies re...

  16. Functional Generalized Structured Component Analysis.

    Science.gov (United States)

    Suk, Hye Won; Hwang, Heungsun

    2016-12-01

    An extension of Generalized Structured Component Analysis (GSCA), called Functional GSCA, is proposed to analyze functional data that are considered to arise from an underlying smooth curve varying over time or other continua. GSCA has been geared for the analysis of multivariate data. Accordingly, it cannot deal with functional data that often involve different measurement occasions across participants and a large number of measurement occasions that exceed the number of participants. Functional GSCA addresses these issues by integrating GSCA with spline basis function expansions that represent infinite-dimensional curves onto a finite-dimensional space. For parameter estimation, functional GSCA minimizes a penalized least squares criterion by using an alternating penalized least squares estimation algorithm. The usefulness of functional GSCA is illustrated with gait data.

  17. Exploring functional data analysis and wavelet principal component analysis on ecstasy (MDMA) wastewater data

    Directory of Open Access Journals (Sweden)

    Stefania Salvatore

    2016-07-01

    Abstract Background Wastewater-based epidemiology (WBE) is a novel approach in drug use epidemiology which aims to monitor the extent of use of various drugs in a community. In this study, we investigate functional principal component analysis (FPCA) as a tool for analysing WBE data and compare it to traditional principal component analysis (PCA) and to wavelet principal component analysis (WPCA), which is more flexible temporally. Methods We analysed temporal wastewater data from 42 European cities collected daily over one week in March 2013. The main temporal features of ecstasy (MDMA) were extracted using FPCA with both Fourier and B-spline basis functions and three different smoothing parameters, along with PCA and WPCA with different mother wavelets and shrinkage rules. The stability of FPCA was explored through bootstrapping and analysis of sensitivity to missing data. Results The first three principal components (PCs), functional principal components (FPCs) and wavelet principal components (WPCs) explained 87.5-99.6 % of the temporal variation between cities, depending on the choice of basis and smoothing. The extracted temporal features from PCA, FPCA and WPCA were consistent. FPCA using a Fourier basis and common-optimal smoothing was the most stable and least sensitive to missing data. Conclusion FPCA is a flexible and analytically tractable method for analysing temporal changes in wastewater data, and is robust to missing data. WPCA did not reveal any rapid temporal changes in the data not captured by FPCA. Overall the results suggest FPCA with Fourier basis functions and a common-optimal smoothing parameter as the most accurate approach when analysing WBE data.
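
    The functional-PCA idea sketched in this record can be illustrated with a minimal Python fragment (assuming only NumPy): each city's daily series is projected onto a small Fourier basis by least squares and ordinary PCA is then run on the basis coefficients. The synthetic data, the basis size and the absence of a roughness penalty are illustrative assumptions, not the authors' implementation.

      import numpy as np

      rng = np.random.default_rng(0)
      n_cities, n_days = 42, 7
      t = np.linspace(0.0, 1.0, n_days)

      # Synthetic wastewater loads: a weekly cycle plus city-specific noise (illustrative only).
      X = 1.0 + 0.5 * np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal((n_cities, n_days))

      # Small Fourier basis: constant term plus sin/cos of the first harmonic.
      B = np.column_stack([np.ones_like(t),
                           np.sin(2 * np.pi * t),
                           np.cos(2 * np.pi * t)])        # shape (n_days, 3)

      # Least-squares projection of each curve onto the basis (no smoothing penalty here).
      C, *_ = np.linalg.lstsq(B, X.T, rcond=None)          # coefficients, shape (3, n_cities)
      C = C.T

      # Ordinary PCA on the centred coefficients stands in for functional PCA.
      C_centred = C - C.mean(axis=0)
      U, s, Vt = np.linalg.svd(C_centred, full_matrices=False)
      explained = s**2 / np.sum(s**2)
      fpc_curves = B @ Vt.T                                # component functions sampled over the week

      print("share of variation per component:", np.round(explained, 3))
      print("principal component functions, shape:", fpc_curves.shape)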

  18. Integration of independent component analysis with near-infrared spectroscopy for analysis of bioactive components in the medicinal plant Gentiana scabra Bunge

    Directory of Open Access Journals (Sweden)

    Yung-Kun Chuang

    2014-09-01

    Independent component (IC) analysis was applied to near-infrared spectroscopy for the analysis of gentiopicroside and swertiamarin, the two bioactive components of Gentiana scabra Bunge. ICs that are highly correlated with the two bioactive components were selected for the analysis of tissue cultures, shoots and roots, which were found to distribute in three different positions within the domain [two-dimensional (2D) and 3D] constructed by the ICs. This setup could be used for quantitative determination of the respective contents of gentiopicroside and swertiamarin within the plants. For gentiopicroside, the spectral calibration model based on the second derivative spectra produced the best results in the wavelength ranges of 600–700 nm, 1600–1700 nm, and 2000–2300 nm (correlation coefficient of calibration = 0.847, standard error of calibration = 0.865%, and standard error of validation = 0.909%). For swertiamarin, a spectral calibration model based on the first derivative spectra produced the best results in the wavelength ranges of 600–800 nm and 2200–2300 nm (correlation coefficient of calibration = 0.948, standard error of calibration = 0.168%, and standard error of validation = 0.216%). Both models showed satisfactory predictability. This study successfully established qualitative and quantitative correlations for gentiopicroside and swertiamarin with near-infrared spectra, enabling rapid and accurate inspection of the bioactive components of G. scabra Bunge at different growth stages.

  19. Principal Components Analysis of Job Burnout and Coping ...

    African Journals Online (AJOL)

    The key components of job burnout were feelings of disgust, insomnia, headaches, weight loss or gain, feelings of omniscience, pain of unexplained origin, hopelessness, agitation and workaholism, while the factor structure of coping strategies comprised development of a realistic self-picture, retaining hope, asking for help ...

  20. Tomato sorting using independent component analysis on spectral images

    NARCIS (Netherlands)

    Polder, G.; Heijden, van der G.W.A.M.; Young, I.T.

    2003-01-01

    Independent Component Analysis is one of the most widely used methods for blind source separation. In this paper we use this technique to estimate the most important compounds which play a role in the ripening of tomatoes. Spectral images of tomatoes were analyzed. Two main independent components

  1. Mapping brain activity in gradient-echo functional MRI using principal component analysis

    Science.gov (United States)

    Khosla, Deepak; Singh, Manbir; Don, Manuel

    1997-05-01

    The detection of sites of brain activation in functional MRI has been a topic of immense research interest and many techniques have been proposed to this end. Recently, principal component analysis (PCA) has been applied to extract the activated regions and their time course of activation. This method is based on the assumption that the activation is orthogonal to other signal variations such as brain motion, physiological oscillations and other uncorrelated noises. A distinct advantage of this method is that it does not require any knowledge of the time course of the true stimulus paradigm. This technique is well suited to EPI image sequences where the sampling rate is high enough to capture the effects of physiological oscillations. In this work, we propose and apply two methods that are based on PCA to conventional gradient-echo images and investigate their usefulness as tools to extract reliable information on brain activation. The first method is a conventional technique where a single image sequence with alternating on and off stages is subjected to a principal component analysis. The second method is a PCA-based approach called the common spatial factor analysis technique (CSF). As the name suggests, this method relies on common spatial factors between the above fMRI image sequence and a background fMRI. We have applied these methods to identify active brain areas during visual stimulation and motor tasks. The results from these methods are compared to those obtained by using the standard cross-correlation technique. We found good agreement in the areas identified as active across all three techniques. The results suggest that PCA and CSF methods have good potential in detecting the true stimulus correlated changes in the presence of other interfering signals.

  2. Multi-spectrometer calibration transfer based on independent component analysis.

    Science.gov (United States)

    Liu, Yan; Xu, Hao; Xia, Zhenzhen; Gong, Zhiyong

    2018-02-26

    Calibration transfer is indispensable for practical applications of near infrared (NIR) spectroscopy due to the need for precise and consistent measurements across different spectrometers. In this work, a method for multi-spectrometer calibration transfer is described based on independent component analysis (ICA). A spectral matrix is first obtained by aligning the spectra measured on different spectrometers. Then, by using independent component analysis, the aligned spectral matrix is decomposed into the mixing matrix and the independent components of the different spectrometers. These differing measurements between spectrometers can then be standardized by correcting the coefficients within the independent components. Two NIR datasets of corn and edible oil samples measured with three and four spectrometers, respectively, were used to test the reliability of this method. The results of both datasets reveal that spectra measured on different spectrometers can be transferred simultaneously and that partial least squares (PLS) models built with the measurements from one spectrometer can correctly predict spectra transferred from another.
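
    The decomposition step described above can be sketched with scikit-learn's FastICA. Only the split of an aligned spectral matrix into independent components and per-spectrum mixing coefficients is shown; the alignment, coefficient-correction and PLS modelling stages are specific to the paper and are not reproduced. The data shapes, component count and synthetic spectra are assumptions.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(1)

      # Assumed layout: the same samples measured on two spectrometers, stacked row-wise;
      # columns are wavelengths (synthetic numbers, purely illustrative).
      n_samples, n_wavelengths = 40, 200
      master = rng.standard_normal((n_samples, n_wavelengths))
      slave = master + 0.05 * rng.standard_normal((n_samples, n_wavelengths))  # instrument offset
      aligned = np.vstack([master, slave])

      # Decompose: columns of the transposed matrix are spectra, so the sources are
      # independent spectral components and mixing_ holds one coefficient row per spectrum.
      ica = FastICA(n_components=5, random_state=0)
      ics = ica.fit_transform(aligned.T)      # independent spectral components, shape (n_wavelengths, 5)
      coeffs = ica.mixing_                    # per-spectrum mixing coefficients, shape (2*n_samples, 5)

      # Instrument differences show up as offsets between the two coefficient blocks;
      # a transfer step would correct the second block toward the first.
      delta = coeffs[n_samples:] - coeffs[:n_samples]
      print("mean per-component offset between spectrometers:", np.round(delta.mean(axis=0), 4))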

  3. 7 CFR 1000.53 - Announcement of class prices, component prices, and advanced pricing factors.

    Science.gov (United States)

    2010-01-01

    ... advanced pricing factors. 1000.53 Section 1000.53 Agriculture Regulations of the Department of Agriculture..., component prices, and advanced pricing factors. (a) On or before the 5th day of the month, the market... administrator for each Federal milk marketing order shall announce the following prices and pricing factors for...

  4. Functional Parallel Factor Analysis for Functions of One- and Two-dimensional Arguments

    NARCIS (Netherlands)

    Choi, Ji Yeh; Hwang, Heungsun; Timmerman, Marieke

    Parallel factor analysis (PARAFAC) is a useful multivariate method for decomposing three-way data that consist of three different types of entities simultaneously. This method estimates trilinear components, each of which is a low-dimensional representation of a set of entities, often called a mode,

  5. Nonlinear Process Fault Diagnosis Based on Serial Principal Component Analysis.

    Science.gov (United States)

    Deng, Xiaogang; Tian, Xuemin; Chen, Sheng; Harris, Chris J

    2018-03-01

    Many industrial processes contain both linear and nonlinear parts, and kernel principal component analysis (KPCA), widely used in nonlinear process monitoring, may not offer the most effective means for dealing with these nonlinear processes. This paper proposes a new hybrid linear-nonlinear statistical modeling approach for nonlinear process monitoring by closely integrating linear principal component analysis (PCA) and nonlinear KPCA using a serial model structure, which we refer to as serial PCA (SPCA). Specifically, PCA is first applied to extract PCs as linear features, and to decompose the data into the PC subspace and residual subspace (RS). Then, KPCA is performed in the RS to extract the nonlinear PCs as nonlinear features. Two monitoring statistics are constructed for fault detection, based on both the linear and nonlinear features extracted by the proposed SPCA. To effectively perform fault identification after a fault is detected, an SPCA similarity factor method is built for fault recognition, which fuses both the linear and nonlinear features. Unlike PCA and KPCA, the proposed method takes into account both linear and nonlinear PCs simultaneously, and therefore, it can better exploit the underlying process's structure to enhance fault diagnosis performance. Two case studies involving a simulated nonlinear process and the benchmark Tennessee Eastman process demonstrate that the proposed SPCA approach is more effective than the existing state-of-the-art approach based on KPCA alone, in terms of nonlinear process fault detection and identification.
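
    A minimal sketch of the serial linear-then-nonlinear idea, assuming scikit-learn: PCA extracts linear features, and kernel PCA is then run on the residual left after reconstructing the data from those linear PCs. The monitoring statistics and the similarity-factor fault identification of the paper are not reproduced, and the data here are synthetic.

      import numpy as np
      from sklearn.decomposition import PCA, KernelPCA

      rng = np.random.default_rng(2)
      X = rng.standard_normal((500, 10))                         # placeholder process data (rows = samples)
      X[:, 3] = X[:, 0] ** 2 + 0.1 * rng.standard_normal(500)    # inject a nonlinear relation

      # Stage 1: linear PCA keeps the dominant linear structure.
      pca = PCA(n_components=4)
      T = pca.fit_transform(X)                # linear features
      X_hat = pca.inverse_transform(T)        # reconstruction from the PC subspace
      E = X - X_hat                           # residual subspace carries what linear PCA missed

      # Stage 2: kernel PCA on the residuals extracts nonlinear features.
      kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.1)
      T_nl = kpca.fit_transform(E)

      print("linear features:", T.shape, "nonlinear features:", T_nl.shape)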

  6. Factors behind international relocation and changes in production geography in the European automobile components industry

    OpenAIRE

    Jesús F. Lampón; Santiago Lago-Peñas

    2013-01-01

    This article analyses business strategies in the automobile sector to determine the key factors behind production relocation processes in automobile components suppliers. These factors help explain changes in production geography in the sector not only in terms of location advantages but also from a perspective of corporate strategies and decision-making mechanisms within firms. The results obtained from an empirical study in Spain during the period 2001-2008 show how the components sector h...

  7. PREVALENCE OF METABOLIC SYNDROME IN YOUNG MEXICANS: A SENSITIVITY ANALYSIS ON ITS COMPONENTS.

    Science.gov (United States)

    Murguía-Romero, Miguel; Jiménez-Flores, J Rafael; Sigrist-Flores, Santiago C; Tapia-Pancardo, Diana C; Jiménez-Ramos, Arnulfo; Méndez-Cruz, A René; Villalobos-Molina, Rafael

    2015-07-28

    Obesity is a worldwide epidemic, and the high prevalence of diabetes type II (DM2) and cardiovascular disease (CVD) is in great part a consequence of that epidemic. Metabolic syndrome is a useful tool to estimate the risk of a young population evolving to DM2 and CVD. The aims were to estimate the MetS prevalence in young Mexicans, and to evaluate each parameter as an independent indicator through a sensitivity analysis. The prevalence of MetS was estimated in 6,063 young people of the México City metropolitan area. A sensitivity analysis was conducted to estimate the performance of each of the components of MetS as an indicator of the presence of MetS itself. Five statistics of the sensitivity analysis were calculated for each MetS component and the other parameters: sensitivity, specificity, positive predictive value or precision, negative predictive value, and accuracy. The prevalence of MetS in the young Mexican population was estimated to be 13.4%. Waist circumference presented the highest sensitivity (96.8% women; 90.0% men), blood pressure presented the highest specificity for women (97.7%) and glucose for men (91.0%). When all five statistics are considered, triglycerides is the component with the highest values, showing a value of 75% or more in four of them. Differences by sex were detected in the averages of all components of MetS in young people without alterations. Young Mexicans are highly prone to acquiring MetS: 71% have at least one and up to five MetS parameters altered, and 13.4% of them have MetS. Of all five components of MetS, waist circumference presented the highest sensitivity as a predictor of MetS, and triglycerides is the best parameter if a single factor is to be taken as the sole predictor of MetS in the young Mexican population; triglycerides is also the parameter with the highest accuracy. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.

  8. Principal variance component analysis of crop composition data: a case study on herbicide-tolerant cotton.

    Science.gov (United States)

    Harrison, Jay M; Howard, Delia; Malven, Marianne; Halls, Steven C; Culler, Angela H; Harrigan, George G; Wolfinger, Russell D

    2013-07-03

    Compositional studies on genetically modified (GM) and non-GM crops have consistently demonstrated that their respective levels of key nutrients and antinutrients are remarkably similar and that other factors such as germplasm and environment contribute more to compositional variability than transgenic breeding. We propose that graphical and statistical approaches that can provide meaningful evaluations of the relative impact of different factors to compositional variability may offer advantages over traditional frequentist testing. A case study on the novel application of principal variance component analysis (PVCA) in a compositional assessment of herbicide-tolerant GM cotton is presented. Results of the traditional analysis of variance approach confirmed the compositional equivalence of the GM and non-GM cotton. The multivariate approach of PVCA provided further information on the impact of location and germplasm on compositional variability relative to GM.

  9. Influence factors analysis of water environmental quality of main rivers in Tianjin

    Science.gov (United States)

    Li, Ran; Bao, Jingling; Zou, Di; Shi, Fang

    2018-01-01

    According to the evaluation results of the water environment quality of the main rivers in Tianjin in 1986-2015, this paper retrospectively analyzed the current state of the water environmental quality of the main rivers in Tianjin, established an index system and performed a multiple-factor analysis by selecting factors influencing the water environmental quality of the main rivers from the economic, industrial and natural aspects, using a combination of principal component analysis and linear regression. The results showed that water consumption, sewage discharge and water resources were the main factors influencing the pollution of the main rivers. Therefore, optimizing the utilization of water resources, improving utilization efficiency and reducing effluent discharge are important measures to reduce the pollution of the surface water environment.
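
    The combination of principal component analysis with linear regression used above can be sketched as principal component regression: the candidate influencing factors are standardised, reduced to a few components, and the component scores are regressed on a water-quality indicator. The variables and numbers below are placeholders, not the Tianjin data.

      import numpy as np
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(3)

      # Placeholder yearly records: e.g. water consumption, sewage discharge, water resources, GDP, population.
      factors = rng.standard_normal((30, 5))
      quality_index = factors @ np.array([0.6, 0.8, -0.5, 0.1, 0.05]) + 0.2 * rng.standard_normal(30)

      # Standardise, then reduce the correlated factors to a few principal components.
      Z = StandardScaler().fit_transform(factors)
      pca = PCA(n_components=2)
      scores = pca.fit_transform(Z)

      # Regress the pollution indicator on the component scores.
      reg = LinearRegression().fit(scores, quality_index)
      print("variance explained:", np.round(pca.explained_variance_ratio_, 2))
      print("regression coefficients on PCs:", np.round(reg.coef_, 2))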

  10. Extracting functional components of neural dynamics with Independent Component Analysis and inverse Current Source Density.

    Science.gov (United States)

    Lęski, Szymon; Kublik, Ewa; Swiejkowski, Daniel A; Wróbel, Andrzej; Wójcik, Daniel K

    2010-12-01

    Local field potentials have good temporal resolution but are blurred due to the slow spatial decay of the electric field. For simultaneous recordings on regular grids one can reconstruct efficiently the current sources (CSD) using the inverse Current Source Density method (iCSD). It is possible to decompose the resultant spatiotemporal information about the current dynamics into functional components using Independent Component Analysis (ICA). We show on test data modeling recordings of evoked potentials on a grid of 4 × 5 × 7 points that meaningful results are obtained with spatial ICA decomposition of reconstructed CSD. The components obtained through decomposition of CSD are better defined and allow easier physiological interpretation than the results of similar analysis of corresponding evoked potentials in the thalamus. We show that spatiotemporal ICA decompositions can perform better for certain types of sources but it does not seem to be the case for the experimental data studied. Having found the appropriate approach to decomposing neural dynamics into functional components we use the technique to study the somatosensory evoked potentials recorded on a grid spanning a large part of the forebrain. We discuss two example components associated with the first waves of activation of the somatosensory thalamus. We show that the proposed method brings up new, more detailed information on the time and spatial location of specific activity conveyed through various parts of the somatosensory thalamus in the rat.

  11. Experimental modal analysis of components of the LHC experiments

    CERN Document Server

    Guinchard, M; Catinaccio, A; Kershaw, K; Onnela, A

    2007-01-01

    Experimental modal analysis of components of the LHC experiments is performed with the purpose of determining their fundamental frequencies, their damping and the mode shapes of light and fragile detector components. This process permits confirmation or replacement of Finite Element analysis in the case of complex structures (with cables and substructure coupling). It helps solve structural mechanical problems to improve the operational stability and to determine the acceleration specifications for transport operations. This paper describes the hardware and software equipment used to perform a modal analysis on particular structures such as a particle detector and the method of curve fitting used to extract the results of the measurements. This paper also presents the main results obtained for the LHC experiments.

  12. The mathematical pathogenetic factors analysis of acute inflammatory diseases development of bronchopulmonary system among infants

    Directory of Open Access Journals (Sweden)

    G. O. Lezhenko

    2017-10-01

    The purpose. To study the factor structure and to establish the associative interaction of the pathogenetic links of acute disease development of the bronchopulmonary system in infants. Materials and methods. The examination group consisted of 59 infants (average age 13.8 ± 1.4 months) sick with acute inflammatory bronchopulmonary diseases. We also tested the levels of 25-hydroxyvitamin D (25(OH)D), vitamin D-binding protein, hBPI, cathelicidin LL-37, β1-defensins and lactoferrin in blood serum by immunoenzymometric analysis. Selection of prognostically important pathogenetic factors of acute bronchopulmonary disease among infants was conducted using ROC analysis. The procedure for classifying objects was carried out using hierarchical cluster analysis with the method of centroid-based clustering. Results. Based on the results of the ROC analysis, 15 potential predictors of the development of acute inflammatory diseases of the bronchopulmonary system among infants were selected. The factor analysis made it possible to determine the 6 main components. The biggest influence on the development of the disease was made by "the anemia factor", "the factor of inflammation", "the maternal factor", "the vitamin D supply factor", "the immune factor" and "the phosphorus-calcium exchange factor", each with a factor load of more than 0.6. The performed procedure of hierarchical cluster analysis confirmed the initial role of immuno-inflammatory components. Conclusions. The highlighted factors allowed us to define a group of parameters that must be influenced to achieve a maximum effect in carrying out preventive and therapeutic measures. First of all, it is necessary to influence "the anemia factor" and "the calcium exchange factor", as well as "the vitamin D supply factor"; in other words, to correct vitamin D deficiency and carry out measures aimed at preventing the development of anemia. The prevention and treatment of the pathological course of

  13. SPSS and SAS programs for determining the number of components using parallel analysis and velicer's MAP test.

    Science.gov (United States)

    O'Connor, B P

    2000-08-01

    Popular statistical software packages do not have the proper procedures for determining the number of components in factor and principal components analyses. Parallel analysis and Velicer's minimum average partial (MAP) test are validated procedures, recommended widely by statisticians. However, many researchers continue to use alternative, simpler, but flawed procedures, such as the eigenvalues-greater-than-one rule. Use of the proper procedures might be increased if these procedures could be conducted within familiar software environments. This paper describes brief and efficient programs for using SPSS and SAS to conduct parallel analyses and the MAP test.
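
    The decision rule behind parallel analysis can also be reproduced outside SPSS and SAS. The sketch below (NumPy only) implements Horn's parallel analysis by retaining components whose observed correlation-matrix eigenvalues exceed the 95th percentile of eigenvalues obtained from random data of the same size; Velicer's MAP test is not shown. The simulated two-factor data set is an illustrative assumption.

      import numpy as np

      def parallel_analysis(X, n_iter=200, percentile=95, seed=0):
          """Count components whose observed eigenvalue exceeds the chosen
          percentile of eigenvalues from random normal data of the same size."""
          rng = np.random.default_rng(seed)
          n, p = X.shape
          obs_eigs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]

          rand_eigs = np.empty((n_iter, p))
          for i in range(n_iter):
              R = rng.standard_normal((n, p))
              rand_eigs[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(R, rowvar=False)))[::-1]
          threshold = np.percentile(rand_eigs, percentile, axis=0)

          # Retain components sequentially until the first one falls below its benchmark.
          k = 0
          for obs, thr in zip(obs_eigs, threshold):
              if obs > thr:
                  k += 1
              else:
                  break
          return k, obs_eigs, threshold

      # Illustrative data: 300 cases, 8 variables driven by two underlying factors.
      rng = np.random.default_rng(1)
      F = rng.standard_normal((300, 2))
      X = F @ rng.standard_normal((2, 8)) + 0.7 * rng.standard_normal((300, 8))
      k, obs, thr = parallel_analysis(X)
      print("components retained by parallel analysis:", k)

    The sequential stopping rule mirrors the usual recommendation: retention stops at the first component whose eigenvalue drops below its random-data benchmark.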

  14. A stable systemic risk ranking in China's banking sector: Based on principal component analysis

    Science.gov (United States)

    Fang, Libing; Xiao, Binqing; Yu, Honghai; You, Qixing

    2018-02-01

    In this paper, we compare five popular systemic risk rankings, and apply principal component analysis (PCA) model to provide a stable systemic risk ranking for the Chinese banking sector. Our empirical results indicate that five methods suggest vastly different systemic risk rankings for the same bank, while the combined systemic risk measure based on PCA provides a reliable ranking. Furthermore, according to factor loadings of the first component, PCA combined ranking is mainly based on fundamentals instead of market price data. We clearly find that price-based rankings are not as practical a method as fundamentals-based ones. This PCA combined ranking directly shows systemic risk contributions of each bank for banking supervision purpose and reminds banks to prevent and cope with the financial crisis in advance.
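
    A hedged sketch of the combination step, assuming scikit-learn: several standardized risk measures per bank are fed into PCA and banks are ranked by their score on the first component, with the sign chosen so that larger scores mean higher risk. The bank names and measure values are invented for illustration; the paper's five specific risk measures are not reproduced.

      import numpy as np
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA

      banks = ["Bank A", "Bank B", "Bank C", "Bank D", "Bank E"]
      # Rows = banks, columns = five hypothetical systemic risk measures (illustrative numbers).
      measures = np.array([
          [0.9, 1.2, 0.8, 1.1, 0.7],
          [2.1, 1.9, 2.3, 1.8, 2.0],
          [0.5, 0.4, 0.6, 0.7, 0.5],
          [1.5, 1.6, 1.4, 1.7, 1.6],
          [1.0, 0.9, 1.1, 1.0, 1.2],
      ])

      Z = StandardScaler().fit_transform(measures)
      pca = PCA(n_components=1)
      score = pca.fit_transform(Z)[:, 0]

      # Orient the component so that larger scores correspond to larger raw measures.
      if pca.components_[0].sum() < 0:
          score = -score

      ranking = [banks[i] for i in np.argsort(score)[::-1]]
      print("combined systemic risk ranking (riskiest first):", ranking)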

  15. Experimental investigation of the factors influencing the polymer-polymer bond strength during two component injection moulding

    DEFF Research Database (Denmark)

    Islam, Mohammad Aminul; Hansen, Hans Nørgaard; Tang, Peter Torben

    2007-01-01

    Two component injection moulding is a commercially important manufacturing process and a key technology for Moulded Interconnect Devices (MIDs). Many fascinating applications of two component or multi component polymer parts are restricted due to the weak interfacial adhesion of the polymers. A thorough understanding of the factors that influence the bond strength of polymers is necessary for multi component polymer processing. This paper investigates the effects of the process and material parameters on the bond strength of two component polymer parts and identifies the factors which can effectively control the adhesion between two polymers. The effects of environmental conditions on the bond strength after moulding are also investigated. The material selections and environmental conditions were chosen based on the suitability of MID production, but the results and discussion presented...

  16. Face Recognition Software Development Using Principal Components Analysis [Pembuatan Perangkat Lunak Pengenalan Wajah Menggunakan Principal Components Analysis]

    Directory of Open Access Journals (Sweden)

    Kartika Gunadi

    2001-01-01

    Face recognition is one of many important research areas, and today many applications have implemented it. Through the development of techniques like Principal Components Analysis (PCA), computers can now outperform humans in many face recognition tasks, particularly those in which large databases of faces must be searched. Principal Components Analysis was used to reduce the facial image dimension into fewer variables, which are easier to observe and handle. Those variables were then fed into an artificial neural network using the backpropagation method to recognise the given facial image. The test results show that PCA can provide high face recognition accuracy. For the training faces, a correct identification of 100% could be obtained. From some of the network combinations that were tested, a best average correct identification of 91.11% could be obtained for the test faces, while the worst average result was 46.67% correct identification.

  17. Oil classification using X-ray scattering and principal component analysis

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Danielle S.; Souza, Amanda S.; Lopes, Ricardo T., E-mail: dani.almeida84@gmail.com, E-mail: ricardo@lin.ufrj.br, E-mail: amandass@bioqmed.ufrj.br [Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, RJ (Brazil); Oliveira, Davi F.; Anjos, Marcelino J., E-mail: davi.oliveira@uerj.br, E-mail: marcelin@uerj.br [Universidade do Estado do Rio de Janeiro (UERJ), Rio de Janeiro, RJ (Brazil). Inst. de Fisica Armando Dias Tavares

    2015-07-01

    X-ray scattering techniques have been considered promising for the classification and characterization of many types of samples. This study employed this technique combined with chemical analysis and multivariate analysis to characterize 54 vegetable oil samples (25 of them olive oils) with different properties obtained in commercial establishments in Rio de Janeiro city. The samples were chemically analyzed using the following indexes: iodine, acidity, saponification and peroxide. In order to obtain the X-ray scattering spectrum, an X-ray tube with a silver anode operating at 40 kV and 50 μA was used. The results showed that the oils can be divided into two large groups: olive oils and non-olive oils. Additionally, in a multivariate analysis (Principal Component Analysis - PCA), two components were obtained and accounted for more than 80% of the variance. One component was associated with the chemical parameters and the other with the scattering profiles of each sample. The results showed that the use of X-ray scattering spectra combined with chemical analysis and PCA can be a fast, cheap and efficient method for vegetable oil characterization. (author)

  18. Oil classification using X-ray scattering and principal component analysis

    International Nuclear Information System (INIS)

    Almeida, Danielle S.; Souza, Amanda S.; Lopes, Ricardo T.; Oliveira, Davi F.; Anjos, Marcelino J.

    2015-01-01

    X-ray scattering techniques have been considered promising for the classification and characterization of many types of samples. This study employed this technique combined with chemical analysis and multivariate analysis to characterize 54 vegetable oil samples (25 of them olive oils) with different properties obtained in commercial establishments in Rio de Janeiro city. The samples were chemically analyzed using the following indexes: iodine, acidity, saponification and peroxide. In order to obtain the X-ray scattering spectrum, an X-ray tube with a silver anode operating at 40 kV and 50 μA was used. The results showed that the oils can be divided into two large groups: olive oils and non-olive oils. Additionally, in a multivariate analysis (Principal Component Analysis - PCA), two components were obtained and accounted for more than 80% of the variance. One component was associated with the chemical parameters and the other with the scattering profiles of each sample. The results showed that the use of X-ray scattering spectra combined with chemical analysis and PCA can be a fast, cheap and efficient method for vegetable oil characterization. (author)

  19. An application of principal component analysis to the clavicle and clavicle fixation devices.

    Science.gov (United States)

    Daruwalla, Zubin J; Courtis, Patrick; Fitzpatrick, Clare; Fitzpatrick, David; Mullett, Hannan

    2010-03-26

    Principal component analysis (PCA) enables the building of statistical shape models of bones and joints. This has been used in conjunction with computer assisted surgery in the past. However, PCA of the clavicle has not been performed. Using PCA, we present a novel method that examines the major modes of size and three-dimensional shape variation in male and female clavicles and suggests a method of grouping the clavicle into size and shape categories. Twenty-one high-resolution computerized tomography scans of the clavicle were reconstructed and analyzed using a specifically developed statistical software package. After performing statistical shape analysis, PCA was applied to study the factors that account for anatomical variation. The first principal component representing size accounted for 70.5 percent of anatomical variation. The addition of a further three principal components accounted for almost 87 percent. Using statistical shape analysis, clavicles in males have a greater lateral depth and are longer, wider and thicker than in females. However, the sternal angle in females is larger than in males. PCA confirmed these differences between genders but also noted that men exhibit greater variance and classified clavicles into five morphological groups. This unique approach is the first that standardizes a clavicular orientation. It provides information that is useful to both the biomedical engineer and the clinician. Other applications include implant design with regard to modifying current or designing future clavicle fixation devices. Our findings support the need for further development of clavicle fixation devices and the questioning of whether gender-specific devices are necessary.

  20. Prestudy - Development of trend analysis of component failure

    International Nuclear Information System (INIS)

    Poern, K.

    1995-04-01

    The Bayesian trend analysis model that has been used for the computation of initiating event intensities (I-book) is based on the number of events that have occurred during consecutive time intervals. The model itself is a Poisson process with time-dependent intensity. For the analysis of aging it is often more relevant to use times between failures for a given component as input, where by 'time' is meant a quantity that best characterizes the age of the component (calendar time, operating time, number of activations etc). Therefore, it has been considered necessary to extend the model and the computer code to allow trend analysis of times between events, and also of several sequences of times between events. This report describes this model extension as well as an application on an introductory ageing analysis of centrifugal pumps defined in Table 5 of the T-book. The application in turn directs the attention to the need for further development of both the trend model and the data base.

  1. Principal component analysis of the Norwegian version of the quality of life in late-stage dementia scale.

    Science.gov (United States)

    Mjørud, Marit; Kirkevold, Marit; Røsvik, Janne; Engedal, Knut

    2014-01-01

    To investigate which factors the Quality of Life in Late-Stage Dementia (QUALID) scale holds when used among people with dementia (pwd) in nursing homes and to find out how the symptom load varies across the different severity levels of dementia. We included 661 pwd [mean age ± SD, 85.3 ± 8.6 years; 71.4% women]. The QUALID and the Clinical Dementia Rating (CDR) scale were applied. A principal component analysis (PCA) with varimax rotation and Kaiser normalization was applied to test the factor structure. Nonparametric analyses were applied to examine differences in symptom load across the three CDR groups. The mean QUALID score was 21.5 (±7.1), and the CDR scores of the three groups were 1 in 22.5%, 2 in 33.6% and 3 in 43.9%. The results of the statistical measures employed were the following: Cronbach's α of the QUALID, 0.74; Bartlett's test of sphericity; Kaiser-Meyer-Olkin measure, 0.77. The PCA resulted in three components accounting for 53% of the variance. The first component was 'tension' ('facial expression of discomfort', 'appears physically uncomfortable', 'verbalization suggests discomfort', 'being irritable and aggressive', 'appears calm', Cronbach's α = 0.69), the second was 'well-being' ('smiles', 'enjoys eating', 'enjoys touching/being touched', 'enjoys social interaction', Cronbach's α = 0.62) and the third was 'sadness' ('appears sad', 'cries', 'facial expression of discomfort', Cronbach's α = 0.65). The mean score on the components 'tension' and 'well-being' increased significantly with increasing severity levels of dementia. Three components of quality of life (qol) were identified. Qol decreased with increasing severity of dementia. © 2013 S. Karger AG, Basel.
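
    The principal component analysis with varimax rotation reported above can be sketched in plain NumPy: principal component loadings are computed from the item correlation matrix and then rotated with the standard SVD-based varimax iteration. The simulated item scores and the choice of three components mirror the abstract only loosely; none of this reproduces the QUALID data.

      import numpy as np

      def varimax(loadings, n_iter=100, tol=1e-6):
          """Varimax rotation of a loading matrix (standard SVD-based iteration)."""
          p, k = loadings.shape
          R = np.eye(k)
          var_old = 0.0
          for _ in range(n_iter):
              L = loadings @ R
              u, s, vt = np.linalg.svd(
                  loadings.T @ (L**3 - (L * np.sum(L**2, axis=0)) / p))
              R = u @ vt
              var_new = np.sum(s)
              if var_new < var_old * (1 + tol):
                  break
              var_old = var_new
          return loadings @ R

      rng = np.random.default_rng(4)
      n_items, n_resp = 11, 300                   # the QUALID has 11 items
      X = rng.standard_normal((n_resp, n_items))  # simulated item scores (illustrative only)

      # Principal components of the item correlation matrix, then varimax rotation of 3 components.
      corr = np.corrcoef(X, rowvar=False)
      eigvals, eigvecs = np.linalg.eigh(corr)
      order = np.argsort(eigvals)[::-1][:3]
      loadings = eigvecs[:, order] * np.sqrt(eigvals[order])
      rotated = varimax(loadings)
      print("rotated loading matrix shape:", rotated.shape)

    Rotation does not change the subspace spanned by the components; it only redistributes the loadings so that each item loads mainly on one component, which is what makes interpretations such as 'tension', 'well-being' and 'sadness' possible.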

  2. Independent component analysis for understanding multimedia content

    DEFF Research Database (Denmark)

    Kolenda, Thomas; Hansen, Lars Kai; Larsen, Jan

    2002-01-01

    Independent component analysis of combined text and image data from Web pages has potential for search and retrieval applications by providing more meaningful and context dependent content. It is demonstrated that ICA of combined text and image features has a synergistic effect, i.e., the retrieval...

  3. Experimental and principal component analysis of waste ...

    African Journals Online (AJOL)

    The present study is aimed at determining through principal component analysis the most important variables affecting bacterial degradation in ponds. Data were collected from literature. In addition, samples were also collected from the waste stabilization ponds at the University of Nigeria, Nsukka and analyzed to ...

  4. Fatigue Reliability Analysis of Wind Turbine Cast Components

    DEFF Research Database (Denmark)

    Rafsanjani, Hesam Mirzaei; Sørensen, John Dalsgaard; Fæster, Søren

    2017-01-01

    The fatigue life of wind turbine cast components, such as the main shaft in a drivetrain, is generally determined by defects from the casting process. These defects may reduce the fatigue life and they are generally distributed randomly in components. The foundries, cutting facilities and test facilities can affect the verification of properties by testing. Hence, it is important to have a tool to identify which foundry, cutting and/or test facility produces components which, based on the relevant uncertainties, have the largest expected fatigue life or, alternatively, have the largest reliability (...) and to quantify the relevant uncertainties using available fatigue tests. Illustrative results are presented as obtained by statistical analysis of a large set of fatigue data for casted test components typically used for wind turbines. Furthermore, the SN curves (fatigue life curves based on applied stress ...

  5. Root cause analysis in support of reliability enhancement of engineering components

    International Nuclear Information System (INIS)

    Kumar, Sachin; Mishra, Vivek; Joshi, N.S.; Varde, P.V.

    2014-01-01

    Reliability based methods have been widely used for the safety assessment of plant systems, structures and components. These methods provide a quantitative estimation of system reliability but do not give insight into the failure mechanism. Understanding the failure mechanism is essential to avoid the recurrence of events and to enhance system reliability. Root cause analysis provides a tool for gaining detailed insight into the causes of failure of a component, with particular attention to the identification of faults in component design, operation, surveillance, maintenance, training, procedures and policies which must be improved to prevent the repetition of incidents. Root cause analysis also helps in developing Probabilistic Safety Analysis models. A probabilistic precursor study provides a complement to the root cause analysis approach in event analysis by focusing on how an event might have developed adversely. This paper discusses root cause analysis methodologies and their application in specific case studies for the enhancement of system reliability. (author)

  6. Prevalence, associated factors and heritabilities of metabolic syndrome and its individual components in African Americans: the Jackson Heart Study.

    Science.gov (United States)

    Khan, Rumana J; Gebreab, Samson Y; Sims, Mario; Riestra, Pia; Xu, Ruihua; Davis, Sharon K

    2015-11-01

    Both environmental and genetic factors play important roles in the development of metabolic syndrome (MetS). Studies about its associated factors and genetic contribution in African Americans (AA) are sparse. Our aim was to report the prevalence, associated factors and heritability estimates of MetS and its components in AA men and women. Data of this cross-sectional study come from the large community-based Jackson Heart Study (JHS). We analysed a total of 5227 participants, of whom 1636 from 281 families were part of a family study subset of the JHS. Participants were classified as having MetS according to the Adult Treatment Panel III criteria. Multiple logistic regression analysis was performed to isolate independently associated factors of MetS (n=5227). Heritability was estimated from the family study subset using variance component methods (n=1636). About 27% of men and 40% of women had MetS. For men, factors associated with having MetS were older age, lower physical activity, higher body mass index, and higher homocysteine and adiponectin levels (p ...). ... metabolism playing a central role in the development of MetS and encourage additional efforts to identify the underlying susceptibility genes for this syndrome in AA. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  7. SNIa detection in the SNLS photometric analysis using Morphological Component Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Möller, A.; Ruhlmann-Kleider, V.; Neveu, J.; Palanque-Delabrouille, N. [Irfu, SPP, CEA Saclay, F-91191 Gif sur Yvette cedex (France); Lanusse, F.; Starck, J.-L., E-mail: anais.moller@cea.fr, E-mail: vanina.ruhlmann-kleider@cea.fr, E-mail: francois.lanusse@cea.fr, E-mail: jeremy.neveu@cea.fr, E-mail: nathalie.palanque-delabrouille@cea.fr, E-mail: jstarck@cea.fr [Laboratoire AIM, UMR CEA-CNRS-Paris 7, Irfu, SAp, CEA Saclay, F-91191 Gif sur Yvette cedex (France)

    2015-04-01

    Detection of supernovae (SNe) and, more generally, of transient events in large surveys can provide numerous false detections. In the case of a deferred processing of survey images, this implies reconstructing complete light curves for all detections, requiring sizable processing time and resources. Optimizing the detection of transient events is thus an important issue for both present and future surveys. We present here the optimization done in the SuperNova Legacy Survey (SNLS) for the 5-year data deferred photometric analysis. In this analysis, detections are derived from stacks of subtracted images with one stack per lunation. The 3-year analysis provided 300,000 detections dominated by signals of bright objects that were not perfectly subtracted. Allowing these artifacts to be detected leads not only to a waste of resources but also to possible signal coordinate contamination. We developed a subtracted image stack treatment to reduce the number of non SN-like events using morphological component analysis. This technique exploits the morphological diversity of objects to be detected to extract the signal of interest. At the level of our subtraction stacks, SN-like events are rather circular objects while most spurious detections exhibit different shapes. A two-step procedure was necessary to have a proper evaluation of the noise in the subtracted image stacks and thus a reliable signal extraction. We also set up a new detection strategy to obtain coordinates with good resolution for the extracted signal. SNIa Monte-Carlo (MC) generated images were used to study detection efficiency and coordinate resolution. When tested on SNLS 3-year data this procedure decreases the number of detections by a factor of two, while losing only 10% of SN-like events, almost all faint ones. MC results show that SNIa detection efficiency is equivalent to that of the original method for bright events, while the coordinate resolution is improved.

  8. A principal components analysis of the factors effecting personal exposure to air pollution in urban commuters in Dublin, Ireland.

    Science.gov (United States)

    McNabola, Aonghus; Broderick, Brian M; Gill, Laurence W

    2009-10-01

    Principal component analysis was used to examine air pollution personal exposure data of four urban commuter transport modes for their interrelationships between pollutants and relationships with traffic and meteorological data. Air quality samples of PM2.5 and VOCs were recorded during peak traffic congestion for the car, bus, cyclist and pedestrian between January 2005 and June 2006 on a busy route in Dublin, Ireland. In total, 200 personal exposure samples were recorded each comprising 17 variables describing the personal exposure concentrations, meteorological conditions and traffic conditions. The data reduction technique, principal component analysis (PCA), was used to create weighted linear combinations of the data and these were subsequently examined for interrelationships between the many variables recorded. The results of the PCA found that personal exposure concentrations in non-motorised forms of transport were influenced to a higher degree by wind speed, whereas personal exposure concentrations in motorised forms of transport were influenced to a higher degree by traffic congestion. The findings of the investigation show that the most effective mechanisms of personal exposure reduction differ between motorised and non-motorised modes of commuter transport.

  9. Independent component analysis of dynamic contrast-enhanced computed tomography images

    Energy Technology Data Exchange (ETDEWEB)

    Koh, T S [School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Ave, Singapore 639798 (Singapore); Yang, X [School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Ave, Singapore 639798 (Singapore); Bisdas, S [Department of Diagnostic and Interventional Radiology, Johann Wolfgang Goethe University Hospital, Theodor-Stern-Kai 7, D-60590 Frankfurt (Germany); Lim, C C T [Department of Neuroradiology, National Neuroscience Institute, 11 Jalan Tan Tock Seng, Singapore 308433 (Singapore)

    2006-10-07

    Independent component analysis (ICA) was applied on dynamic contrast-enhanced computed tomography images of cerebral tumours to extract spatial component maps of the underlying vascular structures, which correspond to different haemodynamic phases as depicted by the passage of the contrast medium. The locations of arteries, veins and tumours can be separately identified on these spatial component maps. As the contrast enhancement behaviour of the cerebral tumour differs from the normal tissues, ICA yields a tumour component map that reveals the location and extent of the tumour. Tumour outlines can be generated using the tumour component maps, with relatively simple segmentation methods. (note)

  10. Independent component analysis based filtering for penumbral imaging

    International Nuclear Information System (INIS)

    Chen Yenwei; Han Xianhua; Nozaki, Shinya

    2004-01-01

    We propose a filtering method based on independent component analysis (ICA) for Poisson noise reduction. In the proposed filtering, the image is first transformed to the ICA domain and then the noise components are removed by soft thresholding (shrinkage). The proposed filter, which is used as a preprocessing step for the reconstruction, has been successfully applied to penumbral imaging. Both simulation results and experimental results show that the reconstructed image is dramatically improved in comparison to that obtained without the noise-removing filter.
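
    A minimal sketch of the filtering idea, assuming scikit-learn and NumPy: noisy images are transformed with FastICA, small coefficients are shrunk by soft thresholding, and the result is mapped back to the image domain. The synthetic Poisson data, the component count and the threshold value are illustrative assumptions; the paper's penumbral-imaging pipeline and its learned ICA basis are not reproduced.

      import numpy as np
      from sklearn.decomposition import FastICA

      def soft_threshold(s, t):
          """Shrink values toward zero by t (soft thresholding)."""
          return np.sign(s) * np.maximum(np.abs(s) - t, 0.0)

      rng = np.random.default_rng(5)

      # Treat each noisy image (flattened) as one observation; values are illustrative.
      n_images, n_pixels = 64, 256
      clean = rng.gamma(shape=2.0, scale=5.0, size=(n_images, n_pixels))
      noisy = rng.poisson(clean).astype(float)

      ica = FastICA(n_components=16, random_state=0)
      sources = ica.fit_transform(noisy)              # transform to the ICA domain
      sources_dn = soft_threshold(sources, t=0.05)    # suppress small, noise-dominated coefficients
      denoised = ica.inverse_transform(sources_dn)    # back to the image domain

      print("RMS change introduced by shrinkage:",
            np.round(np.sqrt(np.mean((denoised - noisy) ** 2)), 3))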

  11. Estimation of physiological parameters using knowledge-based factor analysis of dynamic nuclear medicine image sequences

    International Nuclear Information System (INIS)

    Yap, J.T.; Chen, C.T.; Cooper, M.

    1995-01-01

    The authors have previously developed a knowledge-based method of factor analysis to analyze dynamic nuclear medicine image sequences. In this paper, the authors analyze dynamic PET cerebral glucose metabolism and neuroreceptor binding studies. These methods have shown the ability to reduce the dimensionality of the data, enhance the image quality of the sequence, and generate meaningful functional images and their corresponding physiological time functions. The new information produced by the factor analysis has now been used to improve the estimation of various physiological parameters. A principal component analysis (PCA) is first performed to identify statistically significant temporal variations and remove the uncorrelated variations (noise) due to Poisson counting statistics. The statistically significant principal components are then used to reconstruct a noise-reduced image sequence as well as provide an initial solution for the factor analysis. Prior knowledge such as the compartmental models or the requirement of positivity and simple structure can be used to constrain the analysis. These constraints are used to rotate the factors to the most physically and physiologically realistic solution. The final result is a small number of time functions (factors) representing the underlying physiological processes and their associated weighting images representing the spatial localization of these functions. Estimation of physiological parameters can then be performed using the noise-reduced image sequence generated from the statistically significant PCs and/or the final factor images and time functions. These results are compared to the parameter estimation using standard methods and the original raw image sequences. Graphical analysis was performed at the pixel level to generate comparable parametric images of the slope and intercept (influx constant and distribution volume)
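
    The noise-reduction step described here, reconstructing the dynamic sequence from only the statistically significant principal components, can be sketched with NumPy alone. The compartmental, positivity and simple-structure constraints of the knowledge-based factor analysis are omitted, and the two-curve synthetic data are an assumption.

      import numpy as np

      rng = np.random.default_rng(6)
      n_frames, n_pixels = 60, 1024

      # Two underlying time-activity curves mixed across pixels, plus counting noise (synthetic).
      t = np.linspace(0, 1, n_frames)
      curves = np.stack([np.exp(-3 * t), 1 - np.exp(-5 * t)])   # (2, n_frames)
      weights = rng.random((n_pixels, 2))
      clean = weights @ curves                                  # (n_pixels, n_frames)
      noisy = rng.poisson(50 * clean) / 50.0

      # Keep only the leading principal components of the pixel-by-time matrix.
      mean_frame = noisy.mean(axis=0)
      U, s, Vt = np.linalg.svd(noisy - mean_frame, full_matrices=False)
      k = 2                                                     # "significant" components, chosen by hand here
      denoised = (U[:, :k] * s[:k]) @ Vt[:k] + mean_frame

      print("noise RMS before/after:",
            np.round(np.sqrt(np.mean((noisy - clean) ** 2)), 4),
            np.round(np.sqrt(np.mean((denoised - clean) ** 2)), 4))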

  12. Principal Component Analysis In Radar Polarimetry

    Directory of Open Access Journals (Sweden)

    A. Danklmayer

    2005-01-01

    Second order moments of multivariate (often Gaussian) joint probability density functions can be described by the covariance or normalised correlation matrices or by the Kennaugh matrix (Kronecker matrix). In Radar Polarimetry the application of the covariance matrix is known as target decomposition theory, which is a special application of the extremely versatile Principal Component Analysis (PCA). The basic idea of PCA is to convert a data set consisting of correlated random variables into a new set of uncorrelated variables and to order the new variables according to the value of their variances. It is important to stress that uncorrelatedness does not necessarily mean independence, which is the much stronger requirement used in the concept of Independent Component Analysis (ICA). Both concepts agree for multivariate Gaussian distribution functions, representing the most random and least structured distribution. In this contribution, we propose a new approach to applying the concept of PCA to Radar Polarimetry. Therefore, new uncorrelated random variables will be introduced by means of linear transformations with well determined loading coefficients. This, in turn, will allow the decomposition of the original random backscattering target variables into three point targets with new random uncorrelated variables whose variances agree with the eigenvalues of the covariance matrix. This allows a new interpretation of existing decomposition theorems.

  13. Independent component analysis-based algorithm for automatic identification of Raman spectra applied to artistic pigments and pigment mixtures.

    Science.gov (United States)

    González-Vidal, Juan José; Pérez-Pueyo, Rosanna; Soneira, María José; Ruiz-Moreno, Sergio

    2015-03-01

    A new method has been developed to automatically identify Raman spectra, whether they correspond to single- or multicomponent spectra. The method requires no user input or judgment. There are thus no parameters to be tweaked. Furthermore, it provides a reliability factor on the resulting identification, with the aim of becoming a useful support tool for the analyst in the decision-making process. The method relies on the multivariate techniques of principal component analysis (PCA) and independent component analysis (ICA), and on some metrics. It has been developed for the application of automated spectral analysis, where the analyzed spectrum is provided by a spectrometer that has no previous knowledge of the analyzed sample, meaning that the number of components in the sample is unknown. We describe the details of this method and demonstrate its efficiency by identifying both simulated spectra and real spectra. The method has been applied to artistic pigment identification. The reliable and consistent results that were obtained make the methodology a helpful tool suitable for the identification of pigments in artwork or in paint in general.
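
    A generic sketch of the matching idea, assuming scikit-learn: independent components extracted from a set of measured mixture spectra are compared against a small reference library by absolute correlation, and the best correlation serves as a crude reliability score. The synthetic Gaussian-band "pigment" spectra and this particular metric are illustrative assumptions, not the authors' algorithm.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(7)
      n_points = 400
      x = np.linspace(0, 1, n_points)

      def band(center, width):
          # Single Gaussian band, a stand-in for a Raman peak.
          return np.exp(-0.5 * ((x - center) / width) ** 2)

      # Hypothetical reference library of pigment spectra (synthetic).
      library = {
          "pigment_1": band(0.20, 0.02) + 0.6 * band(0.70, 0.03),
          "pigment_2": band(0.45, 0.02),
          "pigment_3": band(0.55, 0.04) + 0.4 * band(0.85, 0.02),
      }
      names = list(library)
      refs = np.array([library[n] for n in names])

      # Measured spectra = random mixtures of the first two pigments plus noise.
      A = rng.random((10, 2))
      mixtures = A @ refs[:2] + 0.01 * rng.standard_normal((10, n_points))

      # ICA on the transposed matrix recovers spectrum-shaped independent components.
      ica = FastICA(n_components=2, random_state=0)
      est_spectra = ica.fit_transform(mixtures.T).T   # shape (2, n_points)

      for comp in est_spectra:
          corr = [abs(np.corrcoef(comp, r)[0, 1]) for r in refs]
          best = int(np.argmax(corr))
          print(f"component matched to {names[best]} (|r| = {corr[best]:.2f})")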

  14. Comparative Analysis of the Volatile Components of Agrimonia eupatoria from Leaves and Roots by Gas Chromatography-Mass Spectrometry and Multivariate Curve Resolution

    Directory of Open Access Journals (Sweden)

    Xiao-Liang Feng

    2013-01-01

    Gas chromatography-mass spectrometry and multivariate curve resolution were applied to the differential analysis of the volatile components in Agrimonia eupatoria specimens from different plant parts. After extraction by the water distillation method, the volatile components in Agrimonia eupatoria leaves and roots were detected by GC-MS. The qualitative and quantitative analysis of the volatile components in the main root of Agrimonia eupatoria was then completed with the help of subwindow factor analysis, which resolves the two-dimensional original data into mass spectra and chromatograms. 68 of 87 separated constituents in the total ion chromatogram of the volatile components were identified and quantified, accounting for about 87.03% of the total content. Then, the common peaks in the leaf were extracted with the orthogonal projection resolution method. Among the components determined, there were 52 components coexisting in the studied samples, although the relative content of each component showed some differences. The results showed a fair consistency in their GC-MS fingerprints. This was the first time the orthogonal projection method was applied to compare different plant parts of Agrimonia eupatoria, and it reduced the burden of qualitative analysis as well as its subjectivity. The obtained results proved the combined approach powerful for the analysis of complex Agrimonia eupatoria samples. The developed method can be used for further study and quality control of Agrimonia eupatoria.

  15. Comparative Analysis of the Volatile Components of Agrimonia eupatoria from Leaves and Roots by Gas Chromatography-Mass Spectrometry and Multivariate Curve Resolution.

    Science.gov (United States)

    Feng, Xiao-Liang; He, Yun-Biao; Liang, Yi-Zeng; Wang, Yu-Lin; Huang, Lan-Fang; Xie, Jian-Wei

    2013-01-01

    Gas chromatography-mass spectrometry and multivariate curve resolution were applied to the differential analysis of the volatile components in Agrimonia eupatoria specimens from different plant parts. After extraction by the water distillation method, the volatile components in Agrimonia eupatoria leaves and roots were detected by GC-MS. The qualitative and quantitative analysis of the volatile components in the main root of Agrimonia eupatoria was then completed with the help of subwindow factor analysis, which resolves the two-dimensional original data into mass spectra and chromatograms. 68 of 87 separated constituents in the total ion chromatogram of the volatile components were identified and quantified, accounting for about 87.03% of the total content. Then, the common peaks in the leaf were extracted with the orthogonal projection resolution method. Among the components determined, there were 52 components coexisting in the studied samples, although the relative content of each component showed some differences. The results showed a fair consistency in their GC-MS fingerprints. This was the first time the orthogonal projection method was applied to compare different plant parts of Agrimonia eupatoria, and it reduced the burden of qualitative analysis as well as its subjectivity. The obtained results proved the combined approach powerful for the analysis of complex Agrimonia eupatoria samples. The developed method can be used for further study and quality control of Agrimonia eupatoria.

  16. Principal Component Analysis as an Efficient Performance ...

    African Journals Online (AJOL)

    This paper uses the principal component analysis (PCA) to examine the possibility of using few explanatory variables (X's) to explain the variation in Y. It applied PCA to assess the performance of students in Abia State Polytechnic, Aba, Nigeria. This was done by estimating the coefficients of eight explanatory variables in a ...

  17. Comparison of multipoint linkage analyses for quantitative traits in the CEPH data: parametric LOD scores, variance components LOD scores, and Bayes factors.

    Science.gov (United States)

    Sung, Yun Ju; Di, Yanming; Fu, Audrey Q; Rothstein, Joseph H; Sieh, Weiva; Tong, Liping; Thompson, Elizabeth A; Wijsman, Ellen M

    2007-01-01

    We performed multipoint linkage analyses with multiple programs and models for several gene expression traits in the Centre d'Etude du Polymorphisme Humain families. All analyses provided consistent results for both peak location and shape. Variance-components (VC) analysis gave wider peaks and Bayes factors gave fewer peaks. Among programs from the MORGAN package, lm_multiple performed better than lm_markers, resulting in less Markov-chain Monte Carlo (MCMC) variability between runs, and the program lm_twoqtl provided higher LOD scores by also including either a polygenic component or an additional quantitative trait locus.

  18. An application of principal component analysis to the clavicle and clavicle fixation devices

    Directory of Open Access Journals (Sweden)

    Fitzpatrick David

    2010-03-01

    Abstract Background Principal component analysis (PCA) enables the building of statistical shape models of bones and joints. This has been used in conjunction with computer assisted surgery in the past. However, PCA of the clavicle has not been performed. Using PCA, we present a novel method that examines the major modes of size and three-dimensional shape variation in male and female clavicles and suggests a method of grouping the clavicle into size and shape categories. Materials and methods Twenty-one high-resolution computerized tomography scans of the clavicle were reconstructed and analyzed using a specifically developed statistical software package. After performing statistical shape analysis, PCA was applied to study the factors that account for anatomical variation. Results The first principal component representing size accounted for 70.5 percent of anatomical variation. The addition of a further three principal components accounted for almost 87 percent. Using statistical shape analysis, clavicles in males have a greater lateral depth and are longer, wider and thicker than in females. However, the sternal angle in females is larger than in males. PCA confirmed these differences between genders but also noted that men exhibit greater variance and classified clavicles into five morphological groups. Discussion and conclusions This unique approach is the first that standardizes a clavicular orientation. It provides information that is useful to both the biomedical engineer and the clinician. Other applications include implant design with regard to modifying current or designing future clavicle fixation devices. Our findings support the need for further development of clavicle fixation devices and the questioning of whether gender-specific devices are necessary.

  19. Sparse logistic principal components analysis for binary data

    KAUST Repository

    Lee, Seokho

    2010-09-01

    We develop a new principal components analysis (PCA) type dimension reduction method for binary data. Different from the standard PCA, which is defined on the observed data, the proposed PCA is defined on the logit transform of the success probabilities of the binary observations. Sparsity is introduced to the principal component (PC) loading vectors for enhanced interpretability and more stable extraction of the principal components. Our sparse PCA is formulated as solving an optimization problem with a criterion function motivated by a penalized Bernoulli likelihood. A Majorization-Minimization algorithm is developed to efficiently solve the optimization problem. The effectiveness of the proposed sparse logistic PCA method is illustrated by application to a single nucleotide polymorphism data set and a simulation study. © Institute of Mathematical Statistics, 2010.

  20. Heritable patterns of tooth decay in the permanent dentition: principal components and factor analyses.

    Science.gov (United States)

    Shaffer, John R; Feingold, Eleanor; Wang, Xiaojing; Tcuenco, Karen T; Weeks, Daniel E; DeSensi, Rebecca S; Polk, Deborah E; Wendell, Steve; Weyant, Robert J; Crout, Richard; McNeil, Daniel W; Marazita, Mary L

    2012-03-09

    Dental caries is the result of a complex interplay among environmental, behavioral, and genetic factors, with distinct patterns of decay likely due to specific etiologies. Therefore, global measures of decay, such as the DMFS index, may not be optimal for identifying risk factors that manifest as specific decay patterns, especially if the risk factors such as genetic susceptibility loci have small individual effects. We used two methods to extract patterns of decay from surface-level caries data in order to generate novel phenotypes with which to explore the genetic regulation of caries. The 128 tooth surfaces of the permanent dentition were scored as carious or not by intra-oral examination for 1,068 participants aged 18 to 75 years from 664 biological families. Principal components analysis (PCA) and factor analysis (FA), two methods of identifying underlying patterns without a priori surface classifications, were applied to our data. The three strongest caries patterns identified by PCA recaptured variation represented by DMFS index (correlation, r = 0.97), pit and fissure surface caries (r = 0.95), and smooth surface caries (r = 0.89). However, together, these three patterns explained only 37% of the variability in the data, indicating that a priori caries measures are insufficient for fully quantifying caries variation. In comparison, the first pattern identified by FA was strongly correlated with pit and fissure surface caries (r = 0.81), but other identified patterns, including a second pattern representing caries of the maxillary incisors, were not representative of any previously defined caries indices. Some patterns identified by PCA and FA were heritable (h(2) = 30-65%, p = 0.043-0.006), whereas other patterns were not, indicating both genetic and non-genetic etiologies of individual decay patterns. This study demonstrates the use of decay patterns as novel phenotypes to assist in understanding the multifactorial nature of dental caries.

  1. Determinants of job stress in chemical process industry: A factor analysis approach.

    Science.gov (United States)

    Menon, Balagopal G; Praveensal, C J; Madhu, G

    2015-01-01

    Job stress is one of the active domains in industrial safety research. Job stress can result in accidents and health-related issues for workers in chemical process industries. Hence it is important to measure the level of job stress in workers so as to mitigate it and avoid safety-related problems in these industries. The objective of this study is to determine the job stress factors in the chemical process industry in Kerala state, India. The study also aims to propose a comprehensive model and an instrument framework for measuring job stress levels in the chemical process industries in Kerala, India. The data were collected through a questionnaire survey conducted in chemical process industries in Kerala. The data collected from 1,197 surveys were subjected to principal component and confirmatory factor analysis to develop the job stress factor structure. The factor analysis revealed 8 factors that influence job stress in process industries. It was also found that job stress in employees is most influenced by role ambiguity and least by the work environment. The study developed an instrument framework for measuring job stress utilizing exploratory factor analysis and structural equation modeling.
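
    As a hedged illustration of the exploratory step only (the study's item set and software are not specified here), the sketch below fits an eight-factor model with varimax rotation to standardized questionnaire items using scikit-learn; the item matrix and the |0.4| loading cut-off are assumptions:

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import FactorAnalysis   # rotation="varimax" needs scikit-learn >= 0.24

        rng = np.random.default_rng(1)
        items = rng.integers(1, 6, size=(1197, 40)).astype(float)    # 1197 respondents x 40 Likert items (placeholder)

        Z = StandardScaler().fit_transform(items)
        fa = FactorAnalysis(n_components=8, rotation="varimax").fit(Z)
        loadings = fa.components_.T                                  # item-by-factor loading matrix
        strong_items = np.where(np.abs(loadings[:, 0]) > 0.4)[0]     # items loading strongly on the first factor
        print("items defining factor 1:", strong_items)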

  2. Interpretable functional principal component analysis.

    Science.gov (United States)

    Lin, Zhenhua; Wang, Liangliang; Cao, Jiguo

    2016-09-01

    Functional principal component analysis (FPCA) is a popular approach to explore major sources of variation in a sample of random curves. These major sources of variation are represented by functional principal components (FPCs). The intervals where the values of FPCs are significant are interpreted as where sample curves have major variations. However, these intervals are often hard for naïve users to identify, because of the vague definition of "significant values". In this article, we develop a novel penalty-based method to derive FPCs that are only nonzero precisely in the intervals where the values of FPCs are significant, whence the derived FPCs possess better interpretability than the FPCs derived from existing methods. To compute the proposed FPCs, we devise an efficient algorithm based on projection deflation techniques. We show that the proposed interpretable FPCs are strongly consistent and asymptotically normal under mild conditions. Simulation studies confirm that with a competitive performance in explaining variations of sample curves, the proposed FPCs are more interpretable than the traditional counterparts. This advantage is demonstrated by analyzing two real datasets, namely, electroencephalography data and Canadian weather data. © 2015, The International Biometric Society.

  3. AMINOSILICA NANO- AND SUBMICROSPHERES: ANALYSIS OF FACTORS INFLUENCING MORPHOLOGY, STRUCTURE AND PROPERTIES

    Directory of Open Access Journals (Sweden)

    Inna Melnyk

    2014-06-01

    Full Text Available The current paper focuses on the analysis of the influence of the main factors (stages of the synthesis, the ratio of the reacting components, the order of their introduction, the concentration of water and ammonia, and the synthesis temperature) on the morphology, size and content of functional groups of aminosilica nano- and submicrospheres. Recommendations for the synthesis of particles with predetermined properties are given. It is shown that the ratio of the reacting components mainly affects the content of 3-aminopropyl functional groups, while the temperature of the hydrolytic polycondensation reaction affects the size of the particles.

  4. Analysis of factors affecting baseline SF-36 Mental Component Summary in Adult Spinal Deformity and its impact on surgical outcomes.

    Science.gov (United States)

    Mmopelwa, Tiro; Ayhan, Selim; Yuksel, Selcen; Nabiyev, Vugar; Niyazi, Asli; Pellise, Ferran; Alanay, Ahmet; Sanchez Perez Grueso, Francisco Javier; Kleinstuck, Frank; Obeid, Ibrahim; Acaroglu, Emre

    2018-03-01

    To identify the factors that affect SF-36 mental component summary (MCS) in patients with adult spinal deformity (ASD) at the time of presentation, and to analyse the effect of SF-36 MCS on clinical outcomes in surgically treated patients. Prospectively collected data from a multicentric ASD database was analysed for baseline parameters. Then, the same database for surgically treated patients with a minimum of 1-year follow-up was analysed to see the effect of baseline SF-36 MCS on treatment results. A clinically useful SF-36 MCS was determined by ROC Curve analysis. A total of 229 patients with the baseline parameters were analysed. A strong correlation between SF-36 MCS and SRS-22, ODI, gender, and diagnosis was found (p < 0.05). Treatment results in surgically treated patients were not found to depend on baseline SF-36 MCS. The factors affecting baseline SF-36 MCS in an ASD population are other HRQOL parameters such as SRS-22 and ODI as well as the baseline thoracic kyphosis and gender. This study has also demonstrated that baseline SF-36 MCS does not necessarily have any effect on the treatment results by surgery as assessed by SRS-22 or ODI. Level III, prognostic study. Copyright © 2018 Turkish Association of Orthopaedics and Traumatology. Production and hosting by Elsevier B.V. All rights reserved.

  5. Analysis of European Union Economy in Terms of GDP Components

    Directory of Open Access Journals (Sweden)

    Simona VINEREAN

    2013-12-01

    Full Text Available The impact of the crisis on national economies has been a subject of analysis and interest for a wide variety of research studies. Thus, starting from the GDP composition, the present research presents an analysis of the impact on European economies, at the EU level, of the events that followed the crisis of 2007–2008. Firstly, the research highlighted the existence of two groups of countries in the European Union in 2012, namely segments compiled in relation to the structure of the GDP's components. In the second stage of the research, a factor analysis was performed on the resulting segments, which showed that the economies of cluster A rely more on personal consumption than the economies of cluster B, while in terms of government consumption the situation is reversed. Thus, a different approach to the role of fiscal policy in the economy can be noted between the two groups of countries, with a greater emphasis on savings in cluster B. Moreover, besides the two resulting groups of countries, Ireland and Luxembourg stood out because they did not fit into either segment and their economies are based, to a large extent, on a positive external balance.

  6. The influence of iliotibial band syndrome history on running biomechanics examined via principal components analysis.

    Science.gov (United States)

    Foch, Eric; Milner, Clare E

    2014-01-03

    Iliotibial band syndrome (ITBS) is a common knee overuse injury among female runners. Atypical discrete trunk and lower extremity biomechanics during running may be associated with the etiology of ITBS. Examining discrete data points limits the interpretation of a waveform to a single value. Characterizing entire kinematic and kinetic waveforms may provide additional insight into biomechanical factors associated with ITBS. Therefore, the purpose of this cross-sectional investigation was to determine whether female runners with previous ITBS exhibited differences in kinematics and kinetics compared to controls using a principal components analysis (PCA) approach. Forty participants comprised two groups: previous ITBS and controls. Principal component scores were retained for the first three principal components and were analyzed using independent t-tests. The retained principal components accounted for 93-99% of the total variance within each waveform. Runners with previous ITBS exhibited low principal component one scores for frontal plane hip angle. Principal component one accounted for the overall magnitude in hip adduction which indicated that runners with previous ITBS assumed less hip adduction throughout stance. No differences in the remaining retained principal component scores for the waveforms were detected among groups. A smaller hip adduction angle throughout the stance phase of running may be a compensatory strategy to limit iliotibial band strain. This running strategy may have persisted after ITBS symptoms subsided. © 2013 Published by Elsevier Ltd.
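
    A minimal sketch of this kind of waveform PCA, with synthetic stance-phase curves standing in for the measured hip adduction angles (group sizes and the 101-point time normalization are assumptions based on common practice, not the study's data):

        import numpy as np
        from scipy import stats
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(2)
        waveforms = rng.normal(size=(40, 101))      # 40 runners x 101 time-normalized stance samples
        groups = np.array([0] * 20 + [1] * 20)      # 0 = control, 1 = previous ITBS (placeholder labels)

        pca = PCA(n_components=3)
        scores = pca.fit_transform(waveforms)       # retained principal component scores per runner
        for pc in range(3):
            t, p = stats.ttest_ind(scores[groups == 0, pc], scores[groups == 1, pc])
            print(f"PC{pc + 1}: variance explained {pca.explained_variance_ratio_[pc]:.2f}, p = {p:.3f}")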

  7. Principal Component Clustering Approach to Teaching Quality Discriminant Analysis

    Science.gov (United States)

    Xian, Sidong; Xia, Haibo; Yin, Yubo; Zhai, Zhansheng; Shang, Yan

    2016-01-01

    Teaching quality is the lifeline of the higher education. Many universities have made some effective achievement about evaluating the teaching quality. In this paper, we establish the Students' evaluation of teaching (SET) discriminant analysis model and algorithm based on principal component clustering analysis. Additionally, we classify the SET…

  8. Determining the Number of Factors in P-Technique Factor Analysis

    Science.gov (United States)

    Lo, Lawrence L.; Molenaar, Peter C. M.; Rovine, Michael

    2017-01-01

    Determining the number of factors is a critical first step in exploratory factor analysis. Although various criteria and methods for determining the number of factors have been evaluated in the usual between-subjects R-technique factor analysis, there is still question of how these methods perform in within-subjects P-technique factor analysis. A…
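
    Two of the retention rules commonly compared in this literature can be written down in a few lines; the sketch below applies the Kaiser criterion and a basic Horn's parallel analysis to synthetic data (the data set, the 200 replicates and the 95th percentile threshold are assumptions for illustration):

        import numpy as np

        rng = np.random.default_rng(3)
        X = rng.normal(size=(120, 10))                                 # 120 observations x 10 variables (placeholder)
        eig = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]   # eigenvalues, largest first

        n_kaiser = int(np.sum(eig > 1.0))                              # Kaiser rule: eigenvalues greater than 1

        rand_eigs = np.array([                                         # parallel analysis: eigenvalues of random data
            np.linalg.eigvalsh(np.corrcoef(rng.normal(size=X.shape), rowvar=False))[::-1]
            for _ in range(200)
        ])
        threshold = np.percentile(rand_eigs, 95, axis=0)
        n_parallel = int(np.sum(eig > threshold))
        print("Kaiser criterion:", n_kaiser, "factors; parallel analysis:", n_parallel, "factors")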

  9. A Beginner’s Guide to Factor Analysis: Focusing on Exploratory Factor Analysis

    Directory of Open Access Journals (Sweden)

    An Gie Yong

    2013-10-01

    Full Text Available The following paper discusses exploratory factor analysis and gives an overview of the statistical technique and how it is used in various research designs and applications. A basic outline of how the technique works and its criteria, including its main assumptions are discussed as well as when it should be used. Mathematical theories are explored to enlighten students on how exploratory factor analysis works, an example of how to run an exploratory factor analysis on SPSS is given, and finally a section on how to write up the results is provided. This will allow readers to develop a better understanding of when to employ factor analysis and how to interpret the tables and graphs in the output.

  10. Development of computational methods of design by analysis for pressure vessel components

    International Nuclear Information System (INIS)

    Bao Shiyi; Zhou Yu; He Shuyan; Wu Honglin

    2005-01-01

    Stress classification is not only one of the key steps when a pressure vessel component is designed by analysis, but also a difficulty that has long puzzled engineers and designers. At present, several computational methods of design by analysis have been developed and applied for calculating and categorizing the stress field of pressure vessel components, such as Stress Equivalent Linearization, the Two-Step Approach, the Primary Structure method, the Elastic Compensation method, the GLOSS R-Node method and so on. Moreover, the ASME Code also gives an inelastic method of design by analysis for limiting gross plastic deformation only. When pressure vessel components are designed by analysis, there are sometimes large differences between the results obtained with the different calculation and analysis methods mentioned above. This is the main reason limiting wide application of the design-by-analysis approach. Recently, a new approach, presented in the proposal for a European standard, CEN's unfired pressure vessel standard EN 13445-3, tries to avoid the problems of stress classification by analyzing the various failure mechanisms of the pressure vessel structure directly, based on elastic-plastic theory. In this paper, some of the stress classification methods mentioned above are described briefly, and the computational methods cited in the European pressure vessel standard, such as the Deviatoric Map and nonlinear analysis methods (plastic analysis and limit analysis), are outlined. Furthermore, the characteristics of the computational methods of design by analysis are summarized to aid in selecting the proper computational method when designing pressure vessel components by analysis. (authors)

  11. Analysis of diffusivity of the oscillating reaction components in a microreactor system

    Directory of Open Access Journals (Sweden)

    Martina Šafranko

    2017-01-01

    Full Text Available When oscillating reactions are performed, periodic changes in the concentrations of reactants, intermediates, and products take place. Because of these periodic concentration changes, information about the diffusivity of the components involved in oscillating reactions is very important for their control. Their non-linear dynamics also make oscillating reactions very interesting for analysis in different reactor systems. In this paper, the diffusivity of the oscillating reaction components was analysed in a microreactor, with the aim of identifying the limiting component. The geometry of the microreactor microchannel and a well-defined flow profile ensure optimal conditions for the analysis of diffusion phenomena, because diffusion profiles in a microreactor depend only on the residence time. The analysis was performed in a microreactor equipped with 2 Y-shaped inlets and 2 Y-shaped outlets, with an active volume of V = 4 μL, at different residence times.

  12. Principal Component Analysis: Most Favourite Tool in Chemometrics

    Indian Academy of Sciences (India)

    Abstract. Principal component analysis (PCA) is the most commonly used chemometric technique. It is an unsupervised pattern recognition technique. PCA has found applications in chemistry, biology, medicine and economics. The present work attempts to explain how PCA works and how its results can be interpreted.

  13. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    Science.gov (United States)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

    Traditional reliability evaluation of machine center components overlooks failure propagation, so the component reliability model exhibits deviation and the evaluation result is low. To rectify these problems, a new reliability evaluation method based on cascading failure analysis and failure influenced degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure influenced degrees of the system components are assessed by the adjacency matrix and its transpose, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, and the results show the following: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component is positively correlated with its failure influenced degree, which provides a theoretical basis for reliability allocation in machine center systems.
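
    A hedged sketch of the graph step only: a directed "failure of A propagates to B" graph is ranked with PageRank on the reversed edges as a proxy for each component's failure influenced degree (the edge list and damping factor are illustrative assumptions, not the machine center data or the exact weighting used in the paper):

        import networkx as nx

        edges = [("hydraulic", "spindle"), ("hydraulic", "turret"),
                 ("electrical", "hydraulic"), ("electrical", "controller"),
                 ("controller", "spindle"), ("spindle", "tool_magazine")]
        G = nx.DiGraph(edges)                       # edge A -> B: failure of A can trigger failure of B

        # Reversing the edges makes components that trigger many downstream failures rank highly.
        influence = nx.pagerank(G.reverse(), alpha=0.85)
        for comp, score in sorted(influence.items(), key=lambda kv: -kv[1]):
            print(f"{comp:14s} {score:.3f}")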

  14. Path Analysis of Grain Yield and Yield Components and Some Agronomic Traits in Bread Wheat

    Directory of Open Access Journals (Sweden)

    Mohsen Janmohammadi

    2014-01-01

    Full Text Available The development of new bread wheat cultivars needs efficient tools to monitor trait associations in a breeding program. This investigation aimed to characterize grain yield components and some agronomic traits related to bread wheat grain yield. The efficiency of a breeding program depends mainly on the direction of the correlation between different traits and the relative importance of each component contributing to grain yield. Correlation and path analyses were carried out on 56 bread wheat genotypes grown under the field conditions of Maragheh, Iran. Observations were recorded on 18 wheat traits, and correlation coefficient analysis revealed that grain yield was positively correlated with stem diameter, spike length, floret number, spikelet number, grain diameter, grain length and 1000-seed weight. According to the variance inflation factor (VIF) and tolerance as multicollinearity statistics, there are no consistent problematic relationships among the variables, and all traits could be considered first-order variables (Model I) with grain yield as the response variable, owing to the low multicollinearity of all measured traits. In the path coefficient analysis, grain yield was the dependent variable and spikelet number and 1000-seed weight were the independent ones. Our results indicated that the number of spikelets per spike, leaf width and 1000-seed weight, followed by grain length, grain diameter and grain number per spike, were the traits related to higher grain yield. These traits, along with their indirect causal factors, should be considered simultaneously as effective selection criteria for developing high-yielding genotypes because of their direct positive contribution to grain yield.
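
    A small sketch of the multicollinearity check and the path-style decomposition described above, using statsmodels on synthetic data (the three trait names are placeholders standing in for the 18 measured traits):

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.stats.outliers_influence import variance_inflation_factor

        rng = np.random.default_rng(4)
        df = pd.DataFrame(rng.normal(size=(56, 3)),
                          columns=["spikelet_number", "seed_weight_1000", "grain_length"])
        df["grain_yield"] = (0.5 * df["spikelet_number"] + 0.3 * df["seed_weight_1000"]
                             + rng.normal(scale=0.5, size=56))

        X = sm.add_constant(df[["spikelet_number", "seed_weight_1000", "grain_length"]])
        vifs = {c: variance_inflation_factor(X.values, i) for i, c in enumerate(X.columns) if c != "const"}
        print("VIF:", {k: round(v, 2) for k, v in vifs.items()})

        z = (df - df.mean()) / df.std()              # standardized variables: coefficients = direct path effects
        fit = sm.OLS(z["grain_yield"], z[["spikelet_number", "seed_weight_1000", "grain_length"]]).fit()
        print(fit.params.round(3))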

  15. Capital Cost Optimization for Prefabrication: A Factor Analysis Evaluation Model

    Directory of Open Access Journals (Sweden)

    Hong Xue

    2018-01-01

    Full Text Available High capital cost is a significant hindrance to the promotion of prefabrication. In order to optimize cost management and reduce capital cost, this study aims to explore the latent factors and a factor analysis evaluation model. Semi-structured interviews were conducted to explore potential variables and then a questionnaire survey was employed to collect professionals' views on their effects. After data collection, exploratory factor analysis was adopted to explore the latent factors. Seven latent factors were identified, including “Management Index”, “Construction Dissipation Index”, “Productivity Index”, “Design Efficiency Index”, “Transport Dissipation Index”, “Material increment Index” and “Depreciation amortization Index”. With these latent factors, a factor analysis evaluation model (FAEM), divided into a factor analysis model (FAM) and a comprehensive evaluation model (CEM), was established. The FAM was used to explore the effect of observed variables on the high capital cost of prefabrication, while the CEM was used to evaluate the comprehensive cost management level of prefabrication projects. Case studies were conducted to verify the models. The results revealed that collaborative management had a positive effect on the capital cost of prefabrication. Material increment costs and labor costs had significant impacts on production cost. This study demonstrated the potential of on-site management and standardized design to reduce capital cost. Hence, collaborative management is necessary for cost management of prefabrication. Innovation and detailed design are needed to improve cost performance. New forms of precast component factories can be explored to reduce transportation cost. Meanwhile, targeted strategies can be adopted for different prefabrication projects. The findings optimized the capital cost and improved the cost performance by providing an evaluation and optimization model, which helps managers to

  16. Principal Component Analysis - A Powerful Tool in Computing Marketing Information

    Directory of Open Access Journals (Sweden)

    Constantin C.

    2014-12-01

    Full Text Available This paper reports instrumental research on a powerful multivariate data analysis method that researchers can use to obtain valuable information for decision makers who need to solve the marketing problems a company faces. The literature stresses the need to avoid the multicollinearity phenomenon in multivariate analysis and highlights the ability of Principal Component Analysis (PCA) to reduce a number of variables that may be correlated with each other to a small number of uncorrelated principal components. In this respect, the paper presents step by step the process of applying PCA in marketing research when a large number of naturally collinear variables is used.

  17. Human reliability in non-destructive inspections of nuclear power plant components: modeling and analysis

    Energy Technology Data Exchange (ETDEWEB)

    Vasconcelos, Vanderley de; Soares, Wellington Antonio; Marques, Raíssa Oliveira; Silva Júnior, Silvério Ferreira da; Raso, Amanda Laureano, E-mail: vasconv@cdtn.br, E-mail: soaresw@cdtn.br, E-mail: raissaomarques@gmail.com, E-mail: silvasf@cdtn.br, E-mail: amandaraso@hotmail.com [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2017-07-01

    Non-destructive inspection (NDI) is one of the key elements in ensuring quality of engineering systems and their safe use. NDI is a very complex task, during which the inspectors have to rely on their sensory, perceptual, cognitive, and motor skills. It requires high vigilance once it is often carried out on large components, over a long period of time, and in hostile environments and restriction of workplace. A successful NDI requires careful planning, choice of appropriate NDI methods and inspection procedures, as well as qualified and trained inspection personnel. A failure of NDI to detect critical defects in safety-related components of nuclear power plants, for instance, may lead to catastrophic consequences for workers, public and environment. Therefore, ensuring that NDI methods are reliable and capable of detecting all critical defects is of utmost importance. Despite increased use of automation in NDI, human inspectors, and thus human factors, still play an important role in NDI reliability. Human reliability is the probability of humans conducting specific tasks with satisfactory performance. Many techniques are suitable for modeling and analyzing human reliability in NDI of nuclear power plant components. Among these can be highlighted Failure Modes and Effects Analysis (FMEA) and THERP (Technique for Human Error Rate Prediction). The application of these techniques is illustrated in an example of qualitative and quantitative studies to improve typical NDI of pipe segments of a core cooling system of a nuclear power plant, through acting on human factors issues. (author)

  18. Human reliability in non-destructive inspections of nuclear power plant components: modeling and analysis

    International Nuclear Information System (INIS)

    Vasconcelos, Vanderley de; Soares, Wellington Antonio; Marques, Raíssa Oliveira; Silva Júnior, Silvério Ferreira da; Raso, Amanda Laureano

    2017-01-01

    Non-destructive inspection (NDI) is one of the key elements in ensuring quality of engineering systems and their safe use. NDI is a very complex task, during which the inspectors have to rely on their sensory, perceptual, cognitive, and motor skills. It requires high vigilance once it is often carried out on large components, over a long period of time, and in hostile environments and restriction of workplace. A successful NDI requires careful planning, choice of appropriate NDI methods and inspection procedures, as well as qualified and trained inspection personnel. A failure of NDI to detect critical defects in safety-related components of nuclear power plants, for instance, may lead to catastrophic consequences for workers, public and environment. Therefore, ensuring that NDI methods are reliable and capable of detecting all critical defects is of utmost importance. Despite increased use of automation in NDI, human inspectors, and thus human factors, still play an important role in NDI reliability. Human reliability is the probability of humans conducting specific tasks with satisfactory performance. Many techniques are suitable for modeling and analyzing human reliability in NDI of nuclear power plant components. Among these can be highlighted Failure Modes and Effects Analysis (FMEA) and THERP (Technique for Human Error Rate Prediction). The application of these techniques is illustrated in an example of qualitative and quantitative studies to improve typical NDI of pipe segments of a core cooling system of a nuclear power plant, through acting on human factors issues. (author)

  19. A Note on McDonald's Generalization of Principal Components Analysis

    Science.gov (United States)

    Shine, Lester C., II

    1972-01-01

    It is shown that McDonald's generalization of Classical Principal Components Analysis to groups of variables maximally channels the total variance of the original variables through the groups of variables acting as groups. An equation is obtained for determining the vectors of correlations of the L2 components with the original variables.…

  20. Investigating product development strategy in beverage industry using factor analysis

    Directory of Open Access Journals (Sweden)

    Naser Azad

    2013-03-01

    Full Text Available Selecting a product development strategy that is associated with the company's current service or product innovation, based on customers' needs and a changing environment, plays an important role in increasing demand, market share, sales and profits. It is therefore important to extract the variables associated with product development that improve the performance measurement of firms. This paper investigates important factors influencing product development strategies using factor analysis. The proposed model considers 36 factors and, using factor analysis, extracts the six most influential ones: information sharing, intelligence information, exposure strategy, differentiation, research and development strategy and market survey. The first strategy, partnership, includes sub-factors such as product development partnership, partnership with foreign firms, customers' perception of competitors' products, customer involvement in product development, inter-agency coordination, a customer-oriented approach to innovation and transmission of product development change, with inter-agency coordination considered the most important. Internal strengths are the most influential factor impacting the second strategy, intelligence information. The third factor, introducing strategy, includes four sub-criteria, of which consumer buying behavior is the most influential. Differentiation is the next important factor, with five components, where knowledge and expertise in product innovation is the most important. Research and development strategy has four sub-criteria, where reducing the product development cycle is the most influential factor, and finally, market survey strategy is the last important factor, with three sub-factors, where finding new markets plays the most important role.

  1. Confirmatory Factor Analysis of the Universiti Sains Malaysia Emotional Quotient Inventory Among Medical Students in Malaysia

    Directory of Open Access Journals (Sweden)

    Wan Nor Arifin

    2016-05-01

    Full Text Available The Universiti Sains Malaysia Emotional Quotient Inventory (USMEQ-i is a Malay-language emotional intelligence (EI inventory that was based on a mixed-model approach of EI. It was specifically developed and validated for use among medical course applicants. However, evidence to support its use among medical students is inadequate. This study aims to provide further construct validity evidence for the USMEQ-i among medical students through confirmatory factor analysis (CFA. A cross-sectional study was carried out on a sample of 479 medical students in Universiti Sains Malaysia (USM. After a preliminary analysis, data from only 317 respondents were found suitable for inclusion in CFA. CFA was performed using the maximum likelihood estimation method with bootstrapping due to the nonnormality of items at the multivariate level. The results of the analysis support the two-factor model of the EI component and the one-factor model of the faking component. However, the USMEQ-i should be administered with caution until further cross-validation studies are conducted among students in other medical schools in Malaysia.

  2. Functional principal component analysis of glomerular filtration rate curves after kidney transplant.

    Science.gov (United States)

    Dong, Jianghu J; Wang, Liangliang; Gill, Jagbir; Cao, Jiguo

    2017-01-01

    This article is motivated by some longitudinal clinical data of kidney transplant recipients, where kidney function progression is recorded as the estimated glomerular filtration rates at multiple time points post kidney transplantation. We propose to use the functional principal component analysis method to explore the major source of variations of glomerular filtration rate curves. We find that the estimated functional principal component scores can be used to cluster glomerular filtration rate curves. Ordering functional principal component scores can detect abnormal glomerular filtration rate curves. Finally, functional principal component analysis can effectively estimate missing glomerular filtration rate values and predict future glomerular filtration rate values.
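
    When curves are observed on a common dense grid, FPCA reduces to PCA of the discretized curves; the sketch below shows only that simplified case with synthetic eGFR-like trajectories (the conditional-expectation machinery the paper uses for sparse, irregular measurements is not reproduced here):

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(5)
        months = np.arange(0, 36)                                 # follow-up grid in months post-transplant
        n = 100
        base = 55 + 10 * rng.normal(size=(n, 1))                  # patient-specific baseline level
        drift = rng.normal(scale=0.3, size=(n, 1)) * months       # patient-specific slope
        curves = base + drift + rng.normal(scale=2.0, size=(n, months.size))

        fpca = PCA(n_components=2).fit(curves)
        scores = fpca.transform(curves)                           # FPC scores per patient, usable for clustering
        print("variance explained:", np.round(fpca.explained_variance_ratio_, 3))
        print("FPC scores of patient 0:", np.round(scores[0], 2))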

  3. Computer compensation for NMR quantitative analysis of trace components

    International Nuclear Information System (INIS)

    Nakayama, T.; Fujiwara, Y.

    1981-01-01

    A computer program has been written that determines trace components and separates overlapping components in multicomponent NMR spectra. This program uses the Lorentzian curve as a theoretical curve of NMR spectra. The coefficients of the Lorentzian are determined by the method of least squares. Systematic errors such as baseline/phase distortion are compensated and random errors are smoothed by taking moving averages, so that these processes contribute substantially to decreasing the accumulation time of spectral data. The accuracy of quantitative analysis of trace components has been improved by two significant figures. This program was applied to determining the abundance of 13C and the saponification degree of PVA.
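
    A minimal sketch of the core fitting step with scipy: a Lorentzian line shape fitted by least squares to one simulated peak (the peak parameters, noise level and baseline term are illustrative assumptions, not the program described above):

        import numpy as np
        from scipy.optimize import curve_fit

        def lorentzian(x, amp, x0, fwhm, baseline):
            # Lorentzian with amplitude amp, center x0 and full width at half maximum fwhm
            return amp * (fwhm / 2) ** 2 / ((x - x0) ** 2 + (fwhm / 2) ** 2) + baseline

        x = np.linspace(-5.0, 5.0, 500)
        y = lorentzian(x, 1.0, 0.2, 0.8, 0.05) + np.random.default_rng(6).normal(scale=0.02, size=x.size)

        popt, _ = curve_fit(lorentzian, x, y, p0=[1.0, 0.0, 1.0, 0.0])
        area = np.pi * popt[0] * popt[2] / 2          # analytic peak area: pi * amp * fwhm / 2
        print("fitted amp, x0, fwhm, baseline:", np.round(popt, 3), "area:", round(area, 3))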

  4. Factors associated with inadequate receipt of components and use of antenatal care services in Nigeria: a population-based study.

    Science.gov (United States)

    Agho, Kingsley E; Ezeh, Osita K; Ogbo, Felix A; Enoma, Anthony I; Raynes-Greenow, Camille

    2018-05-01

    Antenatal care (ANC) is an essential intervention to improve maternal and child health. In Nigeria, no population-based studies have investigated predictors of poor receipt of components and uptake of ANC at the national level to inform targeted maternal health initiatives. This study aimed to examine factors associated with inadequate receipt of components and use of ANC in Nigeria. The study used information on 20 405 singleton live-born infants of the mothers from the 2013 Nigeria Demographic and Health Survey. Multivariable logistic regression analyses that adjusted for cluster and survey weights were used to determine potential factors associated with inadequate receipt of components and use of ANC. The prevalence of underutilization and inadequate components of ANC were 47.5% (95% CI: 45.2 to 49.9) and 92.6% (95% CI: 91.8 to 93.2), respectively. Common risk factors for underutilization and inadequate components of ANC in Nigeria included residence in rural areas, no maternal education, maternal unemployment, long distance to health facilities and less maternal exposure to the media. Other risk factors for underutilization of ANC were home births and low household wealth. The study suggests that underutilization and inadequate receipt of the components of ANC were associated with amenable factors in Nigeria. Subsidized maternal services and well-guided health educational messages or financial support from the government will help to improve uptake of ANC services.
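
    A hedged sketch of the modelling step: a multivariable logistic regression for inadequate ANC fitted with statsmodels on synthetic data (variable names and effect sizes are invented, and the survey weighting and cluster adjustment used in the study are omitted):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(7)
        n = 2000
        df = pd.DataFrame({
            "rural": rng.integers(0, 2, n),
            "no_education": rng.integers(0, 2, n),
            "long_distance": rng.integers(0, 2, n),
        })
        logit_p = -1.0 + 0.8 * df["rural"] + 0.6 * df["no_education"] + 0.4 * df["long_distance"]
        df["inadequate_anc"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

        fit = smf.logit("inadequate_anc ~ rural + no_education + long_distance", data=df).fit(disp=0)
        print(np.exp(fit.params).round(2))            # adjusted odds ratios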

  5. An elementary components of variance analysis for multi-center quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1977-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality control (QC) studies. Statistical analysis methods for such studies using an 'analysis of variance with components of variance estimation' are discussed. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Components of variance analysis also provides an intelligent way to combine the results of several QC samples run at different levels, from which we may decide if any component varies systematically with dose level; if not, pooling of estimates becomes possible. We consider several possible relationships of standard deviation to the laboratory mean. Each relationship corresponds to an underlying statistical model and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine if an appropriate model has been chosen, although the exact functional relationship of standard deviation to lab mean may be difficult to establish. Appropriate graphical display of the data aids in visual understanding of the data. A plot of the ranked standard deviation vs. ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean. (orig.) [de
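
    The balanced one-way case can be written out directly; the sketch below estimates between-laboratory and within-laboratory variance components for a single QC sample by the ANOVA method of moments (laboratory and assay counts, and the simulated values, are illustrative assumptions):

        import numpy as np

        rng = np.random.default_rng(8)
        n_labs, n_assays = 6, 5
        lab_effect = rng.normal(scale=2.0, size=(n_labs, 1))          # simulated between-lab variation
        data = 50 + lab_effect + rng.normal(scale=1.0, size=(n_labs, n_assays))

        lab_means = data.mean(axis=1)
        ms_between = n_assays * lab_means.var(ddof=1)                 # between-laboratory mean square
        ms_within = ((data - lab_means[:, None]) ** 2).sum() / (n_labs * (n_assays - 1))

        var_within = ms_within
        var_between = max((ms_between - ms_within) / n_assays, 0.0)   # method-of-moments estimate
        print("within-lab SD:", round(var_within ** 0.5, 2),
              "between-lab SD:", round(var_between ** 0.5, 2))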

  6. Time-domain ultra-wideband radar, sensor and components theory, analysis and design

    CERN Document Server

    Nguyen, Cam

    2014-01-01

    This book presents the theory, analysis, and design of ultra-wideband (UWB) radar and sensor systems (in short, UWB systems) and their components. UWB systems find numerous applications in the military, security, civilian, commercial and medicine fields. This book addresses five main topics of UWB systems: System Analysis, Transmitter Design, Receiver Design, Antenna Design and System Integration and Test. The developments of a practical UWB system and its components using microwave integrated circuits, as well as various measurements, are included in detail to demonstrate the theory, analysis and design technique. Essentially, this book will enable the reader to design their own UWB systems and components. In the System Analysis chapter, the UWB principle of operation as well as the power budget analysis and range resolution analysis are presented. In the UWB Transmitter Design chapter, the design, fabrication and measurement of impulse and monocycle pulse generators are covered. The UWB Receiver Design cha...

  7. Study on determination of durability analysis process and fatigue damage parameter for rubber component

    International Nuclear Information System (INIS)

    Moon, Seong In; Cho, Il Je; Woo, Chang Su; Kim, Wan Doo

    2011-01-01

    Rubber components, which have been widely used in the automotive industry as anti-vibration components for many years, are subjected to fluctuating loads, often failing due to the nucleation and growth of defects or cracks. To prevent such failures, it is necessary to understand the fatigue failure mechanism of rubber materials and to evaluate the fatigue life of rubber components. The objective of this study is to develop a durability analysis process for vulcanized rubber components that can predict fatigue life at the initial product design step. A method for determining the nonlinear material constants for FE analysis was proposed. Also, to investigate the applicability of commonly used damage parameters, fatigue tests and corresponding finite element analyses were carried out, and normal and shear strain was proposed as the fatigue damage parameter for rubber components. Fatigue analysis of automotive rubber components was performed and the durability analysis process was reviewed.

  8. Factor-cluster analysis and enrichment study of Mangrove sediments - An example from Mengkabong, Sabah

    International Nuclear Information System (INIS)

    Praveena, S.M.; Ahmed, A.; Radojevic, M.; Mohd Harun Abdullah; Aris, A.Z.

    2007-01-01

    This paper examines the tidal effects in the sediment of the Mengkabong mangrove forest, Sabah. Generally, all the studied parameters showed higher values at high tide than at low tide. Factor-cluster analyses were adopted to allow the identification of controlling factors at high and low tides. Factor analysis extracted six controlling factors at high tide and seven at low tide. Cluster analysis extracted two distinct clusters at high and low tides. The study showed that the combined factor-cluster analysis is a useful tool to single out the controlling factors at high and low tides. This provides a basis for describing the tidal effects in the mangrove sediment. The salinity and electrical conductivity clusters, as well as the component loadings at high and low tide, reflect the tidal process, in which the high contribution of seawater to the mangrove sediments controls the sediment chemistry. The geoaccumulation index (Igeo) values suggest that the mangrove sediments have background concentrations of Al, Cu, Fe and Zn and are unpolluted with respect to Pb. (author)
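
    For reference, the geoaccumulation index used for that assessment is Igeo = log2(Cn / (1.5 * Bn)), where Cn is the measured concentration and Bn the geochemical background; the sketch below computes it with purely illustrative concentrations and background values:

        import numpy as np

        measured = {"Al": 12000.0, "Cu": 18.0, "Fe": 25000.0, "Zn": 60.0, "Pb": 14.0}     # mg/kg, invented values
        background = {"Al": 80000.0, "Cu": 45.0, "Fe": 47000.0, "Zn": 95.0, "Pb": 20.0}   # illustrative background

        for metal, cn in measured.items():
            igeo = np.log2(cn / (1.5 * background[metal]))
            # Igeo <= 0 indicates background / unpolluted sediment; higher classes indicate pollution
            print(f"{metal}: Igeo = {igeo:.2f}")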

  9. Fatigue Analysis of Tubesheet/Shell Juncture Applying the Mitigation Factor for Over-conservatism

    International Nuclear Information System (INIS)

    Kang, Deog Ji; Kim, Kyu Hyoung; Lee, Jae Gon

    2009-01-01

    If environmental fatigue requirements are applied to the primary components of a nuclear power plant, to which the present ASME Code fatigue curves are applied, some locations with a high CUF (Cumulative Usage Factor) are anticipated not to meet the code criteria. The application of environmental fatigue damage is still particularly controversial for plants with 60-year design lives. Therefore, it is necessary to develop a detailed fatigue analysis procedure to identify the conservatisms in the procedure and to lower the cumulative usage factor. Several factors are being considered to mitigate the conservatism, such as three-dimensional finite element modeling. In the present analysis, actual pressure transient data, instead of conservative maximum and minimum pressure data, were applied as one of the mitigation factors. Unlike in the general method, individual transient events were considered instead of grouped transient events. The tubesheet/shell juncture in the steam generator assembly is one of the weak locations and was therefore selected as a target for evaluating the mitigation factor in the present analysis.

  10. Chemistry of the aqueous medium - Determining factor of corrosion in carbon steel components of secondary circuit

    International Nuclear Information System (INIS)

    Radulescu, M.; Pirvan, I.; Dinu, A.; Velciu, L.

    2003-01-01

    The interplay of the chemistry of the aqueous medium and corrosion processes, followed by deposition and/or release of corrosion products, determines both the formation and growth of superficial films and the kinetics of ion release from materials into the aqueous medium. Material corrosion in the secondary circuit of an NPP can be minimized by choosing the materials of the components and by rigorous inspection of the chemistry of the aqueous agent. The chemical inspection helps to minimize: - the corrosion of the components immersed in feedwater and vapor and of Steam Generator components; - 'dirtying' of the systems, particularly of the surfaces involved in heat transfer; - the amount of insoluble chemical species resulting from corrosion and carried along the circuit; - the corrosion of secondary circuit components during revisions or outages. Among the chemical parameters of the fluids circulated in NPP tubing, the pH appears to play an important role. In CANDU reactors it must be kept within the range of 8.7 to 9.4 by treating the medium with volatile amines (morpholine and cyclohexylamine). A plot is presented giving the corrosion rate of carbon steels as a function of the pH of the medium. In addition, the oxygen concentration dissolved in the aqueous medium must be maintained below 5 μg per kg of water. Other factors determining the corrosion rates are also discussed. The paper gives the results of the experiments done with various materials, solutions and analysis methods.

  11. Principal Component Analysis of Working Memory Variables during Child and Adolescent Development.

    Science.gov (United States)

    Barriga-Paulino, Catarina I; Rodríguez-Martínez, Elena I; Rojas-Benjumea, María Ángeles; Gómez, Carlos M

    2016-10-03

    Correlation and Principal Component Analysis (PCA) of behavioral measures from two experimental tasks (Delayed Match-to-Sample and Oddball) and standard scores from a neuropsychological test battery (Working Memory Test Battery for Children) were performed on data from participants between 6 and 18 years old. In the correlation analysis and the PCA (retaining components with eigenvalues > 1), the scores of the first extracted component were significantly correlated (p < .05) to most behavioral measures, suggesting some commonalities in the processes of age-related change in the measured variables. The results suggest that this first component would be related to age but also to individual differences during the cognitive maturation process across the childhood and adolescence stages. The fourth component would represent the speed-accuracy trade-off phenomenon, as it presents loadings with different signs for reaction times and errors.

  12. ANALYSIS AND PARTICULARITIES OF EXTERNAL FACTORS IMPACT ON ECONOMICAL RESULTS OF STRATEGIC OBJECTS PLANNING DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    V. V. Gromov

    2015-01-01

    Full Text Available Summary. The relevance of the scientific problem described in the article lies in: determining changes in the economic performance and effectiveness of the sectoral components of the service sector under the effects of environmental factors, which allows them to reach their planned long-term economic performance; and management decision-making about structural and organizational changes and the implementation of investment projects for the renovation and modernization of fixed capital and the creation of technological, process and product innovations, which is directly connected with the analysis of the impact of such external factors as economic, socio-cultural, legal, political and innovation factors. The structure of the article is built around presenting the impact of specific groups of environmental factors on the competitiveness and economic performance of the sectoral components of the service sector within the framework of strategic planning, following a logical sequence of presentation and establishing the causal relationships and interactions among the factors and elements of the studied problems and objects. The particular features of the impact of external factors on the effectiveness of macro-economic entities and sectoral components of services concern the adequacy of the measures and strategies adopted to counter negative impacts on the economic development of the objects of strategic development. The features of status changes and of the influence of internal factors on local and sectoral socio-economic systems dictate requirements for part of the available resources and for the level of efficiency in the use of labor resources and of fixed and current assets. The author's contribution to this topic is a comprehensive analysis of the impact of the main groups of external factors on the economic activities of service sector development, and the identification of the features of the impact of internal factors on the economic and innovative development of strategic planning objects.

  13. Principal component analysis of normalized full spectrum mass spectrometry data in multiMS-toolbox: An effective tool to identify important factors for classification of different metabolic patterns and bacterial strains.

    Science.gov (United States)

    Cejnar, Pavel; Kuckova, Stepanka; Prochazka, Ales; Karamonova, Ludmila; Svobodova, Barbora

    2018-06-15

    Explorative statistical analysis of mass spectrometry data is still a time-consuming step. We analyzed critical factors for the application of principal component analysis (PCA) in mass spectrometry and focused on two whole-spectrum-based normalization techniques and their application in the analysis of registered peak data and, in comparison, in full spectrum data analysis. We used this technique to identify different metabolic patterns in the bacterial culture of Cronobacter sakazakii, an important foodborne pathogen. Two software utilities, the ms-alone, a python-based utility for mass spectrometry data preprocessing and peak extraction, and the multiMS-toolbox, an R software tool for advanced peak registration and detailed explorative statistical analysis, were implemented. The bacterial culture of Cronobacter sakazakii was cultivated on Enterobacter sakazakii Isolation Agar, Blood Agar Base and Tryptone Soya Agar for 24 h and 48 h and applied by the smear method on an Autoflex speed MALDI-TOF mass spectrometer. For the three tested cultivation media, only two different metabolic patterns of Cronobacter sakazakii were identified using PCA applied to data normalized by the two normalization techniques. Results from matched peak data and subsequent detailed full spectrum analysis identified only two different metabolic patterns: cultivation on Enterobacter sakazakii Isolation Agar showed significant differences from cultivation on the other two tested media. The metabolic patterns for all tested cultivation media also showed a dependence on cultivation time. Both whole-spectrum-based normalization techniques, together with the full spectrum PCA, allow identification of important discriminative factors in experiments with several variable condition factors, avoiding any problems with improper identification of peaks or emphasis on below-threshold peak data. The amount of processed data remains manageable. Both implemented software utilities are available
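
    A minimal sketch of one whole-spectrum workflow of this kind: each spectrum is normalized to its total ion current and PCA is run on the full binned spectra (synthetic intensities; the actual ms-alone/multiMS-toolbox preprocessing is more elaborate):

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(9)
        spectra = rng.gamma(shape=1.5, size=(30, 5000))      # 30 spectra x 5000 m/z bins (placeholder)

        tic = spectra.sum(axis=1, keepdims=True)             # total ion current per spectrum
        normalized = spectra / tic

        pca = PCA(n_components=3).fit(normalized)
        scores = pca.transform(normalized)                   # score plots are inspected for group separation
        print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))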

  14. Morphological evaluation of common bean diversity in Bosnia and Herzegovina using the discriminant analysis of principal components (DAPC multivariate method

    Directory of Open Access Journals (Sweden)

    Grahić Jasmin

    2013-01-01

    Full Text Available In order to analyze the morphological characteristics of locally cultivated common bean landraces from Bosnia and Herzegovina (B&H), thirteen quantitative and qualitative traits of 40 P. vulgaris accessions, collected from four geographical regions (Northwest B&H, Northeast B&H, Central B&H and Sarajevo) and maintained at the Gene bank of the Faculty of Agriculture and Food Sciences in Sarajevo, were examined. Principal component analysis (PCA) showed that the proportion of variance retained in the first two principal components was 54.35%. The first principal component had high contributing factor loadings from seed width, seed height and seed weight, whilst the second principal component had high contributing factor loadings from seeds per pod and pod length. The PCA plot, based on the first two principal components, displayed a high level of variability among the analyzed material. The discriminant analysis of principal components (DAPC) created 3 discriminant functions (DF), whereby the first two discriminant functions accounted for 90.4% of the variance retained. Based on the retained DFs, DAPC provided group membership probabilities which showed that 70% of the accessions examined were correctly classified between the geographically defined groups. Based on the taxonomic distance, the 40 common bean accessions analyzed in this study formed two major clusters, whereas two accessions, Acc304 and Acc307, did not group with either of them. Acc360 and Acc362, as well as Acc324 and Acc371, displayed a high level of similarity and are probably the same landrace. The present diversity of Bosnia and Herzegovina's common bean landraces could be useful in future breeding programs.
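
    The generic DAPC scheme (a PCA step followed by discriminant analysis on the retained components) can be sketched as below; the study itself used a dedicated DAPC implementation, so the trait matrix, region labels and the choice of five retained PCs are illustrative assumptions:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(10)
        traits = rng.normal(size=(40, 13))                   # 40 accessions x 13 traits (placeholder)
        regions = rng.integers(0, 4, size=40)                # 4 geographical groups (placeholder labels)

        pcs = PCA(n_components=5).fit_transform(traits)      # dimension reduction first
        lda = LinearDiscriminantAnalysis().fit(pcs, regions)
        membership = lda.predict_proba(pcs)                  # group membership probabilities per accession
        correct = (lda.predict(pcs) == regions).mean() * 100
        print(f"correctly classified: {correct:.1f}%")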

  15. Scalable Robust Principal Component Analysis Using Grassmann Averages

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi

    2016-01-01

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortu...

  16. Analysis and Classification of Acoustic Emission Signals During Wood Drying Using the Principal Component Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Ho Yang [Korea Research Institute of Standards and Science, Daejeon (Korea, Republic of); Kim, Ki Bok [Chungnam National University, Daejeon (Korea, Republic of)

    2003-06-15

    In this study, acoustic emission (AE) signals due to surface cracking and moisture movement in flat-sawn boards of oak (Quercus variabilis) during drying under ambient conditions were analyzed and classified using principal component analysis. The AE signals corresponding to surface cracking showed higher peak amplitude and peak frequency, and shorter rise time, than those corresponding to moisture movement. To reduce the multicollinearity among AE features and to extract the significant AE parameters, correlation analysis was performed. Over 99% of the variance of the AE parameters could be accounted for by the first to fourth principal components. The classification feasibility and success rate were investigated for two statistical classifiers, one having six independent variables (AE parameters) and one having six principal components. As a result, the statistical classifier using AE parameters showed a success rate of 70.0%, while the statistical classifier using principal components showed a success rate of 87.5%, which was considerably higher than that of the classifier using AE parameters.

  17. Analysis and Classification of Acoustic Emission Signals During Wood Drying Using the Principal Component Analysis

    International Nuclear Information System (INIS)

    Kang, Ho Yang; Kim, Ki Bok

    2003-01-01

    In this study, acoustic emission (AE) signals due to surface cracking and moisture movement in flat-sawn boards of oak (Quercus variabilis) during drying under ambient conditions were analyzed and classified using principal component analysis. The AE signals corresponding to surface cracking showed higher peak amplitude and peak frequency, and shorter rise time, than those corresponding to moisture movement. To reduce the multicollinearity among AE features and to extract the significant AE parameters, correlation analysis was performed. Over 99% of the variance of the AE parameters could be accounted for by the first to fourth principal components. The classification feasibility and success rate were investigated for two statistical classifiers, one having six independent variables (AE parameters) and one having six principal components. As a result, the statistical classifier using AE parameters showed a success rate of 70.0%, while the statistical classifier using principal components showed a success rate of 87.5%, which was considerably higher than that of the classifier using AE parameters.

  18. Analysis of radiation and chemical factors which define the ecological situation of environment

    International Nuclear Information System (INIS)

    Trofimenko, A.P.

    1996-01-01

    A new method for the statistical analysis of large information sets is proposed. It makes it possible to define the main directions of work in a given field in the world or in a particular country, to find the most important problems under investigation and to evaluate the role of each of them quantitatively, as well as to study the dynamics of the development of the work over time, the research methods used, the centres in which this research is most developed, the authors of publications, etc. The statistical analysis may be supplemented with a subject analysis of selected publications. As an example of the use of this method, the main factors influencing different environmental components and public health are presented, and the role of radiation and chemical factors is evaluated. 18 refs., 6 tab

  19. Nonparametric inference in nonlinear principal components analysis : exploration and beyond

    NARCIS (Netherlands)

    Linting, Mariëlle

    2007-01-01

    In the social and behavioral sciences, data sets often do not meet the assumptions of traditional analysis methods. Therefore, nonlinear alternatives to traditional methods have been developed. This thesis starts with a didactic discussion of nonlinear principal components analysis (NLPCA),

  20. Development of safety factors to be used for evaluation of cracked nuclear components

    International Nuclear Information System (INIS)

    Brickstad, B.; Bergman, M.

    1996-10-01

    A modified concept for safety evaluation is introduced which separately accounts for the failure mechanisms of fracture and plastic collapse. For application to nuclear components, a set of safety factors is also proposed that retains the safety margins expressed in ASME, Sections III and XI. By performing comparative studies of the acceptance levels for surface cracks in pipes and a pressure vessel, it is shown that some of the anomalies connected with the old safety procedures are removed. It is the authors' belief that the outlined safety evaluation procedure has the capability of treating cracks in a consistent way and that the procedure, together with the proposed safety factors, fulfills the basic safety requirements for nuclear components. Hopefully, it will be possible in the near future to develop a probabilistic safety assessment procedure in Sweden which enables a systematic treatment of the uncertainties in the data involved. 14 refs

  1. Automatic scatter detection in fluorescence landscapes by means of spherical principal component analysis

    DEFF Research Database (Denmark)

    Kotwa, Ewelina Katarzyna; Jørgensen, Bo Munk; Brockhoff, Per B.

    2013-01-01

    In this paper, we introduce a new method, based on spherical principal component analysis (S‐PCA), for the identification of Rayleigh and Raman scatters in fluorescence excitation–emission data. These scatters should be found and eliminated as a prestep before fitting parallel factor analysis models to the data, in order to avoid model degeneracies. The work is inspired and based on a previous research, where scatter removal was automatic (based on a robust version of PCA called ROBPCA) and required no visual data inspection but appeared to be computationally intensive. To overcome this drawback, we implement the fast S‐PCA in the scatter identification routine. Moreover, an additional pattern interpolation step that complements the method, based on robust regression, will be applied. In this way, substantial time savings are gained, and the user's engagement is restricted to a minimum.

  2. Trajectory modeling of gestational weight: A functional principal component analysis approach.

    Directory of Open Access Journals (Sweden)

    Menglu Che

    Full Text Available Suboptimal gestational weight gain (GWG), which is linked to increased risk of adverse outcomes for a pregnant woman and her infant, is prevalent. In the study of a large cohort of Canadian pregnant women, our goals are to estimate the individual weight growth trajectory using sparsely collected bodyweight data, and to identify the factors affecting weight change during pregnancy, such as prepregnancy body mass index (BMI), dietary intakes and physical activity. The first goal was achieved through functional principal component analysis (FPCA) by conditional expectation. For the second goal, we used linear regression with the total weight gain as the response variable. The trajectory modeling through FPCA had a significantly smaller root mean square error (RMSE) and better adaptability than the classic nonlinear mixed-effects models, demonstrating a novel tool that can be used to facilitate real-time monitoring of and interventions in GWG. Our regression analysis showed that prepregnancy BMI had high predictive value for weight changes during pregnancy, which agrees with the published weight gain guideline.
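
    The following is a minimal dense-grid FPCA sketch in NumPy, included only to illustrate the idea of extracting principal modes of weight-gain trajectories; it is not the conditional-expectation (sparse-data) estimator used in the study, and the simulated curves are assumptions.

```python
# Dense-grid FPCA sketch (assumes every trajectory is observed on a common grid,
# an assumption made purely for illustration; the paper handles sparse data).
import numpy as np

rng = np.random.default_rng(1)
weeks = np.linspace(8, 40, 33)                 # gestational weeks (common grid)
n = 150
# Synthetic weight-gain curves: baseline + individual slope + noise
base = 60 + rng.normal(0, 8, n)[:, None]
slope = rng.normal(0.35, 0.1, n)[:, None]
curves = base + slope * (weeks - 8) + rng.normal(0, 0.5, (n, len(weeks)))

mean_curve = curves.mean(axis=0)
centered = curves - mean_curve
cov = centered.T @ centered / (n - 1)          # sample covariance surface
evals, evecs = np.linalg.eigh(cov)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

scores = centered @ evecs[:, :2]               # first two FPC scores per woman
explained = evals[:2] / evals.sum()
print("variance explained by FPC1, FPC2:", np.round(explained, 3))
```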

  3. Meta-analysis Number of Plants Drugs Used by Characteristics Socioeconomic Factors, Environmental and Geographic

    Directory of Open Access Journals (Sweden)

    Febiola Diah Pratiwi

    2017-09-01

    Full Text Available Ethnobotany is the study of the relationship between people and their use of plants. The use of plants by people is influenced by several factors, such as social, cultural, socioeconomic, and geographic factors. Most ethnic groups in Indonesia depend heavily on medicinal plants for survival. However, the factors that influence the use of medicinal plants by people in Indonesia have not been studied, so research is needed to optimize the use of medicinal plants for sustainable benefit. The purpose of this study is to analyze how the number of medicinal plant species used is influenced by socio-economic, environmental, and geographic factors using principal component analysis, and to analyze the patterns of medicinal plant use. The results showed that the economic and infrastructure components (access to electricity, educational facilities, income level, health facilities, distance from the highway, remoteness, the fastest travel time to the road, and the number of people graduating from elementary school) affect the number of medicinal plant species used. Based on the literature study and field observations, in addition to being used as medicine, the plants are used for food, building materials, ornamental plants, ceremonies, wood, wicker and crafts, coloring agents, animal feed, aromatic ingredients, and pesticides. The patterns of use in each region or village differ, influenced by remoteness and by differences in social, economic, environmental, and geographic conditions. Keywords: ethnobotany, medicinal plants, principal component analysis

  4. Independent Component Analysis in Multimedia Modeling

    DEFF Research Database (Denmark)

    Larsen, Jan

    2003-01-01

    Modeling of multimedia and multimodal data becomes increasingly important with the digitalization of the world. The objective of this paper is to demonstrate the potential of independent component analysis and blind source separation methods for modeling and understanding of multimedia data, which largely refers to text, images/video, audio and combinations of such data. We review a number of applications within single and combined media with the hope that this might provide inspiration for further research in this area. Finally, we provide a detailed presentation of our own recent work on modeling ...

  5. Probabilistic structural analysis methods for select space propulsion system components

    Science.gov (United States)

    Millwater, H. R.; Cruse, T. A.

    1989-01-01

    The Probabilistic Structural Analysis Methods (PSAM) project developed at the Southwest Research Institute integrates state-of-the-art structural analysis techniques with probability theory for the design and analysis of complex large-scale engineering structures. An advanced efficient software system (NESSUS) capable of performing complex probabilistic analysis has been developed. NESSUS contains a number of software components to perform probabilistic analysis of structures. These components include: an expert system, a probabilistic finite element code, a probabilistic boundary element code and a fast probability integrator. The NESSUS software system is shown. An expert system is included to capture and utilize PSAM knowledge and experience. NESSUS/EXPERT is an interactive menu-driven expert system that provides information to assist in the use of the probabilistic finite element code NESSUS/FEM and the fast probability integrator (FPI). The expert system menu structure is summarized. The NESSUS system contains a state-of-the-art nonlinear probabilistic finite element code, NESSUS/FEM, to determine the structural response and sensitivities. A broad range of analysis capabilities and an extensive element library is present.

  6. Principal component and spatial correlation analysis of spectroscopic-imaging data in scanning probe microscopy

    International Nuclear Information System (INIS)

    Jesse, Stephen; Kalinin, Sergei V

    2009-01-01

    An approach for the analysis of multi-dimensional, spectroscopic-imaging data based on principal component analysis (PCA) is explored. PCA selects and ranks relevant response components based on variance within the data. It is shown that for examples with small relative variations between spectra, the first few PCA components closely coincide with results obtained using model fitting, and this is achieved at rates approximately four orders of magnitude faster. For cases with strong response variations, PCA allows an effective approach to rapidly process, de-noise, and compress data. The prospects for PCA combined with correlation function analysis of component maps as a universal tool for data analysis and representation in microscopy are discussed.
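
    A compact sketch of the unfold-and-decompose step described above: a synthetic spectroscopic-imaging cube (an assumption, not the authors' data) is reshaped so that each pixel contributes one spectrum, PCA ranks the response components by variance, and the scores are folded back into component maps.

```python
# Unfold a (ny, nx, n_spectrum) cube to (pixels x spectrum), run PCA, and fold
# the scores back into spatial component maps. Cube is synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
ny, nx, ns = 32, 32, 128
energy = np.linspace(0, 1, ns)
peak1 = np.exp(-((energy - 0.3) / 0.05) ** 2)    # two latent spectral responses
peak2 = np.exp(-((energy - 0.7) / 0.08) ** 2)
map1 = rng.random((ny, nx))
map2 = 1.0 - map1
cube = map1[..., None] * peak1 + map2[..., None] * peak2
cube += rng.normal(0, 0.02, cube.shape)          # measurement noise

X = cube.reshape(ny * nx, ns)                    # unfold: one spectrum per pixel
pca = PCA(n_components=5).fit(X)
scores = pca.transform(X)
component_maps = scores.reshape(ny, nx, -1)      # spatial map of each component
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
```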

  7. Probabilistic Structural Analysis Methods for select space propulsion system components (PSAM). Volume 2: Literature surveys of critical Space Shuttle main engine components

    Science.gov (United States)

    Rajagopal, K. R.

    1992-01-01

    The technical effort and computer code development are summarized. Several formulations for Probabilistic Finite Element Analysis (PFEA) are described, with emphasis on the selected formulation. The strategies being implemented in the first-version computer code to perform linear, elastic PFEA are described. The results of a series of select Space Shuttle Main Engine (SSME) component surveys are presented. These results identify the critical components and provide the information necessary for probabilistic structural analysis. Volume 2 is a summary of critical SSME components.

  8. The role of damage analysis in the assessment of service-exposed components

    International Nuclear Information System (INIS)

    Bendick, W.; Muesch, H.; Weber, H.

    1987-01-01

    Components in power stations are subjected to service conditions under which creep processes take place, limiting the component's lifetime through material exhaustion. To ensure safe and economic plant operation it is necessary to obtain information about the degree of exhaustion of single components as well as of the whole plant. A comprehensive lifetime assessment requires complete knowledge of the service parameters, the component's deformation behavior, and the change in material properties caused by long-time exposure to high service temperatures. A basis of evaluation is given by: 1) determination of material exhaustion by calculation, 2) investigation of the material properties, and 3) damage analysis. The purpose of this report is to show the role which damage analysis can play in the assessment of service-exposed components. As an example, the test results of a damaged pipe bend will be discussed. (orig./MM)

  9. Preliminary factor analysis of the O’Kelly Women Beliefs Scale in a US sample

    Directory of Open Access Journals (Sweden)

    Arturo Heman Contreras

    2012-06-01

    Full Text Available Using a Rational Emotive Behavior Therapy framework, the O’Kelly Women Beliefs Scale (O’Kelly, in press) was originally constructed in Australia to measure sex-role beliefs women may develop through sex-role stereotyping. Factor analysis of the 92 original items showed that 64 items loaded onto a single component that accounted for 18.2% of the variance in a sample of 974 Australian women. The present exploratory study examined the psychometric properties of the OWBS in a sample of 202 women born and living in the US. A varimax rotation with a cutoff eigenvalue of 3 showed that 37 items loaded onto 3 components which accounted for 58.48% of the variance. The items were subsequently grouped into two factors: Irrationality, with a total of 27 items, created by merging components 1 and 3 (Pearson’s r = 0.8 between them), and Rationality, with the 10 items from component 2. Analyses indicated a Cronbach’s alpha of 0.91 for Factor 1 and a Cronbach’s alpha of 0.74 for Factor 2. Results indicate that this version of the instrument may be used to evaluate both the rational and irrational content of the sex-role beliefs of women born in the US.

  10. Robust LOD scores for variance component-based linkage analysis.

    Science.gov (United States)

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  11. Analysis of spiral components in 16 galaxies

    International Nuclear Information System (INIS)

    Considere, S.; Athanassoula, E.

    1988-01-01

    A Fourier analysis of the intensity distributions in the planes of 16 spiral galaxies of morphological types from 1 to 7 is performed. The galaxies processed are NGC 300, 598, 628, 2403, 2841, 3031, 3198, 3344, 5033, 5055, 5194, 5247, 6946, 7096, 7217, and 7331. The method, mathematically based upon a decomposition of a distribution into a superposition of individual logarithmic spiral components, is first used to determine for each galaxy the position angle PA and the inclination ω of the galaxy plane onto the sky plane. Our results, which are in good agreement with those obtained from the usual methods in the literature, are discussed. The decomposition of the deprojected galaxies into individual spiral components reveals that the two-armed component is everywhere dominant. Our pitch angles are then compared with those previously published, and their quality is checked by drawing each individual logarithmic spiral on the corresponding deprojected galaxy image. Finally, the surface intensities for the angular periodicities of interest are calculated. A few of the most important ones are used to construct a composite image that well represents the main spiral features observed in the deprojected galaxies.

  12. An analysis of few-body cross sections in pp and πp interactions in terms of the two-component picture

    International Nuclear Information System (INIS)

    Karimaeki, V.

    1975-01-01

    The energy behaviour of the total cross sections of exclusive channels with one, two or three produced pions has been studied in pp and πp interactions. Two components, interpreted as diffractive and non-diffractive, have been fitted to the cross section data, assuming an asymptotic power-law dependence on psub(lab) for both. Isotopic spin factors were used as constraints to fit different charge channels simultaneously, as well as for determining diffractive cross sections for non-observable few-body channels. The diffractive component for fixed multiplicity is found to decrease as psub(lab)sup(-0.16+-0.04). Results are compared with the predictions of the factorization and semilocal factorization hypotheses. The total diffractive cross sections derived by the analysis are 5.1+-0.6 mb in pp and 2.2+-0.3 mb in πp interactions at psub(lab)=10 GeV/c. (author)

  13. Method for exploiting bias in factor analysis using constrained alternating least squares algorithms

    Science.gov (United States)

    Keenan, Michael R.

    2008-12-30

    Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.
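
    To make the idea concrete, here is a generic non-negativity-constrained alternating least squares sketch on synthetic overlapping spectra; it illustrates the class of constrained factorizations discussed in the record, not Keenan's biased ALS algorithm itself, and all data below are made up.

```python
# Generic non-negativity-constrained ALS: D (m x n) ~ C @ S.T with C, S >= 0.
# Not the patented biased ALS; shown only to illustrate constrained factor analysis.
import numpy as np

def constrained_als(D, k, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    m, n = D.shape
    C = rng.random((m, k))
    S = rng.random((n, k))
    for _ in range(n_iter):
        # Solve for S with C fixed, then clip to enforce non-negativity
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0.0, None)
        # Solve for C with S fixed, then clip
        C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0.0, None)
    return C, S

# Two overlapping (collinear) spectral components mixed at random concentrations
rng = np.random.default_rng(3)
x = np.linspace(0, 1, 300)
spectra = np.vstack([np.exp(-((x - 0.45) / 0.1) ** 2),
                     np.exp(-((x - 0.55) / 0.1) ** 2)])
conc = rng.random((40, 2))
D = conc @ spectra + rng.normal(0, 0.01, (40, 300))
C, S = constrained_als(D, k=2)
print("residual norm:", np.linalg.norm(D - C @ S.T))
```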

  14. Improved application of independent component analysis to functional magnetic resonance imaging study via linear projection techniques.

    Science.gov (United States)

    Long, Zhiying; Chen, Kewei; Wu, Xia; Reiman, Eric; Peng, Danling; Yao, Li

    2009-02-01

    Spatial independent component analysis (sICA) has been widely used to analyze functional magnetic resonance imaging (fMRI) data. The well-accepted implicit assumption is the spatial statistical independence of the intrinsic sources identified by sICA, which makes sICA difficult to apply to data containing interdependent sources and confounding factors. Such interdependency can arise, for instance, from fMRI studies investigating two tasks in a single session. In this study, we introduced a linear projection approach and considered its utilization as a tool to separate task-related components from two-task fMRI data. The robustness and feasibility of the method are substantiated through simulations on computer-generated data and real resting-state fMRI data. Both the simulated and the real two-task fMRI experiments demonstrated that sICA in combination with the projection method succeeded in separating spatially dependent components and had better detection power than a purely model-based method when estimating the activation induced by each task as well as by both tasks.
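
    The sketch below runs plain spatial ICA with scikit-learn's FastICA on a toy two-source data set, simply to show what "spatial maps plus time courses" means in practice; the paper's linear projection step is not implemented here, and the simulated data are an assumption.

```python
# Spatial ICA on toy "fMRI" data: voxels are treated as samples so the recovered
# independent components are spatial maps; their mixing weights are time courses.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
n_time, n_vox = 120, 2000
# Two sparse spatial sources and their time courses
s1 = (rng.random(n_vox) > 0.97).astype(float)
s2 = (rng.random(n_vox) > 0.97).astype(float)
t1 = np.sin(np.linspace(0, 12 * np.pi, n_time))
t2 = np.sign(np.sin(np.linspace(0, 5 * np.pi, n_time)))
data = np.outer(t1, s1) + np.outer(t2, s2) + rng.normal(0, 0.1, (n_time, n_vox))

ica = FastICA(n_components=2, random_state=0)
maps = ica.fit_transform(data.T)        # (n_vox, 2) spatial component maps
time_courses = ica.mixing_              # (n_time, 2) associated time courses
print(maps.shape, time_courses.shape)
```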

  15. EPR spectrum deconvolution and dose assessment of fossil tooth enamel using maximum likelihood common factor analysis

    International Nuclear Information System (INIS)

    Vanhaelewyn, G.; Callens, F.; Gruen, R.

    2000-01-01

    In order to determine the components which give rise to the EPR spectrum around g = 2 we have applied Maximum Likelihood Common Factor Analysis (MLCFA) on the EPR spectra of enamel sample 1126 which has previously been analysed by continuous wave and pulsed EPR as well as EPR microscopy. MLCFA yielded agreeing results on three sets of X-band spectra and the following components were identified: an orthorhombic component attributed to CO2(-), an axial component CO3(3-), as well as four isotropic components, three of which could be attributed to SO2(-), a tumbling CO2(-) and a central line of a dimethyl radical. The X-band results were confirmed by analysis of Q-band spectra where three additional isotropic lines were found, however, these three components could not be attributed to known radicals. The orthorhombic component was used to establish dose response curves for the assessment of the past radiation dose, D(E). The results appear to be more reliable than those based on conventional peak-to-peak EPR intensity measurements or simple Gaussian deconvolution methods

  16. Combined approach based on principal component analysis and canonical discriminant analysis for investigating hyperspectral plant response

    Directory of Open Access Journals (Sweden)

    Anna Maria Stellacci

    2012-07-01

    Full Text Available Hyperspectral (HS) data represent an extremely powerful means for rapidly detecting crop stress and thus aiding in the rational management of natural resources in agriculture. However, the large volume of data poses a challenge for data processing and for extracting crucial information. Multivariate statistical techniques can play a key role in the analysis of HS data, as they may allow one both to eliminate redundant information and to identify synthetic indices which maximize differences among levels of stress. In this paper we propose an integrated approach, based on the combined use of Principal Component Analysis (PCA) and Canonical Discriminant Analysis (CDA), to investigate HS plant response and discriminate plant status. The approach was preliminarily evaluated on a data set collected on durum wheat plants grown under different nitrogen (N) stress levels. Hyperspectral measurements were performed at anthesis through a high-resolution field spectroradiometer, ASD FieldSpec HandHeld, covering the 325-1075 nm region. Reflectance data were first restricted to the interval 510-1000 nm and then divided into five bands of the electromagnetic spectrum [green: 510-580 nm; yellow: 581-630 nm; red: 631-690 nm; red-edge: 705-770 nm; near-infrared (NIR): 771-1000 nm]. PCA was applied to each spectral interval. CDA was performed on the extracted components to identify the factors maximizing the differences among plants fertilised with increasing N rates. Within the intervals of green, yellow and red, only the first principal component (PC) had an eigenvalue greater than 1 and explained more than 95% of the total variance; within the ranges of red-edge and NIR, the first two PCs had an eigenvalue higher than 1. Two canonical variables explained cumulatively more than 81% of the total variance, and the first was able to discriminate wheat plants fertilised differently, as confirmed also by the significant correlation with aboveground biomass and grain yield parameters. The combined
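
    As a hedged illustration of the PCA-then-discriminant-analysis idea (using linear discriminant analysis as a stand-in for CDA, and synthetic reflectance values rather than the ASD FieldSpec measurements), a per-band decomposition might look like this:

```python
# Extract the first principal components of each spectral band, then feed them
# to a linear discriminant analysis. Band limits follow the abstract; the
# reflectance data and their N-level dependence are synthetic assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(5)
wavelengths = np.arange(510, 1001)                 # 510-1000 nm, 1 nm steps
n_plants, n_levels = 60, 4                         # four N fertilisation rates
y = np.repeat(np.arange(n_levels), n_plants // n_levels)
# Synthetic reflectance: N level slightly shifts the red-edge/NIR region
refl = 0.3 + 0.001 * (wavelengths - 510) + 0.02 * y[:, None] * (wavelengths > 700)
refl = refl + rng.normal(0, 0.01, refl.shape)

bands = {"green": (510, 580), "yellow": (581, 630), "red": (631, 690),
         "red-edge": (705, 770), "NIR": (771, 1000)}
features = []
for lo, hi in bands.values():
    sel = (wavelengths >= lo) & (wavelengths <= hi)
    features.append(PCA(n_components=2).fit_transform(refl[:, sel]))
X = np.hstack(features)                            # PCs extracted band by band

cda = LinearDiscriminantAnalysis().fit(X, y)       # canonical variates
print("training accuracy:", cda.score(X, y))
```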

  17. Structural analysis of nuclear components

    International Nuclear Information System (INIS)

    Ikonen, K.; Hyppoenen, P.; Mikkola, T.; Noro, H.; Raiko, H.; Salminen, P.; Talja, H.

    1983-05-01

    The report describes the activities accomplished in the project 'Structural Analysis Project of Nuclear Power Plant Components' during the years 1974-1982 in the Nuclear Engineering Laboratory at the Technical Research Centre of Finland. The objective of the project has been to develop Finnish expertise in structural mechanics related to nuclear engineering. The report describes the starting point of the research work, the organization of the project and the research activities in various subareas. Further, the work done with computer codes is described, as well as the problems to which the developed expertise has been applied. Finally, the diploma works, publications and work reports, which are mainly in Finnish, are listed to give a view of the content of the project. (author)

  18. Dynamic analysis of the radiolysis of binary component system

    International Nuclear Information System (INIS)

    Katayama, M.; Trumbore, C.N.

    1975-01-01

    Dynamic analysis was performed on a variety of combinations of components in the radiolysis of binary systems, taking the hydrogen-producing reaction with hydrocarbon RH2 as an example. A definite rule could be established from this analysis, which is useful for revealing the reaction mechanism. The combinations were as follows: 1) both components A and B do not interact but serve only as diluents, 2) A is a diluent, and B is a radical captor, 3) both A and B are radical captors, 4-1) A is a diluent, and B decomposes after receiving the excitation energy of A, 4-2) A is a diluent, and B does not decompose after receiving the excitation energy of A, 5-1) A is a radical captor, and B decomposes after receiving the excitation energy of A, 5-2) A is a radical captor, and B does not decompose after receiving the excitation energy of A, 6-1) both A and B decompose after receiving the excitation energy of the partner component; and 6-2) both A and B do not decompose after receiving the excitation energy of the partner component. According to the dynamic analysis of the above nine combinations, it can be pointed out that if excitation transfer participates, phenomena apparently similar to radical capture appear. It is desirable to measure the yield of radicals experimentally in a system for which excitation transfer requires little consideration; an isotope-substituted mixture system is conceived as one such system. This analytical method was applied to systems containing cyclopentanone, such as the cyclopentanone-cyclohexane system. (Iwakiri, K.)

  19. Application of SWAT99.2 to sensitivity analysis of water balance components in unique plots in a hilly region

    Directory of Open Access Journals (Sweden)

    Jun-feng Dai

    2017-07-01

    Full Text Available Although many sensitivity analyses using the soil and water assessment tool (SWAT) in a complex watershed have been conducted, little attention has been paid to the application potential of the model in unique plots. In addition, sensitivity analysis of percolation and evapotranspiration with SWAT has seldom been undertaken. In this study, SWAT99.2 was calibrated to simulate water balance components for unique plots in Southern China from 2000 to 2001, which included surface runoff, percolation, and evapotranspiration. Twenty-one parameters classified into four categories, including meteorological conditions, topographical characteristics, soil properties, and vegetation attributes, were used for sensitivity analysis through one-at-a-time (OAT) sampling to identify the factor that contributed most to the variance in water balance components. The results were shown to be different for different plots, with parameter sensitivity indices and ranks varying for different water balance components. Water balance components in the broad-leaved forest and natural grass plots were most sensitive to meteorological conditions, less sensitive to vegetation attributes and soil properties, and least sensitive to topographical characteristics. Compared to those in the natural grass plot, water balance components in the broad-leaved forest plot demonstrated higher sensitivity to the maximum stomatal conductance (GSI) and maximum leaf area index (BLAI).
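
    A minimal one-at-a-time (OAT) sensitivity sketch is shown below; the toy water-balance function is an assumption standing in for SWAT99.2, and while GSI and BLAI are taken from the record, SOL_Z and the functional form are hypothetical.

```python
# One-at-a-time (OAT) sensitivity: perturb each parameter around a cut center
# and record the relative change in a model output. Toy stand-in, not SWAT99.2.
import numpy as np

def toy_water_balance(params):
    """Crude stand-in: evapotranspiration driven by GSI, BLAI and soil depth."""
    gsi, blai, soil_depth = params["GSI"], params["BLAI"], params["SOL_Z"]
    return 400 * (1 - np.exp(-0.8 * gsi * blai)) + 0.05 * soil_depth

center = {"GSI": 0.005, "BLAI": 4.0, "SOL_Z": 1200.0}
base = toy_water_balance(center)

sensitivities = {}
for name in center:
    perturbed = dict(center)
    perturbed[name] = center[name] * 1.10          # +10% one-at-a-time change
    delta = toy_water_balance(perturbed) - base
    sensitivities[name] = delta / (base * 0.10)    # relative sensitivity index
print(sensitivities)
```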

  20. The Psychometric Assessment of Children with Learning Disabilities: An Index Derived from a Principal Components Analysis of the WISC-R.

    Science.gov (United States)

    Lawson, J. S.; Inglis, James

    1984-01-01

    A learning disability index (LDI) for the assessment of intellectual deficits on the Wechsler Intelligence Scale for Children-Revised (WISC-R) is described. The Factor II score coefficients derived from an unrotated principal components analysis of the WISC-R normative data, in combination with the individual's scaled scores, are used for this…

  1. Factor analysis

    CERN Document Server

    Gorsuch, Richard L

    2013-01-01

    Comprehensive and comprehensible, this classic covers the basic and advanced topics essential for using factor analysis as a scientific tool in psychology, education, sociology, and related areas. Emphasizing the usefulness of the techniques, it presents sufficient mathematical background for understanding and sufficient discussion of applications for effective use. This includes not only theory but also the empirical evaluations of the importance of mathematical distinctions for applied scientific analysis.

  2. APPLICATION OF PRINCIPAL COMPONENT ANALYSIS TO RELAXOGRAPHIC IMAGES

    International Nuclear Information System (INIS)

    STOYANOVA, R.S.; OCHS, M.F.; BROWN, T.R.; ROONEY, W.D.; LI, X.; LEE, J.H.; SPRINGER, C.S.

    1999-01-01

    Standard analysis methods for processing inversion recovery MR images have traditionally used single-pixel techniques. In these techniques each pixel is independently fit to an exponential recovery, and spatial correlations in the data set are ignored. By analyzing the image as a complete dataset, improved error analysis and automatic segmentation can be achieved. Here, the authors apply principal component analysis (PCA) to a series of relaxographic images. This procedure decomposes the 3-dimensional data set into three separate images and corresponding recovery times. They interpret the 3 images as spatial representations of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) content

  3. Factor analysis of Wechsler Adult Intelligence Scale-Revised in developmentally disabled persons.

    Science.gov (United States)

    Di Nuovo, Santo F; Buono, Serafino

    2006-12-01

    The results of previous studies on the factorial structure of the Wechsler Intelligence Scales are somewhat inconsistent across normal and pathological samples. To study specific clinical groups, such as developmentally disabled persons, it is useful to examine the factor structure in appropriate samples. A factor analysis was carried out using the principal component method and Varimax orthogonal rotation on the Wechsler Adult Intelligence Scale-Revised (WAIS-R) in a sample of 203 developmentally disabled persons with a mean age of 25 years 4 months. Developmental disability ranged from mild to moderate. Partially contrasting with previous studies on normal samples, the results indicated a two-factor solution. Wechsler's traditional Verbal and Performance scales seem to be more appropriate for this sample than the alternative three-factor solution.
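
    For readers unfamiliar with the procedure, the following NumPy-only sketch extracts principal-component loadings from a correlation matrix and applies a Kaiser varimax rotation; the simulated 11-variable data set is an assumption and does not reproduce WAIS-R subtests.

```python
# Principal-component factor extraction with varimax rotation, NumPy only.
import numpy as np

def varimax(L, gamma=1.0, max_iter=100, tol=1e-6):
    """Kaiser varimax rotation of a loading matrix L (variables x factors)."""
    p, k = L.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - (gamma / p) * Lr @ np.diag((Lr ** 2).sum(axis=0))))
        R = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):
            break
        d = d_new
    return L @ R

rng = np.random.default_rng(6)
n = 300
f = rng.normal(size=(n, 2))                      # two latent abilities
load = np.zeros((11, 2))
load[:6, 0] = rng.uniform(0.6, 0.8, 6)           # "verbal-like" subtests
load[6:, 1] = rng.uniform(0.6, 0.8, 5)           # "performance-like" subtests
X = f @ load.T + rng.normal(0, 0.5, (n, 11))     # simulated subtest scores

R_corr = np.corrcoef(X, rowvar=False)
evals, evecs = np.linalg.eigh(R_corr)
order = np.argsort(evals)[::-1][:2]                   # retain two components
loadings = evecs[:, order] * np.sqrt(evals[order])    # PC loadings
print(np.round(varimax(loadings), 2))
```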

  4. [Sample preparation methods for chromatographic analysis of organic components in atmospheric particulate matter].

    Science.gov (United States)

    Hao, Liang; Wu, Dapeng; Guan, Yafeng

    2014-09-01

    The determination of the organic composition of atmospheric particulate matter (PM) is of great importance in understanding how PM affects human health, the environment, climate, and ecosystems. Organic components are also the scientific basis for emission source tracking, PM regulation and risk management. Therefore, the molecular characterization of the organic fraction of PM has become one of the priority research issues in the field of environmental analysis. Due to the extreme complexity of PM samples, chromatographic methods have been the methods of choice. The common procedure for the analysis of organic components in PM includes several steps: sample collection on fiber filters, sample preparation (transforming the sample into a form suitable for chromatographic analysis), and analysis by chromatographic methods. Among these steps, the sample preparation methods largely determine the throughput and the data quality. Solvent extraction methods followed by sample pretreatment (e.g. pre-separation, derivatization, pre-concentration) have long been used for PM sample analysis, while thermal desorption methods have mainly focused on the analysis of non-polar organic components in PM. In this paper, the sample preparation methods used prior to chromatographic analysis of organic components in PM are reviewed comprehensively, and the corresponding merits and limitations of each method are briefly discussed.

  5. Numerical analysis of magnetoelastic coupled buckling of fusion reactor components

    International Nuclear Information System (INIS)

    Demachi, K.; Yoshida, Y.; Miya, K.

    1994-01-01

    For a tokamak fusion reactor, one of the most important subjects is to establish a structural design in which the components can withstand the strong magnetic forces induced by plasma disruption. A number of magnetostructural analyses of fusion reactor components have been performed recently. However, in these studies the structural behavior was calculated based on small-deformation theory, in which nonlinearity is neglected, although it is known that some kinds of structures easily exhibit geometrical nonlinearity. In this paper, the deflection and the magnetoelastic buckling load of fusion reactor components during plasma disruption were calculated

  6. Demographic, socioeconomic, and behavioral factors affecting patterns of tooth decay in the permanent dentition: principal components and factor analyses.

    Science.gov (United States)

    Shaffer, John R; Polk, Deborah E; Feingold, Eleanor; Wang, Xiaojing; Cuenco, Karen T; Weeks, Daniel E; DeSensi, Rebecca S; Weyant, Robert J; Crout, Richard; McNeil, Daniel W; Marazita, Mary L

    2013-08-01

    Dental caries of the permanent dentition is a multifactorial disease resulting from the complex interplay of endogenous and environmental risk factors. The disease is not easily quantitated due to the innumerable possible combinations of carious lesions across individual tooth surfaces of the permanent dentition. Global measures of decay, such as the DMFS index (which was developed for surveillance applications), may not be optimal for studying the epidemiology of dental caries because they ignore the distinct patterns of decay across the dentition. We hypothesize that specific risk factors may manifest their effects on specific tooth surfaces leading to patterns of decay that can be identified and studied. In this study, we utilized two statistical methods of extracting patterns of decay from surface-level caries data to create novel phenotypes with which to study the risk factors affecting dental caries. Intra-oral dental examinations were performed on 1068 participants aged 18-75 years to assess dental caries. The 128 tooth surfaces of the permanent dentition were scored as carious or not and used as input for principal components analysis (PCA) and factor analysis (FA), two methods of identifying underlying patterns without a priori knowledge of the patterns. Demographic (age, sex, birth year, race/ethnicity, and educational attainment), anthropometric (height, body mass index, waist circumference), endogenous (saliva flow), and environmental (tooth brushing frequency, home water source, and home water fluoride) risk factors were tested for association with the caries patterns identified by PCA and FA, as well as DMFS, for comparison. The ten strongest patterns (i.e. those that explain the most variation in the data set) extracted by PCA and FA were considered. The three strongest patterns identified by PCA reflected (i) global extent of decay (i.e. comparable to DMFS index), (ii) pit and fissure surface caries and (iii) smooth surface caries, respectively. The

  7. A further component analysis for illicit drugs mixtures with THz-TDS

    Science.gov (United States)

    Xiong, Wei; Shen, Jingling; He, Ting; Pan, Rui

    2009-07-01

    A new method for the quantitative analysis of mixtures of illicit drugs with THz time-domain spectroscopy was proposed and verified experimentally. In the traditional method, fingerprints of all the pure chemical components are needed. In practice, since only the target components in a mixture and their absorption features are known, it is necessary and important to present a more practical technique for detection and identification. Our new method for the quantitative inspection of mixtures of illicit drugs is based on the derivative spectrum. With this method, the ratio of target components in a mixture can be obtained under the assumption that all target components in the mixture and their absorption features are known, while the unknown components are not needed. Methamphetamine and flour, an illicit drug and a common adulterant, were selected for our experiment. The experimental result verified the effectiveness of the method, which suggests that it could be an effective method for the quantitative identification of illicit drugs. This THz spectroscopy technique is of great significance for real-world quantitative analysis of illicit drugs. It could be an effective method in the field of security and pharmaceuticals inspection.
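
    One plausible (hedged) reading of the derivative-spectrum idea is sketched below: differentiating the mixture spectrum suppresses a slowly varying adulterant background, and regressing the result on the derivative of the known target spectrum yields the target's proportion. The Gaussian features and the exact estimator are assumptions, not the authors' procedure.

```python
# Quantification from derivative spectra (illustrative assumption, not the
# published algorithm): least-squares ratio of first-derivative spectra.
import numpy as np

freq = np.linspace(0.2, 2.6, 500)                       # THz frequency axis
target = np.exp(-((freq - 1.2) / 0.05) ** 2)            # sharp drug feature
background = 0.3 + 0.2 * freq                           # smooth adulterant baseline
true_fraction = 0.35
mixture = true_fraction * target + (1 - true_fraction) * background
mixture += np.random.default_rng(10).normal(0, 0.002, freq.size)

d_mix = np.gradient(mixture, freq)                      # first-derivative spectra
d_target = np.gradient(target, freq)
# The smooth background has a nearly constant derivative, so it contributes
# little to the projection onto the sharply varying target derivative.
estimate = np.dot(d_mix, d_target) / np.dot(d_target, d_target)
print("estimated target fraction:", round(float(estimate), 3))
```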

  8. Protein structure similarity from principle component correlation analysis

    Directory of Open Access Journals (Sweden)

    Chou James

    2006-01-01

    Full Text Available Abstract Background: Owing to the rapid expansion of protein structure databases in recent years, methods of structure comparison are becoming increasingly effective and important in revealing novel information on functional properties of proteins and their roles in the grand scheme of evolutionary biology. Currently, the structural similarity between two proteins is measured by the root-mean-square deviation (RMSD) in their best-superimposed atomic coordinates. RMSD is the golden rule of measuring structural similarity when the structures are nearly identical; it, however, fails to detect the higher-order topological similarities in proteins evolved into different shapes. We propose new algorithms for extracting geometrical invariants of proteins that can be effectively used to identify homologous protein structures or topologies in order to quantify both close and remote structural similarities. Results: We measure structural similarity between proteins by correlating the principle components of their secondary structure interaction matrix. In our approach, the Principle Component Correlation (PCC) analysis, a symmetric interaction matrix for a protein structure is constructed with relationship parameters between secondary elements that can take the form of distance, orientation, or other relevant structural invariants. When using a distance-based construction in the presence or absence of encoded N to C terminal sense, there are strong correlations between the principle components of interaction matrices of structurally or topologically similar proteins. Conclusion: The PCC method is extensively tested for protein structures that belong to the same topological class but are significantly different by the RMSD measure. The PCC analysis can also differentiate proteins having similar shapes but different topological arrangements. Additionally, we demonstrate that when using two independently defined interaction matrices, comparison of their maximum
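
    A rough sketch of the PCC idea follows: build a symmetric, distance-based interaction matrix for each protein, take its leading eigenvectors, and correlate them between two structures. The element "coordinates" below are random assumptions, not real secondary-structure geometry.

```python
# Principle-component-correlation sketch on made-up secondary-structure centers.
import numpy as np

def leading_components(coords, k=3):
    """Pairwise-distance interaction matrix -> its k leading eigenvectors."""
    diff = coords[:, None, :] - coords[None, :, :]
    M = np.sqrt((diff ** 2).sum(-1))          # symmetric interaction matrix
    evals, evecs = np.linalg.eigh(M)
    order = np.argsort(np.abs(evals))[::-1][:k]
    return evecs[:, order]

rng = np.random.default_rng(7)
protein_a = rng.normal(size=(8, 3))           # centers of 8 secondary elements
protein_b = protein_a + rng.normal(0, 0.2, (8, 3))   # a structurally similar fold

Va, Vb = leading_components(protein_a), leading_components(protein_b)
# Correlate matched components (the sign of an eigenvector is arbitrary)
scores = [abs(np.corrcoef(Va[:, i], Vb[:, i])[0, 1]) for i in range(3)]
print("component correlations:", np.round(scores, 3))
```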

  9. Tracking polychlorinated biphenyls (PCBs) congener patterns in Newark Bay surface sediment using principal component analysis (PCA) and positive matrix factorization (PMF).

    Science.gov (United States)

    Saba, Tarek; Su, Steave

    2013-09-15

    PCB congener data for Newark Bay surface sediments were analyzed using PCA and PMF, and relationships between the outcomes from these two techniques were explored. The PCA scores plot separated the Lower Passaic River Mouth samples from North Newark Bay, thus indicating dissimilarity. Although PCA was able to identify subareas in the Bay system with specific PCB congener patterns (e.g., higher chlorinated congeners in the Elizabeth River), further conclusions regarding potential PCB source profiles or potential upland source areas were not clear from the PCA scores plot. PMF identified five source factors, and explained the Bay sample congener profiles as a mix of these factors. This PMF solution was equivalent to (1) defining an envelope that encompasses all samples on the PCA scores plot, (2) defining source factors that plot on that envelope, and (3) explaining the congener profile for each Bay sediment sample (inside the scores plot envelope) as a mix of factors. PMF analysis allowed identifying characteristic features in the source factor congener distributions that allowed tracking of source factors to shoreline areas where PCB inputs to the Bay may have originated. The combined analysis from PCA and PMF showed that direct discharges to the Bay are likely the dominant sources of PCBs to the sediment. Review of historical upland activities and regulatory files will be needed, in addition to the PCA and PMF analysis, to fully reconstruct the history of operations and PCB releases around the Newark Bay area that impacted the Bay sediment. Copyright © 2013 Elsevier B.V. All rights reserved.
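
    As a simplified stand-in for the receptor-modeling step (not EPA PMF, which additionally weights every entry by its measurement uncertainty), a non-negative factorization of a samples-by-congeners table can be sketched as follows; all data below are synthetic assumptions.

```python
# Non-negative factorization of a samples x congeners matrix into source
# profiles and contributions, as a rough, uncertainty-free stand-in for PMF.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(8)
n_samples, n_congeners, n_factors = 60, 40, 5
true_profiles = rng.random((n_factors, n_congeners))
true_profiles /= true_profiles.sum(axis=1, keepdims=True)   # unit-sum profiles
contributions = rng.random((n_samples, n_factors)) * 10
X = contributions @ true_profiles + rng.normal(0, 0.01, (n_samples, n_congeners))
X = np.clip(X, 0, None)

model = NMF(n_components=n_factors, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)        # factor contributions per sediment sample
F = model.components_             # factor congener profiles
print("reconstruction error:", round(model.reconstruction_err_, 3))
```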

  10. Global sensitivity analysis of bogie dynamics with respect to suspension components

    Energy Technology Data Exchange (ETDEWEB)

    Mousavi Bideleh, Seyed Milad, E-mail: milad.mousavi@chalmers.se; Berbyuk, Viktor, E-mail: viktor.berbyuk@chalmers.se [Chalmers University of Technology, Department of Applied Mechanics (Sweden)

    2016-06-15

    The effects of bogie primary and secondary suspension stiffness and damping components on the dynamics behavior of a high speed train are scrutinized based on the multiplicative dimensional reduction method (M-DRM). A one-car railway vehicle model is chosen for the analysis at two levels of the bogie suspension system: symmetric and asymmetric configurations. Several operational scenarios including straight and circular curved tracks are considered, and measurement data are used as the track irregularities in different directions. Ride comfort, safety, and wear objective functions are specified to evaluate the vehicle’s dynamics performance on the prescribed operational scenarios. In order to have an appropriate cut center for the sensitivity analysis, the genetic algorithm optimization routine is employed to optimize the primary and secondary suspension components in terms of wear and comfort, respectively. The global sensitivity indices are introduced and the Gaussian quadrature integrals are employed to evaluate the simplified sensitivity indices correlated to the objective functions. In each scenario, the most influential suspension components on bogie dynamics are recognized and a thorough analysis of the results is given. The outcomes of the current research provide informative data that can be beneficial in design and optimization of passive and active suspension components for high speed train bogies.

  11. Global sensitivity analysis of bogie dynamics with respect to suspension components

    International Nuclear Information System (INIS)

    Mousavi Bideleh, Seyed Milad; Berbyuk, Viktor

    2016-01-01

    The effects of bogie primary and secondary suspension stiffness and damping components on the dynamics behavior of a high speed train are scrutinized based on the multiplicative dimensional reduction method (M-DRM). A one-car railway vehicle model is chosen for the analysis at two levels of the bogie suspension system: symmetric and asymmetric configurations. Several operational scenarios including straight and circular curved tracks are considered, and measurement data are used as the track irregularities in different directions. Ride comfort, safety, and wear objective functions are specified to evaluate the vehicle’s dynamics performance on the prescribed operational scenarios. In order to have an appropriate cut center for the sensitivity analysis, the genetic algorithm optimization routine is employed to optimize the primary and secondary suspension components in terms of wear and comfort, respectively. The global sensitivity indices are introduced and the Gaussian quadrature integrals are employed to evaluate the simplified sensitivity indices correlated to the objective functions. In each scenario, the most influential suspension components on bogie dynamics are recognized and a thorough analysis of the results is given. The outcomes of the current research provide informative data that can be beneficial in design and optimization of passive and active suspension components for high speed train bogies.

  12. Creative design-by-analysis solutions applied to high-temperature components

    International Nuclear Information System (INIS)

    Dhalla, A.K.

    1993-01-01

    Elevated temperature design has evolved over the last two decades from design-by-formula philosophy of the ASME Boiler and Pressure Vessel Code, Sections I and VIII (Division 1), to the design-by-analysis philosophy of Section III, Code Case N-47. The benefits of design-by-analysis procedures, which were developed under a US-DOE-sponsored high-temperature structural design (HTSD) program, are illustrated in the paper through five design examples taken from two U.S. liquid metal reactor (LMR) plants. Emphasis in the paper is placed upon the use of a detailed, nonlinear finite element analysis method to understand the structural response and to suggest design optimization so as to comply with Code Case N-47 criteria. A detailed analysis is cost-effective, if selectively used, to qualify an LMR component for service when long-lead-time structural forgings, procured based upon simplified preliminary analysis, do not meet the design criteria, or the operational loads are increased after the components have been fabricated. In the future, the overall costs of a detailed analysis will be reduced even further with the availability of finite element software used on workstations or PCs

  13. Principal component analysis to assess the efficiency and mechanism for enhanced coagulation of natural algae-laden water using a novel dual coagulant system.

    Science.gov (United States)

    Ou, Hua-Se; Wei, Chao-Hai; Deng, Yang; Gao, Nai-Yun; Ren, Yuan; Hu, Yun

    2014-02-01

    A novel dual coagulant system of polyaluminum chloride sulfate (PACS) and polydiallyldimethylammonium chloride (PDADMAC) was used to treat natural algae-laden water from Meiliang Gulf, Lake Taihu. PACS (Aln(OH)mCl(3n-m-2k)(SO4)k) has a mass ratio of 10%, a SO4(2-)/Al(3+) mole ratio of 0.0664, and an OH/Al mole ratio of 2. The PDADMAC ([C8H16NCl]m) has a MW which ranges from 5 × 10^5 to 20 × 10^5 Da. The variations of contaminants in water samples during the treatments were estimated in the form of principal component analysis (PCA) factor scores and conventional variables (turbidity, DOC, etc.). Parallel factor analysis determined four chromophoric dissolved organic matter (CDOM) components, and PCA identified four integrated principal factors. PCA factor 1 had significant correlations with chlorophyll-a (r=0.718), protein-like CDOM C1 (0.689), and C2 (0.756). Factor 2 correlated with UV254 (0.672), humic-like CDOM components C3 (0.716), and C4 (0.758). Factors 3 and 4 had correlations with NH3-N (0.748) and T-P (0.769), respectively. The variations of the PCA factor scores revealed that PACS contributed less aluminum dissolution than PAC to obtain equivalent removal efficiency of contaminants. This might be due to the high cationic charge and pre-hydrolyzation of PACS. Compared with PACS coagulation (20 mg L(-1)), the removal of PCA factors 1, 2, and 4 increased 45, 33, and 12%, respectively, in the combined PACS-PDADMAC treatment (0.8 mg L(-1) + 20 mg L(-1)). Since PAC contained more Al (0.053 g/1 g) than PACS (0.028 g/1 g), the results indicated that PACS contributed less Al dissolution into the water to obtain equivalent removal efficiency.

  14. Principal component analysis of the main factors of line intensity enhancements observed in oscillating direct current plasma

    International Nuclear Information System (INIS)

    Stoiljkovic, Milovan M.; Pasti, Igor A.; Momcilovic, Milos D.; Savovic, Jelena J.; Pavlovic, Mirjana S.

    2010-01-01

    Enhancement of emission line intensities by induced oscillations of direct current (DC) arc plasma with continuous aerosol sample supply was investigated using multivariate statistics. Principal component analysis (PCA) was employed to evaluate enhancements of 34 atomic spectral lines belonging to 33 elements and 35 ionic spectral lines belonging to 23 elements. Correlation and classification of the elements were done not only by a single property such as the first ionization energy, but also by considering other relevant parameters. Special attention was paid to the influence of the oxide bond strength in an attempt to clarify/predict the enhancement effect. Energies of vaporization, atomization, and excitation were also considered in the analysis. In the case of atomic lines, the best correlation between the enhancements and first ionization energies was obtained as a negative correlation, with weak consistency in grouping of elements in score plots. Conversely, in the case of ionic lines, the best correlation of the enhancements with the sum of the first ionization energies and oxide bond energies was obtained as a positive correlation, with four distinctive groups of elements. The role of the gas-phase atom-oxide bond energy in the entire enhancement effect is underlined.

  15. Priority of VHS Development Based in Potential Area using Principal Component Analysis

    Science.gov (United States)

    Meirawan, D.; Ana, A.; Saripudin, S.

    2018-02-01

    The current condition of VHS is still inadequate in quality, quantity and relevance. The purpose of this research is to analyse the development of VHS based on the development of regional potential by using principal component analysis (PCA) in Bandung, Indonesia. This study used descriptive qualitative analysis based on principal-component reduction of secondary data. The method used is Principal Component Analysis (PCA) with the Minitab statistical software. The results of this study indicate that the areas with the lowest scores are the priority for the construction of new VHS, with programs of majors chosen in accordance with the development of regional potential. Based on the PCA scores, the main priority for the development of VHS in Bandung is Saguling, which has the lowest PCA value of 416.92 in area 1, followed by Cihampelas, with the lowest PCA value in region 2, and Padalarang, with the lowest PCA value.

  16. A survival analysis on critical components of nuclear power plants

    International Nuclear Information System (INIS)

    Durbec, V.; Pitner, P.; Riffard, T.

    1995-06-01

    Some tubes of the heat exchangers of nuclear power plants may be affected by Primary Water Stress Corrosion Cracking (PWSCC) in highly stressed areas. These defects can shorten the lifetime of the component and lead to its replacement. In order to reduce the risk of cracking, a preventive remedial operation called shot peening was applied on the French reactors between 1985 and 1988. To assess and investigate the effects of shot peening, a statistical analysis was carried out on the tube degradation results obtained from the in-service inspections that are regularly conducted using non-destructive tests. The statistical method used is based on the Cox proportional hazards model, a powerful tool in the analysis of survival data, implemented in PROC PHREG, recently made available in SAS/STAT. This technique has a number of major advantages, including the ability to deal with censored failure-time data and with the complication of time-dependent covariates. The paper focuses on the modelling and on a presentation of the results given by SAS. They provide estimates of how the relative risk of degradation changes after peening and indicate for which values of the prognostic factors analyzed the treatment is likely to be most beneficial. (authors). 2 refs., 3 figs., 6 tabs
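
    A small Cox proportional-hazards sketch with the Python lifelines package is shown below to illustrate the type of survival model described; the covariates, column names, and simulated inspection data are assumptions, and the original analysis was performed with SAS PROC PHREG.

```python
# Cox proportional-hazards fit on simulated tube-degradation data (illustrative
# assumptions only; not the EDF data or the original SAS analysis).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(9)
n = 500
peened = rng.integers(0, 2, n)                 # 1 = tube shot peened
stress = rng.normal(0, 1, n)                   # standardized stress indicator
# Simulated time to PWSCC detection: peening lengthens time (lowers the hazard)
baseline = rng.exponential(10, n)
time = baseline * np.exp(0.8 * peened - 0.5 * stress)
censor = rng.exponential(15, n)
observed = (time <= censor).astype(int)
duration = np.minimum(time, censor)

df = pd.DataFrame({"duration": duration, "cracked": observed,
                   "peened": peened, "stress": stress})
cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="cracked")
cph.print_summary()                            # hazard ratios for peening, stress
```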

  17. Exploratory Bi-factor Analysis: The Oblique Case

    OpenAIRE

    Jennrich, Robert L.; Bentler, Peter M.

    2011-01-01

    Bi-factor analysis is a form of confirmatory factor analysis originally introduced by Holzinger and Swineford (1937). The bi-factor model has a general factor, a number of group factors, and an explicit bi-factor structure. Jennrich and Bentler (2011) introduced an exploratory form of bi-factor analysis that does not require one to provide an explicit bi-factor structure a priori. They use exploratory factor analysis and a bi-factor rotation criterion designed to produce a rotated loading mat...

  18. A method for independent component graph analysis of resting-state fMRI

    DEFF Research Database (Denmark)

    de Paula, Demetrius Ribeiro; Ziegler, Erik; Abeyasinghe, Pubuditha M.

    2017-01-01

    Introduction: Independent component analysis (ICA) has been extensively used for reducing task-free BOLD fMRI recordings into spatial maps and their associated time-courses. The spatially identified independent components can be considered as intrinsic connectivity networks (ICNs) of non-contiguous regions. To date, the spatial patterns of the networks have been analyzed with techniques developed for volumetric data. Objective: Here, we detail a graph building technique that allows these ICNs to be analyzed with graph theory. Methods: First, ICA was performed at the single-subject level in 15 healthy ... parcellated regions. Third, between-node functional connectivity was established by building edge weights for each network. Group-level graph analysis was finally performed for each network and compared to the classical network. Results: Network graph comparison between the classically constructed network ...

  19. Discriminatory components retracing strategy for monitoring the preparation procedure of Chinese patent medicines by fingerprint and chemometric analysis.

    Directory of Open Access Journals (Sweden)

    Shuai Yao

    Full Text Available Chinese patent medicines (CPMs), generally prepared from several traditional Chinese medicines (TCMs) in accordance with a specific process, are the typical delivery form of TCMs in Asia. To date, quality control of CPMs has typically focused on the evaluation of the final products using fingerprint techniques and multi-component quantification, but rarely on monitoring the whole preparation process, which is considered more important for ensuring the quality of CPMs. In this study, a novel and effective "retracing" strategy based on HPLC fingerprints and chemometric analysis was proposed, with Shenkang injection (SKI) serving as an example, to achieve quality control of the whole preparation process. The chemical fingerprints were established initially and then analyzed by similarity, principal component analysis (PCA) and partial least squares-discriminant analysis (PLS-DA) to evaluate the quality and to explore discriminatory components. As a result, the holistic inconsistencies of ninety-three batches of SKIs were identified, and five discriminatory components, including emodic acid, gallic acid, caffeic acid, chrysophanol-O-glucoside, and p-coumaroyl-O-galloyl-glucose, were labeled as the representative targets to explain the retracing strategy. Through analysis of the variation of these targets in the corresponding semi-products (ninety-three batches), intermediates (thirty-three batches), and the raw materials, successively, the origins of the discriminatory components were determined and some crucial influencing factors were proposed, including the raw materials, the coextraction temperature, the sterilizing conditions, and so on. Meanwhile, a reference fingerprint was established and subsequently applied to the guidance of manufacturing. It was suggested that the production process should be standardized by taking the concentration of the discriminatory components as the diagnostic marker to ensure stable and consistent quality for multi

  20. Summary of component reliability data for probabilistic safety analysis of Korean standard nuclear power plant

    International Nuclear Information System (INIS)

    Choi, S. Y.; Han, S. H.

    2004-01-01

    Reliability data for Korean NPPs that reflect plant-specific characteristics are necessary for the PSA of Korean nuclear power plants. We have performed a study to develop a component reliability database and software for component reliability analysis. Based on this system, we collected component operation data and failure/repair data during plant operation up to 1998/2000 for YGN 3,4 and UCN 3,4, respectively. Recently, we upgraded the database by collecting additional data up to 2002 for the Korean standard nuclear power plants and performed component reliability analysis and Bayesian analysis again. In this paper, we provide a summary of the component reliability data for the probabilistic safety analysis of the Korean standard nuclear power plant and describe the plant-specific characteristics compared to generic data

  1. Fatigue characterization of mechanical components in service

    Directory of Open Access Journals (Sweden)

    G. Fargione

    2013-10-01

    Full Text Available Quickly identifying the fatigue limit of a mechanical component with good approximation is currently a significant practical problem that has not yet been resolved in a satisfactory way. Generally, for a mechanical component, the fatigue strength reduction factor (i) is difficult to evaluate, especially when the component is in service. In this paper, the procedures for crack path identification and consequent damage evaluation (adopted in the laboratory for specimens stressed with planned load histories) are applied to mechanical components that have already failed during service. The energy parameters proposed by the authors for the evaluation of the fatigue behavior of materials [1-5] are determined on specimens taken from the bolts of a flange connecting pipes at high temperature and pressure. Due to the loss of the seal, the bolts had been subjected to a hot steam flow in addition to the normal stress. The numerical analysis, coupled with experimental analysis (measurement of the surface temperature during static and dynamic tests of specimens taken from the damaged tie rods), helped to determine the causes of failure of the tie rods. The determination of an energy parameter for the evaluation of the damage showed that factors related to the heat release of the loaded material may also help to understand the causes of failure of mechanical components.

  2. Multigroup Moderation Test in Generalized Structured Component Analysis

    Directory of Open Access Journals (Sweden)

    Angga Dwi Mulyanto

    2016-05-01

    Full Text Available Generalized Structured Component Analysis (GSCA) is an alternative method for structural modeling using alternating least squares. GSCA can be used for complex analyses, including multigroup analysis. GSCA can be run with a free software package called GeSCA, but GeSCA provides no multigroup moderation test to compare effects between groups. In this research we propose using the T test from PLS for multigroup moderation testing in GSCA. The T test only requires the sample size, the estimated path coefficient, and the standard error of each group, all of which are already available in the GeSCA output, and the formula is simple, so the analysis does not take the user much time.
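
    A minimal illustration of the parametric multigroup comparison described above, assuming only the quantities the abstract says GeSCA reports (per-group path coefficient, standard error, and sample size). The pooled-variance form below follows a common PLS multigroup t test; the exact variant intended by the authors, and the example numbers, are assumptions.

```python
from math import sqrt
from scipy import stats

def multigroup_t(b1, se1, n1, b2, se2, n2):
    """Parametric multigroup comparison of one path coefficient.

    b: estimated path coefficient, se: its standard error, n: group sample size.
    Pooled-variance form often used in PLS multigroup analysis; a simpler
    unpooled variant is t = (b1 - b2) / sqrt(se1**2 + se2**2).
    """
    df = n1 + n2 - 2
    pooled = sqrt(((n1 - 1) ** 2 / df) * se1 ** 2 + ((n2 - 1) ** 2 / df) * se2 ** 2)
    t = (b1 - b2) / (pooled * sqrt(1.0 / n1 + 1.0 / n2))
    p = 2 * stats.t.sf(abs(t), df)      # two-sided p value
    return t, p

# Hypothetical GeSCA output for one path in two groups
print(multigroup_t(b1=0.42, se1=0.08, n1=120, b2=0.21, se2=0.10, n2=95))
```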

  3. Factors affecting construction performance: exploratory factor analysis

    Science.gov (United States)

    Soewin, E.; Chinda, T.

    2018-04-01

    The present work attempts to develop a multidimensional performance evaluation framework for a construction company by considering all relevant measures of performance. Based on previous studies, this study hypothesizes nine key factors, with a total of 57 associated items. The hypothesized factors, with their associated items, are then used to develop a questionnaire survey to gather data. Exploratory factor analysis (EFA) applied to the collected data gave rise to 10 factors, with 57 items, affecting construction performance. The findings further reveal ten key performance factors (KPIs), namely: 1) Time, 2) Cost, 3) Quality, 4) Safety & Health, 5) Internal Stakeholder, 6) External Stakeholder, 7) Client Satisfaction, 8) Financial Performance, 9) Environment, and 10) Information, Technology & Innovation. The analysis helps to develop a multi-dimensional performance evaluation framework for effective measurement of construction performance. The 10 key performance factors can be broadly categorized into economic, social, environmental, and technology aspects. It is important to build a multi-dimensional performance evaluation framework that includes all key factors affecting the construction performance of a company, so that management can plan an effective performance development program that matches the mission and vision of the company.
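
    The survey data and item wording are not public, so the sketch below runs a varimax-rotated factor analysis on a synthetic 57-item matrix with scikit-learn as a stand-in for the EFA workflow described; the sample size and the 0.4 loading cutoff are illustrative assumptions.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

# X: hypothetical survey matrix, rows = respondents, columns = 57 Likert items
rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(200, 57)).astype(float)

Z = StandardScaler().fit_transform(X)                 # standardize the items
fa = FactorAnalysis(n_components=10, rotation="varimax")
scores = fa.fit_transform(Z)                          # respondent factor scores
loadings = fa.components_.T                           # 57 items x 10 factors

# Items loading strongly (|loading| >= 0.4 is a common rule of thumb) on factor 1
print(np.where(np.abs(loadings[:, 0]) >= 0.4)[0])
```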

  4. A factor analysis to detect factors influencing building national brand

    Directory of Open Access Journals (Sweden)

    Naser Azad

    Full Text Available Developing a national brand is one of the most important issues for development of a brand. In this study, we present factor analysis to detect the most important factors in building a national brand. The proposed study uses factor analysis to extract the most influencing factors and the sample size has been chosen from two major auto makers in Iran called Iran Khodro and Saipa. The questionnaire was designed in Likert scale and distributed among 235 experts. Cronbach alpha is calculated as 84%, which is well above the minimum desirable limit of 0.70. The implementation of factor analysis provides six factors including “cultural image of customers”, “exciting characteristics”, “competitive pricing strategies”, “perception image” and “previous perceptions”.

  5. Radionuclide X-ray fluorescence analysis of components of the environment

    International Nuclear Information System (INIS)

    Toelgyessy, J.; Havranek, E.; Dejmkova, E.

    1983-12-01

    The physical foundations and methodology are described of radionuclide X-ray fluorescence analysis. The sources are listed of air, water and soil pollution, and the transfer of impurities into biological materials is described. A detailed description is presented of the sampling of air, soil and biological materials and their preparation for analysis. Greatest attention is devoted to radionuclide X-ray fluorescence analysis of the components of the environment. (ES)

  6. Sparse principal component analysis in medical shape modeling

    Science.gov (United States)

    Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus

    2006-03-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
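
    The authors' Matlab implementation is not reproduced here; the sketch below uses scikit-learn's SparsePCA on a synthetic shape matrix and orders the sparse modes by decreasing score variance, in the spirit of the ordering the abstract proposes. Data sizes and the sparsity parameter are assumptions.

```python
import numpy as np
from sklearn.decomposition import SparsePCA

# X: hypothetical shape matrix, rows = shapes, columns = concatenated landmark coordinates
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 128))
Xc = X - X.mean(axis=0)                      # center, as for ordinary PCA

spca = SparsePCA(n_components=6, alpha=1.0, random_state=1)
T = spca.fit_transform(Xc)                   # shape scores on each sparse mode

# Order modes by decreasing variance of their scores. Sparse modes are not
# orthogonal, so this is only an approximate analogue of PCA's eigenvalue ordering.
order = np.argsort(T.var(axis=0))[::-1]
print(order, T.var(axis=0)[order])
```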

  7. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis

    Science.gov (United States)

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Due to the advancement in sensor technology, growing large medical image data sets make it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. At the same time, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which are difficult to determine in practice. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both the liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383

  8. Fast principal component analysis for stacking seismic data

    Science.gov (United States)

    Wu, Juan; Bai, Min

    2018-04-01

    Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and enhance the principal components to a great extent. Traditional average-based seismic stacking methods cannot obtain optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data that is not sensitive to the noise level. Considering the computational bottleneck of the classic PCA algorithm in processing massive seismic data, we propose an efficient PCA algorithm to make the method readily applicable to industrial applications. Two numerically designed examples and one real seismic data set are used to demonstrate the performance of the presented method.
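
    A minimal sketch of the idea of replacing the average stack by the dominant principal component of a moveout-corrected gather, using a plain SVD on synthetic data; the paper's fast PCA algorithm for massive volumes is not reproduced here.

```python
import numpy as np

def pca_stack(gather):
    """Rank-1 (principal component) stack of a moveout-corrected gather.

    gather: 2-D array, rows = traces, columns = time samples. Returns one
    stacked trace: the row-average of the rank-1 SVD approximation, i.e. the
    dominant principal component rather than the plain mean of the traces.
    """
    u, s, vt = np.linalg.svd(gather, full_matrices=False)
    return u[:, 0].mean() * s[0] * vt[0]

# Toy gather: one common waveform buried in strong random noise on every trace
rng = np.random.default_rng(2)
wavelet = np.exp(-0.5 * ((np.arange(500) - 250) / 10.0) ** 2)
gather = wavelet + 2.0 * rng.normal(size=(48, 500))
print(pca_stack(gather).shape)
```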

  9. Path and correlation analysis of perennial ryegrass (Lolium perenne L.) seed yield components

    DEFF Research Database (Denmark)

    Abel, Simon; Gislum, René; Boelt, Birte

    2017-01-01

    Maximum perennial ryegrass seed production potential is substantially greater than harvested yields, with harvested yields representing only 20% of the calculated potential. Similar to wheat, maize and other agriculturally important crops, seed yield is highly dependent on a number of interacting seed... yield components. This research was performed to apply and describe path analysis of perennial ryegrass seed yield components in relation to harvested seed yields. Utilising extensive yield components, which included subdividing reproductive inflorescences into five size categories, path analysis... was undertaken assuming a unidirectional causal-admissible relationship between seed yield components and harvested seed yield in six commercial seed production fields. Both spikelets per inflorescence and florets per spikelet had a significant effect on seed yield; however, total...

  10. Failure trend analysis for safety related components of Korean standard NPPs

    International Nuclear Information System (INIS)

    Choi, Sun Yeong; Han, Sang Hoon

    2005-01-01

    Component reliability data for Korean NPPs that reflect plant-specific characteristics are necessarily required for the PSA of Korean nuclear power plants. We have performed a project to develop the component reliability database (KIND, Korea Integrated Nuclear Reliability Database) and S/W for database management and component reliability analysis. Based on this system, we have collected component operation data and failure/repair data from the start of plant operation to 2002 for the YGN 3, 4 and UCN 3, 4 plants. Recently, we provided the component failure rate data from KIND for the UCN 3, 4 standard PSA model. We evaluated the components with high-ranking failure rates using the component reliability data from the start of plant operation to 1998 and 2000 for YGN 3, 4 and UCN 3, 4, respectively. We also identified the failure modes that occurred frequently. In this study, we analyze the component failure trends and perform a site comparison against generic data, using the component reliability data extended to 2002 for UCN 3, 4 and YGN 3, 4, respectively. We focus on major safety-related rotating components such as pumps, EDGs, etc.

  11. Multivariate factor analysis of Girgentana goat milk composition

    Directory of Open Access Journals (Sweden)

    Pietro Giaccone

    2010-01-01

    Full Text Available The interpretation of the several variables that contribute to defining milk quality is difficult due to the high degree of correlation among them. In this case, one of the best methods of statistical processing is factor analysis, which belongs to the multivariate group; for our study this particular statistical approach was employed. A total of 1485 individual goat milk samples from 117 Girgentana goats were collected fortnightly from January to July and analysed for physical and chemical composition and clotting properties. Milk pH and titratable acidity were within the normal range for fresh goat milk. Morning milk yield was 704 ± 323 g, with 3.93 ± 1.23% and 3.48 ± 0.38% for fat and protein percentages, respectively. The milk urea content was 43.70 ± 8.28 mg/dl. The clotting ability of Girgentana milk was quite good, with a renneting time equal to 16.96 ± 3.08 minutes, a rate of curd formation of 2.01 ± 1.63 minutes and a curd firmness of 25.08 ± 7.67 millimetres. Factor analysis was performed by applying orthogonal axis rotation (rotation type VARIMAX); the analysis grouped the milk components into three latent or common factors. The first, which explained 51.2% of the total covariance, was defined as “slow milks”, because it was linked to renneting time (r) and pH. The second latent factor, which explained 36.2% of the total covariance, was defined as “milk yield”, because it is positively correlated with the morning milk yield and the urea content, whilst negatively correlated with the fat percentage. The third latent factor, which explained 12.6% of the total covariance, was defined as “curd firmness”, because it is linked to the protein percentage, a30 and titratable acidity. With the aim of evaluating the influence of environmental effects (stage of kidding, parity and type of kidding), factor scores were analysed with a mixed linear model. Results showed significant effects of the season of

  12. A Genealogical Interpretation of Principal Components Analysis

    Science.gov (United States)

    McVean, Gil

    2009-01-01

    Principal components analysis, PCA, is a statistical method commonly used in population genetics to identify structure in the distribution of genetic variation across geographical location and ethnic background. However, while the method is often used to inform about historical demographic processes, little is known about the relationship between fundamental demographic parameters and the projection of samples onto the primary axes. Here I show that for SNP data the projection of samples onto the principal components can be obtained directly from considering the average coalescent times between pairs of haploid genomes. The result provides a framework for interpreting PCA projections in terms of underlying processes, including migration, geographical isolation, and admixture. I also demonstrate a link between PCA and Wright's fst and show that SNP ascertainment has a largely simple and predictable effect on the projection of samples. Using examples from human genetics, I discuss the application of these results to empirical data and the implications for inference. PMID:19834557
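
    A small numerical sketch of the quantity the abstract builds on: the projections of samples onto the principal axes obtained directly from the sample-by-sample covariance of centred SNP data, whose expectation the paper relates to average pairwise coalescent times. The genotype matrix here is synthetic.

```python
import numpy as np

# G: hypothetical matrix of haploid genomes, rows = samples, columns = SNPs (0/1 alleles)
rng = np.random.default_rng(3)
G = rng.integers(0, 2, size=(40, 5000)).astype(float)

C = G - G.mean(axis=0)            # centre each SNP across samples
M = C @ C.T / C.shape[1]          # sample-by-sample covariance; under the paper's
                                  # argument its expectation is a linear function of
                                  # average pairwise coalescence times

evals, evecs = np.linalg.eigh(M)  # eigenvectors give the PCA projections directly
pc1, pc2 = evecs[:, -1], evecs[:, -2]
print(evals[-2:], pc1[:5])
```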

  13. Delaunay algorithm and principal component analysis for 3D visualization of mitochondrial DNA nucleoids by Biplane FPALM/dSTORM

    Czech Academy of Sciences Publication Activity Database

    Alán, Lukáš; Špaček, Tomáš; Ježek, Petr

    2016-01-01

    Roč. 45, č. 5 (2016), s. 443-461 ISSN 0175-7571 R&D Projects: GA ČR(CZ) GA13-02033S; GA MŠk(CZ) ED1.1.00/02.0109 Institutional support: RVO:67985823 Keywords : 3D object segmentation * Delaunay algorithm * principal component analysis * 3D super-resolution microscopy * nucleoids * mitochondrial DNA replication Subject RIV: BO - Biophysics Impact factor: 1.472, year: 2016

  14. Representation for dialect recognition using topographic independent component analysis

    Science.gov (United States)

    Wei, Qu

    2004-10-01

    In dialect speech recognition, the feature of tone in one dialect is subject to changes in pitch frequency as well as the length of tone. It is beneficial for the recognition if a representation can be derived to account for the frequency and length changes of tone in an effective and meaningful way. In this paper, we propose a method for learning such a representation from a set of unlabeled speech sentences containing the features of the dialect changed from various pitch frequencies and time length. Topographic independent component analysis (TICA) is applied for the unsupervised learning to produce an emergent result that is a topographic matrix made up of basis components. The dialect speech is topographic in the following sense: the basis components as the units of the speech are ordered in the feature matrix such that components of one dialect are grouped in one axis and changes in time windows are accounted for in the other axis. This provides a meaningful set of basis vectors that may be used to construct dialect subspaces for dialect speech recognition.

  15. Sensitivity analysis on the component cooling system of the Angra 1 NPP

    International Nuclear Information System (INIS)

    Castro Silva, Luiz Euripedes Massiere de

    1995-01-01

    The component cooling system has been studied within the scope of the Probabilistic Safety Analysis of the Angra I NPP in order to assure that the proposed model matches the functioning of the system and its availability aspects as closely as possible. To that end, a sensitivity analysis was performed on the equivalence between the operating modes of the component cooling system, and its results show the fitness of the model. (author). 4 refs, 3 figs, 3 tabs

  16. Factor analysis for the adoption of nuclear technology in diagnosis and treatment of chronic diseases

    International Nuclear Information System (INIS)

    Sato, Renato Cesar; Zouain, Desiree Moraes

    2012-01-01

    To identify and evaluate latent variables (variables that are not directly observed) for adopting and using nuclear technologies in the diagnosis and treatment of chronic diseases. The measurement and management of these latent factors are important for health care due to the complexities of the sector. Methods: An exploratory factor analysis study was conducted among 52 physicians practicing in the areas of Cardiology, Neurology and Oncology in the State of Sao Paulo who agreed to participate in the study between 2009 and 2010. Data were collected using an attitude measurement questionnaire and analyzed according to the principal component method with Varimax rotation. Results: The component matrix after factor rotation showed three elucidative groups arranged according to the demand for nuclear technology: clinical factors, structural factors, and technological factors. Clinical factors included questionnaire answers referring to medical history, previous interventions, and the complexity and chronicity of the disease. Structural factors included patient age, the physician's practice area, and payment ability. Technological factors included prospective growth in the use of nuclear technology and the availability of services. Conclusions: The clinical factors dimension identified in the study included patient history, prior interventions, and the complexity and chronicity of the disease. This dimension is the main motivation for adopting nuclear technology in the diagnosis and treatment of chronic diseases. (author)

  17. Non-Double-Couple Component Analysis of Induced Microearthquakes in the Val D'Agri Basin (Italy)

    Science.gov (United States)

    Roselli, P.; Improta, L.; Saccorotti, G.

    2017-12-01

    In recent years it has become accepted that earthquake sources can attain significant Non-Double-Couple (NDC) components. Among the driving factors of deviation from normal double-couple (DC) mechanisms are the opening/closing of fracture networks and the activation of pre-existing faults by pore fluid pressure perturbations. This observation makes the thorough analysis of source mechanisms of key importance for understanding withdrawal/injection-induced seismicity from geothermal and hydrocarbon reservoirs, as well as water-reservoir-induced seismicity. In addition to the DC component, the seismic moment tensor can be decomposed into isotropic (ISO) and compensated linear vector dipole (CLVD) components. In this study we performed a careful analysis of the seismic moment tensors of induced microseismicity recorded in the Val d'Agri basin (Southern Apennines, Italy), focusing our attention on the NDC component. The Val d'Agri is a Quaternary extensional basin that hosts the largest onshore European oil field and a water reservoir (the Pertusillo Lake impoundment) characterized by severe seasonal level oscillations. Our input data set includes swarm-type induced micro-seismicity recorded between 2005-2006 by a high-performance network and accurately localized by a reservoir-scale local earthquake tomography. We analyze two different seismicity clusters: (i) a swarm of 69 earthquakes with 0.3 ≤ ML ≤ 1.8 induced by a wastewater disposal well of the oilfield during the initial daily injection tests (10 days); (ii) 526 earthquakes with -0.2 ≤ ML ≤ 2.7 induced by seasonal volume changes of the artificial lake. We perform the seismic moment tensor inversion using the HybridMT code. After a very accurate signal-to-noise selection and manual picking of P-pulses, we obtain %DC, %ISO, and %CLVD for each event. DC and NDC components are analyzed and compared with the spatio-temporal distribution of seismicity, the local stress field, the injection parameters and the water

  18. Reliability Analysis of 6-Component Star Markov Repairable System with Spatial Dependence

    Directory of Open Access Journals (Sweden)

    Liying Wang

    2017-01-01

    Full Text Available Star repairable systems with spatial dependence consist of a center component and several peripheral components. The peripheral components are arranged around the center component, and the performance of each component depends on its spatial “neighbors.” Vector-Markov process is adapted to describe the performance of the system. The state space and transition rate matrix corresponding to the 6-component star Markov repairable system with spatial dependence are presented via probability analysis method. Several reliability indices, such as the availability, the probabilities of visiting the safety, the degradation, the alert, and the failed state sets, are obtained by Laplace transform method and a numerical example is provided to illustrate the results.
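
    The paper derives its indices by Laplace transforms over the full state space of the 6-component star system; the sketch below uses a toy 3-state chain and the equivalent matrix-exponential route simply to show how availability-type indices follow from a transition rate matrix. The states and rates here are invented for illustration.

```python
import numpy as np
from scipy.linalg import expm

# Toy continuous-time Markov repairable model (not the paper's full state space):
# states 0 = fully working, 1 = degraded, 2 = failed; hypothetical rates below.
lam1, lam2, mu1, mu2 = 0.02, 0.05, 0.5, 0.2

Q = np.array([
    [-lam1,          lam1,          0.0],
    [  mu1, -(mu1 + lam2),         lam2],
    [  0.0,           mu2,         -mu2],
])                                           # rows sum to zero (generator matrix)

p0 = np.array([1.0, 0.0, 0.0])               # start in the fully working state
for t in (1.0, 10.0, 100.0):
    pt = p0 @ expm(Q * t)                    # transient state probabilities at time t
    print(t, pt, "availability:", pt[0] + pt[1])

# Steady-state probabilities: solve pi Q = 0 with sum(pi) = 1
A = np.vstack([Q.T, np.ones(3)])
pi = np.linalg.lstsq(A, np.r_[np.zeros(3), 1.0], rcond=None)[0]
print("long-run availability:", pi[0] + pi[1])
```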

  19. Factor analysis of multivariate data

    Digital Repository Service at National Institute of Oceanography (India)

    Fernandes, A.A.; Mahadevan, R.

    A brief introduction to factor analysis is presented. A FORTRAN program, which can perform Q-mode and R-mode factor analysis and the singular value decomposition of a given data matrix, is presented in Appendix B. This computer program uses...

  20. Optimization benefits analysis in production process of fabrication components

    Science.gov (United States)

    Prasetyani, R.; Rafsanjani, A. Y.; Rimantho, D.

    2017-12-01

    The determination of an optimal number of product combinations is important. The main problem at the part and service department of PT. United Tractors Pandu Engineering (shortened to PT. UTPE) is the optimization of the combination of fabrication component products (known as Liner Plates), which influences the profit the company will obtain. The Liner Plate is a fabrication component that serves as a protector of the core structure of heavy duty attachments, such as the HD Vessel, HD Bucket, HD Shovel, and HD Blade. Liner plate sales from January to December 2016 fluctuated, and no direct conclusion could be drawn about the optimal production of such fabrication components. The optimal product combination can be achieved by calculating and plotting the amount of production output and input appropriately. The method used in this study is linear programming with primal, dual, and sensitivity analysis, using QM software for Windows to obtain the optimal fabrication components. With the optimal combination of components, PT. UTPE could increase profit by Rp. 105,285,000.00, to a total of Rp. 3,046,525,000.00 per month, with a total production combination of 71 units per variant per month.
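
    The abstract does not give the model coefficients, so the sketch below sets up a hypothetical two-product mix with scipy.optimize.linprog; with recent SciPy versions the HiGHS solver also returns the dual values (shadow prices) used in the kind of sensitivity analysis mentioned above. All numbers are assumptions.

```python
from scipy.optimize import linprog

# Hypothetical product-mix model for two liner-plate variants (the real
# coefficients, capacities and unit profits are not given in the abstract).
profit = [-1_500_000, -2_100_000]         # negated: linprog minimizes
A_ub = [[2.0, 3.0],                        # machining hours per unit
        [1.0, 2.5]]                        # welding hours per unit
b_ub = [160.0, 120.0]                      # monthly capacity of each resource

res = linprog(c=profit, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print("optimal units per variant:", res.x)
print("maximum monthly profit:", -res.fun)
print("shadow prices of the resources:", res.ineqlin.marginals)
```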

  1. Factor Analysis of Drawings: Application to college student models of the greenhouse effect

    Science.gov (United States)

    Libarkin, Julie C.; Thomas, Stephen R.; Ording, Gabriel

    2015-09-01

    Exploratory factor analysis was used to identify models underlying drawings of the greenhouse effect made by over 200 entering university freshmen. Initial content analysis allowed deconstruction of drawings into salient features, with grouping of these features via factor analysis. A resulting 4-factor solution explains 62% of the data variance, suggesting that 4 archetype models of the greenhouse effect dominate thinking within this population. Factor scores, indicating the extent to which each student's drawing aligned with representative models, were compared to performance on conceptual understanding and attitudes measures, demographics, and non-cognitive features of drawings. Student drawings were also compared to drawings made by scientists to ascertain the extent to which models reflect more sophisticated and accurate models. Results indicate that student and scientist drawings share some similarities, most notably the presence of some features of the most sophisticated non-scientific model held among the study population. Prior knowledge, prior attitudes, gender, and non-cognitive components are also predictive of an individual student's model. This work presents a new technique for analyzing drawings, with general implications for the use of drawings in investigating student conceptions.

  2. A novel normalization method based on principal component analysis to reduce the effect of peak overlaps in two-dimensional correlation spectroscopy

    Science.gov (United States)

    Wang, Yanwei; Gao, Wenying; Wang, Xiaogong; Yu, Zhiwu

    2008-07-01

    Two-dimensional correlation spectroscopy (2D-COS) has been widely used to separate overlapped spectroscopic bands. However, band overlap may sometimes cause misleading results in the 2D-COS spectra, especially if one peak is embedded within another peak by the overlap. In this work, we propose a new normalization method, based on principal component analysis (PCA). For each spectrum under discussion, the first principal component of PCA is simply taken as the normalization factor of the spectrum. It is demonstrated that the method works well with simulated dynamic spectra. Successful result has also been obtained from the analysis of an overlapped band in the wavenumber range 1440-1486 cm -1 for the evaporation process of a solution containing behenic acid, methanol, and chloroform.
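
    One possible reading of the proposed scheme, sketched on synthetic dynamic spectra: the score of each spectrum on the first principal component is used as its normalization factor before the synchronous 2D correlation spectrum is formed. The uncentred PCA and the simulated band are assumptions, not the authors' exact procedure.

```python
import numpy as np

# D: hypothetical dynamic spectra, rows = spectra over time, columns = wavenumbers
rng = np.random.default_rng(4)
wavenumbers = np.linspace(1440, 1486, 200)
band = np.exp(-0.5 * ((wavenumbers - 1463) / 6.0) ** 2)
D = np.outer(np.linspace(1.0, 3.0, 25), band) + 0.01 * rng.normal(size=(25, 200))

# Score of each spectrum on the first principal component (no centering, so the
# scores stay positive for all-positive spectra and can serve as scale factors)
U, s, Vt = np.linalg.svd(D, full_matrices=False)
pc1_score = U[:, 0] * s[0]
pc1_score *= np.sign(pc1_score.mean())     # fix the SVD sign ambiguity

D_norm = D / pc1_score[:, None]            # PCA-based normalization of each spectrum
dyn = D_norm - D_norm.mean(axis=0)
sync = dyn.T @ dyn / (D.shape[0] - 1)      # synchronous 2D correlation spectrum
print(sync.shape)
```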

  3. Factor analysis and scintigraphy

    International Nuclear Information System (INIS)

    Di Paola, R.; Penel, C.; Bazin, J.P.; Berche, C.

    1976-01-01

    The goal of factor analysis is usually to achieve reduction of a large set of data, extracting essential features without a previous hypothesis. Due to the development of computerized systems, the use of larger samples, the possibility of sequential data acquisition and the increase in dynamic studies, the problem of data compression is now encountered routinely. Thus, results obtained for compression of scintigraphic images are presented first. Then the possibilities offered by factor analysis for scan processing are discussed. Finally, the use of this analysis for multidimensional studies, and especially dynamic studies, is considered for compression and processing [fr

  4. Confirmatory Factor Analysis of IT-based Competency Questionnaire in Information Science & Knowledge Studies, Based on Job Market Analysis

    Directory of Open Access Journals (Sweden)

    Rahim Shahbazi

    2016-03-01

    Full Text Available The main purpose of the present research is to evaluate the validity of an IT-based competency questionnaire in Information Science & Knowledge Studies. The survey method was used, with a researcher-made questionnaire as the data collection tool. The statistical sample, 315 people, was chosen purposively from among Iranian faculty members, Ph.D. students, and information center employees. The findings showed that, by eliminating 17 items from the whole questionnaire, running confirmatory factor analysis on the rest, and rotating the findings using the Varimax method, 8 factors were revealed. The resulting components, and the items that had high factor loadings on these components, were considerably consistent with the classifications in the questionnaire and partly consistent with the findings of other researchers. 76 competency indicators (knowledge, skills, and attitudes) were validated and grouped under 8 main categories: 1. “Computer Basics”; 2. “Database Operating, Collection Development of Digital Resources, & Digital Library Management”; 3. “Basics of Computer Networking”; 4. “Basics of Programming & Database Designing”; 5. “Web Designing & Web Content Analysis”; 6. “Library Software & Computerized Organizing”; 7. “Archive of Digital Resources”; and 8. “Attitudes”.

  5. [Principal component analysis and cluster analysis of inorganic elements in sea cucumber Apostichopus japonicus].

    Science.gov (United States)

    Liu, Xiao-Fang; Xue, Chang-Hu; Wang, Yu-Ming; Li, Zhao-Jie; Xue, Yong; Xu, Jie

    2011-11-01

    The present study investigates the feasibility of multi-element analysis for determining the geographical origin of the sea cucumber Apostichopus japonicus, and selects effective tracers for assessing its geographical origin. The contents of the elements Al, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, As, Se, Mo, Cd, Hg and Pb in sea cucumber Apostichopus japonicus samples from seven places of geographical origin were determined by means of ICP-MS. The results were used for the development of an element database. Cluster analysis (CA) and principal component analysis (PCA) were applied to differentiate the geographical origin of the sea cucumbers. Three principal components, which accounted for over 89% of the total variance, were extracted from the standardized data. The results of Q-type cluster analysis showed that the 26 samples could be clustered reasonably into five groups, and the classification results were significantly associated with the marine distribution of the samples. CA and PCA were effective methods for element analysis of sea cucumber Apostichopus japonicus samples, and the mineral element content of the samples was a good chemical descriptor for differentiating their geographical origins.
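
    A sketch of the reported workflow on synthetic element concentrations: standardization, PCA retaining three components, and Q-type (sample-wise) hierarchical clustering into five groups. The data and the Ward linkage choice are assumptions.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

# X: hypothetical element concentrations, rows = 26 samples, columns = 15 elements
# (Al, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, As, Se, Mo, Cd, Hg, Pb)
rng = np.random.default_rng(5)
X = np.abs(rng.normal(loc=1.0, scale=0.3, size=(26, 15)))

Z = StandardScaler().fit_transform(X)          # standardize, as in the study
pca = PCA(n_components=3).fit(Z)               # three PCs reported to cover ~89% variance
print("explained variance ratio:", pca.explained_variance_ratio_.sum())

# Q-type (sample-wise) hierarchical clustering into five groups
link = linkage(Z, method="ward")
groups = fcluster(link, t=5, criterion="maxclust")
print(groups)
```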

  6. Dynamics and spatio-temporal variability of environmental factors in Eastern Australia using functional principal component analysis

    Science.gov (United States)

    Szabo, J.K.; Fedriani, E.M.; Segovia-Gonzalez, M. M.; Astheimer, L.B.; Hooper, M.J.

    2010-01-01

    This paper introduces a new technique in ecology to analyze spatial and temporal variability in environmental variables. By using simple statistics, we explore the relations between abiotic and biotic variables that influence animal distributions. However, spatial and temporal variability in rainfall, a key variable in ecological studies, can cause difficulties for any basic model including time evolution. The study was at a landscape scale (three million square kilometers in eastern Australia), mainly over the period 1998-2004. We simultaneously considered qualitative spatial (soil and habitat types) and quantitative temporal (rainfall) variables in a Geographical Information System environment. In addition to some techniques commonly used in ecology, we applied a new method, Functional Principal Component Analysis, which proved to be very suitable for this case, as it explained more than 97% of the total variance of the rainfall data, providing us with substitute variables that are easier to manage and are even able to explain rainfall patterns. The main variable came from a habitat classification that showed strong correlations with rainfall values and soil types. © 2010 World Scientific Publishing Company.

  7. Functional Connectivity Parcellation of the Human Thalamus by Independent Component Analysis.

    Science.gov (United States)

    Zhang, Sheng; Li, Chiang-Shan R

    2017-11-01

    As a key structure for relaying and integrating information, the thalamus supports multiple cognitive and affective functions through the connectivity between its subnuclei and cortical and subcortical regions. Although extant studies have largely described thalamic regional functions in anatomical terms, evidence is accumulating to suggest a more complex picture of subareal activities and connectivities of the thalamus. In this study, we aimed to parcellate the thalamus and examine whole-brain connectivity of its functional clusters. With resting state functional magnetic resonance imaging data from 96 adults, we used independent component analysis (ICA) to parcellate the thalamus into 10 components. On the basis of the independence assumption, ICA helps to identify how subclusters overlap spatially. Whole-brain functional connectivity of each subdivision was computed for the independent component's time course (ICtc), a unique time series that represents an IC. For comparison, we computed seed-region-based functional connectivity using the averaged time course across all voxels within a thalamic subdivision. The results showed that ICtc analysis revealed patterns of connectivity that were more clearly distinguished between thalamic clusters. ICtc analysis demonstrated thalamic connectivity to the primary motor cortex, which has eluded seed-region analysis as well as previous studies based on averaged time series, and clarified thalamic connectivity to the hippocampus, caudate nucleus, and precuneus. The new findings elucidate the functional organization of the thalamus and suggest that ICA clustering in combination with ICtc, rather than seed-region analysis, better distinguishes whole-brain connectivities among functional clusters of a brain region.
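
    A toy sketch of the comparison described above, on random data rather than fMRI: spatial ICA of the "thalamic" voxels gives spatial maps and IC time courses (ICtc), and whole-brain correlation maps are computed once from an ICtc and once from the averaged time course of the strongly weighted voxels. Dimensions and the 90th-percentile voxel selection are assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(6)
n_t, n_thal_vox, n_brain_vox, k = 240, 300, 2000, 10    # hypothetical sizes
thal = rng.normal(size=(n_t, n_thal_vox))                # thalamic voxel time series
brain = rng.normal(size=(n_t, n_brain_vox))              # whole-brain voxel time series

# Spatial ICA of the thalamus: rows of thal.T are voxels, columns are time points
ica = FastICA(n_components=k, random_state=0, max_iter=1000)
maps = ica.fit_transform(thal.T)          # (voxels x k) spatial weights of each cluster
ictc = ica.mixing_                        # (time x k) independent component time courses

def corr_with_brain(ts):
    """Pearson correlation of one time series with every brain voxel."""
    ts = (ts - ts.mean()) / ts.std()
    b = (brain - brain.mean(axis=0)) / brain.std(axis=0)
    return ts @ b / n_t

# ICtc-based connectivity of cluster 0 vs. a seed-style averaged time course
ictc_conn = corr_with_brain(ictc[:, 0])
seed_vox = np.abs(maps[:, 0]) > np.percentile(np.abs(maps[:, 0]), 90)
seed_conn = corr_with_brain(thal[:, seed_vox].mean(axis=1))
print(ictc_conn[:5], seed_conn[:5])
```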

  8. Effect of abiotic and biotic stress factors analysis using machine learning methods in zebrafish.

    Science.gov (United States)

    Gutha, Rajasekar; Yarrappagaari, Suresh; Thopireddy, Lavanya; Reddy, Kesireddy Sathyavelu; Saddala, Rajeswara Reddy

    2018-03-01

    In order to understand the mechanisms underlying stress responses, a meta-analysis of transcriptome data was performed to identify differentially expressed genes (DEGs) and their biological, molecular and cellular mechanisms in response to stressors. The present study aimed at identifying the effects of abiotic and biotic stress factors, and it was found that several stress-responsive genes are common to both abiotic and biotic stress factors in zebrafish. The meta-analysis of microarray studies revealed that almost 4.7%, i.e., 108 common DEGs, are differentially regulated between abiotic and biotic stresses. This shows that there is global coordination and fine-tuning of gene regulation in response to these two types of challenges. We also applied the dimension reduction methods principal component analysis and partial least squares discriminant analysis, which were able to segregate abiotic and biotic stresses into separate entities. The supervised machine learning model, recursive support vector machine, could classify abiotic and biotic stresses with 100% accuracy using a subset of DEGs. Besides these methods, the random forest decision tree model classified five out of 8 stress conditions with high accuracy. Finally, functional enrichment analysis revealed the gene ontology terms, transcription factors and miRNAs involved in the regulation of stress responses. Copyright © 2017 Elsevier Inc. All rights reserved.
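
    A compact stand-in for the analyses named above (PCA, PLS-DA, recursive SVM feature selection, random forest), using scikit-learn on a synthetic expression matrix; RFE with a linear SVM approximates the recursive-SVM step, and all sizes and settings are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.ensemble import RandomForestClassifier

# Hypothetical expression matrix: rows = samples, columns = genes; y = 0 abiotic, 1 biotic
rng = np.random.default_rng(7)
X = rng.normal(size=(40, 500))
y = rng.integers(0, 2, size=40)

print(PCA(n_components=2).fit(X).explained_variance_ratio_)   # unsupervised overview

# PLS-DA: PLS regression against the class labels, keeping two latent components
plsda = PLSRegression(n_components=2).fit(X, y)
scores = plsda.transform(X)                                    # sample scores for plotting

# Recursive feature elimination with a linear SVM (an R-SVM-style gene selection)
rfe = RFE(SVC(kernel="linear"), n_features_to_select=20).fit(X, y)
selected = np.where(rfe.support_)[0]

rf = RandomForestClassifier(n_estimators=500, random_state=7).fit(X[:, selected], y)
print(len(selected), rf.score(X[:, selected], y))
```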

  9. A Cure for Variance Inflation in High Dimensional Kernel Principal Component Analysis

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    Small sample high-dimensional principal component analysis (PCA) suffers from variance inflation and lack of generalizability. It has earlier been pointed out that a simple leave-one-out variance renormalization scheme can cure the problem. In this paper we generalize the cure in two directions: first, we propose a computationally less intensive approximate leave-one-out estimator; secondly, we show that variance inflation is also present in kernel principal component analysis (kPCA) and we provide a non-parametric renormalization scheme which can quite efficiently restore generalizability in kPCA. As for PCA, our analysis also suggests a simplified approximate expression. © 2011 Trine J. Abrahamsen and Lars K. Hansen.

  10. Reliability Analysis of a Composite Wind Turbine Blade Section Using the Model Correction Factor Method: Numerical Study and Validation

    DEFF Research Database (Denmark)

    Dimitrov, Nikolay Krasimirov; Friis-Hansen, Peter; Berggreen, Christian

    2013-01-01

    by the composite failure criteria. Each failure mode has been considered in a separate component reliability analysis, followed by a system analysis which gives the total probability of failure of the structure. The Model Correction Factor method used in connection with FORM (First-Order Reliability Method) proved...

  11. Roles of Vascular and Metabolic Components in Cognitive Dysfunction of Alzheimer disease: Short- and Long-term Modification by Non-genetic Risk Factors

    Directory of Open Access Journals (Sweden)

    Naoyuki eSato

    2013-11-01

    Full Text Available It is well known that a specific set of genetic and non-genetic risk factors contributes to the onset of Alzheimer disease (AD). Non-genetic risk factors include diabetes, hypertension in mid-life, and probably dyslipidemia in mid-life. This review focuses on the vascular and metabolic components of non-genetic risk factors. The mechanisms whereby non-genetic risk factors modify cognitive dysfunction are divided into four components, short- and long-term effects of vascular and metabolic factors. These consist of (1) compromised vascular reactivity, (2) vascular lesions, (3) hypo/hyperglycemia, and (4) exacerbated AD histopathological features, respectively. Vascular factors compromise cerebrovascular reactivity in response to neuronal activity and also cause irreversible vascular lesions. On the other hand, representative short-term effects of metabolic factors on cognitive dysfunction occur due to hypoglycemia or hyperglycemia. Non-genetic risk factors also modify the pathological manifestations of AD in the long-term. Therefore, vascular and metabolic factors contribute to aggravation of cognitive dysfunction in AD through short-term and long-term effects. Beta-amyloid could be involved in both vascular and metabolic components. It might be beneficial to support treatment in AD patients by appropriate therapeutic management of non-genetic risk factors, considering the contributions of these four elements to the manifestation of cognitive dysfunction in individual patients, though all components are not always present. It should be clarified how these four components interact with each other. To answer this question, a clinical prospective study that follows up clinical features with respect to these four components: (1) functional MRI or SPECT for cerebrovascular reactivity, (2) MRI for ischemic lesions and atrophy, (3) clinical episodes of hypoglycemia and hyperglycemia, and (4) amyloid-PET and tau-PET for pathological features of AD, would be required.

  12. Roles of vascular and metabolic components in cognitive dysfunction of Alzheimer disease: short- and long-term modification by non-genetic risk factors.

    Science.gov (United States)

    Sato, Naoyuki; Morishita, Ryuichi

    2013-11-05

    It is well known that a specific set of genetic and non-genetic risk factors contributes to the onset of Alzheimer disease (AD). Non-genetic risk factors include diabetes, hypertension in mid-life, and probably dyslipidemia in mid-life. This review focuses on the vascular and metabolic components of non-genetic risk factors. The mechanisms whereby non-genetic risk factors modify cognitive dysfunction are divided into four components, short- and long-term effects of vascular and metabolic factors. These consist of (1) compromised vascular reactivity, (2) vascular lesions, (3) hypo/hyperglycemia, and (4) exacerbated AD histopathological features, respectively. Vascular factors compromise cerebrovascular reactivity in response to neuronal activity and also cause irreversible vascular lesions. On the other hand, representative short-term effects of metabolic factors on cognitive dysfunction occur due to hypoglycemia or hyperglycemia. Non-genetic risk factors also modify the pathological manifestations of AD in the long-term. Therefore, vascular and metabolic factors contribute to aggravation of cognitive dysfunction in AD through short-term and long-term effects. β-amyloid could be involved in both vascular and metabolic components. It might be beneficial to support treatment in AD patients by appropriate therapeutic management of non-genetic risk factors, considering the contributions of these four elements to the manifestation of cognitive dysfunction in individual patients, though all components are not always present. It should be clarified how these four components interact with each other. To answer this question, a clinical prospective study that follows up clinical features with respect to these four components: (1) functional MRI or SPECT for cerebrovascular reactivity, (2) MRI for ischemic lesions and atrophy, (3) clinical episodes of hypoglycemia and hyperglycemia, (4) amyloid-PET and tau-PET for pathological features of AD, would be required.

  13. Sensitivity Analysis on Elbow Piping Components in Seismically Isolated NPP under Seismic Loading

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Hee Kun; Hahm, Dae Gi; Kim, Min Kyu [KAERI, Daejeon (Korea, Republic of); Jeon, Bub Gyu; Kim, Nam Sik [Pusan National University, Busan (Korea, Republic of)

    2016-05-15

    In this study, the FE model is verified using specimen test results, and simulations with parameter variations are conducted. Effective parameters will be randomly sampled and used as input values for the simulations to be applied to the fragility analysis. Pipelines are representative interface components because they can undergo larger displacements when they are supported on both isolated and non-isolated structures simultaneously. Elbows in particular are critical components of pipes under severe loading conditions such as earthquake action, because strain accumulates in them during the repeated bending of the pipe. Therefore, the seismic performance of pipe elbow components should be examined thoroughly based on fragility analysis. Fragility assessment of interface piping should take different sources of uncertainty into account. However, selecting the important sources and running repeated tests with many random input values are very time consuming and expensive, so numerical analysis is commonly used. In the present study, a finite element (FE) model of the elbow component is validated using the dynamic test results of elbow components. Using the verified model, sensitivity analysis is implemented as a preliminary step toward the seismic fragility analysis of the piping system. Several important input parameters are selected, and how their uncertainty is apportioned to the uncertainty of the elbow response is studied. Piping elbows are critical components under cyclic loading conditions as they are subjected to large displacements. In a seismically isolated NPP, the seismic capacity of the piping system should be evaluated with caution. Seismic fragility assessment preliminarily needs parameter sensitivity analysis of the output of interest with different input parameter values.

  14. Fault Diagnosis Method Based on Information Entropy and Relative Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Xiaoming Xu

    2017-01-01

    Full Text Available In traditional principal component analysis (PCA), because the influence of differing dimensions (scales) among the variables in a system is neglected, the selected principal components (PCs) often fail to be representative. While relative-transformation PCA can solve this problem, it is not easy to calculate the weight for each characteristic variable. To address this, this paper proposes a fault diagnosis method based on information entropy and relative principal component analysis. First, the algorithm calculates the information entropy of each characteristic variable in the original dataset based on the information gain algorithm. Second, it standardizes the dimension of every variable in the dataset. Then, according to the information entropy, it allocates a weight to each standardized characteristic variable. Finally, it utilizes the established relative-principal-components model for fault diagnosis. Furthermore, simulation experiments based on the Tennessee Eastman process and Wine datasets demonstrate the feasibility and effectiveness of the new method.
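
    A minimal sketch of the weighting idea, assuming a simple histogram entropy per standardized variable in place of the paper's information-gain computation; the weighted variables are then passed to PCA. The data and bin count are illustrative.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

def shannon_entropy(x, bins=10):
    """Entropy of one variable after simple histogram discretization."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

# X: hypothetical process data, rows = samples, columns = characteristic variables
rng = np.random.default_rng(8)
X = rng.normal(size=(500, 8)) * rng.uniform(0.5, 5.0, size=8)

Z = StandardScaler().fit_transform(X)                   # remove dimension (unit) effects
H = np.array([shannon_entropy(Z[:, j]) for j in range(Z.shape[1])])
w = H / H.sum()                                         # entropy-based variable weights

pca = PCA(n_components=0.9).fit(Z * w)                  # PCA on the weighted variables
print(w, pca.n_components_)
```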

  15. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    International Nuclear Information System (INIS)

    Lamboni, Matieyendou; Monod, Herve; Makowski, David

    2011-01-01

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.

  16. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    Energy Technology Data Exchange (ETDEWEB)

    Lamboni, Matieyendou [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Monod, Herve, E-mail: herve.monod@jouy.inra.f [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Makowski, David [INRA, UMR Agronomie INRA/AgroParisTech (UMR 211), BP 01, F78850 Thiverval-Grignon (France)

    2011-04-15

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.
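
    A small numerical sketch of the generalised index described above: the time-series output is expanded on principal components, a first-order index of each factor is estimated per component (here with a crude binned conditional-mean estimator rather than Sobol'-Saltelli or FAST), and the component indices are aggregated with eigenvalue weights. The toy model and the estimator are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy dynamic model: 3 uncertain parameters, output is a 50-point time series.
rng = np.random.default_rng(9)
t = np.linspace(0.0, 1.0, 50)
X = rng.uniform(size=(2000, 3))                          # sampled input factors
Y = (X[:, [0]] * np.sin(6 * t) + X[:, [1]] ** 2 * t
     + 0.1 * X[:, [2]] * rng.normal(size=(2000, 1)))     # model runs x time points

pca = PCA(n_components=3).fit(Y)
scores = pca.transform(Y)                                # expansion of the dynamics
lam = pca.explained_variance_

def first_order_index(x, y, bins=20):
    """Crude first-order sensitivity index: variance of binned conditional means."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

# Generalised sensitivity index: eigenvalue-weighted average over the components
for i in range(3):
    S = np.array([first_order_index(X[:, i], scores[:, k]) for k in range(3)])
    gsi = np.sum(lam * S) / lam.sum()
    print(f"factor {i}: GSI ~ {gsi:.2f}")
```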

  17. A factor analysis to find critical success factors in retail brand

    OpenAIRE

    Naser Azad; Seyed Foad Zarifi; Somayeh Hozouri

    2013-01-01

    The present exploratory study aims to find the critical components of retail brands among some retail stores. The study seeks to build a brand name at the retail level and to find the important factors affecting it. Customer behavior is largely influenced when the first retail customer experience is formed. These factors have direct impacts on customer experience and satisfaction in the retail industry. The proposed study performs an empirical investigation on two well-known retail stores located in cit...

  18. MULTI-COMPONENT ANALYSIS OF POSITION-VELOCITY CUBES OF THE HH 34 JET

    International Nuclear Information System (INIS)

    Rodríguez-González, A.; Esquivel, A.; Raga, A. C.; Cantó, J.; Curiel, S.; Riera, A.; Beck, T. L.

    2012-01-01

    We present an analysis of Hα spectra of the HH 34 jet with two-dimensional spectral resolution. We carry out multi-Gaussian fits to the spatially resolved line profiles and derive maps of the intensity, radial velocity, and velocity width of each of the components. We find that close to the outflow source we have three components: a high (negative) radial velocity component with a well-collimated, jet-like morphology; an intermediate velocity component with a broader morphology; and a positive radial velocity component with a non-collimated morphology and large linewidth. We suggest that this positive velocity component is associated with jet emission scattered in stationary dust present in the circumstellar environment. Farther away from the outflow source, we find only two components (a high, negative radial velocity component, which has a narrower spatial distribution than an intermediate velocity component). The fitting procedure was carried out with the new AGA-V1 code, which is available online and is described in detail in this paper.
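
    The authors' AGA-V1 code is not reproduced here; the sketch below fits a sum of Gaussians to a synthetic line profile with scipy.optimize.curve_fit to show the kind of multi-Gaussian decomposition described, returning an intensity, radial velocity, and width per component. The component count and initial guesses are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_gauss(v, *p):
    """Sum of Gaussians; p = (amplitude, centre, width) repeated per component."""
    y = np.zeros_like(v)
    for a, c, w in zip(p[0::3], p[1::3], p[2::3]):
        y += a * np.exp(-0.5 * ((v - c) / w) ** 2)
    return y

# Synthetic line profile with a high-velocity and an intermediate-velocity component
v = np.linspace(-300.0, 150.0, 400)                      # radial velocity axis (km/s)
rng = np.random.default_rng(13)
profile = multi_gauss(v, 1.0, -90.0, 15.0, 0.5, -40.0, 40.0) \
          + 0.02 * rng.normal(size=v.size)

p0 = [1.0, -100.0, 20.0, 0.5, -30.0, 50.0]               # initial guesses, two components
popt, _ = curve_fit(multi_gauss, v, profile, p0=p0)
print(popt.reshape(-1, 3))   # fitted intensity, radial velocity and width per component
```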

  19. Applying independent component analysis to clinical fMRI at 7 T

    Directory of Open Access Journals (Sweden)

    Simon Daniel Robinson

    2013-09-01

    Full Text Available Increased BOLD sensitivity at 7 T offers the possibility to increase the reliability of fMRI, but ultra-high field is also associated with an increase in artifacts related to head motion, Nyquist ghosting and parallel imaging reconstruction errors. In this study, the ability of Independent Component Analysis (ICA to separate activation from these artifacts was assessed in a 7 T study of neurological patients performing chin and hand motor tasks. ICA was able to isolate primary motor activation with negligible contamination by motion effects. The results of General Linear Model (GLM analysis of these data were, in contrast, heavily contaminated by motion. Secondary motor areas, basal ganglia and thalamus involvement were apparent in ICA results, but there was low capability to isolate activation in the same brain regions in the GLM analysis, indicating that ICA was more sensitive as well as more specific. A method was developed to simplify the assessment of the large number of independent components. Task-related activation components could be automatically identified via intuitive and effective features. These findings demonstrate that ICA is a practical and sensitive analysis approach in high field fMRI studies, particularly where motion is evoked. Promising applications of ICA in clinical fMRI include presurgical planning and the study of pathologies affecting subcortical brain areas.

  20. Enantiomer-specific analysis of multi-component mixtures by correlated electron imaging-ion mass spectrometry

    NARCIS (Netherlands)

    Rafiee Fanood, M.M.; Ram, N.B.; Lehmann, C.S.; Powis, I.; Janssen, M.H.M.

    2015-01-01

    Simultaneous, enantiomer-specific identification of chiral molecules in multi-component mixtures is extremely challenging. Many established techniques for single-component analysis fail to provide selectivity in multi-component mixtures and lack sensitivity for dilute samples. Here we show how

  1. 4. Nuclear power plant component failures

    International Nuclear Information System (INIS)

    1990-01-01

    Nuclear power plant component failures are dealt with in relation to reliability in nuclear power engineering. The topics treated include classification of failures, analysis of their causes and impacts, nuclear power plant failure data acquisition and processing, interdependent failures, and human factor reliability in nuclear power engineering. (P.A.). 8 figs., 7 tabs., 23 refs

  2. Identification of components of fibroadenoma in cytology preparations using texture analysis: a morphometric study.

    Science.gov (United States)

    Singh, S; Gupta, R

    2012-06-01

    To evaluate the utility of image analysis using textural parameters obtained from a co-occurrence matrix in differentiating the three components of fibroadenoma of the breast, in fine needle aspirate smears. Sixty cases of histologically proven fibroadenoma were included in this study. Of these, 40 cases were used as a training set and 20 cases were taken as a test set for the discriminant analysis. Digital images were acquired from cytological preparations of all the cases and three components of fibroadenoma (namely, monolayered cell clusters, stromal fragments and background with bare nuclei) were selected for image analysis. A co-occurrence matrix was generated and a texture parameter vector (sum mean, energy, entropy, contrast, cluster tendency and homogeneity) was calculated for each pixel. The percentage of pixels correctly classified to a component of fibroadenoma on discriminant analysis was noted. The textural parameters, when considered in isolation, showed considerable overlap in their values of the three cytological components of fibroadenoma. However, the stepwise discriminant analysis revealed that all six textural parameters contributed significantly to the discriminant functions. Discriminant analysis using all the six parameters showed that the numbers of pixels correctly classified in training and tests sets were 96.7% and 93.0%, respectively. Textural analysis using a co-occurrence matrix appears to be useful in differentiating the three cytological components of fibroadenoma. These results could further be utilized in developing algorithms for image segmentation and automated diagnosis, but need to be confirmed in further studies. © 2011 Blackwell Publishing Ltd.
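
    A sketch of co-occurrence-based texture features with recent scikit-image (graycomatrix/graycoprops); energy, contrast, and homogeneity come from the library, while entropy and one common definition of sum mean are computed directly from the normalized matrix. The image patch is synthetic, and cluster tendency is omitted. These feature vectors would then feed a discriminant analysis as in the study.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# img: hypothetical 8-bit grayscale patch from a cytology image
rng = np.random.default_rng(10)
img = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)

# Grey-level co-occurrence matrix (GLCM) at distance 1 for four directions
glcm = graycomatrix(img, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

features = {name: graycoprops(glcm, name).mean()
            for name in ("contrast", "homogeneity", "energy")}

# Entropy and sum mean are not provided by graycoprops, so compute them directly
p = glcm.mean(axis=(2, 3))                     # average normalized GLCM over directions
features["entropy"] = -np.sum(p[p > 0] * np.log2(p[p > 0]))
i, j = np.indices(p.shape)
features["sum_mean"] = np.sum((i + j) * p) / 2.0
print(features)
```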

  3. Characterization of virulence factor regulation by SrrAB, a two-component system in Staphylococcus aureus.

    Science.gov (United States)

    Pragman, Alexa A; Yarwood, Jeremy M; Tripp, Timothy J; Schlievert, Patrick M

    2004-04-01

    Workers in our laboratory have previously identified the staphylococcal respiratory response AB (SrrAB), a Staphylococcus aureus two-component system that acts in the global regulation of virulence factors. This system down-regulates production of agr RNAIII, protein A, and toxic shock syndrome toxin 1 (TSST-1), particularly under low-oxygen conditions. In this study we investigated the localization and membrane orientation of SrrA and SrrB, transcription of the srrAB operon, the DNA-binding properties of SrrA, and the effect of SrrAB expression on S. aureus virulence. We found that SrrA is localized to the S. aureus cytoplasm, while SrrB is localized to the membrane and is properly oriented to function as a histidine kinase. srrAB has one transcriptional start site which results in either an srrA transcript or a full-length srrAB transcript; srrB must be cotranscribed with srrA. Gel shift assays of the agr P2, agr P3, protein A (spa), TSST-1 (tst), and srr promoters revealed SrrA binding at each of these promoters. Analysis of SrrAB-overexpressing strains by using the rabbit model of bacterial endocarditis demonstrated that overexpression of SrrAB decreased the virulence of the organisms compared to the virulence of isogenic strains that do not overexpress SrrAB. We concluded that SrrAB is properly localized and oriented to function as a two-component system. Overexpression of SrrAB, which represses agr RNAIII, TSST-1, and protein A in vitro, decreases virulence in the rabbit endocarditis model. Repression of these virulence factors is likely due to a direct interaction between SrrA and the agr, tst, and spa promoters.

  4. Component analysis and initial validity of the exercise fear avoidance scale.

    Science.gov (United States)

    Wingo, Brooks C; Baskin, Monica; Ard, Jamy D; Evans, Retta; Roy, Jane; Vogtle, Laura; Grimley, Diane; Snyder, Scott

    2013-01-01

    To develop the Exercise Fear Avoidance Scale (EFAS) to measure fear of exercise-induced discomfort. We conducted principal component analysis to determine component structure and Cronbach's alpha to assess internal consistency of the EFAS. Relationships between EFAS scores, BMI, physical activity, and pain were analyzed using multivariate regression. The best fit was a 3-component structure: weight-specific fears, cardiorespiratory fears, and musculoskeletal fears. Cronbach's alpha for the EFAS was α=.86. EFAS scores significantly predicted BMI, physical activity, and PDI scores. Psychometric properties of this scale suggest it may be useful for tailoring exercise prescriptions to address fear of exercise-related discomfort.
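
    A sketch of the two reported computations on synthetic Likert responses: Cronbach's alpha from the standard item/total-variance formula, and a principal component analysis of the item matrix. The numbers of items and respondents are assumptions (the abstract does not give them).

```python
import numpy as np
from sklearn.decomposition import PCA

def cronbach_alpha(items):
    """Cronbach's alpha for an items matrix (rows = respondents, columns = items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Hypothetical EFAS-style responses: 16 Likert items, 150 respondents
rng = np.random.default_rng(11)
latent = rng.normal(size=(150, 1))
items = np.clip(np.round(3 + latent + 0.8 * rng.normal(size=(150, 16))), 1, 5)

print("alpha:", cronbach_alpha(items))
pca = PCA(n_components=3).fit(items - items.mean(axis=0))
print("variance explained by 3 components:", pca.explained_variance_ratio_.sum())
```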

  5. Fusion-component lifetime analysis

    International Nuclear Information System (INIS)

    Mattas, R.F.

    1982-09-01

    A one-dimensional computer code has been developed to examine the lifetime of first-wall and impurity-control components. The code incorporates the operating and design parameters, the material characteristics, and the appropriate failure criteria for the individual components. The major emphasis of the modeling effort has been to calculate the temperature-stress-strain-radiation effects history of a component so that the synergistic effects between sputtering erosion, swelling, creep, fatigue, and crack growth can be examined. The general forms of the property equations are the same for all materials in order to provide the greatest flexibility for materials selection in the code. The individual coefficients within the equations are different for each material. The code is capable of determining the behavior of a plate, composed of either a single or dual material structure, that is either totally constrained or constrained from bending but not from expansion. The code has been utilized to analyze the first walls for FED/INTOR and DEMO and to analyze the limiter for FED/INTOR.

  6. A multi-dimensional functional principal components analysis of EEG data.

    Science.gov (United States)

    Hasenstab, Kyle; Scheffler, Aaron; Telesca, Donatello; Sugar, Catherine A; Jeste, Shafali; DiStefano, Charlotte; Şentürk, Damla

    2017-09-01

    The electroencephalography (EEG) data created in event-related potential (ERP) experiments have a complex high-dimensional structure. Each stimulus presentation, or trial, generates an ERP waveform which is an instance of functional data. The experiments are made up of sequences of multiple trials, resulting in longitudinal functional data and moreover, responses are recorded at multiple electrodes on the scalp, adding an electrode dimension. Traditional EEG analyses involve multiple simplifications of this structure to increase the signal-to-noise ratio, effectively collapsing the functional and longitudinal components by identifying key features of the ERPs and averaging them across trials. Motivated by an implicit learning paradigm used in autism research in which the functional, longitudinal, and electrode components all have critical interpretations, we propose a multidimensional functional principal components analysis (MD-FPCA) technique which does not collapse any of the dimensions of the ERP data. The proposed decomposition is based on separation of the total variation into subject and subunit level variation which are further decomposed in a two-stage functional principal components analysis. The proposed methodology is shown to be useful for modeling longitudinal trends in the ERP functions, leading to novel insights into the learning patterns of children with Autism Spectrum Disorder (ASD) and their typically developing peers as well as comparisons between the two groups. Finite sample properties of MD-FPCA are further studied via extensive simulations. © 2017, The International Biometric Society.

  7. Aeromagnetic Compensation Algorithm Based on Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Peilin Wu

    2018-01-01

    Full Text Available Aeromagnetic exploration is an important exploration method in geophysics. The data are typically measured by an optically pumped magnetometer mounted on an aircraft, but any aircraft produces significant levels of magnetic interference. Therefore, aeromagnetic compensation is important in aeromagnetic exploration. However, multicollinearity of the aeromagnetic compensation model degrades the performance of the compensation. To address this issue, a novel aeromagnetic compensation method based on principal component analysis is proposed. Using the algorithm, the correlation in the feature matrix is eliminated and the principal components are used to construct the hyperplane that compensates the platform-generated magnetic fields. The algorithm was tested using a helicopter, and the obtained improvement ratio is 9.86. The compensation quality is almost the same as, or slightly better than, that of ridge regression. The validity of the proposed method was experimentally demonstrated.
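
    The core idea, replacing correlated compensation-model terms with decorrelated principal components before the regression, can be sketched as follows. This is a generic sketch, not the authors' implementation; the feature matrix, the retained-variance threshold and the use of scikit-learn are illustrative assumptions.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        def pca_compensation(features, measured_field, variance_kept=0.999):
            """Fit platform interference on decorrelated principal components and remove it.

            features       : (n_samples, n_terms) compensation-model terms for each sample
            measured_field : (n_samples,) scalar magnetometer readings
            """
            z = (features - features.mean(axis=0)) / features.std(axis=0)   # equalise term scales
            pca = PCA(n_components=variance_kept).fit(z)   # keep components covering the variance
            scores = pca.transform(z)                      # decorrelated regressors
            interference = LinearRegression().fit(scores, measured_field).predict(scores)
            return measured_field - interference           # compensated signal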

  8. Cloud Masking for Remotely Sensed Data Using Spectral and Principal Components Analysis

    Directory of Open Access Journals (Sweden)

    A. Ahmad

    2012-06-01

    Full Text Available Two methods of cloud masking tuned to tropical conditions have been developed, based on spectral analysis and Principal Components Analysis (PCA) of Moderate Resolution Imaging Spectroradiometer (MODIS) data. In the spectral approach, thresholds were applied to four reflective bands (1, 2, 3, and 4), three thermal bands (29, 31 and 32), the band 2/band 1 ratio, and the difference between bands 29 and 31 in order to detect clouds. The PCA approach applied a threshold to the first principal component derived from the seven quantities used for spectral analysis. Cloud detections were compared with the standard MODIS cloud mask, and their accuracy was assessed using reference images and geographical information on the study area.
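
    A minimal sketch of the PCA branch of such a mask is given below: the seven per-pixel quantities are standardised, the first principal-component image is formed, and a single threshold is applied. The threshold value, the sign convention of the component and the array layout are illustrative assumptions, not the values used in the study.

        import numpy as np

        def pca_cloud_mask(quantities, threshold):
            """Boolean cloud mask from the first principal component of per-pixel quantities.

            quantities : (n_quantities, n_rows, n_cols) stack of reflectances, the band ratio
                         and the brightness-temperature difference
            threshold  : scalar applied to the first principal-component image
            """
            n, rows, cols = quantities.shape
            x = quantities.reshape(n, -1).T                  # pixels as observations
            x = (x - x.mean(axis=0)) / x.std(axis=0)         # standardise each quantity
            eigvals, eigvecs = np.linalg.eigh(np.cov(x, rowvar=False))
            pc1 = x @ eigvecs[:, -1]                         # eigh sorts eigenvalues ascending
            return (pc1 > threshold).reshape(rows, cols)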

  9. Factor analysis and Mokken scaling of the Organizational Commitment Questionnaire in nurses.

    Science.gov (United States)

    Al-Yami, M; Galdas, P; Watson, R

    2018-03-22

    To generate an Arabic version of the Organizational Commitment Questionnaire that would be easily understood by Arabic speakers and would be sensitive to Arabic culture. The nursing workforce in Saudi Arabia is undergoing a process of Saudization, but there is a need to understand the factors that will help to retain this workforce. No organizational commitment tools exist in Arabic that are specifically designed for health organizations. An Arabic version of the organizational commitment tool could help Arabic-speaking employers understand their employees' perceptions of their organizations. Translation and back-translation were followed by factor analysis (principal components analysis and confirmatory factor analysis) to test the factorial validity, and by item response theory (Mokken scaling). A two-factor structure was obtained for the Organizational Commitment Questionnaire, comprising Factor 1: Value commitment, and Factor 2: Commitment to stay, with acceptable reliability measured by internal consistency. A Mokken scale was obtained including items from both factors, showing a hierarchy of items running from commitment to the organization to commitment to self. This study shows that the Arabic version of the OCQ retained the established two-factor structure of the original English-language version, although the two factors - 'value commitment' and 'commitment to stay' - contradict the original developers' single-factor claim. A useful insight into the structure of the Organizational Commitment Questionnaire has been obtained, with the novel addition of a hierarchical scale. The Organizational Commitment Questionnaire is now ready to be used with nurses in the Arab-speaking world and could be used as a tool to measure the contemporary commitment of nursing employees and in future interventions aimed at increasing commitment and retention of valuable nursing staff. © 2018 International Council of Nurses.

  10. Structural analysis of NPP components and structures

    International Nuclear Information System (INIS)

    Saarenheimo, A.; Keinaenen, H.; Talja, H.

    1998-01-01

    Capabilities for effective structural integrity assessment have been created and extended in several important cases. The applications presented in this paper deal with pressurised thermal shock (PTS) loading and severe dynamic loading cases of the containment, reinforced concrete structures and piping components. Hydrogen combustion within the containment is considered in some severe accident scenarios. Can a steel containment withstand the postulated hydrogen detonation loads and still maintain its integrity? This is the topic of Chapter 2. The following Chapter 3 deals with a reinforced concrete floor subjected to jet impingement caused by a postulated rupture of a nearby high-energy pipe, and Chapter 4 deals with the dynamic loading resistance of pipe lines under postulated pressure transients due to water hammer. The reliability of the structural integrity analysis methods and capabilities that have been developed for application in NPP component assessment must be evaluated and verified. The resources available within the RATU2 programme alone do not allow performing the large-scale experiments needed for that purpose. Thus, the verification of the PTS analysis capabilities has been conducted by participation in international co-operative programmes. Participation in the European Network for Evaluating Steel Components (NESC) is the topic of a parallel paper in this symposium. The results obtained in two other international programmes are summarised in Chapters 5 and 6 of this paper, where PTS tests with a model vessel and a benchmark assessment of RPV nozzle integrity are described. (author)

  11. Determination of the usage factor of components after cyclic loading by means of high-resolution microstructural examination

    International Nuclear Information System (INIS)

    Seibold, A.; Scheibe, A.; Assmann, H.D.

    1991-01-01

    Materials subjected to cyclic loading experience a change in the microstructure which may affect the service life. Quantification of the microstructural changes and allocation of the microstructural condition to the corresponding point on the fatigue curve for the component material allows the usage factor to be derived. Taking the low-alloy, fine-grained structural steel 20MnMoNi55 (quenched and tempered structure) as an example, the relationship between microstructure and number of load cycles can be represented in the form of a reference curve. High-resolution examination allows the usage factor to be determined up to η = N/N_f ≅ 0.5 under the given cyclic loading. Only a small specimen volume is required for examination using a transmission electron microscope. It can be taken from the component without affecting the required minimum wall thickness. The location of the specimen must, however, be stipulated, e.g. the location of highest loading according to calculations. When removing the specimen, care must be taken to ensure that the microstructure is not affected. If these requirements are observed, high-resolution microstructural examination provides a method for checking the usage factor of a component. (orig.)

  12. Common cause failure data collection and analysis for safety-related components of TRIGA SSR-14MW Pitesti, Romania

    International Nuclear Information System (INIS)

    Radu, G.; Mladin, D.

    2003-01-01

    This paper presents a study performed on the set of common cause failures (CCF) of safety-related components of the research reactor TRIGA SSR-14 MW Pitesti. The data collected cover a period of 20 years, from 1979 to 2000. The sources of data are Shift Supervisor Reports, Work Authorizations, and Reactor Log Books. The collected events are analyzed by failure mode and degree of failure. Qualitative analysis of root causes, coupling factors and corrective actions, as well as quantitative analysis of the CCF events, is performed. The objective of this work is to develop qualitative insights into the nature of the reported events and to build a site-specific common cause events database. (author)

  13. Pursuing an ecological component for the Effect Factor in LCIA methods

    DEFF Research Database (Denmark)

    Cosme, Nuno Miguel Dias; Bjørn, Anders; Rosenbaum, Ralph K.

    have also been altered by past impacts. Model frameworks are usually built on stability, linearity of causality and expectation of a safe return to stable states if the stressor is minimised. However, the command-and-control paradigm has resulted in the erosion of natural resources and species...... EC50-based) or 1 (assuming that continuous stress affects reproduction rate), but these are all based on biological/physiological responses and do not add a true ecological component to the impact. Such factor simply changes the HC50 by 1 or 0.3 log units. A stressor with equal intensity in two...

  14. Organic Aerosol Component (OACOMP) Value-Added Product Report

    Energy Technology Data Exchange (ETDEWEB)

    Fast, J; Zhang, Q; Tilp, A; Shippert, T; Parworth, C; Mei, F

    2013-08-23

    Significantly improved value can be obtained from aerosol chemistry data through the development of a value-added product (VAP) for deriving OA components, called Organic Aerosol Components (OACOMP). OACOMP is primarily based on multivariate analysis of the measured organic mass spectral matrix. The key outputs of OACOMP are the concentration time series and the mass spectra of OA factors that are associated with distinct sources, formation and evolution processes, and physicochemical properties.

  15. Factor analysis of the Mayo-Portland Adaptability Inventory: structure and validity.

    Science.gov (United States)

    Bohac, D L; Malec, J F; Moessner, A M

    1997-07-01

    Principal-components (PC) factor analysis of the Mayo-Portland Adaptability Inventory (MPAI) was conducted using a sample of outpatients (n = 189) with acquired brain injury (ABI) to evaluate whether outcome after ABI is multifactorial or unifactorial in nature. An eight-factor model was derived which explained 64.4% of the total variance. The eight factors were interpreted as representing Activities of Daily Living, Social Initiation, Cognition, Impaired Self-awareness/Distress, Social Skills/Support, Independence, Visuoperceptual, and Psychiatric, respectively. Validation of the Cognition factor was supported when factor scores were correlated with various neuropsychological measures. In addition, 117 patient self-rating total scores were used to evaluate the Impaired Self-awareness/Distress factor. An inverse relationship was observed, supporting this factor's ability to capture the two-dimensional phenomena of diminished self-awareness or enhanced emotional distress. A new subscale structure is suggested that may allow greater clinical utility in understanding how ABI manifests in patients, and may provide clinicians with a better structure for implementing treatment strategies to address specific areas of impairment and disability for specific patients. Additionally, more precise measurement of treatment outcomes may be afforded by this reorganization.

  16. Principal component analysis of FDG PET in amnestic MCI

    International Nuclear Information System (INIS)

    Nobili, Flavio; Girtler, Nicola; Brugnolo, Andrea; Dessi, Barbara; Rodriguez, Guido; Salmaso, Dario; Morbelli, Silvia; Piccardo, Arnoldo; Larsson, Stig A.; Pagani, Marco

    2008-01-01

    The purpose of the study is to evaluate the combined accuracy of episodic memory performance and 18F-FDG PET in identifying patients with amnestic mild cognitive impairment (aMCI) converting to Alzheimer's disease (AD), aMCI non-converters, and controls. Thirty-three patients with aMCI and 15 controls (CTR) were followed up for a mean of 21 months. Eleven patients developed AD (MCI/AD) and 22 remained with aMCI (MCI/MCI). 18F-FDG PET volumetric regions of interest underwent principal component analysis (PCA) that identified 12 principal components (PC), expressed by coarse component scores (CCS). Discriminant analysis was performed using the significant PCs and episodic memory scores. PCA highlighted relative hypometabolism in PC5, including bilateral posterior cingulate and left temporal pole, and in PC7, including the bilateral orbitofrontal cortex, both in MCI/MCI and MCI/AD vs CTR. PC5 itself plus PC12, including the left lateral frontal cortex (LFC: BAs 44, 45, 46, 47), were significantly different between MCI/AD and MCI/MCI. By a three-group discriminant analysis, CTR were more accurately identified by PET-CCS + delayed recall score (100%), MCI/MCI by PET-CCS + either immediate or delayed recall scores (91%), while MCI/AD was identified by PET-CCS alone (82%). PET increased by 25% the correct allocations achieved by memory scores, while memory scores increased by 15% the correct allocations achieved by PET. Combining memory performance and 18F-FDG PET yielded a higher accuracy than each single tool in identifying CTR and MCI/MCI. The PC containing bilateral posterior cingulate and left temporal pole was the hallmark of MCI/MCI patients, while the PC including the left LFC was the hallmark of conversion to AD. (orig.)
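
    The analysis pipeline described here, PCA scores from regional uptake values combined with memory scores in a discriminant model, can be approximated with the following sketch. Variable names, the number of retained components and the use of scikit-learn's LinearDiscriminantAnalysis with cross-validation are illustrative assumptions rather than the authors' exact procedure.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        def pca_discriminant(roi_uptake, memory_scores, group_labels, n_components=12):
            """Combine PCA scores of regional uptake with memory scores in a 3-group LDA.

            roi_uptake    : (n_subjects, n_regions) regional uptake values
            memory_scores : (n_subjects, n_memory_scores) immediate/delayed recall measures
            group_labels  : (n_subjects,) group membership labels
            """
            z = (roi_uptake - roi_uptake.mean(axis=0)) / roi_uptake.std(axis=0)
            pc_scores = PCA(n_components=n_components).fit_transform(z)
            predictors = np.hstack([pc_scores, memory_scores])
            lda = LinearDiscriminantAnalysis()
            accuracy = cross_val_score(lda, predictors, group_labels, cv=5).mean()
            return lda.fit(predictors, group_labels), accuracy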

  17. Principal component analysis of FDG PET in amnestic MCI

    Energy Technology Data Exchange (ETDEWEB)

    Nobili, Flavio; Girtler, Nicola; Brugnolo, Andrea; Dessi, Barbara; Rodriguez, Guido [University of Genoa, Clinical Neurophysiology, Department of Endocrinological and Medical Sciences, Genoa (Italy); S. Martino Hospital, Alzheimer Evaluation Unit, Genoa (Italy); S. Martino Hospital, Head-Neck Department, Genoa (Italy); Salmaso, Dario [CNR, Institute of Cognitive Sciences and Technologies, Rome (Italy); CNR, Institute of Cognitive Sciences and Technologies, Padua (Italy); Morbelli, Silvia [University of Genoa, Nuclear Medicine Unit, Department of Internal Medicine, Genoa (Italy); Piccardo, Arnoldo [Galliera Hospital, Nuclear Medicine Unit, Department of Imaging Diagnostics, Genoa (Italy); Larsson, Stig A. [Karolinska Hospital, Department of Nuclear Medicine, Stockholm (Sweden); Pagani, Marco [CNR, Institute of Cognitive Sciences and Technologies, Rome (Italy); CNR, Institute of Cognitive Sciences and Technologies, Padua (Italy); Karolinska Hospital, Department of Nuclear Medicine, Stockholm (Sweden)

    2008-12-15

    The purpose of the study is to evaluate the combined accuracy of episodic memory performance and 18F-FDG PET in identifying patients with amnestic mild cognitive impairment (aMCI) converting to Alzheimer's disease (AD), aMCI non-converters, and controls. Thirty-three patients with aMCI and 15 controls (CTR) were followed up for a mean of 21 months. Eleven patients developed AD (MCI/AD) and 22 remained with aMCI (MCI/MCI). 18F-FDG PET volumetric regions of interest underwent principal component analysis (PCA) that identified 12 principal components (PC), expressed by coarse component scores (CCS). Discriminant analysis was performed using the significant PCs and episodic memory scores. PCA highlighted relative hypometabolism in PC5, including bilateral posterior cingulate and left temporal pole, and in PC7, including the bilateral orbitofrontal cortex, both in MCI/MCI and MCI/AD vs CTR. PC5 itself plus PC12, including the left lateral frontal cortex (LFC: BAs 44, 45, 46, 47), were significantly different between MCI/AD and MCI/MCI. By a three-group discriminant analysis, CTR were more accurately identified by PET-CCS + delayed recall score (100%), MCI/MCI by PET-CCS + either immediate or delayed recall scores (91%), while MCI/AD was identified by PET-CCS alone (82%). PET increased by 25% the correct allocations achieved by memory scores, while memory scores increased by 15% the correct allocations achieved by PET. Combining memory performance and 18F-FDG PET yielded a higher accuracy than each single tool in identifying CTR and MCI/MCI. The PC containing bilateral posterior cingulate and left temporal pole was the hallmark of MCI/MCI patients, while the PC including the left LFC was the hallmark of conversion to AD. (orig.)

  18. Monitoring organic loading to swimming pools by fluorescence excitation–emission matrix with parallel factor analysis (PARAFAC)

    DEFF Research Database (Denmark)

    Seredynska-Sobecka, Bozena; Stedmon, Colin; Boe-Hansen, Rasmus

    2011-01-01

    Fluorescence Excitation–Emission Matrix spectroscopy combined with parallel factor analysis was employed to monitor water quality and organic contamination in swimming pools. The fluorescence signal of the swimming pool organic matter was low but increased slightly through the day. The analysis...... revealed that the organic matter fluorescence was characterised by five different components, one of which was unique to swimming pool organic matter and one which was specific to organic contamination. The latter component had emission peaks at 420 nm and was found to be a sensitive indicator of organic...... loading in swimming pool water. The fluorescence at 420 nm gradually increased during opening hours and represented material accumulating through the day....

  19. Analysis of factors that influencing the interest of Bali State Polytechnic’s students in entrepreneurship

    Science.gov (United States)

    Ayuni, N. W. D.; Sari, I. G. A. M. K. K.

    2018-01-01

    The high rate of unemployment hampers economic growth. To address this situation, the government tries to change students' mindset from that of a job seeker to that of a job creator, or entrepreneur. One concrete action regularly held at Bali State Polytechnic is the Student Entrepreneurial Program. The purpose of this research is to identify and analyze the factors that influence the interest of Bali State Polytechnic's students in entrepreneurship, especially in the Student Entrepreneurial Program. The method used in this research is factor analysis, including the Bartlett test, Kaiser-Meyer-Olkin (KMO) measure, Measure of Sampling Adequacy (MSA), factor extraction using Principal Component Analysis (PCA), factor selection using eigenvalues and a scree plot, and factor rotation using orthogonal varimax rotation. Results show that there are four factors influencing the interest of Bali State Polytechnic's students in entrepreneurship: a Contextual Factor (including Entrepreneurship Training, Academic Support, Perceived Confidence, and Economic Challenge), a Self Efficacy Factor (including Leadership, Mental Maturity, Relation with Entrepreneur, and Authority), a Subjective Norm Factor (including Support of Important Relative, Support of Friends, and Family Role), and an Attitude Factor (including Self Realization).
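
    The sequence of steps named above (Bartlett test, KMO/MSA, principal-component extraction, eigenvalue/scree selection, varimax rotation) corresponds closely to what the third-party Python package factor_analyzer provides; the sketch below is a generic illustration under that assumption, with a hypothetical questionnaire data frame, and is not the authors' code.

        import pandas as pd
        from factor_analyzer import FactorAnalyzer
        from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

        def interest_factor_analysis(responses: pd.DataFrame, n_factors: int = 4):
            """Bartlett test, KMO, principal-component extraction and varimax rotation."""
            chi_square, p_value = calculate_bartlett_sphericity(responses)
            kmo_per_item, kmo_total = calculate_kmo(responses)   # item-level values serve as MSA
            fa = FactorAnalyzer(n_factors=n_factors, method="principal", rotation="varimax")
            fa.fit(responses)
            eigenvalues, _ = fa.get_eigenvalues()                # for Kaiser criterion / scree plot
            loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
            return {"bartlett_p": p_value, "kmo": kmo_total,
                    "eigenvalues": eigenvalues, "loadings": loadings}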

  20. IMPROVED SEARCH OF PRINCIPAL COMPONENT ANALYSIS DATABASES FOR SPECTRO-POLARIMETRIC INVERSION

    International Nuclear Information System (INIS)

    Casini, R.; Lites, B. W.; Ramos, A. Asensio; Ariste, A. López

    2013-01-01

    We describe a simple technique for the acceleration of spectro-polarimetric inversions based on principal component analysis (PCA) of Stokes profiles. This technique involves the indexing of the database models based on the sign of the projections (PCA coefficients) of the first few relevant orders of principal components of the four Stokes parameters. In this way, each model in the database can be attributed a distinctive binary number of 2^(4n) bits, where n is the number of PCA orders used for the indexing. Each of these binary numbers (indices) identifies a group of "compatible" models for the inversion of a given set of observed Stokes profiles sharing the same index. The complete set of the binary numbers so constructed evidently determines a partition of the database. The search of the database for the PCA inversion of spectro-polarimetric data can profit greatly from this indexing. In practical cases it becomes possible to approach the ideal acceleration factor of 2^(4n) as compared to the systematic search of a non-indexed database for a traditional PCA inversion. This indexing method relies on the existence of a physical meaning in the sign of the PCA coefficients of a model. For this reason, the presence of model ambiguities and of spectro-polarimetric noise in the observations limits in practice the number n of relevant PCA orders that can be used for the indexing.
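
    The indexing scheme lends itself to a very small sketch: take the signs of the first n PCA coefficients of each of the four Stokes profiles, pack them into one integer key, and bucket the database by that key. The data layout below is an illustrative assumption; only the sign-to-index idea comes from the abstract.

        import numpy as np

        def stokes_sign_index(coeffs, n_orders):
            """Integer index from the signs of the leading PCA coefficients.

            coeffs : (4, n_total_orders) PCA projection coefficients of I, Q, U, V for one model
            """
            bits = (coeffs[:, :n_orders] > 0).astype(int).ravel()   # 4*n sign bits
            return int("".join(map(str, bits)), 2)

        def partition_database(database_coeffs, n_orders):
            """Group models sharing the same sign index; inversion then searches one group only."""
            partition = {}
            for model_id, coeffs in enumerate(database_coeffs):
                partition.setdefault(stokes_sign_index(coeffs, n_orders), []).append(model_id)
            return partition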

  1. Fluorescence lifetime selectivity in excitation-emission matrices for qualitative analysis of a two-component system

    International Nuclear Information System (INIS)

    Millican, D.W.; McGown, L.B.

    1989-01-01

    Steady-state fluorescence excitation-emission matrices (EEMs), and phase-resolved EEMs (PREEMs) collected at modulation frequencies of 6, 18, and 30 MHz, were used for qualitative analysis of mixtures of benzo[k]fluoranthene (τ = 8 ns) and benzo[b]fluoranthene (τ = 29 ns) in ethanol. The EEMs of the individual components were extracted from mixture EEMs by means of wavelength component vector-gram (WCV) analysis. Phase resolution was found to be superior to steady-state measurements for extraction of the component spectra, for mixtures in which the intensity contributions from the two components are unequal

  2. Bridge Diagnosis by Using Nonlinear Independent Component Analysis and Displacement Analysis

    Science.gov (United States)

    Zheng, Juanqing; Yeh, Yichun; Ogai, Harutoshi

    A daily diagnosis system for bridge monitoring and maintenance is developed based on wireless sensors, signal processing, structural analysis, and displacement analysis. The vibration acceleration data of a bridge are first collected through the wireless sensor network. Nonlinear independent component analysis (ICA) and spectral analysis are used to extract the vibration frequencies of the bridge. After that, the vibration displacement is calculated through a band-pass filter and Simpson's rule, and the vibration model is obtained to diagnose the bridge. Since linear ICA algorithms work efficiently only in linear mixing environments, a nonlinear ICA model, which is more complicated, is more practical for bridge diagnosis systems. In this paper, we first apply the post-nonlinear method to transform the signal data, then perform linear separation by FastICA, and calculate the vibration displacement of the bridge. The processed data can be used to understand phenomena like corrosion and cracking, and to evaluate the health condition of the bridge. We apply this system to Nakajima Bridge in Yahata, Kitakyushu, Japan.
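
    A simplified sketch of the separation and displacement steps is given below; it uses plain FastICA in place of the paper's post-nonlinear ICA, cumulative trapezoidal integration in place of Simpson's rule, and assumed filter settings, so it only illustrates the structure of the processing chain.

        import numpy as np
        from scipy.signal import butter, filtfilt
        from scipy.integrate import cumulative_trapezoid
        from sklearn.decomposition import FastICA

        def vibration_displacement(acc, fs, band=(0.5, 10.0), n_sources=3):
            """Separate vibration sources from multi-sensor acceleration and integrate to displacement.

            acc : (n_samples, n_sensors) acceleration records from the wireless sensor network
            fs  : sampling frequency in Hz
            """
            sources = FastICA(n_components=n_sources, random_state=0).fit_transform(acc)
            b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
            filtered = filtfilt(b, a, sources, axis=0)
            # Double integration: acceleration -> velocity -> displacement.
            velocity = cumulative_trapezoid(filtered, dx=1.0 / fs, axis=0, initial=0.0)
            return cumulative_trapezoid(velocity, dx=1.0 / fs, axis=0, initial=0.0)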

  3. Classification and analysis of factors that affect stability of oil and gas enterprise staff

    Directory of Open Access Journals (Sweden)

    Zelinska Haluna Olexiivna

    2016-12-01

    Full Text Available The relevance of human resources as a strategic goal of the sustainable development of oil and gas companies is determined. It is shown that staff stability, as the main social component of sustainable enterprise development, needs to be studied and evaluated in terms of an integrated system of factors influencing staff behavior. Addressing personnel management issues can be based on a classification study of the factors affecting staff stability when forming a high-quality human resources strategy. In particular, it is noted that the needs of each employee should become an integral part of the concept of work-life balance. Analysis of the results of the study showed that the oil and gas industry is subject to a number of factors that negatively affect its operation and development, caused not only by technical, technological and natural factors, but also by neglect of the behavioral characteristics of personnel. It is found that without an understanding of the behavioral characteristics and values of staff, a quality model of human resource management cannot be implemented and optimal scenarios for oil companies in general cannot be provided.

  4. Análisis del fracaso empresarial por sectores: factores diferenciadores = Cross-industry analysis of business failure: differential factors

    Directory of Open Access Journals (Sweden)

    María Jesús Mures Quintana

    2012-12-01

    Full Text Available This paper focuses on a cross-industry analysis of business failure, in order to identify the explanatory and predictive factors of this event that differ across three of the main industries in every economy: manufacturing, building and services. For each of these industries, the same procedure is followed. First, a principal components analysis is applied in order to identify the explanatory factors of business failure in the three industries. Next, these factors are considered as independent variables in a discriminant analysis, so as to predict the firms' failure, using not only financial information expressed by ratios, but also other non-financial variables related to the firms, as well as external information that reflects the macroeconomic conditions under which they develop their activity.

  5. A pragmatic approach to estimate alpha factors for common cause failure analysis

    International Nuclear Information System (INIS)

    Hassija, Varun; Senthil Kumar, C.; Velusamy, K.

    2014-01-01

    Highlights: • Estimation of coefficients in alpha factor model for common cause analysis. • A derivation of plant specific alpha factors is demonstrated. • We examine sensitivity of common cause contribution to total system failure. • We compare beta factor and alpha factor models for various redundant configurations. • The use of alpha factors is preferable, especially for large redundant systems. - Abstract: Most modern technological systems are deployed with high redundancy, but they still fail mainly on account of common cause failures (CCF). Various models such as Beta Factor, Multiple Greek Letter, Binomial Failure Rate and Alpha Factor exist for estimating the risk from common cause failures. Among these, the alpha factor model is considered most suitable for highly redundant systems, as it arrives at common cause failure probabilities from a set of ratios of failures and the total component failure probability Q_T. In the present study, the alpha factor model is applied for the assessment of CCF of safety systems deployed at two nuclear power plants. A method to overcome the difficulties in estimating the coefficients of the model (the alpha factors), the importance of deriving plant specific alpha factors, and the sensitivity of the common cause contribution to the total system failure probability with respect to the hazard imposed by various CCF events are highlighted. An approach described in NUREG/CR-5500 is extended in this study to provide more explicit guidance for a statistical approach to derive plant specific coefficients for CCF analysis, especially for highly redundant systems. The procedure is expected to aid regulators in independent safety assessment.
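
    As a point of reference, the basic arithmetic of the alpha factor model can be sketched as follows: the alpha factors are the fractions of events involving exactly k components, and the non-staggered-testing formula from the CCF literature converts them, together with the total component failure probability Q_T, into the probability of a basic event failing a specific group of k out of m redundant components. The event counts in the usage example are purely hypothetical.

        from math import comb

        def alpha_factors(event_counts):
            """alpha_k = n_k / sum(n_j), with n_k the number of events failing exactly k components."""
            total = sum(event_counts.values())
            return {k: n / total for k, n in event_counts.items()}

        def ccf_basic_event_probability(alpha, m, k, q_total):
            """Probability of a common cause basic event failing a specific group of k of m
            components (non-staggered-testing form of the alpha factor model)."""
            alpha_t = sum(j * a for j, a in alpha.items())
            return k / comb(m - 1, k - 1) * alpha[k] / alpha_t * q_total

        # Hypothetical counts: 36 single failures, 3 double failures, 1 triple failure.
        alphas = alpha_factors({1: 36, 2: 3, 3: 1})
        q_2_of_3 = ccf_basic_event_probability(alphas, m=3, k=2, q_total=1.0e-3)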

  6. The analysis of multivariate group differences using common principal components

    NARCIS (Netherlands)

    Bechger, T.M.; Blanca, M.J.; Maris, G.

    2014-01-01

    Although it is simple to determine whether multivariate group differences are statistically significant or not, such differences are often difficult to interpret. This article is about common principal components analysis as a tool for the exploratory investigation of multivariate group differences

  7. Source Signals Separation and Reconstruction Following Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    WANG Cheng

    2014-02-01

    Full Text Available For the problem of separating and reconstructing source signals from observed signals, the physical significance of the blind source separation model and independent component analysis is not very clear, and its solution is not unique. Aiming at these disadvantages, a new linear and instantaneous mixing model and a novel method for separating and reconstructing source signals from observed signals based on principal component analysis (PCA) are put forward. The assumption of this new model is that the source signals are statistically uncorrelated rather than independent, which differs from the traditional blind source separation model. A one-to-one relationship between the linear and instantaneous mixing matrix of the new model and the linear compound matrix of PCA, and a one-to-one relationship between the uncorrelated source signals and the principal components, are demonstrated using the concepts of the linear separation matrix and the uncorrelatedness of source signals. Based on this theoretical link, the problem of source signal separation and reconstruction is then transformed into PCA of the observed signals. The theoretical derivation and numerical simulation results show that, in spite of Gaussian measurement noise, the waveform and amplitude information of uncorrelated source signals can be separated and reconstructed by PCA when the linear mixing matrix is column-orthogonal and normalized; only the waveform information can be separated and reconstructed when the mixing matrix is column-orthogonal but not normalized; and uncorrelated source signals cannot be separated and reconstructed by PCA when the mixing matrix is not column-orthogonal or not linear.
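
    The claimed link between column-orthogonal mixing and principal components can be checked numerically with a short sketch; the two test sources, the rotation-matrix mixing and the noise level below are illustrative assumptions, and the recovered signals are, as usual for PCA, only defined up to ordering and sign.

        import numpy as np

        def pca_separate(observed):
            """Estimate uncorrelated sources from x = A s when A is column-orthogonal and normalised."""
            x = observed - observed.mean(axis=0)
            eigvals, eigvecs = np.linalg.eigh(np.cov(x, rowvar=False))
            order = np.argsort(eigvals)[::-1]
            return x @ eigvecs[:, order]            # principal component scores = source estimates

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 1.0, 2000)
        s = np.c_[np.sin(2 * np.pi * 5 * t),        # two uncorrelated sources with distinct variances
                  np.sign(np.sin(2 * np.pi * 9 * t))]
        theta = 0.3
        A = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])        # column-orthogonal, normalised mixing
        x = s @ A.T + 0.01 * rng.standard_normal((t.size, 2))  # Gaussian measurement noise
        recovered = pca_separate(x)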

  8. Assembly and activation of alternative complement components on endothelial cell-anchored ultra-large von Willebrand factor links complement and hemostasis-thrombosis.

    Directory of Open Access Journals (Sweden)

    Nancy A Turner

    Full Text Available Vascular endothelial cells (ECs express and release protein components of the complement pathways, as well as secreting and anchoring ultra-large von Willebrand factor (ULVWF multimers in long string-like structures that initiate platelet adhesion during hemostasis and thrombosis. The alternative complement pathway (AP is an important non-antibody-requiring host defense system. Thrombotic microangiopathies can be associated with defective regulation of the AP (atypical hemolytic-uremic syndrome or with inadequate cleavage by ADAMTS-13 of ULVWF multimeric strings secreted by/anchored to ECs (thrombotic thrombocytopenic purpura. Our goal was to determine if EC-anchored ULVWF strings caused the assembly and activation of AP components, thereby linking two essential defense mechanisms.We quantified gene expression of these complement components in cultured human umbilical vein endothelial cells (HUVECs by real-time PCR: C3 and C5; complement factor (CF B, CFD, CFP, CFH and CFI of the AP; and C4 of the classical and lectin (but not alternative complement pathways. We used fluorescent microscopy, monospecific antibodies against complement components, fluorescent secondary antibodies, and the analysis of >150 images to quantify the attachment of HUVEC-released complement proteins to ULVWF strings secreted by, and anchored to, the HUVECs (under conditions of ADAMTS-13 inhibition. We found that HUVEC-released C4 did not attach to ULVWF strings, ruling out activation of the classical and lectin pathways by the strings. In contrast, C3, FB, FD, FP and C5, FH and FI attached to ULVWF strings in quantitative patterns consistent with assembly of the AP components into active complexes. This was verified when non-functional FB blocked the formation of AP C3 convertase complexes (C3bBb on ULVWF strings.AP components are assembled and activated on EC-secreted/anchored ULVWF multimeric strings. Our findings provide one possible molecular mechanism for clinical

  9. Research on criticality analysis method of CNC machine tools components under fault rate correlation

    Science.gov (United States)

    Gui-xiang, Shen; Xian-zhuo, Zhao; Zhang, Ying-zhi; Chen-yu, Han

    2018-02-01

    In order to determine the key components of CNC machine tools under fault rate correlation, a system component criticality analysis method is proposed. Based on fault mechanism analysis, the component fault relations are determined, and an adjacency matrix is introduced to describe them. Then, the fault structure relations are arranged hierarchically using the interpretive structural model (ISM). Assuming that the propagation of faults obeys a Markov process, the fault association matrix is described and transformed, and the PageRank algorithm is used to determine the relative influence values; combined with the component fault rates under time correlation, a comprehensive fault rate can be obtained. Based on the fault mode frequency and fault influence, the criticality of the components under fault rate correlation is determined, and the key components are identified to provide a sound basis for formulating reliability assurance measures. Finally, taking machining centers as an example, the effectiveness of the method is verified.
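
    The PageRank step on the fault adjacency matrix can be written as a short power iteration; the damping factor and the handling of components with no outgoing influence are generic assumptions rather than choices taken from the paper.

        import numpy as np

        def fault_influence_pagerank(adjacency, damping=0.85, tol=1e-9, max_iter=1000):
            """Relative influence of each component from the fault-propagation adjacency matrix.

            adjacency[i, j] = 1 if a fault of component i can propagate to component j.
            """
            n = adjacency.shape[0]
            out_degree = adjacency.sum(axis=1, keepdims=True)
            # Rows with no outgoing influence are spread uniformly (dangling nodes).
            transition = np.where(out_degree > 0, adjacency / np.maximum(out_degree, 1), 1.0 / n)
            rank = np.full(n, 1.0 / n)
            for _ in range(max_iter):
                new_rank = (1.0 - damping) / n + damping * transition.T @ rank
                if np.abs(new_rank - rank).sum() < tol:
                    break
                rank = new_rank
            return rank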

  10. Learning Algorithms for Audio and Video Processing: Independent Component Analysis and Support Vector Machine Based Approaches

    National Research Council Canada - National Science Library

    Qi, Yuan

    2000-01-01

    In this thesis, we propose two new machine learning schemes, a subband-based Independent Component Analysis scheme and a hybrid Independent Component Analysis/Support Vector Machine scheme, and apply...

  11. [Habitat factor analysis for Torreya grandis cv. Merrillii based on spatial information technology].

    Science.gov (United States)

    Wang, Xiao-ming; Wang, Ke; Ao, Wei-jiu; Deng, Jin-song; Han, Ning; Zhu, Xiao-yun

    2008-11-01

    Torreya grandis cv. Merrillii, a relict plant surviving from the Tertiary period, is a rare tree species of significant economic value that is expanding rapidly in China. Analysis of its habitat factors has potential value in providing guidance for its planting, management, and sustainable development, because the growth conditions suitable for this tree species are special and strict. In this paper, the habitat factors for T. grandis cv. Merrillii in its core region, i.e., in seven villages of Zhuji City, Zhejiang Province, were analyzed with Principal Component Analysis (PCA) and a series of data, such as an IKONOS image, a Digital Elevation Model (DEM), and field survey data, supported by spatial information technology. The results showed that T. grandis cv. Merrillii exhibited high selectivity for environmental factors such as elevation, slope, and aspect: 96.22% of T. grandis cv. Merrillii trees were located at elevations from 300 to 600 m, 97.52% of them were found on areas whose slope was less than 30°, and 74.43% of them were distributed on sunny and half-sunny slopes. The results of the PCA indicated that the main environmental factors affecting the habitat of T. grandis cv. Merrillii were moisture, heat, and soil nutrients, and moisture might be one of the most important ecological factors for T. grandis cv. Merrillii due to the unique biological and ecological characteristics of the tree species.

  12. The derivative assay--an analysis of two fast components of DNA rejoining kinetics

    International Nuclear Information System (INIS)

    Sandstroem, B.E.

    1989-01-01

    The DNA rejoining kinetics of human U-118 MG cells were studied after gamma-irradiation with 4 Gy. The analysis of the sealing rate of the induced DNA strand breaks was made with a modification of the DNA unwinding technique. The modification meant that rather than just monitoring the number of existing breaks at each time of analysis, the velocity, at which the rejoining process proceeded, was determined. Two apparent first-order components of single-strand break repair could be identified during the 25 min of analysis. The half-times for the two components were 1.9 and 16 min, respectively

  13. Radar fall detection using principal component analysis

    Science.gov (United States)

    Jokanovic, Branka; Amin, Moeness; Ahmad, Fauzia; Boashash, Boualem

    2016-05-01

    Falls are a major cause of fatal and nonfatal injuries in people aged 65 years and older. Radar has the potential to become one of the leading technologies for fall detection, thereby enabling the elderly to live independently. Existing techniques for fall detection using radar are based on manual feature extraction and require significant parameter tuning in order to provide successful detections. In this paper, we employ principal component analysis for fall detection, wherein eigenimages of observed motions are employed for classification. Using real data, we demonstrate that the PCA-based technique provides performance improvement over conventional feature extraction methods.

  14. Predicting Insolvency : A comparison between discriminant analysis and logistic regression using principal components

    OpenAIRE

    Geroukis, Asterios; Brorson, Erik

    2014-01-01

    In this study, we compare the two statistical techniques logistic regression and discriminant analysis to see how well they classify companies based on clusters - made from the solvency ratio - using principal components as independent variables. The principal components are made with different financial ratios. We use cluster analysis to find groups with low, medium and high solvency ratios among 1200 different companies found on the NASDAQ stock market and use this as an a priori definition of ...

  15. Exploratory Bi-Factor Analysis: The Oblique Case

    Science.gov (United States)

    Jennrich, Robert I.; Bentler, Peter M.

    2012-01-01

    Bi-factor analysis is a form of confirmatory factor analysis originally introduced by Holzinger and Swineford ("Psychometrika" 47:41-54, 1937). The bi-factor model has a general factor, a number of group factors, and an explicit bi-factor structure. Jennrich and Bentler ("Psychometrika" 76:537-549, 2011) introduced an exploratory form of bi-factor…

  16. Probabilistic methods in nuclear power plant component ageing analysis

    International Nuclear Information System (INIS)

    Simola, K.

    1992-03-01

    The nuclear power plant ageing research is aimed to ensure that the plant safety and reliability are maintained at a desired level through the designed, and possibly extended, lifetime. In ageing studies, the reliability of components, systems and structures is evaluated taking into account the possible time-dependent decrease in reliability. The results of the analyses can be used in the evaluation of the remaining lifetime of components and in the development of preventive maintenance, testing and replacement programmes. The report discusses the use of probabilistic models in the evaluation of the ageing of nuclear power plant components. The principles of nuclear power plant ageing studies are described and examples of ageing management programmes in foreign countries are given. The use of time-dependent probabilistic models to evaluate the ageing of various components and structures is described and the application of the models is demonstrated with two case studies. In the case study of motor-operated closing valves the analyses are based on failure data obtained from a power plant. In the second example, environmentally assisted crack growth is modelled with a computer code developed in the United States, and the applicability of the model is evaluated on the basis of operating experience.

  17. Development of component failure data for seismic risk analysis

    International Nuclear Information System (INIS)

    Fray, R.R.; Moulia, T.A.

    1981-01-01

    This paper describes the quantification and utilization of seismic failure data used in the Diablo Canyon Seismic Risk Study. A single variable representation of earthquake severity that uses peak horizontal ground acceleration to characterize earthquake severity was employed. The use of a multiple variable representation would allow direct consideration of vertical accelerations and the spectral nature of earthquakes but would have added such complexity that the study would not have been feasible. Vertical accelerations and spectral nature were indirectly considered because component failure data were derived from design analyses, qualification tests and engineering judgment that did include such considerations. Two types of functions were used to describe component failure probabilities. Ramp functions were used for components, such as piping and structures, qualified by stress analysis. 'Anchor points' for ramp functions were selected by assuming a zero probability of failure at code allowable stress levels and unity probability of failure at ultimate stress levels. The accelerations corresponding to allowable and ultimate stress levels were determined by conservatively assuming a linear relationship between seismic stress and ground acceleration. Step functions were used for components, such as mechanical and electrical equipment, qualified by testing. Anchor points for step functions were selected by assuming a unity probability of failure above the qualification acceleration. (orig./HP)

  18. Adaptive tools in virtual environments: Independent component analysis for multimedia

    DEFF Research Database (Denmark)

    Kolenda, Thomas

    2002-01-01

    The thesis investigates the role of independent component analysis in the setting of virtual environments, with the purpose of finding properties that reflect human context. A general framework for performing unsupervised classification with ICA is presented in extension to the latent semantic in...... were compared to investigate computational differences and separation results. The ICA properties were finally implemented in a chat room analysis tool and briefly investigated for visualization of search engines results....

  19. Factor Analysis of the Spanish Version of the WAIS: The Escala de Inteligencia Wechsler para Adultos (EIWA).

    Science.gov (United States)

    Gomez, Francisco C., Jr.; And Others

    1992-01-01

    The standardization of the Escala de Inteligencia Wechsler para Adultos (EIWA) and the original Wechsler Adult Intelligence Scale (WAIS) were subjected to principal components analysis to examine their comparability for 616 EIWA subjects and 800 WAIS subjects. Similarity of factor structures of both scales is supported. (SLD)

  20. Reformulating Component Identification as Document Analysis Problem

    NARCIS (Netherlands)

    Gross, H.G.; Lormans, M.; Zhou, J.

    2007-01-01

    One of the first steps of component procurement is the identification of required component features in large repositories of existing components. On the highest level of abstraction, component requirements as well as component descriptions are usually written in natural language. Therefore, we can

  1. Thermogravimetric analysis of combustible waste components

    DEFF Research Database (Denmark)

    Munther, Anette; Wu, Hao; Glarborg, Peter

    In order to gain fundamental knowledge about the co-combustion of coal and waste derived fuels, the pyrolytic behaviors of coal, four typical waste components and their mixtures have been studied by a simultaneous thermal analyzer (STA). The investigated waste components were wood, paper, polypro...

  2. Reliability Analysis of Fatigue Fracture of Wind Turbine Drivetrain Components

    DEFF Research Database (Denmark)

    Berzonskis, Arvydas; Sørensen, John Dalsgaard

    2016-01-01

    in the volume of the cast ductile iron main shaft, on the reliability of the component. The probabilistic reliability analysis conducted is based on fracture mechanics models. Additionally, the utilization of the probabilistic reliability for operation and maintenance planning and quality control is discussed....

  3. Identifying effective components of child maltreatment interventions: A meta-analysis

    NARCIS (Netherlands)

    van der Put, C.E.; Assink, M.; Gubbels, J.; Boekhout van Solinge, N.F.

    There is a lack of knowledge about specific components that make interventions effective in preventing or reducing child maltreatment. The aim of the present meta-analysis was to increase this knowledge by summarizing findings on effects of interventions for child maltreatment and by examining

  4. Principal component analysis of image gradient orientations for face recognition

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    We introduce the notion of Principal Component Analysis (PCA) of image gradient orientations. As image data is typically noisy, but noise is substantially different from Gaussian, traditional PCA of pixel intensities very often fails to estimate reliably the low-dimensional subspace of a given data

  5. Reliability analysis of nuclear component cooling water system using semi-Markov process model

    International Nuclear Information System (INIS)

    Veeramany, Arun; Pandey, Mahesh D.

    2011-01-01

    Research highlights: → Semi-Markov process (SMP) model is used to evaluate the system failure probability of the nuclear component cooling water (NCCW) system. → SMP is used because it can solve a reliability block diagram with a mixture of redundant repairable and non-repairable components. → The primary objective is to demonstrate that SMP can consider a Weibull failure time distribution for components, while a Markov model cannot. → Result: the variability in component failure time is directly proportional to the NCCW system failure probability. → The result can be utilized as an initiating event probability in probabilistic safety assessment projects. - Abstract: A reliability analysis of the nuclear component cooling water (NCCW) system is carried out. A semi-Markov process model is used in the analysis because it has the potential to solve a reliability block diagram with a mixture of repairable and non-repairable components. With Markov models it is only possible to assume an exponential profile for component failure times. An advantage of the proposed model is the ability to assume a Weibull distribution for the failure times of components. In an attempt to reduce the number of states in the model, it is shown that the use of the poly-Weibull distribution arises. The objective of the paper is to determine the system failure probability under these assumptions. Monte Carlo simulation is used to validate the model result. This result can be utilized as an initiating event probability in probabilistic safety assessment projects.
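
    The role Weibull failure-time variability plays in such an analysis can be illustrated with a deliberately simplified Monte Carlo sketch for a set of redundant, non-repairable trains; it ignores the repairable components and the semi-Markov state structure of the paper, and all parameter values are hypothetical.

        import numpy as np

        def system_failure_probability(mission_time, shape, scale, n_trains=2,
                                       n_sim=200_000, seed=0):
            """Probability that all redundant trains fail within the mission time,
            with Weibull(shape, scale) distributed failure times."""
            rng = np.random.default_rng(seed)
            failure_times = scale * rng.weibull(shape, size=(n_sim, n_trains))
            return float((failure_times < mission_time).all(axis=1).mean())

        # Hypothetical parameters: 1-year mission, shape 1.5, characteristic life 20 years.
        p_fail = system_failure_probability(mission_time=1.0, shape=1.5, scale=20.0)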

  6. Principal component analysis of tomato genotypes based on some morphological and biochemical quality indicators

    Directory of Open Access Journals (Sweden)

    Glogovac Svetlana

    2012-01-01

    Full Text Available This study investigates the variability of tomato genotypes based on morphological and biochemical fruit traits. The experimental material is a part of the tomato genetic collection of the Institute of Field and Vegetable Crops in Novi Sad, Serbia. Genotypes were analyzed for fruit mass, locule number, index of fruit shape, fruit colour, dry matter content, total sugars, total acidity, lycopene and vitamin C. Minimum, maximum and average values and the main indicators of variability (CV and σ) were calculated. Principal component analysis was performed to determine the structure of the sources of variability. Four principal components, which contribute 93.75% of the total variability, were selected for analysis. The first principal component is defined by vitamin C, locule number and index of fruit shape. The second component is determined by dry matter content and total acidity, the third by lycopene, fruit mass and fruit colour. Total sugars had the greatest part in the fourth component.

  7. Efficient training of multilayer perceptrons using principal component analysis

    International Nuclear Information System (INIS)

    Bunzmann, Christoph; Urbanczik, Robert; Biehl, Michael

    2005-01-01

    A training algorithm for multilayer perceptrons is discussed and studied in detail, which relates to the technique of principal component analysis. The latter is performed with respect to a correlation matrix computed from the example inputs and their target outputs. Typical properties of the training procedure are investigated by means of a statistical physics analysis in models of learning regression and classification tasks. We demonstrate that the procedure requires far fewer examples for good generalization than traditional online training. For networks with a large number of hidden units we derive the training prescription which achieves, within our model, the optimal generalization behavior.

  8. Estimation of compound distribution in spectral images of tomatoes using independent component analysis

    NARCIS (Netherlands)

    Polder, G.; Heijden, van der G.W.A.M.

    2003-01-01

    Independent Component Analysis (ICA) is one of the most widely used methods for blind source separation. In this paper we use this technique to estimate the important compounds which play a role in the ripening of tomatoes. Spectral images of tomatoes were analyzed. Two main independent components

  9. Demixed principal component analysis of neural population data.

    Science.gov (United States)

    Kobak, Dmitry; Brendel, Wieland; Constantinidis, Christos; Feierstein, Claudia E; Kepecs, Adam; Mainen, Zachary F; Qi, Xue-Lian; Romo, Ranulfo; Uchida, Naoshige; Machens, Christian K

    2016-04-12

    Neurons in higher cortical areas, such as the prefrontal cortex, are often tuned to a variety of sensory and motor variables, and are therefore said to display mixed selectivity. This complexity of single neuron responses can obscure what information these areas represent and how it is represented. Here we demonstrate the advantages of a new dimensionality reduction technique, demixed principal component analysis (dPCA), that decomposes population activity into a few components. In addition to systematically capturing the majority of the variance of the data, dPCA also exposes the dependence of the neural representation on task parameters such as stimuli, decisions, or rewards. To illustrate our method we reanalyze population data from four datasets comprising different species, different cortical areas and different experimental tasks. In each case, dPCA provides a concise way of visualizing the data that summarizes the task-dependent features of the population response in a single figure.

  10. Fetal source extraction from magnetocardiographic recordings by dependent component analysis

    Energy Technology Data Exchange (ETDEWEB)

    Araujo, Draulio B de [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Barros, Allan Kardec [Department of Electrical Engineering, Federal University of Maranhao, Sao Luis, Maranhao (Brazil); Estombelo-Montesco, Carlos [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Zhao, Hui [Department of Medical Physics, University of Wisconsin, Madison, WI (United States); Filho, A C Roque da Silva [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Baffa, Oswaldo [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Wakai, Ronald [Department of Medical Physics, University of Wisconsin, Madison, WI (United States); Ohnishi, Noboru [Department of Information Engineering, Nagoya University (Japan)

    2005-10-07

    Fetal magnetocardiography (fMCG) has been extensively reported in the literature as a non-invasive, prenatal technique that can be used to monitor various functions of the fetal heart. However, fMCG signals often have low signal-to-noise ratio (SNR) and are contaminated by strong interference from the mother's magnetocardiogram signal. A promising, efficient tool for extracting signals, even under low SNR conditions, is blind source separation (BSS), or independent component analysis (ICA). Herein we propose an algorithm based on a variation of ICA, where the signal of interest is extracted using a time delay obtained from an autocorrelation analysis. We model the system using autoregression, and identify the signal component of interest from the poles of the autocorrelation function. We show that the method is effective in removing the maternal signal, and is computationally efficient. We also compare our results to more established ICA methods, such as FastICA.

  11. Constrained independent component analysis approach to nonobtrusive pulse rate measurements

    Science.gov (United States)

    Tsouri, Gill R.; Kyal, Survi; Dianat, Sohail; Mestha, Lalit K.

    2012-07-01

    Nonobtrusive pulse rate measurement using a webcam is considered. We demonstrate how state-of-the-art algorithms based on independent component analysis suffer from a sorting problem which hinders their performance, and propose a novel algorithm based on constrained independent component analysis to improve performance. We present how the proposed algorithm extracts a photoplethysmography signal and resolves the sorting problem. In addition, we perform a comparative study between the proposed algorithm and state-of-the-art algorithms over 45 video streams using a finger probe oximeter for reference measurements. The proposed algorithm provides improved accuracy: the root mean square error is decreased from 20.6 and 9.5 beats per minute (bpm) for existing algorithms to 3.5 bpm for the proposed algorithm. An error of 3.5 bpm is within the inaccuracy expected from the reference measurements. This implies that the proposed algorithm provided performance of equal accuracy to the finger probe oximeter.
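
    The sketch below illustrates the basic idea on synthetic RGB channel traces: plain FastICA (not the constrained ICA of the paper) separates the mixture, and the sorting ambiguity is resolved heuristically by picking the component whose spectrum peaks in a plausible heart-rate band. The frame rate, band limits and signal shapes are assumptions for illustration only.

```python
# Sketch: recover a pulse-like source from webcam-style RGB traces with ICA,
# then pick the component with the strongest spectral peak in 0.75-4 Hz.
import numpy as np
from sklearn.decomposition import FastICA

fs = 30.0                                  # assumed camera frame rate (Hz)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)

pulse = 0.5 * np.sin(2 * np.pi * 1.2 * t)  # ~72 bpm photoplethysmographic source
motion = 0.8 * np.sin(2 * np.pi * 0.3 * t) # slow motion/illumination artifact
noise = rng.normal(0, 0.2, (3, t.size))

mixing = np.array([[0.3, 1.0], [0.8, 0.6], [0.4, 0.9]])
rgb = mixing @ np.vstack([pulse, motion]) + noise   # 3 observed channel means

ica = FastICA(n_components=3, random_state=0)
sources = ica.fit_transform(rgb.T).T                # components x samples

freqs = np.fft.rfftfreq(t.size, d=1 / fs)
band = (freqs > 0.75) & (freqs < 4.0)               # plausible heart-rate band
power = np.abs(np.fft.rfft(sources, axis=1)) ** 2

best = np.argmax(power[:, band].max(axis=1))        # component with strongest peak
peak_hz = freqs[band][np.argmax(power[best, band])]
print(f"estimated pulse rate: {60 * peak_hz:.1f} bpm")
```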

  12. Applications of the TVO piping and component analysis and monitoring system (PAMS)

    Energy Technology Data Exchange (ETDEWEB)

    Smeekes, P. (Teollisuuden Voima Oy, Olkiluoto (Finland)); Kuuluvainen, O. (Rostedt Oy, Luvia (Finland)); Torkkeli, E. (FEMdata Oy, Haukilahti (Finland))

    2010-05-15

    To make fitness, safety and lifetime related assessments for piping and components, the amount of data to be managed is growing steadily. At the same time it is essential that the data are reliable, up to date, well traceable and fast and easy to obtain. At present the main focus of PAMS is still on piping, but in the future the component-related databases and applications will be developed further. This paper presents a piping and component database system, consisting of separate geometrical, material, loading, result and document databases, as well as current and future applications of the system. By means of a user-configurable interface program the user can generate input files, run application programs and define what data to write back into the result database. The data in the result database can subsequently be used in new input files to perform post-processing on previous results, for instance fatigue analysis, crack growth analysis or RI-ISI. The system is intended to facilitate the analyses of piping and components and to generate well-documented appendices comprising significant parts of the input and output and the associated source references. (orig.)

  13. Principal component analysis of dynamic fluorescence images for diagnosis of diabetic vasculopathy

    Science.gov (United States)

    Seo, Jihye; An, Yuri; Lee, Jungsul; Ku, Taeyun; Kang, Yujung; Ahn, Chulwoo; Choi, Chulhee

    2016-04-01

    Indocyanine green (ICG) fluorescence imaging has been clinically used for noninvasive visualizations of vascular structures. We have previously developed a diagnostic system based on dynamic ICG fluorescence imaging for sensitive detection of vascular disorders. However, because high-dimensional raw data were used, the analysis of the ICG dynamics proved difficult. We used principal component analysis (PCA) in this study to extract important elements without significant loss of information. We examined ICG spatiotemporal profiles and identified critical features related to vascular disorders. PCA time courses of the first three components showed a distinct pattern in diabetic patients. Among the major components, the second principal component (PC2) represented arterial-like features. The explained variance of PC2 in diabetic patients was significantly lower than in normal controls. To visualize the spatial pattern of PCs, pixels were mapped with red, green, and blue channels. The PC2 score showed an inverse pattern between normal controls and diabetic patients. We propose that PC2 can be used as a representative bioimaging marker for the screening of vascular diseases. It may also be useful in simple extractions of arterial-like features.
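
    A hedged sketch of the core analysis follows: PCA on the pixel time courses of a dynamic fluorescence sequence, with the first three principal-component scores mapped to red, green and blue channels as described above. The movie is synthetic (two simple kinetic patterns plus noise), and the shapes and names are illustrative only.

```python
# Sketch: PCA on pixel time courses of a synthetic dynamic fluorescence movie,
# followed by an RGB map of the first three principal-component scores.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_frames, h, w = 120, 64, 64
t = np.linspace(0, 1, n_frames)

# synthetic ICG dynamics: arterial-like early peak plus venous-like slow washout
arterial = np.exp(-((t - 0.2) / 0.05) ** 2)
venous = np.clip(t - 0.3, 0, None) * np.exp(-3 * t)
mask = rng.random((h, w)) < 0.3                       # pixels with arterial-like kinetics
curves = np.where(mask[..., None], arterial, venous)  # h x w x frames
movie = curves + rng.normal(0, 0.05, (h, w, n_frames))

X = movie.reshape(-1, n_frames)                       # pixels x time
pca = PCA(n_components=3)
scores = pca.fit_transform(X)                         # pixels x 3

# normalize each score channel to [0, 1] and arrange as an RGB image
rgb = (scores - scores.min(axis=0)) / np.ptp(scores, axis=0)
rgb_map = rgb.reshape(h, w, 3)                        # red/green/blue = PC1/PC2/PC3 scores
print("explained variance ratios:", pca.explained_variance_ratio_.round(3))
```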

  14. Nonlinear principal component analysis and its applications

    CERN Document Server

    Mori, Yuichi; Makino, Naomichi

    2016-01-01

    This book expounds the principle and related applications of nonlinear principal component analysis (PCA), which is useful method to analyze mixed measurement levels data. In the part dealing with the principle, after a brief introduction of ordinary PCA, a PCA for categorical data (nominal and ordinal) is introduced as nonlinear PCA, in which an optimal scaling technique is used to quantify the categorical variables. The alternating least squares (ALS) is the main algorithm in the method. Multiple correspondence analysis (MCA), a special case of nonlinear PCA, is also introduced. All formulations in these methods are integrated in the same manner as matrix operations. Because any measurement levels data can be treated consistently as numerical data and ALS is a very powerful tool for estimations, the methods can be utilized in a variety of fields such as biometrics, econometrics, psychometrics, and sociology. In the applications part of the book, four applications are introduced: variable selection for mixed...

  15. A review of the reliability analysis of LPRS including the components repairs

    International Nuclear Information System (INIS)

    Oliveira, L.F.S. de; Fleming, P.V.; Frutuoso e Melo, P.F.F.; Tayt-Sohn, L.C.

    1983-01-01

    The reliability analysis of the low pressure recirculation system in its long-term recirculation phase before 24 h is presented. The possibility of repairing the components outside the containment is included. A general revision of the analysis of the short-term recirculation phase is also given. (author) [pt

  16. Generalized modeling of multi-component vaporization/condensation phenomena for multi-phase-flow analysis

    International Nuclear Information System (INIS)

    Morita, K.; Fukuda, K.; Tobita, Y.; Kondo, Sa.; Suzuki, T.; Maschek, W.

    2003-01-01

    A new multi-component vaporization/condensation (V/C) model was developed to provide a generalized model for safety analysis codes of liquid metal cooled reactors (LMRs). These codes simulate thermal-hydraulic phenomena of multi-phase, multi-component flows, which is essential to investigate core disruptive accidents of LMRs such as fast breeder reactors and accelerator driven systems. The developed model characterizes the V/C processes associated with phase transition by employing heat transfer and mass-diffusion limited models for analyses of relatively short-time-scale multi-phase, multi-component hydraulic problems, among which vaporization and condensation, or simultaneous heat and mass transfer, play an important role. The heat transfer limited model describes the non-equilibrium phase transition processes occurring at interfaces, while the mass-diffusion limited model is employed to represent effects of non-condensable gases and multi-component mixture on V/C processes. Verification of the model and method employed in the multi-component V/C model of a multi-phase flow code was performed successfully by analyzing a series of multi-bubble condensation experiments. The applicability of the model to the accident analysis of LMRs is also discussed by comparison between steam and metallic vapor systems. (orig.)

  17. Reliability analysis of component-level redundant topologies for solid-state fault current limiter

    Science.gov (United States)

    Farhadi, Masoud; Abapour, Mehdi; Mohammadi-Ivatloo, Behnam

    2018-04-01

    Experience shows that semiconductor switches in power electronics systems are the most vulnerable components. One of the most common ways to solve this reliability challenge is component-level redundant design. There are four possible configurations for the redundant design in component level. This article presents a comparative reliability analysis between different component-level redundant designs for solid-state fault current limiter. The aim of the proposed analysis is to determine the more reliable component-level redundant configuration. The mean time to failure (MTTF) is used as the reliability parameter. Considering both fault types (open circuit and short circuit), the MTTFs of different configurations are calculated. It is demonstrated that more reliable configuration depends on the junction temperature of the semiconductor switches in the steady state. That junction temperature is a function of (i) ambient temperature, (ii) power loss of the semiconductor switch and (iii) thermal resistance of heat sink. Also, results' sensitivity to each parameter is investigated. The results show that in different conditions, various configurations have higher reliability. The experimental results are presented to clarify the theory and feasibility of the proposed approaches. At last, levelised costs of different configurations are analysed for a fair comparison.
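
    To make the comparison concrete, the sketch below computes MTTF for a single switch and for series and parallel pairs under an idealized constant-failure-rate (exponential) model. This is not the article's full analysis, which distinguishes open- and short-circuit faults and junction-temperature effects; the failure rate used is an assumed placeholder value.

```python
# Minimal sketch: MTTF of component-level redundant switch configurations under
# a constant-failure-rate (exponential) model. lambda_sw is an assumed value.
lambda_sw = 2.0e-6   # assumed failure rate of one semiconductor switch [1/h]

mttf_single = 1.0 / lambda_sw
# series pair: either failure fails the pair (guards against short-circuit faults)
mttf_series = 1.0 / (2.0 * lambda_sw)
# parallel pair: both switches must fail (guards against open-circuit faults)
mttf_parallel = 1.0 / lambda_sw + 1.0 / (2.0 * lambda_sw)

print(f"single switch : {mttf_single:12.0f} h")
print(f"series pair   : {mttf_series:12.0f} h")
print(f"parallel pair : {mttf_parallel:12.0f} h")
```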

  18. A first application of independent component analysis to extracting structure from stock returns.

    Science.gov (United States)

    Back, A D; Weigend, A S

    1997-08-01

    This paper explores the application of a signal processing technique known as independent component analysis (ICA) or blind source separation to multivariate financial time series such as a portfolio of stocks. The key idea of ICA is to linearly map the observed multivariate time series into a new space of statistically independent components (ICs). We apply ICA to three years of daily returns of the 28 largest Japanese stocks and compare the results with those obtained using principal component analysis. The results indicate that the estimated ICs fall into two categories, (i) infrequent large shocks (responsible for the major changes in the stock prices), and (ii) frequent smaller fluctuations (contributing little to the overall level of the stocks). We show that the overall stock price can be reconstructed surprisingly well by using a small number of thresholded weighted ICs. In contrast, when using shocks derived from principal components instead of independent components, the reconstructed price is less similar to the original one. ICA is shown to be a potentially powerful method of analyzing and understanding driving mechanisms in financial time series. The application to portfolio optimization is described in Chin and Weigend (1998).
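
    The sketch below mimics the reconstruction idea on synthetic returns: FastICA extracts independent components from a panel of daily returns, only the largest-magnitude ("thresholded") weighted IC values are kept, and one price path is rebuilt from them. The number of components, the threshold quantile and the return-generating process are assumptions for illustration, not the paper's setup.

```python
# Sketch: FastICA on a synthetic panel of returns, then price reconstruction
# from thresholded weighted independent components for one stock.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
n_days, n_stocks = 750, 28

# synthetic returns: a few common shock sources mixed into each stock
sources = np.vstack([
    rng.standard_t(df=3, size=n_days) * 0.02,   # infrequent large shocks
    rng.normal(0, 0.005, n_days),               # frequent small fluctuations
    rng.normal(0, 0.003, n_days),
])
mixing = rng.normal(0, 1, (n_stocks, sources.shape[0]))
returns = (mixing @ sources).T + rng.normal(0, 0.002, (n_days, n_stocks))

ica = FastICA(n_components=3, random_state=0)
ics = ica.fit_transform(returns)                # days x components

stock = 0
weighted = ics * ica.mixing_[stock]             # weighted ICs for this stock
# threshold: zero out all but the largest-magnitude values in each component
thresh = np.quantile(np.abs(weighted), 0.95, axis=0)
kept = np.where(np.abs(weighted) >= thresh, weighted, 0.0)

returns_rec = kept.sum(axis=1) + ica.mean_[stock]
price_true = np.cumsum(returns[:, stock])
price_rec = np.cumsum(returns_rec)
print("correlation of reconstructed and true price:",
      np.corrcoef(price_true, price_rec)[0, 1].round(3))
```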

  19. Assessing prescription drug abuse using functional principal component analysis (FPCA) of wastewater data.

    Science.gov (United States)

    Salvatore, Stefania; Røislien, Jo; Baz-Lomba, Jose A; Bramness, Jørgen G

    2017-03-01

    Wastewater-based epidemiology is an alternative method for estimating the collective drug use in a community. We applied functional data analysis, a statistical framework developed for analysing curve data, to investigate weekly temporal patterns in wastewater measurements of three prescription drugs with known abuse potential: methadone, oxazepam and methylphenidate, comparing them to positive and negative control drugs. Sewage samples were collected in February 2014 from a wastewater treatment plant in Oslo, Norway. The weekly pattern of each drug was extracted by fitting of generalized additive models, using trigonometric functions to model the cyclic behaviour. From the weekly component, the main temporal features were then extracted using functional principal component analysis. Results are presented through the functional principal components (FPCs) and corresponding FPC scores. Clinically, the most important weekly feature of the wastewater-based epidemiology data was the second FPC, representing the difference between average midweek level and a peak during the weekend, representing possible recreational use of a drug in the weekend. Estimated scores on this FPC indicated recreational use of methylphenidate, with a high weekend peak, but not for methadone and oxazepam. The functional principal component analysis uncovered clinically important temporal features of the weekly patterns of the use of prescription drugs detected from wastewater analysis. This may be used as a post-marketing surveillance method to monitor prescription drugs with abuse potential. Copyright © 2016 John Wiley & Sons, Ltd.

  20. Exploratory factor analysis of the 12-item Functional Assessment of Chronic Illness Therapy-Spiritual Well-Being Scale in people newly diagnosed with advanced cancer.

    Science.gov (United States)

    Bai, Mei; Dixon, Jane K

    2014-01-01

    The purpose of this study was to reexamine the factor pattern of the 12-item Functional Assessment of Chronic Illness Therapy-Spiritual Well-Being Scale (FACIT-Sp-12) using exploratory factor analysis in people newly diagnosed with advanced cancer. Principal components analysis (PCA) and 3 common factor analysis methods were used to explore the factor pattern of the FACIT-Sp-12. Factorial validity was assessed in association with quality of life (QOL). Principal factor analysis (PFA), iterative PFA, and maximum likelihood suggested retaining 3 factors: Peace, Meaning, and Faith. Both Peace and Meaning positively related to QOL, whereas only Peace uniquely contributed to QOL. This study supported the 3-factor model of the FACIT-Sp-12. Suggestions for revision of items and further validation of the identified factor pattern were provided.
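
    The sketch below shows only the extraction step of such an analysis, comparing PCA with a maximum-likelihood-style factor analysis on synthetic 12-item data with a built-in 3-factor structure. The FACIT-Sp-12 items themselves and the PFA variants are not reproduced; the `rotation="varimax"` argument assumes a reasonably recent scikit-learn version (0.24+).

```python
# Hedged sketch: compare PCA with maximum-likelihood factor analysis on
# synthetic 12-item questionnaire-like data with a 3-factor structure.
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(4)
n_respondents, n_items, n_factors = 300, 12, 3

loadings = np.zeros((n_items, n_factors))
for f in range(n_factors):                      # 4 items load on each factor
    loadings[4 * f:4 * (f + 1), f] = rng.uniform(0.6, 0.9, 4)

scores = rng.normal(size=(n_respondents, n_factors))
items = scores @ loadings.T + rng.normal(0, 0.5, (n_respondents, n_items))

pca = PCA(n_components=n_factors).fit(items)
fa = FactorAnalysis(n_components=n_factors, rotation="varimax").fit(items)

print("PCA explained variance ratios:", pca.explained_variance_ratio_.round(2))
print("FA loadings (items x factors):")
print(np.round(fa.components_.T, 2))
```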

  1. Determination of the optimal number of components in independent components analysis.

    Science.gov (United States)

    Kassouf, Amine; Jouan-Rimbaud Bouveresse, Delphine; Rutledge, Douglas N

    2018-03-01

    Independent components analysis (ICA) may be considered as one of the most established blind source separation techniques for the treatment of complex data sets in analytical chemistry. Like other similar methods, the determination of the optimal number of latent variables, in this case, independent components (ICs), is a crucial step before any modeling. Therefore, validation methods are required in order to decide about the optimal number of ICs to be used in the computation of the final model. In this paper, three new validation methods are formally presented. The first one, called Random_ICA, is a generalization of the ICA_by_blocks method. Its specificity resides in the random way of splitting the initial data matrix into two blocks, and then repeating this procedure several times, giving a broader perspective for the selection of the optimal number of ICs. The second method, called KMO_ICA_Residuals is based on the computation of the Kaiser-Meyer-Olkin (KMO) index of the transposed residual matrices obtained after progressive extraction of ICs. The third method, called ICA_corr_y, helps to select the optimal number of ICs by computing the correlations between calculated proportions and known physico-chemical information about samples, generally concentrations, or between a source signal known to be present in the mixture and the signals extracted by ICA. These three methods were tested using varied simulated and experimental data sets and compared, when necessary, to ICA_by_blocks. Results were relevant and in line with expected ones, proving the reliability of the three proposed methods. Copyright © 2017 Elsevier B.V. All rights reserved.
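
    A rough stand-in for the block-splitting idea (akin to ICA_by_blocks and the Random_ICA generalization) is sketched below: the samples are randomly split into two blocks, k ICs are extracted from each, and components are matched across blocks by absolute correlation; when k exceeds the true number of sources, the weakest matched correlation drops. This is not the authors' exact procedure, and the mixtures are synthetic.

```python
# Simplified block-splitting check for the number of independent components.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)
n_samples, n_sources, n_channels = 2000, 3, 10

S = np.vstack([np.sin(np.linspace(0, 60, n_samples)),
               np.sign(np.sin(np.linspace(0, 37, n_samples))),
               rng.laplace(size=n_samples)])
X = (rng.normal(size=(n_channels, n_sources)) @ S).T   # samples x channels
X += rng.normal(0, 0.05, X.shape)

def min_matched_corr(k, seed=0):
    """Extract k ICs from two random halves and return the weakest best match."""
    idx = rng.permutation(n_samples)
    a, b = np.array_split(idx, 2)
    ics_a = FastICA(k, random_state=seed).fit(X[a]).components_ @ X.T
    ics_b = FastICA(k, random_state=seed).fit(X[b]).components_ @ X.T
    corr = np.abs(np.corrcoef(ics_a, ics_b)[:k, k:])   # k x k cross-block correlations
    return corr.max(axis=1).min()

for k in range(2, 6):
    print(k, round(min_matched_corr(k), 3))            # drops once k > true number of sources
```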

  2. Predictive factors of adherence to frequency and duration components in home exercise programs for neck and low back pain: an observational study

    Directory of Open Access Journals (Sweden)

    Jimeno-Serrano Francisco J

    2009-12-01

    Full Text Available Abstract Background Evidence suggests that to facilitate physical activity, sedentary people may adhere to one component of exercise prescriptions (intensity, duration or frequency) without adhering to other components. Some experts have provided evidence for determinants of adherence to different components among healthy people. However, our understanding remains scarce in this area for patients with neck or low back pain. The aims of this study are to determine whether patients with neck or low back pain have different rates of adherence to the exercise components of frequency per week and duration per session when prescribed a home exercise program, and to identify whether adherence to the two exercise components has distinct predictive factors. Methods A cohort of one hundred eighty-four patients with chronic neck or low back pain who attended physiotherapy in eight primary care centers was studied prospectively one month after intervention. The study had three measurement periods: at baseline (measuring characteristics of patients and pain), at the end of the physiotherapy intervention (measuring characteristics of the home exercise program) and a month later (measuring professional behaviors during clinical encounters, environmental factors and self-efficacy, and adherence behavior). Results Adherence to duration per session (70.9% ± 7.1) was more probable than adherence to frequency per week (60.7% ± 7.0). Self-efficacy was a relevant factor for both exercise components. Conclusion We have shown in a clinic-based study that adherence to the frequency and duration components of an exercise prescription has distinct levels and predictive factors. We recommend additional study, and advise that differential attention be given in clinical practice to each exercise component for improving adherence.

  3. Machine learning of frustrated classical spin models. I. Principal component analysis

    Science.gov (United States)

    Wang, Ce; Zhai, Hui

    2017-10-01

    This work aims at determining whether artificial intelligence can recognize a phase transition without prior human knowledge. If this were successful, it could be applied to, for instance, analyzing data from the quantum simulation of unsolved physical models. Toward this goal, we first need to apply the machine learning algorithm to well-understood models and see whether the outputs are consistent with our prior knowledge, which serves as the benchmark for this approach. In this work, we feed the computer data generated by the classical Monte Carlo simulation for the XY model in frustrated triangular and union jack lattices, which has two order parameters and exhibits two phase transitions. We show that the outputs of the principal component analysis agree very well with our understanding of different orders in different phases, and the temperature dependences of the major components detect the nature and the locations of the phase transitions. Our work offers promise for using machine learning techniques to study sophisticated statistical models, and our results can be further improved by using principal component analysis with kernel tricks and the neural network method.

  4. Robustness analysis of bogie suspension components Pareto optimised values

    Science.gov (United States)

    Mousavi Bideleh, Seyed Milad

    2017-08-01

    Bogie suspension system of high speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find the Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, robustness analysis of the bogie dynamic response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, as well as yaw damping are considered as five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve the computational efficiency. The results showed that the dynamic response of the vehicle with wear/comfort Pareto optimised values of bogie suspension is robust against uncertainties in the design parameters and the probability of failure is small for parameter uncertainties with COV up to 0.1.

  5. Analysis of Moisture Content in Beetroot using Fourier Transform Infrared Spectroscopy and by Principal Component Analysis.

    Science.gov (United States)

    Nesakumar, Noel; Baskar, Chanthini; Kesavan, Srinivasan; Rayappan, John Bosco Balaguru; Alwarappan, Subbiah

    2018-05-22

    The moisture content of beetroot varies during long-term cold storage. In this work, we propose a strategy to identify the moisture content and age of beetroot using principal component analysis coupled with Fourier transform infrared spectroscopy (FTIR). Frequent FTIR measurements were recorded directly from the beetroot sample surface over a period of 34 days for analysing its moisture content, employing attenuated total reflectance in the spectral ranges of 2614-4000 and 1465-1853 cm⁻¹ with a spectral resolution of 8 cm⁻¹. In order to estimate the transmittance peak height (Tp) and the area under the transmittance curve over the spectral ranges of 2614-4000 and 1465-1853 cm⁻¹, a Gaussian curve fitting algorithm was applied to the FTIR data. Principal component and nonlinear regression analyses were utilized for FTIR data analysis. Score plots over the ranges of 2614-4000 and 1465-1853 cm⁻¹ allowed beetroot quality discrimination. Beetroot quality predictive models were developed by employing a biphasic dose response function. Validation experiment results confirmed that the accuracy of the beetroot quality predictive model reached 97.5%. This work proves that FTIR spectroscopy in combination with principal component analysis and beetroot quality predictive models could serve as an effective tool for discriminating moisture content in fresh, half-spoiled and completely spoiled stages of beetroot samples and for providing status alerts.
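
    The Gaussian curve-fitting step is the most self-contained part of this pipeline and is sketched below with scipy: a single band is fitted to obtain the transmittance peak height and the analytic area under the curve. The "spectrum" here is synthetic, not FTIR data, and the band parameters are illustrative.

```python
# Sketch of the Gaussian curve-fitting step: estimate peak height (Tp) and
# area under the transmittance curve for one spectral band.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, sigma, offset):
    return amplitude * np.exp(-((x - center) ** 2) / (2 * sigma ** 2)) + offset

rng = np.random.default_rng(6)
wavenumber = np.linspace(2614, 4000, 400)               # cm^-1
true = gaussian(wavenumber, 0.35, 3300.0, 120.0, 0.60)
transmittance = true + rng.normal(0, 0.01, wavenumber.size)

p0 = [0.3, 3300.0, 100.0, 0.6]                          # initial guess
popt, _ = curve_fit(gaussian, wavenumber, transmittance, p0=p0)
amplitude, center, sigma, offset = popt

peak_height = amplitude                                  # Tp above the baseline
area = amplitude * sigma * np.sqrt(2 * np.pi)            # analytic Gaussian area
print(f"Tp = {peak_height:.3f}, area = {area:.1f} (centered at {center:.0f} cm^-1)")
```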

  6. Radiosurgical treatment planning for intracranial AVM based on images generated by principal component analysis. A simulation study

    International Nuclear Information System (INIS)

    Kawaguchi, Osamu; Kunieda, Etsuo; Nyui, Yoshiyuki

    2009-01-01

    One of the most important factors in stereotactic radiosurgery (SRS) for intracranial arteriovenous malformation (AVM) is to determine accurate target delineation of the nidus. However, since intracranial AVMs are complicated in structure, it is often difficult to clearly determine the target delineation. The purpose of this study was to investigate the usefulness of principal component analysis (PCA) on intra-arterial contrast enhanced dynamic CT (IADCT) images as a tool for delineating accurate target volumes for stereotactic radiosurgery of AVMs. IADCT and intravenous contrast-enhanced CT (IVCT) were used to examine 4 randomly selected cases of AVM. PCA images were generated from the IADCT data. The first component images were considered feeding artery predominant, the second component images were considered draining vein predominant, and the third component images were considered background. Target delineations were first carried out from IVCT, and then again while referring to the first and second components of the PCA images. Dose calculation simulations for radiosurgical treatment plans with IVCT and PCA images were performed. Dose volume histograms of the vein areas as well as the target volumes were compared. In all cases, the calculated target volumes based on IVCT images were larger than those based on PCA images, and the irradiation doses for the vein areas were reduced. In this study, we simulated radiosurgical treatment planning for intracranial AVM based on PCA images. By using PCA images, the irradiation doses for the vein areas were substantially reduced. (author)

  7. Task-related component analysis for functional neuroimaging and application to near-infrared spectroscopy data.

    Science.gov (United States)

    Tanaka, Hirokazu; Katura, Takusige; Sato, Hiroki

    2013-01-01

    Reproducibility of experimental results lies at the heart of scientific disciplines. Here we propose a signal processing method that extracts task-related components by maximizing the reproducibility during task periods from neuroimaging data. Unlike hypothesis-driven methods such as general linear models, no specific time courses are presumed, and unlike data-driven approaches such as independent component analysis, no arbitrary interpretation of components is needed. Task-related components are constructed by a linear, weighted sum of multiple time courses, and its weights are optimized so as to maximize inter-block correlations (CorrMax) or covariances (CovMax). Our analysis method is referred to as task-related component analysis (TRCA). The covariance maximization is formulated as a Rayleigh-Ritz eigenvalue problem, and corresponding eigenvectors give candidates of task-related components. In addition, a systematic statistical test based on eigenvalues is proposed, so task-related and -unrelated components are classified objectively and automatically. The proposed test of statistical significance is found to be independent of the degree of autocorrelation in data if the task duration is sufficiently longer than the temporal scale of autocorrelation, so TRCA can be applied to data with autocorrelation without any modification. We demonstrate that simple extensions of TRCA can provide most distinctive signals for two tasks and can integrate multiple modalities of information to remove task-unrelated artifacts. TRCA was successfully applied to synthetic data as well as near-infrared spectroscopy (NIRS) data of finger tapping. There were two statistically significant task-related components; one was a hemodynamic response, and another was a piece-wise linear time course. In summary, we conclude that TRCA has a wide range of applications in multi-channel biophysical and behavioral measurements. Copyright © 2012 Elsevier Inc. All rights reserved.
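
    A hedged sketch of the covariance-maximization (CovMax) form of TRCA follows: the channel weights w maximize the summed inter-block covariance w'Sw relative to the total covariance w'Qw, i.e. a generalized eigenvalue problem S w = λ Q w. The multichannel task blocks below are synthetic, and the statistical test on the eigenvalues described in the abstract is not shown.

```python
# Sketch of TRCA (CovMax form) as a generalized eigenvalue problem.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(7)
n_channels, n_samples, n_blocks = 8, 400, 6
t = np.linspace(0, 4, n_samples)

task = np.sin(2 * np.pi * 0.5 * t)                       # reproducible task response
forward = rng.normal(size=n_channels)                    # its projection onto channels
blocks = [np.outer(forward, task) + rng.normal(0, 1.0, (n_channels, n_samples))
          for _ in range(n_blocks)]
blocks = [b - b.mean(axis=1, keepdims=True) for b in blocks]

S = np.zeros((n_channels, n_channels))
for i in range(n_blocks):
    for j in range(n_blocks):
        if i != j:
            S += blocks[i] @ blocks[j].T / n_samples     # cross-block covariances

concat = np.hstack(blocks)
Q = concat @ concat.T / concat.shape[1]                  # total covariance

eigvals, eigvecs = eigh(S, Q)                            # generalized eigenproblem
w = eigvecs[:, -1]                                       # weights for largest eigenvalue
component = w @ blocks[0]
print("correlation of TRCA component with task time course:",
      round(abs(np.corrcoef(component, task)[0, 1]), 3))
```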

  8. Assessing the effect of oil price on world food prices: Application of principal component analysis

    International Nuclear Information System (INIS)

    Esmaeili, Abdoulkarim; Shokoohi, Zainab

    2011-01-01

    The objective of this paper is to investigate the co-movement of food prices and the macroeconomic index, especially the oil price, by principal component analysis to further understand the influence of the macroeconomic index on food prices. We examined the food prices of seven major products: eggs, meat, milk, oilseeds, rice, sugar and wheat. The macroeconomic variables studied were crude oil prices, consumer price indexes, food production indexes and GDP around the world between 1961 and 2005. We use the Scree test and the proportion of variance method for determining the optimal number of common factors. The correlation coefficient between the extracted principal component and the macroeconomic index varies between 0.87 for the world GDP and 0.36 for the consumer price index. We find the food production index has the greatest influence on the macroeconomic index and that the oil price index has an influence on the food production index. Consequently, crude oil prices have an indirect effect on food prices. - Research Highlights: →We investigate the co-movement of food prices and the macroeconomic index. →The crude oil price has an indirect effect on world GDP via its impact on the food production index. →The food production index is the source of causation for CPI, and GDP is affected by CPI. →The results confirm an indirect link between the oil price and the food price principal component.

  9. Integrative sparse principal component analysis of gene expression data.

    Science.gov (United States)

    Liu, Mengque; Fan, Xinyan; Fang, Kuangnan; Zhang, Qingzhao; Ma, Shuangge

    2017-12-01

    In the analysis of gene expression data, dimension reduction techniques have been extensively adopted. The most popular one is perhaps the PCA (principal component analysis). To generate more reliable and more interpretable results, the SPCA (sparse PCA) technique has been developed. With the "small sample size, high dimensionality" characteristic of gene expression data, the analysis results generated from a single dataset are often unsatisfactory. Under contexts other than dimension reduction, integrative analysis techniques, which jointly analyze the raw data of multiple independent datasets, have been developed and shown to outperform "classic" meta-analysis and other multidatasets techniques and single-dataset analysis. In this study, we conduct integrative analysis by developing the iSPCA (integrative SPCA) method. iSPCA achieves the selection and estimation of sparse loadings using a group penalty. To take advantage of the similarity across datasets and generate more accurate results, we further impose contrasted penalties. Different penalties are proposed to accommodate different data conditions. Extensive simulations show that iSPCA outperforms the alternatives under a wide spectrum of settings. The analysis of breast cancer and pancreatic cancer data further shows iSPCA's satisfactory performance. © 2017 WILEY PERIODICALS, INC.

  10. Evaluation of chemical transport model predictions of primary organic aerosol for air masses classified by particle component-based factor analysis

    Directory of Open Access Journals (Sweden)

    C. A. Stroud

    2012-09-01

    Full Text Available Observations from the 2007 Border Air Quality and Meteorology Study (BAQS-Met 2007) in Southern Ontario, Canada, were used to evaluate predictions of primary organic aerosol (POA) and two other carbonaceous species, black carbon (BC) and carbon monoxide (CO), made for this summertime period by Environment Canada's AURAMS regional chemical transport model. Particle component-based factor analysis was applied to aerosol mass spectrometer measurements made at one urban site (Windsor, ON) and two rural sites (Harrow and Bear Creek, ON) to derive hydrocarbon-like organic aerosol (HOA) factors. A novel diagnostic model evaluation was performed by investigating model POA bias as a function of HOA mass concentration and indicator ratios (e.g. BC/HOA). Eight case studies were selected based on factor analysis and back trajectories to help classify model bias for certain POA source types. By considering model POA bias in relation to co-located BC and CO biases, a plausible story is developed that explains the model biases for all three species.

    At the rural sites, daytime mean PM1 POA mass concentrations were under-predicted compared to observed HOA concentrations. POA under-predictions were accentuated when the transport arriving at the rural sites was from the Detroit/Windsor urban complex and during short-term periods of biomass burning influence. Interestingly, the daytime CO concentrations were only slightly under-predicted at both rural sites, whereas CO was over-predicted at the urban Windsor site with a normalized mean bias of 134%, while good agreement was observed at Windsor for the comparison of daytime PM1 POA and HOA mean values, 1.1 μg m⁻³ and 1.2 μg m⁻³, respectively. Biases in model POA predictions also trended from positive to negative with increasing HOA values. Periods of POA over-prediction were most evident at the urban site on calm nights, due to an overly stable model surface layer.

  11. Competition analysis on the operating system market using principal component analysis

    Directory of Open Access Journals (Sweden)

    Brătucu, G.

    2011-01-01

    Full Text Available The operating system market has evolved greatly. The largest software producer in the world, Microsoft, dominates the operating systems segment. With three operating systems (Windows XP, Windows Vista and Windows 7), the company held a market share of 87.54% in January 2011. Over time, open source operating systems have begun to penetrate the market, strongly affecting other manufacturers. Companies such as Apple Inc. and Google Inc. have also penetrated the operating system market. This paper aims to compare the best-selling operating systems on the market in terms of their defining characteristics. For this purpose, the principal components analysis method was used.

  12. Analysis of failed nuclear plant components

    International Nuclear Information System (INIS)

    Diercks, D.R.

    1993-01-01

    Argonne National Laboratory has conducted analyses of failed components from nuclear power-generating stations since 1974. The considerations involved in working with and analyzing radioactive components are reviewed here, and the decontamination of these components is discussed. Analyses of four failed components from nuclear plants are then described to illustrate the kinds of failures seen in service. The failures discussed are (1) intergranular stress-corrosion cracking of core spray injection piping in a boiling water reactor, (2) failure of canopy seal welds in adapter tube assemblies in the control rod drive head of a pressurized water reactor, (3) thermal fatigue of a recirculation pump shaft in a boiling water reactor, and (4) failure of pump seal wear rings by nickel leaching in a boiling water reactor

  13. Analysis of failed nuclear plant components

    International Nuclear Information System (INIS)

    Diercks, D.R.

    1992-07-01

    Argonne National Laboratory has conducted analyses of failed components from nuclear power generating stations since 1974. The considerations involved in working with and analyzing radioactive components are reviewed here, and the decontamination of these components is discussed. Analyses of four failed components from nuclear plants are then described to illustrate the kinds of failures seen in service. The failures discussed are (a) intergranular stress corrosion cracking of core spray injection piping in a boiling water reactor, (b) failure of canopy seal welds in adapter tube assemblies in the control rod drive head of a pressure water reactor, (c) thermal fatigue of a recirculation pump shaft in a boiling water reactor, and (d) failure of pump seal wear rings by nickel leaching in a boiling water reactor

  14. Characterization of CDOM from urban waters in Northern-Northeastern China using excitation-emission matrix fluorescence and parallel factor analysis.

    Science.gov (United States)

    Zhao, Ying; Song, Kaishan; Li, Sijia; Ma, Jianhang; Wen, Zhidan

    2016-08-01

    Chromophoric dissolved organic matter (CDOM) plays an important role in aquatic systems, but high concentrations of organic materials are considered pollutants. The fluorescent component characteristics of CDOM in urban waters sampled from Northern and Northeastern China were examined by excitation-emission matrix fluorescence and parallel factor analysis (EEM-PARAFAC) to investigate the source and compositional changes of CDOM across space and pollution levels. One humic-like (C1), one tryptophan-like component (C2), and one tyrosine-like component (C3) were identified by PARAFAC. Mean fluorescence intensities of the three CDOM components varied spatially and by pollution level in cities of Northern and Northeastern China during July-August, 2013 and 2014. Principal components analysis (PCA) was conducted to identify the relative distribution of all water samples. Cluster analysis (CA) was also used to categorize the samples into groups of similar pollution levels within a study area. Strong positive linear relationships were found for the CDOM absorption coefficient a(254) (R² = 0.89), indicating that CDOM components can be applied to monitor water quality in real time compared with traditional approaches. These results demonstrate that EEM-PARAFAC is useful to evaluate the dynamics of CDOM fluorescent components in urban waters from Northern and Northeastern China, and this method has potential applications for monitoring urban water quality in different regions with various hydrological conditions and pollution levels.
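
    A hedged sketch of the PARAFAC step on a synthetic sample x excitation x emission array is given below as a stand-in for EEM-PARAFAC on real fluorescence data. It assumes the tensorly package is available; in recent tensorly versions `parafac` returns a CP tensor that unpacks into weights and factor matrices, but the exact return type may vary between versions.

```python
# Sketch: PARAFAC decomposition of a synthetic three-way EEM-like tensor.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(8)
n_samples, n_ex, n_em, n_comp = 30, 25, 40, 3

# build a rank-3 EEM-like tensor: scores x excitation loadings x emission loadings
scores = rng.uniform(0, 1, (n_samples, n_comp))
ex = np.abs(np.sin(np.linspace(0, np.pi, n_ex)[:, None] * (1 + np.arange(n_comp))))
em = np.abs(np.cos(np.linspace(0, np.pi, n_em)[:, None] * (1 + np.arange(n_comp))))
tensor = tl.tensor(np.einsum('ik,jk,lk->ijl', scores, ex, em)
                   + rng.normal(0, 0.01, (n_samples, n_ex, n_em)))

weights, factors = parafac(tensor, rank=n_comp, n_iter_max=200, tol=1e-8)
sample_scores, ex_loadings, em_loadings = factors
print("recovered factor matrix shapes:",
      sample_scores.shape, ex_loadings.shape, em_loadings.shape)
```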

  15. Component fragilities - data collection, analysis and interpretation

    International Nuclear Information System (INIS)

    Bandyopadhyay, K.K.; Hofmayer, C.H.

    1986-01-01

    As part of the component fragility research program sponsored by the US Nuclear Regulatory Commission, BNL is involved in establishing seismic fragility levels for various nuclear power plant equipment with emphasis on electrical equipment, by identifying, collecting and analyzing existing test data from various sources. BNL has reviewed approximately seventy test reports to collect fragility or high level test data for switchgears, motor control centers and similar electrical cabinets, valve actuators and numerous electrical and control devices of various manufacturers and models. Through a cooperative agreement, BNL has also obtained test data from EPRI/ANCO. An analysis of the collected data reveals that fragility levels can best be described by a group of curves corresponding to various failure modes. The lower bound curve indicates the initiation of malfunctioning or structural damage, whereas the upper bound curve corresponds to overall failure of the equipment based on known failure modes occurring separately or interactively. For some components, the upper and lower bound fragility levels are observed to vary appreciably depending upon the manufacturers and models. An extensive amount of additional fragility or high level test data exists. If completely collected and properly analyzed, the entire data bank is expected to greatly reduce the need for additional testing to establish fragility levels for most equipment

  16. Component fragilities. Data collection, analysis and interpretation

    International Nuclear Information System (INIS)

    Bandyopadhyay, K.K.; Hofmayer, C.H.

    1985-01-01

    As part of the component fragility research program sponsored by the US NRC, BNL is involved in establishing seismic fragility levels for various nuclear power plant equipment with emphasis on electrical equipment. To date, BNL has reviewed approximately seventy test reports to collect fragility or high level test data for switchgears, motor control centers and similar electrical cabinets, valve actuators and numerous electrical and control devices, e.g., switches, transmitters, potentiometers, indicators, relays, etc., of various manufacturers and models. BNL has also obtained test data from EPRI/ANCO. Analysis of the collected data reveals that fragility levels can best be described by a group of curves corresponding to various failure modes. The lower bound curve indicates the initiation of malfunctioning or structural damage, whereas the upper bound curve corresponds to overall failure of the equipment based on known failure modes occurring separately or interactively. For some components, the upper and lower bound fragility levels are observed to vary appreciably depending upon the manufacturers and models. For some devices, testing even at the shake table vibration limit does not exhibit any failure. Failure of a relay is observed to be a frequent cause of failure of an electrical panel or a system. An extensive amount of additional fragility or high level test data exists

  17. Components of Program for Analysis of Spectra and Their Testing

    Directory of Open Access Journals (Sweden)

    Ivan Taufer

    2013-11-01

    Full Text Available The spectral analysis of aqueous solutions of multi-component mixtures is used for identification and discrimination of the individual components in the mixture and for the subsequent determination of protonation constants and absorptivities of differently protonated particles in the solution at steady state (Meloun and Havel 1985; Leggett 1985). Apart from that, the distribution diagrams, i.e. the concentration proportions of the individual components at different pH values, are also determined. The spectra are measured with various concentrations of the basic components (one or several polyvalent weak acids or bases) and various pH values within the chosen range of wavelengths. The obtained absorbance response area has to be analyzed by non-linear regression using specialized algorithms. These algorithms have to meet certain requirements concerning the feasibility of the calculations and the level of outputs. A typical example is the SQUAD(84) program, which was gradually modified and extended, see, e.g., (Meloun et al. 1986; Meloun et al. 2012).

  18. A Unified Approach to Functional Principal Component Analysis and Functional Multiple-Set Canonical Correlation.

    Science.gov (United States)

    Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S

    2017-06-01

    Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.

  19. Advances in independent component analysis and learning machines

    CERN Document Server

    Bingham, Ella; Laaksonen, Jorma; Lampinen, Jouko

    2015-01-01

    In honour of Professor Erkki Oja, one of the pioneers of Independent Component Analysis (ICA), this book reviews key advances in the theory and application of ICA, as well as its influence on signal processing, pattern recognition, machine learning, and data mining. Examples of topics which have developed from the advances of ICA, which are covered in the book are: A unifying probabilistic model for PCA and ICA Optimization methods for matrix decompositions Insights into the FastICA algorithmUnsupervised deep learning Machine vision and image retrieval A review of developments in the t

  20. Risk factors for post-operative periprosthetic fractures following primary total hip arthroplasty with a proximally coated double-tapered cementless femoral component

    DEFF Research Database (Denmark)

    Gromov, K; Bersang, A; Nielsen, C S

    2017-01-01

    AIMS: The aim of this study was to identify patient- and surgery-related risk factors for sustaining an early periprosthetic fracture following primary total hip arthroplasty (THA) performed using a double-tapered cementless femoral component (Bi-Metric femoral stem; Biomet Inc., Warsaw, Indiana)... ratio were recorded post-operatively. Periprosthetic fractures were identified and classified according to the Vancouver classification. Regression analysis was performed to identify risk factors for early periprosthetic fracture. RESULTS: The mean follow-up was 713 days (1 to 2058). A total of 48 periprosthetic fractures (3.0%) were identified during the follow-up and the median time until fracture was 16 days (interquartile range 10 to 31.5). Patients with femoral Dorr type C had a 5.2 times increased risk of post-operative periprosthetic fracture compared with type B, while female patients had a near...

  1. A Factor Analysis of Trade Integration: The Case of Asian and Oceanic Economies

    OpenAIRE

    Yin-Wong Cheung; Matthew S. Yiu; Kenneth K. Chow

    2009-01-01

    We study trade integration among 15 selected Asian and Oceanic economies using factor models. The principal component approach is employed to extract the common factor that drives trade integration from bilateral trade integration series. It is found that the estimated common trade integration factor has strong seasonal and deterministic components. In accordance with theory, the common trade integration factor is significantly associated with the economic growth and the trade barriers of the...

  2. An Analysis of Testing Requirements for Fluoride Salt Cooled High Temperature Reactor Components

    Energy Technology Data Exchange (ETDEWEB)

    Holcomb, David Eugene [ORNL; Cetiner, Sacit M [ORNL; Flanagan, George F [ORNL; Peretz, Fred J [ORNL; Yoder Jr, Graydon L [ORNL

    2009-11-01

    This report provides guidance on the component testing necessary during the next phase of fluoride salt-cooled high temperature reactor (FHR) development. In particular, the report identifies and describes the reactor component performance and reliability requirements, provides an overview of what information is necessary to provide assurance that components will adequately achieve the requirements, and then provides guidance on how the required performance information can efficiently be obtained. The report includes a system description of a representative test scale FHR reactor. The reactor parameters presented in this report should only be considered as placeholder values until an FHR test scale reactor design is completed. The report's scope is bounded by the interface between the reactor primary coolant salt and the fuel, and by the gas supply and return to the Brayton cycle power conversion system. The analysis is limited to component level testing and does not address system level testing issues. Further, the report is oriented as a bottom-up testing requirements analysis as opposed to having a top-down facility description focus.

  3. A totally automated data acquisition/reduction system for routine treatment of mass spectroscopic data by factor analysis

    International Nuclear Information System (INIS)

    Tway, P.C.; Love, L.J.C.; Woodruff, H.B.

    1980-01-01

    Target transformation factor analysis is applied to typical data from gas chromatography-mass spectrometry and solid-probe mass spectrometry to determine rapidly the number of components in unresolved or partially resolved peaks. This technique allows the detection of hidden impurities which often make interpretation or quantification impossible. The error theory of Malinowski is used to assess the reliability of the results. The totally automated system uses a commercially available g.c.-m.s. data system interfaced to the large computer, and the number of components under a peak can be determined routinely and rapidly. (Auth.)
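
    The rank-estimation step of such an analysis can be sketched with Malinowski's real error RE(n) and indicator function IND(n), computed from the eigenvalues of D'D for a synthetic two-component chromatographic peak; IND(n) is typically minimal at the true number of components. The target-transformation step itself is not shown, and the data below are synthetic rather than GC-MS measurements.

```python
# Sketch: estimate the number of components under an unresolved peak using
# Malinowski's real error and indicator function.
import numpy as np

rng = np.random.default_rng(9)
n_scans, n_masses, true_rank = 40, 30, 2

# two overlapping elution profiles times two mass spectra, plus noise
elution = np.vstack([np.exp(-((np.arange(n_scans) - 15) / 4.0) ** 2),
                     np.exp(-((np.arange(n_scans) - 22) / 5.0) ** 2)]).T
spectra = rng.uniform(0, 1, (true_rank, n_masses))
D = elution @ spectra + rng.normal(0, 0.01, (n_scans, n_masses))

r, c = D.shape
g = np.sort(np.linalg.eigvalsh(D.T @ D))[::-1]        # eigenvalues, descending

for n in range(1, 6):
    re = np.sqrt(g[n:].sum() / (r * (c - n)))         # real error RE(n)
    ind = re / (c - n) ** 2                           # Malinowski indicator IND(n)
    print(f"n = {n}: RE = {re:.4f}, IND = {ind:.3e}")
```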

  4. Integrating Data Transformation in Principal Components Analysis

    KAUST Repository

    Maadooliat, Mehdi

    2015-01-02

    Principal component analysis (PCA) is a popular dimension reduction method to reduce the complexity and obtain the informative aspects of high-dimensional datasets. When the data distribution is skewed, data transformation is commonly used prior to applying PCA. Such transformation is usually obtained from previous studies, prior knowledge, or trial-and-error. In this work, we develop a model-based method that integrates data transformation in PCA and finds an appropriate data transformation using the maximum profile likelihood. Extensions of the method to handle functional data and missing values are also developed. Several numerical algorithms are provided for efficient computation. The proposed method is illustrated using simulated and real-world data examples.
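
    The paper integrates the transformation into PCA itself via the maximum profile likelihood; the sketch below shows only the conventional two-step practice that the paper improves upon (Box-Cox transform each skewed variable, standardize, then run ordinary PCA), on synthetic positively skewed data.

```python
# Two-step baseline: Box-Cox transform skewed variables, then PCA.
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(10)
n, p = 500, 5
latent = rng.normal(size=(n, 2)) @ rng.normal(size=(2, p))
skewed = np.exp(latent + rng.normal(0, 0.3, (n, p)))      # strictly positive, skewed

# Box-Cox each column (requires positive data), then standardize
transformed = np.column_stack([stats.boxcox(skewed[:, j])[0] for j in range(p)])
Z = StandardScaler().fit_transform(transformed)

pca_raw = PCA().fit(StandardScaler().fit_transform(skewed))
pca_bc = PCA().fit(Z)
print("explained variance (raw)    :", pca_raw.explained_variance_ratio_.round(2))
print("explained variance (Box-Cox):", pca_bc.explained_variance_ratio_.round(2))
```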

  5. Analysis of failed nuclear plant components

    Science.gov (United States)

    Diercks, D. R.

    1993-12-01

    Argonne National Laboratory has conducted analyses of failed components from nuclear power-generating stations since 1974. The considerations involved in working with and analyzing radioactive components are reviewed here, and the decontamination of these components is discussed. Analyses of four failed components from nuclear plants are then described to illustrate the kinds of failures seen in service. The failures discussed are (1) intergranular stress-corrosion cracking of core spray injection piping in a boiling water reactor, (2) failure of canopy seal welds in adapter tube assemblies in the control rod drive head of a pressurized water reactor, (3) thermal fatigue of a recirculation pump shaft in a boiling water reactor, and (4) failure of pump seal wear rings by nickel leaching in a boiling water reactor.

  6. On the structure of dynamic principal component analysis used in statistical process monitoring

    DEFF Research Database (Denmark)

    Vanhatalo, Erik; Kulahci, Murat; Bergquist, Bjarne

    2017-01-01

    When principal component analysis (PCA) is used for statistical process monitoring it relies on the assumption that data are time independent. However, industrial data will often exhibit serial correlation. Dynamic PCA (DPCA) has been suggested as a remedy for high-dimensional and time-dependent data. In this article we propose a data-driven method to determine the maximum number of lags in DPCA with a foundation in multivariate time series analysis. The method is based on the behavior of the eigenvalues of the lagged autocorrelation and partial autocorrelation matrices. Given a specific lag structure we also propose a method for determining the number of principal components to retain. The number of retained principal components is determined by visual inspection of the serial correlation in the squared prediction error statistic, Q (SPE), together with the cumulative explained variance of the model. The methods are illustrated using...
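
    The DPCA construction itself is sketched below: the data matrix is augmented with time-lagged copies of itself up to a chosen maximum lag and ordinary PCA is run on the augmented matrix. The lag-selection procedure described in the abstract (eigenvalues of lagged autocorrelation and partial autocorrelation matrices) is not reproduced; `max_lag` is simply assumed, and the process data are synthetic.

```python
# Sketch of dynamic PCA (DPCA): PCA on a lag-augmented data matrix.
import numpy as np
from sklearn.decomposition import PCA

def lagged_matrix(X, max_lag):
    """Stack X(t), X(t-1), ..., X(t-max_lag) column-wise."""
    n = X.shape[0] - max_lag
    return np.hstack([X[max_lag - k : max_lag - k + n] for k in range(max_lag + 1)])

rng = np.random.default_rng(11)
n_obs, n_vars, max_lag = 1000, 4, 2

# serially correlated process data (AR(1) latent factors mixed into 4 variables)
latent = np.zeros((n_obs, 2))
for t in range(1, n_obs):
    latent[t] = 0.9 * latent[t - 1] + rng.normal(0, 1, 2)
X = latent @ rng.normal(size=(2, n_vars)) + rng.normal(0, 0.2, (n_obs, n_vars))

X_aug = lagged_matrix(X, max_lag)        # (n_obs - max_lag) x (n_vars * (max_lag + 1))
pca = PCA().fit(X_aug)
print("augmented matrix shape:", X_aug.shape)
print("cumulative explained variance:",
      np.cumsum(pca.explained_variance_ratio_).round(2)[:6])
```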

  7. Analysis of Thermo-Mechanical Distortions in Sliding Components : An ALE Approach

    NARCIS (Netherlands)

    Owczarek, P.; Geijselaers, H.J.M.

    2008-01-01

    A numerical technique for analysis of heat transfer and thermal distortion in reciprocating sliding components is proposed. In this paper we utilize the Arbitrary Lagrangian Eulerian (ALE) description where the mesh displacement can be controlled independently from the material displacement. A

  8. Factors stimulating content marketing

    Directory of Open Access Journals (Sweden)

    Naser Azad

    2016-02-01

    Full Text Available This paper presents an empirical investigation to determine factors influencing content marketing in the banking industry. The study designs a questionnaire consisting of 40 questions in Likert scale and distributes it among 550 randomly selected regular customers of Bank Mellat in the city of Tehran, Iran, and 400 properly filled questionnaires are collected. Cronbach alphas for all components of the survey are well above the desirable level. Using principal component analysis with Varimax rotation, the study has determined the six factors with the greatest influence on content marketing: organization, details, having new ideas, quality, sensitivity and power; the last component contains only two subcomponents and is removed from the study.

  9. Process parameter optimization based on principal components analysis during machining of hardened steel

    Directory of Open Access Journals (Sweden)

    Suryakant B. Chandgude

    2015-09-01

    Full Text Available The optimum selection of process parameters plays an important role in improving surface finish, minimizing tool wear, increasing material removal rate and reducing machining time of any machining process. In this paper, optimum parameters for machining AISI D2 hardened steel using a solid carbide TiAlN-coated end mill have been investigated. For the optimization of process parameters with respect to multiple quality characteristics, the principal components analysis method has been adopted in this work. The confirmation experiments have revealed that the principal components analysis method is a useful tool for improving the cutting performance.

  10. Risk-informed importance analysis of in-service testing components for Ulchin units 3 and 4

    International Nuclear Information System (INIS)

    Kang, D. I.; Kim, K. Y.; Ha, J. J.

    2001-01-01

    In this paper, we perform risk-informed importance analysis of in-service testing (IST) components for Ulchin Units 3 and 4. The importance analysis using PSA is performed through Level 1 internal and external, shutdown/low power operation, and Level 2 internal PSA. A sensitivity analysis is also performed. For the components not modeled in the PSA logic, we develop and apply a new integrated importance analysis method. The importance analysis results for IST valves show that 167 (26.55%) of 629 IST valves are HSSCs and 462 (73.45%) are LSSCs. The importance analysis results for IST pumps show that 28 (70%) of 40 IST pumps are HSSCs and 12 (30%) are LSSCs
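
    For readers unfamiliar with how PSA-based importance ranking is typically done, the sketch below evaluates the two standard measures, Fussell-Vesely (FV) and Risk Achievement Worth (RAW), on a toy system whose "core damage frequency" is the failure probability of two redundant pumps combined with a support system. This is purely illustrative and unrelated to the actual Ulchin 3/4 PSA model; the probabilities are assumed values.

```python
# Toy illustration of Fussell-Vesely (FV) and Risk Achievement Worth (RAW).

def system_risk(p_pump_a, p_pump_b, p_support):
    # system fails if the support system fails, or both redundant pumps fail
    return p_support + (1 - p_support) * p_pump_a * p_pump_b

base = dict(p_pump_a=1e-2, p_pump_b=1e-2, p_support=1e-4)
r_base = system_risk(**base)

for comp in base:
    r_perfect = system_risk(**{**base, comp: 0.0})   # component never fails
    r_failed = system_risk(**{**base, comp: 1.0})    # component always failed
    fv = (r_base - r_perfect) / r_base               # fractional risk contribution
    raw = r_failed / r_base                          # risk increase if always failed
    print(f"{comp:10s}  FV = {fv:6.3f}   RAW = {raw:8.1f}")
```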

  11. The Infinitesimal Jackknife with Exploratory Factor Analysis

    Science.gov (United States)

    Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.

    2012-01-01

    The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…

  12. Fluoride in the Serra Geral Aquifer System: Source Evaluation Using Stable Isotopes and Principal Component Analysis

    OpenAIRE

    Nanni, Arthur Schmidt; Roisenberg, Ari; de Hollanda, Maria Helena Bezerra Maia; Marimon, Maria Paula Casagrande; Viero, Antonio Pedro; Scheibe, Luiz Fernando

    2013-01-01

    Groundwater with anomalous fluoride content and water mixture patterns were studied in the fractured Serra Geral Aquifer System, a basaltic to rhyolitic geological unit, using a principal component analysis interpretation of groundwater chemical data from 309 deep wells distributed in the Rio Grande do Sul State, Southern Brazil. A four-component model that explains 81% of the total variance in the Principal Component Analysis is suggested. Six hydrochemical groups were identified. δ18O and δ...

  13. Factor Analysis on Criteria Affecting Lean Retrofit for Energy Efficient Initiatives in Higher Learning Institution Buildings

    Directory of Open Access Journals (Sweden)

    Abidin Nur IzieAdiana

    2017-01-01

    Full Text Available The expansion of Higher Learning Institutions (HLIs) is a global concern for energy demand, since a campus acts like a small city. The intensive mode of operation of a building is correlated with its energy utilization. Improving current energy efficiency is therefore a crucial effort to minimize environmental effects by reducing the energy used in operation, i.e. by retrofitting and upgrading existing building systems or components to be more efficient. Basically, there are three recommended steps for this improvement, known as lean initiatives, green technology and clean energy, on the way to zero-energy buildings. This paper aims to highlight the criteria affecting the retrofitting of existing HLI buildings with lean initiatives in order to achieve energy efficiency and a reduction of energy consumption. Attention is devoted to reviewing the lean energy retrofitting initiative criteria for daylighting (side lighting), daylighting (skylight) and glazing. A questionnaire survey was employed and distributed to architects with expertise in green building design. Factor analysis was adopted as the method of analysis, using Principal Component extraction with Varimax rotation. The result is presented by summarizing the sub-criteria according to their importance, with a factor loading of 0.50 and above. It was found that the majority of the criteria achieved a significant factor loading value, in accordance with the protocol of the analysis. In conclusion, the results of this paper assist stakeholders in assessing the significant criteria for the desired lean energy retrofitting initiatives and also provide a substantial contribution to future planning for improving existing buildings into energy efficient buildings.

  14. Computer-aided stress analysis system for nuclear plant primary components

    International Nuclear Information System (INIS)

    Murai, Tsutomu; Tokumaru, Yoshio; Yamazaki, Junko.

    1980-06-01

    Generally, a vast quantity of calculation is needed to prepare the stress analysis reports for nuclear plant primary components. In Japan, especially, stress analysis reports must be prepared for each plant. At Mitsubishi Heavy Industries, Ltd., we have been making great efforts to rationalize the analysis process for about ten years. As a result of the rationalization to date, a computer-aided stress analysis system using graphic displays, graphic tablets, data files, etc. has been accomplished, requiring only minimal hand work. In addition, we developed a fracture safety analysis system, and we are going to develop an input generator system for 3-dimensional FEM analysis using graphics terminals in the near future. We expect that when the above-mentioned input generator system is accomplished, it will be possible for us to solve any problem instantly. (author)

  15. Left ventricular wall motion abnormalities evaluated by factor analysis as compared with Fourier analysis

    International Nuclear Information System (INIS)

    Hirota, Kazuyoshi; Ikuno, Yoshiyasu; Nishikimi, Toshio

    1986-01-01

    Factor analysis was applied to multigated cardiac pool scintigraphy to evaluate its ability to detect left ventricular wall motion abnormalities in 35 patients with old myocardial infarction (MI) and in 12 control cases with normal left ventriculography. All cases were also evaluated by conventional Fourier analysis. In most cases with normal left ventriculography, the ventricular and atrial factors were extracted by factor analysis. In cases with MI, a third factor was obtained in the left ventricle corresponding to the wall motion abnormality. Each case was scored according to the coincidence of the findings of ventriculography with those of factor analysis or Fourier analysis. Scores were recorded for three items: the existence, location, and degree of asynergy. In cases of MI, the detection rate of asynergy was 94% by factor analysis and 83% by Fourier analysis, and the agreement with respect to location was 71% and 66%, respectively. Factor analysis had higher scores than Fourier analysis, but the difference was not significant. The interobserver error of factor analysis was less than that of Fourier analysis. Factor analysis can display the locations and dynamic motion curves of asynergy, and it is regarded as a useful method for detecting and evaluating left ventricular wall motion abnormalities. (author)

  16. Principle Component Analysis of two-particle correlations in PbPb and pPb collisions at CMS

    CERN Document Server

    AUTHOR|(CDS)2076725

    2015-01-01

    A Principle Component Analysis (PCA) of two-particle azimuthal correlations as a function of transverse momentum ($p_T$) is presented in PbPb collisions at 2.76 TeV and high-multiplicity pPb collisions at 5.02 TeV. The data were recorded using the CMS detector at the LHC. It was shown that factorization breaking of two-particle azimuthal correlations can be attributed to the effect of initial-state fluctuations. Using a PCA approach, Fourier coefficients of observed two-particle azimuthal correlations as a function of both particles $p_T$ are characterized into leading and sub-leading mode terms. The leading modes are essentially equivalent to anisotropy harmonics ($v_n$) previously extracted from two-particle correlation methods as a function of $p_T$. The sub-leading modes represent the largest sources of factorization breaking. In the context of hydrodynamic models, they are a direct consequence of initial-state fluctuations. The results are presented over a wide range of centrality and event multiplicity....

  17. Harmonic Stability Analysis of Offshore Wind Farm with Component Connection Method

    DEFF Research Database (Denmark)

    Hou, Peng; Ebrahimzadeh, Esmaeil; Wang, Xiongfei

    2017-01-01

    In this paper, an eigenvalue-based harmonic stability analysis method for offshore wind farm is proposed. Considering the internal cable connection layout, a component connection method (CCM) is adopted to divide the system into individual blocks as current controller of converters, LCL filters...

  18. Using Separable Nonnegative Matrix Factorization Techniques for the Analysis of Time-Resolved Raman Spectra

    Science.gov (United States)

    Luce, R.; Hildebrandt, P.; Kuhlmann, U.; Liesen, J.

    2016-09-01

    The key challenge of time-resolved Raman spectroscopy is the identification of the constituent species and the analysis of the kinetics of the underlying reaction network. In this work we present an integral approach that allows for determining both the component spectra and the rate constants simultaneously from a series of vibrational spectra. It is based on an algorithm for non-negative matrix factorization which is applied to the experimental data set following a few pre-processing steps. As a prerequisite for physically unambiguous solutions, each component spectrum must include one vibrational band that does not significantly interfere with vibrational bands of other species. The approach is applied to synthetic "experimental" spectra derived from model systems comprising a set of species with component spectra differing with respect to their degree of spectral interferences and signal-to-noise ratios. In each case, the species involved are connected via monomolecular reaction pathways. The potential and limitations of the approach for recovering the respective rate constants and component spectra are discussed.
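    The following Python sketch illustrates the general idea of factorizing a series of spectra into component spectra and concentration-like profiles; it uses scikit-learn's plain NMF on synthetic two-species data rather than the separable NMF algorithm of the paper, and the kinetic model, band positions, and noise level are assumptions chosen only for demonstration.

```python
# Illustrative only: plain NMF on synthetic time-resolved spectra (A -> B kinetics),
# not the separable NMF method described in the abstract above.
import numpy as np
from sklearn.decomposition import NMF

t = np.linspace(0.0, 10.0, 200)[:, None]          # time axis
wn = np.linspace(0.0, 1.0, 400)[None, :]          # normalized wavenumber axis
spec_a = np.exp(-((wn - 0.3) ** 2) / 0.002)       # component spectrum of species A
spec_b = np.exp(-((wn - 0.7) ** 2) / 0.002)       # component spectrum of species B
conc_a = np.exp(-0.5 * t)                         # A decays with rate 0.5
conc_b = 1.0 - conc_a                             # B grows accordingly
data = conc_a * spec_a + conc_b * spec_b + 0.01 * np.random.default_rng(1).random((200, 400))

model = NMF(n_components=2, init="nndsvd", max_iter=1000)
profiles = model.fit_transform(data)              # time x components (concentration-like)
spectra = model.components_                       # components x wavenumber channels

# rough rate-constant estimate from the decaying profile (scaling cancels in the log slope)
decay = profiles[:, np.argmax(profiles[0, :])]
mask = decay > 0.05 * decay.max()
k_est = -np.polyfit(t.ravel()[mask], np.log(decay[mask]), 1)[0]
print("estimated decay rate:", round(k_est, 2))   # true value used above: 0.5
```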

  19. Nonlinear seismic analysis of a reactor structure with impact between core components

    International Nuclear Information System (INIS)

    Hill, R.G.

    1975-01-01

    The seismic analysis of the FFTF-PIOTA (Fast Flux Test Facility-Postirradiation Open Test Assembly), subjected to a horizontal DBE (Design Base Earthquake) is presented. The PIOTA is the first in a set of open test assemblies to be designed for the FFTF. Employing the direct method of transient analysis, the governing differential equations describing the motion of the system are set up directly and are implicitly integrated numerically in time. A simple lumped-mass beam model of the FFTF which includes small clearances between core components is used as a ''driver'' for a fine mesh model of the PIOTA. The nonlinear forces due to the impact of the core components and their effect on the PIOTA are computed. 6 references

  20. F4E studies for the electromagnetic analysis of ITER components

    Energy Technology Data Exchange (ETDEWEB)

    Testoni, P., E-mail: pietro.testoni@f4e.europa.eu [Fusion for Energy, Torres Diagonal Litoral B3, c/ Josep Plá n.2, Barcelona (Spain); Cau, F.; Portone, A. [Fusion for Energy, Torres Diagonal Litoral B3, c/ Josep Plá n.2, Barcelona (Spain); Albanese, R. [Associazione EURATOM/ENEA/CREATE, DIETI, Università Federico II di Napoli, Napoli (Italy); Juirao, J. [Numerical Analysis TEChnologies S.L. (NATEC), c/ Marqués de San Esteban, 52 Entlo D Gijón (Spain)

    2014-10-15

    Highlights: • Several ITER components have been analyzed from the electromagnetic point of view. • Categorization of DINA load cases is described. • VDEs, MDs and MFD have been studied. • Integral values of forces and moments components versus time have been computed for all the ITER components under study. - Abstract: Fusion for Energy (F4E) is involved in a significant number of activities in the area of electromagnetic analysis in support of ITER general design and EU in-kind procurement. In particular several ITER components (vacuum vessel, blanket shield modules and first wall panels, test blanket modules, ICRH antenna) are being analyzed from the electromagnetic point of view. In this paper we give an updated description of our main activities, highlighting the main assumptions, objectives, results and conclusions. The plasma instabilities we consider, typically disruptions and VDEs, can be both toroidally symmetric and asymmetric. This implies that, depending on the specific component and loading conditions, the FE models we use span from a 10° sector up to 360° of the ITER machine. The techniques for simulating the electromagnetic phenomena involved in a disruption and the postprocessing of the results to obtain the loads acting on the structures are described. Finally we summarize the typical loads applied to different components and give a critical view of the results.

  1. Lithuanian Population Aging Factors Analysis

    Directory of Open Access Journals (Sweden)

    Agnė Garlauskaitė

    2015-05-01

    Full Text Available The aim of this article is to identify the factors that determine the aging of Lithuania’s population and to assess their influence. The article presents an analysis of Lithuanian population aging factors, which consists of two main parts: the first describes population aging and its characteristics in theoretical terms; the second is dedicated to assessing the trends and demographic factors that influence population aging and to analysing the determinants of the aging of the population of Lithuania. The article concludes that the decline in the birth rate and the increase in the number of emigrants relative to immigrants have the greatest impact on population aging, so in order to address population aging, considerable attention should be paid to the management of these demographic processes.

  2. Personality, tobacco consumption, physical inactivity, obesity markers, and metabolic components as risk factors for cardiovascular disease in the general population.

    Science.gov (United States)

    Pocnet, Cornelia; Antonietti, Jean-Philippe; Strippoli, Marie-Pierre F; Glaus, Jennifer; Rossier, Jérôme; Preisig, Martin

    2017-09-01

    The aim of this study was to investigate the relationship between personality traits, tobacco consumption, physical inactivity, obesity markers and metabolic components as cardiovascular risk factors (CVRFs). A total of 2543 participants from the general population (CoLaus|PsyCoLaus) had provided complete information on physical health and unhealthy behaviors and completed the Revised NEO Five-Factor Inventory. Our results show a strong cross-correlation between obesity markers and metabolic components, suggesting that their combination could represent an important CVRF. Moreover, socio-demographic characteristics, tobacco consumption, and physical inactivity were associated with both obesity marker and metabolic component latent traits. The conscientiousness personality trait was significantly associated with obesity markers, but played a modest role: higher conscientiousness was associated with lower levels of obesity indicators. However, no link between personality and metabolic components was found. In sum, our data suggest that health-related behaviours have a greater effect on the development of cardiovascular diseases than personality traits.

  3. Quality analysis of commercial samples of Ziziphi spinosae semen (suanzaoren) by means of chromatographic fingerprinting assisted by principal component analysis

    Directory of Open Access Journals (Sweden)

    Shuai Sun

    2014-06-01

    Full Text Available Due to the scarcity of resources of Ziziphi spinosae semen (ZSS), many inferior goods and even adulterants are commonly found in medicine markets. To strengthen quality control, the HPLC fingerprint common pattern established in this paper shows three main bioactive compounds in one chromatogram simultaneously. Principal component analysis based on DAD signals could discriminate adulterants and inferior samples. Principal component analysis indicated that all samples could be regrouped into two main clusters according to the first principal component (PC1), redefined as Vicenin II, and the second principal component (PC2), redefined as zizyphusine. PC1 and PC2 could explain 91.42% of the variance. The content of zizyphusine fluctuated more than that of spinosin, and this was also confirmed by the HPTLC result. Samples with a low content of jujubosides and two common adulterants could not be used equivalently with authenticated ones in the clinic, while one reference standard extract could substitute the crude drug in pharmaceutical production. Giving special consideration to the well-known bioactive saponins, which show a low response due to end absorption, a fast and cheap HPTLC method for quality control of ZSS was developed, and the result obtained agreed well with that of the HPLC analysis. Samples having fingerprints similar to the HPTLC common pattern targeting the saponins could be regarded as authenticated. This work provides a faster and cheaper way for quality control of ZSS and lays the foundation for establishing a more effective quality control method for ZSS. Keywords: Adulterant, Common pattern, Principal component analysis, Quality control, Ziziphi spinosae semen

  4. Principal component analysis for neural electron/jet discrimination in highly segmented calorimeters

    International Nuclear Information System (INIS)

    Vassali, M.R.; Seixas, J.M.

    2001-01-01

    A neural electron/jet discriminator based on calorimetry is developed for the second-level trigger system of the ATLAS detector. As preprocessing of the calorimeter information, a principal component analysis is performed on each segment of the two sections (electromagnetic and hadronic) of the calorimeter system, in order to reduce significantly the dimension of the input data space and fully explore the detailed energy deposition profile, which is provided by the highly-segmented calorimeter system. It is shown that projecting calorimeter data onto 33 segmented principal components, the discrimination efficiency of the neural classifier reaches 98.9% for electrons (with only 1% of false alarm probability). Furthermore, restricting data projection onto only 9 components, an electron efficiency of 99.1% is achieved (with 3% of false alarm), which confirms that a fast triggering system may be designed using few components
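    A minimal sketch of the same idea, projecting high-dimensional detector-like data onto a few principal components before a classifier, is given below; synthetic data and a logistic regression stand in for the calorimeter profiles and the neural discriminator, so the numbers of samples, features, and components are assumptions.

```python
# Sketch: PCA as preprocessing for a two-class discriminator on synthetic data.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# stand-in for segmented energy-deposition profiles (electron vs. jet)
X, y = make_classification(n_samples=5000, n_features=100, n_informative=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), PCA(n_components=9), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
print("test accuracy with 9 components:", round(clf.score(X_te, y_te), 3))
```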

  5. Exploring the Factor Structure of Neurocognitive Measures in Older Individuals

    Science.gov (United States)

    Santos, Nadine Correia; Costa, Patrício Soares; Amorim, Liliana; Moreira, Pedro Silva; Cunha, Pedro; Cotter, Jorge; Sousa, Nuno

    2015-01-01

    Here we focus on factor analysis from a best practices point of view, by investigating the factor structure of neuropsychological tests and using the results obtained to illustrate how to choose a reasonable solution. The sample (n=1051 individuals) was randomly divided into two groups: one for exploratory factor analysis (EFA) and principal component analysis (PCA), to investigate the number of factors underlying the neurocognitive variables; the second to test the “best fit” model via confirmatory factor analysis (CFA). For the exploratory step, three extraction (maximum likelihood, principal axis factoring and principal components) and two rotation (orthogonal and oblique) methods were used. The analysis methodology allowed exploring how different cognitive/psychological tests correlated with or discriminated between dimensions, indicating that to capture latent structures in similar sample sizes and measures, with approximately normal data distribution, reflective models with oblimin rotation might prove the most adequate. PMID:25880732
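    The contrast between principal component extraction and common factor extraction can be sketched as below; scikit-learn only provides orthogonal rotations (varimax, quartimax), so an oblique oblimin rotation as favoured in the study would need a dedicated package, and the simulated two-factor data are purely illustrative.

```python
# Sketch: PCA versus maximum-likelihood factor analysis on simulated test scores.
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
latent = rng.normal(size=(1000, 2))                     # two latent cognitive dimensions
loadings = rng.uniform(0.4, 0.9, size=(10, 2))          # ten tests loading on them
scores = latent @ loadings.T + 0.5 * rng.normal(size=(1000, 10))
z = StandardScaler().fit_transform(scores)

pca = PCA().fit(z)
print("variance explained by first 4 PCs:", pca.explained_variance_ratio_[:4].round(2))

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(z)
print("rotated factor loadings:\n", fa.components_.T.round(2))
```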

  6. Motivational factors influencing the homeowners’ decisions between residential heating systems: An empirical analysis for Germany

    International Nuclear Information System (INIS)

    Michelsen, Carl Christian; Madlener, Reinhard

    2013-01-01

    Heating demand accounts for a large fraction of the overall energy demand of private households in Germany. A better understanding of the adoption and diffusion of energy-efficient and renewables-based residential heating systems (RHS) is of high policy relevance, particularly against the background of climate change, security of energy supply and increasing energy prices. In this paper, we explore the multi-dimensionality of the homeowners’ motivation to decide between competing RHS. A questionnaire survey (N=2440) conducted in 2010 among homeowners who had recently installed a RHS provides the empirical foundation. Principal component analysis shows that 25 items capturing different adoption motivations can be grouped around six dimensions: (1) cost aspects, (2) general attitude towards the RHS, (3) government grant, (4) reactions to external threats (i.e., environmental or energy supply security considerations), (5) comfort considerations, and (6) influence of peers. Moreover, a cluster analysis with the identified motivational factors as segmentation variables reveals three adopter types: (1) the convenience-oriented, (2) the consequences-aware, and (3) the multilaterally-motivated RHS adopter. Finally, we show that the influence of the motivational factors on the adoption decision also differs by certain characteristics of the homeowner and features of the home. - Highlights: ► Study of the multi-dimensionality of the motivation to adopt residential heating systems (RHS). ► Principal component and cluster analysis are applied to representative survey data for Germany. ► Motivation has six dimensions, including rational decision-making and emotional factors. ► Adoption motivation differs by certain characteristics of the homeowner and of the home. ► Many adopters are driven by existing habits and perceptions about the convenience of the RHS
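    As a hedged illustration of the two-step procedure described above (dimension reduction of survey items, then segmentation of respondents), the Python sketch below runs PCA followed by k-means on random Likert-style responses; the item count, six components, and three clusters simply mirror the numbers reported in the abstract and are not the authors' data or code.

```python
# Sketch: PCA scores of survey items followed by k-means segmentation of respondents.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
survey = rng.integers(1, 6, size=(2440, 25)).astype(float)   # 25 Likert items (placeholder)
z = StandardScaler().fit_transform(survey)

component_scores = PCA(n_components=6).fit_transform(z)      # six motivational dimensions
adopter_types = KMeans(n_clusters=3, n_init=10, random_state=3).fit(component_scores)
print("cluster sizes:", np.bincount(adopter_types.labels_))
```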

  7. Study of displacement cascades in metals by means of component analysis

    International Nuclear Information System (INIS)

    Hou, M.

    1981-01-01

    Component analysis is used to study the spatial distributions of point defects resulting from collision cascades in solids. The components are the three (orthogonal) eigenvectors of the covariance matrix of the spatial distribution. Those corresponding to the extreme eigenvalues determine the directions maximizing and minimizing the variance of the spatial distribution. The intermediate one is the direction maximizing the variance of the distribution projected on a plane perpendicular to the principal component. The standard deviations of the distribution projected on the three components give a measure of its size. This measure depends only on the cascade structure. Vacancy and interstitial distributions generated in metals by the computer code MARLOWE, based on the binary collision approximation, are analysed and compared in this picture. The simulation of hundreds of cascades generated by projectiles in the keV energy range incident on polycrystalline gold makes it possible to collect information on their average spatial anisotropy, energy density and cascade development. The dependence of these characteristics on the energy and the masses involved is discussed. (orig.)
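    The component analysis described here amounts to an eigen-decomposition of the covariance matrix of the defect coordinates; a small numpy sketch on a synthetic anisotropic point cloud (all values are made-up placeholders for real cascade data) shows how the principal directions and the standard deviations along them would be obtained.

```python
# Sketch: principal directions and spread of a synthetic cloud of defect coordinates.
import numpy as np

rng = np.random.default_rng(11)
defects = rng.normal(size=(500, 3)) * np.array([8.0, 3.0, 1.0])   # anisotropic cloud (a.u.)

cov = np.cov(defects, rowvar=False)          # 3x3 covariance matrix of the coordinates
eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order

print("std. deviations along the components:", np.sqrt(eigvals).round(2))
print("direction maximizing the variance:", eigvecs[:, -1].round(2))
print("direction minimizing the variance:", eigvecs[:, 0].round(2))
```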

  8. Factor Economic Analysis at Forestry Enterprises

    Directory of Open Access Journals (Sweden)

    M.Yu. Chik

    2018-03-01

    Full Text Available The article examines the importance of economic analysis, drawing on the results of the scientific works of domestic and foreign researchers. The influence of factors on the change in the cost of harvesting timber products is calculated by cost item. The influence of factors on the change in costs per 1 UAH of sold products is determined using the full cost of sold products. Variable and fixed costs and their distribution are identified, which affects the calculation of the impact of factors on cost changes per 1 UAH of sold products. The paper presents the overall results of calculating the influence of factors on cost changes per 1 UAH of sold products. Based on the results of the analysis, a list of reserves for reducing the cost of production at forestry enterprises is proposed. The main sources of reserves for reducing the prime cost of forest products at forestry enterprises are investigated on the basis of the conducted factor analysis.

  9. A Blind Adaptive Color Image Watermarking Scheme Based on Principal Component Analysis, Singular Value Decomposition and Human Visual System

    Directory of Open Access Journals (Sweden)

    M. Imran

    2017-09-01

    Full Text Available A blind adaptive color image watermarking scheme based on principal component analysis, singular value decomposition, and the human visual system is proposed. The use of principal component analysis to decorrelate the three color channels of the host image improves the perceptual quality of the watermarked image, whereas the human visual system and a fuzzy inference system help to improve both imperceptibility and robustness by selecting an adaptive scaling factor, so that areas more prone to noise can be embedded with more information than less prone areas. To achieve security, the location of watermark embedding is kept secret and used as a key at the time of watermark extraction, whereas, for capacity, both singular values and vectors are involved in the watermark embedding process. As a result, the four contradictory requirements of imperceptibility, robustness, security and capacity are achieved, as suggested by the results. Both subjective and objective methods are used to examine the performance of the proposed scheme. For the subjective analysis, the watermarked images and the watermarks extracted from attacked watermarked images are shown. For the objective analysis in terms of imperceptibility, peak signal-to-noise ratio, structural similarity index, visual information fidelity and normalized color difference are used, whereas for the objective analysis in terms of robustness, normalized correlation, bit error rate, normalized Hamming distance and global authentication rate are used. Security is checked by using different keys to extract the watermark. The proposed schemes are compared with state-of-the-art watermarking techniques and show better performance, as suggested by the results.

  10. Visualizing solvent mediated phase transformation behavior of carbamazepine polymorphs by principal component analysis

    DEFF Research Database (Denmark)

    Tian, Fang; Rades, Thomas; Sandler, Niklas

    2008-01-01

    The purpose of this research is to gain a greater insight into the hydrate formation processes of different carbamazepine (CBZ) anhydrate forms in aqueous suspension, where principal component analysis (PCA) was applied for data analysis. The capability of PCA to visualize and to reveal simplified...

  11. Principal Component Analysis Based Measure of Structural Holes

    Science.gov (United States)

    Deng, Shiguo; Zhang, Wenqing; Yang, Huijie

    2013-02-01

    Based upon principal component analysis, a new measure called the compressibility coefficient is proposed to evaluate structural holes in networks. This measure incorporates a new effect from identical patterns in networks. It is found that the compressibility coefficient for Watts-Strogatz small-world networks increases monotonically with the rewiring probability and saturates to that of the corresponding shuffled networks, whereas the compressibility coefficient for extended Barabasi-Albert scale-free networks decreases monotonically with the preferential effect and is significantly large compared with that of the corresponding shuffled networks. This measure is helpful in diverse research fields to evaluate the global efficiency of networks.

  12. Application of principal component analysis (PCA) as a sensory assessment tool for fermented food products.

    Science.gov (United States)

    Ghosh, Debasree; Chattopadhyay, Parimal

    2012-06-01

    The objective of the work was to use the method of quantitative descriptive analysis (QDA) to describe the sensory attributes of fermented food products prepared with the incorporation of lactic cultures. Panellists were selected and trained to evaluate various attributes, especially color and appearance, body texture, flavor, overall acceptability and acidity, of fermented food products such as cow milk curd and soymilk curd, idli, sauerkraut and probiotic ice cream. Principal component analysis (PCA) identified six significant principal components that accounted for more than 90% of the variance in the sensory attribute data. Overall product quality was modelled as a function of the principal components using multiple least squares regression (R² = 0.8). The results from PCA were statistically analyzed by analysis of variance (ANOVA). These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring the fermented food product attributes that are important for consumer acceptability.
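    The modelling step described above (overall quality regressed on principal components of the attribute ratings) is essentially principal component regression; the sketch below shows this with scikit-learn on fabricated panel data, so the numbers of panellists, attributes, and components are assumptions rather than the study's values.

```python
# Sketch: principal component regression of overall quality on sensory attribute ratings.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
attributes = rng.normal(size=(60, 12))                          # panellists x attributes
overall = attributes[:, :3].sum(axis=1) + 0.3 * rng.normal(size=60)

pcr = make_pipeline(StandardScaler(), PCA(n_components=6), LinearRegression())
pcr.fit(attributes, overall)
print("R^2 on the panel data:", round(pcr.score(attributes, overall), 2))
```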

  13. Cardiometabolic risk clustering in spinal cord injury: results of exploratory factor analysis.

    Science.gov (United States)

    Libin, Alexander; Tinsley, Emily A; Nash, Mark S; Mendez, Armando J; Burns, Patricia; Elrod, Matt; Hamm, Larry F; Groah, Suzanne L

    2013-01-01

    Evidence suggests an elevated prevalence of cardiometabolic risks among persons with spinal cord injury (SCI); however, the unique clustering of risk factors in this population has not been fully explored. The purpose of this study was to describe unique clustering of cardiometabolic risk factors differentiated by level of injury. One hundred twenty-one subjects (mean 37 ± 12 years; range, 18-73) with chronic C5 to T12 motor complete SCI were studied. Assessments included medical histories, anthropometrics and blood pressure, and fasting serum lipids, glucose, insulin, and hemoglobin A1c (HbA1c). The most common cardiometabolic risk factors were overweight/obesity, high levels of low-density lipoprotein (LDL-C), and low levels of high-density lipoprotein (HDL-C). Risk clustering was found in 76.9% of the population. Exploratory principal component factor analysis using varimax rotation revealed a 3-factor model in persons with paraplegia (65.4% variance) and a 4-factor solution in persons with tetraplegia (73.3% variance). The differences between groups were emphasized by the varied composition of the extracted factors: Lipid Profile A (total cholesterol [TC] and LDL-C), Body Mass-Hypertension Profile (body mass index [BMI], systolic blood pressure [SBP], and fasting insulin [FI]); Glycemic Profile (fasting glucose and HbA1c), and Lipid Profile B (TG and HDL-C). BMI and SBP formed a separate factor only in persons with tetraplegia. Although the majority of the population with SCI has risk clustering, the composition of the risk clusters may be dependent on level of injury, based on a factor analysis group comparison. This is clinically plausible and relevant as tetraplegics tend to be hypo- to normotensive and more sedentary, resulting in lower HDL-C and a greater propensity toward impaired carbohydrate metabolism.

  14. The Impact of Gas Turbine Component Leakage Fault on GPA Performance Diagnostics

    Directory of Open Access Journals (Sweden)

    E. L. Ntantis

    2016-01-01

    Full Text Available Leakage analysis is a key factor in determining energy loss from a gas turbine. Once a component assembly fails, air leakage through the opening increases, resulting in a performance loss. The performance efficiency of the engine therefore cannot be reliably determined without good estimates and analysis of leakage faults. Consequently, implementing a leakage fault within a gas turbine engine model is necessary for any performance diagnostic technique that aims to expand its diagnostic capabilities for more accurate predictions. This paper explores the impact of gas turbine component leakage faults on GPA (Gas Path Analysis) performance diagnostics. The analysis is demonstrated with a test case in which the gas turbine performance simulation and diagnostics code TURBOMATCH is used to build a performance model of an engine similar to the Rolls-Royce Trent 500 turbofan and to carry out the diagnostic analysis in the presence of different component fault cases. Finally, to improve the reliability of the diagnostic results, a leakage fault analysis of the implemented faults is made. The diagnostic tool used for the analysis of the implemented gas turbine component faults is a model-based method utilizing non-linear GPA.

  15. Response spectrum analysis of coupled structural response to a three component seismic disturbance

    International Nuclear Information System (INIS)

    Boulet, J.A.M.; Carley, T.G.

    1977-01-01

    The work discussed herein is a comparison and evaluation of several response spectrum analysis (RSA) techniques as applied to the same structural model with seismic excitation having three spatial components. Lagrange's equations of motion for the system were written in matrix form and uncoupled with the modal matrix. Numerical integration (fourth order Runge-Kutta) of the resulting model equations produced time histories of system displacements in response to simultaneous application of three orthogonal components of ground motion, and displacement response spectra for each modal coordinate in response to each of the three ground motion components. Five different RSA techniques were used to combine the spectral displacements and the modal matrix to give approximations of maximum system displacements. These approximations were then compared with the maximum system displacements taken from the time histories. The RSA techniques used are the method of absolute sums, the square root of the sum of the squares, the double sum approach, the method of closely spaced modes, and Lin's method. The vectors of maximum system displacements as computed by the time history analysis and the five response spectrum analysis methods are presented. (Auth.)

  16. Nonlinear analysis of LWR components: areas of investigation/benefits/recommendations

    Energy Technology Data Exchange (ETDEWEB)

    Brown, S. J. [ed.

    1980-04-01

    The purpose of this study is to identify specific topics of investigation into design procedures, design concepts, methods of analysis, testing practices, and standards which are characterized by nonlinear behavior (both geometric and material) and which are considered to offer some economic and/or technical benefits to the LWR industry (excluding piping). In this study these topics were collected, compiled, and subjectively evaluated as to their potential benefit. The topics considered to have the greatest benefit/impact potential are discussed. The topics of investigation were found to fall basically into three areas: component, code interpretation, and load/failure mechanism. The topics are arbitrarily reorganized into six areas of investigation: Fracture, Fatigue, Vibration/Dynamic/Seismic, Plasticity, Component/Computational Considerations, and Code Interpretation.

  17. 'Building Block Approach' for Structural Analysis of Thermoplastic Composite Components for Automotive Applications

    Science.gov (United States)

    Carello, M.; Amirth, N.; Airale, A. G.; Monti, M.; Romeo, A.

    2017-12-01

    Advanced thermoplastic prepreg composite materials stand out with regard to their ability to allow complex designs with high specific strength and stiffness. This makes them an excellent choice for lightweight automotive components to reduce mass and increase fuel efficiency, while maintaining the functionality (and mechanical characteristics) of traditional thermosetting prepregs and with a production cycle time and recyclability suited to mass-production manufacturing. Currently, the aerospace and automotive sectors struggle to carry out accurate Finite Element (FE) component analyses and in some cases are unable to validate the obtained results. In this study, a structural Finite Element Analysis (FEA) has been performed on a thermoplastic fiber-reinforced component designed and manufactured through an integrated injection molding process, which consists of thermoforming the prepreg laminate and overmolding the other parts. This process is usually referred to as hybrid molding and makes it possible to reinforce the zones subjected to additional stresses with thermoformed thermoplastic prepreg as required, overmolded with a short-fiber thermoplastic resin in a single process. This paper aims to establish an accurate predictive model on a rational basis and an innovative methodology for the structural analysis of thermoplastic composite components by comparison with experimental test results.

  18. Other components

    International Nuclear Information System (INIS)

    Anon.

    1993-01-01

    This chapter includes descriptions of electronic and mechanical components which do not merit a chapter to themselves. Other hardware requires mention because of particularly high tolerance or intolerance of exposure to radiation. A more systematic analysis of radiation responses of structures which are definable by material was given in section 3.8. The components discussed here are field effect transistors, transducers, temperature sensors, magnetic components, superconductors, mechanical sensors, and miscellaneous electronic components

  19. Design, Analysis and R&D of the EAST In-Vessel Components

    Science.gov (United States)

    Yao, Damao; Bao, Liman; Li, Jiangang; Song, Yuntao; Chen, Wenge; Du, Shijun; Hu, Qingsheng; Wei, Jing; Xie, Han; Liu, Xufeng; Cao, Lei; Zhou, Zibo; Chen, Junling; Mao, Xinqiao; Wang, Shengming; Zhu, Ning; Weng, Peide; Wan, Yuanxi

    2008-06-01

    In-vessel components are important parts of the EAST superconducting tokamak. They include the plasma facing components, passive plates, cryo-pumps, in-vessel coils, etc. The structural design, analysis and related R&D have been completed. The divertor is designed in an up-down symmetric configuration to accommodate both double null and single null plasma operation. Passive plates are used for plasma movement control. In-vessel coils are used for the active control of plasma vertical movements. Each cryo-pump can provide an approximately 45 m³/s pumping rate at a pressure of 10⁻¹ Pa for particle exhaust. Analysis shows that, when a plasma current of 1 MA disrupts in 3 ms, the EM loads caused by the eddy current and the halo current in a vertical displacement event (VDE) will not generate an unacceptable stress on the divertor structure. The bolted divertor thermal structure with an active cooling system can sustain a load of 2 MW/m² up to a 60 s operation if the plasma facing surface temperature is limited to 1500 °C. Thermal testing and structural optimization testing were conducted to demonstrate the analysis results.

  20. Comparison of cluster and principal component analysis techniques to derive dietary patterns in Irish adults.

    Science.gov (United States)

    Hearty, Aine P; Gibney, Michael J

    2009-02-01

    The aims of the present study were to examine and compare dietary patterns in adults using cluster and factor analyses and to examine the format of the dietary variables on the pattern solutions (i.e. expressed as grams/day (g/d) of each food group or as the percentage contribution to total energy intake). Food intake data were derived from the North/South Ireland Food Consumption Survey 1997-9, which was a randomised cross-sectional study of 7 d recorded food and nutrient intakes of a representative sample of 1379 Irish adults aged 18-64 years. Cluster analysis was performed using the k-means algorithm and principal component analysis (PCA) was used to extract dietary factors. Food data were reduced to thirty-three food groups. For cluster analysis, the most suitable format of the food-group variable was found to be the percentage contribution to energy intake, which produced six clusters: 'Traditional Irish'; 'Continental'; 'Unhealthy foods'; 'Light-meal foods & low-fat milk'; 'Healthy foods'; 'Wholemeal bread & desserts'. For PCA, food groups in the format of g/d were found to be the most suitable format, and this revealed four dietary patterns: 'Unhealthy foods & high alcohol'; 'Traditional Irish'; 'Healthy foods'; 'Sweet convenience foods & low alcohol'. In summary, cluster and PCA identified similar dietary patterns when presented with the same dataset. However, the two dietary pattern methods required a different format of the food-group variable, and the most appropriate format of the input variable should be considered in future studies.

  1. The Factor Structure in Equity Options

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Fournier, Mathieu; Jacobs, Kris

    Principal component analysis of equity options on Dow-Jones firms reveals a strong factor structure. The first principal component explains 77% of the variation in the equity volatility level, 77% of the variation in the equity option skew, and 60% of the implied volatility term structure across equities. Furthermore, the first principal component has a 92% correlation with S&P500 index option volatility, a 64% correlation with the index option skew, and an 80% correlation with the index option term structure. We develop an equity option valuation model that captures this factor structure...

  2. Principle Component Analysis of two-particle correlations in PbPb and pPb collisions at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Milosevic, Jovan, E-mail: Jovan.Milosevic@cern.ch

    2016-12-15

    A Principle Component Analysis (PCA) of two-particle azimuthal correlations as a function of transverse momentum ($p_T$) is presented in PbPb collisions at 2.76 TeV and high-multiplicity pPb collisions at 5.02 TeV. The data were recorded using the CMS detector at the LHC. It was shown that factorization breaking of two-particle azimuthal correlations can be attributed to the effect of initial-state fluctuations. Using a PCA approach, Fourier coefficients of observed two-particle azimuthal correlations as a function of both particles' $p_T$ are characterized into leading and sub-leading mode terms. The leading modes are essentially equivalent to anisotropy harmonics ($v_n$) previously extracted from two-particle correlation methods as a function of $p_T$. The sub-leading modes represent the largest sources of factorization breaking. In the context of hydrodynamic models, they are a direct consequence of initial-state fluctuations. The results are presented over a wide range of centrality and event multiplicity. The results are connected to the measurement of $p_T$-dependent flow factorization breaking.

  3. Efficient real time OD matrix estimation based on principal component analysis

    NARCIS (Netherlands)

    Djukic, T.; Flötteröd, G.; Van Lint, H.; Hoogendoorn, S.P.

    2012-01-01

    In this paper we explore the idea of dimensionality reduction and approximation of OD demand based on principal component analysis (PCA). First, we show how we can apply PCA to linearly transform the high dimensional OD matrices into the lower dimensional space without significant loss of accuracy.
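    The core idea, representing each high-dimensional OD matrix by a handful of principal component scores and reconstructing it with little loss, can be sketched as follows; the synthetic demand history, the 20-zone network size, and the choice of 10 components are assumptions for illustration only.

```python
# Sketch: compressing flattened OD matrices with PCA and checking the reconstruction error.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
base = rng.gamma(shape=2.0, scale=50.0, size=(1, 400))        # 20x20 zones, flattened
od_history = base * (1.0 + 0.2 * rng.normal(size=(200, 400))) # 200 historical OD matrices

pca = PCA(n_components=10).fit(od_history)
compressed = pca.transform(od_history)                        # 200 x 10 instead of 200 x 400
reconstructed = pca.inverse_transform(compressed)

rel_error = np.linalg.norm(od_history - reconstructed) / np.linalg.norm(od_history)
print("relative reconstruction error:", round(rel_error, 3))
```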

  4. Symmetrical components and power analysis for a two-phase microgrid system

    DEFF Research Database (Denmark)

    Alibeik, M.; Santos Jr., E. C. dos; Blaabjerg, Frede

    2014-01-01

    This paper presents a mathematical model for the symmetrical components and power analysis of a new microgrid system consisting of three wires and two voltages in quadrature, which is designated as a two-phase microgrid. The two-phase microgrid presents the following advantages: 1) constant power...

  5. Failure analysis a practical guide for manufacturers of electronic components and systems

    CERN Document Server

    Bâzu, Marius

    2011-01-01

    Failure analysis is the preferred method to investigate product or process reliability and to ensure optimum performance of electrical components and systems. The physics-of-failure approach is the only internationally accepted solution for continuously improving the reliability of materials, devices and processes. The models have been developed from the physical and chemical phenomena that are responsible for degradation or failure of electronic components and materials and now replace popular distribution models for failure mechanisms such as Weibull or lognormal. Reliability engineers nee

  6. Identification of Counterfeit Alcoholic Beverages Using Cluster Analysis in Principal-Component Space

    Science.gov (United States)

    Khodasevich, M. A.; Sinitsyn, G. V.; Gres'ko, M. A.; Dolya, V. M.; Rogovaya, M. V.; Kazberuk, A. V.

    2017-07-01

    A study of 153 brands of commercial vodka products showed that counterfeit samples could be identified by introducing a unified additive at the minimum concentration acceptable for instrumental detection and multivariate analysis of UV-Vis transmission spectra. Counterfeit products were detected with 100% probability by using hierarchical cluster analysis or the C-means method in two-dimensional principal-component space.
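    A toy version of this workflow, projecting spectra onto two principal components and then clustering hierarchically, is sketched below; the simulated "genuine" and "counterfeit" spectra, the Ward linkage, and the two-cluster cut are assumptions standing in for the real UV-Vis data and the C-means variant used in the study.

```python
# Sketch: hierarchical clustering in principal-component space on simulated spectra.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
genuine = rng.normal(0.0, 1.0, size=(140, 200))    # transmission spectra with the additive
fakes = rng.normal(0.8, 1.0, size=(13, 200))       # samples lacking the marker additive
spectra = np.vstack([genuine, fakes])

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(spectra))
labels = fcluster(linkage(scores, method="ward"), t=2, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])
```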

  7. Using principal component analysis and annual seasonal trend analysis to assess karst rocky desertification in southwestern China.

    Science.gov (United States)

    Zhang, Zhiming; Ouyang, Zhiyun; Xiao, Yi; Xiao, Yang; Xu, Weihua

    2017-06-01

    Increasing exploitation of karst resources is causing severe environmental degradation because of the fragility and vulnerability of karst areas. By integrating principal component analysis (PCA) with annual seasonal trend analysis (ASTA), this study assessed karst rocky desertification (KRD) within a spatial context. We first produced fractional vegetation cover (FVC) data from a moderate-resolution imaging spectroradiometer normalized difference vegetation index using a dimidiate pixel model. Then, we generated three main components of the annual FVC data using PCA. Subsequently, we generated the slope image of the annual seasonal trends of FVC using median trend analysis. Finally, we combined the three PCA components and annual seasonal trends of FVC with the incidence of KRD for each type of carbonate rock to classify KRD into one of four categories based on K-means cluster analysis: high, moderate, low, and none. The results of accuracy assessments indicated that this combination approach produced greater accuracy and more reasonable KRD mapping than the average FVC based on the vegetation coverage standard. The KRD map for 2010 indicated that the total area of KRD was 78.76 × 10³ km², which constitutes about 4.06% of the eight southwest provinces of China. The largest KRD areas were found in Yunnan province. The combined PCA and ASTA approach was demonstrated to be an easily implemented, robust, and flexible method for the mapping and assessment of KRD, which can be used to enhance regional KRD management schemes or to address assessment of other environmental issues.

  8. Fast grasping of unknown objects using principal component analysis

    Science.gov (United States)

    Lei, Qujiang; Chen, Guangming; Wisse, Martijn

    2017-09-01

    Fast grasping of unknown objects has a crucial impact on the efficiency of robot manipulation, especially in unfamiliar environments. In order to accelerate the grasping of unknown objects, principal component analysis is utilized to direct the grasping process. In particular, a single-view partial point cloud is constructed and grasp candidates are allocated along the principal axis. Force balance optimization is employed to analyze possible graspable areas. The obtained graspable area with the minimal resultant force is the best zone for the final grasping execution. It is shown that an unknown object can be grasped more quickly provided that the principal axis of the component analysis is determined using the single-view partial point cloud. To cope with grasp uncertainty, robot motion is used to obtain a new viewpoint. Virtual exploration and experimental tests are carried out to verify this fast grasping algorithm. Both simulation and experimental tests demonstrated excellent performance in grasping a series of unknown objects. To minimize grasping uncertainty, the robot hardware with two 3D cameras can be utilized to complete the partial point cloud. As a result of utilizing the robot hardware, grasping reliability is greatly enhanced. Therefore, this research demonstrates practical significance for increasing grasping speed and thus robot efficiency in unpredictable environments.

  9. Analysis of algae growth mechanism and water bloom prediction under the effect of multi-affecting factor.

    Science.gov (United States)

    Wang, Li; Wang, Xiaoyi; Jin, Xuebo; Xu, Jiping; Zhang, Huiyan; Yu, Jiabin; Sun, Qian; Gao, Chong; Wang, Lingbin

    2017-03-01

    Current methods describe the formation process of algae inaccurately and predict water blooms with low precision. In this paper, the chemical mechanism of algae growth is analyzed, and a correlation analysis of chlorophyll-a and algal density is conducted by chemical measurement. Taking into account the influence of multiple factors on algae growth and water blooms, a comprehensive prediction method combining multivariate time series analysis with intelligent models is put forward in this paper. Firstly, through the process of photosynthesis, the main factors that affect the reproduction of algae are analyzed. A compensation prediction method for multivariate time series analysis based on a neural network and a Support Vector Machine is put forward, combined with Kernel Principal Component Analysis to reduce the dimension of the factors influencing blooms. Then, a Genetic Algorithm is applied to improve the generalization ability of the BP network and the Least Squares Support Vector Machine. Experimental results show that this method can better compensate the multivariate time series prediction model and is an effective way to improve the description accuracy of algae growth and the prediction precision of water blooms.
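    To give a flavour of the pipeline described above, the sketch below chains kernel PCA for dimension reduction with a support vector regressor as a stand-in for the least-squares SVM and GA-tuned models of the paper; the simulated environmental drivers, kernel parameters, and train/test split are assumptions.

```python
# Sketch: kernel-PCA dimension reduction feeding a support vector regressor.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(4)
drivers = rng.normal(size=(365, 8))                # daily environmental factors (placeholder)
density = np.sin(drivers[:, 0]) + 0.5 * drivers[:, 1] ** 2 + 0.1 * rng.normal(size=365)

model = make_pipeline(StandardScaler(),
                      KernelPCA(n_components=4, kernel="rbf", gamma=0.1),
                      SVR(C=10.0))
model.fit(drivers[:300], density[:300])
print("R^2 on held-out days:", round(model.score(drivers[300:], density[300:]), 2))
```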

  10. Evidence of aging effects on certain safety-related components: summary and analysis

    International Nuclear Information System (INIS)

    1995-09-01

    In response to interest shown by the Nuclear Energy Agency (NEA), Principal Working Group I (PWG-1) of the Committee on the Safety of Nuclear Installations (CSNI) conducted a generic study on the effects of aging of active components in nuclear power plants. Representatives from France, Sweden, Finland, Japan, the United States, and the United Kingdom participated in the study by submitting reports documenting aging studies performed in their countries. This report consists of summaries of those reports, along with a comparison of the various statistical analysis methods used in the studies. The studies indicate that with some exceptions, active components generally do not present a significant aging problem in nuclear power plants. Design criteria and effective preventative maintenance programs, including timely replacement of components, are effective in mitigating potential aging problems. However, aging studies (such as qualitative and statistical analyses of failure modes and maintenance data) are an important part of efforts to identify and solve potential aging problems. Solving these problems typically includes such strategies as replacing suspect components with improved components, and implementing improved maintenance programs

  11. Airborne electromagnetic data levelling using principal component analysis based on flight line difference

    Science.gov (United States)

    Zhang, Qiong; Peng, Cong; Lu, Yiming; Wang, Hao; Zhu, Kaiguang

    2018-04-01

    A novel technique is developed to level airborne geophysical data using principal component analysis based on flight line differences. In this paper, flight line differencing is introduced to enhance the features of the levelling error in airborne electromagnetic (AEM) data and to improve the correlation between pseudo tie lines. We therefore apply levelling to the flight line difference data instead of directly to the original AEM data. Pseudo tie lines are selected so that they are distributed across the profile direction, avoiding anomalous regions. Since the levelling errors of the selected pseudo tie lines are highly correlated, principal component analysis is applied to extract the local levelling errors by reconstruction from the low-order principal components. Furthermore, we can obtain the levelling errors of the original AEM data through inverse differencing after spatial interpolation. This levelling method does not require flying tie lines or designing a levelling fitting function. The effectiveness of the method is demonstrated by the levelling results for survey data, compared with the results from tie-line levelling and flight-line correlation levelling.

  12. Principal Component Analysis to Explore Climatic Variability and Dengue Outbreak in Lahore

    Directory of Open Access Journals (Sweden)

    Syed Afrozuddin Ahmed

    2014-08-01

    Full Text Available Various studies have reported that global warming causes an unstable climate and many serious impacts on the physical environment and public health. The increasing incidence of dengue is now a priority health issue and has become a health burden for Pakistan. This study investigates whether spatial patterns of the environment contribute to the emergence or increasing rate of dengue fever incidence, affecting the population and its health. Principal component analysis is performed to determine whether there is any general environmental factor or structure that could be involved in the emergence of dengue fever cases in the Pakistani climate. Principal component analysis is applied to find structure in the data for the periods 1980 to 2012, 1980 to 1995 and 1996 to 2012. The first three PCs for the periods (1980-2012, 1980-1994, 1995-2012) are almost the same and represent hot and windy weather. The PC1s of the dengue periods differ from each other. PC2 is the same for all periods and represents wet weather. The PC3s differ and represent a combination of wet and windy weather. PC4 for all periods shows humid but rainless weather. Among the climatic variables, only minimum temperature and maximum temperature are significantly correlated with daily dengue cases. PC1, PC3 and PC4 are highly significantly correlated with daily dengue cases.

  13. Northeast Puerto Rico and Culebra Island Principle Component Analysis - NOAA TIFF Image

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This GeoTiff is a representation of seafloor topography in Northeast Puerto Rico derived from a bathymetry model with a principle component analysis (PCA). The area...

  14. Efficient algorithms to assess component and gate importance in fault tree analysis

    International Nuclear Information System (INIS)

    Dutuit, Y.; Rauzy, A.

    2001-01-01

    One of the principal activities of risk assessment is either the ranking or the categorization of structures, systems and components with respect to their risk-significance or their safety-significance. Several measures of such significance, so-called importance factors, have been proposed for the case where the support model is a fault tree. In this article, we show how binary decision diagrams can be used to assess a number of classical importance factors efficiently. This work completes the preliminary results obtained recently by Andrews and Sinnamon, and by the authors. It also deals with the concept of joint reliability importance
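    The Birnbaum importance factor mentioned among the classical measures can be illustrated on a toy fault tree by brute-force enumeration of component states (the sketch below deliberately avoids the binary decision diagram machinery of the article); the gate structure and failure probabilities are invented for demonstration.

```python
# Sketch: Birnbaum importance of components in a toy fault tree, TOP = A AND (B OR C),
# computed by exact enumeration rather than with binary decision diagrams.
from itertools import product

prob = {"A": 0.01, "B": 0.05, "C": 0.02}      # component failure probabilities (illustrative)

def top(a, b, c):
    """Structure function of the toy fault tree (1 = system failure)."""
    return a and (b or c)

def failure_prob(fixed=None):
    """Exact TOP-event probability, optionally with some component states fixed."""
    fixed = fixed or {}
    free = [name for name in "ABC" if name not in fixed]
    total = 0.0
    for states in product([0, 1], repeat=len(free)):
        s = dict(fixed)
        s.update(zip(free, states))
        weight = 1.0
        for name in free:
            weight *= prob[name] if s[name] else 1.0 - prob[name]
        if top(s["A"], s["B"], s["C"]):
            total += weight
    return total

# Birnbaum importance: dQ/dq_i = Q(component i failed) - Q(component i working)
for name in "ABC":
    birnbaum = failure_prob({name: 1}) - failure_prob({name: 0})
    print(name, round(birnbaum, 4))
```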

  15. Principal component analysis of biometric traits to reveal body confirmation in local hill cattle of Himalayan state of Himachal Pradesh, India.

    Science.gov (United States)

    Verma, Deepak; Sankhyan, Varun; Katoch, Sanjeet; Thakur, Yash Pal

    2015-12-01

    In the present study, biometric traits (body length [BL], heart girth [HG], paunch girth [PG], forelimb length [FLL], hind limb length [HLL], face length, forehead width, forehead length, height at hump, hump length [HL], hook to hook distance, pin to pin distance, tail length [TL], TL up to switch, horn length, horn circumference, and ear length) were studied in 218 adult hill cattle of Himachal Pradesh for phenotypic characterization. Morphological and biometrical observations were recorded on 218 hill cattle randomly selected from different districts within the breeding tract. Multivariate statistics and principal component analysis were used to account for the maximum portion of variation present in the original set of variables with a minimum number of composite variables, using the statistical software SAS 9.2. Five components were extracted, which accounted for 65.9% of the variance. The first component explained general body conformation and accounted for 34.7% of the variation. It was represented by significant loadings for BL, HG, PG, FLL, and HLL. Communality estimates ranged from 0.41 (HL) to 0.88 (TL). The second, third, fourth, and fifth components had high loadings for tail characteristics, horn characteristics, facial biometrics, and the rear body, respectively. The results of the component analysis of biometric traits suggest that the indigenous hill cattle of Himachal Pradesh are small, compact cattle with a medium hump, horizontally placed short ears, and a long tail. The study also revealed that the factors extracted in the present investigation could be used in breeding programs, with a substantial reduction in the number of biometric traits to be recorded to describe body conformation.

  16. Reliability Analysis of Load-Sharing K-out-of-N System Considering Component Degradation

    Directory of Open Access Journals (Sweden)

    Chunbo Yang

    2015-01-01

    Full Text Available The K-out-of-N configuration is a typical form of redundancy used to improve system reliability, where at least K of the N components must work for successful operation of the system. When the components are degraded, more components are needed to meet the system requirement, which means that the value of K has to increase. Current reliability analysis methods overestimate reliability because using a constant K ignores the degradation effect. In a load-sharing system with degrading components, the workload shared by each surviving component increases after a random component failure, resulting in a higher failure rate and an increased performance degradation rate. This paper proposes a method combining a tampered failure rate model with a performance degradation model to analyze the reliability of a load-sharing K-out-of-N system with degrading components. The proposed method treats the value of K as a variable that is derived from the performance degradation model, while the load-sharing effect is evaluated by the tampered failure rate model. A Monte-Carlo simulation procedure is used to estimate the discrete probability distribution of K. The case of a solar panel is studied in this paper, and the result shows that the reliability considering component degradation is lower than that ignoring component degradation.
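    A highly simplified Monte-Carlo sketch of the key step, estimating the discrete distribution of the required K when component capacities degrade, is given below; the linear degradation model, gamma-distributed rates, demand level, and mission time are all assumptions and not the tampered-failure-rate model of the paper.

```python
# Sketch: Monte-Carlo estimate of the distribution of K (components needed to meet demand)
# for a load-sharing group whose component capacities degrade over time.
import numpy as np

rng = np.random.default_rng(8)
N = 10                 # components in the load-sharing group
demand = 6.0           # required aggregate performance
t_mission = 5.0        # mission time (arbitrary units)
n_sims = 20000

counts = np.zeros(N + 2)
for _ in range(n_sims):
    rates = rng.gamma(shape=2.0, scale=0.02, size=N)          # random degradation rates
    capacity = np.clip(1.0 - rates * t_mission, 0.0, None)    # degraded capacities
    cum = np.cumsum(np.sort(capacity)[::-1])                  # best components first
    if cum[-1] < demand:
        k = N + 1                                             # even all N cannot meet demand
    else:
        k = np.searchsorted(cum, demand) + 1
    counts[k] += 1

for k, p in enumerate(counts / n_sims):
    if p > 0:
        print(f"K={k}" if k <= N else "not feasible", round(p, 3))
```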

  17. First course in factor analysis

    CERN Document Server

    Comrey, Andrew L

    2013-01-01

    The goal of this book is to foster a basic understanding of factor analytic techniques so that readers can use them in their own research and critically evaluate their use by other researchers. Both the underlying theory and correct application are emphasized. The theory is presented through the mathematical basis of the most common factor analytic models and several methods used in factor analysis. On the application side, considerable attention is given to the extraction problem, the rotation problem, and the interpretation of factor analytic results. Hence, readers are given a background of

  18. Determination of the usage factor of components after cyclic loading using high-resolution microstructural investigations

    International Nuclear Information System (INIS)

    Seibold, A.; Scheibe, A.; Assmann, H.D.

    1989-01-01

    The usage factor can be derived from the quantification of the structural changes and the allocation of the microstructural state to the fatigue curves of the component materials. Using the example of the low-alloy fine-grained structural steel 20 Mn Mo Ni 5 5 (annealed structure), the relationship between microstructure and the number of load cycles is shown in the form of a calibration curve. By high-resolution structural investigation, the usage factor can be determined to be n = N/N_B ≅ 0.5 under a given vibration stress. Only a small-volume sample is required for the electron microscope examination. (orig./DG) [de]

  19. Structured Sparse Principal Components Analysis With the TV-Elastic Net Penalty.

    Science.gov (United States)

    de Pierrefeu, Amicie; Lofstedt, Tommy; Hadj-Selem, Fouad; Dubois, Mathieu; Jardri, Renaud; Fovet, Thomas; Ciuciu, Philippe; Frouin, Vincent; Duchesnay, Edouard

    2018-02-01

    Principal component analysis (PCA) is an exploratory tool widely used in data analysis to uncover the dominant patterns of variability within a population. Despite its ability to represent a data set in a low-dimensional space, PCA's interpretability remains limited. Indeed, the components produced by PCA are often noisy or exhibit no visually meaningful patterns. Furthermore, the fact that the components are usually non-sparse may also impede interpretation, unless arbitrary thresholding is applied. However, in neuroimaging, it is essential to uncover clinically interpretable phenotypic markers that would account for the main variability in the brain images of a population. Recently, some alternatives to the standard PCA approach, such as sparse PCA (SPCA), have been proposed, their aim being to limit the density of the components. Nonetheless, sparsity alone does not entirely solve the interpretability problem in neuroimaging, since it may yield scattered and unstable components. We hypothesized that the incorporation of prior information regarding the structure of the data may lead to improved relevance and interpretability of brain patterns. We therefore present a simple extension of the popular PCA framework that adds structured sparsity penalties on the loading vectors in order to identify the few stable regions in the brain images that capture most of the variability. Such structured sparsity can be obtained by combining, e.g., elastic net and total variation (TV) penalties, where the TV regularization encodes information on the underlying structure of the data. This paper presents the structured SPCA (denoted SPCA-TV) optimization framework and its resolution. We demonstrate SPCA-TV's effectiveness and versatility on three different data sets. It can be applied to any kind of structured data, such as N-dimensional array images or meshes of cortical surfaces. The gains of SPCA-TV over unstructured approaches (such as SPCA and ElasticNet PCA) or structured approach

  20. Principal component analysis of NEXAFS spectra for molybdenum speciation in hydrotreating catalysts

    International Nuclear Information System (INIS)

    Faro Junior, Arnaldo da C.; Rodrigues, Victor de O.; Eon, Jean-G.; Rocha, Angela S.

    2010-01-01

    Bulk and supported molybdenum-based catalysts, modified by nickel, phosphorus or tungsten, were studied by NEXAFS spectroscopy at the Mo L-III and L-II edges. The techniques of principal component analysis (PCA) together with a linear combination analysis (LCA) allowed the detection and quantification of molybdenum atoms in two different coordination states in the oxide form of the catalysts, namely tetrahedral and octahedral coordination. (author)