WorldWideScience

Sample records for component analysis model

  1. Model reduction by weighted Component Cost Analysis

    Science.gov (United States)

    Kim, Jae H.; Skelton, Robert E.

    1990-01-01

    Component Cost Analysis considers any given system driven by a white noise process as an interconnection of different components, and assigns a metric called 'component cost' to each component. These component costs measure the contribution of each component to a predefined quadratic cost function. A reduced-order model of the given system may be obtained by deleting those components that have the smallest component costs. The theory of Component Cost Analysis is extended to include finite-bandwidth colored noises. The results also apply when actuators have dynamics of their own. Closed-form analytical expressions of component costs are also derived for a mechanical system described by its modal data, which is very useful for computing the modal costs of very high order systems. A numerical example for the MINIMAST system is presented.

  2. Independent Component Analysis in Multimedia Modeling

    DEFF Research Database (Denmark)

    Larsen, Jan

    2003-01-01

    Modeling of multimedia and multimodal data becomes increasingly important with the digitalization of the world. The objective of this paper is to demonstrate the potential of independent component analysis and blind source separation methods for modeling and understanding of multimedia data, which largely refers to text, images/video, audio and combinations of such data. We review a number of applications within single and combined media with the hope that this might provide inspiration for further research in this area. Finally, we provide a detailed presentation of our own recent work on modeling...

  3. PCA: Principal Component Analysis for spectra modeling

    Science.gov (United States)

    Hurley, Peter D.; Oliver, Seb; Farrah, Duncan; Wang, Lingyu; Efstathiou, Andreas

    2012-07-01

    The mid-infrared spectra of ultraluminous infrared galaxies (ULIRGs) contain a variety of spectral features that can be used as diagnostics to characterize the spectra. However, such diagnostics are biased by our prior prejudices on the origin of the features. Moreover, by using only part of the spectrum they do not utilize the full information content of the spectra. Blind statistical techniques such as principal component analysis (PCA) consider the whole spectrum, find correlated features and separate them out into distinct components. This code, written in IDL, classifies principal components of IRS spectra to define a new classification scheme using 5D Gaussian mixture modelling. The five PCs and the average spectra for the four classifications used to classify objects are made available with the code.
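
    A minimal sketch of the two-step recipe the abstract describes (PCA on the spectra, then a Gaussian mixture in PC space) is given below. The original code is in IDL; this Python version uses synthetic stand-in spectra, and the array sizes are assumptions, with the 5 PCs and 4 classes taken from the abstract.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    spectra = rng.normal(size=(200, 350))   # stand-in for 200 IRS spectra, 350 wavelength bins

    # Project each spectrum onto the leading principal components.
    pca = PCA(n_components=5)
    scores = pca.fit_transform(spectra)     # (200, 5) PC amplitudes per object

    # Cluster objects in PC space with a Gaussian mixture; each mixture
    # component plays the role of one spectral classification.
    gmm = GaussianMixture(n_components=4, random_state=0).fit(scores)
    classes = gmm.predict(scores)
    print(np.bincount(classes))
    ```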

  4. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.

  5. How Many Separable Sources? Model Selection In Independent Components Analysis

    Science.gov (United States)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  6. Generalized structured component analysis a component-based approach to structural equation modeling

    CERN Document Server

    Hwang, Heungsun

    2014-01-01

    Winner of the 2015 Sugiyama Meiko Award (Publication Award) of the Behaviormetric Society of Japan Developed by the authors, generalized structured component analysis is an alternative to two longstanding approaches to structural equation modeling: covariance structure analysis and partial least squares path modeling. Generalized structured component analysis allows researchers to evaluate the adequacy of a model as a whole, compare a model to alternative specifications, and conduct complex analyses in a straightforward manner. Generalized Structured Component Analysis: A Component-Based Approach to Structural Equation Modeling provides a detailed account of this novel statistical methodology and its various extensions. The authors present the theoretical underpinnings of generalized structured component analysis and demonstrate how it can be applied to various empirical examples. The book enables quantitative methodologists, applied researchers, and practitioners to grasp the basic concepts behind this new a...

  7. Sparse Principal Component Analysis in Medical Shape Modeling

    DEFF Research Database (Denmark)

    Sjöstrand, Karl; Stegmann, Mikkel Bille; Larsen, Rasmus

    2006-01-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of sufficiently small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA...

  8. Sparse principal component analysis in medical shape modeling

    Science.gov (United States)

    Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus

    2006-03-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
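
    The variance-based ordering of sparse modes proposed in the article can be sketched as follows; scikit-learn's SparsePCA stands in for the algorithm discussed in the paper, and all data sizes and the regularization weight are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import SparsePCA

    rng = np.random.default_rng(1)
    shapes = rng.normal(size=(60, 128))     # stand-in for 60 shapes, 128 landmark coordinates

    spca = SparsePCA(n_components=5, alpha=1.0, random_state=1)
    scores = spca.fit_transform(shapes)

    # SparsePCA does not order its modes by importance; order them by
    # decreasing variance of the scores, as the article proposes.
    var = scores.var(axis=0)
    order = np.argsort(var)[::-1]
    components = spca.components_[order]
    print(var[order])
    ```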

  9. Modeling and Analysis of Component Faults and Reliability

    DEFF Research Database (Denmark)

    Le Guilly, Thibaut; Olsen, Petur; Ravn, Anders Peter

    2016-01-01

    This chapter presents a process to design and validate models of reactive systems in the form of communicating timed automata. The models are extended with faults associated with probabilities of occurrence. This enables a fault tree analysis of the system using minimal cut sets that are automatically generated. The stochastic information on the faults is used to estimate the reliability of the fault affected system. The reliability is given with respect to properties of the system state space. We illustrate the process on a concrete example using the Uppaal model checker for validating the ideal system model and the fault modeling. Then the statistical version of the tool, UppaalSMC, is used to find reliability estimates.

  10. A Bayesian Analysis of Unobserved Component Models Using Ox

    Directory of Open Access Journals (Sweden)

    Charles S. Bos

    2011-05-01

    This article details a Bayesian analysis of the Nile river flow data, using a similar state space model as other articles in this volume. For this data set, Metropolis-Hastings and Gibbs sampling algorithms are implemented in the programming language Ox. These Markov chain Monte Carlo methods only provide output conditioned upon the full data set. For filtered output, conditioning only on past observations, the particle filter is introduced. The sampling methods are flexible, and this advantage is used to extend the model to incorporate a stochastic volatility process. The volatility changes both in the Nile data and also in daily S&P 500 return data are investigated. The posterior density of parameters and states is found to provide information on which elements of the model are easily identifiable, and which elements are estimated with less precision.
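
    For orientation, here is a minimal random-walk Metropolis sketch for a local level (unobserved components) model, the simplest member of the model family discussed above. It is plain Python rather than the article's Ox code; the toy data, the flat prior on the log-variances, and the tuning constants are all assumptions.

    ```python
    import numpy as np

    # Toy stand-in for the Nile flow data: a level shift plus noise.
    rng = np.random.default_rng(2)
    y = np.concatenate([rng.normal(1100, 120, 50), rng.normal(850, 120, 50)])

    def loglik(y, var_eps, var_eta):
        """Kalman-filter log-likelihood of a local level model."""
        a, p, ll = y[0], var_eps, 0.0
        for t in range(1, len(y)):
            p = p + var_eta                   # predict the level variance
            f = p + var_eps                   # innovation variance
            v = y[t] - a                      # innovation
            ll += -0.5 * (np.log(2 * np.pi * f) + v * v / f)
            k = p / f                         # Kalman gain
            a, p = a + k * v, p * (1 - k)     # measurement update
        return ll

    # Random-walk Metropolis on the log-variances (flat prior assumed).
    theta = np.log([10000.0, 1000.0])
    ll = loglik(y, *np.exp(theta))
    chain = []
    for _ in range(5000):
        prop = theta + rng.normal(0, 0.1, 2)
        ll_prop = loglik(y, *np.exp(prop))
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        chain.append(np.exp(theta))
    print(np.mean(chain[1000:], axis=0))      # posterior means of the two variances
    ```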

  11. Five-component propagation model for steam explosion analysis

    International Nuclear Information System (INIS)

    Yang, Y.; Moriyama, Kiyofumi; Park, H.S.; Maruyama, Yu; Sugimoto, Jun

    1999-01-01

    A five-field simulation code, JASMINE-pro, has been developed at JAERI for the calculation of the propagation and explosion phase of steam explosions. The basic equations and the constitutive relationships specifically utilized in the propagation models of the code are introduced in this paper. Some calculations simulating the KROTOS 1D and 2D steam explosion experiments are also presented to show the present capability of the code. (author)

  12. Reliability analysis of nuclear component cooling water system using semi-Markov process model

    International Nuclear Information System (INIS)

    Veeramany, Arun; Pandey, Mahesh D.

    2011-01-01

    Research highlights: → Semi-Markov process (SMP) model is used to evaluate the system failure probability of the nuclear component cooling water (NCCW) system. → SMP is used because it can solve a reliability block diagram with a mixture of redundant repairable and non-repairable components. → The primary objective is to demonstrate that SMP can consider a Weibull failure time distribution for components while a Markov model cannot. → Result: the variability in component failure time is directly proportional to the NCCW system failure probability. → The result can be utilized as an initiating event probability in probabilistic safety assessment projects. - Abstract: A reliability analysis of the nuclear component cooling water (NCCW) system is carried out. A semi-Markov process model is used in the analysis because it has the potential to solve a reliability block diagram with a mixture of repairable and non-repairable components. With Markov models it is only possible to assume an exponential profile for component failure times. An advantage of the proposed model is the ability to assume a Weibull distribution for the failure time of components. In an attempt to reduce the number of states in the model, it is shown that usage of the poly-Weibull distribution arises. The objective of the paper is to determine the system failure probability under these assumptions. Monte Carlo simulation is used to validate the model result. This result can be utilized as an initiating event probability in probabilistic safety assessment projects.
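
    The Monte Carlo validation step can be illustrated with a deliberately simplified sketch: a two-train parallel (1-out-of-2) system with Weibull failure times and, unlike the paper's semi-Markov treatment, no repair. All parameter values are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n, t_mission = 100_000, 8760.0      # trials; mission time in hours (assumed)
    shape, scale = 1.5, 20_000.0        # assumed Weibull parameters per train

    # The parallel system fails within the mission only if both trains do.
    t1 = scale * rng.weibull(shape, n)
    t2 = scale * rng.weibull(shape, n)
    p_mc = np.mean(np.maximum(t1, t2) < t_mission)

    # Analytic check: P = F(t)^2 with F the Weibull CDF.
    F = 1.0 - np.exp(-(t_mission / scale) ** shape)
    print(p_mc, F**2)
    ```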

  13. Revealing the equivalence of two clonal survival models by principal component analysis

    International Nuclear Information System (INIS)

    Lachet, Bernard; Dufour, Jacques

    1976-01-01

    The principal component analysis of 21 chlorella cell survival curves, adjusted by one-hit and two-hit target models, led to quite similar projections on the principal plane: the homologous parameters of these models are linearly correlated; the reason for the statistical equivalence of these two models, in the present state of experimental inaccuracy, is revealed [fr

  14. Generalized modeling of multi-component vaporization/condensation phenomena for multi-phase-flow analysis

    International Nuclear Information System (INIS)

    Morita, K.; Fukuda, K.; Tobita, Y.; Kondo, Sa.; Suzuki, T.; Maschek, W.

    2003-01-01

    A new multi-component vaporization/condensation (V/C) model was developed to provide a generalized model for safety analysis codes of liquid metal cooled reactors (LMRs). These codes simulate thermal-hydraulic phenomena of multi-phase, multi-component flows, which is essential for investigating core disruptive accidents of LMRs such as fast breeder reactors and accelerator driven systems. The developed model characterizes the V/C processes associated with phase transition by employing heat transfer and mass-diffusion limited models for analyses of relatively short-time-scale multi-phase, multi-component hydraulic problems, among which vaporization and condensation, or simultaneous heat and mass transfer, play an important role. The heat transfer limited model describes the non-equilibrium phase transition processes occurring at interfaces, while the mass-diffusion limited model is employed to represent effects of non-condensable gases and multi-component mixtures on V/C processes. Verification of the model and method employed in the multi-component V/C model of a multi-phase flow code was performed successfully by analyzing a series of multi-bubble condensation experiments. The applicability of the model to the accident analysis of LMRs is also discussed by comparison between steam and metallic vapor systems. (orig.)

  15. Individual differences in anxiety responses to stressful situations : A three-mode component analysis model

    NARCIS (Netherlands)

    Van Mechelen, Iven; Kiers, Henk A.L.

    1999-01-01

    The three-mode component analysis model is discussed as a tool for a contextualized study of personality. When applied to person x situation x response data, the model includes sets of latent dimensions for persons, situations, and responses as well as a so-called core array, which may be considered

  16. 3-D inelastic analysis methods for hot section components. Volume 2: Advanced special functions models

    Science.gov (United States)

    Wilson, R. B.; Banerjee, P. K.

    1987-01-01

    This Annual Status Report presents the results of work performed during the third year of the 3-D Inelastic Analysis Methods for Hot Sections Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of computer codes that permit more accurate and efficient three-dimensional analyses of selected hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The computer codes embody a progression of mathematical models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components.

  17. BANK CAPITAL AND MACROECONOMIC SHOCKS: A PRINCIPAL COMPONENTS ANALYSIS AND VECTOR ERROR CORRECTION MODEL

    Directory of Open Access Journals (Sweden)

    Christian NZENGUE PEGNET

    2011-07-01

    The recent financial turmoil has clearly highlighted the potential role of financial factors in the amplification of macroeconomic developments and stressed the importance of analyzing the relationship between banks' balance sheets and economic activity. This paper assesses the impact of the bank capital channel in the transmission of shocks in Europe on the basis of banks' balance sheet data. The empirical analysis is carried out through a Principal Component Analysis and a Vector Error Correction Model.

  18. Multiview Bayesian Correlated Component Analysis

    DEFF Research Database (Denmark)

    Kamronn, Simon Due; Poulsen, Andreas Trier; Hansen, Lars Kai

    2015-01-01

    ... are identical. Here we propose a hierarchical probabilistic model that can infer the level of universality in such multiview data, from completely unrelated representations, corresponding to canonical correlation analysis, to identical representations as in correlated component analysis. This new model, which we denote Bayesian correlated component analysis, evaluates favorably against three relevant algorithms in simulated data. A well-established benchmark EEG data set is used to further validate the new model and infer the variability of spatial representations across multiple subjects.

  19. Towards Cognitive Component Analysis

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Ahrendt, Peter; Larsen, Jan

    2005-01-01

    Cognitive component analysis (COCA) is here defined as the process of unsupervised grouping of data such that the ensuing group structure is well-aligned with that resulting from human cognitive activity. We have earlier demonstrated that independent components analysis is relevant for representing...

  20. Machine learning of frustrated classical spin models. I. Principal component analysis

    Science.gov (United States)

    Wang, Ce; Zhai, Hui

    2017-10-01

    This work aims at determining whether artificial intelligence can recognize a phase transition without prior human knowledge. If this were successful, it could be applied to, for instance, analyzing data from the quantum simulation of unsolved physical models. Toward this goal, we first need to apply the machine learning algorithm to well-understood models and see whether the outputs are consistent with our prior knowledge, which serves as the benchmark for this approach. In this work, we feed the computer data generated by the classical Monte Carlo simulation for the XY model in frustrated triangular and union jack lattices, which has two order parameters and exhibits two phase transitions. We show that the outputs of the principal component analysis agree very well with our understanding of different orders in different phases, and the temperature dependences of the major components detect the nature and the locations of the phase transitions. Our work offers promise for using machine learning techniques to study sophisticated statistical models, and our results can be further improved by using principal component analysis with kernel tricks and the neural network method.
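
    A toy illustration of the idea: synthetic ordered and disordered configurations stand in for Monte Carlo samples (simple Ising-like ±1 spins rather than XY angles, and all sizes, are assumptions for brevity); the leading PC score then acts as an order-parameter detector.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(4)
    n_spins = 400

    # Low-temperature stand-ins: spins aligned up or down, plus noise;
    # high-temperature stand-ins: completely random spins.
    up = np.sign(rng.normal(+2.0, 1.0, size=(150, n_spins)))
    down = np.sign(rng.normal(-2.0, 1.0, size=(150, n_spins)))
    disordered = rng.choice([-1.0, 1.0], size=(300, n_spins))
    configs = np.vstack([up, down, disordered])

    scores = PCA(n_components=2).fit_transform(configs)

    # The leading component tracks the magnetization: large in magnitude
    # for ordered samples, near zero for disordered ones.
    print(np.abs(scores[:300, 0]).mean(), np.abs(scores[300:, 0]).mean())
    ```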

  1. Equivalent water height extracted from GRACE gravity field model with robust independent component analysis

    Science.gov (United States)

    Guo, Jinyun; Mu, Dapeng; Liu, Xin; Yan, Haoming; Dai, Honglei

    2014-08-01

    The Level-2 monthly GRACE gravity field models issued by the Center for Space Research (CSR), GeoForschungsZentrum (GFZ), and Jet Propulsion Laboratory (JPL) are treated as observations used to extract the equivalent water height (EWH) with robust independent component analysis (RICA). The smoothing radii of 300, 400, and 500 km are tested, respectively, in the Gaussian smoothing kernel function to reduce the observation Gaussianity. Three independent components are obtained by RICA in the spatial domain; the first component matches the geophysical signal, and the other two match the north-south strips and other noises. The first mode is used to estimate the EWHs of CSR, JPL, and GFZ, and is compared with the classical empirical decorrelation method (EDM). The EWH STDs for 12 months in 2010 extracted by RICA and EDM show obvious fluctuations. The results indicate that the sharp EWH changes in some areas have an important global effect, as in the Amazon, Mekong, and Zambezi basins.

  2. Blind Separation of Acoustic Signals Combining SIMO-Model-Based Independent Component Analysis and Binary Masking

    Directory of Open Access Journals (Sweden)

    Hiekata Takashi

    2006-01-01

    A new two-stage blind source separation (BSS) method for convolutive mixtures of speech is proposed, in which a single-input multiple-output (SIMO) model-based independent component analysis (ICA) and a new SIMO-model-based binary masking are combined. SIMO-model-based ICA enables us to separate the mixed signals, not into monaural source signals but into SIMO-model-based signals from independent sources in their original form at the microphones. Thus, the separated signals of SIMO-model-based ICA can maintain the spatial qualities of each sound source. Owing to this attractive property, our novel SIMO-model-based binary masking can be applied to efficiently remove the residual interference components after SIMO-model-based ICA. The experimental results reveal that the separation performance can be considerably improved by the proposed method compared with that achieved by conventional BSS methods. In addition, the real-time implementation of the proposed BSS is illustrated.

  3. Kernel Principal Component Analysis and its Applications in Face Recognition and Active Shape Models

    OpenAIRE

    Wang, Quan

    2012-01-01

    Principal component analysis (PCA) is a popular tool for linear dimensionality reduction and feature extraction. Kernel PCA is the nonlinear form of PCA, which better exploits the complicated spatial structure of high-dimensional features. In this paper, we first review the basic ideas of PCA and kernel PCA. Then we focus on the reconstruction of pre-images for kernel PCA. We also give an introduction on how PCA is used in active shape models (ASMs), and discuss how kernel PCA can be applied ...

  4. Towards the generation of a parametric foot model using principal component analysis: A pilot study.

    Science.gov (United States)

    Scarton, Alessandra; Sawacha, Zimi; Cobelli, Claudio; Li, Xinshan

    2016-06-01

    There have been many recent developments in patient-specific models, with their potential to provide more information on the human pathophysiology and the increase in computational power. However, they are not yet successfully applied in a clinical setting. One of the main challenges is the time required for mesh creation, which is difficult to automate. The development of parametric models by means of Principal Component Analysis (PCA) represents an appealing solution. In this study PCA has been applied to the feet of a small cohort of diabetic and healthy subjects, in order to evaluate the possibility of developing parametric foot models, and to use them to identify variations and similarities between the two populations. Both the skin and the first metatarsal bones have been examined. Despite the reduced sample of subjects considered in the analysis, results demonstrated that the method adopted herein constitutes a first step towards the realization of a parametric foot model for biomechanical analysis. Furthermore, the study showed that the methodology can successfully describe features in the foot, and evaluate differences in the shape of healthy and diabetic subjects. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.

  5. Multiscale principal component analysis

    International Nuclear Information System (INIS)

    Akinduko, A A; Gorban, A N

    2014-01-01

    Principal component analysis (PCA) is an important tool in exploring data. The conventional approach to PCA leads to a solution which favours the structures with large variances. This is sensitive to outliers and could obfuscate interesting underlying structures. One of the equivalent definitions of PCA is that it seeks the subspaces that maximize the sum of squared pairwise distances between data projections. This definition opens up more flexibility in the analysis of principal components, which is useful in enhancing PCA. In this paper we introduce scales into PCA by maximizing only the sum of pairwise distances between projections for pairs of data points with distances within a chosen interval of values [l,u]. The resulting principal component decompositions in Multiscale PCA depend on point (l,u) on the plane and for each point we define projectors onto principal components. Cluster analysis of these projectors reveals the structures in the data at various scales. Each structure is described by the eigenvectors at the medoid point of the cluster which represents the structure. We also use the distortion of projections as a criterion for choosing an appropriate scale, especially for data with outliers. This method was tested on both artificial data distributions and real data. For data with multiscale structures, the method was able to reveal the different structures of the data and also to reduce the effect of outliers in the principal component analysis.
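
    The pairwise formulation above translates directly into code: restricting the PCA objective to pairs whose distances fall in [l,u] amounts to an eigen-decomposition of the scatter matrix of the selected pairwise differences. The sketch below follows that definition; data sizes and intervals are illustrative.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    def multiscale_pca(X, l, u, n_components=2):
        """PCA restricted to pairs with distance in [l, u]: leading
        eigenvectors of the sum over selected pairs of (x_i - x_j)(x_i - x_j)^T."""
        D = squareform(pdist(X))
        i, j = np.where((D >= l) & (D <= u))   # both orderings; zero diagonal is harmless
        diffs = X[i] - X[j]
        S = diffs.T @ diffs
        w, V = np.linalg.eigh(S)
        return V[:, ::-1][:, :n_components]

    rng = np.random.default_rng(5)
    X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(8, 1, (100, 3))])
    print(multiscale_pca(X, l=0.0, u=4.0))     # small scale: within-cluster structure
    print(multiscale_pca(X, l=6.0, u=20.0))    # large scale: between-cluster direction
    ```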

  6. Two-component model application for error calculus in the environmental monitoring data analysis

    International Nuclear Information System (INIS)

    Carvalho, Maria Angelica G.; Hiromoto, Goro

    2002-01-01

    Analysis and interpretation of results of an environmental monitoring program is often based on the evaluation of the mean value of a particular set of data, which is strongly affected by the analytical errors associated with each measurement. A model proposed by Rocke and Lorenzato assumes two error components, one additive and one multiplicative, to deal with lower and higher concentration values in a single model. In this communication, an application of this method for re-evaluation of the errors reported in a large set of results of total alpha measurements in an environmental sample is presented. The results show that the mean values calculated taking into account the new errors are higher than those obtained with the original errors, an indication that the analytical errors reported before were underestimated in the region of lower concentrations. (author)

  7. Modeling the variability of solar radiation data among weather stations by means of principal components analysis

    International Nuclear Information System (INIS)

    Zarzo, Manuel; Marti, Pau

    2011-01-01

    Research highlights: → Principal components analysis was applied to Rs data recorded at 30 stations. → Four principal components explain 97% of the data variability. → The latent variables can be fitted according to latitude, longitude and altitude. → The PCA approach is more effective for gap infilling than conventional approaches. → The proposed method allows daily Rs estimations at locations in the area of study. - Abstract: Measurements of global terrestrial solar radiation (Rs) are commonly recorded in meteorological stations. Daily variability of Rs has to be taken into account for the design of photovoltaic systems and energy efficient buildings. Principal components analysis (PCA) was applied to Rs data recorded at 30 stations on the Mediterranean coast of Spain. Due to equipment failures and site operation problems, time series of Rs often present data gaps or discontinuities. The PCA approach copes with this problem and allows estimation of present and past values by taking advantage of Rs records from nearby stations. The gap infilling performance of this methodology is compared with neural networks and alternative conventional approaches. Four principal components explain 66% of the data variability with respect to the average trajectory (97% if non-centered values are considered). A new method based on principal components regression was also developed for Rs estimation if previous measurements are not available. By means of multiple linear regression, it was found that the latent variables associated with the four relevant principal components can be fitted according to the latitude, longitude and altitude of the station where the data were recorded. Additional geographical or climatic variables did not increase the predictive goodness-of-fit. The resulting models allow the estimation of daily Rs values at any location in the area under study and present higher accuracy than artificial neural networks and some conventional approaches.
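
    One standard way to realize PCA-based gap infilling, consistent in spirit with (though not necessarily identical to) the paper's method, is iterative low-rank reconstruction; the sketch below uses synthetic data for 30 stations with 5% of values missing.

    ```python
    import numpy as np

    def pca_infill(X, n_components=4, n_iter=50):
        """Fill missing entries (NaN) by iterating a low-rank PCA reconstruction."""
        miss = np.isnan(X)
        Y = np.where(miss, np.nanmean(X, axis=0), X)   # start from station means
        for _ in range(n_iter):
            mu = Y.mean(axis=0)
            U, s, Vt = np.linalg.svd(Y - mu, full_matrices=False)
            recon = mu + (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
            Y[miss] = recon[miss]                      # update only the gaps
        return Y

    rng = np.random.default_rng(6)
    days, stations = 365, 30
    base = 15 + 10 * np.sin(2 * np.pi * np.arange(days) / 365)[:, None]
    X = base + rng.normal(0, 1.5, (days, stations))    # synthetic daily Rs records
    X[rng.random(X.shape) < 0.05] = np.nan             # 5% gaps
    print(np.isnan(pca_infill(X)).any())               # False: all gaps filled
    ```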

  8. Human reliability in non-destructive inspections of nuclear power plant components: modeling and analysis

    International Nuclear Information System (INIS)

    Vasconcelos, Vanderley de; Soares, Wellington Antonio; Marques, Raíssa Oliveira; Silva Júnior, Silvério Ferreira da; Raso, Amanda Laureano

    2017-01-01

    Non-destructive inspection (NDI) is one of the key elements in ensuring quality of engineering systems and their safe use. NDI is a very complex task, during which the inspectors have to rely on their sensory, perceptual, cognitive, and motor skills. It requires high vigilance, since it is often carried out on large components, over a long period of time, and in hostile environments and restricted workplaces. A successful NDI requires careful planning, choice of appropriate NDI methods and inspection procedures, as well as qualified and trained inspection personnel. A failure of NDI to detect critical defects in safety-related components of nuclear power plants, for instance, may lead to catastrophic consequences for workers, the public and the environment. Therefore, ensuring that NDI methods are reliable and capable of detecting all critical defects is of utmost importance. Despite the increased use of automation in NDI, human inspectors, and thus human factors, still play an important role in NDI reliability. Human reliability is the probability of humans conducting specific tasks with satisfactory performance. Many techniques are suitable for modeling and analyzing human reliability in NDI of nuclear power plant components. Among these can be highlighted Failure Modes and Effects Analysis (FMEA) and THERP (Technique for Human Error Rate Prediction). The application of these techniques is illustrated in an example of qualitative and quantitative studies to improve typical NDI of pipe segments of a core cooling system of a nuclear power plant, through acting on human factors issues. (author)

  9. Human reliability in non-destructive inspections of nuclear power plant components: modeling and analysis

    Energy Technology Data Exchange (ETDEWEB)

    Vasconcelos, Vanderley de; Soares, Wellington Antonio; Marques, Raíssa Oliveira; Silva Júnior, Silvério Ferreira da; Raso, Amanda Laureano, E-mail: vasconv@cdtn.br, E-mail: soaresw@cdtn.br, E-mail: raissaomarques@gmail.com, E-mail: silvasf@cdtn.br, E-mail: amandaraso@hotmail.com [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2017-07-01

    Non-destructive inspection (NDI) is one of the key elements in ensuring quality of engineering systems and their safe use. NDI is a very complex task, during which the inspectors have to rely on their sensory, perceptual, cognitive, and motor skills. It requires high vigilance, since it is often carried out on large components, over a long period of time, and in hostile environments and restricted workplaces. A successful NDI requires careful planning, choice of appropriate NDI methods and inspection procedures, as well as qualified and trained inspection personnel. A failure of NDI to detect critical defects in safety-related components of nuclear power plants, for instance, may lead to catastrophic consequences for workers, the public and the environment. Therefore, ensuring that NDI methods are reliable and capable of detecting all critical defects is of utmost importance. Despite the increased use of automation in NDI, human inspectors, and thus human factors, still play an important role in NDI reliability. Human reliability is the probability of humans conducting specific tasks with satisfactory performance. Many techniques are suitable for modeling and analyzing human reliability in NDI of nuclear power plant components. Among these can be highlighted Failure Modes and Effects Analysis (FMEA) and THERP (Technique for Human Error Rate Prediction). The application of these techniques is illustrated in an example of qualitative and quantitative studies to improve typical NDI of pipe segments of a core cooling system of a nuclear power plant, through acting on human factors issues. (author)

  10. Cognitive Component Analysis

    DEFF Research Database (Denmark)

    Feng, Ling

    2008-01-01

    This dissertation concerns the investigation of the consistency of statistical regularities in a signaling ecology and human cognition, while inferring appropriate actions for a speech-based perceptual task. It is based on unsupervised Independent Component Analysis providing a rich spectrum of audio contexts along with pattern recognition methods to map components to known contexts. It also involves looking for the right representations for auditory inputs, i.e. the data analytic processing pipelines invoked by human brains. The main ideas refer to Cognitive Component Analysis, defined as the process of unsupervised grouping of generic data such that the ensuing group structure is well-aligned with that resulting from human cognitive activity. Its hypothesis runs ecologically: features which are essentially independent in a context defined ensemble, can be efficiently coded as sparse...

  11. Euler principal component analysis

    NARCIS (Netherlands)

    Liwicki, Stephan; Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    Principal Component Analysis (PCA) is perhaps the most prominent learning tool for dimensionality reduction in pattern recognition and computer vision. However, the ℓ2-norm employed by standard PCA is not robust to outliers. In this paper, we propose a kernel PCA method for fast and robust PCA,

  12. Bayesian Independent Component Analysis

    DEFF Research Database (Denmark)

    Winther, Ole; Petersen, Kaare Brandt

    2007-01-01

    In this paper we present an empirical Bayesian framework for independent component analysis. The framework provides estimates of the sources, the mixing matrix and the noise parameters, and is flexible with respect to choice of source prior and the number of sources and sensors. Inside the engine...

  13. Developing a Model Component

    Science.gov (United States)

    Fields, Christina M.

    2013-01-01

    The Spaceport Command and Control System (SCCS) Simulation Computer Software Configuration Item (CSCI) is responsible for providing simulations to support test and verification of SCCS hardware and software. The Universal Coolant Transporter System (UCTS) was a Space Shuttle Orbiter support piece of the Ground Servicing Equipment (GSE). The initial purpose of the UCTS was to provide two support services to the Space Shuttle Orbiter immediately after landing at the Shuttle Landing Facility. The UCTS is designed with the capability of servicing future space vehicles; including all Space Station Requirements necessary for the MPLM Modules. The Simulation uses GSE Models to stand in for the actual systems to support testing of SCCS systems during their development. As an intern at Kennedy Space Center (KSC), my assignment was to develop a model component for the UCTS. I was given a fluid component (dryer) to model in Simulink. I completed training for UNIX and Simulink. The dryer is a Catch All replaceable core type filter-dryer. The filter-dryer provides maximum protection for the thermostatic expansion valve and solenoid valve from dirt that may be in the system. The filter-dryer also protects the valves from freezing up. I researched fluid dynamics to understand the function of my component. The filter-dryer was modeled by determining affects it has on the pressure and velocity of the system. I used Bernoulli's Equation to calculate the pressure and velocity differential through the dryer. I created my filter-dryer model in Simulink and wrote the test script to test the component. I completed component testing and captured test data. The finalized model was sent for peer review for any improvements. I participated in Simulation meetings and was involved in the subsystem design process and team collaborations. I gained valuable work experience and insight into a career path as an engineer.
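
    The Bernoulli balance mentioned above reduces to a one-line calculation; the sketch below is in Python rather than Simulink, and every numeric value is an illustrative assumption.

    ```python
    # Bernoulli balance across the filter-dryer, elevation change and
    # friction losses ignored:
    #   p_in + 0.5*rho*v_in**2 = p_out + 0.5*rho*v_out**2
    rho = 1000.0              # coolant density, kg/m^3 (assumed water-like)
    v_in, v_out = 2.0, 3.5    # inlet/outlet flow velocities, m/s (assumed)
    p_in = 300e3              # inlet pressure, Pa (assumed)

    p_out = p_in + 0.5 * rho * (v_in**2 - v_out**2)
    print(f"pressure drop across the dryer: {p_in - p_out:.0f} Pa")
    ```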

  14. Fault Detection of Reciprocating Compressors using a Model from Principal Component Analysis of Vibrations

    International Nuclear Information System (INIS)

    Ahmed, M; Gu, F; Ball, A D

    2012-01-01

    Traditional vibration monitoring techniques have found it difficult to determine a set of effective diagnostic features due to the high complexity of the vibration signals originating from the many different impact sources and wide ranges of practical operating conditions. In this paper Principal Component Analysis (PCA) is used for selecting vibration features and detecting different faults in a reciprocating compressor. Vibration datasets were collected from the compressor under the baseline condition and five common faults: valve leakage, inter-cooler leakage, suction valve leakage, loose drive belt combined with intercooler leakage, and loose drive belt combined with suction valve leakage. A model using five PCs has been developed using the baseline data sets, and the presence of faults can be detected by comparing the T2 and Q values from the features of fault vibration signals with corresponding thresholds developed from baseline data. However, the Q-statistic procedure produces a better detection as it can separate the five faults completely.
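
    A hedged sketch of the T2/Q detection logic follows, on synthetic stand-in features; the percentile-based thresholds are a simplifying assumption (F- and chi-square-based control limits are the textbook alternative).

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(7)
    baseline = rng.normal(size=(500, 20))    # healthy-condition vibration features
    pca = PCA(n_components=5).fit(baseline)

    def t2_q(x):
        """Hotelling T2 inside the 5-PC subspace and Q (squared residual) outside it."""
        t = pca.transform(x[None])[0]
        t2 = np.sum(t**2 / pca.explained_variance_)
        resid = x - pca.mean_ - t @ pca.components_
        return t2, resid @ resid

    # Thresholds from the baseline distribution (99th percentile, assumed).
    stats = np.array([t2_q(x) for x in baseline])
    t2_lim, q_lim = np.percentile(stats, 99, axis=0)

    fault = rng.normal(size=20) + 1.5        # shifted operating point
    t2, q = t2_q(fault)
    print(t2 > t2_lim, q > q_lim)            # fault flagged by either statistic
    ```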

  15. Principal component analysis acceleration of rovibrational coarse-grain models for internal energy excitation and dissociation

    Science.gov (United States)

    Bellemans, Aurélie; Parente, Alessandro; Magin, Thierry

    2018-04-01

    The present work introduces a novel approach for obtaining reduced chemistry representations of large kinetic mechanisms in strong non-equilibrium conditions. The need for accurate reduced-order models arises from compression of large ab initio quantum chemistry databases for their use in fluid codes. The method presented in this paper builds on existing physics-based strategies and proposes a new approach based on the combination of a simple coarse grain model with Principal Component Analysis (PCA). The internal energy levels of the chemical species are regrouped in distinct energy groups with a uniform lumping technique. Following the philosophy of machine learning, PCA is applied on the training data provided by the coarse grain model to find an optimally reduced representation of the full kinetic mechanism. Compared to recently published complex lumping strategies, no expert judgment is required before the application of PCA. In this work, we will demonstrate the benefits of the combined approach, stressing its simplicity, reliability, and accuracy. The technique is demonstrated by reducing the complex quantum N2(1Σg+)-N(4Su) database for studying molecular dissociation and excitation in strong non-equilibrium. Starting from detailed kinetics, an accurate reduced model is developed and used to study non-equilibrium properties of the N2(1Σg+)-N(4Su) system in shock relaxation simulations.

  16. Day-Ahead Crude Oil Price Forecasting Using a Novel Morphological Component Analysis Based Model

    Directory of Open Access Journals (Sweden)

    Qing Zhu

    2014-01-01

    As a typical nonlinear and dynamic system, the crude oil price movement is difficult to predict and its accurate forecasting remains the subject of intense research activity. Recent empirical evidence suggests that the multiscale data characteristics in the price movement are another important stylized fact. The incorporation of a mixture of data characteristics in the time scale domain during the modelling process can lead to significant performance improvement. This paper proposes a novel morphological component analysis based hybrid methodology for modeling the multiscale heterogeneous characteristics of the price movement in the crude oil markets. Empirical studies in two representative benchmark crude oil markets reveal the existence of a multiscale heterogeneous microdata structure. The significant performance improvement of the proposed algorithm incorporating the heterogeneous data characteristics, against benchmark random walk, ARMA, and SVR models, is also attributed to the innovative methodology proposed to incorporate this important stylized fact during the modelling process. Meanwhile, work in this paper offers additional insights into the heterogeneous market microstructure with economically viable interpretations.

  17. A Fault Prognosis Strategy Based on Time-Delayed Digraph Model and Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Ningyun Lu

    2012-01-01

    Because of the interlinking of process equipment in the process industry, event information may propagate through the plant and affect a lot of downstream process variables. Specifying the causality and estimating the time delays among process variables are critically important for data-driven fault prognosis. They are not only helpful to find the root cause when a plant-wide disturbance occurs, but also to reveal the evolution of an abnormal event propagating through the plant. This paper concerns the information flow directionality and time-delay estimation problems in the process industry and presents an information synchronization technique to assist fault prognosis. Time-delayed mutual information (TDMI) is used for both causality analysis and time-delay estimation. To represent the causality structure of high-dimensional process variables, a time-delayed signed digraph (TD-SDG) model is developed. Then, a general fault prognosis strategy is developed based on the TD-SDG model and principal component analysis (PCA). The proposed method is applied to an air separation unit and has achieved satisfying results in predicting the frequently occurring "nitrogen-block" fault.
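
    A minimal sketch of delay estimation with time-delayed mutual information, using a simple histogram MI estimator on synthetic series; the bin count and lag range are assumptions.

    ```python
    import numpy as np

    def mutual_info(x, y, bins=16):
        """Histogram estimate of the mutual information between two series."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = pxy / pxy.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        nz = pxy > 0
        return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

    def tdmi_delay(x, y, max_lag=50):
        """Pick the lag of y relative to x that maximizes mutual information."""
        mis = [mutual_info(x[:-lag or None], y[lag:]) for lag in range(max_lag + 1)]
        return int(np.argmax(mis))

    rng = np.random.default_rng(8)
    x = np.cumsum(rng.normal(size=2000))           # upstream process variable
    y = np.roll(x, 12) + rng.normal(0, 0.5, 2000)  # downstream, delayed 12 samples
    print(tdmi_delay(x, y))                        # expected: about 12
    ```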

  18. Trajectory modeling of gestational weight: A functional principal component analysis approach.

    Directory of Open Access Journals (Sweden)

    Menglu Che

    Suboptimal gestational weight gain (GWG), which is linked to increased risk of adverse outcomes for a pregnant woman and her infant, is prevalent. In the study of a large cohort of Canadian pregnant women, our goals are to estimate the individual weight growth trajectory using sparsely collected bodyweight data, and to identify the factors affecting the weight change during pregnancy, such as prepregnancy body mass index (BMI), dietary intakes and physical activity. The first goal was achieved through functional principal component analysis (FPCA) by conditional expectation. For the second goal, we used linear regression with the total weight gain as the response variable. The trajectory modeling through FPCA had a significantly smaller root mean square error (RMSE) and improved adaptability compared with the classic nonlinear mixed-effect models, demonstrating a novel tool that can be used to facilitate real time monitoring and interventions of GWG. Our regression analysis showed that prepregnancy BMI had a high predictive value for the weight changes during pregnancy, which agrees with the published weight gain guideline.

  19. Shifted Independent Component Analysis

    DEFF Research Database (Denmark)

    Mørup, Morten; Madsen, Kristoffer Hougaard; Hansen, Lars Kai

    2007-01-01

    Delayed mixing is a problem of theoretical interest and practical importance, e.g., in speech processing, bio-medical signal analysis and financial data modelling. Most previous analyses have been based on models with integer shifts, i.e., shifts by a number of samples, and have often been carried...

  20. Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods

    Science.gov (United States)

    Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.

    2011-12-01

    Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on parameterization and the problems undertaken. This makes inverse estimation and uncertainty quantification very challenging, especially for those problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential of mitigating the curse of dimensionality by reducing the total number of unknowns while describing the complex subsurface systems adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performances of three different but closely related model reduction approaches: (1) PCA methods with geometric sampling (referred to as 'Method 1'), (2) PCA methods with MCMC sampling (referred to as 'Method 2'), and (3) PCA methods with MCMC sampling and inclusion of random effects (referred to as 'Method 3'). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing their inversion results with the corresponding analytical solutions. We generated synthetic data with noise added and inverted them under two different situations: (1) the noisy data and the covariance matrix for PCA analysis are consistent (referred to as the unbiased case), and (2) the noisy data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values and Method 1 is computationally more efficient. In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to inclusion

  1. Implementation of wall film condensation model to two-fluid model in component thermal hydraulic analysis code CUPID - 15237

    International Nuclear Information System (INIS)

    Lee, J.H.; Park, G.C.; Cho, H.K.

    2015-01-01

    In the containment of a nuclear reactor, wall condensation occurs when the containment cooling system and structures remove the mass and energy release, and this phenomenon is of great importance for ensuring containment integrity. If the phenomenon occurs in the presence of non-condensable gases, their accumulation near the condensate film leads to a significant reduction in heat transfer during the condensation. This study aims at simulating the wall film condensation in the presence of non-condensable gas using CUPID, a computational multi-fluid dynamics code developed by the Korea Atomic Energy Research Institute (KAERI) for the analysis of transient two-phase flows in nuclear reactor components. In order to simulate the wall film condensation in containment, the code requires a proper wall condensation model and a liquid film model applicable to the analysis of large scale systems. In the present study, the liquid film model and wall film condensation model were implemented in the two-fluid model of CUPID. For the condensation simulation, a wall function approach with heat and mass transfer analogy was applied in order to save computational time without considerable refinement of the boundary layer. This paper presents the implemented wall film condensation model and then introduces the simulation result using CUPID with the model for a conceptual condensation problem in a large system. (authors)

  2. Analysis of detached recombining plasmas by collisional-radiative model with energetic electron component

    International Nuclear Information System (INIS)

    Ohno, N.; Motoyama, M.; Takamura, S.

    2001-01-01

    ... using a CR model for a helium plasma (Goto-Fujimoto code), in which the energetic electron component (electron beam) is taken into account in addition to the bulk electron Maxwellian distribution function. It is found that the bulk electron temperature evaluated with the Boltzmann plot method tends to decrease with an increase in the electron beam density and/or energy, because the population densities in relatively lower excited states become large compared with those in higher excited states. This result agrees with the experimental observations. We have also analyzed the transition of a recombining plasma to an ionizing one and vice versa in detail. This analysis can reproduce the inverse ELM phenomena observed in JET and ASDEX-U. (orig.)

  3. Damage Detection of Refractory Based on Principle Component Analysis and Gaussian Mixture Model

    Directory of Open Access Journals (Sweden)

    Changming Liu

    2018-01-01

    Acoustic emission (AE) technique is a common approach to identify the damage of refractories; however, the problem is complex since there are as many as fifteen involved parameters, which calls for effective data processing and classification algorithms to reduce the level of complexity. In this paper, experiments involving three-point bending tests of refractories were conducted and AE signals were collected. A new data processing method of merging the similar parameters in the description of the damage and reducing the dimension was developed. By means of principal component analysis (PCA) for dimension reduction, the fifteen related parameters can be reduced to two parameters. The parameters were linear combinations of the fifteen original parameters and taken as the indexes for damage classification. Based on the proposed approach, the Gaussian mixture model was integrated with the Bayesian information criterion to group the AE signals into two damage categories, which accounted for 99% of all damage. Electron microscope scanning of the refractories verified the two types of damage.
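
    A brief sketch of the PCA-plus-Gaussian-mixture pipeline, with the Bayesian information criterion selecting the number of damage categories; the stand-in AE features and all sizes are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(9)
    # Stand-in for AE hits: fifteen waveform parameters per hit, two groups.
    hits = np.vstack([rng.normal(0, 1, (150, 15)), rng.normal(3, 1, (150, 15))])

    scores = PCA(n_components=2).fit_transform(hits)   # 15 parameters -> 2 indexes

    # Choose the number of damage categories by minimizing the BIC.
    models = [GaussianMixture(k, random_state=0).fit(scores) for k in range(1, 6)]
    best = min(models, key=lambda m: m.bic(scores))
    print(best.n_components, np.bincount(best.predict(scores)))
    ```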

  4. A Bayesian analysis of the PPP puzzle using an unobserved components model

    NARCIS (Netherlands)

    R.H. Kleijn (Richard); H.K. van Dijk (Herman)

    2001-01-01

    textabstractThe failure to describe the time series behaviour of most real exchange rates as temporary deviations from fixed long-term means may be due to time variation of the equilibria themselves, see Engel (2000). We implement this idea using an unobserved components model and decompose the

  5. Analysis and Modeling of the Galvanic Skin Response Spontaneous Component in the context of Intelligent Biofeedback Systems Development

    Science.gov (United States)

    Unakafov, A.

    2009-01-01

    The paper presents an approach to the analysis and modeling of the galvanic skin response (GSR) spontaneous component. In the study, a classification of biofeedback training methods is given and the importance of developing intelligent methods is shown. The INTENS method, which is promising for intellectualization, is presented. An important problem of biofeedback training method intellectualization, estimation of the GSR spontaneous component, is solved in the main part of the work. Its main characteristics are described and results of modeling the GSR spontaneous component are shown. Results of a small study of the optimum material for GSR probes are also presented.

  6. Fault feature extraction method based on local mean decomposition Shannon entropy and improved kernel principal component analysis model

    Directory of Open Access Journals (Sweden)

    Jinlu Sheng

    2016-07-01

    To effectively extract the typical features of a bearing, a new method combining local mean decomposition Shannon entropy and an improved kernel principal component analysis model is proposed. First, the features are extracted by a time-frequency domain method, local mean decomposition, and the Shannon entropy is used to process the original separated product functions, so as to get the original features. However, the extracted features still contain superfluous information; the nonlinear multi-feature processing technique, kernel principal component analysis, is introduced to fuse the characters. The kernel principal component analysis is improved by a weight factor. The extracted characteristic features were input into a Morlet wavelet kernel support vector machine to get the bearing running state classification model, and the bearing running state was thereby identified. Both test and actual cases were analyzed.

  7. Principal component analysis and neurocomputing-based models for total ozone concentration over different urban regions of India

    Science.gov (United States)

    Chattopadhyay, Goutami; Chattopadhyay, Surajit; Chakraborthy, Parthasarathi

    2012-07-01

    The present study deals with daily total ozone concentration time series over four metro cities of India, namely Kolkata, Mumbai, Chennai, and New Delhi, in a multivariate environment. Using the Kaiser-Meyer-Olkin measure, it is established that the data set under consideration is suitable for principal component analysis. Subsequently, by introducing the rotated component matrix for the principal components, the predictors suitable for generating an artificial neural network (ANN) for daily total ozone prediction are identified. The multicollinearity is removed in this way. ANN models in the form of multilayer perceptrons trained through backpropagation learning are generated for all of the study zones, and the model outcomes are assessed statistically. Measuring various statistics like Pearson correlation coefficients, Willmott's indices, percentage errors of prediction, and mean absolute errors, it is observed that for Mumbai and Kolkata the proposed ANN model generates very good predictions. The results are supported by the linearly distributed coordinates in the scatterplots.
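
    A hedged sketch of the two-stage design, PCA-based decorrelation followed by a backpropagation-trained multilayer perceptron; the synthetic predictors, network size, and train/test split are assumptions, not the paper's configuration.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(10)
    X = rng.normal(size=(1000, 8))                        # stand-in daily predictors
    y = X[:, :3].sum(axis=1) + rng.normal(0, 0.3, 1000)   # total ozone proxy

    # PCA removes multicollinearity among the predictors before the MLP.
    model = make_pipeline(StandardScaler(), PCA(n_components=4),
                          MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                                       random_state=0))
    model.fit(X[:800], y[:800])
    print(model.score(X[800:], y[800:]))                  # R^2 on held-out days
    ```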

  8. Independent component analysis: recent advances

    OpenAIRE

    Hyvärinen, Aapo

    2013-01-01

    Independent component analysis is a probabilistic method for learning a linear transform of a random vector. The goal is to find components that are maximally independent and non-Gaussian (non-normal). Its fundamental difference to classical multi-variate statistical methods is in the assumption of non-Gaussianity, which enables the identification of original, underlying components, in contrast to classical methods. The basic theory of independent component analysis was mainly developed in th...

  9. Estimation of Properties of Pure Components Using Improved Group-Contribution+ (GC+) Based Models and Uncertainty Analysis

    DEFF Research Database (Denmark)

    Hukkerikar, Amol; Sarup, Bent; Abildskov, Jens

    ... and uncertainty analysis, in general, is developed and used. In total, 21 properties of pure components, which include normal boiling point, critical constants, and normal melting point among others, have been analysed. The statistical analysis of the model performance for these properties is highlighted through several illustrative examples. Important issues related to property modeling, such as thermodynamic consistency of the predicted properties (relation of normal boiling point versus critical temperature, etc.), are analysed. The developed methodology is simple, yet sound and effective, and provides not only...

  10. SU-E-I-58: Objective Models of Breast Shape Undergoing Mammography and Tomosynthesis Using Principal Component Analysis.

    Science.gov (United States)

    Feng, Ssj; Sechopoulos, I

    2012-06-01

    To develop an objective model of the shape of the compressed breast undergoing mammographic or tomosynthesis acquisition. Automated thresholding and edge detection were performed on 984 anonymized digital mammograms (492 craniocaudal (CC) view mammograms and 492 mediolateral oblique (MLO) view mammograms) to extract the edge of each breast. Principal Component Analysis (PCA) was performed on these edge vectors to identify a limited set of parameters and eigenvectors that characterize the major modes of variation in breast shape. These parameters and eigenvectors comprise a model that can be used to describe the breast shapes present in acquired mammograms and to generate realistic models of breasts undergoing acquisition. Sample breast shapes were then generated from this model and evaluated. The mammograms in the database were previously acquired for a separate study and authorized for use in further research. The PCA successfully identified two principal components and their corresponding eigenvectors, forming the basis for the breast shape model. The simulated breast shapes generated from the model are reasonable approximations of clinically acquired mammograms. Using PCA, we have obtained models of the compressed breast undergoing mammographic or tomosynthesis acquisition based on objective analysis of a large image database. Up to now, the breast in the CC view has been approximated as a semi-circular tube, while there has been no objectively-obtained model for the MLO view breast shape. Such models can be used for various breast imaging research applications, such as x-ray scatter estimation and correction, dosimetry estimates, and computer-aided detection and diagnosis. © 2012 American Association of Physicists in Medicine.
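
    The shape-generation idea can be sketched as follows, with a synthetic stand-in for the edge vectors (e.g., radial distances sampled along each contour). As in the record, two retained components summarize the shape population, and new shapes come from sampling component scores and inverting the transform:

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(2)
        # Placeholder: one flattened breast-edge contour per mammogram.
        edges = rng.normal(size=(492, 100))

        pca = PCA(n_components=2)                 # the record retains two components
        scores = pca.fit_transform(edges)

        # Sample plausible component scores and reconstruct a synthetic edge.
        new_scores = rng.normal(scale=scores.std(axis=0), size=(1, 2))
        synthetic_edge = pca.inverse_transform(new_scores)
        print(synthetic_edge.shape)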

  11. Component Reification in Systems Modelling

    DEFF Research Database (Denmark)

    Bendisposto, Jens; Hallerstede, Stefan

    When modelling concurrent or distributed systems in Event-B, we often obtain models where the structure of the connected components is specified by constants. Their behaviour is specified by the non-deterministic choice of event parameters for events that operate on shared variables. From a certain......? These components may still refer to shared variables. Events of these components should not refer to the constants specifying the structure. The non-deterministic choice between these components should not be via parameters. We say the components are reified. We need to address how the reified components get...... reflected into the original model. This reflection should indicate the constraints on how to connect the components....

  12. Synchrotron-Based Microspectroscopic Analysis of Molecular and Biopolymer Structures Using Multivariate Techniques and Advanced Multi-Components Modeling

    International Nuclear Information System (INIS)

    Yu, P.

    2008-01-01

    More recently, an advanced synchrotron radiation-based bioanalytical technique (SRFTIRM) has been applied as a novel non-invasive analysis tool to study molecular, functional group and biopolymer chemistry, nutrient make-up and structural conformation in biomaterials. This novel synchrotron technique, taking advantage of bright synchrotron light (which is a million times brighter than sunlight), is capable of exploring biomaterials at molecular and cellular levels. However, with the SRFTIRM technique, a large number of molecular spectral data are usually collected. The objective of this article was to illustrate how to use two multivariate statistical techniques: (1) agglomerative hierarchical cluster analysis (AHCA) and (2) principal component analysis (PCA), and two advanced multicomponent modeling methods: (1) Gaussian and (2) Lorentzian multi-component peak modeling, for molecular spectrum analysis of bio-tissues. The studies indicated that the two multivariate analyses (AHCA, PCA) are able to create molecular spectral classifications by including not just one intensity or frequency point of a molecular spectrum, but by utilizing the entire spectral information. Gaussian and Lorentzian modeling techniques are able to quantify spectral component peaks of molecular structure, functional group and biopolymer. By application of these four statistical methods, the multivariate techniques and the Gaussian and Lorentzian modeling, inherent molecular structures, functional group and biopolymer conformation between and among biological samples can be quantified, discriminated and classified with great efficiency.
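
    Multi-component peak modeling of the kind described above amounts to least-squares fitting of a sum of Gaussian (or Lorentzian) peaks to a spectrum. A minimal sketch with synthetic data and hypothetical band positions:

        import numpy as np
        from scipy.optimize import curve_fit

        def two_gaussians(x, a1, c1, w1, a2, c2, w2):
            # Sum of two Gaussian component peaks (amplitude, centre, width);
            # Lorentzian peaks a / (1 + ((x - c) / w) ** 2) can be substituted.
            return (a1 * np.exp(-((x - c1) / w1) ** 2)
                    + a2 * np.exp(-((x - c2) / w2) ** 2))

        x = np.linspace(1500, 1700, 400)          # hypothetical wavenumber axis
        rng = np.random.default_rng(3)
        spectrum = two_gaussians(x, 1.0, 1550, 12, 0.6, 1655, 15)
        spectrum += rng.normal(scale=0.02, size=x.size)

        p0 = [1, 1545, 10, 0.5, 1650, 10]         # rough initial peak guesses
        popt, _ = curve_fit(two_gaussians, x, spectrum, p0=p0)
        print(popt)                               # fitted amplitudes, centres, widths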

  13. Generalized principal component analysis

    CERN Document Server

    Vidal, René; Sastry, S S

    2016-01-01

    This book provides a comprehensive introduction to the latest advances in the mathematical theory and computational tools for modeling high-dimensional data drawn from one or multiple low-dimensional subspaces (or manifolds) and potentially corrupted by noise, gross errors, or outliers. This challenging task requires the development of new algebraic, geometric, statistical, and computational methods for efficient and robust estimation and segmentation of one or multiple subspaces. The book also presents interesting real-world applications of these new methods in image processing, image and video segmentation, face recognition and clustering, and hybrid system identification etc. This book is intended to serve as a textbook for graduate students and beginning researchers in data science, machine learning, computer vision, image and signal processing, and systems theory. It contains ample illustrations, examples, and exercises and is made largely self-contained with three Appendices which survey basic concepts ...

  14. Component Composition Using Feature Models

    DEFF Research Database (Denmark)

    Eichberg, Michael; Klose, Karl; Mitschke, Ralf

    2010-01-01

    interface description languages. If this variability is relevant when selecting a matching component then human interaction is required to decide which components can be bound. We propose to use feature models for making this variability explicit and (re-)enabling automatic component binding. In our...... approach, feature models are one part of service specifications. This enables to declaratively specify which service variant is provided by a component. By referring to a service's variation points, a component that requires a specific service can list the requirements on the desired variant. Using...... these specifications, a component environment can then determine if a binding of the components exists that satisfies all requirements. The prototypical environment Columbus demonstrates the feasibility of the approach....

  15. RACLETTE: a model for evaluating the thermal response of plasma facing components to slow high power plasma transients. Pt. II. Analysis of ITER plasma facing components

    International Nuclear Information System (INIS)

    Federici, G.; Raffray, A.R.

    1997-01-01

    For pt.I see ibid., p.85-100, 1997. The transient thermal model RACLETTE (acronym of Rate Analysis Code for pLasma Energy Transfer Transient Evaluation) described in part I of this paper is applied here to analyse the heat transfer and erosion effects of various slow (100 ms-10 s) high power energy transients on the actively cooled plasma facing components (PFCs) of the International Thermonuclear Experimental Reactor (ITER). These have a strong bearing on the PFC design and need careful analysis. The relevant parameters affecting the heat transfer during the plasma excursions are established. The temperature variation with time and space is evaluated together with the extent of vaporisation and melting (the latter only for metals) for the different candidate armour materials considered for the design (i.e., Be for the primary first wall, Be and CFCs for the limiter, Be, W, and CFCs for the divertor plates) and including for certain cases low-density vapour shielding effects. The critical heat flux, the change of the coolant parameters and the possible severe degradation of the coolant heat removal capability that could result under certain conditions during these transients, for example for the limiter, are also evaluated. Based on the results, the design implications on the heat removal performance and erosion damage of the various ITER PFCs are critically discussed and some recommendations are made for the selection of the most adequate protection materials and optimum armour thickness. (orig.)

  16. RACLETTE: a model for evaluating the thermal response of plasma facing components to slow high power plasma transients. Part II: Analysis of ITER plasma facing components

    Science.gov (United States)

    Federici, Gianfranco; Raffray, A. René

    1997-04-01

    The transient thermal model RACLETTE (acronym of Rate Analysis Code for pLasma Energy Transfer Transient Evaluation) described in part I of this paper is applied here to analyse the heat transfer and erosion effects of various slow (100 ms-10 s) high power energy transients on the actively cooled plasma facing components (PFCs) of the International Thermonuclear Experimental Reactor (ITER). These have a strong bearing on the PFC design and need careful analysis. The relevant parameters affecting the heat transfer during the plasma excursions are established. The temperature variation with time and space is evaluated together with the extent of vaporisation and melting (the latter only for metals) for the different candidate armour materials considered for the design (i.e., Be for the primary first wall, Be and CFCs for the limiter, Be, W, and CFCs for the divertor plates) and including for certain cases low-density vapour shielding effects. The critical heat flux, the change of the coolant parameters and the possible severe degradation of the coolant heat removal capability that could result under certain conditions during these transients, for example for the limiter, are also evaluated. Based on the results, the design implications on the heat removal performance and erosion damage of the various ITER PFCs are critically discussed and some recommendations are made for the selection of the most adequate protection materials and optimum armour thickness.

  17. Probabilistic physics-of-failure models for component reliabilities using Monte Carlo simulation and Weibull analysis: a parametric study

    International Nuclear Information System (INIS)

    Hall, P.L.; Strutt, J.E.

    2003-01-01

    dispersion of the uncertain parameters. If there is no uncertainty, a single, sharp value of the component lifetime is predicted, corresponding to the limit β=∞. In contrast, the shock-loading model is inherently random, and its predictions correspond closely to those of a constant hazard rate model, characterized by a value of β close to 1 for all finite degrees of parameter uncertainty. The results are discussed in the context of traditional methods for reliability analysis and conventional views on the nature of early-life failures
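
    The parametric behaviour described above is easy to reproduce in a small Monte Carlo experiment: propagate lognormal parameter uncertainty through an assumed, purely illustrative physics-of-failure law and fit a Weibull distribution to the simulated lifetimes:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        n, sigma = 10_000, 0.3                    # sample size, parameter spread

        # Hypothetical failure law: life = k / load, both factors uncertain.
        k = rng.lognormal(mean=np.log(1e4), sigma=sigma, size=n)
        load = rng.lognormal(mean=np.log(10.0), sigma=sigma, size=n)
        life = k / load

        # Two-parameter Weibull fit; beta is the shape parameter discussed above.
        beta, _, eta = stats.weibull_min.fit(life, floc=0)
        print(f"shape beta = {beta:.2f}, scale eta = {eta:.1f}")
        # As sigma -> 0 the lifetimes concentrate and beta grows without bound,
        # matching the beta = infinity limit for vanishing uncertainty.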

  18. Use of principal components analysis and three-dimensional atmospheric-transport models for reactor-consequence evaluation

    International Nuclear Information System (INIS)

    Gudiksen, P.H.; Walton, J.J.; Alpert, D.J.; Johnson, J.D.

    1982-01-01

    This work explores the use of principal components analysis coupled to three-dimensional atmospheric transport and dispersion models for evaluating the environmental consequences of reactor accidents. This permits the inclusion of meteorological data from multiple sites and the effects of topography in the consequence evaluation; features not normally included in such analyses. The technique identifies prevailing regional wind patterns and their frequencies for use in the transport and dispersion calculations. Analysis of a hypothetical accident scenario involving a release of radioactivity from a reactor situated in a river valley indicated the technique is quite useful whenever recurring wind patterns exist, as is often the case in complex terrain situations. Considerable differences were revealed in a comparison with results obtained from a more conventional Gaussian plume model using only the reactor site meteorology and no topographic effects

  19. NEPR Principle Component Analysis - NOAA TIFF Image

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This GeoTiff is a representation of seafloor topography in Northeast Puerto Rico derived from a bathymetry model with a principal component analysis (PCA). The area...

  20. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yihang Yin

    2015-08-01

    Full Text Available Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inner-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission to prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing with a novel similarity measure metric. Next, sensor data in one cluster are aggregated in the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data and retain the definite variance at the same time. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
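
    The error-bound idea maps naturally onto PCA with a retained-variance target: the cluster head keeps just enough components to guarantee the bound, transmits the scores, and the sink inverts the projection. A minimal sketch with simulated cluster readings:

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(5)
        # Rows = time steps, columns = sensors grouped into one spatial cluster.
        readings = rng.normal(size=(500, 20)).cumsum(axis=0)

        # Passing a fraction to n_components makes sklearn keep the smallest
        # number of components whose explained variance reaches that fraction.
        pca = PCA(n_components=0.99)
        compressed = pca.fit_transform(readings)  # what the cluster head transmits
        restored = pca.inverse_transform(compressed)

        mse = np.mean((readings - restored) ** 2)
        print(pca.n_components_, mse)             # components kept, reconstruction error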

  1. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks.

    Science.gov (United States)

    Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong

    2015-08-07

    Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inner-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission to prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing with a novel similarity measure metric. Next, sensor data in one cluster are aggregated in the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data and retain the definite variance at the same time. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.

  2. Development of pure component property models for chemical product-process design and analysis

    DEFF Research Database (Denmark)

    Hukkerikar, Amol Shivajirao

    information on the degree of accuracy of the property estimates. In addition, a method based on the ‘molecular structural similarity criteria’ is developed so that efficient use of knowledge of properties could be made in the development/improvement of property models. This method, in principle, can...... modeling such as: (i) quantity of property data used for the parameter regression; (ii) selection of the most appropriate form of the property model function; and (iii) the accuracy and thermodynamic consistency of predicted property values are also discussed. The developed models have been implemented...

  3. Structured Performance Analysis for Component Based Systems

    OpenAIRE

    Salmi , N.; Moreaux , Patrice; Ioualalen , M.

    2012-01-01

    International audience; The Component Based System (CBS) paradigm is now largely used to design software systems. In addition, performance and behavioural analysis remains a required step for the design and the construction of efficient systems. This is especially the case of CBS, which involve interconnected components running concurrent processes. This paper proposes a compositional method for modeling and structured performance analysis of CBS. Modeling is based on Stochastic Well-formed...

  4. Crude Oil Price Forecasting Based on Hybridizing Wavelet Multiple Linear Regression Model, Particle Swarm Optimization Techniques, and Principal Component Analysis

    Science.gov (United States)

    Shabri, Ani; Samsudin, Ruhaidah

    2014-01-01

    Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first selected to decompose an original time series into several subseries with different scales. Then, principal component analysis (PCA) is used in processing the subseries data in MLR for crude oil price forecasting. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily crude oil market, West Texas Intermediate (WTI), has been used as the case study. The time series prediction capability of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series. PMID:24895666

  5. Crude Oil Price Forecasting Based on Hybridizing Wavelet Multiple Linear Regression Model, Particle Swarm Optimization Techniques, and Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Ani Shabri

    2014-01-01

    Full Text Available Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first selected to decompose an original time series into several subseries with different scales. Then, principal component analysis (PCA) is used in processing the subseries data in MLR for crude oil price forecasting. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily crude oil market, West Texas Intermediate (WTI), has been used as the case study. The time series prediction capability of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series.

  6. Crude oil price forecasting based on hybridizing wavelet multiple linear regression model, particle swarm optimization techniques, and principal component analysis.

    Science.gov (United States)

    Shabri, Ani; Samsudin, Ruhaidah

    2014-01-01

    Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first selected to decompose an original time series into several subseries with different scales. Then, principal component analysis (PCA) is used in processing the subseries data in MLR for crude oil price forecasting. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily crude oil market, West Texas Intermediate (WTI), has been used as the case study. The time series prediction capability of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series.
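
    A compact sketch of the WMLR pipeline from the preceding three records, using PyWavelets for the Mallat decomposition and ordinary least squares in place of the PSO-tuned regression (the PSO stage is omitted and the price series is synthetic):

        import numpy as np
        import pywt                                # PyWavelets
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(6)
        price = np.cumsum(rng.normal(size=600)) + 50.0    # stand-in for WTI prices

        # Mallat decomposition, then one reconstructed subseries per coefficient band.
        coeffs = pywt.wavedec(price, "db4", level=2)
        subseries = []
        for i in range(len(coeffs)):
            kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
            subseries.append(pywt.waverec(kept, "db4")[: price.size])

        # Lagged values of every subseries -> PCA -> multiple linear regression.
        lags = 3
        X = np.hstack([
            np.column_stack([s[lags - k: price.size - k] for k in range(1, lags + 1)])
            for s in subseries
        ])
        y = price[lags:]
        Xp = PCA(n_components=4).fit_transform(X)
        print(LinearRegression().fit(Xp, y).score(Xp, y))  # in-sample R^2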

  7. Components of spatial information management in wildlife ecology: Software for statistical and modeling analysis [Chapter 14

    Science.gov (United States)

    Hawthorne L. Beyer; Jeff Jenness; Samuel A. Cushman

    2010-01-01

    Spatial information systems (SIS) is a term that describes a wide diversity of concepts, techniques, and technologies related to the capture, management, display and analysis of spatial information. It encompasses technologies such as geographic information systems (GIS), global positioning systems (GPS), remote sensing, and relational database management systems (...

  8. Joint Clustering and Component Analysis of Correspondenceless Point Sets: Application to Cardiac Statistical Modeling.

    Science.gov (United States)

    Gooya, Ali; Lekadir, Karim; Alba, Xenia; Swift, Andrew J; Wild, Jim M; Frangi, Alejandro F

    2015-01-01

    Construction of Statistical Shape Models (SSMs) from arbitrary point sets is a challenging problem due to significant shape variation and lack of explicit point correspondence across the training data set. In medical imaging, point sets can generally represent different shape classes that span healthy and pathological exemplars. In such cases, the constructed SSM may not generalize well, largely because the probability density function (pdf) of the point sets deviates from the underlying assumption of Gaussian statistics. To this end, we propose a generative model for unsupervised learning of the pdf of point sets as a mixture of distinctive classes. A Variational Bayesian (VB) method is proposed for making joint inferences on the labels of point sets, and the principal modes of variation in each cluster. The method provides a flexible framework to handle point sets with no explicit point-to-point correspondences. We also show that by maximizing the marginalized likelihood of the model, the optimal number of clusters of point sets can be determined. We illustrate this work in the context of understanding the anatomical phenotype of the left and right ventricles in the heart. To this end, we use a database containing hearts of healthy subjects, patients with Pulmonary Hypertension (PH), and patients with Hypertrophic Cardiomyopathy (HCM). We demonstrate that our method can outperform traditional PCA in both generalization and specificity measures.

  9. Structural analysis and incipient failure detection of primary circuit components based on correlation-analysis and finite-element models

    International Nuclear Information System (INIS)

    Olma, B.J.

    1977-01-01

    A method is presented to compute vibrational power spectral densities (VPSD's) of primary circuit components based on a finite-element representation of the primary circuit. First this method has been applied to the sodium cooled reactor KNK, Karlsruhe. Now a further application is being developed for a BWR-nuclear power plant. The experimentally determined VPSD's can be considered as the output of a multiple input-output system. They have to be explained as the frequency response of a multidimensional mechanical system, which is excited by stochastic and deterministic mechanical driving forces. The stochastic mechanical forces are generated by the dynamic pressure fluctuations of the fluid. The deterministic mechanical forces are caused by the pressure fluctuations, which are induced by the main coolant pumps or by standing waves. The excitation matrix can be obtained from measured pressure fluctuations. The vibration transfer function matrix can be computed from the mass matrix, damping matrix and stiffness matrix of a theoretical finite-element model or mass-spring model. Based on this theory the computer code 'STAMPO' has been established. This program has been applied to the KNK reactor. The excitation matrix was created from measured jet-noise pressure fluctuations. The mass-, stiffness- and damping matrix has been extracted from a SAP-IV-model of the primary system. Sequentially for each frequency point the complete VPSD matrix has been computed. The diagonal elements of this matrix represent the vibrational auto-power spectral densities, the off-diagonal elements represent the vibrational cross-power spectral densities. The calculations give good agreement with measured VPSD's. The comparison shows that the measured jet-noise pressure fluctuations act nearly uncorrelated on the structure, whereas the output VPSD's are well correlated
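
    The core computation behind this approach, forming the response spectral matrix from an excitation matrix and a structural transfer function, can be sketched for a two-degree-of-freedom mass-spring-damper stand-in (the matrices below are invented, not KNK data):

        import numpy as np

        # S_yy(w) = H(w) S_xx(w) H(w)^H, with H(w) = (K - w^2 M + j w C)^-1.
        M = np.diag([2.0, 1.0])                             # mass matrix
        K = np.array([[400.0, -150.0], [-150.0, 300.0]])    # stiffness matrix
        C = 0.02 * K                                        # assumed proportional damping
        S_xx = np.array([[1.0, 0.1], [0.1, 0.8]])           # placeholder excitation matrix

        freqs = np.linspace(0.1, 10.0, 500)                 # Hz
        S_yy = np.empty((freqs.size, 2, 2), dtype=complex)
        for n, f in enumerate(freqs):
            w = 2 * np.pi * f
            H = np.linalg.inv(K - w**2 * M + 1j * w * C)
            S_yy[n] = H @ S_xx @ H.conj().T

        # Diagonal entries are the vibrational auto-PSDs, off-diagonals the
        # cross-PSDs, mirroring the matrix computed per frequency point above.
        print(S_yy[:, 0, 0].real.max())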

  10. Modeling and stability analysis for the upper atmosphere research satellite auxiliary array switch component

    Science.gov (United States)

    Wolfgang, R.; Natarajan, T.; Day, J.

    1987-01-01

    A feedback control system, called an auxiliary array switch, was designed to connect or disconnect auxiliary solar panel segments from a spacecraft electrical bus to meet fluctuating demand for power. A simulation of the control system was used to carry out a number of design and analysis tasks that could not economically be performed with a breadboard of the hardware. These tasks included: (1) the diagnosis of a stability problem, (2) identification of parameters to which the performance of the control system was particularly sensitive, (3) verification that the response of the control system to anticipated fluctuations in the electrical load of the spacecraft was satisfactory, and (4) specification of limitations on the frequency and amplitude of the load fluctuations.

  11. Multi-Trait analysis of growth traits: fitting reduced rank models using principal components for Simmental beef cattle

    Directory of Open Access Journals (Sweden)

    Rodrigo Reis Mota

    2016-09-01

    Full Text Available ABSTRACT: The aim of this research was to evaluate the dimensional reduction of additive direct genetic covariance matrices in genetic evaluations of growth traits (range 100-730 days) in Simmental cattle using principal components, as well as to estimate (co)variance components and genetic parameters. Principal component analyses were conducted for five different models: one full and four reduced-rank models. Models were compared using the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Variance components and genetic parameters were estimated by restricted maximum likelihood (REML). The AIC and BIC values were similar among models. This indicated that parsimonious models could be used in genetic evaluations in Simmental cattle. The first principal component explained more than 96% of total variance in both models. Heritability estimates were higher for advanced ages and varied from 0.05 (100 days) to 0.30 (730 days). Genetic correlation estimates were similar in both models regardless of magnitude and number of principal components. The first principal component was sufficient to explain almost all genetic variance. Furthermore, genetic parameter similarities and lower computational requirements allowed for parsimonious models in genetic evaluations of growth traits in Simmental cattle.

  12. Analysis of Dual Mobility Liner Rim Damage Using Retrieved Components and Cadaver Models.

    Science.gov (United States)

    Nebergall, Audrey K; Freiberg, Andrew A; Greene, Meridith E; Malchau, Henrik; Muratoglu, Orhun; Rowell, Shannon; Zumbrunn, Thomas; Varadarajan, Kartik M

    2016-07-01

    The objective of this study was to assess the retentive rim of retrieved dual mobility liners for visible evidence of deformation from femoral neck contact and to use cadaver models to determine if anterior soft tissue impingement could contribute to such deformation. Fifteen surgically retrieved polyethylene liners were assessed for evidence of rim deformation. The average time in vivo was 31.4 months, and all patients were revised for reasons other than intraprosthetic dislocation. Liner interaction with the iliopsoas was studied visually and with fluoroscopy in cadaver specimens using a dual mobility system different than the retrieval study. For fluoroscopic visualization, a metal wire was sutured to the iliopsoas and wires were also embedded into grooves on the outer surface of the liner and the inner head. All retrievals showed evidence of femoral neck contact. The cadaver experiments showed that liner motion was impeded by impingement with the iliopsoas tendon in low flexion angles. When observing the hip during maximum hyperextension, 0°, 15°, and 30° of flexion, there was noticeable tenting of the iliopsoas caused by impingement with the liner. Liner rim deformation resulting from contact with the femoral neck likely begins during early in vivo function. The presence of deformation is indicative of a mechanism inhibiting mobility of the liner. The cadaver studies showed that liner motion could be impeded because of its impingement with the iliopsoas. Such soft tissue impingement may be one mechanism by which liner motion is routinely inhibited, which can result in load transfer from the neck to the rim. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Improved separability of dipole sources by tripolar versus conventional disk electrodes: a modeling study using independent component analysis.

    Science.gov (United States)

    Cao, H; Besio, W; Jones, S; Medvedev, A

    2009-01-01

    Tripolar electrodes have been shown to have less mutual information and higher spatial resolution than disc electrodes. In this work, a four-layer anisotropic concentric spherical head computer model was programmed, then four configurations of time-varying dipole signals were used to generate the scalp surface signals that would be obtained with tripolar and disc electrodes, and four important EEG artifacts were tested: eye blinking, cheek movements, jaw movements, and talking. Finally, a fast fixed-point algorithm was used for signal independent component analysis (ICA). The results show that signals from tripolar electrodes generated better ICA separation results than from disc electrodes for EEG signals with these four types of artifacts.
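
    The "fast fixed-point algorithm" mentioned above is FastICA; a toy separation of an alpha-band rhythm from a blink-like artifact can be sketched as follows (the sources, mixing matrix and channel count are all invented):

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(7)
        t = np.linspace(0, 10, 2000)
        sources = np.column_stack([
            np.sin(2 * np.pi * 10 * t),                         # EEG-like rhythm
            (np.abs(np.sin(2 * np.pi * 0.3 * t)) > 0.99) * 5.0  # sparse blink spikes
        ])
        mixing = rng.normal(size=(4, 2))                        # 4 scalp channels
        scalp = sources @ mixing.T + rng.normal(scale=0.05, size=(t.size, 4))

        ica = FastICA(n_components=2, random_state=0)
        recovered = ica.fit_transform(scalp)     # estimated independent components
        print(recovered.shape)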

  14. Principal component analysis in an experimental cold flow model of a fluid catalytic cracking unit by gammametry

    International Nuclear Information System (INIS)

    Araujo, Janeo Severino C. de; Dantas, Carlos Costa; Santos, Valdemir A. dos; Souza, Jose Edson G. de; Luna-Finkler, Christine L.

    2009-01-01

    The fluid dynamic behavior of the riser of a cold flow model of a Fluid Catalytic Cracking Unit (FCCU) was investigated. The experimental data were obtained by the nuclear technique of gamma transmission. A gamma source was placed diametrically opposite to a detector in any straight section of the riser. The gas-solid flow through the riser was monitored with a source of Americium-241, which allowed obtaining information on the axial solid concentration without flow disturbance and also identifying the dependence of this concentration profile on several independent variables. The MatLab® and Statistica® software were used. The statistical tool employed was Principal Components Analysis (PCA), which consisted of organizing the data into two-dimensional matrices to allow the extraction of relevant information about the importance of the independent variables on the axial solid concentration in a cold flow riser. The variables investigated were mass flow rate of solid, mass flow rate of gas, pressure at the riser base and the relative height in the riser. The first two components accounted for about 98% of the accumulated percentage of explained variance. (author)

  15. Hapten design and indirect competitive immunoassay for parathion determination: Correlation with molecular modeling and principal component analysis

    Energy Technology Data Exchange (ETDEWEB)

    Liu Yihua [Institute of Pesticide and Environmental Toxicology, Zhejiang University, Hangzhou 310029 (China); Jin Maojun [Institute of Pesticide and Environmental Toxicology, Zhejiang University, Hangzhou 310029 (China); Gui Wenjun [Institute of Pesticide and Environmental Toxicology, Zhejiang University, Hangzhou 310029 (China); Cheng Jingli [Institute of Pesticide and Environmental Toxicology, Zhejiang University, Hangzhou 310029 (China); Guo Yirong [Institute of Pesticide and Environmental Toxicology, Zhejiang University, Hangzhou 310029 (China); Zhu Guonian [Institute of Pesticide and Environmental Toxicology, Zhejiang University, Hangzhou 310029 (China)]. E-mail: zhugn@zju.edu.cn

    2007-05-22

    A novel procedure for parathion hapten design is described. The optimal antigen for parathion was selected after molecular modeling studies of six types of potentially immunizing haptens, with the aim of identifying the one best mimicking the target analyte. A heterologous competitive indirect enzyme-linked immunosorbent assay (ELISA) was developed after screening a battery of competitors as coating antigens. The relationship between the heterology degree of the competitor and the resulting immunoassay detectability was investigated according to the electronic similarities of the competitor haptens and the target analyte. Molecular modeling and principal component analysis were performed to understand the electronic distribution and steric parameters of the haptens at their minimum energetic levels. The results suggested that the competitors should have a high heterology to produce assays with good detectability values. An indirect competitive ELISA was finally selected for further investigation. The immunoassay had an IC50 value of 4.79 ng mL⁻¹ and a limit of detection of 0.31 ng mL⁻¹. There was little or no cross-reactivity to similar compounds tested except for the insecticide parathion-methyl, which showed a cross-reactivity of 7.8%.
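
    Quantities like the reported IC50 and detection limit are typically read off a four-parameter logistic standard curve; a generic fit (with invented calibration points, not the paper's data) looks like this:

        import numpy as np
        from scipy.optimize import curve_fit

        def four_pl(x, top, bottom, ic50, slope):
            # Four-parameter logistic curve used for competitive ELISA standards.
            return bottom + (top - bottom) / (1 + (x / ic50) ** slope)

        conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100])        # ng/mL standards
        absorb = np.array([1.95, 1.88, 1.62, 1.18, 0.70, 0.38, 0.25])

        popt, _ = curve_fit(four_pl, conc, absorb, p0=[2.0, 0.2, 5.0, 1.0])
        print(f"estimated IC50 = {popt[2]:.2f} ng/mL")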

  16. Probabilistic Principal Component Analysis for Metabolomic Data.

    LENUS (Irish Health Repository)

    Nyamundanda, Gift

    2010-11-23

    Background: Data from metabolomic studies are typically complex and high-dimensional. Principal component analysis (PCA) is currently the most widely used statistical technique for analyzing metabolomic data. However, PCA is limited by the fact that it is not based on a statistical model. Results: Here, probabilistic principal component analysis (PPCA), which addresses some of the limitations of PCA, is reviewed and extended. A novel extension of PPCA, called probabilistic principal component and covariates analysis (PPCCA), is introduced, which provides a flexible approach to jointly model metabolomic data and additional covariate information. The use of a mixture of PPCA models for discovering the number of inherent groups in metabolomic data is demonstrated. The jackknife technique is employed to construct confidence intervals for estimated model parameters throughout. The optimal number of principal components is determined through the use of the Bayesian Information Criterion model selection tool, which is modified to address the high dimensionality of the data. Conclusions: The methods presented are illustrated through an application to metabolomic data sets. Jointly modeling metabolomic data and covariates was successfully achieved and has the potential to provide deeper insight into the underlying data structure. Examination of confidence intervals for the model parameters, such as loadings, allows for principled and clear interpretation of the underlying data structure. A software package called MetabolAnalyze, freely available through the R statistical software, has been developed to facilitate implementation of the presented methods in the metabolomics field.
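
    scikit-learn's PCA exposes the Tipping-Bishop probabilistic PCA log-likelihood, so a BIC-style choice of the number of components, loosely mirroring the modified-BIC selection described above, can be sketched as follows (both the data and the free-parameter count are illustrative approximations):

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(8)
        X = rng.normal(size=(60, 30)) + rng.normal(size=(60, 1))  # toy metabolomic matrix

        n, p = X.shape
        for q in range(1, 6):
            pca = PCA(n_components=q).fit(X)
            loglik = pca.score(X) * n             # total PPCA log-likelihood
            k = p * q - q * (q - 1) / 2 + 1       # approx. free parameters (mean omitted)
            bic = -2 * loglik + k * np.log(n)
            print(q, round(bic, 1))               # pick q with the smallest BIC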

  17. Fusion-component lifetime analysis

    International Nuclear Information System (INIS)

    Mattas, R.F.

    1982-09-01

    A one-dimensional computer code has been developed to examine the lifetime of first-wall and impurity-control components. The code incorporates the operating and design parameters, the material characteristics, and the appropriate failure criteria for the individual components. The major emphasis of the modeling effort has been to calculate the temperature-stress-strain-radiation effects history of a component so that the synergistic effects between sputtering erosion, swelling, creep, fatigue, and crack growth can be examined. The general forms of the property equations are the same for all materials in order to provide the greatest flexibility for materials selection in the code; the individual coefficients within the equations are different for each material. The code is capable of determining the behavior of a plate, composed of either a single or dual material structure, that is either totally constrained or constrained from bending but not from expansion. The code has been utilized to analyze the first walls for FED/INTOR and DEMO and to analyze the limiter for FED/INTOR

  18. Functional Generalized Structured Component Analysis.

    Science.gov (United States)

    Suk, Hye Won; Hwang, Heungsun

    2016-12-01

    An extension of Generalized Structured Component Analysis (GSCA), called Functional GSCA, is proposed to analyze functional data that are considered to arise from an underlying smooth curve varying over time or other continua. GSCA has been geared for the analysis of multivariate data. Accordingly, it cannot deal with functional data that often involve different measurement occasions across participants and a large number of measurement occasions that exceed the number of participants. Functional GSCA addresses these issues by integrating GSCA with spline basis function expansions that represent infinite-dimensional curves onto a finite-dimensional space. For parameter estimation, functional GSCA minimizes a penalized least squares criterion by using an alternating penalized least squares estimation algorithm. The usefulness of functional GSCA is illustrated with gait data.

  19. Identification and analysis of labor productivity components based on ACHIEVE model (case study: staff of Kermanshah University of Medical Sciences).

    Science.gov (United States)

    Ziapour, Arash; Khatony, Alireza; Kianipour, Neda; Jafary, Faranak

    2014-12-15

    Identification and analysis of the components of labor productivity based on the ACHIEVE model was performed among employees in different parts of Kermanshah University of Medical Sciences in 2014. This was a descriptive correlational study in which the population consisted of 872 working personnel in different administrative groups (contractual, fixed-term and regular) at Kermanshah University of Medical Sciences, from whom 270 were selected through the stratified random sampling method based on the Krejcie and Morgan sampling table. The survey tool included the ACHIEVE labor productivity questionnaire. Questionnaires were confirmed in terms of content and face validity, and their reliability was calculated using Cronbach's alpha coefficient. The data were analyzed by SPSS-18 software using descriptive and inferential statistics. The mean scores for the labor productivity dimensions of the employees, including environment (environmental fit), evaluation (training and performance feedback), validity (valid and legal exercise of personnel), incentive (motivation or desire), help (organizational support), clarity (role perception or understanding), ability (knowledge and skills) variables and total labor productivity were 4.10±0.630, 3.99±0.568, 3.97±0.607, 3.76±0.701, 3.63±0.746, 3.59±0.777, 3.49±0.882 and 26.54±4.347, respectively. Also, the results indicated that the seven factors of environment, performance assessment, validity, motivation, organizational support, clarity, and ability were effective in increasing labor productivity. The analysis of the current status of university staff from the employees' viewpoint suggested that the two factors of environment and evaluation, which had the greatest impact on labor productivity in the viewpoint of the staff, were in a favorable condition and needed to be further taken into consideration by authorities.

  20. Identification and Analysis of Labor Productivity Components Based on ACHIEVE Model (Case Study: Staff of Kermanshah University of Medical Sciences)

    Science.gov (United States)

    Ziapour, Arash; Khatony, Alireza; Kianipour, Neda; Jafary, Faranak

    2015-01-01

    Identification and analysis of the components of labor productivity based on the ACHIEVE model was performed among employees in different parts of Kermanshah University of Medical Sciences in 2014. This was a descriptive correlational study in which the population consisted of 872 working personnel in different administrative groups (contractual, fixed-term and regular) at Kermanshah University of Medical Sciences, from whom 270 were selected through the stratified random sampling method based on the Krejcie and Morgan sampling table. The survey tool included the ACHIEVE labor productivity questionnaire. Questionnaires were confirmed in terms of content and face validity, and their reliability was calculated using Cronbach’s alpha coefficient. The data were analyzed by SPSS-18 software using descriptive and inferential statistics. The mean scores for the labor productivity dimensions of the employees, including environment (environmental fit), evaluation (training and performance feedback), validity (valid and legal exercise of personnel), incentive (motivation or desire), help (organizational support), clarity (role perception or understanding), ability (knowledge and skills) variables and total labor productivity were 4.10±0.630, 3.99±0.568, 3.97±0.607, 3.76±0.701, 3.63±0.746, 3.59±0.777, 3.49±0.882 and 26.54±4.347, respectively. Also, the results indicated that the seven factors of environment, performance assessment, validity, motivation, organizational support, clarity, and ability were effective in increasing labor productivity. The analysis of the current status of university staff from the employees’ viewpoint suggested that the two factors of environment and evaluation, which had the greatest impact on labor productivity in the viewpoint of the staff, were in a favorable condition and needed to be further taken into consideration by authorities. PMID:25560364

  1. Dynamic thermal modelling and analysis of press-pack IGBTs both at component-level and chip-level

    DEFF Research Database (Denmark)

    Busca, Cristian; Teodorescu, Remus; Blaabjerg, Frede

    2013-01-01

    curves under various mechanical clamping conditions are derived. Moreover, the deformation of the internal components of the PP IGBT under operating-like conditions is investigated with the help of the thermal models and the coefficient of thermal expansion (CTE) information.......Thermal models are needed when designing power converters for Wind Turbines (WTs) in order to carry out thermal and reliability assessment of certain designs. Usually the thermal models of Insulated Gate Bipolar Transistors (IGBTs) are given in the datasheet in various forms at component......-level, not taking into account the thermal distribution among the chips. This is especially relevant in the case of Press-Pack (PP) IGBTs because any non-uniformity of the clamping pressure can affect the chip-level thermal impedances. This happens because the contact thermal resistances in the thermal impedance...

  2. Modeling accelerator structures and RF components

    International Nuclear Information System (INIS)

    Ko, K., Ng, C.K.; Herrmannsfeldt, W.B.

    1993-03-01

    Computer modeling has become an integral part of the design and analysis of accelerator structures and RF components. Sophisticated 3D codes, powerful workstations and timely theory support all contributed to this development. We will describe our modeling experience with these resources and discuss their impact on ongoing work at SLAC. Specific examples from R&D on a future linear collider and a proposed e+e− storage ring will be included

  3. Interpretable functional principal component analysis.

    Science.gov (United States)

    Lin, Zhenhua; Wang, Liangliang; Cao, Jiguo

    2016-09-01

    Functional principal component analysis (FPCA) is a popular approach to explore major sources of variation in a sample of random curves. These major sources of variation are represented by functional principal components (FPCs). The intervals where the values of FPCs are significant are interpreted as where sample curves have major variations. However, these intervals are often hard for naïve users to identify, because of the vague definition of "significant values". In this article, we develop a novel penalty-based method to derive FPCs that are only nonzero precisely in the intervals where the values of FPCs are significant, whence the derived FPCs possess better interpretability than the FPCs derived from existing methods. To compute the proposed FPCs, we devise an efficient algorithm based on projection deflation techniques. We show that the proposed interpretable FPCs are strongly consistent and asymptotically normal under mild conditions. Simulation studies confirm that with a competitive performance in explaining variations of sample curves, the proposed FPCs are more interpretable than the traditional counterparts. This advantage is demonstrated by analyzing two real datasets, namely, electroencephalography data and Canadian weather data. © 2015, The International Biometric Society.

  4. On Bayesian Principal Component Analysis

    Czech Academy of Sciences Publication Activity Database

    Šmídl, Václav; Quinn, A.

    2007-01-01

    Roč. 51, č. 9 (2007), s. 4101-4123 ISSN 0167-9473 R&D Projects: GA MŠk(CZ) 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : Principal component analysis ( PCA ) * Variational bayes (VB) * von-Mises–Fisher distribution Subject RIV: BC - Control Systems Theory Impact factor: 1.029, year: 2007 http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V8V-4MYD60N-6&_user=10&_coverDate=05%2F15%2F2007&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=b8ea629d48df926fe18f9e5724c9003a

  5. Structural analysis of nuclear components

    International Nuclear Information System (INIS)

    Ikonen, K.; Hyppoenen, P.; Mikkola, T.; Noro, H.; Raiko, H.; Salminen, P.; Talja, H.

    1983-05-01

    The report describes the activities accomplished in the project 'Structural Analysis Project of Nuclear Power Plant Components' during the years 1974-1982 in the Nuclear Engineering Laboratory at the Technical Research Centre of Finland. The objective of the project has been to develop Finnish expertise in structural mechanics related to nuclear engineering. The report describes the starting point of the research work, the organization of the project and the research activities in various subareas. Further, the work done with computer codes is described, as well as the problems to which the developed expertise has been applied. Finally, the diploma works, publications and work reports, which are mainly in Finnish, are listed to give a view of the content of the project. (author)

  6. Source apportionment of ambient non-methane hydrocarbons in Hong Kong: application of a principal component analysis/absolute principal component scores (PCA/APCS) receptor model.

    Science.gov (United States)

    Guo, H; Wang, T; Louie, P K K

    2004-06-01

    Receptor-oriented source apportionment models are often used to identify sources of ambient air pollutants and to estimate source contributions to air pollutant concentrations. In this study, a PCA/APCS model was applied to the data on non-methane hydrocarbons (NMHCs) measured from January to December 2001 at two sampling sites: the Tsuen Wan (TW) and Central & Western (CW) Toxic Air Pollutants Monitoring Stations in Hong Kong. This multivariate method enables the identification of major air pollution sources along with the quantitative apportionment of each source to pollutant species. The PCA analysis identified four major pollution sources at the TW site and five major sources at the CW site. The extracted pollution sources included vehicular internal engine combustion with unburned fuel emissions, use of solvents (particularly paints), liquefied petroleum gas (LPG) or natural gas leakage, and industrial, commercial and domestic sources such as solvents, decoration, fuel combustion, chemical factories and power plants. The results of the APCS receptor model indicated that 39% and 48% of the total NMHC mass concentrations measured at CW and TW originated from vehicle emissions, respectively. 32% and 36.4% of the total NMHCs were emitted from the use of solvents, and 11% and 19.4% were apportioned to LPG or natural gas leakage, respectively. 5.2% and 9% of the total NMHC mass concentrations were attributed to other industrial, commercial and domestic sources, respectively. It was also found that vehicle emissions and LPG or natural gas leakage were the main sources of C3-C5 alkanes and C3-C5 alkenes, while aromatics were predominantly released from paints. Comparison of source contributions to ambient NMHCs at the two sites indicated that the contribution of LPG or natural gas at the CW site was almost twice that at the TW site. High correlation coefficients (R² > 0.8) between the measured and predicted values suggested that the PCA/APCS model was applicable for estimation
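
    The APCS step that distinguishes this model from plain PCA is the rescoring against a fictitious zero-concentration sample; regressing pollutant totals on the resulting absolute scores yields per-source contributions. A schematic version with simulated species concentrations:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(9)
        conc = np.abs(rng.normal(loc=5, scale=2, size=(300, 10)))  # NMHC species

        mean, std = conc.mean(axis=0), conc.std(axis=0)
        pca = PCA(n_components=4)
        scores = pca.fit_transform((conc - mean) / std)

        # Score an artificial sample with zero concentration of every species;
        # subtracting it converts normalized scores into absolute scores (APCS).
        z0 = ((np.zeros(10) - mean) / std).reshape(1, -1)
        apcs = scores - pca.transform(z0)

        # Regression coefficients estimate each source's mass contribution.
        total = conc.sum(axis=1)
        reg = LinearRegression().fit(apcs, total)
        print(reg.coef_, reg.intercept_)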

  7. Principal components analysis in clinical studies.

    Science.gov (United States)

    Zhang, Zhongheng; Castelló, Adela

    2017-09-01

    In multivariate analysis, independent variables are usually correlated to each other, which can introduce multicollinearity in the regression models. One approach to solve this problem is to apply principal components analysis (PCA) over these variables. This method uses an orthogonal transformation to represent sets of potentially correlated variables with principal components (PC) that are linearly uncorrelated. PCs are ordered so that the first PC has the largest possible variance, and only some components are selected to represent the correlated variables. As a result, the dimension of the variable space is reduced. This tutorial illustrates how to perform PCA in the R environment; the example is a simulated dataset in which two PCs are responsible for the majority of the variance in the data. Furthermore, the visualization of PCA is highlighted.

  8. ARTS: A System-Level Framework for Modeling MPSoC Components and Analysis of their Causality

    DEFF Research Database (Denmark)

    Mahadevan, Shankar; Storgaard, Michael; Madsen, Jan

    2005-01-01

    Designing complex heterogeneous multiprocessor System-on-Chip (MPSoC) requires support for modeling and analysis of the different layers, i.e. application, operating system (OS) and platform architecture. This paper presents an abstract system-level modeling framework, called ARTS, to support......

  9. A stock market forecasting model combining two-directional two-dimensional principal component analysis and radial basis function neural network.

    Science.gov (United States)

    Guo, Zhiqiang; Wang, Huaiqing; Yang, Jie; Miller, David J

    2015-01-01

    In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a radial basis function neural network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is used on the Shanghai stock market index, and the experiments show that the model achieves a good level of fitness. The proposed model is then compared with models that use the traditional dimension reduction methods principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as alternative models based on ICA and on the multilayer perceptron.
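
    (2D)2PCA itself is a few lines of linear algebra: eigenvectors of the row-direction and column-direction scatter matrices give right and left projections, and each windowed "image" of technical variables is compressed to a small feature map for the RBFNN. A sketch with invented dimensions:

        import numpy as np

        rng = np.random.default_rng(10)
        # 100 sliding windows, each 20 days x 36 technical variables.
        A = rng.normal(size=(100, 20, 36))
        A = A - A.mean(axis=0)

        G_row = np.einsum("nij,nik->jk", A, A) / len(A)   # mean of A_n^T A_n
        G_col = np.einsum("nij,nkj->ik", A, A) / len(A)   # mean of A_n A_n^T
        X = np.linalg.eigh(G_row)[1][:, ::-1][:, :5]      # top right projections
        Z = np.linalg.eigh(G_col)[1][:, ::-1][:, :5]      # top left projections

        features = np.einsum("ij,njk,kl->nil", Z.T, A, X) # n x 5 x 5 feature maps
        print(features.shape)                             # flattened, these feed the RBFNN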

  10. Constitutive modeling and finite element procedure development for stress analysis of prismatic high temperature gas cooled reactor graphite core components

    International Nuclear Information System (INIS)

    Mohanty, Subhasish; Majumdar, Saurindranath; Srinivasan, Makuteswara

    2013-01-01

    Highlights: • Finite element procedure developed for stress analysis of HTGR graphite component. • Realistic fluence profile and reflector brick shape considered for the simulation. • Also realistic H-451 grade material properties considered for simulation. • Typical outer reflector of a GT-MHR type reactor considered for numerical study. • Based on the simulation results replacement of graphite bricks can be scheduled. -- Abstract: High temperature gas cooled reactors, such as prismatic and pebble bed reactors, are increasingly becoming popular because of their inherent safety, high temperature process heat output, and high efficiency in nuclear power generation. In prismatic reactors, hexagonal graphite bricks are used as reflectors and fuel bricks. In the reactor environment, graphite bricks experience high temperature and neutron dose. This leads to dimensional changes (swelling and or shrinkage) of these bricks. Irradiation dimensional changes may affect the structural integrity of the individual bricks as well as of the overall core. The present paper presents a generic procedure for stress analysis of prismatic core graphite components using graphite reflector as an example. The procedure is demonstrated through commercially available ABAQUS finite element software using the option of user material subroutine (UMAT). This paper considers General Atomics Gas Turbine-Modular Helium Reactor (GT-MHR) as a bench mark design to perform the time integrated stress analysis of a typical reflector brick considering realistic geometry, flux distribution and realistic irradiation material properties of transversely isotropic H-451 grade graphite

  11. Constitutive modeling and finite element procedure development for stress analysis of prismatic high temperature gas cooled reactor graphite core components

    Energy Technology Data Exchange (ETDEWEB)

    Mohanty, Subhasish, E-mail: smohanty@anl.gov [Argonne National Laboratory, South Cass Avenue, Argonne, IL 60439 (United States); Majumdar, Saurindranath [Argonne National Laboratory, South Cass Avenue, Argonne, IL 60439 (United States); Srinivasan, Makuteswara [U.S. Nuclear Regulatory Commission, Washington, DC 20555 (United States)

    2013-07-15

    Highlights: • Finite element procedure developed for stress analysis of HTGR graphite component. • Realistic fluence profile and reflector brick shape considered for the simulation. • Also realistic H-451 grade material properties considered for simulation. • Typical outer reflector of a GT-MHR type reactor considered for numerical study. • Based on the simulation results replacement of graphite bricks can be scheduled. -- Abstract: High temperature gas cooled reactors, such as prismatic and pebble bed reactors, are increasingly becoming popular because of their inherent safety, high temperature process heat output, and high efficiency in nuclear power generation. In prismatic reactors, hexagonal graphite bricks are used as reflectors and fuel bricks. In the reactor environment, graphite bricks experience high temperature and neutron dose. This leads to dimensional changes (swelling and or shrinkage) of these bricks. Irradiation dimensional changes may affect the structural integrity of the individual bricks as well as of the overall core. The present paper presents a generic procedure for stress analysis of prismatic core graphite components using graphite reflector as an example. The procedure is demonstrated through commercially available ABAQUS finite element software using the option of user material subroutine (UMAT). This paper considers General Atomics Gas Turbine-Modular Helium Reactor (GT-MHR) as a bench mark design to perform the time integrated stress analysis of a typical reflector brick considering realistic geometry, flux distribution and realistic irradiation material properties of transversely isotropic H-451 grade graphite.

  12. Myocardial imaging with thallium-201: an experimental model for analysis of the true myocardial and background image components

    International Nuclear Information System (INIS)

    Narahara, K.A.; Hamilton, G.W.; Williams, D.L.; Gould, K.L.

    1977-01-01

    The true myocardial and background components of a resting thallium-201 myocardial image were determined in an experimental dog model. True background was determined by imaging after the heart had been removed and replaced with a water-filled balloon of equal size and shape. In all studies, the background estimated from the region surrounding the heart exceeded true background activity. Furthermore, the relationship between true myocardial background and that estimated from the pericardiac region was inconsistent. Background estimates based on the activity surrounding the heart were not accurate predictors of true background activity

  13. A principal components model of soundscape perception.

    Science.gov (United States)

    Axelsson, Östen; Nilsson, Mats E; Berglund, Birgitta

    2010-11-01

    There is a need for a model that identifies underlying dimensions of soundscape perception, and which may guide measurement and improvement of soundscape quality. With the purpose of developing such a model, a listening experiment was conducted. One hundred listeners measured 50 excerpts of binaural recordings of urban outdoor soundscapes on 116 attribute scales. The average attribute scale values were subjected to principal components analysis, resulting in three components: pleasantness, eventfulness, and familiarity, explaining 50%, 18%, and 6% of the total variance, respectively. The principal-component scores were correlated with physical soundscape properties, including categories of dominant sounds and acoustic variables. Soundscape excerpts dominated by technological sounds were found to be unpleasant, whereas soundscape excerpts dominated by natural sounds were pleasant, and soundscape excerpts dominated by human sounds were eventful. These relationships remained after controlling for the overall soundscape loudness (Zwicker's N(10)), which shows that 'informational' properties are substantial contributors to the perception of soundscape. The proposed principal components model provides a framework for future soundscape research and practice. In particular, it suggests which basic dimensions are necessary to measure, how to measure them by a defined set of attribute scales, and how to promote high-quality soundscapes.
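
    The analysis style described here, PCA on a matrix of averaged listener ratings, is easy to sketch. The snippet below is a minimal illustration with simulated stand-in data (the study's actual 50-excerpt by 116-scale ratings are not reproduced); only the matrix shapes and the three-component choice mirror the abstract.

```python
# Minimal PCA sketch: excerpts x attribute scales, reduced to three components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_excerpts, n_scales = 50, 116
ratings = rng.normal(size=(n_excerpts, n_scales))  # stand-in for mean attribute ratings

pca = PCA(n_components=3)
scores = pca.fit_transform(ratings)   # component scores per soundscape excerpt
print(pca.explained_variance_ratio_)  # cf. the reported 50%, 18% and 6%
```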

  14. Stochastic Modeling Of Wind Turbine Drivetrain Components

    DEFF Research Database (Denmark)

    Rafsanjani, Hesam Mirzaei; Sørensen, John Dalsgaard

    2014-01-01

    reliable components are needed for wind turbines. In this paper, the focus is on the reliability of critical drivetrain components such as bearings and shafts. High failure rates of these components imply a need for more reliable components. To estimate the reliability of these components, stochastic models...... are needed for initial defects and damage accumulation. In this paper, stochastic models are formulated considering some of the failure modes observed in these components. The models are based on theoretical considerations, manufacturing uncertainties, and size effects at different scales. It is illustrated how...

  15. Principal component regression analysis with SPSS.

    Science.gov (United States)

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces the indices used for multicollinearity diagnosis, the basic principle of principal component regression, and the method for determining the 'best' equation. An example is used to describe how to perform principal component regression analysis with SPSS 10.0, covering all calculation steps of the principal component regression and the operation of the linear regression, factor analysis, descriptives, compute variable, and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity. A simplified, faster, and accurate statistical analysis is achieved through principal component regression with SPSS.
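
    The abstract describes an SPSS workflow; the same idea is straightforward to sketch in code. The following is a hedged Python illustration of principal component regression, not the paper's SPSS procedure: regress on the leading principal components of the predictors so that collinear columns no longer destabilize the coefficients.

```python
# Principal component regression (PCR) sketch on deliberately collinear data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
X[:, 4] = X[:, 0] + 0.01 * rng.normal(size=100)   # near-duplicate column -> multicollinearity
y = X @ np.array([1.0, 0.5, 0.0, -0.5, 1.0]) + rng.normal(size=100)

# Standardize, keep the first three components, then fit ordinary least squares.
pcr = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())
pcr.fit(X, y)
print(pcr.score(X, y))   # R^2 of the fitted PCR model
```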

  16. Modelling Livestock Component in FSSIM

    NARCIS (Netherlands)

    Thorne, P.J.; Hengsdijk, H.; Janssen, S.J.C.; Louhichi, K.; Keulen, van H.; Thornton, P.K.

    2009-01-01

    This document summarises the development of a ruminant livestock component for the Farm System Simulator (FSSIM). This includes treatments of energy and protein transactions in ruminant livestock that have been used as a basis for the biophysical simulations that will generate the input production

  17. Integration of Simulink Models with Component-based Software Models

    DEFF Research Database (Denmark)

    Marian, Nicolae

    2008-01-01

    Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics...... of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, a plant and controller behavior is simulated using graphical blocks to represent mathematical and logical...... constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has...

  18. Component of the risk analysis

    International Nuclear Information System (INIS)

    Martinez, I.; Campon, G.

    2013-01-01

    The PowerPoint presentation reviews topics such as risk analysis (Codex), risk management, the preliminary activities of the risk manager, the relationship between government and industry, microbiological hazards, and risk communication

  19. A New Model for Birth Weight Prediction Using 2- and 3-Dimensional Ultrasonography by Principal Component Analysis: A Chinese Population Study.

    Science.gov (United States)

    Liao, Shuxin; Wang, Yunfang; Xiao, Shufang; Deng, Xujie; Fang, Bimei; Yang, Fang

    2018-03-30

    To establish a new model for birth weight prediction using 2- and 3-dimensional ultrasonography (US) by principal component analysis (PCA). Two- and 3-dimensional US was prospectively performed in women with normal singleton pregnancies within 7 days before delivery (37-41 weeks' gestation). The participants were divided into a development group (n = 600) and a validation group (n = 597). Principal component analysis and stepwise linear regression analysis were used to develop a new prediction model. The new model's accuracy in predicting fetal birth weight was confirmed by the validation group through comparisons with previously published formulas. A total of 1197 cases were recruited in this study. All interclass and intraclass correlation coefficients of US measurements were greater than 0.75. Two principal components (PCs) were considered primary in determining estimated fetal birth weight, which were derived from 9 US measurements. Stepwise linear regression analysis showed a positive association between birth weight and PC1 and PC2. In the development group, our model had a small mean percentage error (mean ± SD, 3.661% ± 3.161%). At least a 47.558% decrease in the mean percentage error and a 57.421% decrease in the standard deviation of the new model compared with previously published formulas were noted. The results were similar to those in the validation group, and the new model covered 100% of birth weights within 10% of actual birth weights. The birth weight prediction model based on 2- and 3-dimensional US by PCA could help improve the precision of estimated fetal birth weight. © 2018 by the American Institute of Ultrasound in Medicine.

  20. Evaluation of chemical transport model predictions of primary organic aerosol for air masses classified by particle-component-based factor analysis

    OpenAIRE

    C. A. Stroud; M. D. Moran; P. A. Makar; S. Gong; W. Gong; J. Zhang; J. G. Slowik; J. P. D. Abbatt; G. Lu; J. R. Brook; C. Mihele; Q. Li; D. Sills; K. B. Strawbridge; M. L. McGuire

    2012-01-01

    Observations from the 2007 Border Air Quality and Meteorology Study (BAQS-Met 2007) in Southern Ontario, Canada, were used to evaluate predictions of primary organic aerosol (POA) and two other carbonaceous species, black carbon (BC) and carbon monoxide (CO), made for this summertime period by Environment Canada's AURAMS regional chemical transport model. Particle component-based factor analysis was applied to aerosol mass spectrometer measurements made at one urban site (Windsor, ON) and two...

  1. Modeling the degradation of nuclear components

    International Nuclear Information System (INIS)

    Stock, D.; Samanta, P.; Vesely, W.

    1993-01-01

    This paper describes component level reliability models that use information on degradation to predict component reliability, and which have been used to evaluate different maintenance and testing policies. The models are based on continuous time Markov processes, and are a generalization of reliability models currently used in Probabilistic Risk Assessment. An explanation of the models, the model parameters, and an example of how these models can be used to evaluate maintenance policies are discussed
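
    As a toy illustration of the modelling approach this abstract describes, consider a component with three states (good, degraded, failed) evolving as a continuous-time Markov process. The sketch below is assumption-laden: the states, rates, and maintenance transition are invented for demonstration, not taken from the paper.

```python
# Continuous-time Markov degradation model: good -> degraded -> failed,
# with maintenance returning a degraded component to the good state.
import numpy as np
from scipy.linalg import expm

lam_d, lam_f, mu = 0.01, 0.05, 0.2   # degradation, failure and repair rates (per hour, assumed)
Q = np.array([
    [-lam_d,         lam_d,    0.0],  # from good
    [    mu, -(mu + lam_f),  lam_f],  # from degraded
    [   0.0,           0.0,    0.0],  # failed is absorbing
])

p0 = np.array([1.0, 0.0, 0.0])        # component starts in the good state
for t in (10.0, 100.0, 1000.0):
    print(t, p0 @ expm(Q * t))        # state probabilities after t hours
```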

  2. Individual and culture-level components of survey response styles: A multi-level analysis using cultural models of selfhood.

    Science.gov (United States)

    Smith, Peter B; Vignoles, Vivian L; Becker, Maja; Owe, Ellinor; Easterbrook, Matthew J; Brown, Rupert; Bourguignon, David; Garðarsdóttir, Ragna B; Kreuzbauer, Robert; Cendales Ayala, Boris; Yuki, Masaki; Zhang, Jianxin; Lv, Shaobo; Chobthamkit, Phatthanakit; Jaafar, Jas Laile; Fischer, Ronald; Milfont, Taciano L; Gavreliuc, Alin; Baguma, Peter; Bond, Michael Harris; Martin, Mariana; Gausel, Nicolay; Schwartz, Seth J; Des Rosiers, Sabrina E; Tatarko, Alexander; González, Roberto; Didier, Nicolas; Carrasco, Diego; Lay, Siugmin; Nizharadze, George; Torres, Ana; Camino, Leoncio; Abuhamdeh, Sami; Macapagal, Ma Elizabeth J; Koller, Silvia H; Herman, Ginette; Courtois, Marie; Fritsche, Immo; Espinosa, Agustín; Villamar, Juan A; Regalia, Camillo; Manzi, Claudia; Brambilla, Maria; Zinkeng, Martina; Jalal, Baland; Kusdil, Ersin; Amponsah, Benjamin; Çağlar, Selinay; Mekonnen, Kassahun Habtamu; Möller, Bettina; Zhang, Xiao; Schweiger Gallo, Inge; Prieto Gil, Paula; Lorente Clemares, Raquel; Campara, Gabriella; Aldhafri, Said; Fülöp, Márta; Pyszczynski, Tom; Kesebir, Pelin; Harb, Charles

    2016-12-01

    Variations in acquiescence and extremity pose substantial threats to the validity of cross-cultural research that relies on survey methods. Individual and cultural correlates of response styles when using 2 contrasting types of response mode were investigated, drawing on data from 55 cultural groups across 33 nations. Using 7 dimensions of self-other relatedness that have often been confounded within the broader distinction between independence and interdependence, our analysis yields more specific understandings of both individual- and culture-level variations in response style. When using a Likert-scale response format, acquiescence is strongest among individuals seeing themselves as similar to others, and where cultural models of selfhood favour harmony, similarity with others and receptiveness to influence. However, when using Schwartz's (2007) portrait-comparison response procedure, acquiescence is strongest among individuals seeing themselves as self-reliant but also connected to others, and where cultural models of selfhood favour self-reliance and self-consistency. Extreme responding varies less between the two types of response modes, and is most prevalent among individuals seeing themselves as self-reliant, and in cultures favouring self-reliance. As both types of response mode elicit distinctive styles of response, it remains important to estimate and control for style effects to ensure valid comparisons. © 2016 International Union of Psychological Science.

  3. Modelization of cooling system components

    Energy Technology Data Exchange (ETDEWEB)

    Copete, Monica; Ortega, Silvia; Vaquero, Jose Carlos; Cervantes, Eva [Westinghouse Electric (Spain)

    2010-07-01

    In the site evaluation study for licensing a new nuclear power facility, the criteria involved can be grouped into health and safety, environment, socio-economics, engineering, and cost. These encompass different aspects such as geology, seismology, cooling system requirements, weather conditions, flooding, population, and so on. The selection of the cooling system is a function of several parameters, such as the gross electrical output, energy consumption, available area for cooling system components, environmental conditions, and water consumption. Moreover, in recent years, extreme environmental conditions have been experienced and stringent water availability limits have affected water use permits. Therefore, modifications or alternatives to current cooling system designs and operation are required, as well as analyses of the different cooling system options to optimize energy production, taking into account water consumption among other important variables. There are two basic cooling system configurations: - Once-through or Open-cycle; - Recirculating or Closed-cycle. In a once-through (open-cycle) cooling system, water from an external water source passes through the steam cycle condenser and is then returned to the source at a higher temperature with some level of contaminants. To minimize the thermal impact on the water source, a cooling tower may be added in a once-through system to allow air cooling of the water (with associated on-site losses due to evaporation) prior to returning the water to its source. This system has a high thermal efficiency, and its operating and capital costs are very low. So, from an economic point of view, the open cycle is preferred to the closed-cycle system, especially if there are no water limitations or environmental restrictions. In a recirculating (closed-cycle) system, cooling water exits the condenser, goes through a fixed heat sink, and is then returned to the condenser. This configuration

  4. Structural analysis of NPP components and structures

    International Nuclear Information System (INIS)

    Saarenheimo, A.; Keinaenen, H.; Talja, H.

    1998-01-01

    Capabilities for effective structural integrity assessment have been created and extended in several important cases. The applications presented in the paper deal with pressurised thermal shock (PTS) loading and severe dynamic loading cases of the containment, reinforced concrete structures, and piping components. Hydrogen combustion within the containment is considered in some severe accident scenarios. Can a steel containment withstand the postulated hydrogen detonation loads and still maintain its integrity? This is the topic of Chapter 2. The following Chapter 3 deals with a reinforced concrete floor subjected to jet impingement caused by a postulated rupture of a near-by high-energy pipe, and Chapter 4 deals with the dynamic loading resistance of pipe lines under postulated pressure transients due to water hammer. The reliability of the structural integrity analysis methods and capabilities developed for application in NPP component assessment must be evaluated and verified. The resources available within the RATU2 programme alone do not allow the large-scale experiments needed for that purpose to be performed. Thus, the verification of the PTS analysis capabilities has been conducted through participation in international co-operative programmes. Participation in the European Network for Evaluating Steel Components (NESC) is the topic of a parallel paper in this symposium. The results obtained in two other international programmes are summarised in Chapters 5 and 6 of this paper, where PTS tests with a model vessel and a benchmark assessment of RPV nozzle integrity are described. (author)

  5. Component fragilities - data collection, analysis and interpretation

    International Nuclear Information System (INIS)

    Bandyopadhyay, K.K.; Hofmayer, C.H.

    1986-01-01

    As part of the component fragility research program sponsored by the US Nuclear Regulatory Commission, BNL is involved in establishing seismic fragility levels for various nuclear power plant equipment with emphasis on electrical equipment, by identifying, collecting and analyzing existing test data from various sources. BNL has reviewed approximately seventy test reports to collect fragility or high level test data for switchgears, motor control centers and similar electrical cabinets, valve actuators and numerous electrical and control devices of various manufacturers and models. Through a cooperative agreement, BNL has also obtained test data from EPRI/ANCO. An analysis of the collected data reveals that fragility levels can best be described by a group of curves corresponding to various failure modes. The lower bound curve indicates the initiation of malfunctioning or structural damage, whereas the upper bound curve corresponds to overall failure of the equipment based on known failure modes occurring separately or interactively. For some components, the upper and lower bound fragility levels are observed to vary appreciably depending upon the manufacturers and models. An extensive amount of additional fragility or high level test data exists. If completely collected and properly analyzed, the entire data bank is expected to greatly reduce the need for additional testing to establish fragility levels for most equipment

  6. Component fragilities. Data collection, analysis and interpretation

    International Nuclear Information System (INIS)

    Bandyopadhyay, K.K.; Hofmayer, C.H.

    1985-01-01

    As part of the component fragility research program sponsored by the US NRC, BNL is involved in establishing seismic fragility levels for various nuclear power plant equipment with emphasis on electrical equipment. To date, BNL has reviewed approximately seventy test reports to collect fragility or high level test data for switchgears, motor control centers and similar electrical cabinets, valve actuators and numerous electrical and control devices, e.g., switches, transmitters, potentiometers, indicators, relays, etc., of various manufacturers and models. BNL has also obtained test data from EPRI/ANCO. Analysis of the collected data reveals that fragility levels can best be described by a group of curves corresponding to various failure modes. The lower bound curve indicates the initiation of malfunctioning or structural damage, whereas the upper bound curve corresponds to overall failure of the equipment based on known failure modes occurring separately or interactively. For some components, the upper and lower bound fragility levels are observed to vary appreciably depending upon the manufacturers and models. For some devices, testing even at the shake table vibration limit does not exhibit any failure. Failure of a relay is observed to be a frequent cause of failure of an electrical panel or a system. An extensive amount of additional fragility or high level test data exists
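
    Fragility curves of the kind described here are commonly summarized as lognormal functions of the excitation level, fitted to pass/fail test outcomes. The sketch below shows one hedged way to fit such a curve by maximum likelihood; the test levels, failure counts, and starting values are invented for illustration and are not BNL data.

```python
# Fit a lognormal fragility curve P(fail | a) = Phi(ln(a / median) / beta)
# to binomial pass/fail outcomes at several shake-table levels.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

levels = np.array([0.5, 1.0, 1.5, 2.0, 3.0])   # excitation level (g), assumed
fails  = np.array([  0,   1,   3,   6,   9])   # failures observed, assumed
trials = np.array([ 10,  10,  10,  10,  10])   # specimens tested per level

def neg_log_lik(theta):
    med, beta = np.exp(theta)                  # enforce positivity via log-parameters
    p = norm.cdf(np.log(levels / med) / beta)
    # Binomial log-likelihood (constant terms omitted).
    return -np.sum(fails * np.log(p) + (trials - fails) * np.log(1.0 - p))

res = minimize(neg_log_lik, x0=np.log([2.0, 0.5]))
print(np.exp(res.x))                           # fitted median capacity (g) and beta
```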

  7. COPD phenotype description using principal components analysis

    DEFF Research Database (Denmark)

    Roy, Kay; Smith, Jacky; Kolsum, Umme

    2009-01-01

    BACKGROUND: Airway inflammation in COPD can be measured using biomarkers such as induced sputum and Fe(NO). This study set out to explore the heterogeneity of COPD using biomarkers of airway and systemic inflammation and pulmonary function by principal components analysis (PCA). SUBJECTS...... AND METHODS: In 127 COPD patients (mean FEV1 61%), pulmonary function, Fe(NO), plasma CRP and TNF-alpha, sputum differential cell counts and sputum IL8 (pg/ml) were measured. Principal components analysis as well as multivariate analysis was performed. RESULTS: PCA identified four main components (% variance...... associations between the variables within components 1 and 2. CONCLUSION: COPD is a multidimensional disease. Unrelated components of disease were identified, including neutrophilic airway inflammation which was associated with systemic inflammation, and sputum eosinophils which were related to increased Fe...

  8. Adjustment of the residuals of the ARIMA model by means of principal component analysis of the residuals of the explanatory variables

    International Nuclear Information System (INIS)

    Bernal Suarez, Nestor Ricardo; Montealegre Bocanegra, Jose Edgar

    2000-01-01

    Based on previous knowledge and understanding of the causal relationships between the surface temperature fields of the Pacific and of the North and South tropical Atlantic oceans and rainfall behaviour in Colombia, we set out to model those relations with a (statistical) transfer model. This work is aimed at improving the fit of the model for the monthly mean rainfall registered in Funza (near the capital, Bogota). The residuals of ARIMA models with six explanatory variables may contribute some percentage to the explanation of the total variability of rainfall as a consequence of their interrelationship. Such an effect can be represented by a summary of the six variables, which can be achieved with principal components, taking into account that the components are not mutually dependent, since they are white noise time series

  9. Use of regularized principal component analysis to model anatomical changes during head and neck radiation therapy for treatment adaptation and response assessment

    International Nuclear Information System (INIS)

    Chetvertkov, Mikhail A.; Siddiqui, Farzan; Chetty, Indrin; Kumarasiri, Akila; Liu, Chang; Gordon, J. James; Kim, Jinkoo

    2016-01-01

    Purpose: To develop standard (SPCA) and regularized (RPCA) principal component analysis models of anatomical changes from daily cone beam CTs (CBCTs) of head and neck (H&N) patients and assess their potential use in adaptive radiation therapy, and for extracting quantitative information for treatment response assessment. Methods: Planning CT images of ten H&N patients were artificially deformed to create “digital phantom” images, which modeled systematic anatomical changes during radiation therapy. Artificial deformations closely mirrored patients’ actual deformations and were interpolated to generate 35 synthetic CBCTs, representing evolving anatomy over 35 fractions. Deformation vector fields (DVFs) were acquired between pCT and synthetic CBCTs (i.e., digital phantoms) and between pCT and clinical CBCTs. Patient-specific SPCA and RPCA models were built from these synthetic and clinical DVF sets. EigenDVFs (EDVFs) having the largest eigenvalues were hypothesized to capture the major anatomical deformations during treatment. Results: Principal component analysis (PCA) models achieve variable results, depending on the size and location of anatomical change. Random changes prevent or degrade PCA’s ability to detect underlying systematic change. RPCA is able to detect smaller systematic changes against the background of random fraction-to-fraction changes and is therefore more successful than SPCA at capturing systematic changes early in treatment. SPCA models were less successful at modeling systematic changes in clinical patient images, which contain a wider range of random motion than synthetic CBCTs, while the regularized approach was able to extract major modes of motion. Conclusions: Leading EDVFs from the both PCA approaches have the potential to capture systematic anatomical change during H&N radiotherapy when systematic changes are large enough with respect to random fraction-to-fraction changes. In all cases the RPCA approach appears to be more

  10. Use of regularized principal component analysis to model anatomical changes during head and neck radiation therapy for treatment adaptation and response assessment

    Energy Technology Data Exchange (ETDEWEB)

    Chetvertkov, Mikhail A., E-mail: chetvertkov@wayne.edu [Department of Radiation Oncology, Wayne State University School of Medicine, Detroit, Michigan 48201 and Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan 48202 (United States); Siddiqui, Farzan; Chetty, Indrin; Kumarasiri, Akila; Liu, Chang; Gordon, J. James [Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan 48202 (United States); Kim, Jinkoo [Department of Radiation Oncology, Stony Brook University Hospital, Stony Brook, New York 11794 (United States)

    2016-10-15

    Purpose: To develop standard (SPCA) and regularized (RPCA) principal component analysis models of anatomical changes from daily cone beam CTs (CBCTs) of head and neck (H&N) patients and assess their potential use in adaptive radiation therapy, and for extracting quantitative information for treatment response assessment. Methods: Planning CT images of ten H&N patients were artificially deformed to create “digital phantom” images, which modeled systematic anatomical changes during radiation therapy. Artificial deformations closely mirrored patients’ actual deformations and were interpolated to generate 35 synthetic CBCTs, representing evolving anatomy over 35 fractions. Deformation vector fields (DVFs) were acquired between pCT and synthetic CBCTs (i.e., digital phantoms) and between pCT and clinical CBCTs. Patient-specific SPCA and RPCA models were built from these synthetic and clinical DVF sets. EigenDVFs (EDVFs) having the largest eigenvalues were hypothesized to capture the major anatomical deformations during treatment. Results: Principal component analysis (PCA) models achieve variable results, depending on the size and location of anatomical change. Random changes prevent or degrade PCA’s ability to detect underlying systematic change. RPCA is able to detect smaller systematic changes against the background of random fraction-to-fraction changes and is therefore more successful than SPCA at capturing systematic changes early in treatment. SPCA models were less successful at modeling systematic changes in clinical patient images, which contain a wider range of random motion than synthetic CBCTs, while the regularized approach was able to extract major modes of motion. Conclusions: Leading EDVFs from the both PCA approaches have the potential to capture systematic anatomical change during H&N radiotherapy when systematic changes are large enough with respect to random fraction-to-fraction changes. In all cases the RPCA approach appears to be more

  11. Modeling and analysis of the disk MHD generator component of a gas core reactor/MHD Rankine cycle space power system

    International Nuclear Information System (INIS)

    Welch, G.E.; Dugan, E.T.; Lear, W.E. Jr.; Appelbaum, J.G.

    1990-01-01

    A gas core nuclear reactor (GCR)/disk magnetohydrodynamic (MHD) generator direct closed Rankine space power system concept is described. The GCR/disk MHD generator marriage facilitates efficient high electric power density system performance at relatively high operating temperatures. The system concept promises high specific power levels, on the order of 1 kWe/kg. An overview of the disk MHD generator component magnetofluiddynamic and plasma physics theoretical modeling is provided. Results from a parametric design analysis of the disk MHD generator are presented and discussed

  12. Integration of Simulink Models with Component-based Software Models

    Directory of Open Access Journals (Sweden)

    MARIAN, N.

    2008-06-01

    Full Text Available Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, a plant and controller behavior is simulated using graphical blocks to represent mathematical and logical constructs and process flow, then software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behavior as a means of computation, communication and constraints, using computational blocks and aggregates for both discrete and continuous behavior, different interconnection and execution disciplines for event-based and time-based controllers, and so on, to encompass the demands for more functionality, at even lower prices, and with opposite constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of the Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has to be analyzed. One way of doing that is to integrate the model back into Simulink S-functions in wrapper files, and use its extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behavior, and the transformation of the software system into the S

  13. Integrating Data Transformation in Principal Components Analysis

    KAUST Repository

    Maadooliat, Mehdi; Huang, Jianhua Z.; Hu, Jianhua

    2015-01-01

    Principal component analysis (PCA) is a popular dimension reduction method to reduce the complexity and obtain the informative aspects of high-dimensional datasets. When the data distribution is skewed, data transformation is commonly used prior

  14. Integrating Data Transformation in Principal Components Analysis

    KAUST Repository

    Maadooliat, Mehdi

    2015-01-02

    Principal component analysis (PCA) is a popular dimension reduction method to reduce the complexity and obtain the informative aspects of high-dimensional datasets. When the data distribution is skewed, data transformation is commonly used prior to applying PCA. Such transformation is usually obtained from previous studies, prior knowledge, or trial-and-error. In this work, we develop a model-based method that integrates data transformation in PCA and finds an appropriate data transformation using the maximum profile likelihood. Extensions of the method to handle functional data and missing values are also developed. Several numerical algorithms are provided for efficient computation. The proposed method is illustrated using simulated and real-world data examples.
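
    A hedged sketch of the core idea: pick a variance-stabilizing transformation by likelihood rather than trial-and-error, then run PCA on the transformed data. The snippet below uses scipy's per-variable maximum-likelihood Box-Cox transform as a simple stand-in; the paper's joint profile-likelihood criterion (and its functional-data and missing-value extensions) is more elaborate.

```python
# Box-Cox each skewed variable at its maximum-likelihood lambda, then PCA.
import numpy as np
from scipy.stats import boxcox
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = rng.lognormal(mean=0.0, sigma=1.0, size=(200, 4))   # skewed, positive data

# boxcox returns (transformed values, fitted lambda); keep the values.
Xt = np.column_stack([boxcox(X[:, j])[0] for j in range(X.shape[1])])

pca = PCA().fit(Xt)
print(pca.explained_variance_ratio_)   # variance captured after transformation
```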

  15. Tweaking the Four-Component Model

    Science.gov (United States)

    Curzer, Howard J.

    2014-01-01

    By maintaining that moral functioning depends upon four components (sensitivity, judgment, motivation, and character), the Neo-Kohlbergian account of moral functioning allows for uneven moral development within individuals. However, I argue that the four-component model does not go far enough. I offer a more accurate account of moral functioning…

  16. Gene set analysis using variance component tests.

    Science.gov (United States)

    Huang, Yen-Tsung; Lin, Xihong

    2013-06-28

    Gene set analyses have become increasingly important in genomic research, as many complex diseases arise from the joint alteration of numerous genes. Genes often coordinate as a functional repertoire, e.g., a biological pathway/network, and are highly correlated. However, most of the existing gene set analysis methods do not fully account for the correlation among the genes. Here we propose to exploit this important feature of a gene set to improve statistical power in gene set analyses. We propose to model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects obtained by assuming a common distribution for the regression coefficients in the multivariate linear regression model, and calculate the p-values using permutation and a scaled chi-square approximation. We show using simulations that the type I error is protected under different choices of working covariance matrices and that power improves as the working covariance approaches the true covariance. The global test is a special case of TEGS when the correlation among genes in a gene set is ignored. Using both simulation data and a published diabetes dataset, we show that our test outperforms two commonly used approaches, the global test and gene set enrichment analysis (GSEA). In summary, we develop a gene set analysis method (TEGS) under the multivariate regression framework, which directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and the global test, in both simulation and a diabetes microarray dataset.
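
    The permutation-calibration step is the easiest part to illustrate. The sketch below is a simplified, assumption-labeled illustration in the spirit of TEGS, not the published statistic: it scores the joint effect of a binary exposure on a correlated gene set and calibrates the score by permuting exposure labels.

```python
# Permutation test for a joint gene-set effect of a binary exposure.
import numpy as np

rng = np.random.default_rng(3)
n, p = 60, 20
x = rng.integers(0, 2, size=n)          # exposure status (yes/no), simulated
G = rng.normal(size=(n, p)) + 0.3 * np.outer(x, np.ones(p))  # gene set with a shared shift

def score(labels, G):
    # Sum of squared per-gene mean differences between the two groups.
    return np.sum((G[labels == 1].mean(axis=0) - G[labels == 0].mean(axis=0)) ** 2)

obs = score(x, G)
perm = np.array([score(rng.permutation(x), G) for _ in range(2000)])
print((perm >= obs).mean())             # permutation p-value
```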

  17. Constrained principal component analysis and related techniques

    CERN Document Server

    Takane, Yoshio

    2013-01-01

    In multivariate data analysis, regression techniques predict one set of variables from another while principal component analysis (PCA) finds a subspace of minimal dimensionality that captures the largest variability in the data. How can regression analysis and PCA be combined in a beneficial way? Why and when is it a good idea to combine them? What kind of benefits are we getting from them? Addressing these questions, Constrained Principal Component Analysis and Related Techniques shows how constrained PCA (CPCA) offers a unified framework for these approaches.The book begins with four concre

  18. Pump Component Model in SPACE Code

    International Nuclear Information System (INIS)

    Kim, Byoung Jae; Kim, Kyoung Doo

    2010-08-01

    This technical report describes the pump component model in the SPACE code. A literature survey was made of pump models in existing system codes. The models embedded in the SPACE code were examined to check for conflicts with intellectual property rights. Design specifications, computer coding implementation, and test results are included in this report

  19. Overview of the model component in ECOCLIM

    DEFF Research Database (Denmark)

    Geels, Camilla; Boegh, Eva; Bendtsen, J

    and atmospheric models. We will use the model system to 1) quantify the potential effects of climate change on ecosystem exchange of GHG and 2) estimate the impacts of changes in management practices including land use change and nitrogen (N) loads. Here the various model components will be introduced...

  20. Analysis Method for Integrating Components of Product

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jun Ho [Inzest Co. Ltd, Seoul (Korea, Republic of); Lee, Kun Sang [Kookmin Univ., Seoul (Korea, Republic of)

    2017-04-15

    This paper presents methods for integrating the parts constituting a product. A new relation function concept and its structure are introduced to analyze the relationships between component parts. This relation function carries three types of information, which can be used to establish a relation function structure. The relation function structure of the analysis criteria was established to analyze and present the data. The priority components determined by the analysis criteria can be integrated. The analysis criteria were divided based on their number and orientation, as well as their direct or indirect characteristic features. This paper presents a design algorithm for component integration. The algorithm was applied to actual products, and the components inside each product were integrated. The proposed algorithm was then used in a study to improve bicycle brake discs, and an improved product consistent with the derived relation function structure was actually created.

  1. Analysis Method for Integrating Components of Product

    International Nuclear Information System (INIS)

    Choi, Jun Ho; Lee, Kun Sang

    2017-01-01

    This paper presents methods for integrating the parts constituting a product. A new relation function concept and its structure are introduced to analyze the relationships between component parts. This relation function carries three types of information, which can be used to establish a relation function structure. The relation function structure of the analysis criteria was established to analyze and present the data. The priority components determined by the analysis criteria can be integrated. The analysis criteria were divided based on their number and orientation, as well as their direct or indirect characteristic features. This paper presents a design algorithm for component integration. The algorithm was applied to actual products, and the components inside each product were integrated. The proposed algorithm was then used in a study to improve bicycle brake discs, and an improved product consistent with the derived relation function structure was actually created.

  2. Analysis of the Relations among the Components of Technological Pedagogical and Content Knowledge (TPACK): A Structural Equation Model

    Science.gov (United States)

    Celik, Ismail; Sahin, Ismail; Akturk, Ahmet Oguz

    2014-01-01

    In the current study, the model of technological pedagogical and content knowledge (TPACK) is used as the theoretical framework in the process of data collection and interpretation of the results. This study analyzes the perceptions of 744 undergraduate students regarding their TPACK levels measured by responses to a survey developed by Sahin…

  3. Component evaluation testing and analysis algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    Hart, Darren M.; Merchant, Bion John

    2011-10-01

    The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.

  4. Ultrastructural analysis of metal particles released from stainless steel and titanium miniplate components in an animal model.

    Science.gov (United States)

    Matthew, I R; Frame, J W

    1998-01-01

    Low-vacuum scanning electron microscopy (Ivac SEM) was used to characterize the appearance of metal particles released from stressed and unstressed Champy miniplates placed in dogs and to study the relationship of the debris to the surrounding tissues. Under general endotracheal anesthesia, two Champy miniplates (titanium or stainless steel) were placed on the frontal bone in an animal model. One miniplate was bent to fit the curvature of the frontal bone (unstressed) and another miniplate of the same material was bent in a curve until the midpoint was raised 3 mm above the ends. The latter miniplate adapted to the skull curvature under tension during screw insertion (stressed). The miniplates and surrounding tissues were retrieved after intervals of 4, 12, and 24 weeks. Decalcified sections were prepared and examined by light microscopy and Ivac SEM. Under Ivac SEM examination, the titanium particles had a smooth, polygonal outline. Stainless steel particles were typically spherical, with numerous small projections on the surface. Most particles were 1 to 10 microns in diameter. The tissue response to the particles was variable; some particles were covered by fibrous connective tissue or enclosed by bone, and others were intracellular. The metal particles released from stressed or unstressed Champy miniplates were similar, and this was related to their source of origin and duration within the tissues. The tissue response to the particles appeared to depend on their location.

  5. Dynamic Modal Analysis of Vertical Machining Centre Components

    OpenAIRE

    Anayet U. Patwari; Waleed F. Faris; A. K. M. Nurul Amin; S. K. Loh

    2009-01-01

    The paper presents a systematic procedure and details of the use of experimental and analytical modal analysis techniques for the structural dynamic evaluation of a vertical machining centre. The main results deal with the assessment of the mode shapes of the different components of the vertical machining centre. A simplified experimental modal analysis of the different components of the milling machine was carried out. This model of the different machine tool's structure is made by design software...

  6. The Utility of Job Dimensions Based on Form B of the Position Analysis Questionnaire (PAQ) in a Job Component Validation Model. Report No. 5.

    Science.gov (United States)

    Marquardt, Lloyd D.; McCormick, Ernest J.

    The study involved the use of a structured job analysis instrument called the Position Analysis Questionnaire (PAQ) as the direct basis for the establishment of the job component validity of aptitude tests (that is, a procedure for estimating the aptitude requirements for jobs strictly on the basis of job analysis data). The sample of jobs used…

  7. Experimental and principal component analysis of waste ...

    African Journals Online (AJOL)

    The present study is aimed at determining through principal component analysis the most important variables affecting bacterial degradation in ponds. Data were collected from literature. In addition, samples were also collected from the waste stabilization ponds at the University of Nigeria, Nsukka and analyzed to ...

  8. Principal Component Analysis as an Efficient Performance ...

    African Journals Online (AJOL)

    This paper uses the principal component analysis (PCA) to examine the possibility of using few explanatory variables (X's) to explain the variation in Y. It applied PCA to assess the performance of students in Abia State Polytechnic, Aba, Nigeria. This was done by estimating the coefficients of eight explanatory variables in a ...

  9. Independent component analysis for understanding multimedia content

    DEFF Research Database (Denmark)

    Kolenda, Thomas; Hansen, Lars Kai; Larsen, Jan

    2002-01-01

    Independent component analysis of combined text and image data from Web pages has potential for search and retrieval applications by providing more meaningful and context dependent content. It is demonstrated that ICA of combined text and image features has a synergistic effect, i.e., the retrieval...

  10. Independent component and pathway-based analysis of miRNA-regulated gene expression in a model of type 1 diabetes

    Directory of Open Access Journals (Sweden)

    Hagedorn Peter H

    2011-02-01

    Full Text Available Abstract Background Several approaches have been developed for miRNA target prediction, including methods that incorporate expression profiling. However, the methods are still in need of improvement due to a high false discovery rate. So far, none of the methods have used independent component analysis (ICA). Here, we developed a novel target prediction method based on ICA that incorporates both seed matching and expression profiling of miRNA and mRNA expression. The method was applied to a cellular model of type 1 diabetes. Results Microarray profiling identified eight miRNAs (miR-124/128/192/194/204/375/672/708) with differential expression. Applying ICA to the mRNA profiling data revealed five significant independent components (ICs) correlating to the experimental conditions. The five ICs also captured the miRNA expressions by explaining >97% of their variance. By using ICA, seven of the eight miRNAs showed significant enrichment of sequence-predicted targets, compared to only four miRNAs when using simple negative correlation. The ICs were enriched for miRNA targets that function in diabetes-relevant pathways, e.g. type 1 and type 2 diabetes and maturity onset diabetes of the young (MODY). Conclusions In this study, ICA was applied as an attempt to separate the various factors that influence mRNA expression in order to identify miRNA targets. The results suggest that ICA is better at identifying miRNA targets than negative correlation. Additionally, combining ICA and pathway analysis constitutes a means of prioritizing between the predicted miRNA targets. Applying the method to a model of type 1 diabetes resulted in the identification of eight miRNAs that appear to affect pathways of relevance to disease mechanisms in diabetes.
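
    The ICA decomposition step described here is simple to sketch. Below is a minimal, hedged illustration with simulated stand-in data: decompose an expression matrix into independent components, then inspect the gene loadings of a component for candidate coordinated targets (the enrichment and pathway steps are not shown).

```python
# ICA of a (samples x genes) expression matrix with FastICA.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
n_samples, n_genes = 24, 500
expr = rng.normal(size=(n_samples, n_genes))   # stand-in mRNA expression profiles

ica = FastICA(n_components=5, random_state=0)
sample_modes = ica.fit_transform(expr)         # IC activity per sample
gene_loadings = ica.components_                # shape (5, n_genes)

# Genes with extreme loadings on a component are candidate coordinated targets.
top_genes = np.argsort(np.abs(gene_loadings[0]))[-20:]
print(top_genes)
```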

  11. Principal component analysis (PCA) and second-order global hard-modelling for the complete resolution of transition metal ion complex formation with 1,10-phenanthroline

    Energy Technology Data Exchange (ETDEWEB)

    Shariati-Rad, Masoud [Faculty of Chemistry, Bu-Ali Sina University, Hamedan 65174 (Iran, Islamic Republic of); Hasani, Masoumeh, E-mail: hasani@basu.ac.ir [Faculty of Chemistry, Bu-Ali Sina University, Hamedan 65174 (Iran, Islamic Republic of)

    2009-08-19

    Second-order global hard-modelling was applied to resolve the complex formation between Co{sup 2+}, Ni{sup 2+}, and Cd{sup 2+} cations and 1,10-phenanthroline. The highly correlated spectral and concentration profiles of the species in these systems and the low concentration of some species in the individual collected data matrices prevent good resolution of the profiles. Therefore, a collection of six equilibrium data matrices, comprising series of absorption spectra taken with pH changes at different reactant ratios, was analyzed. First, a precise principal component analysis (PCA) of different augmented arrangements of the individual data matrices was used to determine the number of species involved in the equilibria. Based on the results of PCA, the equilibria included in the data were specified, and second-order global hard-modelling of the appropriate arrangement of the six collected equilibrium data matrices resulted in well-resolved profiles and equilibrium constants. The protonation constant of the ligand (1,10-phenanthroline) and the spectral profiles of its protonated and unprotonated forms are additional information obtained by the global analysis. For comparison, multivariate curve resolution-alternating least squares (MCR-ALS) was applied to the same data. The results showed that second-order global hard-modelling is more convenient than MCR-ALS, especially for systems with a completely known model. It can completely resolve the system, and the concentration profiles are closer to the correct ones. Moreover, parameters describing the goodness of fit are better with second-order global hard-modelling.
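
    The PCA step described here, estimating the number of absorbing species from an augmented data matrix, can be sketched with a singular value decomposition. The example below is a simulated stand-in (random pure spectra and concentration profiles, not the paper's titration data); the point is the gap in the singular values after the true number of species.

```python
# Rank estimation of a column-wise augmented absorbance matrix by SVD.
import numpy as np

rng = np.random.default_rng(7)
n_wl, n_pH, n_species = 80, 30, 3
S  = np.abs(rng.normal(size=(n_wl, n_species)))    # pure spectra (wavelength x species)
C1 = np.abs(rng.normal(size=(n_pH, n_species)))    # concentration profiles, titration 1
C2 = np.abs(rng.normal(size=(n_pH, n_species)))    # concentration profiles, titration 2

D_aug = np.vstack([C1 @ S.T, C2 @ S.T])            # augmented matrix: two stacked runs
D_aug += 1e-3 * rng.normal(size=D_aug.shape)       # measurement noise

sv = np.linalg.svd(D_aug, compute_uv=False)
print(sv[:6])   # a clear drop after the 3rd singular value indicates 3 species
```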

  12. Principal component analysis (PCA) and second-order global hard-modelling for the complete resolution of transition metal ion complex formation with 1,10-phenanthroline

    International Nuclear Information System (INIS)

    Shariati-Rad, Masoud; Hasani, Masoumeh

    2009-01-01

    Second-order global hard-modelling was applied to resolve the complex formation between Co2+, Ni2+, and Cd2+ cations and 1,10-phenanthroline. The highly correlated spectral and concentration profiles of the species in these systems and the low concentration of some species in the individual collected data matrices prevent good resolution of the profiles. Therefore, a collection of six equilibrium data matrices, comprising series of absorption spectra taken with pH changes at different reactant ratios, was analyzed. First, a precise principal component analysis (PCA) of different augmented arrangements of the individual data matrices was used to determine the number of species involved in the equilibria. Based on the results of PCA, the equilibria included in the data were specified, and second-order global hard-modelling of the appropriate arrangement of the six collected equilibrium data matrices resulted in well-resolved profiles and equilibrium constants. The protonation constant of the ligand (1,10-phenanthroline) and the spectral profiles of its protonated and unprotonated forms are additional information obtained by the global analysis. For comparison, multivariate curve resolution-alternating least squares (MCR-ALS) was applied to the same data. The results showed that second-order global hard-modelling is more convenient than MCR-ALS, especially for systems with a completely known model. It can completely resolve the system, and the concentration profiles are closer to the correct ones. Moreover, parameters describing the goodness of fit are better with second-order global hard-modelling.

  13. Probabilistic Modeling of Wind Turbine Drivetrain Components

    DEFF Research Database (Denmark)

    Rafsanjani, Hesam Mirzaei

    Wind energy is one of several energy sources in the world and a rapidly growing industry in the energy sector. When placed in offshore or onshore locations, wind turbines are exposed to wave excitations, highly dynamic wind loads and/or the wakes from other wind turbines. Therefore, most components...... in a wind turbine experience highly dynamic and time-varying loads. These components may fail due to wear or fatigue, and this can lead to unplanned shutdown repairs that are very costly. The design by deterministic methods using safety factors is generally unable to account for the many uncertainties. Thus......, a reliability assessment should be based on probabilistic methods where stochastic modeling of failures is performed. This thesis focuses on probabilistic models and the stochastic modeling of the fatigue life of the wind turbine drivetrain. Hence, two approaches are considered for stochastic modeling...

  14. Condition monitoring with Mean field independent components analysis

    DEFF Research Database (Denmark)

    Pontoppidan, Niels Henrik; Sigurdsson, Sigurdur; Larsen, Jan

    2005-01-01

    We discuss condition monitoring based on mean field independent component analysis of acoustic emission energy signals. Within this framework it is possible to formulate a generative model that explains the sources, their mixing, and also the noise statistics of the observed signals. By using a novelty approach we may detect unseen faulty signals as indeed faulty with high precision, even though the model learns only from normal signals. This is done by evaluating the likelihood that the model generated the signals and adopting a simple threshold for the decision. Acoustic emission energy signals from a large diesel engine are used to demonstrate this approach. The results show that mean field independent component analysis gives better fault detection than principal component analysis, while at the same time selecting a more compact model...
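
    The novelty-detection logic is easy to sketch. The snippet below is a hedged stand-in that replaces the paper's mean field ICA with a simple Gaussian latent model: fit on normal-condition signals only, then flag any signal whose model likelihood falls below a threshold chosen from the training data.

```python
# Likelihood-threshold novelty detection trained on normal signals only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
normal = rng.normal(0.0, 1.0, size=(300, 10))   # normal-condition feature vectors
faulty = rng.normal(1.5, 1.0, size=(20, 10))    # unseen fault condition

pca = PCA(n_components=5).fit(normal)           # latent subspace of normal operation
model = GaussianMixture(n_components=1).fit(pca.transform(normal))

# Threshold at the 1st percentile of training log-likelihoods.
threshold = np.quantile(model.score_samples(pca.transform(normal)), 0.01)
flagged = model.score_samples(pca.transform(faulty)) < threshold
print(flagged.mean())                           # fraction of faulty signals detected
```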

  15. Simulation of Sediment Yield in a Semi-Arid River Basin under Changing Land Use: An Integrated Approach of Hydrologic Modelling and Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Charles Gyamfi

    2016-11-01

    Full Text Available Intensified human activities over the past decades have culminated in dire environmental consequences of sediment yield, resulting mainly from land use changes. Understanding the role that land use changes play in the dynamics of sediment yield would greatly enhance decision-making processes related to land use and water resources management. In this study, we investigated the impacts of land use and cover changes on sediment yield dynamics through an integrated approach of hydrologic modelling and principal component analysis (PCA). A three-phase land use scenario (2000, 2007 and 2013) employing the “fix-changing” method was used to simulate the sediment yield of the Olifants Basin. Contributions of the changes in individual land uses to sediment yield were assessed using the component and pattern matrices of PCA. Our results indicate that sediment yield dynamics in the study area are significantly attributable to changes in agricultural, urban and forested lands. Changes in agricultural and urban lands were directly proportional to the sediment yield dynamics of the Olifants Basin. On the contrary, forested areas had a negative relationship with sediment yield, indicating less sediment yield from these areas. The output of this research work provides a straightforward approach to evaluating the impacts of land use changes on sediment yield. The tools and methods used are relevant for policy directions on land and water resources planning and management.

  16. Efficient transfer of sensitivity information in multi-component models

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Rabiti, Cristian

    2011-01-01

    In support of adjoint-based sensitivity analysis, this manuscript presents a new method to efficiently transfer adjoint information between components in a multi-component model, where the output of one component is passed as input to the next component. Often, one is interested in evaluating the sensitivities of the responses calculated by the last component to the inputs of the first component in the overall model. The presented method has two advantages over existing methods, which may be classified into two broad categories: brute-force-type methods and amalgamated-type methods. First, the presented method determines the minimum number of adjoint evaluations for each component, as opposed to the brute-force-type methods, which require full evaluation of all sensitivities for all responses calculated by each component in the overall model and prove computationally prohibitive for realistic problems. Second, the new method treats each component as a black box, as opposed to amalgamated-type methods, which require explicit knowledge of the system of equations associated with each component in order to reach the minimum number of adjoint evaluations. (author)
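
    A worked toy example makes the economics of the adjoint direction concrete. In the sketch below, component A maps many inputs x to an intermediate state u and component B maps u to a few responses r; the Jacobians are random stand-ins. With few responses, one transposed-Jacobian (adjoint) product per response recovers a full row of the chained sensitivity matrix, instead of one forward evaluation per input.

```python
# Chained sensitivities across two components via the chain rule.
import numpy as np

rng = np.random.default_rng(6)
J_A = rng.normal(size=(50, 1000))   # du/dx for component A (1000 inputs, assumed)
J_B = rng.normal(size=(2, 50))      # dr/du for component B (2 responses, assumed)

# Full sensitivity of the final responses to the first component's inputs.
dr_dx = J_B @ J_A                   # shape (2, 1000)

# Adjoint evaluation for response 0: propagate a single vector backwards.
adjoint_row = J_A.T @ J_B[0]        # one matrix-vector product per response
print(np.allclose(adjoint_row, dr_dx[0]))   # True: same sensitivities
```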

  17. Principal Component Clustering Approach to Teaching Quality Discriminant Analysis

    Science.gov (United States)

    Xian, Sidong; Xia, Haibo; Yin, Yubo; Zhai, Zhansheng; Shang, Yan

    2016-01-01

    Teaching quality is the lifeline of higher education. Many universities have made effective achievements in evaluating teaching quality. In this paper, we establish a students' evaluation of teaching (SET) discriminant analysis model and algorithm based on principal component clustering analysis. Additionally, we classify the SET…

  18. BUSINESS PROCESS MANAGEMENT SYSTEMS TECHNOLOGY COMPONENTS ANALYSIS

    Directory of Open Access Journals (Sweden)

    Andrea Giovanni Spelta

    2007-05-01

    Full Text Available The information technology that supports the implementation of the business process management approach is called a Business Process Management System (BPMS). The main components of the BPMS solution framework are the process definition repository, process instances repository, transaction manager, connectors framework, process engine and middleware. In this paper we define and characterize the role and importance of the components of the BPMS framework. The research method adopted was the case study, through the analysis of the implementation of the BPMS solution in an insurance company called Chubb do Brasil. In the case study, the process "Manage Coinsured Events" is described and characterized, as well as the components of the BPMS solution adopted and implemented by Chubb do Brasil for managing this process.

  19. Lifetime analysis of fusion-reactor components

    International Nuclear Information System (INIS)

    Mattas, R.F.

    1983-01-01

    A one-dimensional computer code has been developed to examine the lifetime of first-wall and impurity-control components. The code incorporates the operating and design parameters, the material characteristics, and the appropriate failure criteria for the individual components. The major emphasis of the modelling effort has been to calculate the temperature-stress-strain-radiation effects history of a component so that the synergistic effects between sputtering erosion, swelling, creep, fatigue, and crack growth can be examined. The general forms of the property equations are the same for all materials in order to provide the greatest flexibility for materials selection in the code. The code is capable of determining the behavior of a plate, composed of either a single or dual material structure, that is either totally constrained or constrained from bending but not from expansion. The code has been utilized to analyze the first walls for FED/INTOR and DEMO.

  20. Thermochemical modelling of multi-component systems

    International Nuclear Information System (INIS)

    Sundman, B.; Gueneau, C.

    2015-01-01

    Computational thermodynamics, also known as the Calphad method, is a standard tool in industry for the development of materials and the improvement of processes, and there is intense scientific development of new models and databases. The calculations are based on thermodynamic models of the Gibbs energy of each phase as a function of temperature, pressure and constitution. Model parameters are stored in databases that are developed in an international scientific collaboration. In this way, consistent and reliable data for many properties like heat capacity, chemical potentials, solubilities etc. can be obtained for multi-component systems. A brief introduction to this technique is given here and references to more extensive documentation are provided. (authors)
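
    To make the central idea tangible, the sketch below evaluates a textbook regular-solution Gibbs energy for a hypothetical binary phase; the end-member energies and the interaction parameter Omega are made-up illustration values, not parameters from an actual Calphad database assessment.

        import numpy as np

        # Regular-solution sketch of the Calphad idea: molar Gibbs energy of
        # a binary phase as a function of composition x (fraction of A) at T.
        R = 8.314            # J/(mol K)
        G_A, G_B = 0.0, 500.0
        Omega = 12000.0      # positive Omega promotes demixing

        def gibbs(x, T):
            mix_ideal = R * T * (x * np.log(x) + (1 - x) * np.log(1 - x))
            return x * G_A + (1 - x) * G_B + mix_ideal + Omega * x * (1 - x)

        x = np.linspace(1e-4, 1 - 1e-4, 1001)
        for T in (500.0, 900.0):
            G = gibbs(x, T)
            # Below the critical temperature (~Omega/2R) the curve develops
            # two local minima, signalling a miscibility gap.
            print(f"T = {T:.0f} K: min G = {G.min():.0f} J/mol at x = {x[np.argmin(G)]:.2f}")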

  1. Evaluation of chemical transport model predictions of primary organic aerosol for air masses classified by particle component-based factor analysis

    Directory of Open Access Journals (Sweden)

    C. A. Stroud

    2012-09-01

    Full Text Available Observations from the 2007 Border Air Quality and Meteorology Study (BAQS-Met 2007) in Southern Ontario, Canada, were used to evaluate predictions of primary organic aerosol (POA) and two other carbonaceous species, black carbon (BC) and carbon monoxide (CO), made for this summertime period by Environment Canada's AURAMS regional chemical transport model. Particle component-based factor analysis was applied to aerosol mass spectrometer measurements made at one urban site (Windsor, ON) and two rural sites (Harrow and Bear Creek, ON) to derive hydrocarbon-like organic aerosol (HOA) factors. A novel diagnostic model evaluation was performed by investigating model POA bias as a function of HOA mass concentration and indicator ratios (e.g. BC/HOA). Eight case studies were selected based on factor analysis and back trajectories to help classify model bias for certain POA source types. By considering model POA bias in relation to co-located BC and CO biases, a plausible story is developed that explains the model biases for all three species.

    At the rural sites, daytime mean PM1 POA mass concentrations were under-predicted compared to observed HOA concentrations. POA under-predictions were accentuated when the transport arriving at the rural sites was from the Detroit/Windsor urban complex and during short-term periods of biomass-burning influence. Interestingly, daytime CO concentrations were only slightly under-predicted at both rural sites, whereas CO was over-predicted at the urban Windsor site with a normalized mean bias of 134%; good agreement was observed at Windsor between the daytime PM1 POA and HOA mean values (1.1 μg m−3 and 1.2 μg m−3, respectively). Biases in model POA predictions also trended from positive to negative with increasing HOA values. Periods of POA over-prediction were most evident at the urban site on calm nights, due to an overly stable model surface layer.

  2. Equation-free analysis of two-component system signalling model reveals the emergence of co-existing phenotypes in the absence of multistationarity.

    Directory of Open Access Journals (Sweden)

    Rebecca B Hoyle

    Full Text Available Phenotypic differences of genetically identical cells under the same environmental conditions have been attributed to the inherent stochasticity of biochemical processes. Various mechanisms have been suggested, including the existence of alternative steady states in regulatory networks that are reached by means of stochastic fluctuations, long transient excursions from a stable state to an unstable excited state, and the switching on and off of a reaction network according to the availability of a constituent chemical species. Here we analyse a detailed stochastic kinetic model of two-component system signalling in bacteria, and show that alternative phenotypes emerge in the absence of these features. We perform a bifurcation analysis of deterministic reaction rate equations derived from the model, and find that they cannot reproduce the whole range of qualitative responses to external signals demonstrated by direct stochastic simulations. In particular, the mixed mode, where stochastic switching and a graded response are seen simultaneously, is absent. However, probabilistic and equation-free analyses of the stochastic model that calculate stationary states for the mean of an ensemble of stochastic trajectories reveal that slow transcription of either response regulator or histidine kinase leads to the coexistence of an approximate basal solution and a graded response that combine to produce the mixed mode, thus establishing its essential stochastic nature. The same techniques also show that stochasticity results in the observation of an all-or-none bistable response over a much wider range of external signals than would be expected on deterministic grounds. Thus we demonstrate the application of numerical equation-free methods to a detailed biochemical reaction network model, and show that it can provide new insight into the role of stochasticity in the emergence of phenotypic diversity.
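
    For readers unfamiliar with the underlying machinery, the sketch below runs a minimal Gillespie stochastic simulation of a birth-death process and reports an ensemble mean, the kind of coarse-grained quantity on which equation-free analyses operate; the rate constants are made-up and the network is far simpler than the paper's two-component signalling model.

        import numpy as np

        rng = np.random.default_rng(8)
        # Minimal Gillespie (SSA) sketch: production at constant rate k1,
        # degradation at rate k2 * n.
        k1, k2, t_end = 10.0, 0.5, 20.0

        def trajectory():
            t, n = 0.0, 0
            while True:
                rates = np.array([k1, k2 * n])
                total = rates.sum()
                t += rng.exponential(1.0 / total)
                if t > t_end:
                    return n
                if rng.uniform() < rates[0] / total:
                    n += 1
                else:
                    n -= 1

        # The ensemble mean over many stochastic trajectories is the kind of
        # quantity equation-free methods treat as a coarse variable.
        ensemble = [trajectory() for _ in range(500)]
        print("ensemble mean:", np.mean(ensemble),
              "(deterministic fixed point:", k1 / k2, ")")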

  3. Determining the number of components in principal components analysis: A comparison of statistical, crossvalidation and approximated methods

    NARCIS (Netherlands)

    Saccenti, E.; Camacho, J.

    2015-01-01

    Principal component analysis is one of the most commonly used multivariate tools to describe and summarize data. Determining the optimal number of components in a principal component model is a fundamental problem in many fields of application. In this paper we compare the performance of several…

  4. ANOVA-principal component analysis and ANOVA-simultaneous component analysis: a comparison.

    NARCIS (Netherlands)

    Zwanenburg, G.; Hoefsloot, H.C.J.; Westerhuis, J.A.; Jansen, J.J.; Smilde, A.K.

    2011-01-01

    ANOVA-simultaneous component analysis (ASCA) is a recently developed tool to analyze multivariate data. In this paper, we enhance the explorative capability of ASCA by introducing a projection of the observations on the principal component subspace to visualize the variation among the measurements.

  5. Improvement of Binary Analysis Components in Automated Malware Analysis Framework

    Science.gov (United States)

    2017-02-21

    AFRL-AFOSR-JP-TR-2017-0018. Improvement of Binary Analysis Components in Automated Malware Analysis Framework; Keiji Takeda, Keio University; final report covering 26 May 2015 to 25 Nov 2016. The project improved the binary analysis components of a framework that analyzes malicious software (malware) with minimum human interaction; the system autonomously analyzes malware samples by analyzing malware binary programs…

  6. Mapping ash properties using principal components analysis

    Science.gov (United States)

    Pereira, Paulo; Brevik, Eric; Cerda, Artemi; Ubeda, Xavier; Novara, Agata; Francos, Marcos; Rodrigo-Comino, Jesus; Bogunovic, Igor; Khaledian, Yones

    2017-04-01

    In post-fire environments ash has important benefits for soils, such as protection and a source of nutrients, crucial for vegetation recuperation (Jordan et al., 2016; Pereira et al., 2015a; 2016a,b). The thickness and distribution of ash are fundamental aspects of soil protection (Cerdà and Doerr, 2008; Pereira et al., 2015b), and the severity at which it was produced is important for the type and amount of elements that are released into the soil solution (Bodi et al., 2014). Ash is a very mobile material, and where it will be deposited is important: until the first rainfalls it is very mobile; afterwards it binds to the soil surface and is harder to erode. Mapping ash properties in the immediate period after a fire is complex, since the ash is constantly moving (Pereira et al., 2015b). However, it is an important task, since from the amount and type of ash produced we can identify the degree of soil protection and the nutrients that will be dissolved. The objective of this work is to map ash properties (CaCO3, pH, and selected extractable elements) using principal component analysis (PCA) in the immediate period after the fire. Four days after the fire we established a grid in a 9 x 27 m area and took ash samples every 3 meters, for a total of 40 sampling points (Pereira et al., 2017). The PCA identified 5 different factors. Factor 1 had high positive loadings in electrical conductivity, calcium, and magnesium and negative loadings in aluminum and iron, while Factor 2 had high positive loadings in total phosphorous and silica. Factor 3 showed high positive loadings in sodium and potassium, Factor 4 high negative loadings in CaCO3 and pH, and Factor 5 high loadings in sodium and potassium. The experimental variograms of the extracted factors showed that the Gaussian model was the most precise for modelling Factor 1, the linear model for Factor 2, and the wave hole effect model for Factors 3, 4 and 5. The maps produced confirm the patterns observed in the experimental variograms. Factors 1 and 2…

  7. Fault tree analysis with multistate components

    International Nuclear Information System (INIS)

    Caldarola, L.

    1979-02-01

    A general analytical theory has been developed which allows one to calculate the occurrence probability of the top event of a fault tree with multistate (more than two states) components. It is shown that, in order to correctly describe a system with multistate components, a special type of Boolean algebra is required. This is called 'Boolean algebra with restrictions on variables' and its basic rules are the same as those of the traditional Boolean algebra, with some additional restrictions on the variables. These restrictions are extensively discussed in the paper. Important features of the method are the identification of the complete base and of the smallest irredundant base of a Boolean function which does not necessarily need to be coherent. It is shown that the identification of the complete base of a Boolean function requires the application of some algorithms which are not used in today's computer programmes for fault tree analysis. The problem of statistical dependence among primary components is discussed. The paper includes a small demonstrative example to illustrate the method. The example also includes statistically dependent components. (orig.) [de]
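
    As a hedged illustration of the multistate idea (though not of the paper's Boolean algebra with restrictions on variables), the sketch below computes a top-event probability by brute-force enumeration over the states of two hypothetical independent components; the state sets, probabilities and top-event logic are all invented for illustration.

        import itertools

        # Hypothetical two-component system; each component has three states
        # whose probabilities sum to 1 (independence assumed).
        states = {
            "valve": {"open": 0.90, "stuck": 0.07, "leaking": 0.03},
            "pump":  {"running": 0.95, "degraded": 0.04, "failed": 0.01},
        }

        def top_event(assignment):
            # Illustrative top-event logic: the system fails if the pump has
            # failed, or if it is degraded while the valve is not open.
            return (assignment["pump"] == "failed" or
                    (assignment["pump"] == "degraded" and
                     assignment["valve"] != "open"))

        names = list(states)
        p_top = 0.0
        for combo in itertools.product(*(states[n] for n in names)):
            assignment = dict(zip(names, combo))
            p = 1.0
            for n in names:
                p *= states[n][assignment[n]]
            if top_event(assignment):
                p_top += p
        print(f"Top event probability: {p_top:.4f}")   # 0.0140 here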

  8. Multilevel sparse functional principal component analysis.

    Science.gov (United States)

    Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S

    2014-01-29

    We consider the analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and the data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both the between- and within-subject levels. We address inherent methodological differences in the sparse sampling context to: (1) estimate the covariance operators; (2) estimate the functional principal component scores; and (3) predict the underlying curves. Through simulations, the proposed method is able to discover dominating modes of variation and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.

  9. Probabilistic risk analysis of aging components which fail on demand. A Bayesian model. Application to maintenance optimization of Diesel engine linings

    International Nuclear Information System (INIS)

    Procaccia, H.; Lannoy, A.; Clarotti, C.A.

    1997-01-01

    The optimization of preventive maintenance of aging materials can be well modelled through the application of the two-parameter Weibull law. Unfortunately, there is no theoretical model describing the aging of components involved in emergency interventions, which may fail at the very moment of demand. These components are relevant to the safety of Nuclear Power Plants, and the main problem with them is the optimization of the maintenance and test schedule. Indeed, the effect of any test is twofold: the test reveals failures which would otherwise occur when the Power Plant needs urgent intervention, but at the same time it causes the aging process to step up. The aging process is suspected to affect certain safeguard equipment, but no theoretical model exists of component aging as a function of demands. Sections 2.1 and 2.2 of the paper are devoted to the description of a Bayesian model of discrete aging, deriving the likelihood function of the model (data modelling) and selecting 'the best prior' (analyst knowledge) that can be associated with the likelihood, respectively. The aspects of numerical integration relative to assessing the predictive reliability of the aging components are discussed in Section 2.3. Section 3 reports a Reliability Centered Maintenance application of the model to cylinders of emergency electrical generators in the nuclear sector. These components are subjected to annual endoscopic inspection and to systematic replacement every 5 years. Statistical Bayesian decision theory was applied to determine an optimal period of cylinder replacement based on the aging model presented in this paper. It is shown that the optimal value is between 10 and 12 years. (author)
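
    The sketch below illustrates the flavour of such an optimization with a classical (non-Bayesian) age-replacement calculation under a two-parameter Weibull life model; the shape, scale and cost values are made-up, and the paper's discrete-aging Bayesian model is not reproduced here.

        import numpy as np

        # Age-replacement sketch with a two-parameter Weibull life model.
        beta, eta = 2.5, 12.0        # shape, scale (years); illustrative
        c_prev, c_fail = 1.0, 10.0   # preventive vs. failure replacement cost

        def reliability(t):
            return np.exp(-(t / eta) ** beta)

        def cost_rate(T, n=2000):
            # Expected cost per unit time when replacing at age T or at failure:
            # [c_prev*R(T) + c_fail*(1-R(T))] / integral_0^T R(t) dt.
            t = np.linspace(0.0, T, n)
            mean_cycle = np.trapz(reliability(t), t)
            F = 1.0 - reliability(T)
            return (c_prev * (1.0 - F) + c_fail * F) / mean_cycle

        grid = np.linspace(1.0, 30.0, 300)
        T_opt = grid[np.argmin([cost_rate(T) for T in grid])]
        print(f"Optimal replacement age: {T_opt:.1f} years")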

  10. A Genealogical Interpretation of Principal Components Analysis

    Science.gov (United States)

    McVean, Gil

    2009-01-01

    Principal components analysis, PCA, is a statistical method commonly used in population genetics to identify structure in the distribution of genetic variation across geographical location and ethnic background. However, while the method is often used to inform about historical demographic processes, little is known about the relationship between fundamental demographic parameters and the projection of samples onto the primary axes. Here I show that for SNP data the projection of samples onto the principal components can be obtained directly from considering the average coalescent times between pairs of haploid genomes. The result provides a framework for interpreting PCA projections in terms of underlying processes, including migration, geographical isolation, and admixture. I also demonstrate a link between PCA and Wright's F_ST and show that SNP ascertainment has a largely simple and predictable effect on the projection of samples. Using examples from human genetics, I discuss the application of these results to empirical data and the implications for inference. PMID:19834557

  11. Robustness of Component Models in Energy System Simulators

    DEFF Research Database (Denmark)

    Elmegaard, Brian

    2003-01-01

    During the development of the component-based energy system simulator DNA (Dynamic Network Analysis), several obstacles to easy use of the program have been observed. Some of these have to do with the nature of the program being based on a modelling language, not a graphical user interface (GUI). Others have to do with the interaction between models of the nature of the substances in an energy system (e.g., fuels, air, flue gas), models of the components in a system (e.g., heat exchangers, turbines, pumps), and the solver for the system of equations. This paper proposes that the interaction…

  12. Localized indoor air quality monitoring for indoor pollutants' healthy risk assessment using sub-principal component analysis driven model and engineering big data

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Honglan; Kim, MinJeong; Lee, SeungChul; Pyo, SeHee; Esfahani, Iman Janghorban; Yoo, ChangKyoo [Kyung Hee University, Yongin (Korea, Republic of)

    2015-10-15

    Indoor air quality (IAQ) in subway systems shows periodic dynamics due to the number of passengers, train schedules, and air pollutants accumulated in the system, which together constitute engineering big data. We developed a new IAQ monitoring model using a sub-principal component analysis (sub-PCA) method to account for the periodic dynamics of the IAQ big data. In addition, the IAQ data in subway systems differ between weekdays and weekends due to a weekly effect, since the patterns of the number of passengers and their access times differ between weekdays and weekends. Sub-PCA-based local monitoring was developed to treat the weekday and weekend environmental IAQ big data separately. The monitoring results for the test data at the Y-subway station clearly showed that the proposed method could analyze environmental IAQ big data, improve the monitoring efficiency and greatly reduce the false alarm rate of local on-line monitoring in comparison with multi-way PCA.

  13. Localized indoor air quality monitoring for indoor pollutants' healthy risk assessment using sub-principal component analysis driven model and engineering big data

    International Nuclear Information System (INIS)

    Shi, Honglan; Kim, MinJeong; Lee, SeungChul; Pyo, SeHee; Esfahani, Iman Janghorban; Yoo, ChangKyoo

    2015-01-01

    Indoor air quality (IAQ) in subway systems shows periodic dynamics due to the number of passengers, train schedules, and air pollutants accumulated in the system, which together constitute engineering big data. We developed a new IAQ monitoring model using a sub-principal component analysis (sub-PCA) method to account for the periodic dynamics of the IAQ big data. In addition, the IAQ data in subway systems differ between weekdays and weekends due to a weekly effect, since the patterns of the number of passengers and their access times differ between weekdays and weekends. Sub-PCA-based local monitoring was developed to treat the weekday and weekend environmental IAQ big data separately. The monitoring results for the test data at the Y-subway station clearly showed that the proposed method could analyze environmental IAQ big data, improve the monitoring efficiency and greatly reduce the false alarm rate of local on-line monitoring in comparison with multi-way PCA.
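
    A minimal sketch of the local-monitoring idea, assuming synthetic data in place of subway measurements: one PCA model is fitted per operating regime (here weekday vs. weekend), and a new sample is scored against the model of its own regime via the squared prediction error. The data, variable counts and scoring are invented; this is not the authors' exact sub-PCA formulation.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        # Synthetic stand-in for hourly IAQ measurements (e.g. PM10, CO2, ...):
        # weekday and weekend periods get different correlation structures.
        weekday = rng.normal(size=(500, 6)) @ rng.normal(size=(6, 6))
        weekend = rng.normal(size=(200, 6)) @ rng.normal(size=(6, 6))

        def fit_local_model(X, n_components=3):
            mu, sd = X.mean(0), X.std(0)
            pca = PCA(n_components=n_components).fit((X - mu) / sd)
            return mu, sd, pca

        def spe(X, model):
            # Squared prediction error against the local PCA subspace.
            mu, sd, pca = model
            Z = (X - mu) / sd
            R = Z - pca.inverse_transform(pca.transform(Z))
            return (R ** 2).sum(axis=1)

        models = {"weekday": fit_local_model(weekday),
                  "weekend": fit_local_model(weekend)}
        # A new weekday sample is scored against the weekday model only.
        new_sample = weekday[:1] + 0.1
        print(spe(new_sample, models["weekday"]))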

  14. Radar fall detection using principal component analysis

    Science.gov (United States)

    Jokanovic, Branka; Amin, Moeness; Ahmad, Fauzia; Boashash, Boualem

    2016-05-01

    Falls are a major cause of fatal and nonfatal injuries in people aged 65 years and older. Radar has the potential to become one of the leading technologies for fall detection, thereby enabling the elderly to live independently. Existing techniques for fall detection using radar are based on manual feature extraction and require significant parameter tuning in order to provide successful detections. In this paper, we employ principal component analysis for fall detection, wherein eigen images of observed motions are employed for classification. Using real data, we demonstrate that the PCA-based technique provides a performance improvement over conventional feature extraction methods.
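
    A hedged sketch of the eigen-image idea: flattened time-frequency "images" are projected onto a few principal components and classified in the reduced space (here by nearest centroid, an assumption the paper does not specify). The image generator and class statistics are synthetic stand-ins for real radar returns.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)
        # Synthetic stand-in for flattened time-frequency images of radar
        # returns: class 0 = fall, class 1 = non-fall motion.
        def make_images(mean, n):
            return rng.normal(mean, 1.0, size=(n, 32 * 32))

        X_train = np.vstack([make_images(0.0, 40), make_images(0.6, 40)])
        y_train = np.array([0] * 40 + [1] * 40)

        pca = PCA(n_components=10).fit(X_train)      # the "eigen images"
        scores = pca.transform(X_train)
        centroids = np.stack([scores[y_train == c].mean(0) for c in (0, 1)])

        def classify(img):
            s = pca.transform(img.reshape(1, -1))
            return int(np.argmin(np.linalg.norm(centroids - s, axis=1)))

        print(classify(make_images(0.0, 1)[0]))      # expected: 0 (fall)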

  15. Pool scrubbing models for iodine components

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, K [Battelle Ingenieurtechnik GmbH, Eschborn (Germany)

    1996-12-01

    Pool scrubbing is an important mechanism to retain radioactive fission products from being carried into the containment atmosphere or into the secondary piping system. A number of models and computer codes have been developed to predict the retention of aerosols and fission product vapours that are released from the core and injected into water pools of BWR and PWR type reactors during severe accidents. Important codes in this field are BUSCA, SPARC and SUPRA. The present paper summarizes the models for scrubbing of gaseous Iodine components in these codes, discusses the experimental validation, and gives an assessment of the state of knowledge reached and the open questions which persist. The retention of gaseous Iodine components is modelled by the various codes in a very heterogeneous manner. Differences show up in the chemical species considered, the treatment of mass transfer boundary layers on the gaseous and liquid sides, the gas-liquid interface geometry, the calculation of equilibrium concentrations and the numerical procedures. Especially important is the determination of the pool water pH value. This value is affected by basic aerosols deposited in the water, e.g. Cesium and Rubidium compounds. A consistent model requires a mass balance of these compounds in the pool, thus effectively coupling the pool scrubbing phenomena of aerosols and gaseous Iodine species. Since the water pool conditions are also affected by drainage flow of condensate water from different regions in the containment, and desorption of dissolved gases on the pool surface is determined by the gas concentrations above the pool, some basic limitations of specialized pool scrubbing codes are given. The paper draws conclusions about the necessity of coupling between containment thermal-hydraulics and pool scrubbing models, and proposes ways of further simulation model development in order to improve source term predictions. (author) 2 tabs., refs.

  16. Pool scrubbing models for iodine components

    International Nuclear Information System (INIS)

    Fischer, K.

    1996-01-01

    Pool scrubbing is an important mechanism to retain radioactive fission products from being carried into the containment atmosphere or into the secondary piping system. A number of models and computer codes have been developed to predict the retention of aerosols and fission product vapours that are released from the core and injected into water pools of BWR and PWR type reactors during severe accidents. Important codes in this field are BUSCA, SPARC and SUPRA. The present paper summarizes the models for scrubbing of gaseous Iodine components in these codes, discusses the experimental validation, and gives an assessment of the state of knowledge reached and the open questions which persist. The retention of gaseous Iodine components is modelled by the various codes in a very heterogeneous manner. Differences show up in the chemical species considered, the treatment of mass transfer boundary layers on the gaseous and liquid sides, the gas-liquid interface geometry, the calculation of equilibrium concentrations and the numerical procedures. Especially important is the determination of the pool water pH value. This value is affected by basic aerosols deposited in the water, e.g. Cesium and Rubidium compounds. A consistent model requires a mass balance of these compounds in the pool, thus effectively coupling the pool scrubbing phenomena of aerosols and gaseous Iodine species. Since the water pool conditions are also affected by drainage flow of condensate water from different regions in the containment, and desorption of dissolved gases on the pool surface is determined by the gas concentrations above the pool, some basic limitations of specialized pool scrubbing codes are given. The paper draws conclusions about the necessity of coupling between containment thermal-hydraulics and pool scrubbing models, and proposes ways of further simulation model development in order to improve source term predictions. (author) 2 tabs., refs.

  17. Analysis of spiral components in 16 galaxies

    International Nuclear Information System (INIS)

    Considere, S.; Athanassoula, E.

    1988-01-01

    A Fourier analysis of the intensity distributions in the planes of 16 spiral galaxies of morphological types from 1 to 7 is performed. The galaxies processed are NGC 300, 598, 628, 2403, 2841, 3031, 3198, 3344, 5033, 5055, 5194, 5247, 6946, 7096, 7217, and 7331. The method, mathematically based upon a decomposition of a distribution into a superposition of individual logarithmic spiral components, is first used to determine for each galaxy the position angle PA and the inclination ω of the galaxy plane onto the sky plane. Our results, in good agreement with those obtained from the usual methods in the literature, are discussed. The decomposition of the deprojected galaxies into individual spiral components reveals that the two-armed component is everywhere dominant. Our pitch angles are then compared to the previously published ones, and their quality is checked by drawing each individual logarithmic spiral on the actual deprojected galaxy images. Finally, the surface intensities for angular periodicities of interest are calculated. A choice of a few of the most important ones is used to elaborate a composite image representing the main spiral features observed in the deprojected galaxies.
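
    The sketch below illustrates the kind of decomposition involved: a synthetic deprojected image expressed in (ln r, θ) coordinates is correlated against logarithmic spiral modes exp(i(mθ + p ln r)), and the amplitude peak identifies the dominant arm multiplicity m and radial wavenumber p. The image, grid and sign conventions are illustrative assumptions, not the authors' pipeline.

        import numpy as np

        rng = np.random.default_rng(9)
        n_r, n_t = 128, 256
        u = np.linspace(0.0, 3.0, n_r)                       # u = ln r
        theta = np.linspace(0, 2 * np.pi, n_t, endpoint=False)
        U, TH = np.meshgrid(u, theta, indexing="ij")

        # Synthetic two-armed spiral: intensity ridge along m*theta + p*u = const.
        m_true, p_true = 2, -6.0
        I = (1.0 + 0.5 * np.cos(m_true * TH + p_true * U)
                 + 0.05 * rng.normal(size=U.shape))

        # Correlate the image against log-spiral modes for m = 1, 2, 3.
        p_grid = np.linspace(-15, 15, 301)
        for m in (1, 2, 3):
            A = np.array([np.abs(np.sum(I * np.exp(-1j * (m * TH + p * U))))
                          for p in p_grid])
            print(f"m = {m}: max |A| = {A.max():.0f} at p = {p_grid[np.argmax(A)]:.1f}")
        # The m = 2 amplitude dominates, peaking near p = -6 as constructed.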

  18. Reformulating Component Identification as Document Analysis Problem

    NARCIS (Netherlands)

    Gross, H.G.; Lormans, M.; Zhou, J.

    2007-01-01

    One of the first steps of component procurement is the identification of required component features in large repositories of existing components. On the highest level of abstraction, component requirements as well as component descriptions are usually written in natural language. Therefore, we can…

  19. Computational needs for modelling accelerator components

    International Nuclear Information System (INIS)

    Hanerfeld, H.

    1985-06-01

    The particle-in-cell code MASK is being used to model several different electron accelerator components. These studies are being used both to design new devices and to understand particle behavior within existing structures. Studies include the injector for the Stanford Linear Collider and the 50 megawatt klystron currently being built at SLAC. MASK is a 2D electromagnetic code which is being used by SLAC both on our own IBM 3081 and on the CRAY X-MP at the NMFECC. Our experience with running MASK illustrates the need for supercomputers to continue work of the kind described. 3 refs., 2 figs.

  20. Reliability Analysis of Fatigue Fracture of Wind Turbine Drivetrain Components

    DEFF Research Database (Denmark)

    Berzonskis, Arvydas; Sørensen, John Dalsgaard

    2016-01-01

    …in the volume of the cast ductile iron main shaft, on the reliability of the component. The probabilistic reliability analysis conducted is based on fracture mechanics models. Additionally, the utilization of probabilistic reliability for operation and maintenance planning and quality control is discussed.

  1. Nonlinear principal component analysis and its applications

    CERN Document Server

    Mori, Yuichi; Makino, Naomichi

    2016-01-01

    This book expounds the principle and related applications of nonlinear principal component analysis (PCA), which is a useful method for analyzing data with mixed measurement levels. In the part dealing with the principle, after a brief introduction of ordinary PCA, a PCA for categorical data (nominal and ordinal) is introduced as nonlinear PCA, in which an optimal scaling technique is used to quantify the categorical variables. Alternating least squares (ALS) is the main algorithm in the method. Multiple correspondence analysis (MCA), a special case of nonlinear PCA, is also introduced. All formulations in these methods are integrated in the same manner as matrix operations. Because data of any measurement level can be treated consistently as numerical data and ALS is a very powerful tool for estimation, the methods can be utilized in a variety of fields such as biometrics, econometrics, psychometrics, and sociology. In the applications part of the book, four applications are introduced: variable selection for mixed…

  2. Analysis and Modeling for Short- to Medium-Term Load Forecasting Using a Hybrid Manifold Learning Principal Component Model and Comparison with Classical Statistical Models (SARIMAX, Exponential Smoothing and Artificial Intelligence Models (ANN, SVM: The Case of Greek Electricity Market

    Directory of Open Access Journals (Sweden)

    George P. Papaioannou

    2016-08-01

    Full Text Available In this work we propose a new hybrid model, a combination of the manifold learning Principal Components (PC) technique and traditional multiple regression (PC-regression), for short- and medium-term forecasting of daily, aggregated, day-ahead, electricity system-wide load in the Greek Electricity Market for the period 2004–2014. PC-regression is shown to effectively capture the intraday, intraweek and annual patterns of load. We compare our model with a number of classical statistical approaches (Holt-Winters exponential smoothing and its Error-Trend-Seasonal (ETS) generalizations, and the Seasonal Autoregressive Integrated Moving Average with eXogenous variables (SARIMAX) model) as well as with the more sophisticated artificial intelligence models, Artificial Neural Networks (ANN) and Support Vector Machines (SVM). Using a number of criteria for measuring the quality of the generated in- and out-of-sample forecasts, we conclude that the forecasts of our hybrid model outperform those generated by the other models, with the SARIMAX model being the next best performing approach, giving comparable results. Our approach contributes to studies aimed at providing more accurate and reliable load forecasting, a prerequisite for the efficient management of modern power systems.
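
    A compact sketch of the PC-regression idea on synthetic load data: correlated lagged-load features are projected onto a few principal components, and ordinary least squares is fitted on the scores. The lag count, component count and the toy series are assumptions for illustration, not the paper's configuration.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(2)
        # Synthetic daily load with weekly and annual cycles (stand-in data).
        days = np.arange(3000)
        load = (100 + 10 * np.sin(2 * np.pi * days / 7)
                    + 20 * np.sin(2 * np.pi * days / 365.25)
                    + rng.normal(0, 2, days.size))

        # Features: the previous 14 daily loads; target: next-day load.
        lags = 14
        X = np.stack([load[i - lags:i] for i in range(lags, load.size)])
        y = load[lags:]

        # PC-regression: project the correlated lag features onto a few
        # principal components, then fit least squares on the scores.
        model = make_pipeline(PCA(n_components=5), LinearRegression())
        model.fit(X[:-365], y[:-365])            # hold out the final year
        print("out-of-sample R^2:", model.score(X[-365:], y[-365:]))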

  3. Principal Component Analysis In Radar Polarimetry

    Directory of Open Access Journals (Sweden)

    A. Danklmayer

    2005-01-01

    Full Text Available Second order moments of multivariate (often Gaussian) joint probability density functions can be described by the covariance or normalised correlation matrices or by the Kennaugh matrix (Kronecker matrix). In Radar Polarimetry the application of the covariance matrix is known as target decomposition theory, which is a special application of the extremely versatile Principal Component Analysis (PCA). The basic idea of PCA is to convert a data set consisting of correlated random variables into a new set of uncorrelated variables and to order the new variables according to the value of their variances. It is important to stress that uncorrelatedness does not necessarily mean independence, which is used in the much stronger concept of Independent Component Analysis (ICA). Both concepts agree for multivariate Gaussian distribution functions, which represent the most random and least structured distributions. In this contribution, we propose a new approach to applying the concept of PCA to Radar Polarimetry. New uncorrelated random variables are introduced by means of linear transformations with well-determined loading coefficients. This, in turn, allows the decomposition of the original random backscattering target variables into three point targets with new uncorrelated random variables whose variances agree with the eigenvalues of the covariance matrix. This allows a new interpretation of existing decomposition theorems.

  4. Extracting functional components of neural dynamics with Independent Component Analysis and inverse Current Source Density.

    Science.gov (United States)

    Lęski, Szymon; Kublik, Ewa; Swiejkowski, Daniel A; Wróbel, Andrzej; Wójcik, Daniel K

    2010-12-01

    Local field potentials have good temporal resolution but are blurred due to the slow spatial decay of the electric field. For simultaneous recordings on regular grids one can reconstruct efficiently the current sources (CSD) using the inverse Current Source Density method (iCSD). It is possible to decompose the resultant spatiotemporal information about the current dynamics into functional components using Independent Component Analysis (ICA). We show on test data modeling recordings of evoked potentials on a grid of 4 × 5 × 7 points that meaningful results are obtained with spatial ICA decomposition of reconstructed CSD. The components obtained through decomposition of CSD are better defined and allow easier physiological interpretation than the results of similar analysis of corresponding evoked potentials in the thalamus. We show that spatiotemporal ICA decompositions can perform better for certain types of sources but it does not seem to be the case for the experimental data studied. Having found the appropriate approach to decomposing neural dynamics into functional components we use the technique to study the somatosensory evoked potentials recorded on a grid spanning a large part of the forebrain. We discuss two example components associated with the first waves of activation of the somatosensory thalamus. We show that the proposed method brings up new, more detailed information on the time and spatial location of specific activity conveyed through various parts of the somatosensory thalamus in the rat.
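
    The following is a generic sketch of the ICA step with scikit-learn's FastICA on synthetic multichannel data: two independent time courses are mixed into many channels and then unmixed, yielding per-component time courses and spatial maps. It illustrates ICA decomposition in general, not the iCSD reconstruction or the paper's spatial-ICA variant.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(3)
        # Synthetic stand-in for reconstructed CSD: two independent source
        # time courses mixed into 20 spatial channels.
        t = np.linspace(0, 1, 1000)
        sources = np.stack([np.sign(np.sin(2 * np.pi * 7 * t)),   # square wave
                            np.sin(2 * np.pi * 13 * t)])          # sine wave
        mixing = rng.normal(size=(20, 2))
        X = (mixing @ sources).T + 0.05 * rng.normal(size=(1000, 20))

        ica = FastICA(n_components=2, random_state=0)
        recovered = ica.fit_transform(X)     # estimated source time courses
        spatial_maps = ica.mixing_           # one spatial map per component
        print(recovered.shape, spatial_maps.shape)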

  5. Improving the robustness of a partial least squares (PLS) model based on pure component selectivity analysis and range optimization: Case study for the analysis of an etching solution containing hydrogen peroxide

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Youngbok [Department of Chemistry, College of Natural Sciences, Hanyang University Haengdang-Dong, Seoul 133-791 (Korea, Republic of); Chung, Hoeil [Department of Chemistry, College of Natural Sciences, Hanyang University Haengdang-Dong, Seoul 133-791 (Korea, Republic of)]. E-mail: hoeil@hanyang.ac.kr; Arnold, Mark A. [Optical Science and Technology Center and Department of Chemistry, University of Iowa, Iowa City, IA 52242 (United States)

    2006-07-14

    Pure component selectivity analysis (PCSA) was successfully utilized to enhance the robustness of a partial least squares (PLS) model by examining the selectivity of a given component with respect to the other components. The samples used in this study were composed of NH₄OH, H₂O₂ and H₂O, a popular etchant solution in the electronics industry. Corresponding near-infrared (NIR) spectra (9000–7500 cm⁻¹) were used to build the PLS models. The selective determination of H₂O₂ without influences from NH₄OH and H₂O was a key issue, since its molecular structure is similar to that of H₂O, and NH₄OH also has a hydroxyl functional group. The best spectral ranges for the determination of NH₄OH and H₂O₂ were found with the use of moving window PLS (MW-PLS), and the corresponding selectivity was examined by pure component selectivity analysis. The PLS calibration for NH₄OH was free from interferences from the other components due to the presence of its unique NH absorption bands. Since the spectral variation from H₂O₂ was broadly overlapping and much less distinct than that from NH₄OH, the selectivity and prediction performance of the H₂O₂ calibration varied sensitively depending on the spectral ranges and number of factors used. PCSA, based on the comparison between regression vectors from PLS and the net analyte signal (NAS), was an effective method to prevent over-fitting of the H₂O₂ calibration. A robust H₂O₂ calibration model with minimal interferences from other components was developed. PCSA should be included as a standard method in PLS calibrations where prediction error alone is the usual measure of performance.
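
    A minimal sketch of the moving-window PLS step on synthetic "spectra", assuming scikit-learn: candidate wavenumber windows are scored by cross-validated R² and the best window is reported. The band location, window width and step are invented, and the PCSA/NAS comparison itself is not implemented here.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(4)
        # Synthetic spectra: 200 samples x 150 channels where only channels
        # 60-90 carry the analyte signal (a stand-in for the H2O2 band).
        n, p = 200, 150
        conc = rng.uniform(0, 1, n)
        X = rng.normal(0, 0.1, (n, p))
        X[:, 60:90] += np.outer(conc, np.hanning(30))

        # Moving-window PLS: score candidate windows by cross-validated R^2.
        best = None
        for start in range(0, p - 30, 10):
            window = slice(start, start + 30)
            r2 = cross_val_score(PLSRegression(n_components=3),
                                 X[:, window], conc, cv=5).mean()
            if best is None or r2 > best[0]:
                best = (r2, start)
        print(f"best window starts at channel {best[1]} (R^2 = {best[0]:.2f})")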

  6. BWR Refill-Reflood Program, Task 4.7 - model development: TRAC-BWR component models

    International Nuclear Information System (INIS)

    Cheung, Y.K.; Parameswaran, V.; Shaug, J.C.

    1983-09-01

    TRAC (Transient Reactor Analysis Code) is a computer code for best-estimate analysis of the thermal-hydraulic conditions in a reactor system. This report describes the development and assessment of the BWR component models, developed under the Refill/Reflood Program, that are necessary to structure a BWR version of TRAC. These component models are the jet pump, steam separator, steam dryer, two-phase level tracking model, and upper-plenum mixing model. These models have been implemented into TRAC-B02. A single-channel option has also been developed for individual fuel-channel analysis following a system-response calculation.

  7. Independent component analysis in non-hypothesis driven metabolomics

    DEFF Research Database (Denmark)

    Li, Xiang; Hansen, Jakob; Zhao, Xinjie

    2012-01-01

    In a non-hypothesis-driven metabolomics approach, plasma samples collected at six different time points (before, during and after an exercise bout) were analyzed by gas chromatography-time of flight mass spectrometry (GC-TOF MS). Since independent component analysis (ICA) does not need a priori information on the investigated process and moreover can separate statistically independent source signals with non-Gaussian distributions, we aimed to elucidate the analytical power of ICA for metabolic pattern analysis and the identification of key metabolites in this exercise study. A novel approach based on descriptive statistics was established to optimize the ICA model. In the GC-TOF MS data set, the number of principal components after whitening and the number of independent components of ICA were optimized and systematically selected by descriptive statistics. The elucidated dominating independent…

  8. Models for describing the thermal characteristics of building components

    DEFF Research Database (Denmark)

    Jimenez, M.J.; Madsen, Henrik

    2008-01-01

    …, for example. For the analysis of these tests, dynamic analysis models and methods are required. However, a wide variety of models and methods exists, and the problem of choosing the most appropriate approach for each particular case is a non-trivial and interdisciplinary task. Knowledge of a large family of these approaches may therefore be very useful for selecting a suitable approach for each particular case. This paper presents an overview of models that can be applied for modelling the thermal characteristics of buildings and building components using data from outdoor testing. The choice of approach depends… The characteristics of each type of model are highlighted. Some available software tools for each of the methods described are mentioned. A case study demonstrating the difference between linear and nonlinear models is also considered.

  9. Efficient training of multilayer perceptrons using principal component analysis

    International Nuclear Information System (INIS)

    Bunzmann, Christoph; Urbanczik, Robert; Biehl, Michael

    2005-01-01

    A training algorithm for multilayer perceptrons is discussed and studied in detail, which relates to the technique of principal component analysis. The latter is performed with respect to a correlation matrix computed from the example inputs and their target outputs. Typical properties of the training procedure are investigated by means of a statistical physics analysis in models of learning regression and classification tasks. We demonstrate that the procedure requires far fewer examples for good generalization than traditional online training. For networks with a large number of hidden units we derive the training prescription which achieves, within our model, the optimal generalization behavior.

  10. Probabilistic methods in nuclear power plant component ageing analysis

    International Nuclear Information System (INIS)

    Simola, K.

    1992-03-01

    Nuclear power plant ageing research aims to ensure that plant safety and reliability are maintained at the desired level throughout the designed, and possibly extended, lifetime. In ageing studies, the reliability of components, systems and structures is evaluated taking into account the possible time-dependent decrease in reliability. The results of the analyses can be used in the evaluation of the remaining lifetime of components and in the development of preventive maintenance, testing and replacement programmes. The report discusses the use of probabilistic models in evaluating the ageing of nuclear power plant components. The principles of nuclear power plant ageing studies are described and examples of ageing management programmes in foreign countries are given. The use of time-dependent probabilistic models to evaluate the ageing of various components and structures is described, and the application of the models is demonstrated with two case studies. In the case study of motor-operated closing valves, the analyses are based on failure data obtained from a power plant. In the second example, environmentally assisted crack growth is modelled with a computer code developed in the United States, and the applicability of the model is evaluated on the basis of operating experience.

  11. Group-wise Principal Component Analysis for Exploratory Data Analysis

    NARCIS (Netherlands)

    Camacho, J.; Rodriquez-Gomez, Rafael A.; Saccenti, E.

    2017-01-01

    In this paper, we propose a new framework for matrix factorization based on Principal Component Analysis (PCA) where sparsity is imposed. The structure to impose sparsity is defined in terms of groups of correlated variables found in correlation matrices or maps. The framework is based on three new…

  12. Thermogravimetric analysis of combustible waste components

    DEFF Research Database (Denmark)

    Munther, Anette; Wu, Hao; Glarborg, Peter

    In order to gain fundamental knowledge about the co-combustion of coal and waste-derived fuels, the pyrolytic behaviors of coal, four typical waste components and their mixtures have been studied by a simultaneous thermal analyzer (STA). The investigated waste components were wood, paper, polypro…

  13. Prestudy - Development of trend analysis of component failure

    International Nuclear Information System (INIS)

    Poern, K.

    1995-04-01

    The Bayesian trend analysis model that has been used for the computation of initiating event intensities (I-book) is based on the number of events that have occurred during consecutive time intervals. The model itself is a Poisson process with time-dependent intensity. For the analysis of aging it is often more relevant to use times between failures for a given component as input, where by 'time' is meant a quantity that best characterizes the age of the component (calendar time, operating time, number of activations, etc.). Therefore, it has been considered necessary to extend the model and the computer code to allow trend analysis of times between events, and also of several sequences of times between events. This report describes this model extension as well as an application to an introductory ageing analysis of centrifugal pumps defined in Table 5 of the T-book. The application in turn directs attention to the need for further development of both the trend model and the data base. Figs.
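
    As a hedged illustration of trend analysis on event times, the sketch below fits a classical power-law (Crow-AMSAA) non-homogeneous Poisson process by maximum likelihood to made-up failure times; a shape parameter above one indicates an increasing failure intensity, i.e. ageing. This frequentist sketch is not the report's Bayesian model.

        import numpy as np

        # Power-law NHPP: intensity w(t) = (b/a) * (t/a)**(b - 1).
        # Closed-form MLEs from failure times t_1..t_n observed on (0, T].
        times = np.array([0.8, 2.1, 3.0, 4.4, 5.1, 5.9, 6.3, 6.8])  # made-up
        T = 7.0
        n = times.size
        b_hat = n / np.sum(np.log(T / times))
        a_hat = T / n ** (1.0 / b_hat)
        print(f"shape b = {b_hat:.2f} (b > 1 indicates increasing intensity)")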

  14. Empirical validation of the triple-code model of numerical processing for complex math operations using functional MRI and group Independent Component Analysis of the mental addition and subtraction of fractions.

    Science.gov (United States)

    Schmithorst, Vincent J; Brown, Rhonda Douglas

    2004-07-01

    The suitability of a previously hypothesized triple-code model of numerical processing, involving analog magnitude, auditory verbal, and visual Arabic codes of representation, was investigated for the complex mathematical task of the mental addition and subtraction of fractions. Functional magnetic resonance imaging (fMRI) data from 15 normal adult subjects were processed using exploratory group Independent Component Analysis (ICA). Separate task-related components were found with activation in bilateral inferior parietal, left perisylvian, and ventral occipitotemporal areas. These results support the hypothesized triple-code model corresponding to the activated regions found in the individual components and indicate that the triple-code model may be a suitable framework for analyzing the neuropsychological bases of the performance of complex mathematical tasks. Copyright 2004 Elsevier Inc.

  15. Modeling cellular networks in fading environments with dominant specular components

    KAUST Repository

    AlAmmouri, Ahmad

    2016-07-26

    Stochastic geometry (SG) has been widely accepted as a fundamental tool for modeling and analyzing cellular networks. However, the fading models used with SG analysis are mainly confined to the simplistic Rayleigh fading, which is extended to Nakagami-m fading in some special cases; neither the Rayleigh nor the Nakagami-m model accounts for dominant specular components (DSCs), which may appear in realistic fading channels. In this paper, we present a tractable model for cellular networks with a generalized two-ray (GTR) fading channel. The GTR fading explicitly accounts for two DSCs in addition to the diffuse components and offers high flexibility to capture the diverse fading channels that appear in realistic outdoor/indoor wireless communication scenarios. It also encompasses the famous Rayleigh and Rician fading as special cases. To this end, the prominent effect of DSCs is highlighted in terms of average spectral efficiency. © 2016 IEEE.
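
    A small sketch of sampling from a generalized two-ray channel, assuming the common formulation with two specular rays of fixed amplitude and independent uniform phases plus a diffuse Gaussian term; the amplitudes and diffuse power are made-up values, and the Rician and Rayleigh special cases follow by zeroing rays.

        import numpy as np

        rng = np.random.default_rng(5)
        # GTR fading sketch: two dominant specular components with amplitudes
        # V1, V2 and independent uniform phases, plus a diffuse component of
        # power 2*sigma**2.
        V1, V2, sigma = 1.0, 0.7, 0.3          # illustrative parameters
        N = 100_000
        phi1 = rng.uniform(0, 2 * np.pi, N)
        phi2 = rng.uniform(0, 2 * np.pi, N)
        diffuse = sigma * (rng.normal(size=N) + 1j * rng.normal(size=N))
        h = V1 * np.exp(1j * phi1) + V2 * np.exp(1j * phi2) + diffuse

        # Setting V2 = 0 recovers Rician fading; V1 = V2 = 0 gives Rayleigh.
        print("mean channel power:", np.mean(np.abs(h) ** 2))
        # expected: V1**2 + V2**2 + 2*sigma**2 = 1.67 here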

  16. Source Signals Separation and Reconstruction Following Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    WANG Cheng

    2014-02-01

    Full Text Available For the problem of separating and reconstructing source signals from observed signals, the physical meaning of the blind source separation model and of independent component analysis is not very clear, and their solutions are not unique. To address these disadvantages, a new linear instantaneous mixing model and a novel method for separating and reconstructing source signals from observed signals based on principal component analysis (PCA) are put forward. The assumption of this new model is that the source signals are statistically uncorrelated rather than independent, which differs from the traditional blind source separation model. A one-to-one relationship between the linear instantaneous mixing matrix of the new model and the linear compound matrix of PCA, and a one-to-one relationship between the uncorrelated source signals and the principal components, are demonstrated using the concept of a linear separation matrix and the uncorrelatedness of the source signals. Based on this theoretical link, the source signal separation and reconstruction problem is then turned into a PCA of the observed signals. The theoretical derivation and numerical simulation results show that, despite Gaussian measurement noise, both the waveform and the amplitude information of an uncorrelated source signal can be separated and reconstructed by PCA when the linear mixing matrix is column-orthogonal and normalized; only the waveform information can be separated and reconstructed when the mixing matrix is column-orthogonal but not normalized; and source signals cannot be separated and reconstructed by PCA when the mixing matrix is not column-orthogonal or the mixing is not linear.
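
    A quick numerical check of the column-orthogonal case, with synthetic uncorrelated sources of unequal variance mixed by a rotation matrix: PCA on the observations recovers the sources up to sign and ordering. The signals and noise level are invented for illustration.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(6)
        t = np.linspace(0, 1, 2000)
        # Two uncorrelated sources with different variances (stand-in signals).
        S = np.stack([2.0 * np.sin(2 * np.pi * 5 * t),
                      0.5 * np.sign(np.sin(2 * np.pi * 11 * t))], axis=1)

        # Column-orthogonal, normalized mixing matrix (a rotation): PCA can
        # then recover waveform and amplitude, up to sign and ordering.
        theta = 0.6
        A = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        X = S @ A.T + 0.01 * rng.normal(size=(t.size, 2))

        S_hat = PCA(n_components=2).fit_transform(X)
        # Cross-correlate recovered components with the true sources:
        # entries near +/-1 indicate successful separation.
        corr = np.corrcoef(S_hat.T, S.T)[:2, 2:]
        print(np.round(corr, 2))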

  17. Preliminary study of soil permeability properties using principal component analysis

    Science.gov (United States)

    Yulianti, M.; Sudriani, Y.; Rustini, H. A.

    2018-02-01

    Soil permeability measurement is undoubtedly important in carrying out soil-water research such as rainfall-runoff modelling, irrigation water distribution systems, etc. It is also known that acquiring reliable soil permeability data is rather laborious, time-consuming, and costly. Therefore, it is desirable to develop a prediction model. Several studies of empirical equations for predicting permeability have been undertaken by many researchers. These studies derived their models from areas whose soil characteristics differ from those of Indonesian soils, which suggests the possibility that these permeability models are site-specific. The purpose of this study is to identify which soil parameters correspond most strongly to soil permeability and to propose a preliminary model for permeability prediction. Principal component analysis (PCA) was applied to 16 parameters analysed from 37 sites, consisting of 91 samples obtained from the Batanghari Watershed. The findings indicated five variables that have a strong correlation with soil permeability, and we recommend a preliminary permeability model with potential for further development.

  18. Parameter estimation of component reliability models in PSA model of Krsko NPP

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Vrbanic, I.

    2001-01-01

    In this paper, the uncertainty analysis of component reliability models for independent failures is shown. The present approach to parameter estimation of component reliability models in NPP Krsko is presented. Mathematical approaches for different types of uncertainty analysis are introduced and used in accordance with some predisposed requirements. Results of the uncertainty analyses are shown in an example for time-related components. The most appropriate uncertainty analysis proved to be Bayesian estimation with numerical estimation of the posterior, which can be approximated by an appropriate probability distribution, in this paper the lognormal distribution. (author)

  19. Analysis of failed nuclear plant components

    Science.gov (United States)

    Diercks, D. R.

    1993-12-01

    Argonne National Laboratory has conducted analyses of failed components from nuclear power-generating stations since 1974. The considerations involved in working with and analyzing radioactive components are reviewed here, and the decontamination of these components is discussed. Analyses of four failed components from nuclear plants are then described to illustrate the kinds of failures seen in service. The failures discussed are (1) intergranular stress-corrosion cracking of core spray injection piping in a boiling water reactor, (2) failure of canopy seal welds in adapter tube assemblies in the control rod drive head of a pressurized water reactor, (3) thermal fatigue of a recirculation pump shaft in a boiling water reactor, and (4) failure of pump seal wear rings by nickel leaching in a boiling water reactor.

  20. Analysis of failed nuclear plant components

    International Nuclear Information System (INIS)

    Diercks, D.R.

    1993-01-01

    Argonne National Laboratory has conducted analyses of failed components from nuclear power-generating stations since 1974. The considerations involved in working with and analyzing radioactive components are reviewed here, and the decontamination of these components is discussed. Analyses of four failed components from nuclear plants are then described to illustrate the kinds of failures seen in service. The failures discussed are (1) intergranular stress-corrosion cracking of core spray injection piping in a boiling water reactor, (2) failure of canopy seal welds in adapter tube assemblies in the control rod drive head of a pressurized water reactor, (3) thermal fatigue of a recirculation pump shaft in a boiling water reactor, and (4) failure of pump seal wear rings by nickel leaching in a boiling water reactor.

  1. Analysis of failed nuclear plant components

    International Nuclear Information System (INIS)

    Diercks, D.R.

    1992-07-01

    Argonne National Laboratory has conducted analyses of failed components from nuclear power generating stations since 1974. The considerations involved in working with and analyzing radioactive components are reviewed here, and the decontamination of these components is discussed. Analyses of four failed components from nuclear plants are then described to illustrate the kinds of failures seen in service. The failures discussed are (a) intergranular stress corrosion cracking of core spray injection piping in a boiling water reactor, (b) failure of canopy seal welds in adapter tube assemblies in the control rod drive head of a pressurized water reactor, (c) thermal fatigue of a recirculation pump shaft in a boiling water reactor, and (d) failure of pump seal wear rings by nickel leaching in a boiling water reactor.

  2. Formal Model-Driven Engineering: Generating Data and Behavioural Components

    Directory of Open Access Journals (Sweden)

    Chen-Wei Wang

    2012-12-01

    Full Text Available Model-driven engineering is the automatic production of software artefacts from abstract models of structure and functionality. By targeting a specific class of system, it is possible to automate aspects of the development process, using model transformations and code generators that encode domain knowledge and implementation strategies. Using this approach, questions of correctness for a complex, software system may be answered through analysis of abstract models of lower complexity, under the assumption that the transformations and generators employed are themselves correct. This paper shows how formal techniques can be used to establish the correctness of model transformations used in the generation of software components from precise object models. The source language is based upon existing, formal techniques; the target language is the widely-used SQL notation for database programming. Correctness is established by giving comparable, relational semantics to both languages, and checking that the transformations are semantics-preserving.

  3. Public health component in building information modeling

    Science.gov (United States)

    Trufanov, A. I.; Rossodivita, A.; Tikhomirov, A. A.; Berestneva, O. G.; Marukhina, O. V.

    2018-05-01

    The building information modelling (BIM) concept has established itself as an effective and practical approach to planning, designing, constructing, and managing buildings and infrastructure. Analysis of the governance literature has shown that BIM-based tools do not fully take into account the growing demands of the ecology and health fields. In this connection, it is possible to offer an optimal way of adapting such tools to the necessary consideration of the sanitary and hygienic specifications of materials used in the construction industry. It is proposed to do this through the introduction of assessments that meet the requirements of national sanitary standards. This approach was demonstrated in a case study of the Revit® program.

  4. A radiographic analysis of implant component misfit.

    LENUS (Irish Health Repository)

    Sharkey, Seamus

    2011-07-01

    Radiographs are commonly used to assess the fit of implant components, but there is no clear agreement on the amount of misfit that can be detected by this method. This study investigated the effect of gap size and the relative angle at which a radiograph was taken on the detection of component misfit. Different types of implant connections (internal or external) and radiographic modalities (film or digital) were assessed.

  5. Accurate modeling of UV written waveguide components

    DEFF Research Database (Denmark)

    Svalgaard, Mikael

    BPM simulation results of UV written waveguide components that are indistinguishable from measurements can be achieved on the basis of trajectory scan data and an equivalent step index profile that is very easy to measure.

  6. Accurate modelling of UV written waveguide components

    DEFF Research Database (Denmark)

    Svalgaard, Mikael

    BPM simulation results of UV written waveguide components that are indistinguishable from measurements can be achieved on the basis of trajectory scan data and an equivalent step index profile that is very easy to measure.

  7. New approaches to the modelling of multi-component fuel droplet heating and evaporation

    KAUST Repository

    Sazhin, Sergei S

    2015-02-25

    The previously suggested quasi-discrete model for heating and evaporation of complex multi-component hydrocarbon fuel droplets is described. The dependence of density, viscosity, heat capacity and thermal conductivity of liquid components on carbon number n and temperature is taken into account. The effects of temperature gradient and quasi-component diffusion inside droplets are taken into account. The analysis is based on the Effective Thermal Conductivity/Effective Diffusivity (ETC/ED) model. This model is applied to the analysis of Diesel and gasoline fuel droplet heating and evaporation. The components with relatively close n are replaced by quasi-components with properties calculated as average properties of the a priori defined groups of actual components. Thus the analysis of the heating and evaporation of droplets consisting of many components is replaced with the analysis of the heating and evaporation of droplets consisting of relatively few quasi-components. It is demonstrated that the predictions of the model based on five quasi-components are almost indistinguishable from the predictions of the model based on twenty quasi-components for Diesel fuel droplets, and are very close to the predictions of the model based on thirteen quasi-components for gasoline fuel droplets. It is recommended that in both Diesel and gasoline spray combustion modelling, the analysis of droplet heating and evaporation be based on as few as five quasi-components.

  8. Applying Least Absolute Shrinkage Selection Operator and Akaike Information Criterion Analysis to Find the Best Multiple Linear Regression Models between Climate Indices and Components of Cow's Milk.

    Science.gov (United States)

    Marami Milani, Mohammad Reza; Hense, Andreas; Rahmani, Elham; Ploeger, Angelika

    2016-07-23

    This study focuses on multiple linear regression models relating six climate indices (temperature-humidity index THI, environmental stress index ESI, equivalent temperature index ETI, heat load index HLI, modified HLI (HLI_new), and respiratory rate predictor RRP) with three main components of cow's milk (yield, fat, and protein) for cows in Iran. The least absolute shrinkage selection operator (LASSO) and the Akaike information criterion (AIC) techniques are applied to select the best model for milk predictands with the smallest number of climate predictors. Uncertainty estimation is performed by bootstrapping through resampling. Cross-validation is used to avoid over-fitting. Climatic parameters are calculated from the NASA-MERRA global atmospheric reanalysis. Milk data for the months from April to September, 2002 to 2010 are used. The best linear regression models are found in spring between milk yield as the predictand and THI, ESI, ETI, HLI, and RRP as predictors, with p-value < 0.001 and R² of 0.50 and 0.49, respectively. In summer, milk yield with independent variables of THI, ETI, and ESI shows the strongest relation (p-value < 0.001) with R² of 0.69. For fat and protein the results are only marginal. This method is suggested for studies of the impact of climate variability/change in the agriculture and food science fields when short time series or data with large uncertainty are available.
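
    As a minimal sketch (not the authors' code) of the selection step described above, the snippet below applies LASSO with an AIC-chosen penalty and cross-validation to synthetic stand-ins for the climate predictors and the milk-yield response.

```python
import numpy as np
from sklearn.linear_model import LassoLarsIC
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))          # stand-ins for THI, ESI, ETI, HLI, HLI_new, RRP
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.5, size=120)  # synthetic yield

X_std = StandardScaler().fit_transform(X)
model = LassoLarsIC(criterion="aic").fit(X_std, y)   # AIC picks the penalty strength
print("selected coefficients:", model.coef_)         # near-zero entries are dropped
# Cross-validation guards against over-fitting, as the study recommends.
print("CV R^2:", cross_val_score(LassoLarsIC(criterion="aic"), X_std, y, cv=5).mean())
```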

  9. Fetal source extraction from magnetocardiographic recordings by dependent component analysis

    Energy Technology Data Exchange (ETDEWEB)

    Araujo, Draulio B de [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Barros, Allan Kardec [Department of Electrical Engineering, Federal University of Maranhao, Sao Luis, Maranhao (Brazil); Estombelo-Montesco, Carlos [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Zhao, Hui [Department of Medical Physics, University of Wisconsin, Madison, WI (United States); Filho, A C Roque da Silva [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Baffa, Oswaldo [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Wakai, Ronald [Department of Medical Physics, University of Wisconsin, Madison, WI (United States); Ohnishi, Noboru [Department of Information Engineering, Nagoya University (Japan)

    2005-10-07

    Fetal magnetocardiography (fMCG) has been extensively reported in the literature as a non-invasive, prenatal technique that can be used to monitor various functions of the fetal heart. However, fMCG signals often have low signal-to-noise ratio (SNR) and are contaminated by strong interference from the mother's magnetocardiogram signal. A promising, efficient tool for extracting signals, even under low SNR conditions, is blind source separation (BSS), or independent component analysis (ICA). Herein we propose an algorithm based on a variation of ICA, where the signal of interest is extracted using a time delay obtained from an autocorrelation analysis. We model the system using autoregression, and identify the signal component of interest from the poles of the autocorrelation function. We show that the method is effective in removing the maternal signal, and is computationally efficient. We also compare our results to more established ICA methods, such as FastICA.
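
    For illustration only, the sketch below separates a weak "fetal" source from a dominant "maternal" one in synthetic two-channel mixtures using plain FastICA, the established baseline the authors compare against (their own algorithm additionally exploits an autocorrelation time delay).

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 4000)
maternal = np.sign(np.sin(2 * np.pi * 1.2 * t))      # strong, slower maternal beat
fetal = 0.2 * np.sign(np.sin(2 * np.pi * 2.4 * t))   # weak, faster fetal beat
S = np.c_[maternal, fetal] + 0.05 * rng.normal(size=(t.size, 2))

A = np.array([[1.0, 0.6], [0.8, 1.0]])               # unknown sensor mixing matrix
X = S @ A.T                                          # observed channel data
sources = FastICA(n_components=2, random_state=0).fit_transform(X)
# `sources` recovers the maternal and fetal signals up to order and scale.
```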

  10. Robust LOD scores for variance component-based linkage analysis.

    Science.gov (United States)

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  11. Principal component analysis of psoriasis lesions images

    DEFF Research Database (Denmark)

    Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær

    2003-01-01

    A set of RGB images of psoriasis lesions is used. By visual examination of these images, there seems to be no common pattern that could be used to find and align the lesions within and between sessions. It is expected that the principal components of the original images could be useful during future...

  12. Multi-spectrometer calibration transfer based on independent component analysis.

    Science.gov (United States)

    Liu, Yan; Xu, Hao; Xia, Zhenzhen; Gong, Zhiyong

    2018-02-26

    Calibration transfer is indispensable for practical applications of near infrared (NIR) spectroscopy due to the need for precise and consistent measurements across different spectrometers. In this work, a method for multi-spectrometer calibration transfer is described based on independent component analysis (ICA). A spectral matrix is first obtained by aligning the spectra measured on different spectrometers. Then, by using independent component analysis, the aligned spectral matrix is decomposed into the mixing matrix and the independent components of the different spectrometers. The differing measurements between spectrometers can then be standardized by correcting the coefficients within the independent components. Two NIR datasets of corn and edible oil samples measured with three and four spectrometers, respectively, were used to test the reliability of this method. The results for both datasets reveal that spectral measurements across different spectrometers can be transferred simultaneously, and that partial least squares (PLS) models built with the measurements from one spectrometer can correctly predict spectra transferred from another.

  13. Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo

    KAUST Repository

    Martinez, Josue G.

    2010-06-01

    The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented.

  14. Project of integrity assessment of flawed components with structural discontinuity (IAF). Data book for residual stress analysis in weld joint. Analysis model of dissimilar metal weld joint applied post weld heat treatment (PWHT)

    International Nuclear Information System (INIS)

    2012-12-01

    The project of Integrity Assessment of Flawed Components with Structural Discontinuity (IAF) was entrusted to the Japan Power Engineering and Inspection Corporation (JAPEIC) by the Nuclear and Industrial Safety Agency (NISA) and started in FY 2001. It was then taken over by the Japan Nuclear Energy Safety Organization (JNES), which was established in October 2003, and carried out until FY 2007. In the IAF project, weld joints between nickel-based alloys and low alloy steels around penetrations in the reactor vessel, safe-ends of nozzles and shroud supports were selected from among the components and pipe arrangements in nuclear power plants where high residual stresses are generated due to welding and complex structure. Residual stresses around the weld joints were estimated by the finite element analysis method (FEM) with a general modeling method, and the reasonability and conservativeness were evaluated. In addition, for a postulated surface crack of stress corrosion cracking (SCC), a simple calculation method of the stress intensity factor (K) required to estimate crack growth was proposed and its effectiveness was confirmed. JNES compiled the results of the IAF project into Data Books of Residual Stress Analysis of Weld Joint and a Data Book of Simplified Stress Intensity Factor Calculation for Penetration of Reactor as typical Structure Discontinuity. Data Books of Residual Stress Analysis in Weld Joint: 1. Butt Weld Joint of Small Diameter Cylinder (4B Sch40) (JNES-RE-2012-0005), 2. Dissimilar Metal Weld Joint in Safe End (One-Side Groove Joint) (JNES-RE-2012-0006), 3. Dissimilar Metal Weld Joint in Safe End (Large Diameter Both-Side Groove Joint) (JNES-RE-2012-0007), 4. Weld Joint around Penetrations in Reactor Vessel (Insert Joint) (JNES-RE-2012-0008), 5. Weld Joint in Shroud Support (H8, H9, H10 and H11 Welds) (JNES-RE-2012-0009), 6. Analysis Model of Dissimilar Metal Weld Joint Applied Post Weld Heat Treatment (PWHT) (JNES-RE-2012-0010). Data Book of

  15. EXAFS and principal component analysis : a new shell game

    International Nuclear Information System (INIS)

    Wasserman, S.

    1998-01-01

    The use of principal component (factor) analysis in the analysis of EXAFS spectra is described. The components derived from EXAFS spectra share mathematical properties with the original spectra. As a result, the abstract components can be analyzed using standard EXAFS methodology to yield the bond distances and other coordination parameters. The number of components that must be analyzed is usually less than the number of original spectra. The method is demonstrated using a series of spectra from aqueous solutions of uranyl ions.
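
    A minimal sketch of the first step: given a series of spectra, the cumulative explained variance from PCA indicates how many abstract components must be analyzed. The spectra below are synthetic two-component mixtures, not EXAFS data.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for a spectral series: each spectrum is a mixture of two
# underlying signals plus noise, so PCA should report two components.
rng = np.random.default_rng(0)
k = np.linspace(2, 12, 400)
comp1, comp2 = np.sin(2 * k) / k, np.sin(3.1 * k + 0.5) / k
f1, f2 = rng.uniform(0.2, 1, 10), rng.uniform(0.2, 1, 10)
spectra = np.outer(f1, comp1) + np.outer(f2, comp2)
spectra += 0.001 * rng.normal(size=spectra.shape)

cum = np.cumsum(PCA().fit(spectra).explained_variance_ratio_)
print("components needed:", int(np.searchsorted(cum, 0.999) + 1))   # -> 2
```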

  16. Sensitivity analysis of a parameterization of the stomatal component of the DO{sub 3}SE model for Quercus ilex to estimate ozone fluxes

    Energy Technology Data Exchange (ETDEWEB)

    Alonso, Rocio [Ecotoxicology of Air Pollution, CIEMAT, Avenida Complutense 22, 28040 Madrid (Spain)], E-mail: rocio.alonso@ciemat.es; Elvira, Susana [Ecotoxicology of Air Pollution, CIEMAT, Avenida Complutense 22, 28040 Madrid (Spain)], E-mail: susana.elvira@ciemat.es; Sanz, Maria J. [Fundacion CEAM, Charles Darwin 14, 46980 Paterna, Valencia (Spain)], E-mail: mjose@ceam.es; Gerosa, Giacomo [Department of Mathematics and Physics, Universita Cattolica del Sacro Cuore, via Musei 41, 25121 Brescia (Italy)], E-mail: giacomo.gerosa@unicatt.it; Emberson, Lisa D. [Stockholm Environment Institute, University of York, York YO 10 5DD (United Kingdom)], E-mail: lde1@york.ac.uk; Bermejo, Victoria [Ecotoxicology of Air Pollution, CIEMAT, Avenida Complutense 22, 28040 Madrid (Spain)], E-mail: victoria.bermejo@ciemat.es; Gimeno, Benjamin S. [Ecotoxicology of Air Pollution, CIEMAT, Avenida Complutense 22, 28040 Madrid (Spain)], E-mail: benjamin.gimeno@ciemat.es

    2008-10-15

    A sensitivity analysis of a proposed parameterization of the stomatal conductance (g_s) module of the European ozone deposition model (DO3SE) for Quercus ilex was performed. The performance of the model was tested against measured g_s in the field at three sites in Spain. The best fit of the model was found for those sites, or during those periods, facing no or mild stress conditions, but a worse performance was found under severe drought or temperature stress, mostly occurring at continental sites. The best performance was obtained when both f_phen and f_SWP were included. A local parameterization accounting for the lower temperatures recorded in winter and the higher water shortage at the continental sites resulted in a better performance of the model. The overall results indicate that two different parameterizations of the model are needed, one for marine-influenced sites and another one for continental sites. - No redundancy between phenological and water-related modifying functions was found when estimating stomatal behavior of Holm oak.
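
    For orientation, the sketch below shows a generic multiplicative (Jarvis-type) stomatal conductance scheme of the kind DO3SE uses, with invented parameter values; the paper's point is that both f_phen and f_SWP must be included and re-tuned per site type.

```python
# A simplified sketch, not DO3SE's actual parameterization: relative stomatal
# conductance as a product of limiting functions, floored by f_min.
def g_sto(g_max, f_phen, f_light, f_temp, f_vpd, f_swp, f_min=0.02):
    """Stomatal conductance (same units as g_max) under combined limitations."""
    return g_max * f_phen * f_light * max(f_min, f_temp * f_vpd * f_swp)

# e.g. mild spring conditions at a marine-influenced site (illustrative values)
print(g_sto(g_max=180, f_phen=0.9, f_light=0.8, f_temp=0.95, f_vpd=0.85, f_swp=1.0))
```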

  17. Sensitivity analysis of a parameterization of the stomatal component of the DO3SE model for Quercus ilex to estimate ozone fluxes

    International Nuclear Information System (INIS)

    Alonso, Rocio; Elvira, Susana; Sanz, Maria J.; Gerosa, Giacomo; Emberson, Lisa D.; Bermejo, Victoria; Gimeno, Benjamin S.

    2008-01-01

    A sensitivity analysis of a proposed parameterization of the stomatal conductance (g_s) module of the European ozone deposition model (DO3SE) for Quercus ilex was performed. The performance of the model was tested against measured g_s in the field at three sites in Spain. The best fit of the model was found for those sites, or during those periods, facing no or mild stress conditions, but a worse performance was found under severe drought or temperature stress, mostly occurring at continental sites. The best performance was obtained when both f_phen and f_SWP were included. A local parameterization accounting for the lower temperatures recorded in winter and the higher water shortage at the continental sites resulted in a better performance of the model. The overall results indicate that two different parameterizations of the model are needed, one for marine-influenced sites and another one for continental sites. - No redundancy between phenological and water-related modifying functions was found when estimating stomatal behavior of Holm oak.

  18. Use of Human Modeling Simulation Software in the Task Analysis of the Environmental Control and Life Support System Component Installation Procedures

    Science.gov (United States)

    Estes, Samantha; Parker, Nelson C. (Technical Monitor)

    2001-01-01

    Virtual reality and simulation applications are becoming widespread in human task analysis. These programs have many benefits for the Human Factors Engineering field. Not only do creating and using virtual environments for human engineering analyses save money and time, this approach also promotes user experimentation and provides increased quality of analyses. This paper explains the human engineering task analysis performed on the Environmental Control and Life Support System (ECLSS) space station rack and its Distillation Assembly (DA) subsystem using EAI's human modeling simulation software, Jack. When installed on the International Space Station (ISS), ECLSS will provide the life and environment support needed to adequately sustain crew life. The DA is an Orbital Replaceable Unit (ORU) that provides means of wastewater (primarily urine from flight crew and experimental animals) reclamation. Jack was used to create a model of the weightless environment of the ISS Node 3, where the ECLSS is housed. Computer aided drawings of the ECLSS rack and DA system were also brought into the environment. Anthropometric models of a 95th percentile male and 5th percentile female were used to examine the human interfaces encountered during various ECLSS and DA tasks. The results of the task analyses were used in suggesting modifications to hardware and crew task procedures to improve accessibility, conserve crew time, and add convenience for the crew. This paper will address some of those suggested modifications and the method of presenting final analyses for requirements verification.

  19. Traceable components of terrestrial carbon storage capacity in biogeochemical models.

    Science.gov (United States)

    Xia, Jianyang; Luo, Yiqi; Wang, Ying-Ping; Hararuk, Oleksandra

    2013-07-01

    cycle models. Nevertheless, more research is needed to develop tools to decompose NPP and transient dynamics of the modeled carbon cycle into traceable components for structural analysis of land models. © 2013 Blackwell Publishing Ltd.

  20. Incremental Tensor Principal Component Analysis for Handwritten Digit Recognition

    Directory of Open Access Journals (Sweden)

    Chang Liu

    2014-01-01

    Full Text Available To overcome the shortcomings of traditional dimensionality reduction algorithms, incremental tensor principal component analysis (ITPCA) based on an updated-SVD technique is proposed in this paper. This paper proves the relationship between PCA, 2DPCA, MPCA, and the graph embedding framework theoretically, and derives the incremental learning procedures for adding a single sample and multiple samples in detail. Experiments on handwritten digit recognition have demonstrated that ITPCA achieves better recognition performance than vector-based principal component analysis (PCA), incremental principal component analysis (IPCA), and multilinear principal component analysis (MPCA) algorithms. At the same time, ITPCA also has lower time and space complexity.
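
    A vector-based analogue (not the tensor ITPCA of the paper): scikit-learn's IncrementalPCA updates the subspace batch by batch, the same incremental idea the paper extends to tensors via an updated SVD.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import IncrementalPCA

X = load_digits().data                       # 1797 handwritten digits, 64 features
ipca = IncrementalPCA(n_components=16, batch_size=200)
for start in range(0, X.shape[0], 200):      # samples arrive in chunks
    ipca.partial_fit(X[start:start + 200])   # subspace is updated incrementally
X_reduced = ipca.transform(X)
print(X_reduced.shape)                       # (1797, 16)
```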

  1. Aeromagnetic Compensation Algorithm Based on Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Peilin Wu

    2018-01-01

    Full Text Available Aeromagnetic exploration is an important exploration method in geophysics. The data are typically measured by an optically pumped magnetometer mounted on an aircraft, but any aircraft produces significant levels of magnetic interference. Therefore, aeromagnetic compensation is important in aeromagnetic exploration. However, multicollinearity of the aeromagnetic compensation model degrades the performance of the compensation. To address this issue, a novel aeromagnetic compensation method based on principal component analysis is proposed. Using the algorithm, the correlation in the feature matrix is eliminated and the principal components are used to construct the hyperplane that compensates for the platform-generated magnetic fields. The algorithm was tested using a helicopter, and the obtained improvement ratio is 9.86. The compensation quality is almost the same as or slightly better than that of ridge regression. The validity of the proposed method was experimentally demonstrated.
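
    A hedged sketch of the core idea with synthetic data: decorrelate the collinear interference features with PCA before regressing, so multicollinearity no longer degrades the compensation coefficients. Feature construction here is invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
base = rng.normal(size=(5000, 3))                        # e.g. directional cosine terms
features = np.c_[base, base @ rng.normal(size=(3, 13))]  # 16 strongly collinear features
interference = features @ rng.normal(size=16) + rng.normal(scale=0.1, size=5000)

scores = PCA(n_components=3).fit_transform(features)     # decorrelated components
residual = interference - LinearRegression().fit(scores, interference).predict(scores)
# Improvement ratio: uncompensated vs. compensated field standard deviation.
print("improvement ratio:", interference.std() / residual.std())
```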

  2. Variational Bayesian Learning for Wavelet Independent Component Analysis

    Science.gov (United States)

    Roussos, E.; Roberts, S.; Daubechies, I.

    2005-11-01

    In an exploratory approach to data analysis, it is often useful to consider the observations as generated from a set of latent generators or "sources" via a generally unknown mapping. For the noisy overcomplete case, where we have more sources than observations, the problem becomes extremely ill-posed. Solutions to such inverse problems can, in many cases, be achieved by incorporating prior knowledge about the problem, captured in the form of constraints. This setting is a natural candidate for the application of the Bayesian methodology, allowing us to incorporate "soft" constraints in a natural manner. The work described in this paper is mainly driven by problems in functional magnetic resonance imaging of the brain, for the neuro-scientific goal of extracting relevant "maps" from the data. This can be stated as a `blind' source separation problem. Recent experiments in the field of neuroscience show that these maps are sparse, in some appropriate sense. The separation problem can be solved by independent component analysis (ICA), viewed as a technique for seeking sparse components, assuming appropriate distributions for the sources. We derive a hybrid wavelet-ICA model, transforming the signals into a domain where the modeling assumption of sparsity of the coefficients with respect to a dictionary is natural. We follow a graphical modeling formalism, viewing ICA as a probabilistic generative model. We use hierarchical source and mixing models and apply Bayesian inference to the problem. This allows us to perform model selection in order to infer the complexity of the representation, as well as automatic denoising. Since exact inference and learning in such a model is intractable, we follow a variational Bayesian mean-field approach in the conjugate-exponential family of distributions, for efficient unsupervised learning in multi-dimensional settings. The performance of the proposed algorithm is demonstrated on some representative experiments.

  3. Nonlinear Principal Component Analysis Using Strong Tracking Filter

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The paper analyzes the problem of blind source separation (BSS) based on the nonlinear principal component analysis (NPCA) criterion. An adaptive strong tracking filter (STF) based algorithm was developed, which is immune to system model mismatches. Simulations demonstrate that the algorithm converges quickly and has satisfactory steady-state accuracy. The Kalman filtering algorithm and the recursive least-squares type algorithm are shown to be special cases of the STF algorithm. Since the forgetting factor is adaptively updated by adjustment of the Kalman gain, the STF scheme provides more powerful tracking capability than the Kalman filtering and recursive least-squares algorithms.

  4. Advances in independent component analysis and learning machines

    CERN Document Server

    Bingham, Ella; Laaksonen, Jorma; Lampinen, Jouko

    2015-01-01

    In honour of Professor Erkki Oja, one of the pioneers of Independent Component Analysis (ICA), this book reviews key advances in the theory and application of ICA, as well as its influence on signal processing, pattern recognition, machine learning, and data mining. Examples of topics which have developed from the advances of ICA, which are covered in the book are: A unifying probabilistic model for PCA and ICA Optimization methods for matrix decompositions Insights into the FastICA algorithmUnsupervised deep learning Machine vision and image retrieval A review of developments in the t

  5. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.jp; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro [Department of Radiation Oncology and Image-applied Therapy, Kyoto University, 54 Shogoin-Kawaharacho, Sakyo, Kyoto 606-8507 (Japan)

    2016-09-15

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
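
    A minimal sketch of the ANOVA-based estimators for a balanced one-factor random effect model (patients as the random factor, fractions as replicates), with simulated setup errors; the confidence-interval computation described in the note is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_fractions = 20, 10
patient_means = rng.normal(0.0, 1.0, size=(n_patients, 1))   # systematic SD = 1.0
errors = patient_means + rng.normal(0.0, 2.0, size=(n_patients, n_fractions))  # random SD = 2.0

msb = n_fractions * errors.mean(axis=1).var(ddof=1)   # between-patient mean square
msw = errors.var(axis=1, ddof=1).mean()               # within-patient mean square
random_sd = np.sqrt(msw)                              # random component estimate
systematic_sd = np.sqrt(max((msb - msw) / n_fractions, 0.0))  # systematic component
print(f"systematic ~ {systematic_sd:.2f}, random ~ {random_sd:.2f}")
```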

  6. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    International Nuclear Information System (INIS)

    Matsuo, Yukinori; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-01-01

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.

  7. Identifying the Component Structure of Satisfaction Scales by Nonlinear Principal Components Analysis

    NARCIS (Netherlands)

    Manisera, M.; Kooij, A.J. van der; Dusseldorp, E.

    2010-01-01

    The component structure of 14 Likert-type items measuring different aspects of job satisfaction was investigated using nonlinear Principal Components Analysis (NLPCA). NLPCA allows for analyzing these items at an ordinal or interval level. The participants were 2066 workers from five types of social

  8. Columbia River Component Data Gap Analysis

    Energy Technology Data Exchange (ETDEWEB)

    L. C. Hulstrom

    2007-10-23

    This Data Gap Analysis report documents the results of a study conducted by Washington Closure Hanford (WCH) to compile and review the currently available surface water and sediment data for the Columbia River near and downstream of the Hanford Site. This Data Gap Analysis study was conducted to review the adequacy of the existing surface water and sediment data set from the Columbia River, with specific reference to the use of the data in future site characterization and screening-level risk assessments.

  9. Use of Sparse Principal Component Analysis (SPCA) for Fault Detection

    DEFF Research Database (Denmark)

    Gajjar, Shriram; Kulahci, Murat; Palazoglu, Ahmet

    2016-01-01

    Principal component analysis (PCA) has been widely used for data dimension reduction and process fault detection. However, interpreting the principal components and the outcomes of PCA-based monitoring techniques is a challenging task since each principal component is a linear combination of the ...

  10. Multigroup Moderation Test in Generalized Structured Component Analysis

    Directory of Open Access Journals (Sweden)

    Angga Dwi Mulyanto

    2016-05-01

    Full Text Available Generalized Structured Component Analysis (GSCA) is an alternative method for structural modeling using alternating least squares. GSCA can be used for complex analyses, including multigroup analysis. GSCA can be run with a free software package called GeSCA, but GeSCA offers no multigroup moderation test to compare effects between groups. In this research we propose to use the T test from PLS for testing multigroup moderation in GSCA. The T test only requires the sample size, estimated path coefficient, and standard error of each group, all of which are already available in the GeSCA output, and the formula is simple, so the analysis requires little of the user's time.
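
    A small sketch of the proposed test, assuming the usual pooled degrees of freedom: it needs only the estimates, standard errors, and sample sizes reported by GeSCA. The numeric inputs below are made up.

```python
from math import sqrt
from scipy import stats

def multigroup_t(b1, se1, n1, b2, se2, n2):
    """t statistic and two-sided p-value for the difference of two path coefficients."""
    t = (b1 - b2) / sqrt(se1**2 + se2**2)
    df = n1 + n2 - 2                       # pooled degrees of freedom (an assumption)
    return t, 2 * stats.t.sf(abs(t), df)

print(multigroup_t(b1=0.42, se1=0.08, n1=150, b2=0.18, se2=0.09, n2=130))
```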

  11. Quantifying identifiability in independent component analysis

    DEFF Research Database (Denmark)

    Sokol, Alexander; Maathuis, Marloes H.; Falkeborg, Benjamin

    2014-01-01

    We are interested in consistent estimation of the mixing matrix in the ICA model, when the error distribution is close to (but different from) Gaussian. In particular, we consider $n$ independent samples from the ICA model $X = A\\epsilon$, where we assume that the coordinates of $\\epsilon......$ are independent and identically distributed according to a contaminated Gaussian distribution, and the amount of contamination is allowed to depend on $n$. We then investigate how the ability to consistently estimate the mixing matrix depends on the amount of contamination. Our results suggest...

  12. Projection and analysis of nuclear components

    International Nuclear Information System (INIS)

    Heeschen, U.

    1980-01-01

    The classification and the types of analysis carried out on piping for the quality control and safety of nuclear power plants are presented. The operation and emergency conditions are described, with emphasis on possible simplifications of the calculations. (author/M.C.K.) [pt

  13. Tritium permeation model for plasma facing components

    Science.gov (United States)

    Longhurst, G. R.

    1992-12-01

    This report documents the development of a simplified one-dimensional tritium permeation and retention model. The model makes use of the same physical mechanisms as more sophisticated, time-transient codes such as implantation, recombination, diffusion, trapping and thermal gradient effects. It takes advantage of a number of simplifications and approximations to solve the steady-state problem and then provides interpolating functions to make estimates of intermediate states based on the steady-state solution. The model is developed for solution using commercial spread-sheet software such as Lotus 123. Comparison calculations are provided with the verified and validated TMAP4 transient code with good agreement. Results of calculations for the ITER CDA diverter are also included.

  14. Tritium permeation model for plasma facing components

    International Nuclear Information System (INIS)

    Longhurst, G.R.

    1992-12-01

    This report documents the development of a simplified one-dimensional tritium permeation and retention model. The model makes use of the same physical mechanisms as more sophisticated, time-transient codes such as implantation, recombination, diffusion, trapping and thermal gradient effects. It takes advantage of a number of simplifications and approximations to solve the steady-state problem and then provides interpolating functions to make estimates of intermediate states based on the steady-state solution. The model is developed for solution using commercial spread-sheet software such as Lotus 123. Comparison calculations are provided with the verified and validated TMAP4 transient code with good agreement. Results of calculations for the ITER CDA diverter are also included

  15. Nonparametric inference in nonlinear principal components analysis : exploration and beyond

    NARCIS (Netherlands)

    Linting, Mariëlle

    2007-01-01

    In the social and behavioral sciences, data sets often do not meet the assumptions of traditional analysis methods. Therefore, nonlinear alternatives to traditional methods have been developed. This thesis starts with a didactic discussion of nonlinear principal components analysis (NLPCA),

  16. Principal component analysis networks and algorithms

    CERN Document Server

    Kong, Xiangyu; Duan, Zhansheng

    2017-01-01

    This book not only provides a comprehensive introduction to neural-based PCA methods in control science, but also presents many novel PCA algorithms and their extensions and generalizations, e.g., dual purpose, coupled PCA, GED, neural based SVD algorithms, etc. It also discusses in detail various analysis methods for the convergence, stabilizing, self-stabilizing property of algorithms, and introduces the deterministic discrete-time systems method to analyze the convergence of PCA/MCA algorithms. Readers should be familiar with numerical analysis and the fundamentals of statistics, such as the basics of least squares and stochastic algorithms. Although it focuses on neural networks, the book only presents their learning law, which is simply an iterative algorithm. Therefore, no a priori knowledge of neural networks is required. This book will be of interest and serve as a reference source to researchers and students in applied mathematics, statistics, engineering, and other related fields.
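
    One classic neural PCA learning law in this literature is Oja's rule; the sketch below extracts the first principal direction of synthetic 2-D data by iterative updates alone, without an eigendecomposition.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10000, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])  # correlated data
w = rng.normal(size=2)
eta = 1e-3
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)     # Oja's rule: Hebbian term plus weight decay
print("learned direction:", w / np.linalg.norm(w))   # ~ first eigenvector of cov(X)
```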

  17. Nitrogen component in nonpoint source pollution models

    Science.gov (United States)

    Pollutants entering a water body can be very destructive to the health of that system. Best Management Practices (BMPs) and/or conservation practices are used to reduce these pollutants, but understanding the most effective practices is very difficult. Watershed models are an effective tool to aid...

  18. Blind source separation dependent component analysis

    CERN Document Server

    Xiang, Yong; Yang, Zuyuan

    2015-01-01

    This book provides readers a complete and self-contained set of knowledge about dependent source separation, including the latest development in this field. The book gives an overview on blind source separation where three promising blind separation techniques that can tackle mutually correlated sources are presented. The book further focuses on the non-negativity based methods, the time-frequency analysis based methods, and the pre-coding based methods, respectively.

  19. Kernel principal component analysis for change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Morton, J.C.

    2008-01-01

    region acquired at two different time points. If change over time does not dominate the scene, the projection of the original two bands onto the second eigenvector will show change over time. In this paper a kernel version of PCA is used to carry out the analysis. Unlike ordinary PCA, kernel PCA...... with a Gaussian kernel successfully finds the change observations in a case where nonlinearities are introduced artificially....
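
    A schematic sketch with simulated band values: the two acquisition dates become two variables per pixel, kernel PCA with a Gaussian (RBF) kernel is fitted, and change candidates are read off the higher-order projection.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
t1 = rng.normal(size=2000)                      # band at time 1
t2 = t1 + rng.normal(scale=0.05, size=2000)     # mostly unchanged at time 2...
t2[:50] += 2.0                                  # ...except a small changed region
X = np.c_[t1, t2]

kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.5)
proj = kpca.fit_transform(X)
# Pixels with extreme scores on the second component are change candidates.
print("top candidates:", np.argsort(-np.abs(proj[:, 1]))[:5])
```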

  20. Real Time Engineering Analysis Based on a Generative Component Implementation

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Klitgaard, Jens

    2007-01-01

    The present paper outlines the idea of a conceptual design tool with real time engineering analysis which can be used in the early conceptual design phase. The tool is based on a parametric approach using Generative Components with embedded structural analysis. Each of these components uses the g...

  1. Nonlinear Process Fault Diagnosis Based on Serial Principal Component Analysis.

    Science.gov (United States)

    Deng, Xiaogang; Tian, Xuemin; Chen, Sheng; Harris, Chris J

    2018-03-01

    Many industrial processes contain both linear and nonlinear parts, and kernel principal component analysis (KPCA), widely used in nonlinear process monitoring, may not offer the most effective means for dealing with these nonlinear processes. This paper proposes a new hybrid linear-nonlinear statistical modeling approach for nonlinear process monitoring by closely integrating linear principal component analysis (PCA) and nonlinear KPCA using a serial model structure, which we refer to as serial PCA (SPCA). Specifically, PCA is first applied to extract PCs as linear features, and to decompose the data into the PC subspace and residual subspace (RS). Then, KPCA is performed in the RS to extract the nonlinear PCs as nonlinear features. Two monitoring statistics are constructed for fault detection, based on both the linear and nonlinear features extracted by the proposed SPCA. To effectively perform fault identification after a fault is detected, an SPCA similarity factor method is built for fault recognition, which fuses both the linear and nonlinear features. Unlike PCA and KPCA, the proposed method takes into account both linear and nonlinear PCs simultaneously, and therefore, it can better exploit the underlying process's structure to enhance fault diagnosis performance. Two case studies involving a simulated nonlinear process and the benchmark Tennessee Eastman process demonstrate that the proposed SPCA approach is more effective than the existing state-of-the-art approach based on KPCA alone, in terms of nonlinear process fault detection and identification.
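
    A compact sketch of the serial structure described above (data and component counts are illustrative): linear PCA first, then kernel PCA on the residual subspace, with simple monitoring statistics built from both feature sets.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
X[:, 3] += 2.0 * np.sin(X[:, 0])                 # inject a nonlinear relation

pca = PCA(n_components=4).fit(X)
scores = pca.transform(X)                        # linear features
residual = X - pca.inverse_transform(scores)     # residual subspace (RS)

kpca = KernelPCA(n_components=4, kernel="rbf").fit(residual)
nl_scores = kpca.transform(residual)             # nonlinear features from the RS

T2_lin = (scores**2 / pca.explained_variance_).sum(axis=1)  # linear statistic
T2_nl = (nl_scores**2).sum(axis=1)                          # nonlinear statistic
# In monitoring, each statistic would be compared against a control limit.
```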

  2. Robustness analysis of bogie suspension components Pareto optimised values

    Science.gov (United States)

    Mousavi Bideleh, Seyed Milad

    2017-08-01

    The bogie suspension system of high speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find the Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, robustness analysis of the bogie dynamic response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of the bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, as well as yaw damping are considered as five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve the computational efficiency. The results showed that the dynamic response of the vehicle with wear/comfort Pareto optimised values of the bogie suspension is robust against uncertainties in the design parameters, and the probability of failure is small for parameter uncertainties with COV up to 0.1.

  3. Two-component mixture cure rate model with spline estimated nonparametric components.

    Science.gov (United States)

    Wang, Lu; Du, Pang; Liang, Hua

    2012-09-01

    In some survival analysis of medical studies, there are often long-term survivors who can be considered as permanently cured. The goals in these studies are to estimate the noncured probability of the whole population and the hazard rate of the susceptible subpopulation. When covariates are present as often happens in practice, to understand covariate effects on the noncured probability and hazard rate is of equal importance. The existing methods are limited to parametric and semiparametric models. We propose a two-component mixture cure rate model with nonparametric forms for both the cure probability and the hazard rate function. Identifiability of the model is guaranteed by an additive assumption that allows no time-covariate interactions in the logarithm of hazard rate. Estimation is carried out by an expectation-maximization algorithm on maximizing a penalized likelihood. For inferential purpose, we apply the Louis formula to obtain point-wise confidence intervals for noncured probability and hazard rate. Asymptotic convergence rates of our function estimates are established. We then evaluate the proposed method by extensive simulations. We analyze the survival data from a melanoma study and find interesting patterns for this study. © 2011, The International Biometric Society.

  4. Model-integrating software components engineering flexible software systems

    CERN Document Server

    Derakhshanmanesh, Mahdi

    2015-01-01

    In his study, Mahdi Derakhshanmanesh builds on the state of the art in modeling by proposing to integrate models into running software on the component-level without translating them to code. Such so-called model-integrating software exploits all advantages of models: models implicitly support a good separation of concerns, they are self-documenting and thus improve understandability and maintainability and in contrast to model-driven approaches there is no synchronization problem anymore between the models and the code generated from them. Using model-integrating components, software will be

  5. Modeling money demand components in Lebanon using autoregressive models

    International Nuclear Information System (INIS)

    Mourad, M.

    2008-01-01

    This paper analyses the monetary aggregate in Lebanon and its different components using the methodology of AR models. Thirteen variables in monthly data have been studied for the period January 1990 through December 2005. Using the Augmented Dickey-Fuller (ADF) procedure, twelve variables are integrated of order 1 and thus need the filter (1-B) to become stationary; however, the variable X_{13,t} (claims on the private sector) becomes stationary with the filter (1-B)(1-B^{12}). The ex-post forecasts have been calculated for twelve horizons and for one horizon (one-step-ahead forecasts). The quality of the forecasts has been measured using the MAPE criterion, for which the forecasts are good because the MAPE values are low. Finally, a pursuit of this research using the cointegration approach is proposed. (author)
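
    A small sketch (synthetic series, invented orders) of the modeling pattern the abstract describes: first differencing, an extra seasonal difference of order 12 for the seasonal series, and MAPE to judge the ex-post forecasts.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
n = 192                                            # 16 years of monthly data
y = 50 + np.cumsum(rng.normal(size=n)) + 10 * np.sin(np.arange(n) * 2 * np.pi / 12)

train, test = y[:-12], y[-12:]
# d=1 applies the (1-B) filter; D=1 with s=12 adds the seasonal (1-B^12) filter.
model = SARIMAX(train, order=(1, 1, 0), seasonal_order=(0, 1, 0, 12)).fit(disp=False)
forecast = model.forecast(steps=12)                # ex-post forecasts for 12 horizons
mape = np.mean(np.abs((test - forecast) / test)) * 100
print(f"MAPE: {mape:.1f}%")
```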

  6. Integration of Simulink Models with Component-based Software Models

    DEFF Research Database (Denmark)

    Marian, Nicolae; Top, Søren

    2008-01-01

    , communication and constraints, using computational blocks and aggregates for both discrete and continuous behaviour, different interconnection and execution disciplines for event-based and time-based controllers, and so on, to encompass the demands for more functionality at even lower prices, and with opposite... to be analyzed. One way of doing that is to integrate the model back into Simulink S-functions via wrapper files, and use its extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behaviour, and the transformation of the software system into the S-functions. The general aim of this work is the improvement of multi-disciplinary development of embedded systems with the focus on the relation...

  7. Problems of stress analysis of fuelling machine head components

    International Nuclear Information System (INIS)

    Mathur, D.D.

    1975-01-01

    The problems of stress analysis of fuelling machine head components are discussed. To fulfil the functional requirements, the components are required to have certain shapes for which the stress problems cannot be matched to a catalogue of pre-determined solutions. The areas are identified where complex systems of loading due to hydrostatic pressure, weight, moments and temperature gradients, coupled with the intricate shapes of the components, make it difficult to arrive at satisfactory solutions. In particular, the analysis requirements of the magazine housing, end cover, gravloc clamps and centre support are highlighted. An experimental stress analysis programme together with a theoretical finite element analysis is perhaps the answer. (author)

  8. Modelling safety of multistate systems with ageing components

    Energy Technology Data Exchange (ETDEWEB)

    Kołowrocki, Krzysztof; Soszyńska-Budny, Joanna [Gdynia Maritime University, Department of Mathematics ul. Morska 81-87, Gdynia 81-225 Poland (Poland)

    2016-06-08

    An innovative approach to safety analysis of multistate ageing systems is presented. Basic notions of ageing multistate systems safety analysis are introduced. The system components and the system multistate safety functions are defined. The mean values and variances of the multistate systems' lifetimes in the safety state subsets and the mean values of their lifetimes in the particular safety states are defined. The multistate system risk function and the moment at which the system exceeds the critical safety state are introduced. Applications of the proposed multistate system safety models to the evaluation and prediction of the safety characteristics of the consecutive “m out of n: F” system are presented as well.

  9. Modelling safety of multistate systems with ageing components

    International Nuclear Information System (INIS)

    Kołowrocki, Krzysztof; Soszyńska-Budny, Joanna

    2016-01-01

    An innovative approach to safety analysis of multistate ageing systems is presented. Basic notions of ageing multistate systems safety analysis are introduced. The system components and the system multistate safety functions are defined. The mean values and variances of the multistate systems' lifetimes in the safety state subsets and the mean values of their lifetimes in the particular safety states are defined. The multistate system risk function and the moment at which the system exceeds the critical safety state are introduced. Applications of the proposed multistate system safety models to the evaluation and prediction of the safety characteristics of the consecutive “m out of n: F” system are presented as well.

  10. Rapid and interference-free analysis of nine B-group vitamins in energy drinks using trilinear component modeling of liquid chromatography-mass spectrometry data.

    Science.gov (United States)

    Hu, Yong; Wu, Hai-Long; Yin, Xiao-Li; Gu, Hui-Wen; Xiao, Rong; Xie, Li-Xia; Liu, Zhi; Fang, Huan; Wang, Li; Yu, Ru-Qin

    2018-04-01

    The aim of the present work was to develop a rapid and interference-free method based on liquid chromatography-mass spectrometry (LC-MS) for the simultaneous determination of nine B-group vitamins in various energy drinks. A smart and green strategy that modeled the three-way data array of LC-MS with second-order calibration methods based on alternating trilinear decomposition (ATLD) and alternating penalty trilinear decomposition (APTLD) algorithms was developed. By virtue of "mathematical separation" and the "second-order advantage", the proposed strategy successfully resolved the co-eluted peaks and unknown interferents in LC-MS analysis, with an elution time of less than 4.5 min and simple sample preparation. Satisfactory quantitative results were obtained by the ATLD-LC-MS and APTLD-LC-MS methods for the spiked recovery assays, with average spiked recoveries of 87.2-113.9% and 92.0-111.7%, respectively. The results of the proposed methods were confirmed by the LC-MS/MS method, showing good mutual consistency. All these results demonstrated that the developed chemometrics-assisted LC-MS strategy has the advantages of being rapid, green, accurate and low-cost, and it could be an attractive alternative for the determination of multiple vitamins in complex food matrices, requiring no laborious sample preparation, tedious condition optimization or more sophisticated instrumentation. Copyright © 2017 Elsevier B.V. All rights reserved.
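
    As a hedged sketch of the trilinear step only, the snippet below fits a CP/PARAFAC model with the tensorly library in place of the ATLD/APTLD algorithms named above (same trilinear model, different optimizers); the data cube is random stand-in data, and the mode names are assumptions.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Hypothetical LC-MS data cube: samples x elution times x m/z channels
X = tl.tensor(np.random.default_rng(0).random((12, 90, 60)))
weights, factors = parafac(X, rank=3, normalize_factors=True)
sample_scores, chrom_profiles, spectra = factors
# Relative analyte concentrations would be read from `sample_scores`
# after calibration against standards.
```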

  11. Determination of the optimal number of components in independent components analysis.

    Science.gov (United States)

    Kassouf, Amine; Jouan-Rimbaud Bouveresse, Delphine; Rutledge, Douglas N

    2018-03-01

    Independent components analysis (ICA) may be considered as one of the most established blind source separation techniques for the treatment of complex data sets in analytical chemistry. Like other similar methods, the determination of the optimal number of latent variables, in this case, independent components (ICs), is a crucial step before any modeling. Therefore, validation methods are required in order to decide about the optimal number of ICs to be used in the computation of the final model. In this paper, three new validation methods are formally presented. The first one, called Random_ICA, is a generalization of the ICA_by_blocks method. Its specificity resides in the random way of splitting the initial data matrix into two blocks, and then repeating this procedure several times, giving a broader perspective for the selection of the optimal number of ICs. The second method, called KMO_ICA_Residuals is based on the computation of the Kaiser-Meyer-Olkin (KMO) index of the transposed residual matrices obtained after progressive extraction of ICs. The third method, called ICA_corr_y, helps to select the optimal number of ICs by computing the correlations between calculated proportions and known physico-chemical information about samples, generally concentrations, or between a source signal known to be present in the mixture and the signals extracted by ICA. These three methods were tested using varied simulated and experimental data sets and compared, when necessary, to ICA_by_blocks. Results were relevant and in line with expected ones, proving the reliability of the three proposed methods. Copyright © 2017 Elsevier B.V. All rights reserved.
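
    A hedged sketch of the Random_ICA idea described above: repeatedly split the samples into two random halves, extract d independent components from each, and check whether the components still match across halves. The reproducibility score used here is one simple choice among several.

```python
import numpy as np
from sklearn.decomposition import FastICA

def random_ica_score(X, d, n_splits=10, seed=0):
    """Mean cross-half reproducibility of d ICs; near 1.0 means d is supported."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_splits):
        idx = rng.permutation(X.shape[0])
        half = X.shape[0] // 2
        A = FastICA(n_components=d, random_state=0).fit(X[idx[:half]]).components_
        B = FastICA(n_components=d, random_state=0).fit(X[idx[half:]]).components_
        corr = np.abs(np.corrcoef(A, B)[:d, d:])   # match ICs across the two halves
        scores.append(corr.max(axis=1).mean())
    return float(np.mean(scores))

# The largest d whose score stays near 1 is a candidate for the optimal number of ICs.
```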

  12. Component reliability analysis for development of component reliability DB of Korean standard NPPs

    International Nuclear Information System (INIS)

    Choi, S. Y.; Han, S. H.; Kim, S. H.

    2002-01-01

    Reliability data for Korean NPPs that reflect plant-specific characteristics are necessary for PSA and risk-informed applications. We have performed a project to develop a component reliability DB and calculate component reliability measures such as failure rate and unavailability. We have collected the component operation data and failure/repair data of Korean standard NPPs. We have analyzed the failure data by developing a data analysis method that reflects the domestic data situation, and have then compared the reliability results with generic data for foreign NPPs.
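
    A minimal sketch of the basic estimate behind such a database: a constant failure rate from pooled operating records, with a two-sided chi-square confidence interval. The counts and hours below are made up.

```python
from scipy.stats import chi2

failures, hours = 3, 1.2e6                 # hypothetical pooled operating experience
lam = failures / hours                     # point estimate (failures per hour)
lo = chi2.ppf(0.05, 2 * failures) / (2 * hours)        # 90% CI lower bound
hi = chi2.ppf(0.95, 2 * failures + 2) / (2 * hours)    # 90% CI upper bound
print(f"lambda = {lam:.2e}/h, 90% CI = ({lo:.2e}, {hi:.2e})")
```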

  13. Algorithmic fault tree construction by component-based system modeling

    International Nuclear Information System (INIS)

    Majdara, Aref; Wakabayashi, Toshio

    2008-01-01

    Computer-aided fault tree generation can be easier, faster and less vulnerable to errors than conventional manual fault tree construction. In this paper, a new approach for algorithmic fault tree generation is presented. The method mainly consists of a component-based system modeling procedure and a trace-back algorithm for fault tree synthesis. Components, as the building blocks of systems, are modeled using function tables and state transition tables. The proposed method can be used for a wide range of systems with various kinds of components, provided that an inclusive component database is developed. (author)
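
    A toy sketch of the ingredients named above, with invented component names: function tables mapping input states and internal modes to output states, and a trace-back that expands an undesired output into its possible causes.

```python
function_tables = {
    # component: {(input_state, internal_mode): output_state}
    "pump":  {("power_ok", "working"): "flow", ("power_ok", "failed"): "no_flow",
              ("power_lost", "working"): "no_flow"},
    "valve": {("flow", "open"): "flow", ("flow", "stuck_closed"): "no_flow",
              ("no_flow", "open"): "no_flow"},
}

def trace_back(component, bad_output):
    """Return the (input_state, mode) pairs of `component` that yield `bad_output`."""
    return [cause for cause, out in function_tables[component].items()
            if out == bad_output]

# Top event: no flow at the valve outlet; causes split into upstream "no_flow"
# (expanded further at the pump) and the local "stuck_closed" failure mode.
print(trace_back("valve", "no_flow"))
```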

  14. Autonomous learning in gesture recognition by using lobe component analysis

    Science.gov (United States)

    Lu, Jian; Weng, Juyang

    2007-02-01

    Gesture recognition is a new human-machine interface method implemented by pattern recognition (PR). In order to assure robot safety when gestures are used in robot control, the interface must be implemented reliably and accurately. As in other PR applications, (1) feature selection (or model establishment) and (2) training from samples largely determine the performance of gesture recognition. For (1), a simple model with six feature points at the shoulders, elbows, and hands is established. The gestures to be recognized are restricted to still arm gestures, and the movement of the arms is not considered. These restrictions reduce misrecognition and are not unreasonable in practice. For (2), a new biologically inspired network method, called lobe component analysis (LCA), is used for unsupervised learning. Lobe components, corresponding to high concentrations in the probability density of the neuronal input, are orientation-selective cells that follow the Hebbian rule with lateral inhibition. Owing to the advantage of the LCA method in balancing learning between global and local features, large numbers of samples can be used in learning efficiently.
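
    A heavily simplified sketch (not the authors' code) of lobe-component-style learning: Hebbian updates with winner-take-all lateral inhibition, so each neuron settles on one high-concentration direction of the input distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, dim = 4, 6
prototypes = np.eye(dim)[:n_neurons]               # four input "concentrations"
W = rng.normal(size=(n_neurons, dim))
W /= np.linalg.norm(W, axis=1, keepdims=True)

for _ in range(5000):
    x = prototypes[rng.integers(n_neurons)] + 0.1 * rng.normal(size=dim)
    y = W @ x
    k = int(np.argmax(y))                          # lateral inhibition: winner takes all
    W[k] += 0.01 * y[k] * (x - y[k] * W[k])        # Hebbian update with Oja-style decay
    W[k] /= np.linalg.norm(W[k])
print(np.round(W, 2))                              # rows align with the prototypes
```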

  15. Flexible Multibody Systems Models Using Composite Materials Components

    International Nuclear Information System (INIS)

    Neto, Maria Augusta; Ambrósio, Jorge A. C.; Leal, Rogério Pereira

    2004-01-01

    The use of a multibody methodology to describe the large motion of complex systems that experience structural deformations makes it possible to represent the complete system motion, the relative kinematics between the components involved, the deformation of the structural members and the inertia coupling between the large rigid body motion and the system elastodynamics. In this work, flexible multibody dynamics formulations of complex models are extended to include elastic components made of composite materials, which may be laminated and anisotropic. The deformation of any structural member must be elastic and linear when described in a coordinate frame fixed to one or more material points of its domain, regardless of the complexity of its geometry. To achieve the proposed flexible multibody formulation, a finite element model is used for each flexible body. For the composite material beam elements, the section properties are found using an asymptotic procedure that involves a two-dimensional finite element analysis of their cross-section. The equations of motion of the flexible multibody system are solved using an augmented Lagrangian formulation, and the accelerations and velocities are integrated in time using a multi-step, multi-order integration algorithm based on the Gear method.

  16. Fluoride in the Serra Geral Aquifer System: Source Evaluation Using Stable Isotopes and Principal Component Analysis

    OpenAIRE

    Nanni, Arthur Schmidt; Roisenberg, Ari; de Hollanda, Maria Helena Bezerra Maia; Marimon, Maria Paula Casagrande; Viero, Antonio Pedro; Scheibe, Luiz Fernando

    2013-01-01

    Groundwater with anomalous fluoride content and water mixture patterns were studied in the fractured Serra Geral Aquifer System, a basaltic to rhyolitic geological unit, using a principal component analysis interpretation of groundwater chemical data from 309 deep wells distributed in the Rio Grande do Sul State, Southern Brazil. A four-component model that explains 81% of the total variance in the Principal Component Analysis is suggested. Six hydrochemical groups were identified. δ18O and δ...

  17. Using principal component analysis to capture individual differences within a unified neuropsychological model of chronic post-stroke aphasia: Revealing the unique neural correlates of speech fluency, phonology and semantics.

    Science.gov (United States)

    Halai, Ajay D; Woollams, Anna M; Lambon Ralph, Matthew A

    2017-01-01

    Individual differences in the performance profiles of neuropsychologically-impaired patients are pervasive, yet there is still no resolution on the best way to model and account for the variation in their behavioural impairments and the associated neural correlates. To date, researchers have generally taken one of three different approaches: a single-case study methodology in which each case is considered separately; a case-series design in which all individual patients from a small coherent group are examined and directly compared; or group studies, in which a sample of cases is investigated as one group with the assumption that they are drawn from a homogeneous category and that performance differences are of no interest. In recent research, we have developed a complementary alternative through the use of principal component analysis (PCA) of individual data from large patient cohorts. This data-driven approach not only generates a single unified model for the group as a whole (expressed in terms of the emergent principal components) but is also able to capture the individual differences between patients (in terms of their relative positions along the principal behavioural axes). We demonstrate the use of this approach by considering speech fluency, phonology and semantics in aphasia diagnosis and classification, as well as their unique neural correlates. PCA of the behavioural data from 31 patients with chronic post-stroke aphasia resulted in four statistically-independent behavioural components reflecting phonological, semantic, executive-cognitive and fluency abilities. Even after accounting for lesion volume, entering the four behavioural components simultaneously into a voxel-based correlational methodology (VBCM) analysis revealed that speech fluency (speech quanta) was uniquely correlated with left motor cortex and underlying white matter (including the anterior section of the arcuate fasciculus and the frontal aslant tract), phonological skills with

  18. Exploring a minimal two-component p53 model

    International Nuclear Information System (INIS)

    Sun, Tingzhe; Zhu, Feng; Shen, Pingping; Yuan, Ruoshi; Xu, Wei

    2010-01-01

    The tumor suppressor p53 coordinates many attributes of cellular processes via interlocked feedback loops. To understand the biological implications of feedback loops in a p53 system, a two-component model which encompasses the essential feedback loops was constructed and explored. Diverse bifurcation properties, such as bistability and oscillation, emerge when the feedback strength is manipulated. The p53-mediated MDM2 induction dictates the bifurcation patterns. We first identified an irradiation dichotomy in p53 models and further proposed that bistability and oscillation can behave in a coordinated manner. Sensitivity analysis revealed that p53 basal production and MDM2-mediated p53 degradation, which are central to cellular control, are the most sensitive processes. We also identified that the much larger variations in the amplitude of p53 pulses observed in experiments can be derived from the overall amplitude parameter sensitivity. The combined approach of bifurcation analysis, stochastic simulation and sampling-based sensitivity analysis not only gives crucial insights into the dynamics of the p53 system, but also creates fertile ground for understanding the regulatory patterns of other biological networks.

  19. Independent component analysis classification of laser induced breakdown spectroscopy spectra

    International Nuclear Information System (INIS)

    Forni, Olivier; Maurice, Sylvestre; Gasnault, Olivier; Wiens, Roger C.; Cousin, Agnès; Clegg, Samuel M.; Sirven, Jean-Baptiste; Lasue, Jérémie

    2013-01-01

    The ChemCam instrument on board the Mars Science Laboratory (MSL) rover uses the laser-induced breakdown spectroscopy (LIBS) technique to remotely analyze Martian rocks. It retrieves spectra at distances of up to seven meters in order to quantitatively analyze the sampled rocks. Like any field application, on-site measurements by LIBS are altered by diverse matrix effects which induce signal variations that are specific to the nature of the sample. Qualitative aspects remain to be studied, particularly LIBS sample identification to determine which samples are of interest for further analysis by ChemCam and other rover instruments. This can be performed with the help of different chemometric methods that model the spectral variance in order to identify the rock from its spectrum. In this paper we test independent component analysis (ICA) for rock classification by remote LIBS. We show that, using measures of distance in ICA space, namely the Manhattan and Mahalanobis distances, we can efficiently classify the spectra of an unknown rock. The Mahalanobis distance gives better overall performance and is easier to manage than the Manhattan distance, for which the determination of the cut-off distance is not easy. However, these two techniques are complementary and their analytical performance will improve with time during MSL operations as the quantity of available Martian spectra grows. The analysis accuracy and performance will benefit from a combination of the two approaches. - Highlights: • We use a novel independent component analysis method to classify LIBS spectra. • We demonstrate the usefulness of ICA. • We report the performances of the ICA classification. • We compare it to other classical classification schemes

  20. Principal Component Analysis of Body Measurements In Three ...

    African Journals Online (AJOL)

    This study was conducted to explore the relationship among body measurements in 3 strains of broilers chicken (Arbor Acre, Marshal and Ross) using principal component analysis with the view of identifying those components that define body conformation in broilers. A total of 180 birds were used, 60 per strain.

  1. Tomato sorting using independent component analysis on spectral images

    NARCIS (Netherlands)

    Polder, G.; Heijden, van der G.W.A.M.; Young, I.T.

    2003-01-01

    Independent Component Analysis is one of the most widely used methods for blind source separation. In this paper we use this technique to estimate the most important compounds which play a role in the ripening of tomatoes. Spectral images of tomatoes were analyzed. Two main independent components

  2. Component based modelling of piezoelectric ultrasonic actuators for machining applications

    International Nuclear Information System (INIS)

    Saleem, A; Ahmed, N; Salah, M; Silberschmidt, V V

    2013-01-01

    Ultrasonically Assisted Machining (UAM) is an emerging technology that has been utilized to improve the surface finish in machining processes such as turning, milling, and drilling. In this context, piezoelectric ultrasonic transducers are used to vibrate the cutting tip at a predetermined amplitude and frequency while machining. However, modelling and simulation of these transducers is a tedious and difficult task, due to the inherent nonlinearities associated with smart materials. Therefore, this paper presents a component-based model of ultrasonic transducers that mimics the nonlinear behaviour of such a system. The system is decomposed into components, a mathematical model of each component is created, and the whole-system model is assembled by aggregating the basic component models. System parameters are identified using the finite element technique, and the model is then used to simulate the system in Matlab/SIMULINK. Various operating conditions are tested to demonstrate the system performance.

  3. Components in models of learning: Different operationalisations and relations between components

    Directory of Open Access Journals (Sweden)

    Mirkov Snežana

    2013-01-01

    Full Text Available This paper provides a presentation of different operationalisations of components in different models of learning. Special emphasis is on the empirical verification of relations between components. Starting from the research on congruence between learning motives and strategies, underlying the general model of school learning that comprises different approaches to learning, we have analyzed the empirical verifications of the factor structure of instruments containing scales of motives and of the learning strategies corresponding to these motives. Considering the problems in the conceptualization of the achievement approach to learning, we have discussed the ways of operationalising goal orientations and exploring their role in the use of learning strategies, especially within the model of the regulation of constructive learning processes. This model has served as the basis for researching learning styles, which are combinations of a large number of components. Complex relations between the components point to the need for further investigation of the constructs involved in various models. We have discussed the findings and implications of studies of the relations between components involved in different models, especially between learning motives/goals and learning strategies. We have analyzed the role of regulation in the learning process, whose elaboration, as indicated by empirical findings, can contribute to a more precise operationalisation of certain learning components. [Projects of the Ministry of Science of the Republic of Serbia, no. 47008: Improving the quality and accessibility of education in the modernisation processes of Serbia, and no. 179034: From encouraging initiative, cooperation and creativity in education to new roles and identities in society]

  4. Key components of financial-analysis education for clinical nurses.

    Science.gov (United States)

    Lim, Ji Young; Noh, Wonjung

    2015-09-01

    In this study, we identified key components of financial-analysis education for clinical nurses. We used a literature review, focus group discussions, and a content validity index survey to develop the key components of financial-analysis education. First, a wide range of references were reviewed, and 55 financial-analysis education components were gathered. Second, two focus group discussions were performed; the participants were 11 nurses who had worked for more than 3 years in a hospital, and nine components were agreed upon. Third, 12 professionals, including professors, a nurse executive, nurse managers, and an accountant, participated in the content validity index survey. Finally, six key components of financial-analysis education were selected. These key components were as follows: understanding the need for financial analysis, introduction to financial analysis, reading and implementing balance sheets, reading and implementing income statements, understanding the concepts of financial ratios, and interpretation and practice of financial ratio analysis. The results of this study will be used to develop an education program to increase financial-management competency among clinical nurses.

  5. Models for integrated components coupled with their EM environment

    NARCIS (Netherlands)

    Ioan, D.; Schilders, W.H.A.; Ciuprina, G.; Meijs, van der N.P.; Schoenmaker, W.

    2008-01-01

    Abstract: Purpose – The main aim of this study is the modelling of the interaction of on-chip components with their electromagnetic environment. Design/methodology/approach – The integrated circuit is decomposed into passive and active components interconnected by means of terminals and connectors

  6. Feature-based component model for design of embedded systems

    Science.gov (United States)

    Zha, Xuan Fang; Sriram, Ram D.

    2004-11-01

    An embedded system is a hybrid of hardware and software, combining the flexibility of software with the real-time performance of hardware. Embedded systems can be considered as assemblies of hardware and software components. An Open Embedded System Model (OESM) is currently being developed at NIST to provide a standard representation and exchange protocol for embedded systems and system-level design, simulation, and testing information. This paper proposes an approach to representing an embedded system feature-based model in OESM, i.e., an Open Embedded System Feature Model (OESFM), addressing models of embedded system artifacts, embedded system components, embedded system features, and embedded system configuration/assembly. The approach provides an object-oriented UML (Unified Modeling Language) representation for the embedded system feature model and defines an extension to the NIST Core Product Model. The model provides a feature-based component framework allowing the designer to develop a virtual embedded system prototype by assembling virtual components. The framework not only provides a formal precise model of the embedded system prototype but also offers the possibility of designing variations of prototypes whose members are derived by changing certain virtual components with different features. A case study example is discussed to illustrate the embedded system model.

  7. Decomposing the heterogeneity of depression at the person-, symptom-, and time-level : Latent variable models versus multimode principal component analysis

    NARCIS (Netherlands)

    de Vos, Stijn; Wardenaar, Klaas J.; Bos, Elisabeth H.; Wit, Ernst C.; de Jonge, Peter

    2015-01-01

    Background: Heterogeneity of psychopathological concepts such as depression hampers progress in research and clinical practice. Latent Variable Models (LVMs) have been widely used to reduce this problem by identification of more homogeneous factors or subgroups. However, heterogeneity exists at

  8. Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo

    KAUST Repository

    Martinez, Josue G.; Liang, Faming; Zhou, Lan; Carroll, Raymond J.

    2010-01-01

    model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order

  9. Northeast Puerto Rico and Culebra Island Principle Component Analysis - NOAA TIFF Image

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This GeoTiff is a representation of seafloor topography in Northeast Puerto Rico derived from a bathymetry model with a principal component analysis (PCA). The area...

  10. New approaches to the modelling of multi-component fuel droplet heating and evaporation

    KAUST Repository

    Sazhin, Sergei S; Elwardany, Ahmed E; Heikal, Morgan R

    2015-01-01

    numbers n and temperatures is taken into account. The effects of temperature gradient and quasi-component diffusion inside droplets are taken into account. The analysis is based on the Effective Thermal Conductivity/Effective Diffusivity (ETC/ED) model

  11. A survival analysis on critical components of nuclear power plants

    International Nuclear Information System (INIS)

    Durbec, V.; Pitner, P.; Riffard, T.

    1995-06-01

    Some tubes of heat exchangers in nuclear power plants may be affected by primary water stress corrosion cracking (PWSCC) in highly stressed areas. These defects can shorten the lifetime of the component and lead to its replacement. In order to reduce the risk of cracking, a preventive remedial operation called shot peening was applied on the French reactors between 1985 and 1988. To assess and investigate the effects of shot peening, a statistical analysis was carried out on the tube degradation results obtained from the in-service inspections that are regularly conducted using non-destructive tests. The statistical method used is based on the Cox proportional hazards model, a powerful tool in the analysis of survival data, implemented in PROC PHREG, recently made available in SAS/STAT. This technique has a number of major advantages, including the ability to deal with censored failure-time data and with the complication of time-dependent covariates. The paper focuses on the modelling and on a presentation of the results given by SAS. They provide estimates of how the relative risk of degradation changes after peening and indicate for which values of the prognostic factors analyzed the treatment is likely to be most beneficial. (authors). 2 refs., 3 figs., 6 tabs
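
    The record fits the Cox model with SAS PROC PHREG; the sketch below shows the same kind of proportional-hazards fit in Python with the lifelines package. The data frame, column names and covariate values are hypothetical, and the time-dependent covariates used in the study are omitted for brevity.

      import pandas as pd
      from lifelines import CoxPHFitter

      # Hypothetical inspection data: time to first detected crack, a
      # censoring flag, and covariates (shot peening is treated as
      # time-independent here for simplicity).
      df = pd.DataFrame({
          "years_in_service": [4.2, 7.5, 3.1, 9.0, 6.3],
          "cracked":          [1,   0,   1,   0,   1],   # 0 = censored
          "shot_peened":      [0,   1,   0,   1,   0],
          "stress_level":     [2.1, 1.4, 2.8, 1.2, 2.5],
      })

      cph = CoxPHFitter()
      cph.fit(df, duration_col="years_in_service", event_col="cracked")
      cph.print_summary()   # exp(coef) < 1 for shot_peened would indicate
                            # a protective effect of the peening treatment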

  12. Constrained Null Space Component Analysis for Semiblind Source Separation Problem.

    Science.gov (United States)

    Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn

    2018-02-01

    The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on the sources or on how the sources are mixed. The constrained independent component analysis (ICA) approach has been studied to impose constraints on the famous ICA framework. We introduced an alternative approach based on the null space component analysis (NCA) framework and referred to it as the c-NCA approach. We also presented the c-NCA algorithm, which uses signal-dependent semidefinite operators, a bilinear mapping, as signatures for operator design in the c-NCA approach. Theoretically, we showed that the source estimation of the c-NCA algorithm converges, with a convergence rate dependent on the decay of the sequence obtained by applying the estimated operators to the corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization method, and thus it can take advantage of solvers developed by the optimization community for solving the BSS problem. As examples, we demonstrated that electroencephalogram interference rejection problems can be solved by the c-NCA with proximal splitting algorithms, by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.

  13. Sensitivity analysis on the component cooling system of the Angra 1 NPP

    International Nuclear Information System (INIS)

    Castro Silva, Luiz Euripedes Massiere de

    1995-01-01

    The component cooling system has been studied within the scope of the Probabilistic Safety Analysis of the Angra 1 NPP in order to assure that the proposed model matches the operating system and its availability aspects as closely as possible. To that end, a sensitivity analysis was performed on the equivalence between the operating modes of the component cooling system, and its results show the fitness of the model. (author). 4 refs, 3 figs, 3 tabs

  14. Understanding science teacher enhancement programs: Essential components and a model

    Science.gov (United States)

    Spiegel, Samuel Albert

    Researchers and practitioners alike recognize that "the national goal that every child in the United States has access to high-quality school education in science and mathematics cannot be realized without the availability of effective professional development of teachers" (Hewson, 1997, p. 16). Further, there is a plethora of reports calling for the improvement of professional development efforts (Guskey & Huberman, 1995; Kyle, 1995; Loucks-Horsley, Hewson, Love, & Stiles, 1997). In this study I analyze a successful 3-year teacher enhancement program, one form of professional development, to: (1) identify essential components of an effective teacher enhancement program; and (2) create a model to identify and articulate the critical issues in designing, implementing, and evaluating teacher enhancement programs. Five primary sources of information were converted into data: (1) exit questionnaires, (2) exit surveys, (3) exit interview transcripts, (4) focus group transcripts, and (5) other artifacts. Additionally, a focus group was used to conduct member checks. Data were analyzed in an iterative process which led to the development of the list of essential components. The Components are categorized by three organizers: Structure (e.g., science research experience, a mediator throughout the program), Context (e.g., intensity, collaboration), and Participant Interpretation (e.g., perceived to be "safe" to examine personal beliefs and practices, actively engaged). The model is based on: (1) a 4-year study of a successful teacher enhancement program; (2) an analysis of professional development efforts reported in the literature; and (3) reflective discussions with implementors, evaluators, and participants of professional development programs. The model consists of three perspectives, cognitive, symbolic interaction, and organizational, representing different viewpoints from which to consider issues relevant to the success of a teacher enhancement program. These

  15. Principal component analysis of 1/fα noise

    International Nuclear Information System (INIS)

    Gao, J.B.; Cao Yinhe; Lee, J.-M.

    2003-01-01

    Principal component analysis (PCA) is a popular data analysis method. One of the motivations for using PCA in practice is to reduce the dimension of the original data by projecting the raw data onto a few dominant eigenvectors with large variance (energy). Due to the ubiquity of 1/fα noise in science and engineering, in this Letter we study the prototypical stochastic model for 1/fα processes, the fractional Brownian motion (fBm) process, using PCA, and find that the eigenvalues from PCA of fBm processes follow a power law, with the exponent being the key parameter defining the fBm process. We also study random-walk-type processes constructed from DNA sequences, and find that the eigenvalue spectra from PCA of those random-walk processes also follow power-law relations, with the exponent characterizing the correlation structure of the DNA sequence. In fact, it is observed that PCA can automatically remove linear trends induced by patchiness in the DNA sequence; hence, PCA has a capability similar to that of detrended fluctuation analysis. Implications of the power-law distributed eigenvalue spectrum are discussed
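
    A short numerical check of the claimed power-law eigenvalue spectrum. For simplicity the sketch uses ordinary Brownian motion, the H = 0.5 special case of fBm (general fBm would need, e.g., circulant-embedding synthesis), and arbitrary sample sizes.

      import numpy as np

      rng = np.random.default_rng(0)
      n_real, n_pts = 500, 256

      # Brownian motion realizations: cumulative sums of white noise.
      X = np.cumsum(rng.standard_normal((n_real, n_pts)), axis=1)
      X -= X.mean(axis=0)                        # center across realizations

      eigvals = np.linalg.eigvalsh(X.T @ X / n_real)[::-1]   # descending

      # Fit eigvals[k] ~ k**(-beta) on the leading eigenvalues.
      k = np.arange(1, 21)
      beta = -np.polyfit(np.log(k), np.log(eigvals[:20]), 1)[0]
      print(f"power-law exponent of eigenvalue decay: {beta:.2f}")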

  16. Independent component and pathway-based analysis of miRNA-regulated gene expression in a model of type 1 diabetes

    DEFF Research Database (Denmark)

    Bang-Berthelsen, Claus Heiner; Pedersen, Lykke; Fløyel, Tina

    2011-01-01

    ... (ICA). Here, we developed a novel target prediction method based on ICA that incorporates both seed matching and expression profiling of miRNA and mRNA expressions. The method was applied on a cellular model of type 1 diabetes. RESULTS: Microarray profiling identified eight miRNAs (miR-124 ...) with enrichment of sequence-predicted targets, compared to only four miRNAs when using simple negative correlation. The ICs were enriched for miRNA targets that function in diabetes-relevant pathways, e.g. type 1 and type 2 diabetes and maturity onset diabetes of the young (MODY). CONCLUSIONS: In this study, ICA ... between the predicted miRNA targets. Applying the method on a model of type 1 diabetes resulted in identification of eight miRNAs that appear to affect pathways of relevance to disease mechanisms in diabetes.

  17. 3-D thermal analysis using finite difference technique with finite element model for improved design of components of rocket engine turbomachines for Space Shuttle Main Engine SSME

    Science.gov (United States)

    Sohn, Kiho D.; Ip, Shek-Se P.

    1988-01-01

    Three-dimensional finite element models were generated and transferred into three-dimensional finite difference models to perform transient thermal analyses for the SSME high pressure fuel turbopump's first stage nozzles and rotor blades. STANCOOL was chosen to calculate the heat transfer characteristics (HTCs) around the airfoils, and endwall effects were included at the intersections of the airfoils and platforms for the steady-state boundary conditions. Free and forced convection due to rotation effects were also considered in hollow cores. Transient HTCs were calculated by taking ratios of the steady-state values based on the flow rates and fluid properties calculated at each time slice. Results are presented for both transient plots and three-dimensional color contour isotherm plots; they were also converted into universal files to be used for FEM stress analyses.

  18. System diagnostics using qualitative analysis and component functional classification

    International Nuclear Information System (INIS)

    Reifman, J.; Wei, T.Y.C.

    1993-01-01

    A method for detecting and identifying faulty component candidates during off-normal operations of nuclear power plants involves the qualitative analysis of macroscopic imbalances in the conservation equations of mass, energy and momentum in thermal-hydraulic control volumes associated with one or more plant components, together with the functional classification of components. The qualitative analysis of mass and energy is performed through the associated equations of state, while imbalances in momentum are obtained by tracking mass flow rates; this information is incorporated into a first knowledge base. The plant components are functionally classified, according to their type, as sources or sinks of mass, energy and momentum, depending upon which of the three balance equations is most strongly affected by a fault in the component; this classification is incorporated into a second knowledge base. Information describing the connections among the components of the system forms a third knowledge base. The method is particularly suited for use in a diagnostic expert system to detect and identify faulty component candidates in the presence of component failures, and it is not limited to nuclear power plants: it may be used with virtually any type of thermal-hydraulic operating system. 5 figures
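
    A toy sketch of how the three knowledge bases could interact. Every control volume, component name and imbalance value is hypothetical, and the method's qualitative reasoning is reduced to a simple set intersection.

      # Knowledge base 1: detected imbalances per control volume.
      imbalances = {"cv_steam_gen": {"mass": -1, "energy": 0, "momentum": 0}}

      # Knowledge base 2: which balance a fault in each component
      # mainly disturbs (functional classification).
      component_class = {
          "feed_pump":    "momentum",
          "relief_valve": "mass",
          "heater_bank":  "energy",
      }

      # Knowledge base 3: system connectivity.
      connections = {"cv_steam_gen": ["feed_pump", "relief_valve", "heater_bank"]}

      def fault_candidates(cv):
          """Components connected to cv whose functional class matches a
          nonzero imbalance in that control volume."""
          disturbed = {q for q, v in imbalances[cv].items() if v != 0}
          return [c for c in connections[cv] if component_class[c] in disturbed]

      print(fault_candidates("cv_steam_gen"))   # -> ['relief_valve']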

  19. Multistage principal component analysis based method for abdominal ECG decomposition

    International Nuclear Information System (INIS)

    Petrolis, Robertas; Krisciukaitis, Algimantas; Gintautas, Vladas

    2015-01-01

    A reflection of fetal heart electrical activity is present in registered abdominal ECG signals. However, this signal component has noticeably less energy than concurrent signals, especially the maternal ECG. Therefore, the traditionally recommended independent component analysis fails to separate these two ECG signals. Multistage principal component analysis (PCA) is proposed for step-by-step extraction of abdominal ECG signal components. Truncated representation and subsequent subtraction of the cardio cycles of the maternal ECG are the first steps. The energy of the fetal ECG component then becomes comparable to, or even exceeds, the energy of other components in the remaining signal. Second-stage PCA concentrates the energy of the sought signal in one principal component, assuring its maximal amplitude regardless of the orientation of the fetus in multilead recordings. Third-stage PCA is performed on signal excerpts representing detected fetal heart beats, with the aim of producing a truncated representation that reconstructs their shape for further analysis. The algorithm was tested with PhysioNet Challenge 2013 signals and signals recorded in the Department of Obstetrics and Gynecology, Lithuanian University of Health Sciences. The results of our method on the PhysioNet Challenge 2013 open data set were an average score of 341.503 bpm² and 32.81 ms. (paper)
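
    A condensed sketch of the first two stages on synthetic data: a truncated PCA representation of the maternal cycles is subtracted, and PCA of the residual then emphasises the fetal contribution. The waveforms are crude stand-ins; the point is that fetal beats, not being phase-locked to the maternal cycles, survive the subtraction.

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      t = np.linspace(0, 1, 200)                     # one maternal cycle
      maternal = np.exp(-((t - 0.3) / 0.02) ** 2)    # dominant QRS stand-in
      X = np.empty((50, 200))                        # 50 aligned cycles
      for i in range(50):
          f_pos = rng.uniform(0.45, 0.85)            # fetal beat, not phase-locked
          fetal = 0.15 * np.exp(-((t - f_pos) / 0.01) ** 2)
          X[i] = maternal + fetal + 0.01 * rng.standard_normal(200)

      # Stage 1: truncated representation and subtraction of the
      # maternal cycles; fetal energy dominates the residual.
      pca1 = PCA(n_components=3).fit(X)
      residual = X - pca1.inverse_transform(pca1.transform(X))

      # Stage 2: PCA of the residual captures the strongest remaining
      # (fetal) variation in the leading component.
      fetal_pc = PCA(n_components=1).fit(residual).components_[0]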

  20. Functional Principal Components Analysis of Shanghai Stock Exchange 50 Index

    Directory of Open Access Journals (Sweden)

    Zhiliang Wang

    2014-01-01

    Full Text Available The main purpose of this paper is to explore the principal components of the Shanghai Stock Exchange 50 Index by means of functional principal component analysis (FPCA). Functional data analysis (FDA) deals with random variables (or processes) with realizations in a smooth functional space. One of the most popular FDA techniques is functional principal component analysis, which was introduced for the statistical analysis of a set of financial time series from an explorative point of view. FPCA is the functional analogue of the well-known dimension-reduction technique in multivariate statistical analysis, searching for linear transformations of the random vector with maximal variance. In this paper, we studied the monthly return volatility of the Shanghai Stock Exchange 50 Index (SSE50). Using FPCA to reduce the dimension to a finite level, we extracted the most significant components of the data and some relevant statistical features of the related datasets. The calculated results show that it is reasonable to regard the samples as random functions. Compared with ordinary principal component analysis, FPCA can solve the problem of different dimensions in the samples, and it is a convenient approach for extracting the main variance factors.
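
    On a common uniform grid, FPCA reduces to PCA of the discretized curves (the functional inner product becomes a scaled dot product), which the sketch below exploits. The volatility curves are synthetic stand-ins for the SSE50 data.

      import numpy as np

      rng = np.random.default_rng(0)
      grid = np.linspace(0, 1, 100)
      n_months = 60

      # Synthetic monthly volatility curves: a mean curve plus two
      # smooth modes of variation and a little noise.
      mean_curve = 0.2 + 0.1 * np.cos(2 * np.pi * grid)
      phi = np.vstack([np.sin(np.pi * grid), np.sin(2 * np.pi * grid)])
      scores = rng.standard_normal((n_months, 2))
      X = mean_curve + 0.05 * scores @ phi + 0.005 * rng.standard_normal((n_months, 100))

      Xc = X - X.mean(axis=0)
      U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
      explained = s**2 / np.sum(s**2)              # eigenfunctions: rows of Vt
      print("variance explained by first two functional PCs:",
            explained[:2].round(3))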

  1. Efficacy of the Principal Components Analysis Techniques Using ...

    African Journals Online (AJOL)

    Second, the paper reports results of principal components analysis after the artificial data were submitted to three commonly used procedures: scree plot, Kaiser rule, and modified Horn's parallel analysis, and demonstrates the pedagogical utility of using artificial data in teaching advanced quantitative concepts. The results ...

  2. Multivariate analysis of remote LIBS spectra using partial least squares, principal component analysis, and related techniques

    Energy Technology Data Exchange (ETDEWEB)

    Clegg, Samuel M [Los Alamos National Laboratory; Barefield, James E [Los Alamos National Laboratory; Wiens, Roger C [Los Alamos National Laboratory; Sklute, Elizabeth [MT HOLYOKE COLLEGE; Dyare, Melinda D [MT HOLYOKE COLLEGE

    2008-01-01

    Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by the chemical matrix effects. These chemical matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, in order for a series of calibration standards similar to the unknown to be employed. In this paper, three new Multivariate Analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly-metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.
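
    A hedged scikit-learn sketch of the two roles the paper assigns to MVA: PLS for calibration and PCA scores for classification. The spectra, concentrations and labels are random placeholders, and SIMCA, which has no stock scikit-learn implementation, is approximated here by nearest class centroids in score space.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.decomposition import PCA

      # Hypothetical stand-ins: rows of `spectra` are LIBS spectra of
      # the 18 rocks; `conc` holds reference compositions.
      rng = np.random.default_rng(0)
      spectra = rng.random((18, 2048))
      conc = rng.random((18, 3))                 # e.g. SiO2, FeO, CaO wt%
      rock_type = np.repeat([0, 1, 2], 6)

      # PLS calibration model: predicts composition of an unknown spectrum.
      pls = PLSRegression(n_components=5).fit(spectra, conc)
      pred_composition = pls.predict(spectra[:1])

      # PCA scores; the nearest class centroid stands in for a
      # SIMCA-style class decision in this sketch.
      scores = PCA(n_components=5).fit_transform(spectra)
      centroids = np.array([scores[rock_type == c].mean(axis=0) for c in range(3)])
      print("predicted class:",
            np.argmin(np.linalg.norm(centroids - scores[0], axis=1)))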

  3. Modelling and development of an analysis and decision component for a user dependent impact of efficiency; Modellierung und Entwicklung einer Analyse- und Entscheidungskomponente fuer nutzerabhaengige Effizienzbeeinflussung

    Energy Technology Data Exchange (ETDEWEB)

    Jahn, Franziska [Westsaechsische Hochschule Zwickau (Germany)

    2010-07-01

    The contribution under consideration reports on a software-based extension of an existing smart home system (Gira FacilityServer) for user-dependent influence on efficiency. The first section describes the motivation for such a system and explains where the Gira FacilityServer is to be extended. The necessary individual operations and applied methods, which are prerequisites for modelling and developing such a system, are then illustrated. The third section presents the current results, from the concept to the software implementation, with regard to the project progress. Finally, the specific project results are summarized and their sustainability is considered.

  4. Importance Analysis of In-Service Testing Components for Ulchin Unit 3

    International Nuclear Information System (INIS)

    Dae-Il Kan; Kil-Yoo Kim; Jae-Joo Ha

    2002-01-01

    We performed an importance analysis of In-Service Testing (IST) components for Ulchin Unit 3 using the integrated evaluation method for categorizing component safety significance developed in this study. The importance analysis using the developed method begins by ranking the component importance using quantitative PSA information. The importance analysis of the IST components not modeled in the PSA is performed through engineering judgment, based on PSA expertise and on the quantitative and qualitative information available for the IST components. The PSA scope for the importance analysis includes not only the Level 1 and 2 internal PSA but also the Level 1 external and shutdown/low-power-operation PSA. The importance analysis results for valves show that 167 (26.55%) of the 629 IST valves are HSSCs and 462 (73.45%) are LSSCs. Those for pumps show that 28 (70%) of the 40 IST pumps are HSSCs and 12 (30%) are LSSCs. (authors)

  5. Fault Localization for Synchrophasor Data using Kernel Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    CHEN, R.

    2017-11-01

    Full Text Available In this paper, a nonlinear method based on Kernel Principal Component Analysis (KPCA) of Phasor Measurement Unit (PMU) data is proposed for fault location in complex power systems. Using the scaling factor, the derivative for a polynomial kernel is obtained. Then, the contribution of each variable to the T2 statistic is derived to determine whether a bus is the faulty component. Compared to previous Principal Component Analysis (PCA) based methods, the novel version can cope with strong nonlinearity and provide precise identification of the fault location. Computer simulations are conducted to demonstrate the improved performance of the proposed method in recognizing the faulty component and evaluating its propagation across the system.
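
    A minimal sketch of fault detection with a polynomial-kernel KPCA and a T2 statistic in score space, on hypothetical PMU data. The paper's per-bus contribution analysis requires the kernel derivative and is not reproduced here.

      import numpy as np
      from sklearn.decomposition import KernelPCA

      rng = np.random.default_rng(0)
      X_train = rng.standard_normal((500, 10))          # normal operation
      X_test = X_train[:50] + np.r_[np.zeros(9), 3.0]   # bias on bus 10

      kpca = KernelPCA(n_components=5, kernel="poly", degree=2)
      var = kpca.fit_transform(X_train).var(axis=0)

      def t2(X):
          """T2 statistic: variance-normalised squared score distance."""
          T = kpca.transform(X)
          return np.sum(T**2 / var, axis=1)

      limit = np.percentile(t2(X_train), 99)            # empirical control limit
      print("flagged test samples:", int(np.sum(t2(X_test) > limit)))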

  6. Clinical usefulness of physiological components obtained by factor analysis

    International Nuclear Information System (INIS)

    Ohtake, Eiji; Murata, Hajime; Matsuda, Hirofumi; Yokoyama, Masao; Toyama, Hinako; Satoh, Tomohiko.

    1989-01-01

    The clinical usefulness of physiological components obtained by factor analysis was assessed in 99mTc-DTPA renography. Using definite physiological components, other dynamic data could be analyzed. In this paper, the dynamic renal function after ESWL (extracorporeal shock wave lithotripsy) treatment was examined using physiological components from the kidney before ESWL and/or a normal kidney. We could easily evaluate the change in renal function with this method. The usefulness of the new analysis using physiological components is summarized as follows: 1) The change in a dynamic function could be assessed quantitatively as the change in the contribution ratio. 2) The change in a pathological condition could be evaluated morphologically as the change in the functional image. (author)

  7. Two-component network model in voice identification technologies

    Directory of Open Access Journals (Sweden)

    Edita K. Kuular

    2018-03-01

    Full Text Available Among the most important parameters of biometric systems with voice modalities that determine their effectiveness, along with reliability and noise immunity, is the speed of identification and verification of a person. This parameter is especially critical when processing large-scale voice databases in real time. Many research studies in this area are aimed at developing new, and improving existing, algorithms for representing and processing voice records to ensure the high performance of voice biometric systems. Here, a promising modern approach is the complex-network platform for solving massive problems with a large number of elements while taking into account their interrelationships. For example, some works on the analysis and recognition of faces from photographs transform the images into complex networks for subsequent processing by standard techniques. One of the first applications of complex networks to sound series (musical and speech analysis) is the description of frequency characteristics by constructing network models, i.e. converting the series into networks. On the network ontology platform, a previously proposed technique for representing audio information, aimed at its automatic analysis and speaker recognition, has been developed. This involves converting the information into an associative semantic (cognitive) network structure with both amplitude and frequency components. Two speaker exemplars were recorded and transformed into the corresponding networks, whose topological metrics were then compared. The set of topological metrics for each network model (the amplitude one and the frequency one) is a vector, and together they combine into a matrix, a digital "network" voiceprint. The proposed network approach, with its sensitivity to personal conditions (physiological, psychological, emotional), might be useful not only for person identification

  8. Independent component analysis based filtering for penumbral imaging

    International Nuclear Information System (INIS)

    Chen Yenwei; Han Xianhua; Nozaki, Shinya

    2004-01-01

    We propose a filtering method based on independent component analysis (ICA) for Poisson noise reduction. In the proposed filtering, the image is first transformed to the ICA domain and then the noise components are removed by soft thresholding (shrinkage). The proposed filter, used as a preprocessing step for the reconstruction, has been successfully applied to penumbral imaging. Both simulation and experimental results show that the reconstructed image is dramatically improved in comparison to that obtained without the noise-removing filter.
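
    A compact sketch of the described pipeline: transform to the ICA domain, soft-threshold the coefficients, transform back. FastICA from scikit-learn stands in for whatever ICA implementation the authors used; the patch data and threshold value are hypothetical.

      import numpy as np
      from sklearn.decomposition import FastICA

      def ica_shrinkage_filter(patches, n_components=16, thresh=0.1):
          """Transform image patches to the ICA domain, shrink small
          coefficients (assumed to be noise) and transform back.
          `patches` has shape (n_patches, n_pixels)."""
          ica = FastICA(n_components=n_components, random_state=0)
          S = ica.fit_transform(patches)                # ICA-domain coefficients
          S = np.sign(S) * np.maximum(np.abs(S) - thresh, 0.0)  # soft threshold
          return ica.inverse_transform(S)

      # Usage on hypothetical Poisson-noisy 8x8 patches:
      noisy = np.random.poisson(5.0, size=(1000, 64)).astype(float)
      denoised = ica_shrinkage_filter(noisy)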

  9. Numerical analysis of magnetoelastic coupled buckling of fusion reactor components

    International Nuclear Information System (INIS)

    Demachi, K.; Yoshida, Y.; Miya, K.

    1994-01-01

    For a tokamak fusion reactor, establishing a structural design in which the components can withstand the strong magnetic forces induced by a plasma disruption is one of the most important subjects. A number of magnetostructural analyses of fusion reactor components have been performed recently. However, in these studies the structural behavior was calculated based on small-deformation theory, where nonlinearity is neglected, even though some kinds of structures are known to readily exhibit geometric nonlinearity. In this paper, the deflection and the magnetoelastic buckling load of fusion reactor components during plasma disruption were calculated.

  10. Computer compensation for NMR quantitative analysis of trace components

    International Nuclear Information System (INIS)

    Nakayama, T.; Fujiwara, Y.

    1981-01-01

    A computer program has been written that determines trace components and separates overlapping components in multicomponent NMR spectra. The program uses the Lorentzian curve as the theoretical line shape of NMR spectra. The coefficients of the Lorentzian are determined by the method of least squares. Systematic errors such as baseline/phase distortion are compensated, and random errors are smoothed by taking moving averages; these processes contribute substantially to decreasing the accumulation time of spectral data. The accuracy of quantitative analysis of trace components has been improved by two significant figures. The program was applied to determining the abundance of 13C and the degree of saponification of PVA.
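
    The least-squares Lorentzian fit at the core of the method is straightforward to reproduce; below is a sketch with scipy.optimize.curve_fit on a synthetic two-line spectrum. The constant baseline term is a crude stand-in for the paper's baseline/phase compensation.

      import numpy as np
      from scipy.optimize import curve_fit

      def lorentzian(x, a, x0, w):
          """Lorentzian line: amplitude a, center x0, half-width w."""
          return a * w**2 / ((x - x0) ** 2 + w**2)

      def two_lines(x, a1, x1, w1, a2, x2, w2, c):
          return lorentzian(x, a1, x1, w1) + lorentzian(x, a2, x2, w2) + c

      # Synthetic spectrum: a strong line overlapping a weak trace line.
      x = np.linspace(-5, 5, 500)
      y = (two_lines(x, 1.0, -0.3, 0.2, 0.02, 0.8, 0.15, 0.0)
           + 0.002 * np.random.default_rng(0).standard_normal(x.size))

      p0 = [1, -0.3, 0.2, 0.01, 0.8, 0.1, 0]      # rough initial guesses
      popt, _ = curve_fit(two_lines, x, y, p0=p0)
      print("recovered trace-line amplitude:", popt[3])   # close to 0.02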

  11. Option valuation with the simplified component GARCH model

    DEFF Research Database (Denmark)

    Dziubinski, Matt P.

    We introduce the Simplified Component GARCH (SC-GARCH) option pricing model, show and discuss sufficient conditions for non-negativity of the conditional variance, apply it to low-frequency and high-frequency financial data, and consider the option valuation, comparing the model performance...

  12. Integrating environmental component models. Development of a software framework

    NARCIS (Netherlands)

    Schmitz, O.

    2014-01-01

    Integrated models consist of interacting component models that represent various natural and social systems. They are important tools to improve our understanding of environmental systems, to evaluate cause–effect relationships of human–natural interactions, and to forecast the behaviour of

  13. Multi-component separation and analysis of bat echolocation calls.

    Science.gov (United States)

    DiCecco, John; Gaudette, Jason E; Simmons, James A

    2013-01-01

    The vast majority of animal vocalizations contain multiple frequency modulated (FM) components with varying amounts of non-linear modulation and harmonic instability. This is especially true of biosonar sounds where precise time-frequency templates are essential for neural information processing of echoes. Understanding the dynamic waveform design by bats and other echolocating animals may help to improve the efficacy of man-made sonar through biomimetic design. Bats are known to adapt their call structure based on the echolocation task, proximity to nearby objects, and density of acoustic clutter. To interpret the significance of these changes, a method was developed for component separation and analysis of biosonar waveforms. Techniques for imaging in the time-frequency plane are typically limited due to the uncertainty principle and interference cross terms. This problem is addressed by extending the use of the fractional Fourier transform to isolate each non-linear component for separate analysis. Once separated, empirical mode decomposition can be used to further examine each component. The Hilbert transform may then successfully extract detailed time-frequency information from each isolated component. This multi-component analysis method is applied to the sonar signals of four species of bats recorded in-flight by radiotelemetry along with a comparison of other common time-frequency representations.
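
    The final step, extracting detailed time-frequency information from one isolated component via the Hilbert transform, can be sketched directly with scipy. The chirp below is a synthetic stand-in for a separated biosonar harmonic; the fractional-Fourier separation stage is not reproduced.

      import numpy as np
      from scipy.signal import hilbert

      fs = 250_000                                  # 250 kHz sampling
      t = np.arange(0, 0.003, 1 / fs)               # 3 ms call
      f_true = 80_000 / (1 + 300 * t)               # downward FM sweep
      x = np.cos(2 * np.pi * np.cumsum(f_true) / fs)

      # Instantaneous frequency from the analytic signal's phase.
      analytic = hilbert(x)
      f_inst = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
      print("start/end frequency (Hz):", round(f_inst[5]), round(f_inst[-5]))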

  14. Seismic assessment and performance of nonstructural components affected by structural modeling

    Energy Technology Data Exchange (ETDEWEB)

    Hur, Jieun; Althoff, Eric; Sezen, Halil; Denning, Richard; Aldemir, Tunc [Ohio State University, Columbus (United States)

    2017-03-15

    Seismic probabilistic risk assessment (SPRA) requires a large number of simulations to evaluate the seismic vulnerability of structural and nonstructural components in nuclear power plants. The effect of structural modeling and analysis assumptions on dynamic analysis of 3D and simplified 2D stick models of auxiliary buildings and the attached nonstructural components is investigated. Dynamic characteristics and seismic performance of building models are also evaluated, as well as the computational accuracy of the models. The presented results provide a better understanding of the dynamic behavior and seismic performance of auxiliary buildings. The results also help to quantify the impact of uncertainties associated with modeling and analysis of simplified numerical models of structural and nonstructural components subjected to seismic shaking on the predicted seismic failure probabilities of these systems.

  15. Independent component analysis for automatic note extraction from musical trills

    Science.gov (United States)

    Brown, Judith C.; Smaragdis, Paris

    2004-05-01

    The method of principal component analysis, which is based on second-order statistics (or linear independence), has long been used for redundancy reduction of audio data. The more recent technique of independent component analysis, enforcing much stricter statistical criteria based on higher-order statistical independence, is introduced and shown to be far superior in separating independent musical sources. This theory has been applied to piano trills and a database of trill rates was assembled from experiments with a computer-driven piano, recordings of a professional pianist, and commercially available compact disks. The method of independent component analysis has thus been shown to be an outstanding, effective means of automatically extracting interesting musical information from a sea of redundant data.

  16. Signal-dependent independent component analysis by tunable mother wavelets

    International Nuclear Information System (INIS)

    Seo, Kyung Ho

    2006-02-01

    The objective of this study is to improve standard independent component analysis when applied to real-world signals. Independent component analysis starts from the assumption that signals from different physical sources are statistically independent. But real-world signals such as EEG, ECG, MEG, and fMRI signals are not perfectly statistically independent. By definition, standard independent component analysis algorithms are not able to estimate statistically dependent sources, that is, when the assumption of independence does not hold. Therefore, some preprocessing stage is needed before independent component analysis. This paper starts from the simple intuition that source signals wavelet-transformed with a 'well-tuned' mother wavelet will be simplified sufficiently that the source separation will show better results. The tuning process between the source signal and the tunable mother wavelet was executed by the correlation coefficient method. The gamma component of a raw EEG signal was set as the target signal, and the wavelet transform was executed with the tuned mother wavelet and with standard mother wavelets. Simulation results obtained with these wavelets are presented.
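
    A rough sketch of the correlation-coefficient tuning step using PyWavelets: slide each candidate mother wavelet's waveform along the target component and keep the best-correlating one. The candidate list, the synthetic 'gamma' target and the normalisation are all illustrative choices.

      import numpy as np
      import pywt

      rng = np.random.default_rng(0)
      fs = 250
      t = np.arange(0, 2, 1 / fs)
      # Synthetic stand-in for the gamma-band target component.
      target = np.sin(2 * np.pi * 40 * t) * np.exp(-((t - 1.0) / 0.2) ** 2)

      def best_correlation(sig, name):
          """Max |cross-correlation| between sig and the mother-wavelet
          waveform, crudely normalised."""
          _, psi, _ = pywt.Wavelet(name).wavefun(level=4)
          psi = (psi - psi.mean()) / (psi.std() * np.sqrt(psi.size))
          c = np.correlate(sig - sig.mean(), psi, mode="valid")
          return np.max(np.abs(c)) / (sig.std() * np.sqrt(psi.size))

      for name in ["db4", "sym8", "coif3"]:          # candidate mother wavelets
          print(name, round(best_correlation(target, name), 4))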

  17. Analysis of Moisture Content in Beetroot using Fourier Transform Infrared Spectroscopy and by Principal Component Analysis.

    Science.gov (United States)

    Nesakumar, Noel; Baskar, Chanthini; Kesavan, Srinivasan; Rayappan, John Bosco Balaguru; Alwarappan, Subbiah

    2018-05-22

    The moisture content of beetroot varies during long-term cold storage. In this work, we propose a strategy to identify the moisture content and age of beetroot using principal component analysis coupled with Fourier transform infrared (FTIR) spectroscopy. Frequent FTIR measurements were recorded directly from the beetroot sample surface over a period of 34 days to analyse its moisture content, employing attenuated total reflectance in the spectral ranges of 2614-4000 and 1465-1853 cm⁻¹ with a spectral resolution of 8 cm⁻¹. In order to estimate the transmittance peak height (Tp) and the area under the transmittance curve [Formula: see text] over the spectral ranges of 2614-4000 and 1465-1853 cm⁻¹, a Gaussian curve-fitting algorithm was applied to the FTIR data. Principal component and nonlinear regression analyses were utilized for the FTIR data analysis. Score plots over the ranges of 2614-4000 and 1465-1853 cm⁻¹ allowed beetroot quality discrimination. Beetroot quality predictive models were developed employing a biphasic dose-response function. Validation experiments confirmed that the accuracy of the beetroot quality predictive model reached 97.5%. This research work proves that FTIR spectroscopy, in combination with principal component analysis and beetroot quality predictive models, could serve as an effective tool for discriminating moisture content in fresh, half-spoiled and completely spoiled beetroot samples and for providing status alerts.

  18. Component and system simulation models for High Flux Isotope Reactor

    International Nuclear Information System (INIS)

    Sozer, A.

    1989-08-01

    Component models for the High Flux Isotope Reactor (HFIR) have been developed. The models are HFIR core, heat exchangers, pressurizer pumps, circulation pumps, letdown valves, primary head tank, generic transport delay (pipes), system pressure, loop pressure-flow balance, and decay heat. The models were written in FORTRAN and can be run on different computers, including IBM PCs, as they do not use any specific simulation languages such as ACSL or CSMP. 14 refs., 13 figs

  19. Automatic ECG analysis using principal component analysis and wavelet transformation

    OpenAIRE

    Khawaja, Antoun

    2007-01-01

    The main objective of this book is to analyse and detect small changes in ECG waves and complexes that indicate cardiac diseases and disorders. Detecting predisposition to Torsade de Pointes (TdP) by analysing the beat-to-beat variability in T-wave morphology is the core of this work. The second main topic is detecting small changes in the QRS complex and predicting future QRS complexes of patients. The last main topic is clustering similar ECG components into different groups.

  20. Comparison of common components analysis with principal components analysis and independent components analysis: Application to SPME-GC-MS volatolomic signatures.

    Science.gov (United States)

    Bouhlel, Jihéne; Jouan-Rimbaud Bouveresse, Delphine; Abouelkaram, Said; Baéza, Elisabeth; Jondreville, Catherine; Travel, Angélique; Ratel, Jérémy; Engel, Erwan; Rutledge, Douglas N

    2018-02-01

    The aim of this work is to compare a novel exploratory chemometrics method, Common Components Analysis (CCA), with Principal Components Analysis (PCA) and Independent Components Analysis (ICA). CCA consists in adapting the multi-block statistical method known as Common Components and Specific Weights Analysis (CCSWA or ComDim) by applying it to a single data matrix, with one variable per block. As an application, the three methods were applied to SPME-GC-MS volatolomic signatures of livers in an attempt to reveal volatile organic compound (VOC) markers of chicken exposure to different types of micropollutants. An application of CCA to the initial SPME-GC-MS data revealed a drift in the sample Scores along CC2, as a function of injection order, probably resulting from time-related evolution in the instrument. This drift was eliminated by orthogonalization of the data set with respect to CC2, and the resulting data were used as the orthogonalized data input into each of the three methods. Since the first step in CCA is to norm-scale all the variables, preliminary data scaling has no effect on the results, so that CCA was applied only to the orthogonalized SPME-GC-MS data, while PCA and ICA were applied to the "orthogonalized", "orthogonalized and Pareto-scaled", and "orthogonalized and autoscaled" data. The comparison showed that PCA results were highly dependent on the scaling of variables, contrary to ICA, where the data scaling did not have a strong influence. Nevertheless, for both PCA and ICA the clearest separations of exposed groups were obtained after autoscaling of variables. The main part of this work was to compare the CCA results using the orthogonalized data with those obtained with PCA and ICA applied to orthogonalized and autoscaled variables. The clearest separations of exposed chicken groups were obtained by CCA. CCA Loadings also clearly identified the variables contributing most to the Common Components giving separations. The PCA Loadings did not
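
    The scaling sensitivity that separates PCA from CCA and ICA in this comparison is easy to demonstrate; the sketch below contrasts autoscaling with Pareto scaling before PCA on a hypothetical skewed peak table. CCA itself (single-block ComDim) has no stock implementation here and is not reproduced.

      import numpy as np
      from sklearn.decomposition import PCA

      def autoscale(X):
          return (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

      def pareto_scale(X):
          return (X - X.mean(axis=0)) / np.sqrt(X.std(axis=0, ddof=1))

      # Hypothetical GC-MS peak table: a few high-abundance VOCs would
      # dominate an unscaled PCA.
      X = np.random.default_rng(0).lognormal(sigma=2.0, size=(40, 120))

      for name, Z in [("autoscaled", autoscale(X)), ("Pareto", pareto_scale(X))]:
          evr = PCA(n_components=2).fit(Z).explained_variance_ratio_
          print(name, "PC1+PC2 variance:", round(float(evr.sum()), 3))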

  1. A probabilistic model for component-based shape synthesis

    KAUST Repository

    Kalogerakis, Evangelos

    2012-07-01

    We present an approach to synthesizing shapes from complex domains, by identifying new plausible combinations of components from existing shapes. Our primary contribution is a new generative model of component-based shape structure. The model represents probabilistic relationships between properties of shape components, and relates them to learned underlying causes of structural variability within the domain. These causes are treated as latent variables, leading to a compact representation that can be effectively learned without supervision from a set of compatibly segmented shapes. We evaluate the model on a number of shape datasets with complex structural variability and demonstrate its application to amplification of shape databases and to interactive shape synthesis.

  2. Fatigue Reliability Analysis of Wind Turbine Cast Components

    DEFF Research Database (Denmark)

    Rafsanjani, Hesam Mirzaei; Sørensen, John Dalsgaard; Fæster, Søren

    2017-01-01

    The fatigue life of wind turbine cast components, such as the main shaft in a drivetrain, is generally determined by defects from the casting process. These defects may reduce the fatigue life and they are generally distributed randomly in components. The foundries, cutting facilities and test facilities can affect the verification of properties by testing. Hence, it is important to have a tool to identify which foundry, cutting and/or test facility produces components which, based on the relevant uncertainties, have the largest expected fatigue life or, alternatively, have the largest reliability (...) and to quantify the relevant uncertainties using available fatigue tests. Illustrative results are presented as obtained by statistical analysis of a large set of fatigue data for casted test components typically used for wind turbines. Furthermore, the SN curves (fatigue life curves based on applied stress ...

  3. Towards a Component Based Model for Database Systems

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2004-02-01

    Full Text Available Due to their effectiveness in the design and development of software applications and due to their recognized advantages in terms of reusability, Component-Based Software Engineering (CBSE concepts have been arousing a great deal of interest in recent years. This paper presents and extends a component-based approach to object-oriented database systems (OODB introduced by us in [1] and [2]. Components are proposed as a new abstraction level for database system, logical partitions of the schema. In this context, the scope is introduced as an escalated property for transactions. Components are studied from the integrity, consistency, and concurrency control perspective. The main benefits of our proposed component model for OODB are the reusability of the database design, including the access statistics required for a proper query optimization, and a smooth information exchange. The integration of crosscutting concerns into the component database model using aspect-oriented techniques is also discussed. One of the main goals is to define a method for the assessment of component composition capabilities. These capabilities are restricted by the component’s interface and measured in terms of adaptability, degree of compose-ability and acceptability level. The above-mentioned metrics are extended from database components to generic software components. This paper extends and consolidates into one common view the ideas previously presented by us in [1, 2, 3].[1] Octavian Paul Rotaru, Marian Dobre, Component Aspects in Object Oriented Databases, Proceedings of the International Conference on Software Engineering Research and Practice (SERP’04, Volume II, ISBN 1-932415-29-7, pages 719-725, Las Vegas, NV, USA, June 2004.[2] Octavian Paul Rotaru, Marian Dobre, Mircea Petrescu, Integrity and Consistency Aspects in Component-Oriented Databases, Proceedings of the International Symposium on Innovation in Information and Communication Technology (ISIICT

  4. Analysis methods for structure reliability of piping components

    International Nuclear Information System (INIS)

    Schimpfke, T.; Grebner, H.; Sievers, J.

    2004-01-01

    In the frame of the German reactor safety research program of the Federal Ministry of Economics and Labour (BMWA), GRS has started to develop an analysis code named PROST (PRObabilistic STructure analysis) for estimating the leak and break probabilities of piping systems in nuclear power plants. The long-term objective of this development is to provide failure probabilities of passive components for the probabilistic safety analysis of nuclear power plants. Up to now, the code can be used for analyzing fatigue problems. The paper outlines the main capabilities and theoretical background of the present PROST development and presents some of the results of a benchmark analysis in the frame of the European project NURBIM (Nuclear Risk Based Inspection Methodologies for Passive Components). (orig.)

  5. The analysis of multivariate group differences using common principal components

    NARCIS (Netherlands)

    Bechger, T.M.; Blanca, M.J.; Maris, G.

    2014-01-01

    Although it is simple to determine whether multivariate group differences are statistically significant or not, such differences are often difficult to interpret. This article is about common principal components analysis as a tool for the exploratory investigation of multivariate group differences

  6. Principal Component Analysis: Most Favourite Tool in Chemometrics

    Indian Academy of Sciences (India)

    Abstract. Principal component analysis (PCA) is the most commonly used chemometric technique. It is an unsupervised pattern-recognition technique. PCA has found applications in chemistry, biology, medicine and economics. The present work attempts to understand how PCA works and how we can interpret its results.
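
    A minimal sketch of this workflow in Python on a synthetic data matrix (autoscaling and the number of components are illustrative choices):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        # Synthetic "spectra": 50 samples x 10 correlated variables
        latent = rng.normal(size=(50, 2))
        X = latent @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(50, 10))

        X_std = StandardScaler().fit_transform(X)   # autoscaling, common in chemometrics
        pca = PCA(n_components=3).fit(X_std)

        print("Explained variance ratio:", pca.explained_variance_ratio_)
        print("PC1 loadings:", pca.components_[0])  # variable contributions to PC1
        scores = pca.transform(X_std)               # sample coordinates in PC space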

  7. Scalable Robust Principal Component Analysis Using Grassmann Averages

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi

    2016-01-01

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortu...

  8. Principal component analysis of image gradient orientations for face recognition

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    We introduce the notion of Principal Component Analysis (PCA) of image gradient orientations. As image data is typically noisy, but noise is substantially different from Gaussian, traditional PCA of pixel intensities very often fails to estimate reliably the low-dimensional subspace of a given data
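
    The abstract does not spell out the construction, but one common way to realize PCA on gradient orientations (assumed here; not necessarily the authors' exact formulation) is to embed each orientation angle on the unit circle before applying standard PCA:

        import numpy as np
        from sklearn.decomposition import PCA

        def orientation_features(images):
            """Map each image to [cos(phi), sin(phi)] of its gradient orientations."""
            feats = []
            for img in images:
                gy, gx = np.gradient(img.astype(float))
                phi = np.arctan2(gy, gx).ravel()
                feats.append(np.concatenate([np.cos(phi), np.sin(phi)]))
            return np.asarray(feats)

        rng = np.random.default_rng(1)
        images = rng.random((20, 32, 32))    # random stand-ins for face images
        Z = orientation_features(images)
        pca = PCA(n_components=5).fit(Z)     # PCA on the orientation embedding
        print(pca.explained_variance_ratio_)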

  9. Adaptive tools in virtual environments: Independent component analysis for multimedia

    DEFF Research Database (Denmark)

    Kolenda, Thomas

    2002-01-01

    The thesis investigates the role of independent component analysis in the setting of virtual environments, with the purpose of finding properties that reflect human context. A general framework for performing unsupervised classification with ICA is presented in extension to the latent semantic in...... were compared to investigate computational differences and separation results. The ICA properties were finally implemented in a chat room analysis tool and briefly investigated for visualization of search engines results....
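
    The core separation step underlying such an ICA framework can be sketched with scikit-learn's FastICA on synthetic mixtures (illustrative only; the thesis applies ICA to multimedia features rather than toy waveforms):

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        t = np.linspace(0, 8, 2000)

        # Two independent sources and an unknown mixing matrix
        s1 = np.sign(np.sin(3 * t))     # square wave
        s2 = np.sin(5 * t)              # sinusoid
        S = np.c_[s1, s2]
        A = rng.normal(size=(2, 2))
        X = S @ A.T                     # observed mixtures

        ica = FastICA(n_components=2, random_state=0)
        S_est = ica.fit_transform(X)    # recovered sources (up to order and scale)
        print("Estimated mixing matrix:\n", ica.mixing_)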

  10. Research on criticality analysis method of CNC machine tools components under fault rate correlation

    Science.gov (United States)

    Gui-xiang, Shen; Xian-zhuo, Zhao; Zhang, Ying-zhi; Chen-yu, Han

    2018-02-01

    In order to determine the key components of CNC machine tools under fault rate correlation, a system component criticality analysis method is proposed. Based on fault mechanism analysis, the component fault relations are determined, and an adjacency matrix is introduced to describe them. Then, the fault structure relations are organized hierarchically using the interpretive structure model (ISM). Assuming that fault propagation obeys a Markov process, the fault association matrix is described and transformed, and the PageRank algorithm is used to determine the relative influence values; combining these with the time-dependent component fault rates yields a comprehensive fault rate. Based on fault mode frequency and fault influence, the criticality of the components under fault rate correlation is determined, and the key components are identified to provide a sound basis for formulating reliability assurance measures. Finally, taking machining centers as an example, the effectiveness of the method is verified.
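
    As an illustration of the influence-ranking step, a minimal power-iteration PageRank in Python over a hypothetical fault-propagation adjacency matrix (matrix and damping factor are invented for illustration):

        import numpy as np

        def pagerank(adj, damping=0.85, tol=1e-10, max_iter=200):
            """Power iteration on the column-stochastic transition matrix of adj."""
            n = adj.shape[0]
            out = adj.sum(axis=1, keepdims=True)
            out[out == 0] = 1.0              # guard against dangling nodes
            P = (adj / out).T                # column-stochastic transition matrix
            r = np.full(n, 1.0 / n)
            for _ in range(max_iter):
                r_new = damping * P @ r + (1 - damping) / n
                if np.abs(r_new - r).sum() < tol:
                    break
                r = r_new
            return r

        # Hypothetical fault-propagation relations among 4 subsystems
        adj = np.array([[0, 1, 1, 0],
                        [0, 0, 1, 0],
                        [1, 0, 0, 1],
                        [0, 0, 1, 0]], dtype=float)
        print(pagerank(adj))   # relative influence value of each component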

  11. A meta-analysis of executive components of working memory.

    Science.gov (United States)

    Nee, Derek Evan; Brown, Joshua W; Askren, Mary K; Berman, Marc G; Demiralp, Emre; Krawitz, Adam; Jonides, John

    2013-02-01

    Working memory (WM) enables the online maintenance and manipulation of information and is central to intelligent cognitive functioning. Much research has investigated executive processes of WM in order to understand the operations that make WM "work." However, there is yet little consensus regarding how executive processes of WM are organized. Here, we used quantitative meta-analysis to summarize data from 36 experiments that examined executive processes of WM. Experiments were categorized into 4 component functions central to WM: protecting WM from external distraction (distractor resistance), preventing irrelevant memories from intruding into WM (intrusion resistance), shifting attention within WM (shifting), and updating the contents of WM (updating). Data were also sorted by content (verbal, spatial, object). Meta-analytic results suggested that rather than dissociating into distinct functions, 2 separate frontal regions were recruited across diverse executive demands. One region was located dorsally in the caudal superior frontal sulcus and was especially sensitive to spatial content. The other was located laterally in the midlateral prefrontal cortex and showed sensitivity to nonspatial content. We propose that dorsal-"where"/ventral-"what" frameworks that have been applied to WM maintenance also apply to executive processes of WM. Hence, WM can largely be simplified to a dual selection model.

  12. Principal Component Analysis of Process Datasets with Missing Values

    Directory of Open Access Journals (Sweden)

    Kristen A. Severson

    2017-07-01

    Full Text Available Datasets with missing values arising from causes such as sensor failure, inconsistent sampling rates, and merging data from different systems are common in the process industry. Methods for handling missing data typically operate during data pre-processing, but missing data can also be handled during model building. This article considers missing data within the context of principal component analysis (PCA), which is a method originally developed for complete data that has widespread industrial application in multivariate statistical process control. Due to the prevalence of missing data and the success of PCA for handling complete data, several PCA algorithms that can act on incomplete data have been proposed. Here, algorithms for applying PCA to datasets with missing values are reviewed. A case study is presented to demonstrate the performance of the algorithms, and suggestions are made with respect to choosing which algorithm is most appropriate for particular settings. An alternating algorithm based on the singular value decomposition achieved the best results in the majority of test cases involving process datasets.
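
    In the spirit of the alternating SVD approach mentioned above (a generic sketch, not the authors' exact implementation), one can alternate between a low-rank reconstruction and refilling the missing cells:

        import numpy as np

        def svd_impute_pca(X, n_components=2, n_iter=50):
            """Alternate a low-rank SVD reconstruction with refilling missing cells."""
            mask = np.isnan(X)
            X_filled = np.where(mask, np.nanmean(X, axis=0), X)  # start: column means
            for _ in range(n_iter):
                U, s, Vt = np.linalg.svd(X_filled, full_matrices=False)
                X_hat = U[:, :n_components] * s[:n_components] @ Vt[:n_components]
                X_filled[mask] = X_hat[mask]                     # update missing cells only
            return X_filled, Vt[:n_components]

        rng = np.random.default_rng(0)
        X = rng.normal(size=(30, 3)) @ rng.normal(size=(3, 8))   # rank-3 process data
        X[rng.random(X.shape) < 0.1] = np.nan                    # 10% missing values
        X_imputed, loadings = svd_impute_pca(X, n_components=3)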

  13. Modeling fabrication of nuclear components: An integrative approach

    Energy Technology Data Exchange (ETDEWEB)

    Hench, K.W.

    1996-08-01

    Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components in an environment of intense regulation and shrinking budgets. This dissertation presents an integrative two-stage approach to modeling the casting operation for fabrication of nuclear weapon primary components. The first stage optimizes personnel radiation exposure for the casting operation layout by modeling the operation as a facility layout problem formulated as a quadratic assignment problem. The solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.

  14. Development of Face Recognition Software Using Principal Components Analysis

    Directory of Open Access Journals (Sweden)

    Kartika Gunadi

    2001-01-01

    Full Text Available Face recognition is an important research area, and today many applications implement it. Through the development of techniques like Principal Components Analysis (PCA), computers can now outperform humans in many face recognition tasks, particularly those in which large databases of faces must be searched. Principal Components Analysis was used to reduce the facial image dimension into fewer variables, which are easier to observe and handle. Those variables were then fed into an artificial neural network trained with the backpropagation method to recognise the given facial image. The test results show that PCA can provide high face recognition accuracy. For the training faces, a correct identification rate of 100% could be obtained. From the network combinations that were tested, a best average correct identification rate of 91.11% could be obtained for the test faces, while the worst average result was 46.67% correct identification.
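
    A minimal sketch of such a pipeline, PCA feeding a backpropagation-trained network, using scikit-learn's bundled digit images as a stand-in for a face database:

        from sklearn.datasets import load_digits
        from sklearn.decomposition import PCA
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        # Stand-in image dataset (8x8 digits); the study used face images instead
        X, y = load_digits(return_X_y=True)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        pca = PCA(n_components=30).fit(X_tr)   # reduce image dimensionality
        clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000, random_state=0)
        clf.fit(pca.transform(X_tr), y_tr)     # backpropagation on the PCA scores

        print("Test accuracy:", clf.score(pca.transform(X_te), y_te))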

  15. Modeling cellular networks in fading environments with dominant specular components

    KAUST Repository

    Alammouri, Ahmad; Elsawy, Hesham; Salem, Ahmed Sultan; Di Renzo, Marco; Alouini, Mohamed-Slim

    2016-01-01

    to the Nakagami-m fading in some special cases. However, neither the Rayleigh nor the Nakagami-m accounts for dominant specular components (DSCs) which may appear in realistic fading channels. In this paper, we present a tractable model for cellular networks

  16. Modeling the evaporation of sessile multi-component droplets

    NARCIS (Netherlands)

    Diddens, C.; Kuerten, Johannes G.M.; van der Geld, C.W.M.; Wijshoff, H.M.A.

    2017-01-01

    We extended a mathematical model for the drying of sessile droplets, based on the lubrication approximation, to binary mixture droplets. This extension is relevant for e.g. inkjet printing applications, where inks consisting of several components are used. The extension involves the generalization of

  17. Incremental principal component pursuit for video background modeling

    Science.gov (United States)

    Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt

    2017-03-14

    We present an incremental Principal Component Pursuit (PCP) algorithm for video background modeling that is able to process one frame at a time while adapting to changes in the background, with a computational complexity that allows for real-time processing, a low memory footprint, and robustness to translational and rotational jitter.

  18. Do Knowledge-Component Models Need to Incorporate Representational Competencies?

    Science.gov (United States)

    Rau, Martina Angela

    2017-01-01

    Traditional knowledge-component models describe students' content knowledge (e.g., their ability to carry out problem-solving procedures or their ability to reason about a concept). In many STEM domains, instruction uses multiple visual representations such as graphs, figures, and diagrams. The use of visual representations implies a…

  19. Hybrid time/frequency domain modeling of nonlinear components

    DEFF Research Database (Denmark)

    Wiechowski, Wojciech Tomasz; Lykkegaard, Jan; Bak, Claus Leth

    2007-01-01

    This paper presents a novel, three-phase hybrid time/frequency methodology for modelling of nonlinear components. The algorithm has been implemented in the DIgSILENT PowerFactory software using the DIgSILENT Programming Language (DPL), as a part of the work described in [1]. Modified HVDC benchmark...

  20. Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.

    Science.gov (United States)

    Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine

    2010-09-01

    Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component) whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.

  1. Models of Economic Analysis

    OpenAIRE

    Adrian Ioana; Tiberiu Socaciu

    2013-01-01

    The article presents specific aspects of management and models for economic analysis. Thus, we present the main types of economic analysis: statistical analysis, dynamic analysis, static analysis, mathematical analysis, psychological analysis. Also we present the main object of the analysis: the technological activity analysis of a company, the analysis of the production costs, the economic activity analysis of a company, the analysis of equipment, the analysis of labor productivity, the anal...

  2. Reliability Analysis of Load-Sharing K-out-of-N System Considering Component Degradation

    Directory of Open Access Journals (Sweden)

    Chunbo Yang

    2015-01-01

    Full Text Available The K-out-of-N configuration is a typical form of redundancy technique used to improve system reliability, where at least K out of N components must work for successful operation of the system. When the components are degraded, more components are needed to meet the system requirement, which means that the value of K has to increase. Current reliability analysis methods overestimate the reliability, because using a constant K ignores the degradation effect. In a load-sharing system with degrading components, the workload shared on each surviving component will increase after a random component failure, resulting in a higher failure rate and an increased performance degradation rate. This paper proposes a method combining a tampered failure rate model with a performance degradation model to analyze the reliability of a load-sharing K-out-of-N system with degrading components. The proposed method considers the value of K as a variable which is derived by the performance degradation model. Also, the load-sharing effect is evaluated by the tampered failure rate model. A Monte Carlo simulation procedure is used to estimate the discrete probability distribution of K. The case of a solar panel is studied in this paper, and the result shows that the reliability considering component degradation is less than that ignoring component degradation.
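
    As a rough illustration of the simulation idea (not the authors' exact model; the degradation-driven variation of K is omitted for brevity), a Monte Carlo sketch of a load-sharing K-out-of-N system in which each failure raises the load, and hence the failure rate, of the surviving units:

        import numpy as np

        rng = np.random.default_rng(0)

        def system_lifetime(n=6, k=4, base_rate=0.01, total_load=1.0):
            """Simulate one lifetime; each failure re-shares the load among survivors."""
            alive = n
            t = 0.0
            while alive >= k:
                unit_rate = base_rate * (total_load / alive)     # tampered failure rate
                t += rng.exponential(1.0 / (alive * unit_rate))  # time to next failure
                alive -= 1
            return t

        lifetimes = np.array([system_lifetime() for _ in range(20_000)])
        mission_time = 50.0
        print("Estimated reliability at t=50:", np.mean(lifetimes > mission_time))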

  3. Principal Component Analysis - A Powerful Tool in Computing Marketing Information

    Directory of Open Access Journals (Sweden)

    Constantin C.

    2014-12-01

    Full Text Available This paper describes instrumental research on a powerful multivariate data analysis method which can be used by researchers in order to obtain valuable information for decision makers who need to solve the marketing problems a company faces. The literature stresses the need to avoid the multicollinearity phenomenon in multivariate analysis and the ability of Principal Component Analysis (PCA) to reduce a large number of variables that may be correlated with each other to a small number of uncorrelated principal components. In this respect, the paper presents step-by-step the process of applying PCA in marketing research when we use a large number of variables that are naturally collinear.

  4. Experimental modal analysis of components of the LHC experiments

    CERN Document Server

    Guinchard, M; Catinaccio, A; Kershaw, K; Onnela, A

    2007-01-01

    Experimental modal analysis of components of the LHC experiments is performed with the purpose of determining their fundamental frequencies, their damping and the mode shapes of light and fragile detector components. This process makes it possible to confirm or replace finite element analysis in the case of complex structures (with cables and substructure coupling). It helps solve structural mechanical problems to improve operational stability and to determine the acceleration specifications for transport operations. This paper describes the hardware and software equipment used to perform a modal analysis on particular structures such as a particle detector, and the curve-fitting method used to extract the results of the measurements. This paper also presents the main results obtained for the LHC experiments.

  5. Intercity Travel Demand Analysis Model

    OpenAIRE

    Ming Lu; Hai Zhu; Xia Luo; Lei Lei

    2014-01-01

    It is well known that intercity travel is an important component of travel demand which belongs to short-distance corridor travel. The conventional four-step method is no longer suitable for short-distance corridor travel demand analysis, because the time spent in urban traffic has a great impact on a traveler's main mode choice. To solve this problem, the author studied the existing intercity travel demand analysis model, then improved it based on the study, and finally established a combined model...

  6. Data and information needs for WPP testing and component modeling

    International Nuclear Information System (INIS)

    Kuhn, W.L.

    1987-01-01

    The modeling task of the Waste Package Program (WPP) is to develop conceptual models that describe the interactions of waste package components with their environment and the interactions among waste package components. The task includes development and maintenance of a database of experimental data, and statistical analyses to fit model coefficients, test the significance of the fits, and propose experimental designs. The modeling task collaborates with experimentalists to apply physicochemical principles to develop the conceptual models, with emphasis on the subsequent mathematical development. The reason for including the modeling task in the predominantly experimental WPP is to keep the modeling of component behavior closely associated with the experimentation. Whenever possible, waste package degradation processes are described in terms of chemical reactions or transport processes. The integration of equations for assumed or calculated repository conditions predicts variations with time in the repository. Within the context of the waste package program, the composition and rate of arrival of brine to the waste package are environmental variables. These define the environment to be simulated or explored during waste package component and interactions testing. The containment period is characterized by rapid changes in temperature, pressure, oxygen fugacity, and salt porosity. Brine migration is expected to be most rapid during this period. The release period is characterized by modest and slowly changing temperatures, high pressure, low oxygen fugacity, and low porosity. The need is to define the scenario within which waste package degradation calculations are to be made and to quantify the rate of arrival and composition of the brine. Appendix contains 4 vugraphs

  7. Independent component analysis of edge information for face recognition

    CERN Document Server

    Karande, Kailash Jagannath

    2013-01-01

    The book presents research work on face recognition using edge information as features for face recognition with ICA algorithms. The independent components are extracted from edge information. These independent components are used with classifiers to match the facial images for recognition purposes. In their study, the authors explore the Canny and LoG edge detectors as standard edge detection methods. The Oriented Laplacian of Gaussian (OLOG) method is explored to extract edge information with different orientations of the Laplacian pyramid. A multiscale wavelet model for edge detection is also proposed

  8. APPLICATION OF PRINCIPAL COMPONENT ANALYSIS TO RELAXOGRAPHIC IMAGES

    International Nuclear Information System (INIS)

    STOYANOVA, R.S.; OCHS, M.F.; BROWN, T.R.; ROONEY, W.D.; LI, X.; LEE, J.H.; SPRINGER, C.S.

    1999-01-01

    Standard analysis methods for processing inversion recovery MR images traditionally have used single pixel techniques. In these techniques each pixel is independently fit to an exponential recovery, and spatial correlations in the data set are ignored. By analyzing the image as a complete dataset, improved error analysis and automatic segmentation can be achieved. Here, the authors apply principal component analysis (PCA) to a series of relaxographic images. This procedure decomposes the 3-dimensional data set into three separate images and corresponding recovery times. They attribute the 3 images to spatial representations of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) content.

  9. Principal component analysis applied to Fourier transform infrared spectroscopy for the design of calibration sets for glycerol prediction models in wine and for the detection and classification of outlier samples.

    Science.gov (United States)

    Nieuwoudt, Helene H; Prior, Bernard A; Pretorius, Isak S; Manley, Marena; Bauer, Florian F

    2004-06-16

    Principal component analysis (PCA) was used to identify the main sources of variation in the Fourier transform infrared (FT-IR) spectra of 329 wines of various styles. The FT-IR spectra were gathered using a specialized WineScan instrument. The main sources of variation included the reducing sugar and alcohol content of the samples, as well as the stage of fermentation and the maturation period of the wines. The implications of the variation between the different wine styles for the design of calibration models with accurate predictive abilities were investigated using glycerol calibration in wine as a model system. PCA enabled the identification and interpretation of samples that were poorly predicted by the calibration models, as well as the detection of individual samples in the sample set that had atypical spectra (i.e., outlier samples). The Soft Independent Modeling of Class Analogy (SIMCA) approach was used to establish a model for the classification of the outlier samples. A glycerol calibration for wine was developed (reducing sugar content 8% v/v) with satisfactory predictive ability (SEP = 0.40 g/L). The RPD value (ratio of the standard deviation of the data to the standard error of prediction) was 5.6, indicating that the calibration is suitable for quantification purposes. A calibration for glycerol in special late harvest and noble late harvest wines (RS 31-147 g/L, alcohol > 11.6% v/v) with a prediction error (SECV) of 0.65 g/L was also established. This study yielded an analytical strategy that combined the careful design of calibration sets with measures that facilitated the early detection and interpretation of poorly predicted samples and outlier samples in a sample set. The strategy provided a powerful means of quality control, which is necessary for the generation of accurate prediction data and therefore for the successful implementation of FT-IR in the routine analytical laboratory.

  10. Sparse logistic principal components analysis for binary data

    KAUST Repository

    Lee, Seokho

    2010-09-01

    We develop a new principal components analysis (PCA) type dimension reduction method for binary data. Different from the standard PCA which is defined on the observed data, the proposed PCA is defined on the logit transform of the success probabilities of the binary observations. Sparsity is introduced to the principal component (PC) loading vectors for enhanced interpretability and more stable extraction of the principal components. Our sparse PCA is formulated as solving an optimization problem with a criterion function motivated from a penalized Bernoulli likelihood. A Majorization-Minimization algorithm is developed to efficiently solve the optimization problem. The effectiveness of the proposed sparse logistic PCA method is illustrated by application to a single nucleotide polymorphism data set and a simulation study. © Institute of Mathematical Statistics, 2010.
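
    The logistic MM formulation itself is not part of common libraries, but the central idea of sparse loading vectors can be illustrated with scikit-learn's SparsePCA on synthetic continuous data (an analogy, not the paper's method):

        import numpy as np
        from sklearn.decomposition import SparsePCA

        rng = np.random.default_rng(0)
        # Synthetic data in which only a few variables drive each component
        scores = rng.normal(size=(100, 2))
        loadings = np.zeros((2, 20))
        loadings[0, :5] = 1.0       # component 1 uses variables 0-4
        loadings[1, 10:15] = 1.0    # component 2 uses variables 10-14
        X = scores @ loadings + 0.1 * rng.normal(size=(100, 20))

        spca = SparsePCA(n_components=2, alpha=1.0, random_state=0).fit(X)
        print(np.round(spca.components_, 2))   # mostly-zero loading vectors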

  11. Components of Program for Analysis of Spectra and Their Testing

    Directory of Open Access Journals (Sweden)

    Ivan Taufer

    2013-11-01

    Full Text Available The spectral analysis of aqueous solutions of multi-component mixtures is used for the identification and distinguishing of individual components in the mixture and the subsequent determination of protonation constants and absorptivities of differently protonated particles in the solution in steady state (Meloun and Havel 1985; Leggett 1985). Apart from that, the distribution diagrams are also determined, i.e. the concentration proportions of the individual components at different pH values. The spectra are measured with various concentrations of the basic components (one or several polyvalent weak acids or bases) and various pH values within the chosen range of wavelengths. The obtained absorbance response area has to be analyzed by non-linear regression using specialized algorithms. These algorithms have to meet certain requirements concerning the possibility of calculations and the level of outputs. A typical example is the SQUAD(84) program, which was gradually modified and extended; see, e.g., (Meloun et al. 1986) and (Meloun et al. 2012).

  12. Principal component and spatial correlation analysis of spectroscopic-imaging data in scanning probe microscopy

    International Nuclear Information System (INIS)

    Jesse, Stephen; Kalinin, Sergei V

    2009-01-01

    An approach for the analysis of multi-dimensional, spectroscopic-imaging data based on principal component analysis (PCA) is explored. PCA selects and ranks relevant response components based on variance within the data. It is shown that for examples with small relative variations between spectra, the first few PCA components closely coincide with results obtained using model fitting, and this is achieved at rates approximately four orders of magnitude faster. For cases with strong response variations, PCA allows an effective approach to rapidly process, de-noise, and compress data. The prospects for PCA combined with correlation function analysis of component maps as a universal tool for data analysis and representation in microscopy are discussed.

  13. Cognitive components underpinning the development of model-based learning.

    Science.gov (United States)

    Potter, Tracey C S; Bryce, Nessa V; Hartley, Catherine A

    2017-06-01

    Reinforcement learning theory distinguishes "model-free" learning, which fosters reflexive repetition of previously rewarded actions, from "model-based" learning, which recruits a mental model of the environment to flexibly select goal-directed actions. Whereas model-free learning is evident across development, recruitment of model-based learning appears to increase with age. However, the cognitive processes underlying the development of model-based learning remain poorly characterized. Here, we examined whether age-related differences in cognitive processes underlying the construction and flexible recruitment of mental models predict developmental increases in model-based choice. In a cohort of participants aged 9-25, we examined whether the abilities to infer sequential regularities in the environment ("statistical learning"), maintain information in an active state ("working memory") and integrate distant concepts to solve problems ("fluid reasoning") predicted age-related improvements in model-based choice. We found that age-related improvements in statistical learning performance did not mediate the relationship between age and model-based choice. Ceiling performance on our working memory assay prevented examination of its contribution to model-based learning. However, age-related improvements in fluid reasoning statistically mediated the developmental increase in the recruitment of a model-based strategy. These findings suggest that gradual development of fluid reasoning may be a critical component process underlying the emergence of model-based learning. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  14. Optimization benefits analysis in production process of fabrication components

    Science.gov (United States)

    Prasetyani, R.; Rafsanjani, A. Y.; Rimantho, D.

    2017-12-01

    The determination of an optimal number of product combinations is important. The main problem at the part and service department in PT. United Tractors Pandu Engineering (shortened to PT. UTPE) is the optimization of the combination of fabrication component products (known as Liner Plate), which influences the profit that will be obtained by the company. The Liner Plate is a fabrication component that serves as a protector of the core structure for heavy duty attachments, such as the HD Vessel, HD Bucket, HD Shovel, and HD Blade. The graph of liner plate sales from January to December 2016 fluctuated, and there is no direct conclusion about the optimization of production of such fabrication components. The optimal product combination can be achieved by calculating and plotting the amount of production output and input appropriately. The method used in this study is linear programming with primal, dual, and sensitivity analysis, using QM software for Windows to obtain the optimal fabrication components. With the optimal combination of components, PT. UTPE could increase profit by Rp. 105,285,000.00, for a total of Rp. 3,046,525,000.00 per month, with a total production combination of 71 units per variant per month.
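
    A minimal sketch of a product-mix linear program of this kind with SciPy, using invented coefficients rather than PT. UTPE's actual data:

        from scipy.optimize import linprog

        # Hypothetical mix of two liner plate variants: maximize profit subject to
        # machine-hour and material constraints (illustrative numbers only)
        profit = [-40.0, -30.0]        # linprog minimizes, so profits are negated
        A_ub = [[2.0, 1.0],            # machine hours per unit
                [1.0, 3.0]]            # material units per unit
        b_ub = [100.0, 90.0]           # available machine hours and material

        res = linprog(c=profit, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None), (0, None)], method="highs")
        print("Optimal units per variant:", res.x, "maximum profit:", -res.fun)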

  15. Root cause analysis in support of reliability enhancement of engineering components

    International Nuclear Information System (INIS)

    Kumar, Sachin; Mishra, Vivek; Joshi, N.S.; Varde, P.V.

    2014-01-01

    Reliability based methods have been widely used for the safety assessment of plant systems, structures and components. These methods provide a quantitative estimation of system reliability but do not give insight into the failure mechanism. Understanding the failure mechanism is a must to avoid the recurrence of events and to enhance system reliability. Root cause analysis provides a tool for gaining detailed insight into the causes of component failure, with particular attention to the identification of faults in component design, operation, surveillance, maintenance, training, procedures and policies which must be improved to prevent the repetition of incidents. Root cause analysis also helps in developing Probabilistic Safety Analysis models. A probabilistic precursor study provides a complement to the root cause analysis approach in event analysis by focusing on how an event might have developed adversely. This paper discusses root cause analysis methodologies and their application in specific case studies for the enhancement of system reliability. (author)

  16. A minimal model for two-component dark matter

    International Nuclear Information System (INIS)

    Esch, Sonja; Klasen, Michael; Yaguna, Carlos E.

    2014-01-01

    We propose and study a new minimal model for two-component dark matter. The model contains only three additional fields, one fermion and two scalars, all singlets under the Standard Model gauge group. Two of these fields, one fermion and one scalar, are odd under a Z_2 symmetry that renders them simultaneously stable. Thus, both particles contribute to the observed dark matter density. This model resembles the union of the singlet scalar and the singlet fermionic models but it contains some new features of its own. We analyze in some detail its dark matter phenomenology. Regarding the relic density, the main novelty is the possible annihilation of one dark matter particle into the other, which can affect the predicted relic density in a significant way. Regarding dark matter detection, we identify a new contribution that can lead either to an enhancement or to a suppression of the spin-independent cross section for the scalar dark matter particle. Finally, we define a set of five benchmark models compatible with all present bounds and examine their direct detection prospects at planned experiments. A generic feature of this model is that both particles give rise to observable signals in 1-ton direct detection experiments. In fact, such experiments will be able to probe even a subdominant dark matter component at the percent level.

  17. Evaluation of the RELAP5/MOD3 multidimensional component model

    International Nuclear Information System (INIS)

    Tomlinson, E.T.; Rens, T.E.; Coffield, R.D.

    1994-01-01

    Accurate plenum predictions, which are directly related to the mixing models used, are an important plant modeling consideration because of their consequential impact on basic transient performance calculations for the integrated system. The effect of a plenum is a time shift between inlet and outlet temperature changes for the particular volume. Perfect mixing, where the total volume interacts instantaneously with the total inlet flow, does not occur because of effects such as inlet/outlet nozzle jetting, flow stratification, nested vortices within the volume and the generally three-dimensional velocity distribution of the flow field. The time lag which exists between the inlet and outlet flows affects the predicted rate of temperature change experienced by various plant system components, and this in turn affects local component analyses that are sensitive to the rate of temperature change. This study includes a comparison of two-dimensional plenum mixing predictions using CFD-FLOW3D, RELAP5/MOD3 and perfect mixing models. Three different geometries (flat, square and tall) are assessed for scalar transport times using a wide range of inlet velocity and isothermal conditions. In addition, the three geometries were evaluated for low flow conditions with the inlet flow experiencing a large step temperature decrease. A major conclusion from this study is that the RELAP5/MOD3 multidimensional component model appears to adequately predict plenum mixing for a wide range of thermal-hydraulic conditions representative of plant transients.

  18. Evaluating fugacity models for trace components in landfill gas

    Energy Technology Data Exchange (ETDEWEB)

    Shafi, Sophie [Integrated Waste Management Centre, Sustainable Systems Department, Building 61, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedfordshire MK43 0AL (United Kingdom); Sweetman, Andrew [Department of Environmental Science, Lancaster University, Lancaster LA1 4YQ (United Kingdom); Hough, Rupert L. [Integrated Waste Management Centre, Sustainable Systems Department, Building 61, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedfordshire MK43 0AL (United Kingdom); Smith, Richard [Integrated Waste Management Centre, Sustainable Systems Department, Building 61, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedfordshire MK43 0AL (United Kingdom); Rosevear, Alan [Science Group - Waste and Remediation, Environment Agency, Reading RG1 8DQ (United Kingdom); Pollard, Simon J.T. [Integrated Waste Management Centre, Sustainable Systems Department, Building 61, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedfordshire MK43 0AL (United Kingdom)]. E-mail: s.pollard@cranfield.ac.uk

    2006-12-15

    A fugacity approach was evaluated to reconcile loadings of vinyl chloride (chloroethene), benzene, 1,3-butadiene and trichloroethylene in waste with concentrations observed in landfill gas monitoring studies. An evaluative environment derived from fictitious but realistic properties such as volume, composition, and temperature, constructed with data from the Brogborough landfill (UK) test cells, was used to test a fugacity approach to generating the source term for use in landfill gas risk assessment models (e.g. GasSim). SOILVE, a dynamic Level II model adapted here for landfills, showed the greatest utility for benzene and 1,3-butadiene, modelled under anaerobic conditions over a 10 year simulation. Modelled concentrations of these components (95 300 μg m⁻³; 43 μg m⁻³) fell within measured ranges observed in gas from landfills (24 300-180 000 μg m⁻³; 20-70 μg m⁻³). This study highlights the need (i) for representative and time-referenced biotransformation data; (ii) to evaluate the partitioning characteristics of organic matter within waste systems and (iii) for a better understanding of the role that gas extraction rate (flux) plays in producing trace component concentrations in landfill gas.

  19. Probabilistic structural analysis methods for select space propulsion system components

    Science.gov (United States)

    Millwater, H. R.; Cruse, T. A.

    1989-01-01

    The Probabilistic Structural Analysis Methods (PSAM) project developed at the Southwest Research Institute integrates state-of-the-art structural analysis techniques with probability theory for the design and analysis of complex large-scale engineering structures. An advanced, efficient software system (NESSUS) capable of performing complex probabilistic analysis has been developed. NESSUS contains a number of software components to perform probabilistic analysis of structures. These components include: an expert system, a probabilistic finite element code, a probabilistic boundary element code and a fast probability integrator. The structure of the NESSUS software system is described. An expert system is included to capture and utilize PSAM knowledge and experience. NESSUS/EXPERT is an interactive menu-driven expert system that provides information to assist in the use of the probabilistic finite element code NESSUS/FEM and the fast probability integrator (FPI). The expert system menu structure is outlined. The NESSUS system contains a state-of-the-art nonlinear probabilistic finite element code, NESSUS/FEM, to determine the structural response and sensitivities. A broad range of analysis capabilities and an extensive element library are provided.

  20. Constrained independent component analysis approach to nonobtrusive pulse rate measurements

    Science.gov (United States)

    Tsouri, Gill R.; Kyal, Survi; Dianat, Sohail; Mestha, Lalit K.

    2012-07-01

    Nonobtrusive pulse rate measurement using a webcam is considered. We demonstrate how state-of-the-art algorithms based on independent component analysis suffer from a sorting problem which hinders their performance, and propose a novel algorithm based on constrained independent component analysis to improve performance. We show how the proposed algorithm extracts a photoplethysmography signal and resolves the sorting problem. In addition, we perform a comparative study between the proposed algorithm and state-of-the-art algorithms over 45 video streams using a finger probe oximeter for reference measurements. The proposed algorithm provides improved accuracy: the root mean square error is decreased from 20.6 and 9.5 beats per minute (bpm) for existing algorithms to 3.5 bpm for the proposed algorithm. An error of 3.5 bpm is within the inaccuracy expected from the reference measurements. This implies that the proposed algorithm provided performance of equal accuracy to the finger probe oximeter.

  1. Crude oil price analysis and forecasting based on variational mode decomposition and independent component analysis

    Science.gov (United States)

    E, Jianwei; Bao, Yanling; Ye, Jimin

    2017-10-01

    As one of the most vital energy resources in the world, crude oil plays a significant role in the international economic market, and the fluctuation of the crude oil price has attracted academic and commercial attention. Many methods exist for forecasting the trend of the crude oil price, but traditional models have failed to predict it accurately. Based on this, a hybrid method is proposed in this paper which combines variational mode decomposition (VMD), independent component analysis (ICA) and the autoregressive integrated moving average (ARIMA), called VMD-ICA-ARIMA. The purpose of this study is to analyze the factors influencing the crude oil price and to predict the future crude oil price. The major steps are as follows: Firstly, applying the VMD model to the original signal (the crude oil price), the mode functions are decomposed adaptively. Secondly, independent components are separated by ICA, and how the independent components affect the crude oil price is analyzed. Finally, the crude oil price is forecast by the ARIMA model; the forecast trend demonstrates that the crude oil price declines periodically. Compared with the benchmark ARIMA and EEMD-ICA-ARIMA, VMD-ICA-ARIMA can forecast the crude oil price more accurately.
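
    The VMD and ICA stages require specialized tooling, so the sketch below illustrates only the final forecasting step: a standard ARIMA fit on a synthetic stand-in series (the order and the data are assumptions for illustration):

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(0)
        # Synthetic stand-in for a (decomposed) crude oil price series
        price = pd.Series(60 + np.cumsum(rng.normal(0, 0.5, 300)))

        model = ARIMA(price, order=(1, 1, 1))   # (p, d, q) chosen for illustration
        fit = model.fit()
        forecast = fit.forecast(steps=10)       # 10-step-ahead forecast
        print(forecast)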

  2. Research on Air Quality Evaluation based on Principal Component Analysis

    Science.gov (United States)

    Wang, Xing; Wang, Zilin; Guo, Min; Chen, Wei; Zhang, Huan

    2018-01-01

    Economic growth has led to a decline in environmental capacity and the deterioration of air quality. Air quality evaluation, as a foundation of environmental monitoring and air pollution control, has become increasingly important. Based on principal component analysis (PCA), this paper evaluates the air quality of a large city in the Beijing-Tianjin-Hebei Area over the past 10 years and identifies the influencing factors, in order to provide a reference for air quality management and air pollution control.

  3. Analysis of the Components of Economic Potential of Agricultural Enterprises

    OpenAIRE

    Vyacheslav Skobara; Volodymyr Podkopaev

    2014-01-01

    Problems of enterprise efficiency are increasingly associated with the use of the economic potential of the company. This article addresses the structural components of the economic potential of an agricultural enterprise and the development and substantiation of a model of economic potential that takes due account of the peculiarities of agricultural production. Based on the study of various approaches to the structure of potential, a definition of production, labour, financial and man...

  4. Scale modeling flow-induced vibrations of reactor components

    International Nuclear Information System (INIS)

    Mulcahy, T.M.

    1982-06-01

    Similitude relationships currently employed in the design of flow-induced vibration scale-model tests of nuclear reactor components are reviewed. Emphasis is given to understanding the origins of the similitude parameters as a basis for discussion of the inevitable distortions which occur in design verification testing of entire reactor systems and in feature testing of individual component designs for the existence of detrimental flow-induced vibration mechanisms. Distortions of similitude parameters made in current test practice are enumerated and selected example tests are described. Also, limitations in the use of specific distortions in model designs are evaluated based on the current understanding of flow-induced vibration mechanisms and structural response

  5. Dynamic analysis of the radiolysis of binary component system

    International Nuclear Information System (INIS)

    Katayama, M.; Trumbore, C.N.

    1975-01-01

    Dynamic analysis was performed on a variety of combinations of components in the radiolysis of binary systems, taking the hydrogen-producing reaction with a hydrocarbon RH2 as an example. A definite rule could be established from this analysis, which is useful for revealing the reaction mechanism. The combinations were as follows: 1) both components A and B do not interact but serve only as diluents, 2) A is a diluent, and B is a radical captor, 3) both A and B are radical captors, 4-1) A is a diluent, and B decomposes after the reception of the exciting energy of A, 4-2) A is a diluent, and B does not participate in decomposition after the reception of the exciting energy of A, 5-1) A is a radical captor, and B decomposes after the reception of the exciting energy of A, 5-2) A is a radical captor, and B does not participate in decomposition after the reception of the exciting energy of A, 6-1) both A and B decompose after the reception of the exciting energy of the partner component; and 6-2) both A and B do not decompose after the reception of the exciting energy of the partner component. According to the dynamic analysis of the above nine combinations, it can be pointed out that if excitation transfer participates, phenomena apparently similar to radical capture are produced. It is desirable to measure the yield of radicals experimentally with systems that require little consideration of excitation transfer. An isotope-substituted mixture is conceived as one such system. This analytical method was applied to systems containing cyclopentanone, such as the cyclopentanone-cyclohexane system. (Iwakiri, K.)

  6. A component analysis of positive behaviour support plans.

    Science.gov (United States)

    McClean, Brian; Grey, Ian

    2012-09-01

    Positive behaviour support (PBS) emphasises multi-component interventions by natural intervention agents to help people overcome challenging behaviours. This paper investigates which components are most effective and which factors might mediate effectiveness. Sixty-one staff working with individuals with intellectual disability and challenging behaviours completed longitudinal competency-based training in PBS. Each staff participant conducted a functional assessment and developed and implemented a PBS plan for one prioritised individual. A total of 1,272 interventions were available for analysis. Measures of challenging behaviour were taken at baseline, after 6 months, and at an average of 26 months follow-up. There was a significant reduction in the frequency, management difficulty, and episodic severity of challenging behaviour over the duration of the study. Escape was identified by staff as the most common function, accounting for 77% of challenging behaviours. The most commonly implemented components of intervention were setting event changes and quality-of-life-based interventions. Only treatment acceptability was found to be related to decreases in behavioural frequency. No single intervention component was found to have a greater association with reductions in challenging behaviour.

  7. Representation for dialect recognition using topographic independent component analysis

    Science.gov (United States)

    Wei, Qu

    2004-10-01

    In dialect speech recognition, the tone features of a dialect are subject to changes in pitch frequency as well as in the length of the tone. It is beneficial for recognition if a representation can be derived that accounts for the frequency and length changes of tone in an effective and meaningful way. In this paper, we propose a method for learning such a representation from a set of unlabeled speech sentences containing dialect features that vary in pitch frequency and time length. Topographic independent component analysis (TICA) is applied for the unsupervised learning to produce an emergent result that is a topographic matrix made up of basis components. The dialect speech is topographic in the following sense: the basis components, as the units of the speech, are ordered in the feature matrix such that components of one dialect are grouped along one axis and changes in time windows are accounted for along the other axis. This provides a meaningful set of basis vectors that may be used to construct dialect subspaces for dialect speech recognition.

  8. Development of component failure data for seismic risk analysis

    International Nuclear Information System (INIS)

    Fray, R.R.; Moulia, T.A.

    1981-01-01

    This paper describes the quantification and utilization of seismic failure data used in the Diablo Canyon Seismic Risk Study. A single variable representation of earthquake severity that uses peak horizontal ground acceleration to characterize earthquake severity was employed. The use of a multiple variable representation would allow direct consideration of vertical accelerations and the spectral nature of earthquakes but would have added such complexity that the study would not have been feasible. Vertical accelerations and spectral nature were indirectly considered because component failure data were derived from design analyses, qualification tests and engineering judgment that did include such considerations. Two types of functions were used to describe component failure probabilities. Ramp functions were used for components, such as piping and structures, qualified by stress analysis. 'Anchor points' for ramp functions were selected by assuming a zero probability of failure at code allowable stress levels and unity probability of failure at ultimate stress levels. The accelerations corresponding to allowable and ultimate stress levels were determined by conservatively assuming a linear relationship between seismic stress and ground acceleration. Step functions were used for components, such as mechanical and electrical equipment, qualified by testing. Anchor points for step functions were selected by assuming a unity probability of failure above the qualification acceleration. (orig./HP)
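
    The two failure-probability shapes described above translate directly into code; a minimal sketch with hypothetical anchor points (the study's actual values are not reproduced here):

        import numpy as np

        def ramp_fragility(a, a_allow, a_ult):
            """P(failure): 0 below the code-allowable level, 1 at ultimate, linear between."""
            return np.clip((a - a_allow) / (a_ult - a_allow), 0.0, 1.0)

        def step_fragility(a, a_qual):
            """P(failure): 0 up to the qualification acceleration, 1 above it."""
            return np.where(a > a_qual, 1.0, 0.0)

        a = np.linspace(0.0, 1.5, 7)   # peak horizontal ground acceleration, in g
        print(ramp_fragility(a, a_allow=0.4, a_ult=1.2))   # e.g. piping and structures
        print(step_fragility(a, a_qual=0.75))              # e.g. tested equipment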

  9. Space-time latent component Modeling of Geo-referenced health data

    OpenAIRE

    Lawson, Andrew B.; Song, Hae-Ryoung; Cai, Bo; Hossain, Md Monir; Huang, Kun

    2010-01-01

    Latent structure models have been proposed in many applications. For space-time health data it is often important to be able to find underlying trends in time which are supported by subsets of small areas. Latent structure modeling is one approach to this analysis. This paper presents a mixture-based approach that can be applied to component selection. The analysis of a Georgia county-level ambulatory asthma data set is presented and a simulation-based evaluation is made.

  10. Modelling raster-based monthly water balance components for Europe

    Energy Technology Data Exchange (ETDEWEB)

    Ulmen, C.

    2000-11-01

    The terrestrial runoff component is a comparatively small but sensitive and thus significant quantity in the global energy and water cycle at the interface between landmass and atmosphere. As opposed to soil moisture and evapotranspiration, which critically determine water vapour fluxes and thus water and energy transport, it can be measured as an integrated quantity over a large area, i.e. the river basin. This peculiarity makes terrestrial runoff ideally suited for the calibration, verification and validation of general circulation models (GCMs). Gauging stations are not homogeneously distributed in space. Moreover, time series are not necessarily continuously measured, nor do they in general have overlapping time periods. To overcome these problems with regard to the regular grid spacing used in GCMs, different methods can be applied to transform irregular data to regular, so-called gridded runoff fields. The present work aims to directly compute the gridded components of the monthly water balance (including gridded runoff fields) for Europe by application of the well-established raster-based macro-scale water balance model WABIMON used at the Federal Institute of Hydrology, Germany. Model calibration and validation are performed by separate examination of 29 representative European catchments. Results indicate a general applicability of the model, delivering reliable overall patterns and integrated quantities on a monthly basis. For time steps of less than two weeks, further research and structural improvements of the model are suggested. (orig.)

  11. Integrative sparse principal component analysis of gene expression data.

    Science.gov (United States)

    Liu, Mengque; Fan, Xinyan; Fang, Kuangnan; Zhang, Qingzhao; Ma, Shuangge

    2017-12-01

    In the analysis of gene expression data, dimension reduction techniques have been extensively adopted. The most popular one is perhaps the PCA (principal component analysis). To generate more reliable and more interpretable results, the SPCA (sparse PCA) technique has been developed. With the "small sample size, high dimensionality" characteristic of gene expression data, the analysis results generated from a single dataset are often unsatisfactory. Under contexts other than dimension reduction, integrative analysis techniques, which jointly analyze the raw data of multiple independent datasets, have been developed and shown to outperform "classic" meta-analysis, other multidataset techniques, and single-dataset analysis. In this study, we conduct integrative analysis by developing the iSPCA (integrative SPCA) method. iSPCA achieves the selection and estimation of sparse loadings using a group penalty. To take advantage of the similarity across datasets and generate more accurate results, we further impose contrasted penalties. Different penalties are proposed to accommodate different data conditions. Extensive simulations show that iSPCA outperforms the alternatives under a wide spectrum of settings. The analysis of breast cancer and pancreatic cancer data further shows iSPCA's satisfactory performance. © 2017 WILEY PERIODICALS, INC.

  12. Probabilistic structural analysis of aerospace components using NESSUS

    Science.gov (United States)

    Shiao, Michael C.; Nagpal, Vinod K.; Chamis, Christos C.

    1988-01-01

    Probabilistic structural analysis of a Space Shuttle main engine turbopump blade is conducted using the computer code NESSUS (numerical evaluation of stochastic structures under stress). The goal of the analysis is to derive probabilistic characteristics of blade response given probabilistic descriptions of uncertainties in blade geometry, material properties, and temperature and pressure distributions. Probability densities are derived for critical blade responses. Risk assessment and failure life analysis are conducted assuming different failure models.

  13. Fast principal component analysis for stacking seismic data

    Science.gov (United States)

    Wu, Juan; Bai, Min

    2018-04-01

    Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and enhance the principal components to a great extent. Traditional average-based seismic stacking methods cannot obtain optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data that is not sensitive to the noise level. Considering the computational bottleneck of the classic PCA algorithm in processing massive seismic data, we propose an efficient PCA algorithm to make the proposed method readily applicable for industrial applications. Two numerically designed examples and one real seismic dataset are used to demonstrate the performance of the presented method.
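
    One common route to such a fast PCA is randomized SVD (assumed here as a stand-in; the paper's exact algorithm may differ). A minimal sketch that extracts the dominant component of a noisy gather and contrasts it with the conventional average stack:

        import numpy as np
        from sklearn.utils.extmath import randomized_svd

        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 500)
        signal = np.sin(2 * np.pi * 8 * t) * np.exp(-3 * t)   # shared reflection signal

        # 40 noisy traces sharing the same signal (a gather to be stacked)
        traces = signal + 0.8 * rng.normal(size=(40, t.size))

        # Rank-1 approximation: the first principal component captures the coherent part
        U, S, Vt = randomized_svd(traces, n_components=1, random_state=0)
        pca_stack = np.sign(U[:, 0].sum()) * S[0] * Vt[0] / np.sqrt(traces.shape[0])

        mean_stack = traces.mean(axis=0)   # conventional average-based stack, for contrast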

  14. Demixed principal component analysis of neural population data.

    Science.gov (United States)

    Kobak, Dmitry; Brendel, Wieland; Constantinidis, Christos; Feierstein, Claudia E; Kepecs, Adam; Mainen, Zachary F; Qi, Xue-Lian; Romo, Ranulfo; Uchida, Naoshige; Machens, Christian K

    2016-04-12

    Neurons in higher cortical areas, such as the prefrontal cortex, are often tuned to a variety of sensory and motor variables, and are therefore said to display mixed selectivity. This complexity of single neuron responses can obscure what information these areas represent and how it is represented. Here we demonstrate the advantages of a new dimensionality reduction technique, demixed principal component analysis (dPCA), that decomposes population activity into a few components. In addition to systematically capturing the majority of the variance of the data, dPCA also exposes the dependence of the neural representation on task parameters such as stimuli, decisions, or rewards. To illustrate our method we reanalyze population data from four datasets comprising different species, different cortical areas and different experimental tasks. In each case, dPCA provides a concise way of visualizing the data that summarizes the task-dependent features of the population response in a single figure.

  15. Three-Component Forward Modeling for Transient Electromagnetic Method

    Directory of Open Access Journals (Sweden)

    Bin Xiong

    2010-01-01

    Full Text Available In general, only the time derivative of the vertical magnetic field is considered in the data interpretation of the transient electromagnetic (TEM) method. However, for surveys in complex geological structures, this conventional technique has gradually become unable to satisfy the demands of field exploration. To improve the integrated interpretation precision of TEM, it is necessary to study three-component forward modeling and inversion. In this paper, a three-component forward algorithm for 2.5D TEM based on the independent electric and magnetic fields has been developed. The main advantage of the new scheme is that it reduces the size of the global system matrix to the utmost extent; that is, the present matrix is only one fourth the size of that in the conventional algorithm. In order to illustrate the feasibility and usefulness of the present algorithm, several typical geoelectric models of the TEM responses produced by loop sources at the air-earth interface are presented. The results of the numerical experiments show that the computation speed of the present scheme is increased considerably and that three-component interpretation can get the most out of the collected data, from which the spatial characteristics of the anomalous object can be analyzed and interpreted more comprehensively.

  16. Failure trend analysis for safety related components of Korean standard NPPs

    International Nuclear Information System (INIS)

    Choi, Sun Yeong; Han, Sang Hoon

    2005-01-01

    Component reliability data that reflect plant-specific characteristics are indispensable for the PSA of Korean nuclear power plants. We have performed a project to develop a component reliability database (KIND, Korea Integrated Nuclear Reliability Database) and S/W for database management and component reliability analysis. Based on this system, we have collected component operation data and failure/repair data from the start of plant operation to 2002 for the YGN 3, 4 and UCN 3, 4 plants. Recently, we provided component failure rate data from KIND for the UCN 3, 4 standard PSA model. We evaluated the components with high-ranking failure rates using the component reliability data from the start of plant operation to 1998 and 2000 for YGN 3, 4 and UCN 3, 4, respectively, and identified their most frequent failure modes. In this study, we analyze the component failure trend and perform a site comparison against generic data by using the component reliability data extended to 2002 for UCN 3, 4 and YGN 3, 4. We focus on the major safety-related rotating components, such as pumps and EDGs

  17. Integrated modelling of the edge plasma and plasma facing components

    International Nuclear Information System (INIS)

    Coster, D.P.; Bonnin, X.; Mutzke, A.; Schneider, R.; Warrier, M.

    2007-01-01

    Modelling of the interaction between the edge plasma and plasma facing components (PFCs) has tended to place more emphasis on either the plasma or the PFCs. Either the PFCs do not change with time and the plasma evolution is studied, or the plasma is assumed to remain static and the detailed interaction of the plasma and the PFCs is examined, with no back-reaction on the plasma taken into consideration. Recent changes to the edge simulation code, SOLPS, now allow for changes in both the plasma and the PFCs to be considered. This has been done by augmenting the code to track the time-development of the PFC properties. Results of standard mixed-materials scenarios (base and redeposited C; Be) are presented

  18. Protein structure similarity from principle component correlation analysis

    Directory of Open Access Journals (Sweden)

    Chou James

    2006-01-01

    Full Text Available Abstract Background Owing to rapid expansion of protein structure databases in recent years, methods of structure comparison are becoming increasingly effective and important in revealing novel information on functional properties of proteins and their roles in the grand scheme of evolutionary biology. Currently, the structural similarity between two proteins is measured by the root-mean-square-deviation (RMSD in their best-superimposed atomic coordinates. RMSD is the golden rule of measuring structural similarity when the structures are nearly identical; it, however, fails to detect the higher order topological similarities in proteins evolved into different shapes. We propose new algorithms for extracting geometrical invariants of proteins that can be effectively used to identify homologous protein structures or topologies in order to quantify both close and remote structural similarities. Results We measure structural similarity between proteins by correlating the principle components of their secondary structure interaction matrix. In our approach, the Principle Component Correlation (PCC analysis, a symmetric interaction matrix for a protein structure is constructed with relationship parameters between secondary elements that can take the form of distance, orientation, or other relevant structural invariants. When using a distance-based construction in the presence or absence of encoded N to C terminal sense, there are strong correlations between the principle components of interaction matrices of structurally or topologically similar proteins. Conclusion The PCC method is extensively tested for protein structures that belong to the same topological class but are significantly different by RMSD measure. The PCC analysis can also differentiate proteins having similar shapes but different topological arrangements. Additionally, we demonstrate that when using two independently defined interaction matrices, comparison of their maximum
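
    A minimal sketch of the PCC idea follows: build a symmetric, distance-based interaction matrix for each structure, extract its dominant eigenvectors, and correlate them between two proteins. The paper constructs the matrix from secondary-structure elements; plain coordinates are used here purely for illustration, and the sketch assumes both structures contain the same number of elements.

        # Hedged sketch of principal component correlation between two structures.
        import numpy as np

        def principal_components(coords, k=3):
            # Symmetric distance-based interaction matrix and its k dominant eigenvectors.
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            w, v = np.linalg.eigh(d)
            return v[:, np.argsort(-np.abs(w))[:k]]

        def pcc_similarity(coords_a, coords_b, k=3):
            # Assumes equal element counts; eigenvector sign is arbitrary, so use |corr|.
            pa = principal_components(coords_a, k)
            pb = principal_components(coords_b, k)
            return np.mean([abs(np.corrcoef(pa[:, i], pb[:, i])[0, 1]) for i in range(k)])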

  19. Principal component analysis of FDG PET in amnestic MCI

    International Nuclear Information System (INIS)

    Nobili, Flavio; Girtler, Nicola; Brugnolo, Andrea; Dessi, Barbara; Rodriguez, Guido; Salmaso, Dario; Morbelli, Silvia; Piccardo, Arnoldo; Larsson, Stig A.; Pagani, Marco

    2008-01-01

    The purpose of the study is to evaluate the combined accuracy of episodic memory performance and 18F-FDG PET in identifying patients with amnestic mild cognitive impairment (aMCI) converting to Alzheimer's disease (AD), aMCI non-converters, and controls. Thirty-three patients with aMCI and 15 controls (CTR) were followed up for a mean of 21 months. Eleven patients developed AD (MCI/AD) and 22 remained with aMCI (MCI/MCI). 18F-FDG PET volumetric regions of interest underwent principal component analysis (PCA) that identified 12 principal components (PC), expressed by coarse component scores (CCS). Discriminant analysis was performed using the significant PCs and episodic memory scores. PCA highlighted relative hypometabolism in PC5, including bilateral posterior cingulate and left temporal pole, and in PC7, including the bilateral orbitofrontal cortex, both in MCI/MCI and MCI/AD vs CTR. PC5 itself plus PC12, including the left lateral frontal cortex (LFC: BAs 44, 45, 46, 47), were significantly different between MCI/AD and MCI/MCI. By a three-group discriminant analysis, CTR were more accurately identified by PET-CCS + delayed recall score (100%), MCI/MCI by PET-CCS + either immediate or delayed recall scores (91%), while MCI/AD was identified by PET-CCS alone (82%). PET increased by 25% the correct allocations achieved by memory scores, while memory scores increased by 15% the correct allocations achieved by PET. Combining memory performance and 18F-FDG PET yielded a higher accuracy than each single tool in identifying CTR and MCI/MCI. The PC containing bilateral posterior cingulate and left temporal pole was the hallmark of MCI/MCI patients, while the PC including the left LFC was the hallmark of conversion to AD. (orig.)

  20. Principal component analysis of FDG PET in amnestic MCI

    Energy Technology Data Exchange (ETDEWEB)

    Nobili, Flavio; Girtler, Nicola; Brugnolo, Andrea; Dessi, Barbara; Rodriguez, Guido [University of Genoa, Clinical Neurophysiology, Department of Endocrinological and Medical Sciences, Genoa (Italy); S. Martino Hospital, Alzheimer Evaluation Unit, Genoa (Italy); S. Martino Hospital, Head-Neck Department, Genoa (Italy); Salmaso, Dario [CNR, Institute of Cognitive Sciences and Technologies, Rome (Italy); CNR, Institute of Cognitive Sciences and Technologies, Padua (Italy); Morbelli, Silvia [University of Genoa, Nuclear Medicine Unit, Department of Internal Medicine, Genoa (Italy); Piccardo, Arnoldo [Galliera Hospital, Nuclear Medicine Unit, Department of Imaging Diagnostics, Genoa (Italy); Larsson, Stig A. [Karolinska Hospital, Department of Nuclear Medicine, Stockholm (Sweden); Pagani, Marco [CNR, Institute of Cognitive Sciences and Technologies, Rome (Italy); CNR, Institute of Cognitive Sciences and Technologies, Padua (Italy); Karolinska Hospital, Department of Nuclear Medicine, Stockholm (Sweden)

    2008-12-15

    The purpose of the study is to evaluate the combined accuracy of episodic memory performance and 18F-FDG PET in identifying patients with amnestic mild cognitive impairment (aMCI) converting to Alzheimer's disease (AD), aMCI non-converters, and controls. Thirty-three patients with aMCI and 15 controls (CTR) were followed up for a mean of 21 months. Eleven patients developed AD (MCI/AD) and 22 remained with aMCI (MCI/MCI). 18F-FDG PET volumetric regions of interest underwent principal component analysis (PCA) that identified 12 principal components (PC), expressed by coarse component scores (CCS). Discriminant analysis was performed using the significant PCs and episodic memory scores. PCA highlighted relative hypometabolism in PC5, including bilateral posterior cingulate and left temporal pole, and in PC7, including the bilateral orbitofrontal cortex, both in MCI/MCI and MCI/AD vs CTR. PC5 itself plus PC12, including the left lateral frontal cortex (LFC: BAs 44, 45, 46, 47), were significantly different between MCI/AD and MCI/MCI. By a three-group discriminant analysis, CTR were more accurately identified by PET-CCS + delayed recall score (100%), MCI/MCI by PET-CCS + either immediate or delayed recall scores (91%), while MCI/AD was identified by PET-CCS alone (82%). PET increased by 25% the correct allocations achieved by memory scores, while memory scores increased by 15% the correct allocations achieved by PET. Combining memory performance and 18F-FDG PET yielded a higher accuracy than each single tool in identifying CTR and MCI/MCI. The PC containing bilateral posterior cingulate and left temporal pole was the hallmark of MCI/MCI patients, while the PC including the left LFC was the hallmark of conversion to AD. (orig.)

  1. Nuclear analysis techniques as a component of thermoluminescence dating

    Energy Technology Data Exchange (ETDEWEB)

    Prescott, J.R.; Hutton, J.T.; Habermehl, M.A. [Adelaide Univ., SA (Australia); Van Moort, J. [Tasmania Univ., Sandy Bay, TAS (Australia)

    1996-12-31

    In luminescence dating, an age is found by first measuring dose accumulated since the event being dated, then dividing by the annual dose rate. Analyses of minor and trace elements performed by nuclear techniques have long formed an essential component of dating. Results from some Australian sites are reported to illustrate the application of nuclear techniques of analysis in this context. In particular, a variety of methods for finding dose rates are compared, an example of a site where radioactive disequilibrium is significant and a brief summary is given of a problem which was not resolved by nuclear techniques. 5 refs., 2 tabs.

  2. Principal Component Analysis Based Measure of Structural Holes

    Science.gov (United States)

    Deng, Shiguo; Zhang, Wenqing; Yang, Huijie

    2013-02-01

    Based upon principal component analysis, a new measure called the compressibility coefficient is proposed to evaluate structural holes in networks. This measure incorporates a new effect from identical patterns in networks. It is found that the compressibility coefficient for Watts-Strogatz small-world networks increases monotonically with the rewiring probability and saturates at the value for the corresponding shuffled networks, while the compressibility coefficient for extended Barabasi-Albert scale-free networks decreases monotonically with the preferential effect and is significantly large compared with that for the corresponding shuffled networks. This measure is helpful in diverse research fields for evaluating the global efficiency of networks.

  3. Fast and accurate methods of independent component analysis: A survey

    Czech Academy of Sciences Publication Activity Database

    Tichavský, Petr; Koldovský, Zbyněk

    2011-01-01

    Roč. 47, č. 3 (2011), s. 426-438 ISSN 0023-5954 R&D Projects: GA MŠk 1M0572; GA ČR GA102/09/1278 Institutional research plan: CEZ:AV0Z10750506 Keywords : Blind source separation * artifact removal * electroencephalogram * audio signal processing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.454, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/tichavsky-fast and accurate methods of independent component analysis a survey.pdf

  4. PRINCIPAL COMPONENT ANALYSIS (PCA) AND ITS APPLICATION WITH SPSS

    Directory of Open Access Journals (Sweden)

    Hermita Bus Umar

    2009-03-01

    Full Text Available PCA (Principal Component Analysis) is a statistical technique applied to a single set of variables when the researcher is interested in discovering which variables in the set form coherent subsets that are relatively independent of one another. Variables that are correlated with one another but largely independent of other subsets of variables are combined into factors. The goal of PCA is to determine the extent to which each variable is explained by each dimension. Steps in PCA include selecting and measuring a set of variables, preparing the correlation matrix, extracting a set of factors from the correlation matrix, rotating the factors to increase interpretability, and interpreting the results.
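
    The steps listed above (standardization, correlation matrix, factor extraction) can be sketched directly in NumPy; factor rotation is omitted for brevity.

        # PCA from the correlation matrix, following the steps named in the abstract.
        import numpy as np

        def pca_from_correlation(X, k=2):
            Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # standardize variables
            R = np.corrcoef(Z, rowvar=False)                   # correlation matrix
            eigval, eigvec = np.linalg.eigh(R)
            order = np.argsort(eigval)[::-1]                   # largest variance first
            loadings = eigvec[:, order[:k]] * np.sqrt(eigval[order[:k]])
            scores = Z @ eigvec[:, order[:k]]
            return eigval[order], loadings, scores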

  5. Fetal ECG extraction using independent component analysis by Jade approach

    Science.gov (United States)

    Giraldo-Guzmán, Jader; Contreras-Ortiz, Sonia H.; Lasprilla, Gloria Isabel Bautista; Kotas, Marian

    2017-11-01

    Fetal ECG monitoring is a useful method to assess fetal health and detect abnormal conditions. In this paper we propose an approach to extract the fetal ECG from abdominal and chest signals using independent component analysis based on the joint approximate diagonalization of eigenmatrices (JADE) approach. The JADE approach avoids redundancy, which reduces matrix dimensions and computational cost. Signals were filtered with a high-pass filter to eliminate low-frequency noise. Several levels of decomposition were tested until the fetal ECG was recognized in one of the separated source outputs. The proposed method shows fast and good performance.
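
    The pipeline described above can be sketched as high-pass filtering followed by ICA. Since JADE is not shipped with SciPy or scikit-learn, FastICA stands in for it here; the sampling rate, cutoff and source count are illustrative assumptions.

        # High-pass filtering followed by ICA separation (FastICA as a JADE stand-in).
        import numpy as np
        from scipy.signal import butter, filtfilt
        from sklearn.decomposition import FastICA

        def extract_sources(recordings, fs=500.0, cutoff=1.0, n_sources=4):
            """recordings: array (n_channels, n_samples) of abdomen/chest signals."""
            b, a = butter(4, cutoff / (fs / 2), btype="highpass")
            filtered = filtfilt(b, a, recordings, axis=1)      # remove baseline drift
            ica = FastICA(n_components=n_sources, random_state=0)
            sources = ica.fit_transform(filtered.T).T          # (n_sources, n_samples)
            return sources   # inspect each source to recognize the fetal ECG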

  6. Nuclear analysis techniques as a component of thermoluminescence dating

    Energy Technology Data Exchange (ETDEWEB)

    Prescott, J R; Hutton, J T; Habermehl, M A [Adelaide Univ., SA (Australia); Van Moort, J [Tasmania Univ., Sandy Bay, TAS (Australia)

    1997-12-31

    In luminescence dating, an age is found by first measuring dose accumulated since the event being dated, then dividing by the annual dose rate. Analyses of minor and trace elements performed by nuclear techniques have long formed an essential component of dating. Results from some Australian sites are reported to illustrate the application of nuclear techniques of analysis in this context. In particular, a variety of methods for finding dose rates are compared, an example of a site where radioactive disequilibrium is significant and a brief summary is given of a problem which was not resolved by nuclear techniques. 5 refs., 2 tabs.

  7. Structure analysis of active components of traditional Chinese medicines

    DEFF Research Database (Denmark)

    Zhang, Wei; Sun, Qinglei; Liu, Jianhua

    2013-01-01

    Traditional Chinese Medicines (TCMs) have been widely used for healing of different health problems for thousands of years. They have been used as therapeutic, complementary and alternative medicines. TCMs usually consist of dozens to hundreds of various compounds, which are extracted from raw...... herbal sources by aqueous or alcoholic solvents. Therefore, it is difficult to correlate the pharmaceutical effect to a specific lead compound in the TCMs. A detailed analysis of various components in TCMs has been a great challenge for modern analytical techniques in recent decades. In this chapter...

  8. Application of principal component analysis (PCA) as a sensory assessment tool for fermented food products.

    Science.gov (United States)

    Ghosh, Debasree; Chattopadhyay, Parimal

    2012-06-01

    The objective of the work was to use the method of quantitative descriptive analysis (QDA) to describe the sensory attributes of fermented food products prepared with the incorporation of lactic cultures. Panellists were selected and trained to evaluate various attributes, especially color and appearance, body texture, flavor, overall acceptability and acidity, of fermented food products like cow milk curd and soymilk curd, idli, sauerkraut and probiotic ice cream. Principal component analysis (PCA) identified six significant principal components that accounted for more than 90% of the variance in the sensory attribute data. Overall product quality was modelled as a function of the principal components using multiple least squares regression (R2 = 0.8). The results from PCA were statistically analyzed by analysis of variance (ANOVA). These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring the fermented food product attributes that are important for consumer acceptability.
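
    The modelling step described above, regressing overall quality on principal component scores, amounts to principal component regression and can be sketched with scikit-learn. The data below are synthetic stand-ins for the panellist ratings; only the six-component choice mirrors the study.

        # Principal component regression sketch: PCA scores -> least squares fit.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(2)
        attributes = rng.normal(size=(40, 12))   # 40 panellist ratings, 12 attributes
        quality = rng.normal(size=40)            # overall acceptability scores

        pca = PCA(n_components=6)                # six significant PCs, as in the study
        scores = pca.fit_transform(attributes)
        model = LinearRegression().fit(scores, quality)
        print("explained variance:", pca.explained_variance_ratio_.sum())
        print("R^2:", model.score(scores, quality))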

  9. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    Science.gov (United States)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

    In order to rectify the problems that the component reliability model exhibits deviation and that the evaluation result is low because failure propagation is overlooked in the traditional reliability evaluation of machine center components, a new reliability evaluation method based on cascading failure analysis and failure-influence degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure-influence degrees of the system components are assessed by the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, showing the following: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component presents a positive correlation with its failure-influence degree, which provides a theoretical basis for reliability allocation of the machine center system.
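
    The failure-influence step can be sketched as a plain power-iteration PageRank over the directed cascading-failure graph; running it on the transposed adjacency matrix, as the abstract suggests, ranks components by how strongly they are affected by propagated failures. The four-component adjacency matrix below is an illustrative assumption, not the paper's data.

        # Power-iteration PageRank over a cascading-failure graph.
        import numpy as np

        def pagerank(A, damping=0.85, tol=1e-10):
            """A[i, j] = 1 if failure of component i propagates to component j."""
            n = A.shape[0]
            out = A.sum(axis=1, keepdims=True)
            P = np.where(out > 0, A / np.maximum(out, 1), 1.0 / n)  # row-stochastic
            r = np.full(n, 1.0 / n)
            while True:
                r_new = (1 - damping) / n + damping * r @ P
                if np.abs(r_new - r).sum() < tol:
                    return r_new
                r = r_new

        A = np.array([[0, 1, 1, 0],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1],
                      [0, 0, 0, 0]])
        influence = pagerank(A.T)   # transpose: score components that receive failures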

  10. Thermal Analysis of Fermilab Mu2e Beamstop and Structural Analysis of Beamline Components

    Energy Technology Data Exchange (ETDEWEB)

    Narug, Colin S. [Northern Illinois U.

    2018-01-01

    The Mu2e project at Fermi National Accelerator Laboratory (Fermilab) aims to observe the unique conversion of muons to electrons. The success or failure of the experiment to observe this conversion will further the understanding of the Standard Model of physics. Using the particle accelerator, protons will be accelerated and sent to the Mu2e experiment, which will separate the muons from the beam. The muons will then be observed to determine their momentum and the particle interactions that occur. At the end of the Detector Solenoid, the internal components will need to absorb the remaining particles of the experiment using polymer absorbers. Because the internal structure of the beamline is in a vacuum, the heat transfer mechanisms that can disperse the energy generated by particle absorption are limited to conduction and radiation. To determine the extent to which the absorbers will heat up over one year of operation, a transient thermal finite element analysis has been performed on the Muon Beam Stop. The levels of energy absorption were adjusted to determine the thermal limit of the current design. Structural finite element analysis has also been performed to determine the safety factors of the Axial Couplers, which connect and move segments of the beamline. The safety factor of the trunnion of the Instrument Feed Through Bulk Head has also been determined for when it is supporting the Muon Beam Stop. The results of the analysis further refine the design of the beamline components prior to testing, fabrication, and installation.

  11. The ATLAS Analysis Model

    CERN Multimedia

    Amir Farbin

    The ATLAS Analysis Model is a continually developing vision of how to reconcile physics analysis requirements with the ATLAS offline software and computing model constraints. In the past year this vision has influenced the evolution of the ATLAS Event Data Model, the Athena software framework, and physics analysis tools. These developments, along with the October Analysis Model Workshop and the planning for CSC analyses, have led to a rapid refinement of the ATLAS Analysis Model in the past few months. This article introduces some of the relevant issues and presents the current vision of the future ATLAS Analysis Model. Event Data Model The ATLAS Event Data Model (EDM) consists of several levels of detail, each targeted at a specific set of tasks. For example, the Event Summary Data (ESD) stores calorimeter cells and tracking system hits, thereby permitting many calibration and alignment tasks, but will only be accessible at particular computing sites with potentially large latency. In contrast, the Analysis...

  12. Structural assessment of aerospace components using image processing algorithms and Finite Element models

    DEFF Research Database (Denmark)

    Stamatelos, Dimtrios; Kappatos, Vassilios

    2017-01-01

    Purpose – This paper presents the development of an advanced structural assessment approach for aerospace components (metallic and composites). This work focuses on developing an automatic image processing methodology based on Non Destructive Testing (NDT) data and numerical models, for predicting... the residual strength of these components. Design/methodology/approach – An image processing algorithm, based on the threshold method, has been developed to process and quantify the geometric characteristics of damages. Then, a parametric Finite Element (FE) model of the damaged component is developed based... on the inputs acquired from the image processing algorithm. The analysis of the metallic structures employs the Extended FE Method (XFEM), while for the composite structures the Cohesive Zone Model (CZM) technique with Progressive Damage Modelling (PDM) is used. Findings – The numerical analyses...

  13. Fault Diagnosis Method Based on Information Entropy and Relative Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Xiaoming Xu

    2017-01-01

    Full Text Available In traditional principal component analysis (PCA), because the influence of the different dimensions of the system variables is neglected, the selected principal components (PCs) often fail to be representative. While relative-transformation PCA is able to solve this problem, it is not easy to calculate the weight for each characteristic variable. To address this, this paper proposes a fault diagnosis method based on information entropy and relative principal component analysis. Firstly, the algorithm calculates the information entropy for each characteristic variable in the original dataset based on the information gain algorithm. Secondly, it standardizes every variable's dimension in the dataset. Then, according to the information entropy, it allocates a weight to each standardized characteristic variable. Finally, it utilizes the established relative-principal-components model for fault diagnosis. Furthermore, simulation experiments based on the Tennessee Eastman process and Wine datasets demonstrate the feasibility and effectiveness of the new method.
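
    The four steps listed above can be sketched as follows; a histogram-based entropy estimate stands in for the paper's information-gain computation, and the bin count is an illustrative assumption.

        # Entropy-weighted PCA sketch: entropy per variable -> standardize -> weight -> PCA.
        import numpy as np

        def entropy_weights(X, bins=10):
            ent = np.empty(X.shape[1])
            for j in range(X.shape[1]):
                p, _ = np.histogram(X[:, j], bins=bins)
                p = p[p > 0] / p.sum()
                ent[j] = -(p * np.log(p)).sum()
            return ent / ent.sum()

        def weighted_pca(X, k=2, bins=10):
            Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # remove dimension effects
            Zw = Z * entropy_weights(X, bins)                  # entropy-based weighting
            w, v = np.linalg.eigh(np.cov(Zw, rowvar=False))
            order = np.argsort(w)[::-1][:k]
            return Zw @ v[:, order]                            # weighted PC scores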

  14. Symmetrical components and power analysis for a two-phase microgrid system

    DEFF Research Database (Denmark)

    Alibeik, M.; Santos Jr., E. C. dos; Blaabjerg, Frede

    2014-01-01

    This paper presents a mathematical model for the symmetrical components and power analysis of a new microgrid system consisting of three wires and two voltages in quadrature, which is designated as a two-phase microgrid. The two-phase microgrid presents the following advantages: 1) constant power...

  15. Development of motion image prediction method using principal component analysis

    International Nuclear Information System (INIS)

    Chhatkuli, Ritu Bhusal; Demachi, Kazuyuki; Kawai, Masaki; Sakakibara, Hiroshi; Kamiaka, Kazuma

    2012-01-01

    Respiratory motion limits the accuracy of the area irradiated during lung cancer radiation therapy. Many methods have been introduced to minimize the impact of healthy tissue irradiation due to lung tumor motion. The purpose of this research is to develop an algorithm for the improvement of image-guided radiation therapy by the prediction of motion images. We predict the motion images by using principal component analysis (PCA) and the multi-channel singular spectral analysis (MSSA) method. The images/movies were successfully predicted and verified using the developed algorithm. With the proposed prediction method it is possible to forecast the tumor images over the next breathing period. The implementation of this method in real time is believed to be significant for a higher level of tumor tracking, including the detection of sudden abdominal changes during radiation therapy. (author)

  16. Fast grasping of unknown objects using principal component analysis

    Science.gov (United States)

    Lei, Qujiang; Chen, Guangming; Wisse, Martijn

    2017-09-01

    Fast grasping of unknown objects has a crucial impact on the efficiency of robot manipulation, especially in unfamiliar environments. In order to accelerate the grasping of unknown objects, principal component analysis is utilized to direct the grasping process. In particular, a single-view partial point cloud is constructed and grasp candidates are allocated along the principal axis. Force balance optimization is employed to analyze possible graspable areas. The obtained graspable area with the minimal resultant force is the best zone for the final grasping execution. It is shown that an unknown object can be grasped more quickly provided that the principal axis is determined from the single-view partial point cloud. To cope with grasp uncertainty, robot motion is used to obtain a new viewpoint. Virtual exploration and experimental tests are carried out to verify this fast grasping algorithm. Both simulation and experimental tests demonstrated excellent performance based on the results of grasping a series of unknown objects. To minimize grasping uncertainty, the merits of the robot hardware with two 3D cameras can be utilized to complete the partial point cloud. As a result of utilizing the robot hardware, grasping reliability is highly enhanced. Therefore, this research demonstrates practical significance for increasing grasping speed and thus robot efficiency in unpredictable environments.
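
    The central step, finding the principal axis of a single-view partial point cloud, is a three-line PCA and can be sketched as follows; the toy cloud is an illustrative assumption.

        # Principal axis of a point cloud via PCA of its covariance matrix.
        import numpy as np

        def principal_axis(points):
            """points: (n, 3) array from a single-view partial point cloud."""
            centroid = points.mean(axis=0)
            cov = np.cov(points - centroid, rowvar=False)
            eigval, eigvec = np.linalg.eigh(cov)
            return centroid, eigvec[:, np.argmax(eigval)]   # centroid, unit axis

        # Toy elongated cloud: grasp candidates would be spaced along `axis` through `c`.
        rng = np.random.default_rng(3)
        cloud = rng.normal(size=(1000, 3)) * np.array([5.0, 1.0, 0.5])
        c, axis = principal_axis(cloud)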

  17. Multiobjective Optimization of ELID Grinding Process Using Grey Relational Analysis Coupled with Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    S. Prabhu

    2014-06-01

    Full Text Available A carbon nanotube (CNT) mixed grinding wheel has been used in the electrolytic in-process dressing (ELID) grinding process to analyze the surface characteristics of AISI D2 tool steel. The CNT grinding wheel has excellent thermal conductivity and good mechanical properties, which are used to improve the surface finish of the workpiece. Multiobjective optimization by grey relational analysis coupled with principal component analysis has been used to optimize the process parameters of the ELID grinding process. Based on the Taguchi design of experiments, an L9 orthogonal array was chosen for the experiments. The confirmation experiment verifies that the proposed grey-based Taguchi method has the ability to find the optimal process parameters with multiple quality characteristics of surface roughness and metal removal rate. Analysis of variance (ANOVA) has been used to verify and validate the model. An empirical model for the prediction of output parameters has been developed using regression analysis, and the results were compared with and without the CNT grinding wheel in the ELID grinding process.

  18. Bridge Diagnosis by Using Nonlinear Independent Component Analysis and Displacement Analysis

    Science.gov (United States)

    Zheng, Juanqing; Yeh, Yichun; Ogai, Harutoshi

    A daily diagnosis system for bridge monitoring and maintenance is developed based on wireless sensors, signal processing, structural analysis, and displacement analysis. The vibration acceleration data of a bridge are first collected through the wireless sensor network. Nonlinear independent component analysis (ICA) and spectral analysis are used to extract the vibration frequencies of the bridge. After that, through a band-pass filter and Simpson's rule, the vibration displacement is calculated and the vibration model is obtained to diagnose the bridge. Since linear ICA algorithms work efficiently only in linear mixing environments, a nonlinear ICA model, which is more complicated, is more practical for bridge diagnosis systems. In this paper, we first use the post-nonlinear method to transform the signal data, then perform linear separation by FastICA, and calculate the vibration displacement of the bridge. The processed data can be used to understand phenomena like corrosion and cracking, and to evaluate the health condition of the bridge. We apply this system to the Nakajima Bridge in Yahata, Kitakyushu, Japan.
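
    The displacement step can be sketched as band-pass filtering followed by double numerical integration. SciPy's cumulative trapezoid rule stands in for the cumulative form of Simpson's rule used in the paper, and the sampling rate and band edges are illustrative assumptions.

        # Band-pass filtering and double integration of acceleration to displacement.
        import numpy as np
        from scipy.signal import butter, filtfilt
        from scipy.integrate import cumulative_trapezoid

        def displacement_from_acceleration(acc, fs=100.0, band=(1.0, 5.0)):
            b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
            acc_f = filtfilt(b, a, acc)                       # keep the bridge mode only
            vel = cumulative_trapezoid(acc_f, dx=1 / fs, initial=0.0)
            vel = filtfilt(b, a, vel)                         # suppress integration drift
            disp = cumulative_trapezoid(vel, dx=1 / fs, initial=0.0)
            return filtfilt(b, a, disp)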

  19. Physicochemical properties of different corn varieties by principal components analysis and cluster analysis

    International Nuclear Information System (INIS)

    Zeng, J.; Li, G.; Sun, J.

    2013-01-01

    Principal components analysis and cluster analysis were used to investigate the properties of different corn varieties. The chemical compositions and some properties of corn flour processed by dry milling were determined. The results showed that the chemical compositions and physicochemical properties were significantly different among the twenty-six corn varieties. The quality of corn flour was associated with five principal components from the principal component analysis, and the contribution of starch pasting properties was the most important, accounting for 48.90%. The twenty-six corn varieties could be classified into four groups by cluster analysis. The consistency between principal components analysis and cluster analysis indicated that multivariate analyses were feasible in the study of corn variety properties. (author)

  20. Determining the optimal number of independent components for reproducible transcriptomic data analysis.

    Science.gov (United States)

    Kairov, Ulykbek; Cantini, Laura; Greco, Alessandro; Molkenov, Askhat; Czerwinska, Urszula; Barillot, Emmanuel; Zinovyev, Andrei

    2017-09-11

    Independent Component Analysis (ICA) is a method that models gene expression data as an action of a set of statistically independent hidden factors. The output of ICA depends on a fundamental parameter: the number of components (factors) to compute. The optimal choice of this parameter, related to determining the effective data dimension, remains an open question in the application of blind source separation techniques to transcriptomic data. Here we address the question of optimizing the number of statistically independent components in the analysis of transcriptomic data for reproducibility of the components in multiple runs of ICA (within the same or within varying effective dimensions) and in multiple independent datasets. To this end, we introduce ranking of independent components based on their stability in multiple ICA computation runs and define a distinguished number of components (Most Stable Transcriptome Dimension, MSTD) corresponding to the point of the qualitative change of the stability profile. Based on a large body of data, we demonstrate that a sufficient number of dimensions is required for biological interpretability of the ICA decomposition and that the most stable components with ranks below MSTD have more chances to be reproduced in independent studies compared to the less stable ones. At the same time, we show that a transcriptomics dataset can be reduced to a relatively high number of dimensions without losing the interpretability of ICA, even though higher dimensions give rise to components driven by small gene sets. We suggest a protocol of ICA application to transcriptomics data with a possibility of prioritizing components with respect to their reproducibility that strengthens the biological interpretation. Computing too few components (much less than MSTD) is not optimal for interpretability of the results. The components ranked within MSTD range have more chances to be reproduced in independent studies.
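
    A toy version of the stability ranking behind MSTD might look as follows: run ICA several times with different seeds and score each component of a reference run by its best absolute correlation with the components of the other runs. This is a simplified illustration, not the authors' full MSTD protocol.

        # Stability scoring of ICA components across repeated runs.
        import numpy as np
        from sklearn.decomposition import FastICA

        def component_stability(X, n_components, n_runs=5):
            runs = []
            for seed in range(n_runs):
                ica = FastICA(n_components=n_components, random_state=seed, max_iter=1000)
                runs.append(ica.fit_transform(X))          # (n_samples, n_components)
            ref = runs[0]
            stability = np.zeros(n_components)
            for other in runs[1:]:
                corr = np.abs(np.corrcoef(ref.T, other.T)[:n_components, n_components:])
                stability += corr.max(axis=1)              # best match in this run
            return stability / (n_runs - 1)                # high = reproducible

        rng = np.random.default_rng(4)
        X = rng.normal(size=(200, 50))
        print(np.sort(component_stability(X, n_components=5))[::-1])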

  1. Two-component scattering model and the electron density spectrum

    Science.gov (United States)

    Zhou, A. Z.; Tan, J. Y.; Esamdin, A.; Wu, X. J.

    2010-02-01

    In this paper, we discuss a rigorous treatment of the refractive scintillation caused by a two-component interstellar scattering medium and a Kolmogorov form of density spectrum. It is assumed that the interstellar scattering medium is composed of a thin-screen interstellar medium (ISM) and an extended interstellar medium. We consider the case that the scattering of the thin screen concentrates in a thin layer represented by a δ function distribution and that the scattering density of the extended irregular medium satisfies the Gaussian distribution. We investigate and develop equations for the flux density structure function corresponding to this two-component ISM geometry in the scattering density distribution and compare our result with the observations. We conclude that the refractive scintillation caused by this two-component ISM scattering gives a more satisfactory explanation for the observed flux density variation than does the single extended medium model. The level of refractive scintillation is strongly sensitive to the distribution of scattering material along the line of sight (LOS). The theoretical modulation indices are comparatively less sensitive to the scattering strength of the thin-screen medium, but they critically depend on the distance from the observer to the thin screen. The logarithmic slope of the structure function is sensitive to the scattering strength of the thin-screen medium, but is relatively insensitive to the thin-screen location. Therefore, the proposed model can be applied to interpret the structure functions of flux density observed in pulsar PSR B2111+46 and PSR B0136+57. The result suggests that the medium consists of a discontinuous distribution of plasma turbulence embedded in the interstellar medium. Thus our work provides some insight into the distribution of the scattering along the LOS to the pulsar PSR B2111+46 and PSR B0136+57.

  2. A multi-component evaporation model for beam melting processes

    Science.gov (United States)

    Klassen, Alexander; Forster, Vera E.; Körner, Carolin

    2017-02-01

    In additive manufacturing using laser or electron beam melting technologies, evaporation losses and changes in chemical composition are known issues when processing alloys with volatile elements. In this paper, a recently described numerical model based on a two-dimensional free surface lattice Boltzmann method is further developed to incorporate the effects of multi-component evaporation. The model takes into account the local melt pool composition during heating and fusion of metal powder. For validation, the titanium alloy Ti-6Al-4V is melted by selective electron beam melting and analysed using mass loss measurements and high-resolution microprobe imaging. Numerically determined evaporation losses and spatial distributions of aluminium compare well with experimental data. Predictions of the melt pool formation in bulk samples provide insight into the competition between the loss of volatile alloying elements from the irradiated surface and their advective redistribution within the molten region.

  3. Reliability Analysis of Fatigue Failure of Cast Components for Wind Turbines

    Directory of Open Access Journals (Sweden)

    Hesam Mirzaei Rafsanjani

    2015-04-01

    Full Text Available Fatigue failure is one of the main failure modes for wind turbine drivetrain components made of cast iron. The wind turbine drivetrain consists of a variety of heavily loaded components, like the main shaft, the main bearings, the gearbox and the generator. The failure of each component will lead to substantial economic losses such as cost of lost energy production and cost of repairs. During the design lifetime, the drivetrain components are exposed to variable loads from winds and waves and other sources of loads that are uncertain and have to be modeled as stochastic variables. The types of loads are different for offshore and onshore wind turbines. Moreover, uncertainties about the fatigue strength play an important role in modeling and assessment of the reliability of the components. In this paper, a generic stochastic model for fatigue failure of cast iron components based on fatigue test data and a limit state equation for fatigue failure based on the SN-curve approach and Miner’s rule is presented. The statistical analysis of the fatigue data is performed using the Maximum Likelihood Method which also gives an estimate of the statistical uncertainties. Finally, illustrative examples are presented with reliability analyses depending on various stochastic models and partial safety factors.
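
    The limit-state ingredients named above, an SN curve and Miner's rule, can be sketched in a few lines; the Basquin-type curve parameters and the load spectrum below are illustrative assumptions, not the paper's fitted values.

        # Miner's linear damage sum over a load spectrum with a Basquin-type SN curve.
        import numpy as np

        def miner_damage(stress_ranges, cycles, K=1e12, m=3.0):
            """Damage sum D = sum(n_i / N_i); failure is predicted when D >= 1."""
            N_allow = K * np.asarray(stress_ranges, float) ** (-m)   # SN curve N = K*S^-m
            return float(np.sum(np.asarray(cycles, float) / N_allow))

        # Example load spectrum: (stress range in MPa, number of cycles).
        D = miner_damage(stress_ranges=[80, 120, 160], cycles=[2e6, 5e5, 1e5])
        print(f"Miner damage sum: {D:.3f}")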

  4. Modeling for thermodynamic activities of components in simulated reprocessing solutions

    International Nuclear Information System (INIS)

    Sasahira, Akira; Hoshikawa, Tadahiro; Kawamura, Fumio

    1992-01-01

    Analyses of chemical reactions have been widely carried out for soluble fission products encountered in nuclear fuel reprocessing. For detailed analyses of reactions, a prediction of the activity or activity coefficient of nitric acid, water, and several nitrates of fission products is needed. An idea for predicting the nitric acid activity was presented earlier. The model, designated the hydration model, does not predict the nitrate activity. It did, however, suggest that the activity of water would be a function of the nitric acid activity rather than the molar fraction of water. If the activities of nitric acid and water are accurately predicted, the activity of the last component, nitrate, can be calculated using the Gibbs-Duhem relation for chemical potentials. Therefore, in this study, the earlier hydration model was modified to evaluate the water activity more accurately. The modified model was experimentally examined in simulated reprocessing solutions. It is concluded that the modified model was suitable for the water activity, but further improvement is needed in the evaluation of the nitric acid activity in order to calculate the nitrate activity

  5. Failure analysis a practical guide for manufacturers of electronic components and systems

    CERN Document Server

    Bâzu, Marius

    2011-01-01

    Failure analysis is the preferred method to investigate product or process reliability and to ensure optimum performance of electrical components and systems. The physics-of-failure approach is the only internationally accepted solution for continuously improving the reliability of materials, devices and processes. The models have been developed from the physical and chemical phenomena that are responsible for degradation or failure of electronic components and materials and now replace popular distribution models for failure mechanisms such as Weibull or lognormal. Reliability engineers nee

  6. COMPONENT SUPPLY MODEL FOR REPAIR ACTIVITIES NETWORK UNDER CONDITIONS OF PROBABILISTIC INDEFINITENESS.

    Directory of Open Access Journals (Sweden)

    Victor Yurievich Stroganov

    2017-02-01

    Full Text Available This article systematizes the major production functions of a repair activities network and lists the planning and control functions, which are described in the form of business processes (BP). A simulation model for analyzing the effectiveness of component delivery under conditions of probabilistic uncertainty is proposed. It is shown that a significant portion of the total number of business processes is represented by the management and planning of the movement of parts and components. The construction of experimental design techniques on the simulation model under conditions of non-stationarity is also considered.

  7. PRINCIPAL COMPONENT ANALYSIS STUDIES OF TURBULENCE IN OPTICALLY THICK GAS

    Energy Technology Data Exchange (ETDEWEB)

    Correia, C.; Medeiros, J. R. De [Departamento de Física Teórica e Experimental, Universidade Federal do Rio Grande do Norte, 59072-970, Natal (Brazil); Lazarian, A. [Astronomy Department, University of Wisconsin, Madison, 475 N. Charter St., WI 53711 (United States); Burkhart, B. [Harvard-Smithsonian Center for Astrophysics, 60 Garden St, MS-20, Cambridge, MA 02138 (United States); Pogosyan, D., E-mail: caioftc@dfte.ufrn.br [Canadian Institute for Theoretical Astrophysics, University of Toronto, Toronto, ON (Canada)

    2016-02-20

    In this work we investigate the sensitivity of principal component analysis (PCA) to the velocity power spectrum in high-opacity regimes of the interstellar medium (ISM). For our analysis we use synthetic position–position–velocity (PPV) cubes of fractional Brownian motion and magnetohydrodynamics (MHD) simulations, post-processed to include radiative transfer effects from CO. We find that PCA analysis is very different from the tools based on the traditional power spectrum of PPV data cubes. Our major finding is that PCA is also sensitive to the phase information of PPV cubes and this allows PCA to detect the changes of the underlying velocity and density spectra at high opacities, where the spectral analysis of the maps provides the universal −3 spectrum in accordance with the predictions of the Lazarian and Pogosyan theory. This makes PCA a potentially valuable tool for studies of turbulence at high opacities, provided that proper gauging of the PCA index is made. However, we found the latter to not be easy, as the PCA results change in an irregular way for data with high sonic Mach numbers. This is in contrast to synthetic Brownian noise data used for velocity and density fields that show monotonic PCA behavior. We attribute this difference to the PCA's sensitivity to Fourier phase information.

  8. PRINCIPAL COMPONENT ANALYSIS STUDIES OF TURBULENCE IN OPTICALLY THICK GAS

    International Nuclear Information System (INIS)

    Correia, C.; Medeiros, J. R. De; Lazarian, A.; Burkhart, B.; Pogosyan, D.

    2016-01-01

    In this work we investigate the sensitivity of principal component analysis (PCA) to the velocity power spectrum in high-opacity regimes of the interstellar medium (ISM). For our analysis we use synthetic position–position–velocity (PPV) cubes of fractional Brownian motion and magnetohydrodynamics (MHD) simulations, post-processed to include radiative transfer effects from CO. We find that PCA analysis is very different from the tools based on the traditional power spectrum of PPV data cubes. Our major finding is that PCA is also sensitive to the phase information of PPV cubes and this allows PCA to detect the changes of the underlying velocity and density spectra at high opacities, where the spectral analysis of the maps provides the universal −3 spectrum in accordance with the predictions of the Lazarian and Pogosyan theory. This makes PCA a potentially valuable tool for studies of turbulence at high opacities, provided that proper gauging of the PCA index is made. However, we found the latter to not be easy, as the PCA results change in an irregular way for data with high sonic Mach numbers. This is in contrast to synthetic Brownian noise data used for velocity and density fields that show monotonic PCA behavior. We attribute this difference to the PCA's sensitivity to Fourier phase information

  9. SASSYS-1 balance-of-plant component models for an integrated plant response

    International Nuclear Information System (INIS)

    Ku, J.-Y.

    1989-01-01

    Models of power plant heat transfer components and rotating machinery have been added to the balance-of-plant model in the SASSYS-1 liquid metal reactor systems analysis code. This work is part of a continuing effort in plant network simulation based on the general mathematical models developed. The models described in this paper extend the scope of the balance-of-plant model to handle non-adiabatic conditions along flow paths. While the mass and momentum equations remain the same, the energy equation now contains a heat source term due to energy transfer across the flow boundary or to work done through a shaft. The heat source term is treated fully explicitly. In addition, the equation of state is rewritten in terms of the quality and separate parameters for each phase. The models are simple enough to run quickly, yet include sufficient detail of dominant plant component characteristics to provide accurate results. 5 refs., 16 figs., 2 tabs

  10. Integration of independent component analysis with near-infrared spectroscopy for analysis of bioactive components in the medicinal plant Gentiana scabra Bunge

    Directory of Open Access Journals (Sweden)

    Yung-Kun Chuang

    2014-09-01

    Full Text Available Independent component (IC) analysis was applied to near-infrared spectroscopy for analysis of gentiopicroside and swertiamarin, the two bioactive components of Gentiana scabra Bunge. ICs that are highly correlated with the two bioactive components were selected for the analysis of tissue cultures, shoots and roots, which were found to distribute at three different positions within the domain [two-dimensional (2D) and three-dimensional (3D)] constructed by the ICs. This setup could be used for quantitative determination of the respective contents of gentiopicroside and swertiamarin within the plants. For gentiopicroside, the spectral calibration model based on the second derivative spectra produced the best effect in the wavelength ranges of 600–700 nm, 1600–1700 nm, and 2000–2300 nm (correlation coefficient of calibration = 0.847, standard error of calibration = 0.865%, and standard error of validation = 0.909%). For swertiamarin, a spectral calibration model based on the first derivative spectra produced the best effect in the wavelength ranges of 600–800 nm and 2200–2300 nm (correlation coefficient of calibration = 0.948, standard error of calibration = 0.168%, and standard error of validation = 0.216%). Both models showed a satisfactory predictability. This study successfully established qualitative and quantitative correlations for gentiopicroside and swertiamarin with near-infrared spectra, enabling rapid and accurate inspection of the bioactive components of G. scabra Bunge at different growth stages.

  11. Failure cause analysis and improvement for magnetic component cabinet

    International Nuclear Information System (INIS)

    Ge Bing

    1999-01-01

    The magnetic component cabinet is an important thermal control device fitted in nuclear power plants. Because it uses a self-saturation amplifier as its primary component, the magnetic component cabinet has some limitations. To increase operational safety at nuclear power plants, the author describes a new scheme. So that the magnetic component cabinet can be replaced, a new type of component cabinet has been developed, in which integrated circuits replace the magnetic components of every functional part. The author has analyzed the overall failure causes of the magnetic component cabinet and adopted some improvement measures

  12. Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.

    Science.gov (United States)

    Gupta, Rajarshi

    2016-05-01

    Electrocardiogram (ECG) compression finds wide application in various patient monitoring purposes. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality-aware compression method for single-lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, two independent quality criteria, namely, bit rate control (BRC) or error control (EC) criteria, were set to select the optimal principal components, eigenvectors and their quantization level to achieve the desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT-BIH Arrhythmia data (mitdb) and 60 normal and 30 sets of diagnostic ECG data from the PTB Diagnostic ECG database (ptbdb), all at 1 kHz sampling. For BRC with a CR threshold of 40, an average compression ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV, respectively, were obtained. For EC with an upper limit of 5% PRDN and 0.1 mV MAE, average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV, respectively, were obtained. For mitdb data 117, the reconstruction quality could be preserved up to a CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality-controlled ECG compression.
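
    The error-control (EC) branch can be sketched as follows: PCA-decompose a matrix of aligned beats and keep the fewest principal components that meet a PRD target. Quantization and the delta/Huffman encoder are omitted, and the PRD normalization below is a simplified stand-in for the paper's PRDN definition.

        # PCA-based beat compression under an error-control criterion.
        import numpy as np

        def compress_beats(beats, prd_limit=5.0):
            """beats: (n_beats, n_samples) matrix of aligned ECG beats."""
            mean = beats.mean(axis=0)
            u, s, vt = np.linalg.svd(beats - mean, full_matrices=False)
            for k in range(1, s.size + 1):
                recon = mean + (u[:, :k] * s[:k]) @ vt[:k]
                prd = 100 * np.linalg.norm(beats - recon) / np.linalg.norm(beats - beats.mean())
                if prd <= prd_limit:
                    return u[:, :k] * s[:k], vt[:k], mean, prd   # scores, eigenvectors
            return u * s, vt, mean, 0.0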

  13. Component simulation in problems of calculated model formation of automatic machine mechanisms

    OpenAIRE

    Telegin Igor; Kozlov Alexander; Zhirkov Alexander

    2017-01-01

    The paper deals with the application of the component simulation method to the automation of mechanical system model formation, with the further possibility of CAD realization. The purpose of the investigations is the automation of CAD-model formation of high-speed mechanisms in automatic machines and the analysis of dynamic processes occurring in their units, taking into account their elasto-inertial properties, power dissipation, gap...

  14. Development of the interactive model between Component Cooling Water System and Containment Cooling System using GOTHIC

    International Nuclear Information System (INIS)

    Byun, Choong Sup; Song, Dong Soo; Jun, Hwang Yong

    2006-01-01

    From a design point of view, the component cooling water (CCW) system is not designed fully interactively with its heat loads. Heat loads are calculated from the CCW design flow and temperature conditions, which are determined with conservatism. The CCW heat exchanger is then sized using the total maximized heat loads from the above calculation. This approach does not give optimized performance results or the exact trends of the CCW system and its loads during transients. Therefore, a combined model for performance analysis of the containment and the component cooling water (CCW) system is developed using the GOTHIC software code. The model is verified using the design parameters of the component cooling water heat exchanger and the heat loads during the recirculation mode of a loss-of-coolant accident scenario. This model may be used for calculating the realistic containment response and CCW performance, and for increasing the ultimate heat sink temperature limits

  15. Modeling Organic Contaminant Desorption from Municipal Solid Waste Components

    Science.gov (United States)

    Knappe, D. R.; Wu, B.; Barlaz, M. A.

    2002-12-01

    Approximately 25% of the sites on the National Priority List (NPL) of Superfund are municipal landfills that accepted hazardous waste. Unlined landfills typically result in groundwater contamination, and priority pollutants such as alkylbenzenes are often present. To select cost-effective risk management alternatives, better information on factors controlling the fate of hydrophobic organic contaminants (HOCs) in landfills is required. The objectives of this study were (1) to investigate the effects of HOC aging time, anaerobic sorbent decomposition, and leachate composition on HOC desorption rates, and (2) to simulate HOC desorption rates from polymers and biopolymer composites with suitable diffusion models. Experiments were conducted with individual components of municipal solid waste (MSW) including polyvinyl chloride (PVC), high-density polyethylene (HDPE), newsprint, office paper, and model food and yard waste (rabbit food). Each of the biopolymer composites (office paper, newsprint, rabbit food) was tested in both fresh and anaerobically decomposed form. To determine the effects of aging on alkylbenzene desorption rates, batch desorption tests were performed after sorbents were exposed to toluene for 30 and 250 days in flame-sealed ampules. Desorption tests showed that alkylbenzene desorption rates varied greatly among MSW components (PVC slowest, fresh rabbit food and newsprint fastest). Furthermore, desorption rates decreased as aging time increased. A single-parameter polymer diffusion model successfully described PVC and HDPE desorption data, but it failed to simulate desorption rate data for biopolymer composites. For biopolymer composites, a three-parameter biphasic polymer diffusion model was employed, which successfully simulated both the initial rapid and the subsequent slow desorption of toluene. Toluene desorption rates from MSW mixtures were predicted for typical MSW compositions in the years 1960 and 1997. For the older MSW mixture, which had a
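
    A single-parameter polymer diffusion model of the kind used above for PVC and HDPE can be sketched from the classical plane-sheet series solution, with one fitted rate parameter; the parameter values below are illustrative assumptions.

        # Fractional mass desorbed from a plane sheet, Fickian diffusion with a
        # single fitted rate parameter (D / L**2); the infinite series is truncated.
        import numpy as np

        def fraction_desorbed(t, rate, n_terms=50):
            """t: time array; rate: D / L**2, the single fitted parameter."""
            n = np.arange(n_terms)[:, None]
            terms = 8.0 / ((2 * n + 1) ** 2 * np.pi ** 2) * np.exp(
                -((2 * n + 1) ** 2) * np.pi ** 2 * rate * np.atleast_1d(t)[None, :])
            return 1.0 - terms.sum(axis=0)

        t = np.linspace(0, 100, 200)          # time in days (illustrative)
        f = fraction_desorbed(t, rate=0.01)   # slow desorption, PVC-like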

  16. Multi-component controllers in reactor physics optimality analysis

    International Nuclear Information System (INIS)

    Aldemir, T.

    1978-01-01

    An algorithm is developed for the optimality analysis of thermal reactor assemblies with multi-component control vectors. The neutronics of the system under consideration is assumed to be described by the two-group diffusion equations and constraints are imposed upon the state and control variables. It is shown that if the problem is such that the differential and algebraic equations describing the system can be cast into a linear form via a change of variables, the optimal control components are piecewise constant functions and the global optimal controller can be determined by investigating the properties of the influence functions. Two specific problems are solved utilizing this approach. A thermal reactor consisting of fuel, burnable poison and moderator is found to yield maximal power when the assembly consists of two poison zones and the power density is constant throughout the assembly. It is shown that certain variational relations have to be considered to maintain the activeness of the system equations as differential constraints. The problem of determining the maximum initial breeding ratio for a thermal reactor is solved by treating the fertile and fissile material absorption densities as controllers. The optimal core configurations are found to consist of three fuel zones for a bare assembly and two fuel zones for a reflected assembly. The optimum fissile material density is determined to be inversely proportional to the thermal flux

  17. Improvement of retinal blood vessel detection using morphological component analysis.

    Science.gov (United States)

    Imani, Elaheh; Javidi, Malihe; Pourreza, Hamid-Reza

    2015-03-01

    Detection and quantitative measurement of variations in the retinal blood vessels can help diagnose several diseases, including diabetic retinopathy. Intrinsic characteristics of abnormal retinal images make blood vessel detection difficult. The major problem with traditional vessel segmentation algorithms is producing false positive vessels in the presence of diabetic retinopathy lesions. To overcome this problem, a novel scheme for extracting retinal blood vessels based on the morphological component analysis (MCA) algorithm is presented in this paper. MCA was developed based on sparse representation of signals. This algorithm assumes that each signal is a linear combination of several morphologically distinct components. In the proposed method, the MCA algorithm with appropriate transforms is adopted to separate vessels and lesions from each other. Afterwards, the Morlet wavelet transform is applied to enhance the retinal vessels. The final vessel map is obtained by adaptive thresholding. The performance of the proposed method is measured on the publicly available DRIVE and STARE datasets and compared with several state-of-the-art methods. Accuracies of 0.9523 and 0.9590 have been achieved on the DRIVE and STARE datasets, respectively; these are not only greater than those of most existing methods, but also superior to the second human observer's performance. The results show that the proposed method can achieve improved detection in abnormal retinal images and decrease false positive vessels in pathological regions compared to other methods. The robustness of the method in the presence of noise is also shown experimentally. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  18. Analysis of tangible and intangible hotel service quality components

    Directory of Open Access Journals (Sweden)

    Marić Dražen

    2016-01-01

    Full Text Available The issue of service quality is one of the essential areas of marketing theory and practice, as high quality can lead to customer satisfaction and loyalty, i.e. successful business results. It is vital for any company, especially in the services sector, to understand and grasp consumers' expectations and perceptions pertaining to the broad range of factors affecting consumers' evaluation of services, their satisfaction and loyalty. Hospitality is a service sector where the significance of these elements grows exponentially. The aim of this study is to identify the significance of individual quality components in the hospitality industry. The questionnaire used for gathering data comprised 19 tangible and 14 intangible attributes of service quality, which the respondents rated on a five-degree scale. The analysis also identified the factorial structure of the tangible and intangible elements of hotel service. The paper aims to contribute to the existing literature by pointing to the significance of tangible and intangible components of service quality. A very small number of studies conducted in hospitality and hotel management identify the sub-factors within these two dimensions of service quality. The paper also provides useful managerial implications. The obtained results help managers in hospitality to establish the service offers that consumers find the most important when choosing a given hotel.

  19. Modeling photoionization of aqueous DNA and its components.

    Science.gov (United States)

    Pluhařová, Eva; Slavíček, Petr; Jungwirth, Pavel

    2015-05-19

    Radiation damage to DNA is usually considered in terms of UVA and UVB radiation. These ultraviolet rays, which are part of the solar spectrum, can indeed cause chemical lesions in DNA, triggered by photoexcitation particularly in the UVB range. Damage can, however, also be caused by higher-energy radiation, which can directly ionize the DNA or its immediate surroundings, leading to indirect damage. Thanks to absorption in the atmosphere, the intensity of such ionizing radiation is negligible in the solar spectrum at the surface of the Earth. Nevertheless, such an ionizing scenario can become dangerously plausible for astronauts or flight personnel, as well as for persons present at nuclear power plant accidents. On the beneficial side, ionizing radiation is employed as a means of destroying the DNA of cancer cells during radiation therapy. Quantitative information about the ionization of DNA and its components is important not only for DNA radiation damage, but also for understanding the redox properties of DNA in redox sensing or labeling, as well as charge migration along the double helix in nanoelectronics applications. Until recently, the vast majority of experimental and computational data on DNA ionization was pertinent to its components in the gas phase, which is far from its native aqueous environment. The situation has, however, changed for the better due to the advent of photoelectron spectroscopy in liquid microjets and its most recent application to photoionization of aqueous nucleosides, nucleotides, and larger DNA fragments. Here, we present a consistent and efficient computational methodology, which allows accurate evaluation of ionization energies and modeling of photoelectron spectra of aqueous DNA and its individual components. After careful benchmarking, the method based on density functional theory and its time-dependent variant with properly chosen hybrid functionals and a polarizable continuum solvent model provides ionization energies with accuracy of 0.2-0.3 e

  20. Analysis of European Union Economy in Terms of GDP Components

    Directory of Open Access Journals (Sweden)

    Simona VINEREAN

    2013-12-01

    Full Text Available The impact of the crisis on national economies has been a subject of analysis and interest for a wide variety of research studies. Thus, starting from the GDP composition, the present research presents an EU-level analysis of the impact on European economies of the events that followed the 2007-2008 crisis. Firstly, the research highlighted the existence of two groups of countries in the European Union in 2012, namely segments that were compiled in relation to the structure of the GDP's components. In the second stage of the research, a factor analysis was performed on the resulting segments, which showed that the economies of cluster A are based more on personal consumption compared to the economies of cluster B, while in terms of government consumption the situation is reversed. Thus, between the two groups of countries, a different approach regarding the role of fiscal policy in the economy can be noted, with a greater emphasis on savings in cluster B. Moreover, besides the two groups of countries, Ireland and Luxembourg stood out because these two countries did not fit in either of the resulting segments and their economies are based, to a large extent, on a positive external balance.

  1. Surface composition of biomedical components by ion beam analysis

    International Nuclear Information System (INIS)

    Kenny, M.J.; Wielunski, L.S.; Baxter, G.R.

    1991-01-01

    Materials used for replacement body parts must satisfy a number of requirements such as biocompatibility and the mechanical ability to handle the task with regard to strength, wear and durability. When using a CVD-coated carbon fibre reinforced carbon ball, the surface must be ion implanted with a uniform dose of nitrogen ions in order to make it wear resistant. The mechanism by which the wear resistance is improved is one of radiation damage, and the required dose of about 10^16 cm^-2 can have a tolerance of about 20%. Implanting a spherical surface requires manipulation of the sample within the beam, and a control system (either computer or manually operated) to enable a uniform dose all the way from polar to equatorial regions on the surface. A manipulator has been designed and built for this purpose. In order to establish whether the dose is uniform, nuclear reaction analysis using the reaction ^14N(d,α)^12C is an ideal method of profiling. By taking measurements at a number of points on the surface, the uniformity of the nitrogen dose can be ascertained. It is concluded that both Rutherford backscattering and nuclear reaction analysis can be used for rapid analysis of the surface composition of carbon-based materials used for replacement body components. 2 refs., 2 figs

  2. Recursive Principal Components Analysis Using Eigenvector Matrix Perturbation

    Directory of Open Access Journals (Sweden)

    Deniz Erdogmus

    2004-10-01

    Full Text Available Principal components analysis is an important and well-studied subject in statistics and signal processing. The literature has an abundance of algorithms for solving this problem, where most of these algorithms could be grouped into one of the following three approaches: adaptation based on Hebbian updates and deflation, optimization of a second-order statistical criterion (like reconstruction error or output variance), and fixed-point update rules with deflation. In this paper, we take a completely different approach that avoids deflation and the optimization of a cost function using gradients. The proposed method updates the eigenvector and eigenvalue matrices simultaneously with every new sample such that the estimates approximately track their true values as would be calculated from the current sample estimate of the data covariance matrix. The performance of this algorithm is compared with that of traditional methods like Sanger's rule and APEX, as well as a structurally similar matrix perturbation-based method.
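
    As a minimal sketch of the quantity such algorithms track (not the authors' perturbation update itself), the sample covariance can be updated recursively with each new observation and re-diagonalized; the dimension, the zero-mean Gaussian data, and the diagonal ground-truth covariance below are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      d = 5
      true_cov = np.diag([5.0, 4.0, 3.0, 2.0, 1.0])
      C = np.eye(d)                                  # initial covariance estimate
      for n in range(1, 2001):
          x = rng.multivariate_normal(np.zeros(d), true_cov)
          C = (1.0 - 1.0 / n) * C + (1.0 / n) * np.outer(x, x)   # recursive update
          if n % 500 == 0:
              eigvals, eigvecs = np.linalg.eigh(C)   # current PCA estimate
              print(n, np.round(eigvals[::-1], 2))   # approaches 5, 4, 3, 2, 1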

  3. Principal Component Analysis for Normal-Distribution-Valued Symbolic Data.

    Science.gov (United States)

    Wang, Huiwen; Chen, Meiling; Shi, Xiaojun; Li, Nan

    2016-02-01

    This paper puts forward a new approach to principal component analysis (PCA) for normal-distribution-valued symbolic data, which has a vast potential of applications in the economic and management field. We derive a full set of numerical characteristics and the variance-covariance structure for such data, which forms the foundation for our analytical PCA approach. Our approach is able to use all of the variance information in the original data, unlike the prevailing representative-type approaches in the literature, which use only centers, vertices, etc. The paper also provides an accurate approach to constructing the observations in a PC space based on the linear additivity property of the normal distribution. The effectiveness of the proposed method is illustrated by simulated numerical experiments. Finally, our method is applied to explain the puzzle of the risk-return tradeoff in China's stock market.

  4. Iris recognition based on robust principal component analysis

    Science.gov (United States)

    Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong

    2014-11-01

    Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieved competitive performance in both recognition accuracy and computational efficiency.
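
    A sketch of the low-rank plus sparse decomposition such a method relies on, written as principal component pursuit solved by a basic inexact augmented-Lagrangian iteration; the abstract does not name its solver, so this algorithm choice and the parameter settings are assumptions for illustration.

      import numpy as np

      def shrink(M, tau):
          # soft thresholding, the proximal operator of the l1 norm
          return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

      def rpca(D, n_iter=200, tol=1e-7):
          # Principal component pursuit: D ~ L (low rank) + S (sparse),
          # solved with a basic inexact augmented-Lagrangian iteration
          m, n = D.shape
          lam = 1.0 / np.sqrt(max(m, n))
          mu = 0.25 * m * n / np.abs(D).sum()
          Y = np.zeros_like(D)          # Lagrange multipliers
          S = np.zeros_like(D)
          for _ in range(n_iter):
              U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
              L = U @ np.diag(shrink(sig, 1.0 / mu)) @ Vt   # singular value thresholding
              S = shrink(D - L + Y / mu, lam / mu)          # sparse error update
              resid = D - L - S
              Y += mu * resid
              if np.linalg.norm(resid) <= tol * np.linalg.norm(D):
                  break
          return L, S

      rng = np.random.default_rng(0)
      L0 = rng.normal(size=(60, 2)) @ rng.normal(size=(2, 40))  # rank-2 ground truth
      S0 = (rng.random((60, 40)) < 0.05) * 10.0                 # sparse corruption
      L, S = rpca(L0 + S0)
      print(np.linalg.matrix_rank(L, tol=1e-3), np.abs(S).max())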

  5. Size distribution measurements and chemical analysis of aerosol components

    Energy Technology Data Exchange (ETDEWEB)

    Pakkanen, T.A.

    1995-12-31

    The principal aims of this work were to improve the existing methods for size distribution measurements and to draw conclusions about atmospheric and in-stack aerosol chemistry and physics by utilizing the size distributions of the various aerosol components measured. Sample dissolution with dilute nitric acid in an ultrasonic bath and subsequent graphite furnace atomic absorption spectrometric analysis was found to result in low blank values and good recoveries for several elements in atmospheric fine particle size fractions below 2 µm of equivalent aerodynamic particle diameter (EAD). Furthermore, it turned out that a substantial amount of analytes associated with insoluble material could be recovered since suspensions were formed. The size distribution measurements of in-stack combustion aerosols indicated bimodal size distributions for most components measured. The existence of the fine particle mode suggests that a substantial fraction of such elements with bimodal size distributions may vaporize and nucleate during the combustion process. In southern Norway, size distributions of atmospheric aerosol components usually exhibited one or two fine particle modes and one or two coarse particle modes. Atmospheric relative humidity values higher than 80% resulted in a significant increase of the mass median diameters of the droplet mode. Important local and/or regional sources of As, Br, I, K, Mn, Pb, Sb, Si and Zn were found to exist in southern Norway. The existence of these sources was reflected in the corresponding size distributions determined, and was utilized in the development of a source identification method based on size distribution data. On the Finnish south coast, atmospheric coarse particle nitrate was found to be formed mostly through an atmospheric reaction of nitric acid with existing coarse particle sea salt, but reactions and/or adsorption of nitric acid with soil derived particles also occurred. Chloride was depleted when acidic species reacted

  6. Proportional and scale change models to project failures of mechanical components with applications to space station

    Science.gov (United States)

    Taneja, Vidya S.

    1996-01-01

    In this paper we develop the mathematical theory of proportional and scale change models to perform reliability analysis. The results obtained will be applied to the Reaction Control System (RCS) thruster valves on an orbiter. With the advent of extended EVAs associated with PROX OPS (ISSA & MIR), and docking, the loss of a thruster valve now takes on an expanded safety significance. Previous studies assume a homogeneous population of components with each component having the same failure rate. However, as various components experience different stresses and are exposed to different environments, their failure rates change with time. In this paper we model the reliability of the thruster valves by treating these valves as a censored repairable system. The model for each valve takes the form of a nonhomogeneous process with an intensity function that is treated either as a proportional hazard model or as a scale change random effects hazard model. Each component has an associated z, an independent realization of the random variable Z from a distribution G(z). This unobserved quantity z can be used to describe heterogeneity systematically. For the various models, methods for estimating the model parameters from censored data are developed. Available field data (from previously flown missions) are from non-renewable systems. The estimated failure rate using such data will need to be modified for renewable systems such as thruster valves.
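
    One concrete instance of a nonhomogeneous-process baseline of the kind the abstract builds on is the power-law (Crow/AMSAA) intensity, whose maximum-likelihood fit from time-truncated failure data has a closed form; the failure times below are invented, and the paper's proportional-hazard and random-effects extensions are not reproduced.

      import numpy as np

      def fit_power_law_nhpp(failure_times, T):
          # Closed-form MLE for intensity lambda(t) = (beta/theta)*(t/theta)**(beta-1)
          # observed on [0, T] (time-truncated data)
          t = np.asarray(failure_times, dtype=float)
          n = len(t)
          beta = n / np.sum(np.log(T / t))
          theta = T / n ** (1.0 / beta)      # from E[N(T)] = (T/theta)**beta
          return beta, theta

      beta, theta = fit_power_law_nhpp([50.0, 180.0, 370.0, 650.0, 910.0], T=1000.0)
      print(beta, theta)   # beta < 1 suggests a decreasing failure intensity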

  7. Layout Optimization Model for the Production Planning of Precast Concrete Building Components

    Directory of Open Access Journals (Sweden)

    Dong Wang

    2018-05-01

    Full Text Available Precast concrete comprises the basic components of modular buildings. The efficiency of precast concrete building component production directly impacts construction time and cost. In the processes of precast component production, mold setting has a significant influence on production efficiency and cost, as well as on resource consumption. However, the development of mold setting plans is left to the experience of production staff, with outcomes dependent on the quality of the human skill and experience available. This can result in sub-optimal production efficiency and resource wastage. Accordingly, in order to improve the efficiency of precast component production, this paper proposes an optimization model able to maximize the average utilization rate of pallets used during the molding process. The constraints considered were the order demand, the size of the pallet, layout methods, and the positional relationship of components. A heuristic algorithm was used to identify optimization solutions provided by the model. Through empirical analysis, as exemplified in the case study, this research is significant in offering a prefabrication production planning model which improves pallet utilization rates, shortens component production time, reduces production costs, and improves resource utilization. The results clearly demonstrate that the proposed method can facilitate precast production planning, providing strong practical implications for production planners.
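
    The abstract does not spell out the heuristic, so the following first-fit-decreasing sketch only illustrates the flavor of maximizing pallet utilization when laying out components; treating the layout as one-dimensional, and the lengths below, are simplifying assumptions.

      def first_fit_decreasing(lengths, pallet_length):
          # Place components (1D lengths) on as few pallets as possible,
          # longest first; returns (component, pallet) pairs and utilization.
          remaining = []                 # free length left on each open pallet
          layout = []
          for i in sorted(range(len(lengths)), key=lambda j: lengths[j], reverse=True):
              for p, rem in enumerate(remaining):
                  if lengths[i] <= rem:
                      remaining[p] -= lengths[i]
                      layout.append((i, p))
                      break
              else:
                  remaining.append(pallet_length - lengths[i])
                  layout.append((i, len(remaining) - 1))
          used = sum(pallet_length - rem for rem in remaining)
          return layout, used / (pallet_length * len(remaining))

      layout, util = first_fit_decreasing([4.2, 3.1, 2.8, 2.5, 1.9, 1.2], 6.0)
      print(layout, round(util, 3))   # 3 pallets, about 87% average utilization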

  8. On combined gravity gradient components modelling for applied geophysics

    International Nuclear Information System (INIS)

    Veryaskin, Alexey; McRae, Wayne

    2008-01-01

    Gravity gradiometry research and development has intensified in recent years, to the extent that technologies providing a resolution of about 1 eotvos per 1 second average will likely soon be available for multiple critical applications such as natural resources exploration, oil reservoir monitoring and defence. Much of the content of this paper was composed a decade ago, and only minor modifications were required for the conclusions to be just as applicable today. In this paper we demonstrate how gravity gradient data can be modelled, and show some examples of how gravity gradient data can be combined in order to extract valuable information. In particular, this study demonstrates the importance of two gravity gradient components, Txz and Tyz, which, when processed together, can provide more information on subsurface density contrasts than that derived solely from the vertical gravity gradient (Tzz)
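
    The simplest forward model behind combining components such as Txz and Tyz is the gravity gradient tensor of a buried point mass, T_ij = GM(3 x_i x_j - r^2 delta_ij)/r^5; the mass and observation geometry below are assumed values for illustration.

      import numpy as np

      G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

      def gradient_tensor(mass, x, y, z):
          # Gravity gradient tensor of a point mass; (x, y, z) is the vector
          # from the observation point to the mass. T[0, 2] = Txz,
          # T[1, 2] = Tyz, T[2, 2] = Tzz, all in s^-2 (the trace is zero).
          r2 = x * x + y * y + z * z
          d = np.array([x, y, z])
          return G * mass * (3.0 * np.outer(d, d) - r2 * np.eye(3)) / r2 ** 2.5

      T = gradient_tensor(1.0e9, 100.0, 50.0, 200.0)      # assumed mass and offsets
      print(T[0, 2] * 1e9, T[1, 2] * 1e9, T[2, 2] * 1e9)  # in eotvos (1 E = 1e-9 s^-2)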

  9. A comparative study of the proposed models for the components of the national health information system.

    Science.gov (United States)

    Ahmadi, Maryam; Damanabi, Shahla; Sadoughi, Farahnaz

    2014-04-01

    National health information systems play an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve health and the quality and effectiveness of health care. In other words, using a national health information system, one can improve the quality of the health data, information and knowledge used to support decision making at all levels and in all areas of the health sector. Since full identification of the components of this system seems necessary for better planning and for managing the factors that influence its performance, this study comparatively explores different perspectives on its components. This is a descriptive, comparative study. The study material comprises printed and electronic documents covering the components of the national health information system in three parts: input, process and output. In this context, a search for information using library resources and internet searches was conducted, and the data analysis was expressed using comparative tables and qualitative data. The findings showed that there are three different perspectives on the components of a national health information system: the Lippeveld, Sauerborn and Bodart model of 2000, the Health Metrics Network (HMN) model from the World Health Organization in 2008, and Gattini's 2009 model. In the input section (resources and structure), all three models require components of management and leadership, planning and program design, staffing, software and hardware facilities, and equipment. In the 'process' section, all three models point up the actions ensuring the quality of the health information system, and in the output section, except for the Lippeveld model, the two other models consider information products and the use and distribution of information as components of the national health information system. The results showed that all the three models have had a brief discussion about the

  10. Component vibration of VVER-reactors - diagnostics and modelling

    International Nuclear Information System (INIS)

    Altstadt, E.; Scheffler, M.; Weiss, F.-P.

    1995-01-01

    Flow-induced vibrations of reactor pressure vessel (RPV) internals (control element and core barrel motions) at VVER-440 reactors have led to the development of dedicated methods for on-line monitoring. These methods require the faults to have reached a certain stage of development before they can be detected. To achieve truly sensitive early detection of mechanical faults of RPV internals, a theoretical vibration model was developed based on finite elements. The model comprises the whole primary circuit including the steam generators (SG). By means of that model, all eigenfrequencies up to 30 Hz and the corresponding mode shapes were calculated for the normal vibration behaviour. Moreover, the shift of eigenfrequencies and of amplitudes due to the degradation or failure of internal clamping and spring elements could be investigated, showing that a recognition of such degradations even inside the RPV is possible by ex-core vibration measurements alone. A true diagnostic, that is, the identification of the failed component, might become possible because different faults influence different and well separated eigenfrequencies. (author)

  11. Detailed Structural Analysis of Critical Wendelstein 7-X Magnet System Components

    International Nuclear Information System (INIS)

    Egorov, K.

    2006-01-01

    The Wendelstein 7-X (W7-X) stellarator experiment is presently under construction and assembly in Greifswald, Germany. The goal of the experiment is to verify that the stellarator magnetic confinement concept is a viable option for a fusion reactor. The complex W7-X magnet system requires a multi-level approach to structural analysis, for which two types of finite element models are used: firstly, global models having reasonably coarse meshes with a number of simplifications and assumptions, and secondly, local models with detailed meshes of critical regions and elements. The widely known sub-modelling technique, with boundary conditions extracted from the global models, is one approach to local analysis with high assessment efficiency. In particular, the winding pack (WP) of the magnet coils is simulated in the global model as a homogeneous orthotropic material with effective mechanical characteristics representing its real composite structure. This assumption allows assessing the whole magnet system in terms of general structural factors like forces and moments on the support elements, displacements of the main components, deformation and stress in the coil casings, etc. In a second step, local models with a detailed description of the more critical WP zones are considered in order to analyze their internal components like conductor jackets, turn insulation, etc. This paper provides an overview of local analyses of several critical W7-X magnet system components with particular attention to the coil winding packs. (author)

  12. Principle of maximum entropy for reliability analysis in the design of machine components

    Science.gov (United States)

    Zhang, Yimin

    2018-03-01

    We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.
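
    A hedged sketch of the principle, not the paper's specific formulation: the maximum-entropy density subject to the first four moments can be found by minimizing the convex dual of the entropy problem on a grid; the support interval and target moments below are invented for illustration.

      import numpy as np
      from scipy.optimize import minimize

      x = np.linspace(-6.0, 6.0, 2001)        # assumed bounded support grid
      dx = x[1] - x[0]
      phi = np.vstack([x, x**2, x**3, x**4])  # moment feature functions
      mu = np.array([0.0, 1.0, 0.5, 4.0])     # invented target moments E[phi]

      def dual(lam):
          # log-partition minus lam . mu; its minimizer matches the moments,
          # giving the maximum-entropy density p(x) ~ exp(lam . phi(x))
          logp = lam @ phi
          m = logp.max()
          return m + np.log(np.sum(np.exp(logp - m)) * dx) - lam @ mu

      res = minimize(dual, np.zeros(4), method="Nelder-Mead",
                     options={"maxiter": 20000, "fatol": 1e-12, "xatol": 1e-9})
      p = np.exp(res.x @ phi - (res.x @ phi).max())
      p /= p.sum() * dx                       # normalized maximum-entropy PDF
      print(phi @ p * dx)                     # recovered moments, close to mu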

  13. THE STUDY OF THE CHARACTERIZATION INDICES OF FABRICS BY PRINCIPAL COMPONENT ANALYSIS METHOD

    Directory of Open Access Journals (Sweden)

    HRISTIAN Liliana

    2017-05-01

    Full Text Available This paper set out to rank worsted fabric types for the manufacture of outerwear products by the characterization indices of the fabrics, using the mathematical model of Principal Component Analysis (PCA). There are a number of variables with a certain influence on the quality of fabrics, but some of these variables are more important than others, so it is useful to identify those variables for a better understanding of the factors that can lead to improved fabric quality. A solution to this problem can be the application of a method of factorial analysis, the so-called Principal Component Analysis, with the final goal of establishing and analyzing those variables which significantly influence the internal structure of combed wool fabrics according to weave (armure) type. Applying PCA yields a small number of linear combinations (principal components) of a set of variables describing the internal structure of the fabrics, which retain as much information as possible from the original variables. Data analysis is an important initial step in decision making, allowing identification of the causes that lead to decision-making situations. Thus it is the action of transforming the initial data in order to extract useful information and to facilitate reaching conclusions. The process of data analysis can be defined as a sequence of steps aimed at formulating hypotheses, collecting primary information and validating it, constructing the mathematical model describing the phenomenon, and reaching conclusions about the behavior of this model.

  14. Particle Markov Chain Monte Carlo Techniques of Unobserved Component Time Series Models Using Ox

    DEFF Research Database (Denmark)

    Nonejad, Nima

    This paper details Particle Markov chain Monte Carlo techniques for the analysis of unobserved component time series models using several economic data sets. PMCMC combines the particle filter with the Metropolis-Hastings algorithm. Overall, PMCMC provides a very compelling, computationally fast... and efficient framework for estimation. These advantages are used, for instance, to estimate stochastic volatility models with leverage effect or with Student-t distributed errors. We also model changing time series characteristics of the US inflation rate by considering a heteroskedastic ARFIMA model where...

  15. Assessing prescription drug abuse using functional principal component analysis (FPCA) of wastewater data.

    Science.gov (United States)

    Salvatore, Stefania; Røislien, Jo; Baz-Lomba, Jose A; Bramness, Jørgen G

    2017-03-01

    Wastewater-based epidemiology is an alternative method for estimating collective drug use in a community. We applied functional data analysis, a statistical framework developed for analysing curve data, to investigate weekly temporal patterns in wastewater measurements of three prescription drugs with known abuse potential: methadone, oxazepam and methylphenidate, comparing them to positive and negative control drugs. Sewage samples were collected in February 2014 from a wastewater treatment plant in Oslo, Norway. The weekly pattern of each drug was extracted by fitting generalized additive models, using trigonometric functions to model the cyclic behaviour. From the weekly component, the main temporal features were then extracted using functional principal component analysis. Results are presented through the functional principal components (FPCs) and corresponding FPC scores. Clinically, the most important weekly feature of the wastewater-based epidemiology data was the second FPC, representing the difference between the average midweek level and a peak during the weekend, indicating possible recreational use of a drug on the weekend. Estimated scores on this FPC indicated recreational use of methylphenidate, with a high weekend peak, but not of methadone and oxazepam. The functional principal component analysis uncovered clinically important temporal features of the weekly patterns of the use of prescription drugs detected from wastewater analysis. This may be used as a post-marketing surveillance method to monitor prescription drugs with abuse potential. Copyright © 2016 John Wiley & Sons, Ltd.
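
    A simplified sketch of the FPCA step on weekly curves (the study first fits smooth GAM-based weekly patterns, which is skipped here): curves are centered and an SVD yields the FPCs and week-by-week scores; the synthetic hourly curves with a weekend elevation are an assumption for illustration.

      import numpy as np

      rng = np.random.default_rng(1)
      hours = np.arange(7 * 24)
      weekend = ((hours // 24) >= 5).astype(float)           # Saturday-Sunday indicator
      curves = (10.0 + 2.0 * np.sin(2 * np.pi * hours / 24)  # daily cycle
                + rng.normal(0.0, 0.5, (20, hours.size))     # 20 noisy weeks
                + rng.normal(1.5, 0.5, (20, 1)) * weekend)   # weekend elevation

      mean_curve = curves.mean(axis=0)
      U, s, Vt = np.linalg.svd(curves - mean_curve, full_matrices=False)
      fpcs = Vt[:2]                          # first two functional principal components
      scores = U[:, :2] * s[:2]              # week-by-week FPC scores
      print((s ** 2 / np.sum(s ** 2))[:2])   # fraction of variance explained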

  16. An elementary components of variance analysis for multi-center quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1977-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality control (QC) studies. Statistical analysis methods for such studies using an 'analysis of variance with components of variance estimation' are discussed. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Components of variance analysis also provides an intelligent way to combine the results of several QC samples run at different levels, from which we may decide if any component varies systematically with dose level; if not, pooling of estimates becomes possible. We consider several possible relationships of the standard deviation to the laboratory mean. Each relationship corresponds to an underlying statistical model and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine if an appropriate model has been chosen, although the exact functional relationship of standard deviation to lab mean may be difficult to establish. Appropriate graphical display of the data aids visual understanding of the data. A plot of the ranked standard deviation vs. the ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean. (orig.)
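
    For the balanced one-way case, the between-laboratory and within-laboratory components can be estimated by the classical method of moments from the expected mean squares; the sketch below uses simulated data with known components and is a simplification of the full between-laboratory/between-assay/within-assay decomposition described above.

      import numpy as np

      def variance_components(data):
          # data: (n_labs, n_replicates) balanced QC results;
          # returns (between-laboratory variance, within-laboratory variance)
          k, n = data.shape
          lab_means = data.mean(axis=1)
          ms_between = n * np.sum((lab_means - data.mean()) ** 2) / (k - 1)
          ms_within = np.sum((data - lab_means[:, None]) ** 2) / (k * (n - 1))
          return max((ms_between - ms_within) / n, 0.0), ms_within

      rng = np.random.default_rng(2)
      qc = rng.normal(100.0, 5.0, (8, 1)) + rng.normal(0.0, 2.0, (8, 10))
      print(variance_components(qc))   # roughly (25, 4), the simulated components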

  17. A Principal Component Analysis of 39 Scientific Impact Measures

    Science.gov (United States)

    Bollen, Johan; Van de Sompel, Herbert

    2009-01-01

    Background The impact of scientific publications has traditionally been expressed in terms of citation counts. However, scientific activity has moved online over the past decade. To better capture scientific impact in the digital era, a variety of new impact measures has been proposed on the basis of social network analysis and usage log data. Here we investigate how these new measures relate to each other, and how accurately and completely they express scientific impact. Methodology We performed a principal component analysis of the rankings produced by 39 existing and proposed measures of scholarly impact that were calculated on the basis of both citation and usage log data. Conclusions Our results indicate that the notion of scientific impact is a multi-dimensional construct that cannot be adequately measured by any single indicator, although some measures are more suitable than others. The commonly used citation Impact Factor is not positioned at the core of this construct, but at its periphery, and should thus be used with caution. PMID:19562078

  18. Analysis of contaminants on electronic components by reflectance FTIR spectroscopy

    International Nuclear Information System (INIS)

    Griffith, G.W.

    1982-09-01

    The analysis of electronic component contaminants by infrared spectroscopy is often a difficult process. Most of the contaminants are very small, which necessitates the use of microsampling techniques. Beam condensers will provide the required sensitivity, but most require that the sample be removed from the substrate before analysis. Since removal can be difficult and time-consuming, it is usually an undesirable approach. Micro-ATR work can also be exasperating, due to the difficulty of positioning the sample at the correct place under the ATR plate in order to record a spectrum. This paper describes a modified reflection beam condenser which has been adapted to a Nicolet 7199 FTIR. The sample beam is directed onto the sample surface and reflected from the substrate back to the detector. A micropositioning XYZ stage and a close-focusing telescope are used to position the contaminant directly under the infrared beam. It is possible to analyze contaminants on 1 mm wide leads surrounded by an epoxy matrix using this device. Typical spectra of contaminants found on small circuit boards are included

  19. Cnn Based Retinal Image Upscaling Using Zero Component Analysis

    Science.gov (United States)

    Nasonov, A.; Chesnakov, K.; Krylov, A.

    2017-05-01

    The aim of the paper is to obtain high-quality image upscaling for the noisy images that are typical in medical image processing. A new training scenario for a convolutional neural network based image upscaling method is proposed. Its main idea is a novel dataset preparation method for deep learning. The dataset contains pairs of noisy low-resolution images and corresponding noiseless high-resolution images. To achieve better results at edges and in textured areas, Zero Component Analysis is applied to these images. The upscaling results are compared with other state-of-the-art methods like DCCI, SI-3 and SRCNN on noisy medical ophthalmological images. Objective evaluation of the results confirms the high quality of the proposed method. Visual analysis shows that fine details and structures like blood vessels are preserved, the noise level is reduced, and no artifacts or non-existing details are added. These properties are essential in establishing a retinal diagnosis, so the proposed algorithm is recommended for use in real medical applications.
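
    A sketch of ZCA whitening as it is commonly applied to image patches (the paper's exact preprocessing pipeline is not detailed in the abstract): features are decorrelated while staying close to the original pixel space; the patch data and the eps regularizer below are assumptions.

      import numpy as np

      def zca_whiten(X, eps=1e-2):
          # X: (n_samples, n_features) flattened patches; eps controls how
          # strongly low-variance directions are amplified
          Xc = X - X.mean(axis=0)
          cov = Xc.T @ Xc / X.shape[0]
          w, V = np.linalg.eigh(cov)
          W = V @ np.diag(1.0 / np.sqrt(w + eps)) @ V.T   # ZCA transform
          return Xc @ W

      rng = np.random.default_rng(3)
      patches = rng.normal(size=(1000, 64)) @ rng.normal(size=(64, 64))
      white = zca_whiten(patches)
      print(np.round(np.cov(white, rowvar=False)[:3, :3], 2))  # near identity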

  20. A principal component analysis of 39 scientific impact measures.

    Directory of Open Access Journals (Sweden)

    Johan Bollen

    Full Text Available BACKGROUND: The impact of scientific publications has traditionally been expressed in terms of citation counts. However, scientific activity has moved online over the past decade. To better capture scientific impact in the digital era, a variety of new impact measures has been proposed on the basis of social network analysis and usage log data. Here we investigate how these new measures relate to each other, and how accurately and completely they express scientific impact. METHODOLOGY: We performed a principal component analysis of the rankings produced by 39 existing and proposed measures of scholarly impact that were calculated on the basis of both citation and usage log data. CONCLUSIONS: Our results indicate that the notion of scientific impact is a multi-dimensional construct that cannot be adequately measured by any single indicator, although some measures are more suitable than others. The commonly used citation Impact Factor is not positioned at the core of this construct, but at its periphery, and should thus be used with caution.

  1. Sensitivity Analysis on Elbow Piping Components in Seismically Isolated NPP under Seismic Loading

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Hee Kun; Hahm, Dae Gi; Kim, Min Kyu [KAERI, Daejeon (Korea, Republic of); Jeon, Bub Gyu; Kim, Nam Sik [Pusan National University, Busan (Korea, Republic of)

    2016-05-15

    In this study, the FE model is verified using specimen test results, and simulations with parameter variations are conducted. Effective parameters will be randomly sampled and used as input values for simulations to be applied to the fragility analysis. Pipelines are representative of such components because they can undergo larger displacements when they are supported on both isolated and non-isolated structures simultaneously. Elbows in particular are critical components of pipes under severe loading conditions such as earthquake action, because strain accumulates in them during the repeated bending of the pipe. Therefore, the seismic performance of pipe elbow components should be examined thoroughly based on fragility analysis. Fragility assessment of interface piping should take different sources of uncertainty into account. However, selection of the important sources and repeated tests with many random input values are very time-consuming and expensive, so numerical analysis is commonly used. In the present study, a finite element (FE) model of the elbow component is validated using the dynamic test results of elbow components. Using the verified model, sensitivity analysis is implemented as a preliminary step toward the seismic fragility analysis of the piping system. Several important input parameters are selected, and how their uncertainty is apportioned to the uncertainty of the elbow response is studied. Piping elbows are critical components under cyclic loading conditions as they are subjected to large displacements. In a seismically isolated NPP, the seismic capacity of the piping system should be evaluated with caution. Seismic fragility assessment preliminarily needs a parameter sensitivity analysis of the output of interest with different input parameter values.

  2. Distribution system modeling and analysis

    CERN Document Server

    Kersting, William H

    2001-01-01

    For decades, distribution engineers did not have the sophisticated tools developed for analyzing transmission systems-often they had only their instincts. Things have changed, and we now have computer programs that allow engineers to simulate, analyze, and optimize distribution systems. Powerful as these programs are, however, without a real understanding of the operating characteristics of a distribution system, engineers using the programs can easily make serious errors in their designs and operating procedures. Distribution System Modeling and Analysis helps prevent those errors. It gives readers a basic understanding of the modeling and operating characteristics of the major components of a distribution system. One by one, the author develops and analyzes each component as a stand-alone element, then puts them all together to analyze a distribution system comprising the various shunt and series devices for power-flow and short-circuit studies. He includes the derivation of all models and includes many num...

  3. System level modeling and component level control of fuel cells

    Science.gov (United States)

    Xue, Xingjian

    This dissertation investigates fuel cell systems and related technologies in three aspects: (1) system-level dynamic modeling of both the PEM fuel cell (PEMFC) and the solid oxide fuel cell (SOFC); (2) condition monitoring scheme development for the PEM fuel cell system using a model-based statistical method; and (3) strategy and algorithm development for precision control with potential application in energy systems. The dissertation first presents a system-level dynamic modeling strategy for PEM fuel cells. It is well known that water plays a critical role in PEM fuel cell operations. It makes the membrane function appropriately and improves durability. The low-temperature operating conditions, however, impose modeling difficulties in characterizing the liquid-vapor two-phase change phenomenon, which becomes even more complex under dynamic operating conditions. This dissertation proposes an innovative method to characterize this phenomenon, and builds a comprehensive model for the PEM fuel cell at the system level. The model features a complete characterization of multi-physics dynamic coupling effects with the inclusion of dynamic phase change. The model is validated using Ballard stack experimental results from the open literature. The system behavior and the internal coupling effects are also investigated using this model under various operating conditions. The anode-supported tubular SOFC is also investigated in the dissertation. While the Nernst potential plays a central role in characterizing the electrochemical performance, the traditional Nernst equation may lead to incorrect analysis results under dynamic operating conditions due to the current reverse flow phenomenon. This dissertation presents a systematic study in this regard, incorporating a modified Nernst potential expression and heat/mass transfer into the analysis. The model is used to investigate the limitations and optimal results of various operating conditions; it can also be utilized to perform the

  4. Modeling of a remote inspection system for NSSS components

    International Nuclear Information System (INIS)

    Choi, Yoo Rark; Kim, Jae Hee; Lee, Jae Cheol

    2003-03-01

    Safety inspection of safety-critical units of nuclear power plants has been performed using off-line technology, so the safety inspection systems and inspection data cannot be accessed over a network such as the Internet. To overcome these problems, we are building an on-line control and data access system based on WWW and JAVA technologies which can be used during plant operation. Users can access the inspection systems and inspection data using only a web browser. This report discusses the analysis of the existing remote system and essential techniques such as the Web, JAVA, the client/server model, and the multi-tier model. It also discusses the system modeling that we have developed using these techniques and provides solutions for developing an on-line control and data access system

  5. Stochastic Models of Defects in Wind Turbine Drivetrain Components

    DEFF Research Database (Denmark)

    Rafsanjani, Hesam Mirzaei; Sørensen, John Dalsgaard

    2013-01-01

    The drivetrain in a wind turbine nacelle typically consists of a variety of heavily loaded components, like the main shaft, bearings, gearbox and generator. The variations in environmental load challenge the performance of all the components of the drivetrain. Failure of each of these components...

  6. On the structure of dynamic principal component analysis used in statistical process monitoring

    DEFF Research Database (Denmark)

    Vanhatalo, Erik; Kulahci, Murat; Bergquist, Bjarne

    2017-01-01

    When principal component analysis (PCA) is used for statistical process monitoring it relies on the assumption that data are time independent. However, industrial data will often exhibit serial correlation. Dynamic PCA (DPCA) has been suggested as a remedy for high-dimensional and time...... for determining the number of principal components to retain. The number of retained principal components is determined by visual inspection of the serial correlation in the squared prediction error statistic, Q (SPE), together with the cumulative explained variance of the model. The methods are illustrated using...... driven method to determine the maximum number of lags in DPCA with a foundation in multivariate time series analysis. The method is based on the behavior of the eigenvalues of the lagged autocorrelation and partial autocorrelation matrices. Given a specific lag structure we also propose a method...
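
    A minimal sketch of the DPCA data arrangement the record refers to: each observation is augmented with lagged copies of all variables and ordinary PCA is run on the augmented matrix, so that dynamic (serial) relations appear as static correlations; the simulated series and the choice of two lags are illustrative, not the paper's lag-selection method.

      import numpy as np

      def dpca_matrix(X, lags):
          # Augment each row of X with `lags` lagged copies of all variables:
          # returns (n_samples - lags, n_vars * (lags + 1))
          n = X.shape[0] - lags
          return np.hstack([X[lags - j : lags - j + n] for j in range(lags + 1)])

      rng = np.random.default_rng(4)
      e = rng.normal(size=500)
      x1 = np.convolve(e, [1.0, 0.8, 0.3])[:500]   # serially correlated signal
      X = np.column_stack([x1, np.roll(x1, 1) + 0.1 * rng.normal(size=500)])

      Xa = dpca_matrix(X, lags=2)
      Xa -= Xa.mean(axis=0)
      eigvals = np.linalg.svd(Xa, compute_uv=False) ** 2 / (Xa.shape[0] - 1)
      print(eigvals / eigvals.sum())   # scree of the lag-augmented covariance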

  7. Task-related component analysis for functional neuroimaging and application to near-infrared spectroscopy data.

    Science.gov (United States)

    Tanaka, Hirokazu; Katura, Takusige; Sato, Hiroki

    2013-01-01

    Reproducibility of experimental results lies at the heart of scientific disciplines. Here we propose a signal processing method that extracts task-related components by maximizing their reproducibility across task periods in neuroimaging data. Unlike hypothesis-driven methods such as general linear models, no specific time courses are presumed, and unlike data-driven approaches such as independent component analysis, no arbitrary interpretation of components is needed. A task-related component is constructed as a linear, weighted sum of multiple time courses, with its weights optimized so as to maximize inter-block correlations (CorrMax) or covariances (CovMax). Our analysis method is referred to as task-related component analysis (TRCA). The covariance maximization is formulated as a Rayleigh-Ritz eigenvalue problem, and the corresponding eigenvectors give candidate task-related components. In addition, a systematic statistical test based on the eigenvalues is proposed, so task-related and task-unrelated components are classified objectively and automatically. The proposed test of statistical significance is found to be independent of the degree of autocorrelation in the data if the task duration is sufficiently longer than the temporal scale of the autocorrelation, so TRCA can be applied to data with autocorrelation without any modification. We demonstrate that simple extensions of TRCA can provide the most distinctive signals for two tasks and can integrate multiple modalities of information to remove task-unrelated artifacts. TRCA was successfully applied to synthetic data as well as near-infrared spectroscopy (NIRS) data of finger tapping. There were two statistically significant task-related components; one was a hemodynamic response, and the other was a piece-wise linear time course. In summary, we conclude that TRCA has a wide range of applications in multi-channel biophysical and behavioral measurements. Copyright © 2012 Elsevier Inc. All rights reserved.
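
    A sketch of the covariance-maximizing (CovMax) variant under stated assumptions: summing cross-block covariances gives a matrix S, the covariance of the concatenated data gives Q, and the channel weights solve the generalized eigenvalue problem S w = lambda Q w; the synthetic task-locked signal is invented, and the paper's eigenvalue-based statistical test is not reproduced.

      import numpy as np
      from scipy.linalg import eigh

      def trca(blocks):
          # blocks: list of (n_channels, n_samples) arrays, one per task block
          blocks = [b - b.mean(axis=1, keepdims=True) for b in blocks]
          X = np.hstack(blocks)
          Q = X @ X.T                           # covariance of concatenated data
          S = sum(bi @ bj.T + bj @ bi.T         # summed cross-block covariances
                  for i, bi in enumerate(blocks) for bj in blocks[:i])
          vals, vecs = eigh(S, Q)               # generalized eigenproblem
          return vecs[:, ::-1], vals[::-1]      # sorted by reproducibility

      rng = np.random.default_rng(5)
      common = np.sin(np.linspace(0, 6 * np.pi, 300))   # task-locked signal
      blocks = [0.5 * common + rng.normal(0, 1, (4, 300)) for _ in range(6)]
      W, lam = trca(blocks)
      print(lam[:3])   # the leading eigenvalue marks the reproducible component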

  8. State of the art seismic analysis for CANDU reactor structure components using condensation method

    Energy Technology Data Exchange (ETDEWEB)

    Soliman, S A; Ibraham, A M; Hodgson, S [Atomic Energy of Canada Ltd., Saskatoon, SK (Canada)

    1996-12-31

    The reactor structure assembly seismic analysis is a relatively complex process because of the intricate geometry with many different discontinuities, and because of the hydraulic attached mass which follows the structure during its vibration. In order to simulate the behaviour of the reactor structure assembly with reasonable accuracy, detailed finite element models are generated and used for both modal and stress analysis. The Guyan reduction condensation method was used in the analysis. The attached mass, which includes the fluid mass contained in the components plus the added mass accounting for the inertia of the surrounding fluid entrained by the accelerating structure immersed in the fluid, was calculated and attached to the vibrating structures. The masses of the attached components supported partly or totally by the assembly, which include piping, reactivity control units, end fittings, etc., are also considered in the analysis. (author). 4 refs., 6 tabs., 4 figs.
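
    A compact sketch of the Guyan (static) condensation used here: degrees of freedom are partitioned into retained masters and condensed slaves, and the stiffness and mass matrices are reduced onto the masters; the 3-DOF chain at the end is a toy example, not the reactor model.

      import numpy as np

      def guyan_reduce(K, M, masters):
          # Partition DOFs into masters (kept) and slaves (condensed out);
          # exact for static stiffness, approximate for the mass matrix.
          n = K.shape[0]
          slaves = [i for i in range(n) if i not in masters]
          Kss = K[np.ix_(slaves, slaves)]
          G = -np.linalg.solve(Kss, K[np.ix_(slaves, masters)])  # slave recovery
          T = np.vstack([np.eye(len(masters)), G])               # condensation map
          Kr = K[np.ix_(masters, masters)] + K[np.ix_(masters, slaves)] @ G
          order = list(masters) + slaves                         # reorder M to match T
          Mr = T.T @ M[np.ix_(order, order)] @ T
          return Kr, Mr

      # 3-DOF spring-mass chain; retain DOFs 0 and 2
      K = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 1.0]])
      M = np.eye(3)
      Kr, Mr = guyan_reduce(K, M, masters=[0, 2])
      print(np.sqrt(np.linalg.eigvals(np.linalg.solve(Mr, Kr)).real))  # rad/s estimates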

  9. Modeling and validation of existing VAV system components

    Energy Technology Data Exchange (ETDEWEB)

    Nassif, N.; Kajl, S.; Sabourin, R. [Ecole de Technologie Superieure, Montreal, PQ (Canada)

    2004-07-01

    The optimization of supervisory control strategies and local-loop controllers can improve the performance of HVAC (heating, ventilating, air-conditioning) systems. In this study, the component model of the fan, the damper and the cooling coil were developed and validated against monitored data of an existing variable air volume (VAV) system installed at Montreal's Ecole de Technologie Superieure. The measured variables that influence energy use in individual HVAC models included: (1) outdoor and return air temperature and relative humidity, (2) supply air and water temperatures, (3) zone airflow rates, (4) supply duct, outlet fan, mixing plenum static pressures, (5) fan speed, and (6) minimum and principal damper and cooling and heating coil valve positions. The additional variables that were considered, but not measured were: (1) fan and outdoor airflow rate, (2) inlet and outlet cooling coil relative humidity, and (3) liquid flow rate through the heating or cooling coils. The paper demonstrates the challenges of the validation process when monitored data of existing VAV systems are used. 7 refs., 11 figs.

  10. Failure characteristic analysis of a component on standby state

    International Nuclear Information System (INIS)

    Shin, Sungmin; Kang, Hyungook

    2013-01-01

    Periodic operation of one type of component, however, can accelerate aging effects that increase component unavailability, while for other component types the aging effect caused by operation can be ignored, so frequent operation decreases their unavailability. Thus, to obtain optimal unavailability, the proper operation interval and method should be studied considering the failure characteristics of each component. Component failure information is given according to the main causes of failure as they develop over time. However, to obtain the optimal unavailability, the proper interval of operation for inspection should be decided considering the time-dependent and time-independent causes together. According to this study, a gradually shortened inspection interval yields better component unavailability than a fixed interval
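
    As a hedged illustration of the trade-off this optimization balances (not the paper's own model), the classic mean-unavailability approximation for a periodically tested standby component combines undetected failures accumulating between tests with downtime during each test; the failure rate and test downtime below are assumed values.

      import numpy as np

      lam = 1.0e-4   # assumed standby failure rate, per hour
      tau = 2.0      # assumed downtime per test/inspection, hours
      T = np.linspace(50.0, 2000.0, 500)
      U = lam * T / 2.0 + tau / T        # mean unavailability approximation
      print(T[np.argmin(U)], np.sqrt(2.0 * tau / lam))  # numeric vs analytic optimum T*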

  11. Personality disorders in substance abusers: Validation of the DIP-Q through principal components factor analysis and canonical correlation analysis

    Directory of Open Access Journals (Sweden)

    Hesse Morten

    2005-05-01

    Full Text Available Abstract Background Personality disorders are common in substance abusers. Self-report questionnaires that can aid in the assessment of personality disorders are commonly used in assessment, but are rarely validated. Methods The Danish DIP-Q as a measure of co-morbid personality disorders in substance abusers was validated through principal components factor analysis and canonical correlation analysis. A 4-component structure was constructed based on 238 protocols, representing antagonism, neuroticism, introversion and conscientiousness. The structure was compared with (a) a 4-factor solution from the DIP-Q in a sample of Swedish drug and alcohol abusers (N = 133), and (b) a consensus 4-component solution based on a meta-analysis of published correlation matrices of dimensional personality disorder scales. Results It was found that the 4-factor model of personality was congruent across the Danish and Swedish samples, and showed good congruence with the consensus model. A canonical correlation analysis was conducted on a subset of the Danish sample with staff ratings of pathology. Three factors that correlated highly between the two variable sets were found. These variables were highly similar to the first three factors from the principal components analysis: antagonism, neuroticism and introversion. Conclusion The findings support the validity of the DIP-Q as a measure of DSM-IV personality disorders in substance abusers.

  12. ROCK PROPERTIES MODEL ANALYSIS MODEL REPORT

    International Nuclear Information System (INIS)

    Clinton Lum

    2002-01-01

    (4) Generation of derivative property models via linear coregionalization with porosity; (5) Post-processing of the simulated models to impart desired secondary geologic attributes and to create summary and uncertainty models; and (6) Conversion of the models into real-world coordinates. The conversion to real-world coordinates is performed as part of the integration of the RPM into the Integrated Site Model (ISM) 3.1; this activity is not part of the current analysis. The ISM provides a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site and consists of three components: (1) the Geologic Framework Model (GFM); (2) the RPM, which is the subject of this AMR; and (3) the Mineralogic Model. The interrelationship of the three components of the ISM and their interface with downstream uses are illustrated in Figure 1. Figure 2 shows the geographic boundaries of the RPM and other component models of the ISM

  13. Towards automatic analysis of dynamic radionuclide studies using principal-components factor analysis

    International Nuclear Information System (INIS)

    Nigran, K.S.; Barber, D.C.

    1985-01-01

    A method is proposed for the automatic analysis of dynamic radionuclide studies using the mathematical technique of principal-components factor analysis. This method is considered a possible alternative to the conventional manual regions-of-interest method widely used. The method emphasises the importance of introducing a priori information into the analysis about the physiology of at least one of the functional structures in a study. Information is added by using suitable mathematical models to describe the underlying physiological processes. A single physiological factor is extracted representing the particular dynamic structure of interest. Two spaces, 'study space' S and 'theory space' T, are defined in forming the concept of the intersection of spaces. A one-dimensional intersection space is computed. An example from a dynamic 99mTc-DTPA kidney study is used to demonstrate the principle inherent in the proposed method. The method requires no correction for blood background activity, which is necessary when processing by the manual method. Careful isolation of the kidney by means of regions of interest is not required. The method is therefore less prone to operator influence and can be automated. (author)

  14. 30 seismic analysis of FBR vessels: Coupling between components and vessels, fluid communications, imperfections

    International Nuclear Information System (INIS)

    Gantenbein, F.; Gibert, R.J.; Aita, S.; Durandet, E.

    1988-01-01

    The internal structures of a loop-type breeder reactor such as SUPERPHENIX are composed of axisymmetric shells separated by fluid volumes. Seismic analysis is usually performed with an axisymmetric finite element model taking fluid-structure interaction into account, but the geometry is in fact 3D due to components, small communications between fluid volumes, and imperfections in the vessels. The methods to take this 3D behaviour into account are based on Fourier decomposition of the motion and on substructuring. They are briefly described in the following chapters. The influence of components and of small communications on a block reactor similar to SPX1 will also be presented. 15 refs, 20 figs

  15. Space-time latent component modeling of geo-referenced health data.

    Science.gov (United States)

    Lawson, Andrew B; Song, Hae-Ryoung; Cai, Bo; Hossain, Md Monir; Huang, Kun

    2010-08-30

    Latent structure models have been proposed in many applications. For space-time health data it is often important to be able to find the underlying trends in time, which are supported by subsets of small areas. Latent structure modeling is one such approach to this analysis. This paper presents a mixture-based approach that can be applied to component selection. The analysis of a Georgia ambulatory asthma county-level data set is presented and a simulation-based evaluation is made. Copyright (c) 2010 John Wiley & Sons, Ltd.

  16. Individual-Level Concentrations of Fine Particulate Matter Chemical Components and Subclinical Atherosclerosis: A Cross-Sectional Analysis Based on 2 Advanced Exposure Prediction Models in the Multi-Ethnic Study of Atherosclerosis

    Science.gov (United States)

    Kim, Sun-Young; Sheppard, Lianne; Kaufman, Joel D.; Bergen, Silas; Szpiro, Adam A.; Larson, Timothy V.; Adar, Sara D.; Diez Roux, Ana V.; Polak, Joseph F.; Vedal, Sverre

    2014-01-01

    Long-term exposure to outdoor particulate matter with an aerodynamic diameter less than or equal to 2.5 µm (PM2.5) has been associated with cardiovascular morbidity and mortality. The chemical composition of PM2.5 that may be most responsible for producing these associations has not been identified. We assessed cross-sectional associations between long-term concentrations of PM2.5 and 4 of its chemical components (sulfur, silicon, elemental carbon, and organic carbon (OC)) and subclinical atherosclerosis, measured as carotid intima-media thickness (CIMT) and coronary artery calcium, between 2000 and 2002 among 5,488 Multi-Ethnic Study of Atherosclerosis participants residing in 6 US metropolitan areas. Long-term concentrations of PM2.5 components at participants' homes were predicted using both city-specific spatiotemporal models and a national spatial model. The estimated differences in CIMT associated with interquartile-range increases in sulfur, silicon, and OC predictions from the spatiotemporal model were 0.022 mm (95% confidence interval (CI): 0.014, 0.031), 0.006 mm (95% CI: 0.000, 0.012), and 0.026 mm (95% CI: 0.019, 0.034), respectively. Findings were generally similar using the national spatial model predictions but were often sensitive to adjustment for city. We did not find strong evidence of associations with coronary artery calcium. Long-term concentrations of sulfur and OC, and possibly silicon, were associated with CIMT using 2 distinct exposure prediction modeling approaches. PMID:25164422

  17. Development of guidelines for inelastic analysis in design of fast reactor components

    International Nuclear Information System (INIS)

    Nakamura, Kyotada; Kasahara, Naoto; Morishita, Masaki; Shibamoto, Hiroshi; Inoue, Kazuhiko; Nakayama, Yasunari

    2008-01-01

    Interim guidelines for the application of inelastic analysis to the design of fast reactor components were developed. These guidelines are referenced by the 'Elevated Temperature Structural Design Guide for Commercialized Fast Reactor (FDS)'. The basic policies of the guidelines are to give more rational predictions than the elastic analysis approach while guaranteeing conservative results for design conditions. The guidelines recommend two kinds of constitutive equations for estimating strains conservatively. They also provide methods for modeling load histories and for estimating fatigue and creep damage based on the results of inelastic analysis. The guidelines were applied to typical design examples, and the results were summarized as exemplars to support users.

  18. Major component analysis of dynamic networks of physiologic organ interactions

    International Nuclear Information System (INIS)

    Liu, Kang K L; Ma, Qianli D Y; Ivanov, Plamen Ch; Bartsch, Ronny P

    2015-01-01

    The human organism is a complex network of interconnected organ systems, where the behavior of one system affects the dynamics of other systems. Identifying and quantifying dynamical networks of diverse physiologic systems under varied conditions is a challenge due to the complexity in the output dynamics of the individual systems and the transient and nonlinear characteristics of their coupling. We introduce a novel computational method based on the concept of time delay stability and major component analysis to investigate how organ systems interact as a network to coordinate their functions. We analyze a large database of continuously recorded multi-channel physiologic signals from healthy young subjects during night-time sleep. We identify a network of dynamic interactions between key physiologic systems in the human organism. Further, we find that each physiologic state is characterized by a distinct network structure with different relative contribution from individual organ systems to the global network dynamics. Specifically, we observe a gradual decrease in the strength of coupling of heart and respiration to the rest of the network with transition from wake to deep sleep, and in contrast, an increased relative contribution to network dynamics from chin and leg muscle tone and eye movement, demonstrating a robust association between network topology and physiologic function. (paper)

  19. Sensor Failure Detection of FASSIP System using Principal Component Analysis

    Science.gov (United States)

    Sudarno; Juarsa, Mulya; Santosa, Kussigit; Deswandri; Sunaryo, Geni Rina

    2018-02-01

    In the Fukushima Daiichi nuclear reactor accident in Japan, the damage to the core and pressure vessel was caused by the failure of the active cooling system (the diesel generators were inundated by the tsunami). Research on passive cooling systems for nuclear power plants is therefore performed to improve the safety of nuclear reactors. The FASSIP system (Passive System Simulation Facility) is an installation used to study the characteristics of passive cooling systems at nuclear power plants. Accurate sensor measurement in the FASSIP system is essential, because it is the basis for determining the characteristics of a passive cooling system. In this research, a sensor failure detection method for the FASSIP system is developed, so that indications of sensor failure can be detected early. The method used is Principal Component Analysis (PCA) to reduce the dimensionality of the sensor data, with the Squared Prediction Error (SPE) and Hotelling's T² statistic as criteria for detecting sensor failure indications. The results show that the PCA method is capable of detecting the occurrence of a failure at any sensor.
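
    As a hedged illustration of the detection statistics named above, a minimal Python sketch; the data, component count, and control limits are placeholders, not the FASSIP implementation:

    ```python
    # Sketch of PCA-based fault detection: train on normal operation, then score
    # new samples with SPE (residual space) and Hotelling's T^2 (model space).
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 10))          # stand-in for normal sensor data
    mu, sd = X_train.mean(0), X_train.std(0)
    X_train = (X_train - mu) / sd

    pca = PCA(n_components=3).fit(X_train)

    def spe_and_t2(x_raw):
        """Return (SPE, T^2) for one raw sample."""
        x = (x_raw - mu) / sd
        scores = pca.transform(x[None, :])[0]
        recon = pca.inverse_transform(scores[None, :])[0]
        spe = np.sum((x - recon) ** 2)
        t2 = np.sum(scores ** 2 / pca.explained_variance_)
        return spe, t2

    # A sample whose SPE or T^2 exceeds its control limit flags a possible fault.
    ```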

  20. Finite element elastic-plastic analysis of LMFBR components

    International Nuclear Information System (INIS)

    Levy, A.; Pifko, A.; Armen, H. Jr.

    1978-01-01

    The present effort involves the development of computationally efficient finite element methods for accurately predicting the isothermal elastic-plastic three-dimensional response of thick and thin shell structures subjected to mechanical and thermal loads. This work will be used as the basis for further development of analytical tools to be used to verify the structural integrity of liquid metal fast breeder reactor (LMFBR) components. The methods presented here have been implemented into the three-dimensional solid element module (HEX) of the Grumman PLANS finite element program. These methods include the use of optimal stress points as well as a variable number of stress points within an element. This allows monitoring the stress history at many points within an element and hence provides an accurate representation of the elastic-plastic boundary using a minimum number of degrees of freedom. Also included is an improved thermal stress analysis capability in which the temperature variation and corresponding thermal strain variation are represented by the same functional form as the displacement variation. Various problems are used to demonstrate these improved capabilities. (Auth.)

  1. Reliability prediction system based on the failure rate model for electronic components

    International Nuclear Information System (INIS)

    Lee, Seung Woo; Lee, Hwa Ki

    2008-01-01

    Although many methodologies for predicting the reliability of electronic components have been developed, their reliability estimates may be subjective under a particular set of circumstances, and therefore reliability is not easy to quantify. Among the reliability prediction methods are the statistical analysis based method, the similarity analysis method based on an external failure rate database, and the method based on the physics-of-failure model. In this study, we developed a system by which the reliability of electronic components can be predicted, implementing the statistical analysis method of reliability prediction in an easily usable form. The failure rate models applied are MIL-HDBK-217F N2, PRISM, and Telcordia (Bellcore), and these were compared with a general-purpose system in order to validate the effectiveness of the developed system. Because it can predict the reliability of electronic components from the design stage, the system we have developed is expected to contribute to enhancing the reliability of electronic components.
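
    As a hedged illustration of the failure-rate-model idea (not the developed system), a parts-count style calculation; all rates and factors below are invented, not values from the MIL-HDBK-217F, PRISM, or Telcordia tables:

    ```python
    # Illustrative parts-count failure rate prediction with invented values.
    parts = [
        # (name, quantity, base failure rate [failures / 1e6 h], quality factor)
        ("resistor",   40, 0.002, 1.0),
        ("capacitor",  25, 0.004, 1.5),
        ("digital_ic",  6, 0.050, 2.0),
    ]

    lambda_total = sum(n * lam * pi_q for _, n, lam, pi_q in parts)
    mtbf_hours = 1e6 / lambda_total
    print(f"lambda = {lambda_total:.3f} / 1e6 h, MTBF = {mtbf_hours:,.0f} h")
    ```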

  2. Learning Algorithms for Audio and Video Processing: Independent Component Analysis and Support Vector Machine Based Approaches

    National Research Council Canada - National Science Library

    Qi, Yuan

    2000-01-01

    In this thesis, we propose two new machine learning schemes, a subband-based Independent Component Analysis scheme and a hybrid Independent Component Analysis/Support Vector Machine scheme, and apply...

  3. Derivation of design response spectra for analysis and testing of components and systems

    International Nuclear Information System (INIS)

    Krutzik, N.

    1996-01-01

    Some institutions participating in the Benchmark Project performed parallel calculations for the WWER-1000 Kozloduy NPP. The investigations were based on various mathematical models and procedures for the consideration of soil-structure interaction effects, applying uniform soil dynamic and seismological input data throughout. The methods, mathematical models and dynamic response results were evaluated, discussed in detail and finally compared across the different structural models and soil representations, with the aim of deriving final enveloped and smoothed dynamic response data (benchmark response spectra). These should be used for requalification, by analysis and testing, of the mechanical and electrical components and systems located in this type of reactor building

  4. Nonlinear seismic analysis of a reactor structure with impact between core components

    International Nuclear Information System (INIS)

    Hill, R.G.

    1975-01-01

    The seismic analysis of the FFTF-PIOTA (Fast Flux Test Facility-Postirradiation Open Test Assembly), subjected to a horizontal DBE (Design Base Earthquake) is presented. The PIOTA is the first in a set of open test assemblies to be designed for the FFTF. Employing the direct method of transient analysis, the governing differential equations describing the motion of the system are set up directly and are implicitly integrated numerically in time. A simple lumped-mass beam model of the FFTF which includes small clearances between core components is used as a "driver" for a fine mesh model of the PIOTA. The nonlinear forces due to the impact of the core components and their effect on the PIOTA are computed. 6 references

  5. Detailed measurements and modelling of thermo active components using a room size test facility

    DEFF Research Database (Denmark)

    Weitzmann, Peter; Svendsen, Svend

    2005-01-01

    measurements in an office sized test facility with thermo active ceiling and floor as well as modelling of similar conditions in a computer program designed for analysis of building integrated heating and cooling systems. A method for characterizing the cooling capacity of thermo active components is described...... typically within 1-2K of the measured results. The simulation model, whose room model splits up the radiative and convective heat transfer between room and surfaces, can also be used to predict the dynamical conditions, where especially the temperature rise during the day is important for designing...

  6. Connected Component Model for Multi-Object Tracking.

    Science.gov (United States)

    He, Zhenyu; Li, Xin; You, Xinge; Tao, Dacheng; Tang, Yuan Yan

    2016-08-01

    In multi-object tracking, it is critical to explore the data associations by exploiting the temporal information from a sequence of frames rather than the information from the adjacent two frames. Since straightforwardly obtaining data associations from multi-frames is an NP-hard multi-dimensional assignment (MDA) problem, most existing methods solve this MDA problem by either developing complicated approximate algorithms, or simplifying MDA as a 2D assignment problem based upon the information extracted only from adjacent frames. In this paper, we show that the relation between associations of two observations is the equivalence relation in the data association problem, based on the spatial-temporal constraint that the trajectories of different objects must be disjoint. Therefore, the MDA problem can be equivalently divided into independent subproblems by equivalence partitioning. In contrast to existing works for solving the MDA problem, we develop a connected component model (CCM) by exploiting the constraints of the data association and the equivalence relation on the constraints. Based upon CCM, we can efficiently obtain the global solution of the MDA problem for multi-object tracking by optimizing a sequence of independent data association subproblems. Experiments on challenging public data sets demonstrate that our algorithm outperforms the state-of-the-art approaches.
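
    The equivalence-partitioning step can be sketched with a union-find pass over candidate associations; the observation labels and links below are hypothetical, and CCM itself involves more than this grouping:

    ```python
    # Union-find sketch of equivalence partitioning: observations linked by any
    # candidate association end up in one independent subproblem.
    from collections import defaultdict

    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]     # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Hypothetical candidate associations between per-frame observations.
    links = [("f1_o0", "f2_o1"), ("f2_o1", "f3_o0"), ("f1_o2", "f2_o3")]
    for a, b in links:
        union(a, b)

    groups = defaultdict(list)
    for obs in list(parent):
        groups[find(obs)].append(obs)
    # Each group is now an independent data-association subproblem.
    ```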

  7. Failure Predictions for VHTR Core Components using a Probabilistic Continuum Damage Mechanics Model

    Energy Technology Data Exchange (ETDEWEB)

    Fok, Alex

    2013-10-30

    The proposed work addresses the key research need for the development of constitutive models and overall failure models for graphite and high temperature structural materials, with the long-term goal being to maximize the design life of the Next Generation Nuclear Plant (NGNP). To this end, the capability of a Continuum Damage Mechanics (CDM) model, which has been used successfully for modeling fracture of virgin graphite, will be extended as a predictive and design tool for the core components of the very high-temperature reactor (VHTR). Specifically, irradiation and environmental effects pertinent to the VHTR will be incorporated into the model to allow fracture of graphite and ceramic components under in-reactor conditions to be modeled explicitly using the finite element method. The model uses a combined stress-based and fracture mechanics-based failure criterion, so it can simulate both the initiation and propagation of cracks. Modern imaging techniques, such as x-ray computed tomography and digital image correlation, will be used during material testing to help define the baseline material damage parameters. Monte Carlo analysis will be performed to address inherent variations in material properties, the aim being to reduce the arbitrariness and uncertainties associated with the current statistical approach. The results can potentially contribute to the current development of American Society of Mechanical Engineers (ASME) codes for the design and construction of VHTR core components.

  8. An application of principal component analysis to the clavicle and clavicle fixation devices.

    LENUS (Irish Health Repository)

    Daruwalla, Zubin J

    2010-01-01

    Principal component analysis (PCA) enables the building of statistical shape models of bones and joints. This has been used in conjunction with computer assisted surgery in the past. However, PCA of the clavicle has not been performed. Using PCA, we present a novel method that examines the major modes of size and three-dimensional shape variation in male and female clavicles and suggests a method of grouping the clavicle into size and shape categories.
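
    A rough sketch of the underlying PCA shape-model machinery (not the authors' pipeline), assuming landmarks have already been placed in correspondence and aligned; all data here are synthetic:

    ```python
    # PCA shape model sketch on synthetic, pre-aligned landmark data.
    import numpy as np

    rng = np.random.default_rng(1)
    n_shapes, n_landmarks = 30, 100
    X = rng.normal(size=(n_shapes, 3 * n_landmarks))   # flattened (x, y, z) landmarks

    mean_shape = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
    var = s ** 2 / (n_shapes - 1)                      # variance of each shape mode

    # New shape instance: mean plus a weighted sum of the first k modes.
    k = 3
    b = np.array([1.0, -0.5, 0.2])                     # mode weights in SD units
    new_shape = mean_shape + (b * np.sqrt(var[:k])) @ Vt[:k]
    ```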

  9. An application of principal component analysis to the clavicle and clavicle fixation devices

    OpenAIRE

    Daruwalla, Zubin J; Courtis, Patrick; Fitzpatrick, Clare; Fitzpatrick, David; Mullett, Hannan

    2010-01-01

    Abstract Background Principal component analysis (PCA) enables the building of statistical shape models of bones and joints. This has been used in conjunction with computer assisted surgery in the past. However, PCA of the clavicle has not been performed. Using PCA, we present a novel method that examines the major modes of size and three-dimensional shape variation in male and female clavicles and suggests a method of grouping the clavicle into size and shape categories. Materials and method...

  10. THE STUDY OF THE CHARACTERIZATION INDICES OF FABRICS BY PRINCIPAL COMPONENT ANALYSIS METHOD

    OpenAIRE

    HRISTIAN Liliana; OSTAFE Maria Magdalena; BORDEIANU Demetra Lacramioara; APOSTOL Laura Liliana

    2017-01-01

    The paper set out to rank worsted fabric types for the manufacture of outerwear products by the characterization indices of the fabrics, using the mathematical model of Principal Component Analysis (PCA). A number of variables influence the quality of fabrics, but some of these variables are more important than others, so it is useful to identify those variables for a better understanding of the factors that can improve fabric quality. A s...

  11. Trimming of mammalian transcriptional networks using network component analysis

    Directory of Open Access Journals (Sweden)

    Liao James C

    2010-10-01

    Full Text Available Abstract Background Network Component Analysis (NCA) has been used to deduce the activities of transcription factors (TFs) from gene expression data and the TF-gene binding relationship. However, the TF-gene interaction varies in different environmental conditions and tissues, but such information is rarely available and cannot be predicted simply by motif analysis. Thus, it is beneficial to identify key TF-gene interactions under the experimental condition based on transcriptome data. Such information would be useful in identifying key regulatory pathways and gene markers of TFs in further studies. Results We developed an algorithm to trim network connectivity such that the important regulatory interactions between the TFs and the genes were retained and the regulatory signals were deduced. Theoretical studies demonstrated that the regulatory signals were accurately reconstructed even in the case where only three independent transcriptome datasets were available. At least 80% of the main target genes were correctly predicted in the extreme condition of high noise level and small number of datasets. Our algorithm was tested with transcriptome data taken from mice under rapamycin treatment. The initial network topology from the literature contains 70 TFs, 778 genes, and 1423 edges between the TFs and genes. Our method retained 1074 edges (i.e. 75% of the original edge number) and identified 17 TFs as being significantly perturbed under the experimental condition. Twelve of these TFs are involved in MAPK signaling or myeloid leukemia pathways defined in the KEGG database, or are known to physically interact with each other. Additionally, four of these TFs, which are Hif1a, Cebpb, Nfkb1, and Atf1, are known targets of rapamycin. Furthermore, the trimmed network was able to predict Eno1 as an important target of Hif1a; this key interaction could not be detected without trimming the regulatory network. Conclusions The advantage of our new algorithm

  12. Analysis of Minor Component Segregation in Ternary Powder Mixtures

    Directory of Open Access Journals (Sweden)

    Asachi Maryam

    2017-01-01

    Full Text Available In many powder handling operations, inhomogeneity in powder mixtures caused by segregation can have a significant adverse impact on the quality as well as the economics of production. Segregation of a minor component of a highly active substance can have serious deleterious effects; an example is the segregation of enzyme granules in detergent powders. In this study, the effects of particle properties and bulk cohesion on the segregation tendency of the minor component are analysed. The minor component is made sticky while not adversely affecting the flowability of the samples. The extent of segregation is evaluated using image processing of the photographic records taken from the front face of the heap after the pouring process. The optimum average sieve cut size of components for which segregation could be reduced is reported. It is also shown that the extent of segregation is significantly reduced by applying a thin layer of liquid to the surfaces of the minor component, promoting an ordered mixture.

  13. F4E studies for the electromagnetic analysis of ITER components

    Energy Technology Data Exchange (ETDEWEB)

    Testoni, P., E-mail: pietro.testoni@f4e.europa.eu [Fusion for Energy, Torres Diagonal Litoral B3, c/ Josep Plá n.2, Barcelona (Spain); Cau, F.; Portone, A. [Fusion for Energy, Torres Diagonal Litoral B3, c/ Josep Plá n.2, Barcelona (Spain); Albanese, R. [Associazione EURATOM/ENEA/CREATE, DIETI, Università Federico II di Napoli, Napoli (Italy); Juirao, J. [Numerical Analysis TEChnologies S.L. (NATEC), c/ Marqués de San Esteban, 52 Entlo D Gijón (Spain)

    2014-10-15

    Highlights: • Several ITER components have been analyzed from the electromagnetic point of view. • Categorization of DINA load cases is described. • VDEs, MDs and MFD have been studied. • Integral values of forces and moments components versus time have been computed for all the ITER components under study. - Abstract: Fusion for Energy (F4E) is involved in a considerable number of activities in the area of electromagnetic analysis in support of ITER general design and EU in-kind procurement. In particular several ITER components (vacuum vessel, blanket shield modules and first wall panels, test blanket modules, ICRH antenna) are being analyzed from the electromagnetic point of view. In this paper we give an updated description of our main activities, highlighting the main assumptions, objectives, results and conclusions. The plasma instabilities we consider, typically disruptions and VDEs, can be both toroidally symmetric and asymmetric. This implies that, depending on the specific component and loading conditions, the FE models we use span from a 10° sector up to the full 360° of the ITER machine. The techniques for simulating the electromagnetic phenomena involved in a disruption and the postprocessing of the results to obtain the loads acting on the structures are described. Finally we summarize the typical loads applied to different components and give a critical view of the results.

  14. Model validation and calibration based on component functions of model output

    International Nuclear Information System (INIS)

    Wu, Danqing; Lu, Zhenzhou; Wang, Yanping; Cheng, Lei

    2015-01-01

    The target in this work is to validate the component functions of model output between physical observation and computational model with the area metric. Based on the theory of high dimensional model representations (HDMR) of independent input variables, conditional expectations are component functions of model output, and the conditional expectations reflect partial information of model output. Therefore, the model validation of conditional expectations reveals the discrepancy between the partial information of the computational model output and that of the observations. Then a calibration of the conditional expectations is carried out to reduce the value of the model validation metric. After that, the model validation metric of model output is recalculated with the calibrated model parameters, and the result shows that a reduction of the discrepancy in the conditional expectations can help decrease the difference in model output. Finally, several examples are employed to demonstrate the rationality and necessity of the methodology in the case of both a single validation site and multiple validation sites. - Highlights: • A validation metric of conditional expectations of model output is proposed. • HDMR explains the relationship of conditional expectations and model output. • An improved approach of parameter calibration updates the computational models. • Validation and calibration process are applied at single site and multiple sites. • Validation and calibration process show a superiority than existing methods
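
    A small sketch of a first-order component function, i.e. the conditional expectation E[Y | X_i] minus the mean, estimated by binning Monte Carlo samples of a toy model; the model form and bin count are assumptions, not the paper's examples:

    ```python
    # First-order HDMR component function f_1(x_1) = E[Y | x_1] - f_0,
    # estimated by binning Monte Carlo samples of a toy model.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000
    x1 = rng.uniform(-1, 1, n)
    x2 = rng.uniform(-1, 1, n)
    y = x1 ** 2 + 0.5 * x2 + 0.1 * rng.normal(size=n)  # toy computational model

    edges = np.linspace(-1, 1, 21)
    idx = np.digitize(x1, edges) - 1                   # bin index 0..19
    f0 = y.mean()
    f1 = np.array([y[idx == b].mean() for b in range(20)]) - f0
    # Comparing f1 from the model with the same estimate from observations gives
    # the component-wise (conditional-expectation) validation metric.
    ```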

  15. Detection of Connective Tissue Disorders from 3D Aortic MR Images Using Independent Component Analysis

    DEFF Research Database (Denmark)

    Hansen, Michael Sass; Zhao, Fei; Zhang, Honghai

    2006-01-01

    A computer-aided diagnosis (CAD) method is reported that allows the objective identification of subjects with connective tissue disorders from 3D aortic MR images using segmentation and independent component analysis (ICA). The first step to extend the model to 4D (3D + time) has also been taken.... ICA is an effective tool for connective tissue disease detection in the presence of sparse data using prior knowledge to order the components, and the components can be inspected visually. 3D+time MR image data sets acquired from 31 normal and connective tissue disorder subjects at end-diastole (R-wave peak) and at 45% of the R-R interval were used to evaluate the performance of our method. The automated 3D segmentation result produced accurate aortic surfaces covering the aorta. The CAD method distinguished between normal and connective tissue disorder subjects with a classification...

  16. APR1400 DVI break analysis using the MARS 3.1 multi-D component

    International Nuclear Information System (INIS)

    Hwang, Moon-Kyu; Lim, Hong-Sik; Lee, Seung-Wook; Bae, Sung-Won; Chung, Bub-Dong

    2006-01-01

    The current version of MARS 3.1 has a multi-D component intended to simulate asymmetric multidimensional fluid behavior in a reactor core, downcomer or steam generator in a more realistic manner. The feature is implemented in the 1-D module of the code. As opposed to cross-flow junction modeling, the multi-D component allows for lateral momentum transfer as well as shear stress. Thus, a full three-dimensional analysis capability is available, as in the case of RELAP5-3D or CATHARE. In this study the multi-D component is applied to a hypothetical DVI (Direct Vessel Injection) break accident in the APR1400 plant, and the results are analyzed

  17. An ontology for component-based models of water resource systems

    Science.gov (United States)

    Elag, Mostafa; Goodall, Jonathan L.

    2013-08-01

    Component-based modeling is an approach for simulating water resource systems where a model is composed of a set of components, each with a defined modeling objective, interlinked through data exchanges. Component-based modeling frameworks are used within the hydrologic, atmospheric, and earth surface dynamics modeling communities. While these efforts have been advancing, it has become clear that the water resources modeling community in particular, and arguably the larger earth science modeling community as well, faces a challenge of fully and precisely defining the metadata for model components. The lack of a unified framework for model component metadata limits interoperability between modeling communities and the reuse of models across modeling frameworks due to ambiguity about the model and its capabilities. To address this need, we propose an ontology for water resources model components that describes core concepts and relationships using the Web Ontology Language (OWL). The ontology that we present, which is termed the Water Resources Component (WRC) ontology, is meant to serve as a starting point that can be refined over time through engagement by the larger community until a robust knowledge framework for water resource model components is achieved. This paper presents the methodology used to arrive at the WRC ontology, the WRC ontology itself, and examples of how the ontology can aid in component-based water resources modeling by (i) assisting in identifying relevant models, (ii) encouraging proper model coupling, and (iii) facilitating interoperability across earth science modeling frameworks.

  18. Organizational Design Analysis of Fleet Readiness Center Southwest Components Department

    National Research Council Canada - National Science Library

    Montes, Jose F

    2007-01-01

    .... The purpose of this MBA Project is to analyze the proposed organizational design elements of the FRCSW Components Department that resulted from the integration of the Naval Aviation Depot at North Island (NADEP N.I...

  19. Virtual Models Linked with Physical Components in Construction

    DEFF Research Database (Denmark)

    Sørensen, Kristian Birch

    The use of virtual models supports a fundamental change in the working practice of the construction industry. It changes the primary information carrier (drawings) from simple manually created depictions of the building under construction to visually realistic digital representations that also...... engineering and business development in an iterative and user needs centred system development process. The analysis of future business perspectives presents an extensive number of new working processes that can assist in solving major challenges in the construction industry. Three of the most promising...... practices and development of new ontologies. Based on the experiences gained in this PhD project, some of the important future challenges are also to show the benefits of using modern information and communication technology to practitioners in the construction industry and to communicate this knowledge...

  20. Global sensitivity analysis of bogie dynamics with respect to suspension components

    International Nuclear Information System (INIS)

    Mousavi Bideleh, Seyed Milad; Berbyuk, Viktor

    2016-01-01

    The effects of bogie primary and secondary suspension stiffness and damping components on the dynamics behavior of a high speed train are scrutinized based on the multiplicative dimensional reduction method (M-DRM). A one-car railway vehicle model is chosen for the analysis at two levels of the bogie suspension system: symmetric and asymmetric configurations. Several operational scenarios including straight and circular curved tracks are considered, and measurement data are used as the track irregularities in different directions. Ride comfort, safety, and wear objective functions are specified to evaluate the vehicle’s dynamics performance on the prescribed operational scenarios. In order to have an appropriate cut center for the sensitivity analysis, the genetic algorithm optimization routine is employed to optimize the primary and secondary suspension components in terms of wear and comfort, respectively. The global sensitivity indices are introduced and the Gaussian quadrature integrals are employed to evaluate the simplified sensitivity indices correlated to the objective functions. In each scenario, the most influential suspension components on bogie dynamics are recognized and a thorough analysis of the results is given. The outcomes of the current research provide informative data that can be beneficial in design and optimization of passive and active suspension components for high speed train bogies.

  1. Global sensitivity analysis of bogie dynamics with respect to suspension components

    Energy Technology Data Exchange (ETDEWEB)

    Mousavi Bideleh, Seyed Milad, E-mail: milad.mousavi@chalmers.se; Berbyuk, Viktor, E-mail: viktor.berbyuk@chalmers.se [Chalmers University of Technology, Department of Applied Mechanics (Sweden)

    2016-06-15

    The effects of bogie primary and secondary suspension stiffness and damping components on the dynamics behavior of a high speed train are scrutinized based on the multiplicative dimensional reduction method (M-DRM). A one-car railway vehicle model is chosen for the analysis at two levels of the bogie suspension system: symmetric and asymmetric configurations. Several operational scenarios including straight and circular curved tracks are considered, and measurement data are used as the track irregularities in different directions. Ride comfort, safety, and wear objective functions are specified to evaluate the vehicle’s dynamics performance on the prescribed operational scenarios. In order to have an appropriate cut center for the sensitivity analysis, the genetic algorithm optimization routine is employed to optimize the primary and secondary suspension components in terms of wear and comfort, respectively. The global sensitivity indices are introduced and the Gaussian quadrature integrals are employed to evaluate the simplified sensitivity indices correlated to the objective functions. In each scenario, the most influential suspension components on bogie dynamics are recognized and a thorough analysis of the results is given. The outcomes of the current research provide informative data that can be beneficial in design and optimization of passive and active suspension components for high speed train bogies.

  2. A probabilistic model for component-based shape synthesis

    KAUST Repository

    Kalogerakis, Evangelos; Chaudhuri, Siddhartha; Koller, Daphne; Koltun, Vladlen

    2012-01-01

    represents probabilistic relationships between properties of shape components, and relates them to learned underlying causes of structural variability within the domain. These causes are treated as latent variables, leading to a compact representation

  3. Measurement and Modelling of MIC Components Using Conductive Lithographic Films

    OpenAIRE

    Shepherd, P. R.; Taylor, C.; Evans l, P. S. A.; Harrison, D. J.

    2001-01-01

    Conductive Lithographic Films (CLFs) have previously demonstrated useful properties in printed microwave circuits, combining low cost with high speed of manufacture. In this paper we examine the formation of various passive components via the CLF process, which enables further integration of printed microwave integrated circuits. The printed components include vias, resistors and overlay capacitors, and offer viable alternatives to traditional manufacturing processes for Microwave Integrate...

  4. Verification of the component accuracy prediction obtained by physical modelling and the elastic simulation of the die/component interaction

    DEFF Research Database (Denmark)

    Ravn, Bjarne Gottlieb; Andersen, Claus Bo; Wanheim, Tarras

    2001-01-01

    There are three demands on a component that is to undergo a die-cavity elasticity analysis. The demands on the product are specified as: (i) to be able to measure the loading profile which results in elastic die-cavity deflections; (ii) to be able to compute the elastic deflections using FE; (iii...

  5. Modelling Creativity: Identifying Key Components through a Corpus-Based Approach.

    Science.gov (United States)

    Jordanous, Anna; Keller, Bill

    2016-01-01

    Creativity is a complex, multi-faceted concept encompassing a variety of related aspects, abilities, properties and behaviours. If we wish to study creativity scientifically, then a tractable and well-articulated model of creativity is required. Such a model would be of great value to researchers investigating the nature of creativity and in particular, those concerned with the evaluation of creative practice. This paper describes a unique approach to developing a suitable model of how creative behaviour emerges that is based on the words people use to describe the concept. Using techniques from the field of statistical natural language processing, we identify a collection of fourteen key components of creativity through an analysis of a corpus of academic papers on the topic. Words are identified which appear significantly often in connection with discussions of the concept. Using a measure of lexical similarity to help cluster these words, a number of distinct themes emerge, which collectively contribute to a comprehensive and multi-perspective model of creativity. The components provide an ontology of creativity: a set of building blocks which can be used to model creative practice in a variety of domains. The components have been employed in two case studies to evaluate the creativity of computational systems and have proven useful in articulating achievements of this work and directions for further research.

  6. Response spectrum analysis of coupled structural response to a three component seismic disturbance

    International Nuclear Information System (INIS)

    Boulet, J.A.M.; Carley, T.G.

    1977-01-01

    The work discussed herein is a comparison and evaluation of several response spectrum analysis (RSA) techniques as applied to the same structural model with seismic excitation having three spatial components. Lagrange's equations of motion for the system were written in matrix form and uncoupled with the modal matrix. Numerical integration (fourth-order Runge-Kutta) of the resulting modal equations produced time histories of system displacements in response to simultaneous application of three orthogonal components of ground motion, and displacement response spectra for each modal coordinate in response to each of the three ground motion components. Five different RSA techniques were used to combine the spectral displacements and the modal matrix to give approximations of maximum system displacements. These approximations were then compared with the maximum system displacements taken from the time histories. The RSA techniques used are the method of absolute sums, the square root of the sum of the squares, the double sum approach, the method of closely spaced modes, and Lin's method. The vectors of maximum system displacements as computed by the time history analysis and the five response spectrum analysis methods are presented. (Auth.)
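
    Two of the combination rules compared above, the absolute sum and the square root of the sum of the squares (SRSS), admit a short numeric illustration; the peak modal values are made up:

    ```python
    # Absolute-sum vs SRSS combination of peak modal responses for one DOF.
    import numpy as np

    peaks = np.array([0.8, -0.3, 0.1])       # peak contribution of each mode

    abs_sum = np.sum(np.abs(peaks))          # conservative upper bound: 1.2
    srss = np.sqrt(np.sum(peaks ** 2))       # ~0.860; assumes well-separated modes
    ```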

  7. Development of an Automated LIBS Analytical Test System Integrated with Component Control and Spectrum Analysis Capabilities

    International Nuclear Information System (INIS)

    Ding Yu; Tian Di; Chen Feipeng; Chen Pengfei; Qiao Shujun; Yang Guang; Li Chunsheng

    2015-01-01

    The present paper proposes an automated Laser-Induced Breakdown Spectroscopy (LIBS) analytical test system, which consists of a LIBS measurement and control platform based on a modular design concept and LIBS qualitative spectrum analysis software developed in C#. The platform provides flexible interfacing and automated control; it is compatible with component models from different manufacturers and is constructed in modularized form for easy expandability. For peak identification, a more robust method with improved stability has been achieved by applying additional smoothing to the calculated slope before peaks are identified. For element identification, an improved main-lines analysis method, which checks all elements against each spectral peak to avoid omission of elements without strong spectral lines, is applied to the tested LIBS samples. This method also increases the identification speed. In this paper, actual applications have been carried out. According to tests, the analytical test system is compatible with components of various models made by different manufacturers. It can automatically control components to acquire experimental data and conduct filtering, peak identification, qualitative analysis, and other operations on spectral data. (paper)

  8. Modeling and numerical simulation of multi-component flow in porous media

    International Nuclear Information System (INIS)

    Saad, B.

    2011-01-01

    This work deals with the modeling and numerical simulation of two-phase multi-component flow in porous media. The study is divided into two parts. First we prove the existence, in a weak sense, of solutions to two degenerate parabolic systems modeling two-phase (liquid and gas), two-component (water and hydrogen) flow in porous media. In the first model, we assume a local thermodynamic equilibrium between the two phases of hydrogen, using Henry's law. The second model is a relaxation of the first: the kinetics of the mass exchange between dissolved hydrogen and hydrogen in the gas phase is no longer instantaneous. The second part is devoted to the numerical analysis of these models. Firstly, we propose a numerical scheme to compare numerical solutions obtained with the first model against those obtained with the second model as the characteristic time to recover thermodynamic equilibrium goes to zero. Secondly, we present a finite volume scheme with a phase-by-phase upstream weighting scheme without simplifying assumptions on the state law of gas densities. We also validate this scheme on 2D test cases. (author)

  9. Analysis of the frequency components of X-ray images

    International Nuclear Information System (INIS)

    Matsuo, Satoru; Komizu, Mitsuru; Kida, Tetsuo; Noma, Kazuo; Hashimoto, Keiji; Onishi, Hideo; Masuda, Kazutaka

    1997-01-01

    We examined the relation between the frequency components of x-ray images of the chest and phalanges and the read sizes used for digitizing them. Images of the chest and phalanges were radiographed using three types of screens and films, and the images, with noise at background density, were digitized with a drum scanner at varying read sizes. The frequency components of these images were evaluated by applying a two-dimensional Fourier transform to obtain the power spectrum and signal-to-noise ratio (SNR). After varying the cut-off frequency of a low-pass filter applied to the power spectrum, we also examined the frequency components of the images in terms of the normalized mean square error (NMSE) between the inverse-Fourier-transformed image and the original image. Results showed that the frequency components extended to 2.0 cycles/mm for the chest image and 6.0 cycles/mm for the phalanges. Therefore, it is necessary to acquire data with read sizes of 200 μm and 50 μm for the chest and phalangeal images, respectively, in order to digitize these images without loss of their frequency components. (author)
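
    A sketch of the cut-off/NMSE experiment on a synthetic image; the cut-off value and the image itself are placeholders, not the study's radiographs:

    ```python
    # Cut-off experiment sketch: 2D FFT, ideal low-pass mask, inverse FFT, NMSE.
    import numpy as np

    rng = np.random.default_rng(3)
    img = rng.normal(size=(256, 256))              # stand-in for a digitized image

    F = np.fft.fftshift(np.fft.fft2(img))
    f = np.fft.fftshift(np.fft.fftfreq(256))       # cycles/pixel
    U, V = np.meshgrid(f, f)
    cutoff = 0.1                                   # vary to probe frequency content
    F_lp = F * ((U ** 2 + V ** 2) <= cutoff ** 2)  # ideal low-pass mask

    img_lp = np.real(np.fft.ifft2(np.fft.ifftshift(F_lp)))
    nmse = np.sum((img - img_lp) ** 2) / np.sum(img ** 2)
    ```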

  10. Three-component model of solar wind--interstellar medium interaction: some numerical results

    International Nuclear Information System (INIS)

    Baranov, V.; Ermakov, M.; Lebedev, M.

    1981-01-01

    A three-component (electrons, protons, H atoms) model for the interaction between the local interstellar medium and the solar wind is considered. A numerical analysis has been performed to determine how resonance charge exchange in interstellar H atoms that have penetrated the solar wind would affect the two-shock model developed previously by Baranov et al. In particular, if n_H∞/n_e∞ > 10 (where n_H∞ and n_e∞ denote the number densities of H atoms and electrons in the local ISM), the inner shock may approach the sun as closely as the outer planetary orbits

  11. Abstract Interfaces for Data Analysis Component Architecture for Data Analysis Tools

    CERN Document Server

    Barrand, G; Dönszelmann, M; Johnson, A; Pfeiffer, A

    2001-01-01

    The fast turnover of software technologies, in particular in the domain of interactivity (covering user interface and visualisation), makes it difficult for a small group of people to produce complete and polished software-tools before the underlying technologies make them obsolete. At the HepVis '99 workshop, a working group has been formed to improve the production of software tools for data analysis in HENP. Beside promoting a distributed development organisation, one goal of the group is to systematically design a set of abstract interfaces based on using modern OO analysis and OO design techniques. An initial domain analysis has come up with several categories (components) found in typical data analysis tools: Histograms, Ntuples, Functions, Vectors, Fitter, Plotter, Analyzer and Controller. Special emphasis was put on reducing the couplings between the categories to a minimum, thus optimising re-use and maintainability of any component individually. The interfaces have been defined in Java and C++ and i...

  12. Data analysis of x-ray fluorescence holography by subtracting normal component from inverse hologram

    International Nuclear Information System (INIS)

    Happo, Naohisa; Hayashi, Kouichi; Hosokawa, Shinya

    2010-01-01

    X-ray fluorescence holography (XFH) is a powerful technique for determining three-dimensional local atomic arrangements around a specific fluorescing element. However, the raw experimental hologram is predominantly a mixed hologram, i.e., a mixture of holograms generated in both normal and inverse modes, which produces unreliable atomic images. In this paper, we propose a practical method for subtracting the normal component from inverse XFH data by a Fourier transform, demonstrated on the calculated hologram of a model ZnTe cluster. Many spots originating from the normal components could be properly removed using a mask function, and clear atomic images were reconstructed at the correct positions in the model cluster. This method was successfully applied to the analysis of experimental ZnTe single crystal XFH data. (author)

  13. Isotropic vs. anisotropic components of BAO data: a tool for model selection

    Science.gov (United States)

    Haridasu, Balakrishna S.; Luković, Vladimir V.; Vittorio, Nicola

    2018-05-01

    We conduct a selective analysis of the isotropic (DV) and anisotropic (AP) components of the most recent Baryon Acoustic Oscillations (BAO) data. We find that these components provide significantly different constraints and could provide strong diagnostics for model selection, also in view of more precise data to come. For instance, in the ΛCDM model we find a mild tension of ~2σ between the Ωm estimates obtained using DV and AP separately. Considering both Ωk and w as free parameters, we find that the concordance model is in tension with the best-fit values provided by the BAO data alone at 2.2σ. We complemented the BAO data with the Supernovae Ia (SNIa) and Observational Hubble datasets to perform a joint analysis on the ΛCDM model and its standard extensions. Assuming the ΛCDM scenario, we find that these data provide H0 = 69.4 ± 1.7 km/s/Mpc as the best-fit value for the present expansion rate. In the kΛCDM scenario we find that the evidence for acceleration using the BAO data alone is more than ~5.8σ, which increases to 8.4σ in our joint analysis.

  14. Maximum likelihood estimation of semiparametric mixture component models for competing risks data.

    Science.gov (United States)

    Choi, Sangbum; Huang, Xuelin

    2014-09-01

    In the analysis of competing risks data, the cumulative incidence function is a useful quantity to characterize the crude risk of failure from a specific event type. In this article, we consider an efficient semiparametric analysis of mixture component models on cumulative incidence functions. Under the proposed mixture model, latency survival regressions given the event type are performed through a class of semiparametric models that encompasses the proportional hazards model and the proportional odds model, allowing for time-dependent covariates. The marginal proportions of the occurrences of cause-specific events are assessed by a multinomial logistic model. Our mixture modeling approach is advantageous in that it makes a joint estimation of model parameters associated with all competing risks under consideration, satisfying the constraint that the cumulative probability of failing from any cause adds up to one given any covariates. We develop a novel maximum likelihood scheme based on semiparametric regression analysis that facilitates efficient and reliable estimation. Statistical inferences can be conveniently made from the inverse of the observed information matrix. We establish the consistency and asymptotic normality of the proposed estimators. We validate small sample properties with simulations and demonstrate the methodology with a data set from a study of follicular lymphoma. © 2014, The International Biometric Society.

  15. Intercity Travel Demand Analysis Model

    Directory of Open Access Journals (Sweden)

    Ming Lu

    2014-01-01

    Full Text Available It is well known that intercity travel is an important component of travel demand and belongs to short-distance corridor travel. The conventional four-step method is no longer suitable for short-distance corridor travel demand analysis, because the time spent in urban traffic has a great impact on travelers' main mode choice. To solve this problem, the author studied the existing intercity travel demand analysis models, improved them based on the study, and finally established a combined model of main mode choice and access mode choice. An integrated multilevel nested logit model structure system was then built. The model system includes trip generation, destination choice, and mode-route choice based on a multinomial logit model, and it achieves linkage and feedback between the parts through a logsum variable. The model was applied to the Shenzhen intercity railway passenger demand forecast in 2010 as a case study. The forecast results were consistent with observed demand, verifying the model's correctness and feasibility.
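
    A toy illustration of the logsum linkage described above, with invented utilities and nest scale parameter:

    ```python
    # Nested logit logsum sketch: the access-mode nest's expected utility feeds
    # the rail alternative in the main-mode choice. All values are made up.
    import numpy as np

    V_access = np.array([-1.2, -0.8, -2.0])            # access modes to the station
    mu = 0.7                                           # nest scale parameter
    logsum = np.log(np.sum(np.exp(V_access / mu)))     # expected utility of the nest

    V_main = np.array([-1.5 + mu * logsum, -1.0])      # rail (with logsum) vs car
    p_main = np.exp(V_main) / np.sum(np.exp(V_main))   # main-mode probabilities
    ```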

  16. Exploring functional data analysis and wavelet principal component analysis on ecstasy (MDMA) wastewater data

    Directory of Open Access Journals (Sweden)

    Stefania Salvatore

    2016-07-01

    Full Text Available Abstract Background Wastewater-based epidemiology (WBE) is a novel approach in drug use epidemiology which aims to monitor the extent of use of various drugs in a community. In this study, we investigate functional principal component analysis (FPCA) as a tool for analysing WBE data and compare it to traditional principal component analysis (PCA) and to wavelet principal component analysis (WPCA), which is more flexible temporally. Methods We analysed temporal wastewater data from 42 European cities collected daily over one week in March 2013. The main temporal features of ecstasy (MDMA) were extracted using FPCA with both Fourier and B-spline basis functions and three different smoothing parameters, along with PCA and WPCA with different mother wavelets and shrinkage rules. The stability of FPCA was explored through bootstrapping and analysis of sensitivity to missing data. Results The first three principal components (PCs), functional principal components (FPCs) and wavelet principal components (WPCs) explained 87.5-99.6% of the temporal variation between cities, depending on the choice of basis and smoothing. The extracted temporal features from PCA, FPCA and WPCA were consistent. FPCA using a Fourier basis and common-optimal smoothing was the most stable and least sensitive to missing data. Conclusion FPCA is a flexible and analytically tractable method for analysing temporal changes in wastewater data, and is robust to missing data. WPCA did not reveal any rapid temporal changes in the data not captured by FPCA. Overall the results suggest FPCA with Fourier basis functions and a common-optimal smoothing parameter as the most accurate approach when analysing WBE data.
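
    A rough sketch of the Fourier-basis FPCA idea, with simulated data standing in for the wastewater series; the basis size and least-squares smoothing are placeholders, not the study's common-optimal choice:

    ```python
    # Fourier-basis FPCA sketch: smooth each city's 7-day series onto a small
    # Fourier basis, then run PCA on the basis coefficients.
    import numpy as np

    rng = np.random.default_rng(4)
    t = np.linspace(0, 1, 7, endpoint=False)
    loads = rng.normal(size=(42, 7))                   # stand-in daily loads per city

    basis = np.column_stack([np.ones(7),
                             np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                             np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])
    coef, *_ = np.linalg.lstsq(basis, loads.T, rcond=None)   # smoothing step

    C = coef.T - coef.T.mean(axis=0)
    _, s, Vt = np.linalg.svd(C, full_matrices=False)
    fpc_curves = basis @ Vt.T                          # FPCs evaluated on the grid
    explained = s ** 2 / np.sum(s ** 2)                # share of variation per FPC
    ```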

  17. Eliminating the Influence of Harmonic Components in Operational Modal Analysis

    DEFF Research Database (Denmark)

    Jacobsen, Niels-Jørgen; Andersen, Palle; Brincker, Rune

    2007-01-01

    structures, in contrast, are subject inherently to deterministic forces due to the rotating parts in the machinery. These forces are seen as harmonic components in the responses, and their influence should be eliminated before extracting the modes in their vicinity. This paper describes a new method based...... on the well-known Enhanced Frequency Domain Decomposition (EFDD) technique for eliminating these harmonic components in the modal parameter extraction process. For assessing the quality of the method, various experiments were carried out where the results were compared with those obtained with pure stochastic...

  18. General model for Pc-based simulation of PWR and BWR plant components

    Energy Technology Data Exchange (ETDEWEB)

    Ratemi, W M; Abomustafa, A M [Faculty of Engineering, Al-Fateh University, Tripoli (Libyan Arab Jamahiriya)]

    1995-10-01

    In this paper, we present a basic mathematical model derived from physical principles to suit the simulation of PWR components such as the pressurizer, intact steam generator, ruptured steam generator, and the reactor component of a BWR plant. In this development, we produced an NMMS package for nuclear modular modelling simulation. The package is installed on a personal computer and is designed to be user friendly through a color-graphics window interface. The package works in three environments, namely pre-processor, simulation, and post-processor. Our analysis of the results, using a cross-graphing technique for the steam generator tube rupture (SGTR) accident, yielded a new proposal for on-line monitoring of the SGTR-accident control strategy in nuclear or conventional power plants. 4 figs.

  19. Applying independent component analysis to clinical fMRI at 7 T

    Directory of Open Access Journals (Sweden)

    Simon Daniel Robinson

    2013-09-01

    Full Text Available Increased BOLD sensitivity at 7 T offers the possibility to increase the reliability of fMRI, but ultra-high field is also associated with an increase in artifacts related to head motion, Nyquist ghosting and parallel imaging reconstruction errors. In this study, the ability of Independent Component Analysis (ICA to separate activation from these artifacts was assessed in a 7 T study of neurological patients performing chin and hand motor tasks. ICA was able to isolate primary motor activation with negligible contamination by motion effects. The results of General Linear Model (GLM analysis of these data were, in contrast, heavily contaminated by motion. Secondary motor areas, basal ganglia and thalamus involvement were apparent in ICA results, but there was low capability to isolate activation in the same brain regions in the GLM analysis, indicating that ICA was more sensitive as well as more specific. A method was developed to simplify the assessment of the large number of independent components. Task-related activation components could be automatically identified via intuitive and effective features. These findings demonstrate that ICA is a practical and sensitive analysis approach in high field fMRI studies, particularly where motion is evoked. Promising applications of ICA in clinical fMRI include presurgical planning and the study of pathologies affecting subcortical brain areas.
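
    Conceptually, the ICA separation step can be sketched on simulated data; the task design, artifact model, and dimensions below are assumptions, not the study's 7 T pipeline:

    ```python
    # FastICA separation sketch: a block task time course and a heavy-tailed
    # "motion" artifact mixed into 50 voxel time series.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(5)
    task = np.tile(np.repeat([0.0, 1.0], 20), 5)       # on/off blocks, 200 samples
    motion = rng.standard_t(df=3, size=200)            # artifact-like source
    S = np.vstack([task, motion])                      # sources: 2 x 200
    A = rng.normal(size=(50, 2))                       # mixing into 50 voxels
    X = (A @ S).T                                      # observations: 200 x 50

    ica = FastICA(n_components=2, random_state=0)
    sources = ica.fit_transform(X)                     # recovered time courses
    # The component most correlated with the task regressor is the activation.
    ```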

  20. Abstract interfaces for data analysis - component architecture for data analysis tools

    International Nuclear Information System (INIS)

    Barrand, G.; Binko, P.; Doenszelmann, M.; Pfeiffer, A.; Johnson, A.

    2001-01-01

    The fast turnover of software technologies, in particular in the domain of interactivity (covering user interface and visualisation), makes it difficult for a small group of people to produce complete and polished software-tools before the underlying technologies make them obsolete. At the HepVis'99 workshop, a working group has been formed to improve the production of software tools for data analysis in HENP. Beside promoting a distributed development organisation, one goal of the group is to systematically design a set of abstract interfaces based on using modern OO analysis and OO design techniques. An initial domain analysis has come up with several categories (components) found in typical data analysis tools: Histograms, Ntuples, Functions, Vectors, Fitter, Plotter, Analyzer and Controller. Special emphasis was put on reducing the couplings between the categories to a minimum, thus optimising re-use and maintainability of any component individually. The interfaces have been defined in Java and C++ and implementations exist in the form of libraries and tools using C++ (Anaphe/Lizard, OpenScientist) and Java (Java Analysis Studio). A special implementation aims at accessing the Java libraries (through their Abstract Interfaces) from C++. The authors give an overview of the architecture and design of the various components for data analysis as discussed in AIDA

  1. Decoding the auditory brain with canonical component analysis.

    Science.gov (United States)

    de Cheveigné, Alain; Wong, Daniel D E; Di Liberto, Giovanni M; Hjortkjær, Jens; Slaney, Malcolm; Lalor, Edmund

    2018-05-15

    The relation between a stimulus and the evoked brain response can shed light on perceptual processes within the brain. Signals derived from this relation can also be harnessed to control external devices for Brain Computer Interface (BCI) applications. While the classic event-related potential (ERP) is appropriate for isolated stimuli, more sophisticated "decoding" strategies are needed to address continuous stimuli such as speech, music or environmental sounds. Here we describe an approach based on Canonical Correlation Analysis (CCA) that finds the optimal transform to apply to both the stimulus and the response to reveal correlations between the two. Compared to prior methods based on forward or backward models for stimulus-response mapping, CCA finds significantly higher correlation scores, thus providing increased sensitivity to relatively small effects, and supports classifier schemes that yield higher classification scores. CCA strips the brain response of variance unrelated to the stimulus, and the stimulus representation of variance that does not affect the response, and thus improves observations of the relation between stimulus and response. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
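
    A minimal sketch of the CCA step on simulated stimulus/response data; real pipelines add lagged feature construction and cross-validated classification:

    ```python
    # CCA sketch: find transforms of stimulus features and response channels
    # that are maximally correlated. All data are simulated.
    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(6)
    n = 1000
    z = rng.normal(size=(n, 1))                        # shared latent signal
    stim = z @ rng.normal(size=(1, 10)) + 0.5 * rng.normal(size=(n, 10))
    resp = z @ rng.normal(size=(1, 32)) + 0.5 * rng.normal(size=(n, 32))

    cca = CCA(n_components=3)
    stim_c, resp_c = cca.fit_transform(stim, resp)
    corrs = [np.corrcoef(stim_c[:, k], resp_c[:, k])[0, 1] for k in range(3)]
    # Leading canonical correlations quantify shared stimulus-response variance.
    ```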

  2. Design and Application of an Ontology for Component-Based Modeling of Water Systems

    Science.gov (United States)

    Elag, M.; Goodall, J. L.

    2012-12-01

    Many Earth system modeling frameworks have adopted an approach of componentizing models so that a large model can be assembled by linking a set of smaller model components. These model components can then be more easily reused, extended, and maintained by a large group of model developers and end users. While there has been a notable increase in component-based model frameworks in the Earth sciences in recent years, there has been less work on creating framework-agnostic metadata and ontologies for model components. Well defined model component metadata is needed, however, to facilitate sharing, reuse, and interoperability both within and across Earth system modeling frameworks. To address this need, we have designed an ontology for the water resources community named the Water Resources Component (WRC) ontology in order to advance the application of component-based modeling frameworks across water related disciplines. Here we present the design of the WRC ontology and demonstrate its application for integration of model components used in watershed management. First we show how the watershed modeling system Soil and Water Assessment Tool (SWAT) can be decomposed into a set of hydrological and ecological components that adopt the Open Modeling Interface (OpenMI) standard. Then we show how the components can be used to estimate nitrogen losses from land to surface water for the Baltimore Ecosystem study area. Results of this work are (i) a demonstration of how the WRC ontology advances the conceptual integration between components of water related disciplines by handling the semantic and syntactic heterogeneity present when describing components from different disciplines and (ii) an investigation of a methodology by which large models can be decomposed into a set of model components that can be well described by populating metadata according to the WRC ontology.

  3. Design and analysis of automobile components using industrial procedures

    Science.gov (United States)

    Kedar, B.; Ashok, B.; Rastogi, Nisha; Shetty, Siddhanth

    2017-11-01

    Today's automobiles depend on mechanical systems that are crucial to vehicle movement and safety. Safety systems such as the Antilock Braking System (ABS) and passenger restraint systems have been developed to ensure that, in the event of a collision, head-on or otherwise, the safety of the passenger is maintained. Manufacturers also want their customers to have a good driving experience, and thus aim to improve the handling and drivability of the vehicle; electronic systems such as cruise control and active suspension are designed to ensure passenger comfort. Finally, to ensure optimal and safe driving, the various components of a vehicle must be manufactured using state-of-the-art processes and must be tested and inspected with utmost care, so that any defective component is caught at the very beginning of the supply chain. Processes that can improve the lifetime of their respective components are therefore in high demand, and much research and development is devoted to them. With a solid research base, these processes can be applied more versatilely to different components, materials, and input conditions, increasing their profitability and their value to industry.

  4. Analysis of soft rock mineral components and roadway failure mechanism

    Institute of Scientific and Technical Information of China (English)

    陈杰

    2001-01-01

    The mineral components and microstructure of soft rock sampled from the roadway floor in Xiagou pit are determined by X-ray diffraction and scanning electron microscopy. Combined with tests of the expansion and water-softening properties of the soft rock, the roadway failure mechanism is analyzed, and a reasonable repair and supporting principle for the roadway is put forward.

  5. Analysis Of The Executive Components Of The Farmer Field School ...

    African Journals Online (AJOL)

    The purpose of this study was to investigate the executive components of the Farmer Field School (FFS) project in Uromieh county of West Azerbaijan Province, Iran. All the members and non-members (as control group) of FFS pilots in Uromieh county (N= 98) were included in the study. Data were collected by use of ...

  6. Principal Components Analysis of Job Burnout and Coping ...

    African Journals Online (AJOL)

    The key components of job burnout were feelings of disgust, insomnia, headaches, weight loss or gain, feelings of omniscience, pain of unexplained origin, hopelessness, agitation, and workaholism, while the factor structure of coping strategies included developing a realistic self-image, retaining hope, asking for help ...

  7. Phenolic components, antioxidant activity, and mineral analysis of ...

    African Journals Online (AJOL)

    In addition to being consumed as food, caper (Capparis spinosa L.) fruits are also used in folk medicine to treat inflammatory disorders, such as rheumatism. C. spinosa L. is rich in phenolic compounds, making it increasingly popular because of its components' potential benefits to human health. We analyzed a number of ...

  8. Reliability modeling of digital component in plant protection system with various fault-tolerant techniques

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Kang, Hyun Gook; Kim, Hee Eun; Lee, Seung Jun; Seong, Poong Hyun

    2013-01-01

    Highlights: • Integrated fault coverage is introduced to reflect the characteristics of fault-tolerant techniques in the reliability model of the digital protection system in NPPs. • The integrated fault coverage considers the process of a fault-tolerant technique from detection through fail-safe generation. • With integrated fault coverage, the unavailability of a repairable DPS component can be estimated. • The newly developed reliability model reveals the effects of fault-tolerant techniques explicitly for risk analysis. • The reliability model makes it possible to track changes in unavailability as diverse factors vary. - Abstract: With the improvement of digital technologies, digital protection systems (DPS) employ multiple sophisticated fault-tolerant techniques (FTTs) to increase fault detection and to help the system safely perform its required functions in spite of the possible presence of faults. Fault detection coverage is a vital reliability factor of an FTT. However, fault detection coverage alone is insufficient to reflect the effects of various FTTs in a reliability model. To reflect the characteristics of FTTs in the reliability model, integrated fault coverage is introduced. The integrated fault coverage considers the process of an FTT from detection through fail-safe generation. A model has been developed to estimate the unavailability of a repairable DPS component using the integrated fault coverage. The newly developed model can quantify unavailability under a diversity of conditions. Sensitivity studies are performed to identify the important variables that affect the integrated fault coverage and unavailability
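
    To make the role of a coverage parameter concrete, the snippet below computes a textbook-style point estimate of unavailability in which covered faults are repaired after detection while uncovered faults persist, on average, for half a surveillance interval (the formula and every number are illustrative assumptions, not the paper's model):

    ```python
    # Simplified point estimate of component unavailability with fault coverage.
    # Illustrative only: lambda_f, mttr, and the test interval are assumed.
    lambda_f = 1e-5          # failure rate [1/h]
    coverage = 0.95          # integrated fault coverage (detection -> fail-safe)
    mttr = 8.0               # mean time to repair a detected fault [h]
    t_test = 730.0           # periodic surveillance test interval [h]

    u_detected = coverage * lambda_f * mttr                 # found, repaired quickly
    u_undetected = (1 - coverage) * lambda_f * t_test / 2   # latent until next test
    unavailability = u_detected + u_undetected
    print(f"unavailability = {unavailability:.2e}")
    ```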

  9. A multi-dimensional functional principal components analysis of EEG data.

    Science.gov (United States)

    Hasenstab, Kyle; Scheffler, Aaron; Telesca, Donatello; Sugar, Catherine A; Jeste, Shafali; DiStefano, Charlotte; Şentürk, Damla

    2017-09-01

    The electroencephalography (EEG) data created in event-related potential (ERP) experiments have a complex high-dimensional structure. Each stimulus presentation, or trial, generates an ERP waveform which is an instance of functional data. The experiments are made up of sequences of multiple trials, resulting in longitudinal functional data; moreover, responses are recorded at multiple electrodes on the scalp, adding an electrode dimension. Traditional EEG analyses involve multiple simplifications of this structure to increase the signal-to-noise ratio, effectively collapsing the functional and longitudinal components by identifying key features of the ERPs and averaging them across trials. Motivated by an implicit learning paradigm used in autism research in which the functional, longitudinal, and electrode components all have critical interpretations, we propose a multidimensional functional principal components analysis (MD-FPCA) technique which does not collapse any of the dimensions of the ERP data. The proposed decomposition is based on separation of the total variation into subject and subunit level variation which are further decomposed in a two-stage functional principal components analysis. The proposed methodology is shown to be useful for modeling longitudinal trends in the ERP functions, leading to novel insights into the learning patterns of children with Autism Spectrum Disorder (ASD) and their typically developing peers as well as comparisons between the two groups. Finite sample properties of MD-FPCA are further studied via extensive simulations.
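
    Each stage of such a decomposition reduces to a functional PCA of a set of curves. A one-dimensional sketch on simulated ERP-like waveforms is shown below (a minimal sketch only; the paper's MD-FPCA additionally handles the longitudinal and electrode dimensions):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_trials, n_time = 200, 256
    t = np.linspace(0, 1, n_time)

    # Simulated ERP-like curves: two smooth modes plus noise.
    scores = rng.standard_normal((n_trials, 2))
    curves = (scores[:, :1] * np.sin(2 * np.pi * t)
              + scores[:, 1:] * np.cos(4 * np.pi * t)
              + 0.3 * rng.standard_normal((n_trials, n_time)))

    mean_curve = curves.mean(axis=0)
    centered = curves - mean_curve
    cov = centered.T @ centered / (n_trials - 1)     # sample covariance
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]
    evals, evecs = evals[order], evecs[:, order]

    # Fraction of variance explained by the leading eigenfunctions.
    fve = evals[:4] / evals.sum()
    print("FVE of first 4 components:", np.round(fve, 3))
    pc_scores = centered @ evecs[:, :2]              # per-trial FPC scores
    ```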

  10. 'Building Block Approach' for Structural Analysis of Thermoplastic Composite Components for Automotive Applications

    Science.gov (United States)

    Carello, M.; Amirth, N.; Airale, A. G.; Monti, M.; Romeo, A.

    2017-12-01

    Advanced thermoplastic prepreg composite materials stand out for their ability to support complex designs with high specific strength and stiffness. This makes them an excellent choice for lightweight automotive components that reduce mass and increase fuel efficiency, while maintaining the functionality and mechanical characteristics of traditional thermosetting prepregs, with a production cycle time and recyclability suited to mass-production manufacturing. Currently, the aerospace and automotive sectors struggle to carry out accurate Finite Element (FE) component analyses and in some cases are unable to validate the obtained results. In this study, structural Finite Element Analysis (FEA) has been performed on a thermoplastic fiber-reinforced component designed and manufactured through an integrated injection molding process, which consists of thermoforming the prepreg laminate and overmolding the other parts. This process, usually referred to as hybrid molding, reinforces the zones subjected to additional stresses with thermoformed thermoplastic prepreg as required, overmolded with a short-fiber thermoplastic resin in a single process. This paper aims to establish an accurate predictive model on a rational basis and an innovative methodology for the structural analysis of thermoplastic composite components, validated by comparison with experimental test results.

  11. A comparison of response spectrum and direct integration analysis methods as applied to a nuclear component support structure

    International Nuclear Information System (INIS)

    Bryan, B.J.; Flanders, H.E. Jr.

    1992-01-01

    Seismic qualification of Class I nuclear components is accomplished using a variety of analytical methods. This paper compares the results of time-history dynamic analyses of a heat exchanger support structure using response spectrum and time-history direct integration analysis methods. Dynamic analysis is performed on the detailed component models using the two methods. A nonlinear elastic model is used for both the response spectrum and direct integration methods. A nonlinear model, which includes friction and nonlinear springs, is analyzed using time-history input by direct integration. The loads from the three cases are compared
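
    For context on the two methods being compared, direct integration advances the equation of motion step by step, whereas the response-spectrum method combines modal peak responses. A minimal single-degree-of-freedom Newmark (average acceleration) integrator is sketched below; the system and ground motion are invented for illustration:

    ```python
    import numpy as np

    def newmark_sdof(m, c, k, ag, dt, beta=0.25, gamma=0.5):
        """Time-history response of an SDOF system to ground acceleration ag,
        using the incremental Newmark method (Chopra's formulation)."""
        n = len(ag)
        u = np.zeros(n); v = np.zeros(n); a = np.zeros(n)
        p = -m * np.asarray(ag)                       # effective earthquake force
        a[0] = (p[0] - c * v[0] - k * u[0]) / m
        k_hat = k + gamma * c / (beta * dt) + m / (beta * dt**2)
        A = m / (beta * dt) + gamma * c / beta
        B = m / (2 * beta) + dt * (gamma / (2 * beta) - 1) * c
        for i in range(n - 1):
            dp_hat = (p[i + 1] - p[i]) + A * v[i] + B * a[i]
            du = dp_hat / k_hat
            dv = ((gamma / (beta * dt)) * du - (gamma / beta) * v[i]
                  + dt * (1 - gamma / (2 * beta)) * a[i])
            da = du / (beta * dt**2) - v[i] / (beta * dt) - a[i] / (2 * beta)
            u[i + 1] = u[i] + du
            v[i + 1] = v[i] + dv
            a[i + 1] = a[i] + da
        return u, v, a

    # Example: 5% damped oscillator, T = 0.5 s, synthetic ground motion.
    m, T, zeta = 1.0, 0.5, 0.05
    k = m * (2 * np.pi / T) ** 2
    c = 2 * zeta * np.sqrt(k * m)
    dt = 0.01
    ag = 0.3 * 9.81 * np.sin(2 * np.pi * 2.0 * np.arange(0, 10, dt))
    u, v, a = newmark_sdof(m, c, k, ag, dt)
    print("peak displacement:", abs(u).max())
    ```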

  12. A Principle Component Analysis of Galaxy Properties from a Large, Gas-Selected Sample

    Directory of Open Access Journals (Sweden)

    Yu-Yen Chang

    2012-01-01

    concluded that this is in conflict with the CDM model. Considering the importance of the issue, we reinvestigate the problem using principal component analysis on a fivefold larger sample and additional near-infrared data. We use databases from the Arecibo Legacy Fast Arecibo L-band Feed Array Survey for the gas properties, the Sloan Digital Sky Survey for the optical properties, and the Two Micron All Sky Survey for the near-infrared properties. We confirm that the parameters are indeed correlated, with a single physical parameter able to explain 83% of the variation. When color (g-i) is included, the first component still dominates, but a second principal component develops. In addition, the near-infrared color (i-J) shows an obvious second principal component that might provide evidence of complex old star formation. Based on our data, we suggest that it is premature to pronounce the failure of the CDM model, and this motivates more theoretical work.
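
    The headline check here, how much of the variance the first principal component captures, is easy to reproduce on any standardized table of galaxy properties (sketch below on randomly generated stand-in data; the column structure is invented):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Stand-in for a table of galaxy properties (rows = galaxies,
    # columns = e.g. HI mass, luminosity, diameter, colors).
    rng = np.random.default_rng(2)
    driver = rng.standard_normal((500, 1))            # one dominant parameter
    props = driver @ rng.standard_normal((1, 6)) + 0.4 * rng.standard_normal((500, 6))

    X = StandardScaler().fit_transform(props)         # PCA on standardized data
    pca = PCA().fit(X)
    print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
    # A first ratio near 0.8 would mirror the paper's finding that a single
    # physical parameter explains ~83% of the variation.
    ```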

  13. A Filtering of Incomplete GNSS Position Time Series with Probabilistic Principal Component Analysis

    Science.gov (United States)

    Gruszczynski, Maciej; Klos, Anna; Bogusz, Janusz

    2018-04-01

    For the first time, we introduce probabilistic principal component analysis (pPCA) for the spatio-temporal filtering of Global Navigation Satellite System (GNSS) position time series, estimating and removing the Common Mode Error (CME) without interpolation of missing values. We used data from International GNSS Service (IGS) stations which contributed to the latest International Terrestrial Reference Frame (ITRF2014). The efficiency of the proposed algorithm was tested on simulated incomplete time series, and CME was then estimated for a set of 25 stations located in Central Europe. The newly applied pPCA was compared with previously used algorithms, showing that this method resolves the problem of proper spatio-temporal filtering of GNSS time series with differing observation time spans. We show that filtering can be carried out with the pPCA method even when two time series in the dataset share fewer than 100 common epochs of observations. The 1st Principal Component (PC) explained more than 36% of the total variance of the time series residuals (series with the deterministic model removed), which, compared with the variances of the other PCs (less than 8%), means that common signals are significant in GNSS residuals. A clear improvement in the spectral indices of the power-law noise was noticed for the Up component, reflected by an average shift towards white noise from -0.98 to -0.67 (30%). We observed a significant average reduction in the uncertainty of station velocities estimated from filtered residuals: 35, 28 and 69% for the North, East, and Up components, respectively. The CME series were also analyzed with respect to environmental mass loading influences on the filtering results. Subtracting environmental loading models from the GNSS residuals reduces the estimated CME variance by 20 and 65% for the horizontal and vertical components, respectively.
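
    The underlying filtering idea, estimate the common mode from the leading principal component of the stacked residuals and subtract it, can be sketched on gap-free data as follows (the paper's contribution is the probabilistic variant that also tolerates missing epochs; this simplified version assumes complete series):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_epochs, n_stations = 2000, 25

    # Synthetic residuals: a network-wide common signal plus station noise.
    cme_true = np.cumsum(0.05 * rng.standard_normal(n_epochs))     # common mode
    resid = cme_true[:, None] + rng.standard_normal((n_epochs, n_stations))

    centered = resid - resid.mean(axis=0)
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    pc1 = np.outer(U[:, 0] * s[0], Vt[0])           # rank-1 CME estimate
    filtered = centered - pc1                        # spatially filtered residuals

    var_explained = s[0]**2 / np.sum(s**2)
    print(f"PC1 explains {100 * var_explained:.1f}% of residual variance")
    ```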

  14. Clustering of metabolic and cardiovascular risk factors in the polycystic ovary syndrome: a principal component analysis.

    Science.gov (United States)

    Stuckey, Bronwyn G A; Opie, Nicole; Cussons, Andrea J; Watts, Gerald F; Burke, Valerie

    2014-08-01

    Polycystic ovary syndrome (PCOS) is a prevalent condition with heterogeneity of clinical features and cardiovascular risk factors that implies multiple aetiological factors and possible outcomes. To reduce a set of correlated variables to a smaller number of uncorrelated and interpretable factors that may delineate subgroups within PCOS or suggest pathogenetic mechanisms. We used principal component analysis (PCA) to examine the endocrine and cardiometabolic variables associated with PCOS defined by the National Institutes of Health (NIH) criteria. Data were retrieved from the database of a single clinical endocrinologist. We included women with PCOS (N = 378) who were not taking the oral contraceptive pill or other sex hormones, lipid lowering medication, metformin or other medication that could influence the variables of interest. PCA was performed retaining those factors with eigenvalues of at least 1.0. Varimax rotation was used to produce interpretable factors. We identified three principal components. In component 1, the dominant variables were homeostatic model assessment (HOMA) index, body mass index (BMI), high density lipoprotein (HDL) cholesterol and sex hormone binding globulin (SHBG); in component 2, systolic blood pressure, low density lipoprotein (LDL) cholesterol and triglycerides; in component 3, total testosterone and LH/FSH ratio. These components explained 37%, 13% and 11% of the variance in the PCOS cohort respectively. Multiple correlated variables from patients with PCOS can be reduced to three uncorrelated components characterised by insulin resistance, dyslipidaemia/hypertension or hyperandrogenaemia. Clustering of risk factors is consistent with different pathogenetic pathways within PCOS and/or differing cardiometabolic outcomes.
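
    The reported pipeline, PCA with the Kaiser criterion (eigenvalues of at least 1.0) followed by varimax rotation, can be sketched as below (synthetic stand-in data; the rotation is the standard varimax iteration, not code from the study):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    def varimax(loadings, gamma=1.0, max_iter=50, tol=1e-6):
        """Standard varimax rotation of a factor loading matrix."""
        p, k = loadings.shape
        R = np.eye(k)
        d = 0.0
        for _ in range(max_iter):
            L = loadings @ R
            u, s, vt = np.linalg.svd(
                loadings.T @ (L**3 - (gamma / p) * L @ np.diag(np.diag(L.T @ L))))
            R = u @ vt
            d_new = s.sum()
            if d_new < d * (1 + tol):
                break
            d = d_new
        return loadings @ R

    # Stand-in for the clinical variables (HOMA, BMI, HDL, SHBG, SBP, ...):
    # three latent factors drive nine observed variables.
    rng = np.random.default_rng(4)
    factors = rng.standard_normal((378, 3))
    X = StandardScaler().fit_transform(
        factors @ rng.standard_normal((3, 9)) + rng.standard_normal((378, 9)))

    pca = PCA().fit(X)
    keep = pca.explained_variance_ > 1.0               # Kaiser criterion
    loadings = pca.components_[keep].T * np.sqrt(pca.explained_variance_[keep])
    rotated = varimax(loadings)
    print("components retained:", keep.sum())
    print("rotated loadings (variables x components):\n", np.round(rotated, 2))
    ```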

  15. Ecological, psychological, and cognitive components of reading difficulties: testing the component model of reading in fourth graders across 38 countries.

    Science.gov (United States)

    Chiu, Ming Ming; McBride-Chang, Catherine; Lin, Dan

    2012-01-01

    The authors tested the component model of reading (CMR) among 186,725 fourth grade students from 38 countries (45 regions) on five continents by analyzing the 2006 Progress in International Reading Literacy Study data using measures of ecological (country, family, school, teacher), psychological, and cognitive components. More than 91% of the differences in student difficulty occurred at the country (61%) and classroom (30%) levels (ecological), with less than 9% at the student level (cognitive and psychological). All three components were negatively associated with reading difficulties: cognitive (student's early literacy skills), ecological (family characteristics [socioeconomic status, number of books at home, and attitudes about reading], school characteristics [school climate and resources]), and psychological (students' attitudes about reading, reading self-concept, and being a girl). These results extend the CMR by demonstrating the importance of multiple levels of factors for reading deficits across diverse cultures.

  16. [Quality evaluation of rhubarb dispensing granules based on multi-component simultaneous quantitative analysis and bioassay].

    Science.gov (United States)

    Tan, Peng; Zhang, Hai-Zhu; Zhang, Ding-Kun; Wu, Shan-Na; Niu, Ming; Wang, Jia-Bo; Xiao, Xiao-He

    2017-07-01

    This study evaluates the quality of Chinese formula granules by the combined use of multi-component simultaneous quantitative analysis and bioassay, using rhubarb dispensing granules as the model drug for a demonstrative study. An ultra-high performance liquid chromatography (UPLC) method was adopted for the simultaneous quantitative determination of 10 anthraquinone derivatives (such as aloe emodin-8-O-β-D-glucoside) in rhubarb dispensing granules; the purgative biopotency of different batches of rhubarb dispensing granules was determined with a compound diphenoxylate tablets-induced mouse constipation model; the blood-activating biopotency of different batches was determined with an in vitro rat antiplatelet aggregation model; SPSS 22.0 statistical software was used for correlation analysis between the 10 anthraquinone derivatives and the purgative and blood-activating biopotencies. The multi-component simultaneous quantitative analysis showed great differences in chemical characterization, and certain differences in purgative and blood-activating biopotency, among the 10 batches of rhubarb dispensing granules. The correlation analysis showed that the intensity of purgative biopotency was significantly correlated with the content of conjugated anthraquinone glycosides (P < 0.05). The combined use of quantitative analysis and bioassay can achieve objective quantification and a more comprehensive reflection of the overall quality differences among batches of rhubarb dispensing granules.

  17. Analysis Components of the Digital Consumer Behavior in Romania

    Directory of Open Access Journals (Sweden)

    Cristian Bogdan Onete

    2016-08-01

    This article investigates Romanian consumer behavior in the context of the evolution of online shopping. Given that online stores are a profitable business model in electronic commerce, and because the relationship between the Romanian digital consumer and the decision to purchase products or services on the Internet has not been sufficiently explored, this study aims to identify the specific features of this new type of consumer and to examine the level of online shopping in Romania. A documentary study was therefore carried out using statistical data on the volume and number of online shopping transactions in Romania during 2010-2014, the types of products and services Romanians search the Internet for, and the demographics of these shoppers. In addition, to study online consumer behavior more closely and to interpret the detailed secondary data, exploratory research was performed using a structured questionnaire with five closed questions: the gender category respondents belong to (male or female); the decision to purchase products or services in the virtual environment in the past year; the source of the goods or services purchased (Romanian or foreign sites); the factors that led consumers to buy products from foreign sites; and the categories of products purchased through online transactions from foreign merchants. The questionnaire was distributed electronically to users of the Facebook social network, and the collected data were processed directly in the official Facebook survey app. The results of this research, correlated with the official data, reveal the following characteristics of the digital consumer in Romania: an atypical European consumer, more interested in online purchases from abroad, influenced by the quality and price of the purchase. This paper undertook a careful analysis of the online purchasing phenomenon and also

  18. Building Component Library: An Online Repository to Facilitate Building Energy Model Creation; Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Fleming, K.; Long, N.; Swindler, A.

    2012-05-01

    This paper describes the Building Component Library (BCL), the U.S. Department of Energy's (DOE) online repository of building components that can be directly used to create energy models. This comprehensive, searchable library consists of components and measures as well as the metadata which describes them. The library is also designed to allow contributors to easily add new components, providing a continuously growing, standardized list of components for users to draw upon.

  19. Analysis and test of insulated components for rotary engine

    Science.gov (United States)

    Badgley, Patrick R.; Doup, Douglas; Kamo, Roy

    1989-01-01

    The direct-injection stratified-charge (DISC) rotary engine, while attractive for aviation applications due to its light weight, multifuel capability, and potentially low fuel consumption, has until now required a bulky and heavy liquid-cooling system. NASA-Lewis has undertaken the development of a thermodynamically superior adiabatic rotary engine that obviates the cooling system by employing state-of-the-art thermal barrier coatings to insulate engine components. The thermal barrier coating material for the cast aluminum, stainless steel, and ductile cast iron components was plasma-sprayed zirconia. DISC engine tests indicate effective thermal-barrier-based heat loss reduction, but call for better matching of the coefficients of thermal expansion of the materials and better tribological properties in the coatings used.

  20. COMPONENTS OF THE UNEMPLOYMENT ANALYSIS IN CONTEMPORARY ECONOMIES

    Directory of Open Access Journals (Sweden)

    Ion Enea-SMARANDACHE

    2010-03-01

    Unemployment is a permanent phenomenon in most countries of the world, whether they have advanced or developing economies, and its implications and consequences are increasingly complex, so that, in practice, the fight against unemployment becomes a fundamental objective of economic policy. In this context, the authors set out the essential components of unemployment analysis, with the aim of identifying measures and instruments to counteract it.

  1. Analysis of Femtosecond Timing Noise and Stability in Microwave Components

    International Nuclear Information System (INIS)

    2011-01-01

    To probe chemical dynamics, X-ray pump-probe experiments trigger a change in a sample with an optical laser pulse, followed by an X-ray probe. At the Linac Coherent Light Source (LCLS), timing differences between the optical pulse and the X-ray probe have been observed with an accuracy as low as 50 femtoseconds. This sets a lower bound on the number of frames one can arrange over a time scale to recreate a 'movie' of the chemical reaction. The timing system is based on phase measurements of signals corresponding to the two laser pulses; these measurements are made using a double-balanced mixer for detection. To increase the accuracy of the system, this paper studies parameters affecting mixer-based phase detection systems, such as signal input power, noise levels, and temperature drift, and the effect these parameters have on components such as mixers, splitters, amplifiers, and phase shifters. Noise data taken with a spectrum analyzer show that splitters based on ferrite cores perform with less noise than strip-line splitters. The data also show that noise in specific mixers does not correspond with changes in sensitivity per input power level. Temperature drift is seen to lie between 1 and 27 fs/°C for all of the components tested. Results show that components using more metallic conductor tend to exhibit more noise as well as more temperature drift. The scale of these effects is large enough that specific care should be given to choosing components and designing the housing of high-precision microwave mixing systems for use in detection systems such as the LCLS. With these improvements, the timing accuracy can be improved beyond what is currently possible.

  2. Bayesian Correlated Component Analysis for inference of joint EEG activation

    DEFF Research Database (Denmark)

    Poulsen, Andreas Trier; Kamronn, Simon Due; Parra, Lucas

    2014-01-01

    We propose a probabilistic generative multi-view model to test the representational universality of human information processing. The model is tested on simulated data and on a well-established benchmark EEG dataset.

  3. Modelling the effect of mixture components on permeation through skin.

    Science.gov (United States)

    Ghafourian, T; Samaras, E G; Brooks, J D; Riviere, J E

    2010-10-15

    A vehicle influences the concentration of penetrant within the membrane, affecting its diffusivity in the skin and rate of transport. Despite the huge amount of effort devoted to understanding and modelling the skin absorption of chemicals, a reliable estimation of skin penetration potential from formulations remains a challenging objective. In this investigation, quantitative structure-activity relationships (QSAR) were employed to relate the skin permeation of compounds to the chemical properties of the mixture ingredients and the molecular structures of the penetrants. The skin permeability dataset consisted of permeability coefficients of 12 different penetrants, each blended in 24 different solvent mixtures, measured from finite-dose diffusion cell studies using porcine skin. Stepwise regression analysis resulted in a QSAR employing two penetrant descriptors and one solvent property. The penetrant descriptors were the octanol/water partition coefficient, logP, and the ninth-order path molecular connectivity index; the solvent property was the difference between boiling and melting points. The negative relationship between skin permeability coefficient and logP was attributed to the fact that most of the drugs in this particular dataset are extremely lipophilic in comparison with the compounds in the common skin permeability datasets used in QSAR. The findings show that compounds formulated in vehicles with small gaps between boiling and melting points can be expected to show higher permeation through skin. The QSAR was validated internally, using a leave-many-out procedure, giving a mean absolute error of 0.396. The chemical space of the dataset was compared with that of the known skin permeability datasets and gaps were identified for future skin permeability measurements.
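
    A minimal sketch of the kind of QSAR fitted here, regressing log permeability on the two penetrant descriptors and the solvent property with cross-validation (all data below are randomly generated stand-ins, and the coefficients are chosen only to match the signs reported):

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    n = 288   # e.g. 12 penetrants x 24 solvent mixtures

    logP = rng.uniform(1, 6, n)              # penetrant lipophilicity
    chi9 = rng.uniform(0, 3, n)              # 9th-order path connectivity index
    bp_mp_gap = rng.uniform(50, 250, n)      # solvent boiling - melting point [K]

    # Simulated response consistent with the paper's signs: permeability
    # falls with logP and rises as the boiling-melting gap narrows.
    log_kp = -0.4 * logP + 0.2 * chi9 - 0.005 * bp_mp_gap + rng.normal(0, 0.3, n)

    X = np.column_stack([logP, chi9, bp_mp_gap])
    qsar = LinearRegression().fit(X, log_kp)
    print("coefficients:", np.round(qsar.coef_, 3))
    print("CV MAE:", -cross_val_score(qsar, X, log_kp,
                                      scoring="neg_mean_absolute_error",
                                      cv=5).mean())
    ```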

  4. MODELING OF SYSTEM COMPONENTS OF EDUCATIONAL PROGRAMS IN HIGH SCHOOL

    Directory of Open Access Journals (Sweden)

    E. K. Samerkhanova

    2016-01-01

    Based on the principles of systems studies, this paper describes the components of the educational-program management system. Educational program management is a set of content, process, resource, subject-activity, and performance-evaluation components that together ensure the integrity of integration processes at all levels of education. Stability and development in the management of educational programs are achieved by identifying and securing social norms and the status of the educational institution's program managers, so as to ensure the achievement of modern quality of education. Content management provides relevant educational content in accordance with the requirements of educational and professional standards; process management ensures the efficient organization and rational distribution of process flows; resource management provides the optimal distribution of personnel, informational-methodological, and material-technical resources of the educational program; contingent management provides for subject-activity interaction among participants in the educational process; and quality management ensures the quality of educational services.

  5. Implementing components of the routines-based model

    OpenAIRE

    McWilliam, Robin; Fernández Valero, Rosa

    2015-01-01

    The routines-based model (RBM) comprises 17 components that can generally be grouped into practices related to (a) functional assessment and intervention planning (for example, the Routines-Based Interview), (b) organization of services (including location and staffing), (c) service delivery to children and families (using a consultative approach with families and teachers, and integrated therapy), (d) classroom organization (for example, classroom zones), and (e) supervision and training through ch...

  6. Virtual enterprise model for the electronic components business in the Nuclear Weapons Complex

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, T.J.; Long, K.S.; Sayre, J.A. [Sandia National Labs., Albuquerque, NM (United States); Hull, A.L. [Sandia National Labs., Livermore, CA (United States); Carey, D.A.; Sim, J.R.; Smith, M.G. [Allied-Signal Aerospace Co., Kansas City, MO (United States). Kansas City Div.

    1994-08-01

    The electronic components business within the Nuclear Weapons Complex spans organizational and Department of Energy contractor boundaries. An assessment of the current processes indicates a need for fundamentally changing the way electronic components are developed, procured, and manufactured. A model is provided based on a virtual enterprise that recognizes distinctive competencies within the Nuclear Weapons Complex and at the vendors. The model incorporates changes that reduce component delivery cycle time and improve cost effectiveness while delivering components of the appropriate quality.

  7. Variability of indoor and outdoor VOC measurements: An analysis using variance components

    International Nuclear Information System (INIS)

    Jia, Chunrong; Batterman, Stuart A.; Relyea, George E.

    2012-01-01

    This study examines concentrations of volatile organic compounds (VOCs) measured inside and outside of 162 residences in southeast Michigan, U.S.A. Nested analyses apportioned four sources of variation: city, residence, season, and measurement uncertainty. Indoor measurements were dominated by seasonal and residence effects, accounting for 50 and 31%, respectively, of the total variance. Contributions from measurement uncertainty (<20%) and city effects (<10%) were small. For outdoor measurements, season, city and measurement variation accounted for 43, 29 and 27% of the variance, respectively, while residence location had negligible impact (<2%). These results show that, to obtain representative estimates of indoor concentrations, measurements in multiple seasons are required. In contrast, outdoor VOC concentrations can be characterized with multi-seasonal measurements at centralized locations. Error models showed that uncertainties at low concentrations might obscure the effects of other factors. Variance component analyses can be used to interpret existing measurements, design effective exposure studies, and determine whether instrumentation and protocols are satisfactory. - Highlights: ► The variability of VOC measurements was partitioned using nested analysis. ► Indoor VOCs were primarily controlled by seasonal and residence effects. ► Outdoor VOC levels were homogeneous within neighborhoods. ► Measurement uncertainty was high for many outdoor VOCs. ► Variance component analysis is useful for designing effective sampling programs.
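
    A variance-component decomposition of this kind can be sketched with a mixed model in statsmodels (synthetic data; only two of the study's four sources of variation are shown, and all numbers and column names are invented):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(6)
    n_res, n_sea = 40, 4
    residence = np.repeat(np.arange(n_res), n_sea)    # 40 homes x 4 seasons
    season = np.tile(np.arange(n_sea), n_res)

    # Simulated log-concentrations: residence effect + season effect + error.
    conc = (2.0
            + rng.normal(0, 1.0, n_res)[residence]
            + rng.normal(0, 1.3, n_sea)[season]
            + rng.normal(0, 0.5, n_res * n_sea))

    df = pd.DataFrame({"conc": conc, "residence": residence,
                       "season": season, "g": 1})

    # Crossed variance components: one all-encompassing group, with
    # residence and season entering as variance-component formulas.
    vcf = {"residence": "0 + C(residence)", "season": "0 + C(season)"}
    model = smf.mixedlm("conc ~ 1", df, groups=df["g"],
                        vc_formula=vcf, re_formula="0")
    fit = model.fit()
    print(fit.vcomp)   # estimated residence and season variance components
    print(fit.scale)   # residual (measurement) variance
    ```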

  8. Effect of Model Selection on Computed Water Balance Components

    NARCIS (Netherlands)

    Jhorar, R.K.; Smit, A.A.M.F.R.; Roest, C.W.J.

    2009-01-01

    Soil water flow modelling approaches used in four selected on-farm water management models, namely CROPWAT, FAIDS, CERES and SWAP, are compared through numerical experiments. The soil water simulation approaches used in the first three models are reformulated to incorporate all evapotranspiration

  9. Exploring component-based approaches in forest landscape modeling

    Science.gov (United States)

    H. S. He; D. R. Larsen; D. J. Mladenoff

    2002-01-01

    Forest management issues are increasingly required to be addressed in a spatial context, which has led to the development of spatially explicit forest landscape models. The numerous processes, complex spatial interactions, and diverse applications in spatial modeling make the development of forest landscape models difficult for any single research group. New...

  10. Scalable Power-Component Models for Concept Testing

    Science.gov (United States)

    2011-08-17

    The motor speed can be either positive or negative, depending on the propelling or regenerative braking scenario. The simulation provides three ... the machine during generation or regenerative braking. To use the model, the user modifies the motor model criteria parameters by double-clicking ...

  11. Component simulation in problems of calculated model formation of automatic machine mechanisms

    Directory of Open Access Journals (Sweden)

    Telegin Igor

    2017-01-01

    The paper deals with the application of the component simulation method to automating the formation of mechanical system models, with the further possibility of their CAD realization. The purpose of these investigations is to automate the formation of CAD models of high-speed mechanisms in automatic machines and to analyze the dynamic processes occurring in their units, taking into account elasto-inertial properties, power dissipation, gaps in kinematic pairs, friction forces, and design and technological loads. As an example, the paper considers the formalization of the stages in forming a computer model of the cutting mechanism in the AV1818 cold-stamping automatic machine and methods for computing its parameters on the basis of its solid-state model.

  12. Level shift two-components autoregressive conditional heteroscedasticity modelling for WTI crude oil market

    Science.gov (United States)

    Sin, Kuek Jia; Cheong, Chin Wen; Hooi, Tan Siow

    2017-04-01

    This study investigates crude oil volatility using a two-component autoregressive conditional heteroscedasticity (ARCH) model with an abrupt-jump feature. The model is able to capture abrupt jumps, news impact, volatility clustering, long-persistence volatility, and the heavy-tailed error distributions commonly observed in crude oil time series. For the empirical study, we selected the WTI crude oil index from 2000 to 2016. The results show that including multiple abrupt jumps in the ARCH model yields significant improvements in estimation evaluations compared with standard ARCH models. The outcomes of this study can provide useful information for risk management and portfolio analysis in the crude oil markets.
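
    As a baseline for this model family, a standard GARCH(1,1) with Student-t errors can be fitted with the `arch` package as sketched below (a simplified stand-in: the paper's two-component, level-shift specification is not among the package's stock models):

    ```python
    import numpy as np
    import pandas as pd
    from arch import arch_model

    # Stand-in return series; in practice use WTI log-returns, e.g.
    # returns = 100 * np.log(wti_prices).diff().dropna()
    rng = np.random.default_rng(7)
    returns = pd.Series(rng.standard_t(df=5, size=2000))

    # GARCH(1,1) with Student-t errors captures clustering and heavy tails;
    # the level-shift/two-component structure would be layered on top of this.
    am = arch_model(returns, vol="GARCH", p=1, q=1, dist="t")
    res = am.fit(disp="off")
    print(res.summary())
    forecast = res.forecast(horizon=5)
    print(forecast.variance.iloc[-1])   # 5-step-ahead variance forecast
    ```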

  13. A molecular systems approach to modelling human skin pigmentation: identifying underlying pathways and critical components.

    Science.gov (United States)

    Raghunath, Arathi; Sambarey, Awanti; Sharma, Neha; Mahadevan, Usha; Chandra, Nagasuma

    2015-04-29

    Ultraviolet radiation (UV) is an environmental stress for human skin and results in melanogenesis, the pigment melanin having protective effects against UV-induced damage. This involves a dynamic and complex regulation of various biological processes that results in the expression of melanin in the outermost layers of the epidermis, where it can exert its protective effect. A comprehensive understanding of the underlying cross-talk among different signalling molecules and cell types is only possible through a systems perspective. Increasing incidences of both melanoma and non-melanoma skin cancers necessitate a better, systems-level understanding of UV-mediated effects on skin pigmentation, so as to ultimately evolve knowledge-based strategies for efficient protection and prevention of skin diseases. A network model for UV-mediated skin pigmentation in the epidermis was constructed and subjected to shortest-path analysis, and virtual knock-outs were carried out to identify essential signalling components. The model consists of 265 components (nodes) and 429 directed interactions among them, capturing the manner in which one component influences another and channels information. Through shortest-path analysis, we identify novel signalling pathways relevant to pigmentation. Virtual knock-outs, or perturbations of specific nodes in the network, have led to the identification of alternate modes of signalling as well as the determination of essential nodes in the process. The model presented provides a comprehensive picture of UV-mediated signalling manifesting in human skin pigmentation. A systems perspective helps provide a holistic purview of the interconnections and complexity in the processes leading to pigmentation. The model described here is extensive yet amenable to expansion as new data are gathered. Through this study, we provide a list of important proteins essential
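
    The two analyses described, shortest paths and virtual knock-outs, map directly onto standard graph operations, sketched below with networkx on a toy pathway (the nodes and edges are invented placeholders, not the paper's 265-node network):

    ```python
    import networkx as nx

    # Toy directed signalling network (hypothetical edges for illustration).
    G = nx.DiGraph()
    G.add_edges_from([
        ("UV", "p53"), ("p53", "POMC"), ("POMC", "alpha-MSH"),
        ("alpha-MSH", "MC1R"), ("MC1R", "cAMP"), ("cAMP", "MITF"),
        ("MITF", "TYR"), ("TYR", "melanin"),
        ("UV", "NO"), ("NO", "cGMP"), ("cGMP", "MITF"),  # alternate route
    ])

    print(nx.shortest_path(G, "UV", "melanin"))

    # Virtual knock-out: remove a node and check whether signalling survives.
    for target in ["MC1R", "MITF"]:
        H = G.copy()
        H.remove_node(target)
        reachable = nx.has_path(H, "UV", "melanin")
        print(f"knock-out {target}: UV -> melanin path remains? {reachable}")
    ```

    In this toy network, knocking out MC1R leaves the alternate route intact, while knocking out MITF severs all paths, marking it as an essential node in the sense used above.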

  14. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis

    Science.gov (United States)

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Due to advances in sensor technology, growing volumes of large medical image data make it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes, and the characterization of disease progression. At the same time, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing the irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which in practice are difficult to determine. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both the liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms current sparse clustering algorithms in image cluster analysis. PMID:26196383
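
    The overall pipeline, reduce each image to a few functional principal component scores and then cluster the scores, can be sketched as follows (a plain SVD of vectorized images stands in for the paper's two-dimensional FPCA, and the images are synthetic):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(8)
    n_images, h, w = 120, 32, 32

    # Two synthetic image classes differing in a smooth spatial pattern.
    yy, xx = np.mgrid[0:h, 0:w] / h
    pattern_a, pattern_b = np.sin(np.pi * yy), np.sin(np.pi * xx)
    labels_true = rng.integers(0, 2, n_images)
    imgs = np.where(labels_true[:, None, None] == 0, pattern_a, pattern_b)
    imgs = imgs + 0.5 * rng.standard_normal((n_images, h, w))

    # "2D FPCA" stand-in: SVD of the vectorized, centered images.
    X = imgs.reshape(n_images, -1)
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :5] * s[:5]                 # leading FPC scores per image

    labels_pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
    agreement = max(np.mean(labels_pred == labels_true),
                    np.mean(labels_pred != labels_true))
    print(f"cluster/label agreement: {agreement:.2f}")
    ```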

  15. Modeling dynamics of biological and chemical components of aquatic ecosystems

    International Nuclear Information System (INIS)

    Lassiter, R.R.

    1975-05-01

    To provide capability to model aquatic ecosystems or their subsystems as needed for particular research goals, a modeling strategy was developed. Submodels of several processes common to aquatic ecosystems were developed or adapted from previously existing ones. Included are submodels for photosynthesis as a function of light and depth, biological growth rates as a function of temperature, dynamic chemical equilibrium, feeding and growth, and various types of losses to biological populations. These submodels may be used as modules in the construction of models of subsystems or ecosystems. A preliminary model for the nitrogen cycle subsystem was developed using the modeling strategy and applicable submodels. (U.S.)

  16. Compressive Online Robust Principal Component Analysis with Multiple Prior Information

    DEFF Research Database (Denmark)

    Van Luong, Huynh; Deligiannis, Nikos; Seiler, Jürgen

    -rank components. Unlike conventional batch RPCA, which processes all the data directly, our method considers a small set of measurements taken per data vector (frame). Moreover, our method incorporates multiple prior information signals, namely previously reconstructed frames, to improve the separation and thereafter updates the prior information for the next frame. Using experiments on synthetic data, we evaluate the separation performance of the proposed algorithm. In addition, we apply the proposed algorithm to online video foreground and background separation from compressive measurements. The results show
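
    For reference, the batch RPCA problem that this work extends can be solved with the classic principal component pursuit iteration (sketch below via ADMM on a toy low-rank-plus-sparse matrix; the authors' compressive, online, prior-aided variant is considerably more involved):

    ```python
    import numpy as np

    def shrink(X, tau):
        """Elementwise soft-thresholding."""
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def svt(X, tau):
        """Singular value thresholding."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def rpca_pcp(M, n_iter=200):
        """Batch robust PCA (principal component pursuit) via ADMM:
        minimize ||L||_* + lam * ||S||_1 subject to L + S = M."""
        m, n = M.shape
        lam = 1.0 / np.sqrt(max(m, n))
        mu = m * n / (4.0 * np.abs(M).sum())
        S = np.zeros_like(M); Y = np.zeros_like(M)
        for _ in range(n_iter):
            L = svt(M - S + Y / mu, 1.0 / mu)
            S = shrink(M - L + Y / mu, lam / mu)
            Y = Y + mu * (M - L - S)
            if np.linalg.norm(M - L - S) <= 1e-7 * np.linalg.norm(M):
                break
        return L, S

    # Toy video-like matrix: static low-rank background + sparse foreground.
    rng = np.random.default_rng(9)
    bg = np.outer(rng.standard_normal(60), rng.standard_normal(100))  # rank 1
    fg = (rng.random((60, 100)) < 0.05) * 5.0
    L, S = rpca_pcp(bg + fg)
    print("recovered rank of L:", np.linalg.matrix_rank(L, tol=1e-6))
    ```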

  17. Seismic fragility analysis of structural components for HFBR facilities

    International Nuclear Information System (INIS)

    Park, Y.J.; Hofmayer, C.H.

    1992-01-01

    The paper presents a summary of recently completed seismic fragility analyses of the HFBR facilities. Based on a detailed review of past PRA studies, various refinements were made to the strength and ductility evaluation of structural components. Available laboratory test data were analyzed to evaluate the formulations used to predict the ultimate strength and deformation capacities of steel, reinforced concrete, and masonry structures. The biases and uncertainties were evaluated within the framework of the fragility evaluation methods widely accepted in the nuclear industry. A few examples of fragility calculations are also included to illustrate the use of the presented formulations

  18. The ethical component of professional competence in nursing: an analysis.

    Science.gov (United States)

    Paganini, Maria Cristina; Yoshikawa Egry, Emiko

    2011-07-01

    The purpose of this article is to initiate a philosophical discussion about the ethical component of professional competence in nursing from the perspective of Brazilian nurses. Specifically, this article discusses professional competence in nursing practice in the Brazilian health context, based on two different conceptual frameworks. The first framework is derived from the idealistic and traditional approach while the second views professional competence through the lens of historical and dialectical materialism theory. The philosophical analyses show that the idealistic view of professional competence differs greatly from practice. Combining nursing professional competence with philosophical perspectives becomes a challenge when ideals are opposed by the reality and implications of everyday nursing practice.

  19. Analysis and Classification of Acoustic Emission Signals During Wood Drying Using the Principal Component Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Ho Yang [Korea Research Institute of Standards and Science, Daejeon (Korea, Republic of); Kim, Ki Bok [Chungnam National University, Daejeon (Korea, Republic of)

    2003-06-15

    In this study, acoustic emission (AE) signals due to surface cracking and moisture movement in flat-sawn boards of oak (Quercus variabilis) during drying under ambient conditions were analyzed and classified using principal component analysis. The AE signals corresponding to surface cracking showed higher peak amplitudes and peak frequencies, and shorter rise times, than those corresponding to moisture movement. To reduce the multicollinearity among AE features and to extract the significant AE parameters, correlation analysis was performed. Over 99% of the variance of the AE parameters could be accounted for by the first four principal components. The classification feasibility and success rate were investigated for two statistical classifiers, one with six independent variables (AE parameters) and one with six principal components. The statistical classifier based on AE parameters showed a success rate of 70.0%, while the classifier based on principal components showed a success rate of 87.5%, considerably higher than that of the AE-parameter classifier.
