Energy Technology Data Exchange (ETDEWEB)
Clegg, Samuel M [Los Alamos National Laboratory]; Barefield, James E [Los Alamos National Laboratory]; Wiens, Roger C [Los Alamos National Laboratory]; Sklute, Elizabeth [Mt. Holyoke College]; Dyar, Melinda D [Mt. Holyoke College]
2008-01-01
Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by chemical matrix effects. These matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, so that a series of calibration standards similar to the unknown can be employed. In this paper, three new Multivariate Analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.
Dual Component Removable Partial Denture shows improved ...
African Journals Online (AJOL)
STORAGESEVER
2009-02-18
Qi, Haikun; Huang, Feng; Zhou, Hongmei; Chen, Huijun
2017-03-01
k-t principal component analysis (k-t PCA) is a distinguished method for high spatiotemporal resolution dynamic MRI. To further improve the accuracy of k-t PCA, a combination with partial parallel imaging (PPI), k-t PCA/SENSE, has been tested. However, k-t PCA/SENSE suffers from long reconstruction time and limited improvement. This study aims to improve the combination of k-t PCA and PPI in both reconstruction speed and accuracy. A sequential combination scheme called k-t PCA GROWL (GRAPPA operator for wider readout line) was proposed. The GRAPPA operator was performed before k-t PCA to extend each readout line into a wider band, which improved the condition of the encoding matrix in the following k-t PCA reconstruction. k-t PCA GROWL was tested and compared with k-t PCA and k-t PCA/SENSE on cardiac imaging. k-t PCA GROWL consistently resulted in better image quality compared with k-t PCA/SENSE at high acceleration factors for both retrospectively and prospectively undersampled cardiac imaging, with a much lower computation cost. The improvement in image quality became greater with increasing acceleration factor. By sequentially combining the GRAPPA operator and k-t PCA, the proposed k-t PCA GROWL method outperformed k-t PCA/SENSE in both reconstruction speed and accuracy, suggesting that k-t PCA GROWL is a better combination scheme than k-t PCA/SENSE. Magn Reson Med 77:1058-1067, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
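The core assumption behind k-t PCA, namely that the dynamics of an image series are spanned by a few temporal principal components, can be sketched with a plain SVD on a toy "Casorati" matrix (voxels by time frames). This is only the low-rank idea on synthetic data, not the k-t PCA GROWL pipeline (no undersampling, GRAPPA operator, or x-f transform):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic dynamic series: n_voxels x n_frames matrix whose temporal
# dynamics are spanned by 2 smooth basis functions (toy data).
n_voxels, n_frames, rank = 64, 40, 2
t = np.linspace(0.0, 1.0, n_frames)
basis = np.vstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
weights = rng.standard_normal((n_voxels, rank))
series = weights @ basis + 0.01 * rng.standard_normal((n_voxels, n_frames))

# k-t PCA's core constraint: the dynamics live in a low-dimensional
# temporal subspace.  Extract it with an SVD and project back.
U, s, Vt = np.linalg.svd(series, full_matrices=False)
low_rank = U[:, :rank] * s[:rank] @ Vt[:rank]

rel_err = np.linalg.norm(series - low_rank) / np.linalg.norm(series)
print(f"relative error of rank-{rank} model: {rel_err:.4f}")
```

In the actual method the temporal basis is learned from fully sampled training data and then used to regularize the undersampled reconstruction; the GROWL step improves the conditioning of that inverse problem.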
Dual Component Removable Partial Denture shows improved ...
African Journals Online (AJOL)
Dual Component Removable Partial Denture (DuCo RPD) is composed of a double base; lower and upper. The lower base, where the artificial teeth are attached, acts as a support and is in contact with the alveolar ridges and oral mucosa. Clasps are designed on the upper base, which acts towards the retention and ...
International Nuclear Information System (INIS)
Gu Haiwei; Pan Zhengzheng; Xi Bowei; Asiago, Vincent; Musselman, Brian; Raftery, Daniel
2011-01-01
Nuclear magnetic resonance (NMR) spectroscopy and mass spectrometry (MS) are the two most commonly used analytical tools in metabolomics, and their complementary nature makes the combination particularly attractive. A combined analytical approach can improve the potential for providing reliable methods to detect metabolic profile alterations in biofluids or tissues caused by disease, toxicity, etc. In this paper, ¹H NMR spectroscopy and direct analysis in real time (DART)-MS were used for the metabolomics analysis of serum samples from breast cancer patients and healthy controls. Principal component analysis (PCA) of the NMR data showed that the first principal component (PC1) scores could be used to separate cancer from normal samples. However, no such obvious clustering could be observed in the PCA score plot of the DART-MS data, even though DART-MS can provide a rich and informative metabolic profile. Using a modified multivariate statistical approach, the DART-MS data were then reevaluated by orthogonal signal correction (OSC) pretreated partial least squares (PLS), in which the Y matrix in the regression was set to the PC1 score values from the NMR data analysis. This approach, and a similar one using the first latent variable from PLS-DA of the NMR data, resulted in a significant improvement of the separation between the disease and normal samples, and a metabolic profile related to breast cancer could be extracted from DART-MS. The new approach allows the disease classification to be expressed on a continuum as opposed to a binary scale and thus better represents the disease and healthy classifications. An improved metabolic profile obtained by combining MS and NMR by this approach may be useful to achieve more accurate disease detection and gain more insight regarding disease mechanisms and biology.
Towards Cognitive Component Analysis
DEFF Research Database (Denmark)
Hansen, Lars Kai; Ahrendt, Peter; Larsen, Jan
2005-01-01
Cognitive component analysis (COCA) is here defined as the process of unsupervised grouping of data such that the ensuing group structure is well-aligned with that resulting from human cognitive activity. We have earlier demonstrated that independent components analysis is relevant for representing...
Fluid description of multi-component solar partially ionized plasma
International Nuclear Information System (INIS)
Khomenko, E.; Collados, M.; Vitas, N.; Díaz, A.
2014-01-01
We derive a self-consistent formalism for the description of multi-component partially ionized solar plasma, by means of coupled equations for the charged and neutral components for an arbitrary number of chemical species, and for the radiation field. All approximations and assumptions are carefully considered. A generalized Ohm's law is derived for the single-fluid and two-fluid formalisms. Our approach is analytical, with some order-of-magnitude supporting calculations. After the general equations are developed, we particularize them to some frequently considered cases, such as the interaction of matter and radiation.
Multiscale principal component analysis
International Nuclear Information System (INIS)
Akinduko, A A; Gorban, A N
2014-01-01
Principal component analysis (PCA) is an important tool in exploring data. The conventional approach to PCA leads to a solution which favours structures with large variances. This is sensitive to outliers and can obfuscate interesting underlying structures. One of the equivalent definitions of PCA is that it seeks the subspaces that maximize the sum of squared pairwise distances between data projections. This definition opens up more flexibility in the analysis of principal components, which is useful in enhancing PCA. In this paper we introduce scales into PCA by maximizing only the sum of pairwise distances between projections for pairs of datapoints with distances within a chosen interval of values [l,u]. The resulting principal component decompositions in Multiscale PCA depend on the point (l,u) on the plane, and for each point we define projectors onto principal components. Cluster analysis of these projectors reveals the structures in the data at various scales. Each structure is described by the eigenvectors at the medoid point of the cluster which represents the structure. We also use the distortion of projections as a criterion for choosing an appropriate scale, especially for data with outliers. This method was tested on both artificial and real data. For data with multiscale structures, the method was able to reveal the different structures of the data and also to reduce the effect of outliers in the principal component analysis.
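A minimal sketch of the pairwise-distance formulation: restricting the scatter matrix to pairs whose distance lies in [l, u] recovers different principal directions at different scales. Toy two-dimensional data are used; the clustering-of-projectors and medoid steps of the full method are not implemented:

```python
import numpy as np

rng = np.random.default_rng(3)

def scale_restricted_pc(X, l, u):
    """First principal direction maximizing squared projected distances
    over pairs whose distance lies in [l, u] (Multiscale PCA sketch)."""
    M = np.zeros((X.shape[1], X.shape[1]))
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            d = X[i] - X[j]
            if l <= np.linalg.norm(d) <= u:
                M += np.outer(d, d)
    # leading eigenvector of the pair-restricted scatter matrix
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, -1]

# Two tight clusters: within-cluster spread along y, cluster centers
# separated along x.  Small-scale pairs see the y structure; large-scale
# (cross-cluster) pairs see the x structure.
centers = np.array([[0.0, 0.0], [10.0, 0.0]])
X = np.vstack([c + np.column_stack([0.1 * rng.standard_normal(50),
                                    rng.standard_normal(50)])
               for c in centers])

small = scale_restricted_pc(X, 0.0, 4.0)    # within-cluster scale
large = scale_restricted_pc(X, 6.0, 100.0)  # between-cluster scale
print("small-scale PC:", np.round(np.abs(small), 2))
print("large-scale PC:", np.round(np.abs(large), 2))
```

Choosing [l, u] = [0, ∞) recovers ordinary PCA, since the unrestricted sum of squared pairwise projected distances is proportional to the projected variance.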
DEFF Research Database (Denmark)
Feng, Ling
2008-01-01
This dissertation concerns the investigation of the consistency of statistical regularities in a signaling ecology and human cognition, while inferring appropriate actions for a speech-based perceptual task. It is based on unsupervised Independent Component Analysis providing a rich spectrum...... of audio contexts along with pattern recognition methods to map components to known contexts. It also involves looking for the right representations for auditory inputs, i.e. the data analytic processing pipelines invoked by human brains. The main ideas refer to Cognitive Component Analysis, defined...... as the process of unsupervised grouping of generic data such that the ensuing group structure is well-aligned with that resulting from human cognitive activity. Its hypothesis runs ecologically: features which are essentially independent in a context defined ensemble, can be efficiently coded as sparse...
Energy Technology Data Exchange (ETDEWEB)
Lee, Youngbok [Department of Chemistry, College of Natural Sciences, Hanyang University Haengdang-Dong, Seoul 133-791 (Korea, Republic of); Chung, Hoeil [Department of Chemistry, College of Natural Sciences, Hanyang University Haengdang-Dong, Seoul 133-791 (Korea, Republic of)]. E-mail: hoeil@hanyang.ac.kr; Arnold, Mark A. [Optical Science and Technology Center and Department of Chemistry, University of Iowa, Iowa City, IA 52242 (United States)
2006-07-14
Pure component selectivity analysis (PCSA) was successfully utilized to enhance the robustness of a partial least squares (PLS) model by examining the selectivity of a given component with respect to the other components. The samples used in this study were composed of NH₄OH, H₂O₂ and H₂O, a popular etchant solution in the electronics industry. Corresponding near-infrared (NIR) spectra (9000-7500 cm⁻¹) were used to build PLS models. The selective determination of H₂O₂ without influence from NH₄OH and H₂O was a key issue, since its molecular structure is similar to that of H₂O and NH₄OH also has a hydroxyl functional group. The best spectral ranges for the determination of NH₄OH and H₂O₂ were found with the use of moving-window PLS (MW-PLS), and the corresponding selectivity was examined by pure component selectivity analysis. The PLS calibration for NH₄OH was free from interference from the other components due to the presence of its unique NH absorption bands. Since the spectral variation from H₂O₂ was broadly overlapping and much less distinct than that from NH₄OH, the selectivity and prediction performance of the H₂O₂ calibration varied sensitively with the spectral ranges and number of factors used. PCSA, based on the comparison between regression vectors from PLS and the net analyte signal (NAS), was an effective method to prevent over-fitting of the H₂O₂ calibration. A robust H₂O₂ calibration model with minimal interference from the other components was developed. PCSA should be included as a standard method in PLS calibrations, where prediction error alone is the usual measure of performance.
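The net analyte signal at the heart of PCSA has a simple closed form: the part of a pure-component spectrum orthogonal to the span of the other pure spectra. A sketch on illustrative Gaussian bands (not the paper's measured NIR spectra):

```python
import numpy as np

def net_analyte_signal(S, k):
    """Part of pure-component spectrum k orthogonal to the span of the
    other pure spectra: NAS_k = (I - S_-k S_-k^+) s_k."""
    others = np.delete(S, k, axis=1)
    projector = others @ np.linalg.pinv(others)   # onto the interferent span
    return (np.eye(S.shape[0]) - projector) @ S[:, k]

# Toy pure spectra (channels x components): a distinct band, a broad
# band, and a band that heavily overlaps the broad one.
x = np.linspace(0.0, 1.0, 100)
s_distinct = np.exp(-((x - 0.30) / 0.05) ** 2)
s_broad = np.exp(-((x - 0.60) / 0.20) ** 2)
s_overlap = 0.8 * np.exp(-((x - 0.65) / 0.25) ** 2)
S = np.column_stack([s_distinct, s_broad, s_overlap])

# Fraction of each pure signal that is selective (orthogonal to the
# rest).  PCSA compares the PLS regression vector to this NAS direction.
fracs = [np.linalg.norm(net_analyte_signal(S, k)) / np.linalg.norm(S[:, k])
         for k in range(3)]
print([round(f, 2) for f in fracs])
```

The well-resolved band keeps nearly all of its pure signal, while the heavily overlapped bands retain only a small selective fraction; this is the situation that makes a calibration like the H₂O₂ one fragile and prone to over-fitting.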
Euler principal component analysis
Liwicki, Stephan; Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja
Principal Component Analysis (PCA) is perhaps the most prominent learning tool for dimensionality reduction in pattern recognition and computer vision. However, the ℓ2-norm employed by standard PCA is not robust to outliers. In this paper, we propose a kernel PCA method for fast and robust PCA,
Bayesian Independent Component Analysis
DEFF Research Database (Denmark)
Winther, Ole; Petersen, Kaare Brandt
2007-01-01
In this paper we present an empirical Bayesian framework for independent component analysis. The framework provides estimates of the sources, the mixing matrix and the noise parameters, and is flexible with respect to choice of source prior and the number of sources and sensors. Inside the engine...
Independent component analysis: recent advances
Hyvärinen, Aapo
2013-01-01
Independent component analysis is a probabilistic method for learning a linear transform of a random vector. The goal is to find components that are maximally independent and non-Gaussian (non-normal). Its fundamental difference to classical multi-variate statistical methods is in the assumption of non-Gaussianity, which enables the identification of original, underlying components, in contrast to classical methods. The basic theory of independent component analysis was mainly developed in th...
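The basic setting can be reproduced in a few lines: two non-Gaussian sources are mixed linearly, and FastICA (one standard estimation algorithm for ICA, here via scikit-learn) recovers them up to order, sign and scale:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)

# Two non-Gaussian sources (a sine wave and uniform noise) mixed
# linearly: the classic setting ICA is designed to undo.
t = np.linspace(0.0, 8.0, 2000)
sources = np.column_stack([np.sin(2 * np.pi * t),
                           rng.uniform(-1, 1, t.size)])
A = np.array([[1.0, 0.5], [0.4, 1.0]])   # unknown mixing matrix
mixed = sources @ A.T

# Non-Gaussianity makes the original sources identifiable, which
# Gaussian-based methods such as PCA cannot achieve.
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(mixed)

# Each recovered component correlates almost perfectly with one source.
corr = np.abs(np.corrcoef(sources.T, recovered.T)[:2, 2:])
print(np.round(corr, 2))
```

With Gaussian sources the mixing matrix would only be identifiable up to rotation, which is exactly the fundamental difference from classical multivariate methods that the abstract highlights.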
Numerical Analysis of Partial Differential Equations
Lui, S H
2011-01-01
A balanced guide to the essential techniques for solving elliptic partial differential equations Numerical Analysis of Partial Differential Equations provides a comprehensive, self-contained treatment of the quantitative methods used to solve elliptic partial differential equations (PDEs), with a focus on the efficiency as well as the error of the presented methods. The author utilizes coverage of theoretical PDEs, along with the numerical solution of linear systems and various examples and exercises, to supply readers with an introduction to the essential concepts in the numerical analysis
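As a minimal illustration of the kind of method the book treats, here is the standard second-order finite-difference discretization of a one-dimensional elliptic model problem (a generic textbook example, not taken from the book itself):

```python
import numpy as np

# Model elliptic problem: -u''(x) = f(x) on (0,1), u(0) = u(1) = 0,
# discretized with the standard second-order central difference.
n = 99                      # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

# Choose f so the exact solution is known: u(x) = sin(pi x).
f = np.pi ** 2 * np.sin(np.pi * x)

# Tridiagonal stiffness matrix (2 on the diagonal, -1 off) scaled by h^-2.
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h ** 2
u = np.linalg.solve(A, f)

err = np.max(np.abs(u - np.sin(np.pi * x)))
print(f"max error with h = {h:.3f}: {err:.2e}")
```

Halving h reduces the error by roughly a factor of four, the O(h²) convergence whose analysis (and efficient solution of the resulting linear systems) is the book's subject.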
Shifted Independent Component Analysis
DEFF Research Database (Denmark)
Mørup, Morten; Madsen, Kristoffer Hougaard; Hansen, Lars Kai
2007-01-01
Delayed mixing is a problem of theoretical interest and practical importance, e.g., in speech processing, bio-medical signal analysis and financial data modelling. Most previous analyses have been based on models with integer shifts, i.e., shifts by a number of samples, and have often been carried...
Multiview Bayesian Correlated Component Analysis
DEFF Research Database (Denmark)
Kamronn, Simon Due; Poulsen, Andreas Trier; Hansen, Lars Kai
2015-01-01
are identical. Here we propose a hierarchical probabilistic model that can infer the level of universality in such multiview data, from completely unrelated representations, corresponding to canonical correlation analysis, to identical representations as in correlated component analysis. This new model, which...... we denote Bayesian correlated component analysis, evaluates favorably against three relevant algorithms in simulated data. A well-established benchmark EEG data set is used to further validate the new model and infer the variability of spatial representations across multiple subjects....
Li, Jiangtong; Luo, Yongdao; Dai, Honglin
2018-01-01
Water is the source of life and the essential foundation of all life. With the development of industrialization, water pollution has become more and more frequent, directly affecting human survival and development. Water quality detection is one of the necessary measures to protect water resources. Ultraviolet (UV) spectral analysis is an important research method in the field of water quality detection, in which partial least squares regression (PLSR) is becoming the predominant technique; however, in some special cases, PLSR analysis produces considerable errors. In order to solve this problem, the traditional principal component regression (PCR) analysis method was improved by using the principle of PLSR in this paper. The experimental results show that for some special experimental data sets, the improved PCR analysis method performs better than PLSR. PCR and PLSR are the focus of this paper. Firstly, principal component analysis (PCA) is performed in MATLAB to reduce the dimensionality of the spectral data; on the basis of a large number of experiments, the optimized principal components, which carry most of the original data information, are extracted by using the principle of PLSR. Secondly, linear regression analysis of the principal components is carried out with the Statistical Package for the Social Sciences (SPSS), from which the coefficients and relations of the principal components can be obtained. Finally, the same water spectral data set is calculated by both PLSR and improved PCR, and the two results are analyzed and compared: improved PCR and PLSR are similar for most data, but improved PCR is better than PLSR for data near the detection limit. Both PLSR and improved PCR can be used in UV spectral analysis of water, but for data near the detection limit, the improved PCR result is better than that of PLSR.
International Nuclear Information System (INIS)
Reynolds, Jacob G.
2013-01-01
Partial molar properties are the changes occurring when the fraction of one component is varied while the mole fractions of all other components change proportionally. They have many practical and theoretical applications in chemical thermodynamics. Partial molar properties of chemical mixtures are difficult to measure because the component mole fractions must sum to one, so a change in the fraction of one component must be offset by a change in one or more other components. Given that more than one component fraction is changing at a time, it is difficult to assign a change in measured response to a change in a single component. In this study, the Component Slope Linear Model (CSLM), a model previously published in the statistics literature, is shown to have coefficients that correspond to the intensive partial molar properties. If a measured property is plotted against the mole fraction of a component while keeping the proportions of all other components constant, the slope at any given point on this curve is the partial molar property for that constituent. Actually plotting this graph has been used to determine partial molar properties for many years. The CSLM directly includes this slope in a model that predicts properties as a function of the component mole fractions. The model is demonstrated by applying it to constant-pressure heat capacity data from the NaOH-NaAl(OH)4-H2O system, a system that simplifies Hanford nuclear waste. The partial molar properties of H2O, NaOH, and NaAl(OH)4 are determined. The equivalence of the CSLM and the graphical method is verified by comparing results determined by the two methods. The CSLM has previously been used to predict the liquidus temperature of spinel crystals precipitated from Hanford waste glass. Those model coefficients are re-interpreted here as the partial molar spinel liquidus temperatures of the glass components.
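The slope-as-coefficient idea can be seen in a few lines: for a property that is linear in the mole fractions, a no-intercept least-squares fit on the simplex returns the partial molar properties directly as its coefficients. The numbers below are illustrative placeholders, not the Hanford heat-capacity data, and the real CSLM handles non-ideal (composition-dependent) behavior as well:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy mixture: heat capacity assumed ideal (linear in mole fractions),
# with hypothetical partial molar heat capacities for three components
# standing in for H2O, NaOH and NaAl(OH)4.
partial_molar_cp = np.array([75.3, 87.0, 146.0])   # J/(mol K), illustrative

# Random compositions on the simplex (mole fractions sum to 1).
x = rng.random((50, 3))
x /= x.sum(axis=1, keepdims=True)
cp = x @ partial_molar_cp + 0.1 * rng.standard_normal(50)

# Because the fractions sum to one, the model has no separate intercept;
# the fitted coefficients are the partial molar properties, which is the
# content of the Component Slope Linear Model in the ideal case.
coef, *_ = np.linalg.lstsq(x, cp, rcond=None)
print(np.round(coef, 1))
```

The coefficients recovered from the noisy mixture data match the slopes one would read off the graphical (property-versus-fraction) construction, which is the equivalence the study verifies.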
Partial wave analysis using graphics processing units
Energy Technology Data Exchange (ETDEWEB)
Berger, Niklaus; Liu Beijiang; Wang Jike, E-mail: nberger@ihep.ac.c [Institute of High Energy Physics, Chinese Academy of Sciences, 19B Yuquan Lu, Shijingshan, 100049 Beijing (China)
2010-04-01
Partial wave analysis is an important tool for determining resonance properties in hadron spectroscopy. For large data samples however, the un-binned likelihood fits employed are computationally very expensive. At the Beijing Spectrometer (BES) III experiment, an increase in statistics compared to earlier experiments of up to two orders of magnitude is expected. In order to allow for a timely analysis of these datasets, additional computing power with short turnover times has to be made available. It turns out that graphics processing units (GPUs) originally developed for 3D computer games have an architecture of massively parallel single instruction multiple data floating point units that is almost ideally suited for the algorithms employed in partial wave analysis. We have implemented a framework for tensor manipulation and partial wave fits called GPUPWA. The user writes a program in pure C++ whilst the GPUPWA classes handle computations on the GPU, memory transfers, caching and other technical details. In conjunction with a recent graphics processor, the framework provides a speed-up of the partial wave fit by more than two orders of magnitude compared to legacy FORTRAN code.
Exploiting partial knowledge for efficient model analysis
Macedo, Nuno; Cunha, Alcino; Pessoa, Eduardo José Dias
2017-01-01
The advancement of constraint solvers and model checkers has enabled the effective analysis of high-level formal specification languages. However, these typically handle a specification in an opaque manner, amalgamating all its constraints in a single monolithic verification task, which often proves to be a performance bottleneck. This paper addresses this issue by proposing a solving strategy that exploits user-provided partial knowledge, namely by assigning symbolic bounds to the problem’s ...
Liao, Xiang; Wang, Qing; Fu, Ji-hong; Tang, Jun
2015-09-01
This work was undertaken to establish a quantitative analysis model which can rapidly determine the content of linalool and linalyl acetate in Xinjiang lavender essential oil. A total of 165 lavender essential oil samples were measured using near-infrared (NIR) absorption spectroscopy. After analyzing the NIR absorption peaks of all samples, it was found that lavender essential oil carries abundant chemical information, and the interference of random noise is relatively low, in the spectral interval of 7100-4500 cm⁻¹; thus, the PLS models were constructed on this interval for further analysis. Eight abnormal samples were eliminated. Through a clustering method, the remaining 157 lavender essential oil samples were divided into a calibration set of 105 samples and a validation set of 52 samples. Gas chromatography-mass spectrometry (GC-MS) was used to determine the content of linalool and linalyl acetate in the lavender essential oil. A matrix was then established from the GC-MS reference data for the two compounds in combination with the original NIR data. In order to optimize the model, different pretreatment methods were used to preprocess the raw NIR spectra and contrast their spectral filtering effects; after analyzing the quantitative model results for linalool and linalyl acetate, the root mean square errors of prediction (RMSEP) with orthogonal signal correction (OSC) were 0.226 and 0.558, respectively, making OSC the optimum pretreatment method. In addition, forward interval partial least squares (FiPLS) was used to exclude wavelength points which are unrelated to the determined compounds or which present nonlinear correlation; finally, 8 spectral intervals totaling 160 wavelength points were retained as the dataset. The data sets optimized by OSC-FiPLS were combined with partial least squares (PLS) to establish a rapid quantitative analysis model for determining the content of linalool and linalyl acetate in Xinjiang lavender essential oil, numbers of hidden variables of two
Generalized structured component analysis a component-based approach to structural equation modeling
Hwang, Heungsun
2014-01-01
Winner of the 2015 Sugiyama Meiko Award (Publication Award) of the Behaviormetric Society of Japan Developed by the authors, generalized structured component analysis is an alternative to two longstanding approaches to structural equation modeling: covariance structure analysis and partial least squares path modeling. Generalized structured component analysis allows researchers to evaluate the adequacy of a model as a whole, compare a model to alternative specifications, and conduct complex analyses in a straightforward manner. Generalized Structured Component Analysis: A Component-Based Approach to Structural Equation Modeling provides a detailed account of this novel statistical methodology and its various extensions. The authors present the theoretical underpinnings of generalized structured component analysis and demonstrate how it can be applied to various empirical examples. The book enables quantitative methodologists, applied researchers, and practitioners to grasp the basic concepts behind this new a...
Jiao, T; Chang, T; Caputo, A A
2009-03-01
To photoelastically examine load transfer by unilateral distal extension removable partial dentures with supporting and retentive components made of the lower stiffness polyacetal resins. A mandibular photoelastic model, with edentulous space distal to the right second premolar and missing the left first molar, was constructed to determine the load transmission characteristics of a unilateral distal extension base removable partial denture. Individual simulants were used for tooth structure, periodontal ligament, and alveolar bone. Three designs were fabricated: a major connector and clasps made from polyacetal resin, a metal framework as the major connector with polyacetal resin clasp and denture base, and a traditional metal framework I-bar removable partial denture. Simulated posterior bilateral and unilateral occlusal loads were applied to the removable partial dentures. Under bilateral and left side unilateral loading, the highest stress was observed adjacent to the left side posterior teeth with the polyacetal removable partial denture. The lowest stress was seen with the traditional metal framework. Unilateral loads on the right edentulous region produced similar distributed stress under the denture base with all three designs but a somewhat higher intensity with the polyacetal framework. The polyacetal resin removable partial denture concentrated the highest stresses to the abutment and the bone. The traditional metal framework I-bar removable partial denture most equitably distributed force. The hybrid design that combined a metal framework and polyacetal clasp and denture base may be a viable alternative when aesthetics are of primary concern.
Functional Generalized Structured Component Analysis.
Suk, Hye Won; Hwang, Heungsun
2016-12-01
An extension of Generalized Structured Component Analysis (GSCA), called Functional GSCA, is proposed to analyze functional data that are considered to arise from an underlying smooth curve varying over time or other continua. GSCA has been geared for the analysis of multivariate data. Accordingly, it cannot deal with functional data that often involve different measurement occasions across participants and a large number of measurement occasions that exceed the number of participants. Functional GSCA addresses these issues by integrating GSCA with spline basis function expansions that project infinite-dimensional curves onto a finite-dimensional space. For parameter estimation, Functional GSCA minimizes a penalized least squares criterion by using an alternating penalized least squares estimation algorithm. The usefulness of Functional GSCA is illustrated with gait data.
Interpretable functional principal component analysis.
Lin, Zhenhua; Wang, Liangliang; Cao, Jiguo
2016-09-01
Functional principal component analysis (FPCA) is a popular approach to explore major sources of variation in a sample of random curves. These major sources of variation are represented by functional principal components (FPCs). The intervals where the values of FPCs are significant are interpreted as where sample curves have major variations. However, these intervals are often hard for naïve users to identify, because of the vague definition of "significant values". In this article, we develop a novel penalty-based method to derive FPCs that are only nonzero precisely in the intervals where the values of FPCs are significant, whence the derived FPCs possess better interpretability than the FPCs derived from existing methods. To compute the proposed FPCs, we devise an efficient algorithm based on projection deflation techniques. We show that the proposed interpretable FPCs are strongly consistent and asymptotically normal under mild conditions. Simulation studies confirm that with a competitive performance in explaining variations of sample curves, the proposed FPCs are more interpretable than the traditional counterparts. This advantage is demonstrated by analyzing two real datasets, namely, electroencephalography data and Canadian weather data. © 2015, The International Biometric Society.
On Bayesian Principal Component Analysis
Czech Academy of Sciences Publication Activity Database
Šmídl, Václav; Quinn, A.
2007-01-01
Roč. 51, č. 9 (2007), s. 4101-4123 ISSN 0167-9473 R&D Projects: GA MŠk(CZ) 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords: Principal component analysis (PCA) * Variational Bayes (VB) * von Mises-Fisher distribution Subject RIV: BC - Control Systems Theory Impact factor: 1.029, year: 2007 http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V8V-4MYD60N-6&_user=10&_coverDate=05%2F15%2F2007&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=b8ea629d48df926fe18f9e5724c9003a
Structural analysis of nuclear components
International Nuclear Information System (INIS)
Ikonen, K.; Hyppoenen, P.; Mikkola, T.; Noro, H.; Raiko, H.; Salminen, P.; Talja, H.
1983-05-01
The report describes the activities accomplished in the project 'Structural Analysis Project of Nuclear Power Plant Components' during the years 1974-1982 in the Nuclear Engineering Laboratory at the Technical Research Centre of Finland. The objective of the project has been to develop Finnish expertise in structural mechanics related to nuclear engineering. The report describes the starting point of the research work, the organization of the project and the research activities in various subareas. Further, the work done with computer codes is described, as are the problems to which the developed expertise has been applied. Finally, the diploma works, publications and work reports, which are mainly in Finnish, are listed to give a view of the content of the project. (author)
Partial correlation analysis method in ultrarelativistic heavy-ion collisions
Olszewski, Adam; Broniowski, Wojciech
2017-11-01
We argue that statistical data analysis of two-particle longitudinal correlations in ultrarelativistic heavy-ion collisions may be efficiently carried out with the technique of partial covariance. In this method, the spurious event-by-event fluctuations due to imprecise centrality determination are eliminated by projecting out the component of the covariance influenced by the centrality fluctuations. We bring up the relationship of the partial covariance to the conditional covariance. Importantly, in the superposition approach, where hadrons are produced independently from a collection of sources, the framework allows us to impose centrality constraints on the number of sources rather than on the hadrons, thereby unfolding the trivial fluctuations from statistical hadronization and focusing better on the initial-state physics. We show, using simulated data from hydrodynamics followed by statistical hadronization, that the technique is practical and very simple to use, giving insight into the correlations generated in the initial stage. We also discuss the issues related to separation of the short- and long-range components of the correlation functions and show that in our example the short-range component from the resonance decays is largely reduced by considering pions of the same sign. We demonstrate the method explicitly on cases where centrality is determined with a single central control bin or with two peripheral control bins.
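As a numerical aside, the partial-covariance projection described in this record can be sketched in a few lines (a minimal illustration with invented toy data, not the authors' analysis code):

```python
import numpy as np

def partial_covariance(a, b, c):
    """Partial covariance of a and b with the control variable c projected
    out: cov(a,b) - cov(a,c) cov(c,c)^(-1) cov(c,b)."""
    cov = np.cov(np.vstack([a, b, c]))
    return cov[0, 1] - cov[0, 2] * cov[1, 2] / cov[2, 2]

# Toy model: a and b are correlated only through a shared "centrality" c
rng = np.random.default_rng(0)
c = rng.normal(size=10000)
a = c + 0.1 * rng.normal(size=10000)
b = c + 0.1 * rng.normal(size=10000)
print(round(np.cov(a, b)[0, 1], 2))           # ~1: shared fluctuation dominates
print(round(partial_covariance(a, b, c), 2))  # ~0: spurious component removed
```

With several control variables (e.g. two peripheral control bins), the scalar division generalizes to solving a linear system with the control-variable covariance matrix.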
Experience of partial dismantling and large component removal of light water reactors
International Nuclear Information System (INIS)
Dubourg, M.
1987-01-01
None of the French PWR reactors needs to be decommissioned before the next decade or the early 2000s. However, feasibility studies of decommissioning have been undertaken and several dismantling scenarios have been considered, including the dismantling of four PWR units and the on-site entombment of the active components in a reactor building for interim disposal. In addition to the theoretical evaluation of radwaste volume and activity, several operations of partial dismantling of active components and decontamination activities have been conducted with a view to dismantling, for both PWR and BWR units. From an analysis of the designs of both the 900 and 1300 MWe PWRs, it appears that the design improvements adopted to reduce the occupational dose exposure of maintenance personnel, and the development of automated tools for performing maintenance and repair of major components, contribute to facilitating future dismantling and decommissioning operations.
Barrelet zeros in partial wave analysis
International Nuclear Information System (INIS)
Baker, R.D.
1976-01-01
The formalism of Barrelet zeros is discussed. Spinless scattering is described to introduce the idea, then the more usual case of 0 - 1/2 + → 0 - 1/2 + scattering. The zeros are regarded here only as a means to an end, viz the partial waves. The extraction of these is given in detail, and ambiguities are discussed at length. (author)
Principal component regression analysis with SPSS.
Liu, R X; Kuang, J; Gong, Q; Hou, X L
2003-06-01
The paper introduces the indices of multicollinearity diagnostics, the basic principle of principal component regression and the determination of the 'best' equation method. An example is used to describe how to perform principal component regression analysis with SPSS 10.0, including the complete calculation process of the principal component regression and the operation of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity. A simplified, faster and accurate statistical analysis is achieved through principal component regression with SPSS.
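The same procedure can be sketched outside SPSS; the following is a minimal Python illustration of principal component regression on simulated collinear predictors (all variable names and the retention threshold are hypothetical choices for this toy):

```python
import numpy as np

# Hypothetical collinear predictors: x2 is nearly a copy of x1
rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = 2 * x1 + 0.5 * x3 + 0.1 * rng.normal(size=n)

# Step 1: PCA of the standardized predictors (correlation matrix)
Xs = (X - X.mean(0)) / X.std(0)
eigval, eigvec = np.linalg.eigh(np.corrcoef(Xs.T))
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

# Step 2: keep only components with non-negligible variance
k = int(np.sum(eigval > 0.01 * eigval.sum()))
Z = Xs @ eigvec[:, :k]

# Step 3: ordinary least squares on the retained components
A = np.column_stack([np.ones(n), Z])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print("components kept:", k)   # the near-duplicate direction is dropped
print("R^2:", round(r2, 3))
```

Dropping the near-zero-variance component removes the collinear direction while leaving the fit essentially intact.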
Calculation of partial molar volume of components in supercritical ammonia synthesis system
Institute of Scientific and Technical Information of China (English)
Cunwen WANG; Chuanbo YU; Wen CHEN; Weiguo WANG; Yuanxin WU; Junfeng ZHANG
2008-01-01
The partial molar volumes of components in a supercritical ammonia synthesis system are calculated in detail by the formula for partial molar volume derived from the R-K equation of state under different conditions. The objectives are to understand the phase behavior of the components and to provide theoretical explanation and guidance for probing novel processes of ammonia synthesis under supercritical conditions. The conditions of calculation are H2/N2 = 3, a concentration of NH3 in the synthesis gas ranging from 2% to 15%, a concentration of medium in the supercritical ammonia synthesis system ranging from 20% to 50%, temperature ranging from 243 K to 699 K and pressure ranging from 0.1 MPa to 187 MPa. The results show that the ammonia synthesis system can reach the supercritical state by adding a suitable supercritical medium and then controlling the reaction conditions. It is helpful for supercritical ammonia synthesis that the medium reaches the supercritical state under the corresponding total pressure and compositions near normal temperature, near the critical temperature of the medium, or in the temperature range of industrialized ammonia synthesis.
Model reduction by weighted Component Cost Analysis
Kim, Jae H.; Skelton, Robert E.
1990-01-01
Component Cost Analysis considers any given system driven by a white noise process as an interconnection of different components, and assigns a metric called 'component cost' to each component. These component costs measure the contribution of each component to a predefined quadratic cost function. A reduced-order model of the given system may be obtained by deleting those components that have the smallest component costs. The theory of Component Cost Analysis is extended to include finite-bandwidth colored noises. The results also apply when actuators have dynamics of their own. Closed-form analytical expressions for component costs are also derived for a mechanical system described by its modal data. This is very useful for computing the modal costs of very high order systems. A numerical example for the MINIMAST system is presented.
Fusion-component lifetime analysis
International Nuclear Information System (INIS)
Mattas, R.F.
1982-09-01
A one-dimensional computer code has been developed to examine the lifetime of first-wall and impurity-control components. The code incorporates the operating and design parameters, the material characteristics, and the appropriate failure criteria for the individual components. The major emphasis of the modeling effort has been to calculate the temperature-stress-strain-radiation effects history of a component so that the synergistic effects between sputtering erosion, swelling, creep, fatigue, and crack growth can be examined. The general forms of the property equations are the same for all materials in order to provide the greatest flexibility for materials selection in the code. The individual coefficients within the equations are different for each material. The code is capable of determining the behavior of a plate, composed of either a single or dual material structure, that is either totally constrained or constrained from bending but not from expansion. The code has been utilized to analyze the first walls for FED/INTOR and DEMO and to analyze the limiter for FED/INTOR.
Component of the risk analysis
International Nuclear Information System (INIS)
Martinez, I.; Campon, G.
2013-01-01
The PowerPoint presentation reviews issues such as risk analysis (Codex), risk management, preliminary risk-management activities, the relationship between government and industry, microbiological hazards, and risk communication.
Multi-scale diffuse interface modeling of multi-component two-phase flow with partial miscibility
Kou, Jisheng; Sun, Shuyu
2016-01-01
In this paper, we introduce a diffuse interface model to simulate multi-component two-phase flow with partial miscibility based on a realistic equation of state (e.g. Peng-Robinson equation of state). Because of partial miscibility, thermodynamic
Ghaffar, Farhan A.
2016-11-01
Typical microwave components such as antennas are large in size and occupy considerable space. Since multiple standards are utilized in modern day systems and thus multiple antennas are required, it is best if a single component can be reconfigured or tuned to various bands. Similarly phase shifters to provide beam scanning and polarization reconfigurable antennas are important for modern day congested wireless systems. Tunability of antennas or phase shifting between antenna elements has been demonstrated using various techniques which include magnetically tunable components on ferrite based substrates. Although this method has shown promising results it also has several issues due to the use of large external electromagnets and operation in the magnetically saturated state. These issues include the device being bulky, inefficient, non-integrable and expensive. In this thesis, we have tried to resolve the above mentioned issues of large size and large power requirement by replacing the large electromagnets with embedded bias windings and also by operating the ferrites in the partially magnetized state. New theoretical models and simulation methodology have been used to evaluate the performance of the microwave passive components in the partially magnetized state. A multilayer ferrite Low Temperature Cofired Ceramic (LTCC) tape system has been used to verify the performance experimentally. There exists a good agreement between the theoretical, simulation and measurement results. Tunable antennas with tuning range of almost 10 % and phase shifter with an FoM of 83.2/dB have been demonstrated in this work, however the major contribution is that this has been achieved with bias fields that are 90 % less than the typically reported values in the literature. Finally, polarization reconfigurability has also been demonstrated for a circular patch antenna using a low cost additive manufacturing technique. The results are promising and indicate that highly integrated
Physics analysis of the gang partial rod drive event
International Nuclear Information System (INIS)
Boman, C.; Frost, R.L.
1992-08-01
During the routine positioning of partial-length control rods in Gang 3 on the afternoon of Monday, July 27, 1992, the partial-length rods continued to drive into the reactor even after the operator released the controlling toggle switch. In response to this occurrence, the Safety Analysis and Engineering Services Group (SAEG) requested that the Applied Physics Group (APG) analyze the gang partial rod drive event. Although similar accident scenarios were considered in the analysis for Chapter 15 of the Safety Analysis Report (SAR), APG and SAEG conferred and agreed that this particular type of gang partial-length rod motion event was not included in the SAR. This report details that analysis.
Nutritional and amino acid analysis of raw, partially fermented and ...
African Journals Online (AJOL)
African Journal of Food, Agriculture, Nutrition and Development ... The nutritional and amino acid analysis of raw and fermented seeds of Parkia ... between 4.27 and 8.33 % for the fully fermented and the partially fermented seeds, respectively.
Directory of Open Access Journals (Sweden)
Ramesh Kekunnaya
2013-01-01
Background: The management of Duane retraction syndrome (DRS) is challenging and may become more difficult if an associated accommodative component due to high hyperopia is present. The purpose of this study is to review clinical features and outcomes in patients with partially accommodative esotropia and DRS. Setting and Design: Retrospective, non-comparative case series. Materials and Methods: Six cases of DRS with high hyperopia were reviewed. Results: The mean age at presentation was 1.3 years (range: 0.5-2.5 years). The mean amount of hyperopia was +5 D (range: 3.50-8.50) in both eyes. The mean follow-up period was 7 years (range: 4 months-12 years). Five cases were unilateral while one was bilateral. Four cases underwent vertical rectus muscle transposition (VRT) and one had medial rectus recession prior to presentation; all were given optical correction. Two (50%) of the four patients who underwent vertical rectus transposition developed consecutive exotropia, one of whom did not have spectacles prescribed pre-operatively. All other cases (four) had minimal residual esotropia and face turn at the last follow-up with spectacle correction. Conclusion: Patients with Duane syndrome can have an accommodative component to their esotropia, which is crucial to detect and correct prior to surgery to decrease the risk of long-term over-correction. Occasionally, torticollis in Duane syndrome can be satisfactorily corrected with spectacles alone.
COPD phenotype description using principal components analysis
DEFF Research Database (Denmark)
Roy, Kay; Smith, Jacky; Kolsum, Umme
2009-01-01
BACKGROUND: Airway inflammation in COPD can be measured using biomarkers such as induced sputum and Fe(NO). This study set out to explore the heterogeneity of COPD using biomarkers of airway and systemic inflammation and pulmonary function by principal components analysis (PCA). SUBJECTS AND METHODS: In 127 COPD patients (mean FEV1 61%), pulmonary function, Fe(NO), plasma CRP and TNF-alpha, sputum differential cell counts and sputum IL8 (pg/ml) were measured. Principal components analysis as well as multivariate analysis was performed. RESULTS: PCA identified four main components (% variance ...), with associations between the variables within components 1 and 2. CONCLUSION: COPD is a multi-dimensional disease. Unrelated components of disease were identified, including neutrophilic airway inflammation, which was associated with systemic inflammation, and sputum eosinophils, which were related to increased Fe(NO).
Integrating Data Transformation in Principal Components Analysis
Maadooliat, Mehdi; Huang, Jianhua Z.; Hu, Jianhua
2015-01-01
Principal component analysis (PCA) is a popular dimension reduction method to reduce the complexity and obtain the informative aspects of high-dimensional datasets. When the data distribution is skewed, data transformation is commonly used prior
NEPR Principle Component Analysis - NOAA TIFF Image
National Oceanic and Atmospheric Administration, Department of Commerce — This GeoTiff is a representation of seafloor topography in Northeast Puerto Rico derived from a bathymetry model with a principle component analysis (PCA). The area...
Structured Performance Analysis for Component Based Systems
Salmi , N.; Moreaux , Patrice; Ioualalen , M.
2012-01-01
The Component Based System (CBS) paradigm is now largely used to design software systems. In addition, performance and behavioural analysis remains a required step for the design and the construction of efficient systems. This is especially the case of CBS, which involve interconnected components running concurrent processes. This paper proposes a compositional method for modeling and structured performance analysis of CBS. Modeling is based on Stochastic Well-formed...
Recommended practice for process sampling for partial pressure analysis
International Nuclear Information System (INIS)
Blessing, James E.; Ellefson, Robert E.; Raby, Bruce A.; Brucker, Gerardo A.; Waits, Robert K.
2007-01-01
This Recommended Practice describes and recommends various procedures and types of apparatus for obtaining representative samples of process gases from >10⁻² Pa (10⁻⁴ Torr) for partial pressure analysis using a mass spectrometer. The document was prepared by a subcommittee of the Recommended Practices Committee of the American Vacuum Society. The subcommittee was comprised of vacuum users and manufacturers of mass spectrometer partial pressure analyzers who have practical experience in the sampling of process gas atmospheres.
Statistical approach to partial equilibrium analysis
Wang, Yougui; Stanley, H. E.
2009-04-01
A statistical approach to market equilibrium and efficiency analysis is proposed in this paper. One factor that governs the exchange decisions of traders in a market, named willingness price, is highlighted and constitutes the whole theory. The supply and demand functions are formulated as the distributions of corresponding willing exchange over the willingness price. The laws of supply and demand can be derived directly from these distributions. The characteristics of excess demand function are analyzed and the necessary conditions for the existence and uniqueness of equilibrium point of the market are specified. The rationing rates of buyers and sellers are introduced to describe the ratio of realized exchange to willing exchange, and their dependence on the market price is studied in the cases of shortage and surplus. The realized market surplus, which is the criterion of market efficiency, can be written as a function of the distributions of willing exchange and the rationing rates. With this approach we can strictly prove that a market is efficient in the state of equilibrium.
Constrained principal component analysis and related techniques
Takane, Yoshio
2013-01-01
In multivariate data analysis, regression techniques predict one set of variables from another while principal component analysis (PCA) finds a subspace of minimal dimensionality that captures the largest variability in the data. How can regression analysis and PCA be combined in a beneficial way? Why and when is it a good idea to combine them? What kind of benefits are we getting from them? Addressing these questions, Constrained Principal Component Analysis and Related Techniques shows how constrained PCA (CPCA) offers a unified framework for these approaches.The book begins with four concre
Analysis Method for Integrating Components of Product
Energy Technology Data Exchange (ETDEWEB)
Choi, Jun Ho [Inzest Co. Ltd, Seoul (Korea, Republic of); Lee, Kun Sang [Kookmin Univ., Seoul (Korea, Republic of)
2017-04-15
This paper presents some of the methods used to incorporate the parts constituting a product. A new relation function concept and its structure are introduced to analyze the relationships of component parts. This relation function has three types of information, which can be used to establish a relation function structure. The relation function structure of the analysis criteria was established to analyze and present the data. The priority components determined by the analysis criteria can be integrated. The analysis criteria were divided based on their number and orientation, as well as their direct or indirect characteristic feature. This paper presents a design algorithm for component integration. This algorithm was applied to actual products, and the components inside the product were integrated. Therefore, the proposed algorithm was used to conduct research to improve the brake discs for bicycles. As a result, an improved product similar to the related function structure was actually created.
Analysis Method for Integrating Components of Product
International Nuclear Information System (INIS)
Choi, Jun Ho; Lee, Kun Sang
2017-01-01
This paper presents some of the methods used to incorporate the parts constituting a product. A new relation function concept and its structure are introduced to analyze the relationships of component parts. This relation function has three types of information, which can be used to establish a relation function structure. The relation function structure of the analysis criteria was established to analyze and present the data. The priority components determined by the analysis criteria can be integrated. The analysis criteria were divided based on their number and orientation, as well as their direct or indirect characteristic feature. This paper presents a design algorithm for component integration. This algorithm was applied to actual products, and the components inside the product were integrated. Therefore, the proposed algorithm was used to conduct research to improve the brake discs for bicycles. As a result, an improved product similar to the related function structure was actually created.
Fast grasping of unknown objects using principal component analysis
Lei, Qujiang; Chen, Guangming; Wisse, Martijn
2017-09-01
Fast grasping of unknown objects has a crucial impact on the efficiency of robot manipulation, especially in unfamiliar environments. In order to accelerate the grasping of unknown objects, principal component analysis is utilized to direct the grasping process. In particular, a single-view partial point cloud is constructed and grasp candidates are allocated along the principal axis. Force balance optimization is employed to analyze possible graspable areas. The obtained graspable area with the minimal resultant force is the best zone for the final grasping execution. It is shown that an unknown object can be grasped more quickly provided that the principal axis is determined by principal component analysis of the single-view partial point cloud. To cope with grasp uncertainty, robot motion is used to obtain a new viewpoint. Virtual exploration and experimental tests are carried out to verify this fast grasping algorithm. Both simulation and experimental tests demonstrated excellent performance based on the results of grasping a series of unknown objects. To minimize the grasping uncertainty, the merits of the robot hardware with two 3D cameras can be utilized to complete the partial point cloud. As a result of utilizing the robot hardware, the grasping reliability is highly enhanced. Therefore, this research demonstrates practical significance for increasing grasping speed and thus robot efficiency in unpredictable environments.
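The core step of such a method, extracting the principal axis of a partial point cloud via PCA, can be sketched as follows (an illustrative toy with invented data, not the authors' implementation):

```python
import numpy as np

def principal_axes(points):
    """Principal axes of a 3D point cloud, ordered by decreasing variance,
    via eigen-decomposition of the covariance matrix."""
    centered = points - points.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(centered.T))
    order = np.argsort(eigval)[::-1]
    return eigvec[:, order], eigval[order]

# Toy "single-view partial point cloud": an elongated blob of samples
rng = np.random.default_rng(2)
cloud = rng.normal(size=(500, 3)) * np.array([10.0, 1.0, 0.5])
axes, variances = principal_axes(cloud)
# Grasp candidates would then be spaced along the first (longest) axis
print(np.round(np.abs(axes[:, 0]), 2))   # dominant axis ≈ the x direction
```

Allocating candidate grasp frames along the dominant axis is cheap because the eigen-decomposition of a 3×3 covariance matrix is essentially free compared with full shape reconstruction.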
Component evaluation testing and analysis algorithms.
Energy Technology Data Exchange (ETDEWEB)
Hart, Darren M.; Merchant, Bion John
2011-10-01
The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.
Kou, Jisheng
2017-12-09
A general diffuse interface model with a realistic equation of state (e.g. the Peng-Robinson equation of state) is proposed to describe multi-component two-phase fluid flow based on the principles of the NVT-based framework, which has recently emerged as an attractive alternative to the NPT-based framework for modeling realistic fluids. The proposed model uses the Helmholtz free energy rather than the Gibbs free energy of the NPT-based framework. Departing from the classical routines, we combine the first law of thermodynamics and related thermodynamical relations to derive the entropy balance equation, and then we derive a transport equation of the Helmholtz free energy density. Furthermore, by using the second law of thermodynamics, we derive a set of unified equations for both interfaces and bulk phases that can describe the partial miscibility of multiple fluids. A relation between the pressure gradient and the chemical potential gradients is established, and this relation leads to a new formulation of the momentum balance equation, which demonstrates that chemical potential gradients become the primary driving force of fluid motion. Moreover, we prove that the proposed model satisfies the total (free) energy dissipation with time. For numerical simulation of the proposed model, the key difficulties result from the strong nonlinearity of the Helmholtz free energy density and the tight coupling between molar densities and velocity. To resolve these problems, we propose a novel convex-concave splitting of the Helmholtz free energy density and carefully treat the coupling between molar densities and velocity through physical observations with mathematical rigor. We prove that the proposed numerical scheme preserves the discrete (free) energy dissipation. Numerical tests are carried out to verify the effectiveness of the proposed method.
Beutels, P; Edmunds, W J; Smith, R D
2008-11-01
We argue that traditional health economic analysis is ill-equipped to estimate the cost effectiveness and cost benefit of interventions that aim at controlling and/or preventing public health emergencies of international concern (such as pandemic influenza or severe acute respiratory syndrome). The implicit assumption of partial equilibrium within both the health sector itself and--if a wider perspective is adopted--the economy as a whole would be violated by such emergencies. We propose an alternative, with the specific aim of accounting for the behavioural changes and capacity problems that are expected to occur when such an outbreak strikes. Copyright (c) 2008 John Wiley & Sons, Ltd.
Nonlinear analysis of shear deformable beam-columns partially ...
African Journals Online (AJOL)
In this paper, a boundary element method is developed for the nonlinear analysis of shear deformable beam-columns of arbitrary doubly symmetric simply or multiply connected constant cross section, partially supported on tensionless Winkler foundation, undergoing moderate large deflections under general boundary ...
SYSTEMATIZATION AND ANALYSIS OF PARTIALLY AND FULLY HOMOMORPHIC CRYPTOSYSTEM
Directory of Open Access Journals (Sweden)
A. V. Epishkina
2016-12-01
This article provides an overview of known partially and fully homomorphic cryptosystems, such as RSA, ElGamal, Paillier, and Gentry and Halevi. The homomorphic properties of the considered cryptosystems are justified. A comparative analysis of the homomorphic encryption algorithms has been performed.
Function spaces and partial differential equations volume 2 : contemporary analysis
Taheri, Ali
2015-01-01
This is a book written primarily for graduate students and early researchers in the fields of Analysis and Partial Differential Equations (PDEs). Coverage of the material is essentially self-contained, extensive and novel with great attention to details and rigour.
SLAC three-body partial wave analysis system
International Nuclear Information System (INIS)
Aston, D.; Lasinski, T.A.; Sinervo, P.K.
1985-10-01
We present a heuristic description of the SLAC-LBL three-meson partial wave model, and describe how we have implemented it at SLAC. The discussion details the assumptions of the model and the analysis, and emphasizes the methods we have used to prepare and fit the data. 28 refs., 12 figs., 1 tab
Towards a simple method of analysis for partially prestressed concrete
Bruggeling, A.S.G.
1983-01-01
This report examines the question whether, and to what extent, it is possible to leave the time-dependent effects out of account in the analysis of partially prestressed concrete, at least in so far as they relate to the redistribution of the stresses over the cross-section.
Principal components analysis in clinical studies.
Zhang, Zhongheng; Castelló, Adela
2017-09-01
In multivariate analysis, independent variables are usually correlated to each other, which can introduce multicollinearity in the regression models. One approach to solve this problem is to apply principal components analysis (PCA) over these variables. This method uses orthogonal transformation to represent sets of potentially correlated variables with principal components (PC) that are linearly uncorrelated. PCs are ordered so that the first PC has the largest possible variance and only some components are selected to represent the correlated variables. As a result, the dimension of the variable space is reduced. This tutorial illustrates how to perform PCA in the R environment; the example is a simulated dataset in which two PCs are responsible for the majority of the variance in the data. Furthermore, the visualization of PCA is highlighted.
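The tutorial's setting can be sketched outside R as well; here is a hedged Python analogue with a simulated dataset in which two PCs carry most of the variance (the data and all names are invented for illustration):

```python
import numpy as np

# Simulated dataset in the spirit of the tutorial: two latent factors drive
# six mutually correlated observed variables.
rng = np.random.default_rng(3)
n = 300
f1, f2 = rng.normal(size=(2, n))
X = np.column_stack([f1, f1, f1, f2, f2, f2]) + 0.2 * rng.normal(size=(n, 6))

# PCA via the eigenvalues of the correlation matrix (standardized variables)
eigval = np.linalg.eigvalsh(np.corrcoef(X.T))[::-1]   # descending order
explained = eigval / eigval.sum()
print(np.round(explained, 2))   # the first two PCs dominate the variance
```

A regression could then safely use the two leading component scores in place of the six collinear originals.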
Experimental and principal component analysis of waste ...
African Journals Online (AJOL)
The present study is aimed at determining through principal component analysis the most important variables affecting bacterial degradation in ponds. Data were collected from literature. In addition, samples were also collected from the waste stabilization ponds at the University of Nigeria, Nsukka and analyzed to ...
Principal Component Analysis as an Efficient Performance ...
African Journals Online (AJOL)
This paper uses the principal component analysis (PCA) to examine the possibility of using few explanatory variables (X's) to explain the variation in Y. It applied PCA to assess the performance of students in Abia State Polytechnic, Aba, Nigeria. This was done by estimating the coefficients of eight explanatory variables in a ...
Independent component analysis for understanding multimedia content
DEFF Research Database (Denmark)
Kolenda, Thomas; Hansen, Lars Kai; Larsen, Jan
2002-01-01
Independent component analysis of combined text and image data from Web pages has potential for search and retrieval applications by providing more meaningful and context dependent content. It is demonstrated that ICA of combined text and image features has a synergistic effect, i.e., the retrieval...
Probabilistic Principal Component Analysis for Metabolomic Data.
LENUS (Irish Health Repository)
Nyamundanda, Gift
2010-11-23
Background: Data from metabolomic studies are typically complex and high-dimensional. Principal component analysis (PCA) is currently the most widely used statistical technique for analyzing metabolomic data. However, PCA is limited by the fact that it is not based on a statistical model. Results: Here, probabilistic principal component analysis (PPCA), which addresses some of the limitations of PCA, is reviewed and extended. A novel extension of PPCA, called probabilistic principal component and covariates analysis (PPCCA), is introduced which provides a flexible approach to jointly model metabolomic data and additional covariate information. The use of a mixture of PPCA models for discovering the number of inherent groups in metabolomic data is demonstrated. The jackknife technique is employed to construct confidence intervals for estimated model parameters throughout. The optimal number of principal components is determined through the use of the Bayesian Information Criterion model selection tool, which is modified to address the high dimensionality of the data. Conclusions: The methods presented are illustrated through an application to metabolomic data sets. Jointly modeling metabolomic data and covariates was successfully achieved and has the potential to provide deeper insight into the underlying data structure. Examination of confidence intervals for the model parameters, such as loadings, allows for principled and clear interpretation of the underlying data structure. A software package called MetabolAnalyze, freely available through the R statistical software, has been developed to facilitate implementation of the presented methods in the metabolomics field.
PCA: Principal Component Analysis for spectra modeling
Hurley, Peter D.; Oliver, Seb; Farrah, Duncan; Wang, Lingyu; Efstathiou, Andreas
2012-07-01
The mid-infrared spectra of ultraluminous infrared galaxies (ULIRGs) contain a variety of spectral features that can be used as diagnostics to characterize the spectra. However, such diagnostics are biased by our prior prejudices on the origin of the features. Moreover, by using only part of the spectrum they do not utilize the full information content of the spectra. Blind statistical techniques such as principal component analysis (PCA) consider the whole spectrum, find correlated features and separate them out into distinct components. This code, written in IDL, classifies principal components of IRS spectra to define a new classification scheme using 5D Gaussian mixture modelling. The five PCs and average spectra for the four classifications used to classify objects are made available with the code.
BUSINESS PROCESS MANAGEMENT SYSTEMS TECHNOLOGY COMPONENTS ANALYSIS
Directory of Open Access Journals (Sweden)
Andrea Giovanni Spelta
2007-05-01
Full Text Available The information technology that supports the implementation of the business process management approach is called a Business Process Management System (BPMS). The main components of the BPMS solution framework are the process definition repository, process instances repository, transaction manager, connectors framework, process engine and middleware. In this paper we define and characterize the role and importance of the components of the BPMS framework. The research method adopted was the case study, through the analysis of the implementation of a BPMS solution in an insurance company called Chubb do Brasil. In the case study, the process "Manage Coinsured Events" is described and characterized, as well as the components of the BPMS solution adopted and implemented by Chubb do Brasil for managing this process.
Use of correspondence analysis partial least squares on linear and unimodal data
DEFF Research Database (Denmark)
Frisvad, Jens Christian; Norsker, Merete
1996-01-01
Correspondence analysis partial least squares (CA-PLS) has been compared with PLS concerning classification and prediction of unimodal growth temperature data and an example using infrared (IR) spectroscopy for predicting amounts of chemicals in mixtures. CA-PLS was very effective for ordinating...... that could only be seen in two-dimensional plots, and also less effective predictions. PLS was the best method in the linear case treated, with fewer components and a better prediction than CA-PLS....
Multi-spectrometer calibration transfer based on independent component analysis.
Liu, Yan; Xu, Hao; Xia, Zhenzhen; Gong, Zhiyong
2018-02-26
Calibration transfer is indispensable for practical applications of near infrared (NIR) spectroscopy due to the need for precise and consistent measurements across different spectrometers. In this work, a method for multi-spectrometer calibration transfer is described based on independent component analysis (ICA). A spectral matrix is first obtained by aligning the spectra measured on different spectrometers. Then, by using independent component analysis, the aligned spectral matrix is decomposed into the mixing matrix and the independent components of the different spectrometers. The differing measurements between spectrometers can then be standardized by correcting the coefficients within the independent components. Two NIR datasets of corn and edible oil samples, measured with three and four spectrometers, respectively, were used to test the reliability of this method. The results on both datasets show that spectra measured on different spectrometers can be transferred simultaneously, and that partial least squares (PLS) models built from measurements on one spectrometer correctly predict spectra transferred from another.
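The ICA decomposition step itself can be sketched with a minimal symmetric FastICA (tanh nonlinearity) in NumPy; this illustrates how a mixed-signal matrix is split into independent components, not the paper's transfer correction, and all names are illustrative:

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Symmetric FastICA with a tanh nonlinearity.

    X: (n_samples, n_signals) matrix of mixed observations.
    Returns the estimated independent components, one per column.
    """
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    # Whiten via an eigendecomposition of the covariance matrix.
    d, E = np.linalg.eigh(np.cov(Xc, rowvar=False))
    Z = Xc @ E @ np.diag(1.0 / np.sqrt(d)) @ E.T
    m = Z.shape[1]
    W = rng.normal(size=(m, m))
    for _ in range(n_iter):
        Y = Z @ W.T
        g, gp = np.tanh(Y), 1.0 - np.tanh(Y) ** 2
        W = g.T @ Z / len(Z) - np.diag(gp.mean(axis=0)) @ W
        U, _, Vt = np.linalg.svd(W)   # symmetric decorrelation: W <- (W W^T)^(-1/2) W
        W = U @ Vt
    return Z @ W.T
```

Running it on two synthetic mixed signals (e.g. a sine and a square wave mixed by a 2x2 matrix) recovers each source up to sign and scale.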
ANOVA-principal component analysis and ANOVA-simultaneous component analysis: a comparison.
Zwanenburg, G.; Hoefsloot, H.C.J.; Westerhuis, J.A.; Jansen, J.J.; Smilde, A.K.
2011-01-01
ANOVA-simultaneous component analysis (ASCA) is a recently developed tool to analyze multivariate data. In this paper, we enhance the explorative capability of ASCA by introducing a projection of the observations on the principal component subspace to visualize the variation among the measurements.
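A one-factor toy version of the ASCA idea with the projection enhancement described above might look like the following (a bare sketch, not the authors' implementation; names are illustrative):

```python
import numpy as np

def asca_one_factor(X, groups):
    """One-factor ASCA sketch: PCA of the ANOVA effect matrix, then
    projection of the individual (centered) observations onto the
    effect principal-component subspace."""
    Xc = X - X.mean(axis=0)
    E = np.empty_like(Xc)
    for g in np.unique(groups):
        E[groups == g] = Xc[groups == g].mean(axis=0)  # effect matrix: rows replaced by group means
    _, _, Vt = np.linalg.svd(E, full_matrices=False)
    return Xc @ Vt.T, Vt   # observation scores on the effect PCs, and the loadings
```

Projecting the raw (centered) observations, rather than only the group means, is what lets the within-group variation be visualized around the effect subspace.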
International Nuclear Information System (INIS)
Milde, T; Schwab, K; Walther, M; Eiselt, M; Witte, H; Schelenz, C; Voss, A
2011-01-01
Time-variant partial directed coherence (tvPDC) is used for the first time in a multivariate analysis of heart rate variability (HRV), respiratory movements (RMs) and (systolic) arterial blood pressure. It is shown that respiration-related HRV components which also occur at other frequencies besides the RM frequency (= respiratory sinus arrhythmia, RSA) can be identified. These additional components are known to be an effect of the 'half-the-mean-heart-rate-dilemma' ('cardiac aliasing', CA). These CA components may contaminate the entire frequency range of HRV and can lead to misinterpretation of the RSA analysis. TvPDC analysis of simulated and clinical data (full-term neonates and sedated patients) reveals these contamination effects and, in addition, the respiration-related CA components can be separated from the RSA component and the Traube–Hering–Mayer wave. It can be concluded that tvPDC can be beneficially applied to avoid misinterpretations in HRV analyses as well as to quantify partial correlative interaction properties between RM and RSA.
Improvement of Binary Analysis Components in Automated Malware Analysis Framework
2017-02-21
AFRL-AFOSR-JP-TR-2017-0018, Improvement of Binary Analysis Components in Automated Malware Analysis Framework. Final report by Keiji Takeda, Keio University; dates covered: 26 May 2015 to 25 Nov 2016. The framework is designed to analyze malicious software (malware) with minimum human interaction; the system autonomously analyzes malware samples by analyzing the malware binary program.
Fault tree analysis with multistate components
International Nuclear Information System (INIS)
Caldarola, L.
1979-02-01
A general analytical theory has been developed which allows one to calculate the occurrence probability of the top event of a fault tree with multistate (more than two states) components. It is shown that, in order to correctly describe a system with multistate components, a special type of Boolean algebra is required. This is called 'Boolean algebra with restrictions on variables' and its basic rules are the same as those of the traditional Boolean algebra, with some additional restrictions on the variables. These restrictions are extensively discussed in the paper. Important features of the method are the identification of the complete base and of the smallest irredundant base of a Boolean function which does not necessarily need to be coherent. It is shown that the identification of the complete base of a Boolean function requires the application of some algorithms which are not used in today's computer programmes for fault tree analysis. The problem of statistical dependence among primary components is discussed. The paper includes a small demonstrative example to illustrate the method. The example also includes statistically dependent components. (orig.) [de
Multilevel sparse functional principal component analysis.
Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S
2014-01-29
We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations the proposed method is able to discover dominating modes of variations and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.
A Genealogical Interpretation of Principal Components Analysis
McVean, Gil
2009-01-01
Principal components analysis, PCA, is a statistical method commonly used in population genetics to identify structure in the distribution of genetic variation across geographical location and ethnic background. However, while the method is often used to inform about historical demographic processes, little is known about the relationship between fundamental demographic parameters and the projection of samples onto the primary axes. Here I show that for SNP data the projection of samples onto the principal components can be obtained directly from considering the average coalescent times between pairs of haploid genomes. The result provides a framework for interpreting PCA projections in terms of underlying processes, including migration, geographical isolation, and admixture. I also demonstrate a link between PCA and Wright's FST and show that SNP ascertainment has a simple and largely predictable effect on the projection of samples. Using examples from human genetics, I discuss the application of these results to empirical data and the implications for inference. PMID:19834557
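The stated link can be made concrete: double-centering a matrix of squared average pairwise distances, here standing in for average pairwise coalescent times, yields a Gram matrix whose scaled eigenvectors are exactly the principal-component projection (classical multidimensional scaling). A sketch:

```python
import numpy as np

def projection_from_pairwise(D2):
    """Recover principal-component coordinates from a matrix of squared
    pairwise distances via double centering (classical MDS)."""
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ D2 @ J                      # centered Gram matrix
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1]
    w, V = w[order], V[:, order]
    return V * np.sqrt(np.maximum(w, 0.0))     # one principal component per column
```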
Radar fall detection using principal component analysis
Jokanovic, Branka; Amin, Moeness; Ahmad, Fauzia; Boashash, Boualem
2016-05-01
Falls are a major cause of fatal and nonfatal injuries in people aged 65 years and older. Radar has the potential to become one of the leading technologies for fall detection, thereby enabling the elderly to live independently. Existing techniques for fall detection using radar are based on manual feature extraction and require significant parameter tuning in order to provide successful detections. In this paper, we employ principal component analysis for fall detection, wherein eigen images of observed motions are employed for classification. Using real data, we demonstrate that the PCA based technique provides performance improvement over the conventional feature extraction methods.
Independent Component Analysis in Multimedia Modeling
DEFF Research Database (Denmark)
Larsen, Jan
2003-01-01
Modeling of multimedia and multimodal data becomes increasingly important with the digitalization of the world. The objective of this paper is to demonstrate the potential of independent component analysis and blind source separation methods for modeling and understanding of multimedia data, which largely refers to text, images/video, audio and combinations of such data. We review a number of applications within single and combined media with the hope that this might provide inspiration for further research in this area. Finally, we provide a detailed presentation of our own recent work on modeling......
Analysis of spiral components in 16 galaxies
International Nuclear Information System (INIS)
Considere, S.; Athanassoula, E.
1988-01-01
A Fourier analysis of the intensity distributions in the planes of 16 spiral galaxies of morphological types from 1 to 7 is performed. The galaxies processed are NGC 300, 598, 628, 2403, 2841, 3031, 3198, 3344, 5033, 5055, 5194, 5247, 6946, 7096, 7217, and 7331. The method, mathematically based upon a decomposition of a distribution into a superposition of individual logarithmic spiral components, is first used to determine for each galaxy the position angle PA and the inclination ω of the galaxy plane with respect to the sky plane. Our results, in good agreement with those obtained from the usual methods in the literature, are discussed. The decomposition of the deprojected galaxies into individual spiral components reveals that the two-armed component is everywhere dominant. Our pitch angles are then compared to the previously published ones and their quality is checked by drawing each individual logarithmic spiral on the actual deprojected galaxy images. Finally, the surface intensities for the angular periodicities of interest are calculated. A selection of the most important ones is used to construct a composite image that represents well the main spiral features observed in the deprojected galaxies.
Structural analysis of NPP components and structures
International Nuclear Information System (INIS)
Saarenheimo, A.; Keinaenen, H.; Talja, H.
1998-01-01
Capabilities for effective structural integrity assessment have been created and extended for several important cases. The applications presented in this paper deal with pressurised thermal shock (PTS) loading and severe dynamic loading of containment, reinforced concrete structures and piping components. Hydrogen combustion within the containment is considered in some severe accident scenarios. Can a steel containment withstand the postulated hydrogen detonation loads and still maintain its integrity? This is the topic of Chapter 2. Chapter 3 deals with a reinforced concrete floor subjected to jet impingement caused by a postulated rupture of a nearby high-energy pipe, and Chapter 4 deals with the dynamic loading resistance of pipe lines under postulated pressure transients due to water hammer. The reliability of the structural integrity analysis methods and capabilities developed for application in NPP component assessment must be evaluated and verified. The resources available within the RATU2 programme alone do not allow the large-scale experiments needed for that purpose. Thus, verification of the PTS analysis capabilities has been conducted through participation in international co-operative programmes. Participation in the European Network for Evaluating Steel Components (NESC) is the topic of a parallel paper in this symposium. The results obtained in two other international programmes are summarised in Chapters 5 and 6 of this paper, where PTS tests with a model vessel and a benchmark assessment of RPV nozzle integrity are described. (author)
Reformulating Component Identification as Document Analysis Problem
Gross, H.G.; Lormans, M.; Zhou, J.
2007-01-01
One of the first steps of component procurement is the identification of required component features in large repositories of existing components. On the highest level of abstraction, component requirements as well as component descriptions are usually written in natural language. Therefore, we can
Meta-analysis of adjunctive levetiracetam in refractory partial seizures
Directory of Open Access Journals (Sweden)
ZHANG Ying
2012-10-01
Full Text Available Objective To evaluate the effects and tolerability of adjunctive levetiracetam (LEV) in refractory partial seizures. Methods Relevant research articles about randomized controlled trials of adjunctive LEV in refractory partial seizures from January 1998 to December 2010 were retrieved from the Cochrane Library, MEDLINE, EMbase, Social Sciences Citation Index (SSCI), VIP, Chinese National Knowledge Infrastructure (CNKI) database, and China Biology Medicine (CBM). Two reviewers independently evaluated the quality of the included articles and abstracted the data. A meta-analysis was conducted by using RevMan 5.0 software. Results According to the enrollment criteria, eleven prospective, randomized controlled clinical trials, with a total of 1192 patients in the LEV groups and 789 in the placebo groups, were finally selected. The reduction in three endpoints (a 50% or greater reduction of partial seizure frequency per week, a 75% or greater reduction of partial seizure frequency per week, and seizure freedom) was significantly greater in the LEV groups than in the placebo groups. There was no significant difference between the LEV and placebo groups in the withdrawal rate (1000 mg/d: OR = 1.180, 95% CI: 0.690-2.010, P = 0.540; 2000 mg/d: OR = 1.530, 95% CI: 0.770-3.030, P = 0.230; 3000 mg/d: OR = 1.000, 95% CI: 0.620-1.600, P = 1.000). The following adverse events were associated with LEV: somnolence (OR = 1.720, 95% CI: 1.280-2.310, P = 0.000), dizziness (OR = 1.490, 95% CI: 1.000-2.220, P = 0.050), asthenia (OR = 1.670, 95% CI: 1.140-2.240, P = 0.008), nasopharyngitis (OR = 1.120, 95% CI: 0.710-1.760, P = 0.630), and psychiatric and behavioral abnormalities (OR = 2.120, 95% CI: 1.370-3.280, P = 0.000). Conclusion LEV is effective and well tolerated when added to existing therapy in patients with refractory partial seizures compared with control drugs. Further studies are needed to identify the effects of LEV monotherapy in partial seizures.
Nonlinear principal component analysis and its applications
Mori, Yuichi; Makino, Naomichi
2016-01-01
This book expounds the principle and related applications of nonlinear principal component analysis (PCA), which is a useful method for analyzing mixed measurement levels data. In the part dealing with the principle, after a brief introduction of ordinary PCA, a PCA for categorical data (nominal and ordinal) is introduced as nonlinear PCA, in which an optimal scaling technique is used to quantify the categorical variables. Alternating least squares (ALS) is the main algorithm in the method. Multiple correspondence analysis (MCA), a special case of nonlinear PCA, is also introduced. All formulations in these methods are integrated in the same manner as matrix operations. Because any measurement levels data can be treated consistently as numerical data and ALS is a very powerful tool for estimation, the methods can be utilized in a variety of fields such as biometrics, econometrics, psychometrics, and sociology. In the applications part of the book, four applications are introduced: variable selection for mixed...
Principal Component Analysis In Radar Polarimetry
Directory of Open Access Journals (Sweden)
A. Danklmayer
2005-01-01
Full Text Available Second order moments of multivariate (often Gaussian) joint probability density functions can be described by the covariance or normalised correlation matrices, or by the Kennaugh matrix (Kronecker matrix). In radar polarimetry the application of the covariance matrix is known as target decomposition theory, which is a special application of the extremely versatile Principal Component Analysis (PCA). The basic idea of PCA is to convert a data set consisting of correlated random variables into a new set of uncorrelated variables and to order the new variables according to the value of their variances. It is important to stress that uncorrelatedness does not necessarily mean independence, which is used in the much stronger concept of Independent Component Analysis (ICA). Both concepts agree for multivariate Gaussian distribution functions, representing the most random and least structured distribution. In this contribution, we propose a new approach to applying the concept of PCA to radar polarimetry. New uncorrelated random variables are introduced by means of linear transformations with well determined loading coefficients. This, in turn, allows the decomposition of the original random backscattering target variables into three point targets with new random uncorrelated variables whose variances agree with the eigenvalues of the covariance matrix. This allows a new interpretation of existing decomposition theorems.
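The decomposition described, splitting a Hermitian coherency (covariance) matrix into three rank-one "point target" contributions weighted by its eigenvalues, can be sketched as follows (illustrative, not tied to any particular polarimetric convention):

```python
import numpy as np

def target_decomposition(T):
    """Split a Hermitian coherency matrix into rank-one point-target
    components weighted by its eigenvalues (largest first)."""
    w, V = np.linalg.eigh(T)
    order = np.argsort(w)[::-1]
    w, V = w[order], V[:, order]
    parts = [w[i] * np.outer(V[:, i], V[:, i].conj()) for i in range(len(w))]
    return w, parts
```

By construction the rank-one parts sum back to T, and their weights are the variances of the new uncorrelated variables mentioned above.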
Seismic Response Analysis of Continuous Multispan Bridges with Partial Isolation
Directory of Open Access Journals (Sweden)
E. Tubaldi
2015-01-01
Full Text Available Partially isolated bridges are a particular class of bridges in which isolation bearings are placed only between the piers top and the deck whereas seismic stoppers restrain the transverse motion of the deck at the abutments. This paper proposes an analytical formulation for the seismic analysis of these bridges, modelled as beams with intermediate viscoelastic restraints whose properties describe the pier-isolator behaviour. Different techniques are developed for solving the seismic problem. The first technique employs the complex mode superposition method and provides an exact benchmark solution to the problem at hand. The two other simplified techniques are based on an approximation of the displacement field and are useful for preliminary assessment and design purposes. A realistic bridge is considered as case study and its seismic response under a set of ground motion records is analyzed. First, the complex mode superposition method is applied to study the characteristic features of the dynamic and seismic response of the system. A parametric analysis is carried out to evaluate the influence of support stiffness and damping on the seismic performance. Then, a comparison is made between the exact solution and the approximate solutions in order to evaluate the accuracy and suitability of the simplified analysis techniques for evaluating the seismic response of partially isolated bridges.
Dependence of partial molecules surface area on the third component in lyotropic liquid crystals
International Nuclear Information System (INIS)
Badalyan, H.G.; Ghazaryan, Kh.M.; Yayloyan, S.M.
2015-01-01
The free surface per amphiphilic molecule head of a lyotropic liquid crystal has been investigated by the X-ray diffraction method, at small and large angles, in the presence of a third component. The pentadecylsulphonate-water system in the presence of cholesterol, as well as the lecithin-water system in the presence of decanol, were investigated. It is shown that the above-mentioned free surface decreases as the cholesterol concentration increases, while it increases with increasing water concentration, although more slowly than in the two-component system. The same is observed for the lecithin-water-decanol system.
Kou, Jisheng; Sun, Shuyu
2017-01-01
A general diffuse interface model with a realistic equation of state (e.g. Peng-Robinson equation of state) is proposed to describe the multi-component two-phase fluid flow based on the principles of the NVT-based framework which is an attractive
Component fragilities - data collection, analysis and interpretation
International Nuclear Information System (INIS)
Bandyopadhyay, K.K.; Hofmayer, C.H.
1986-01-01
As part of the component fragility research program sponsored by the US Nuclear Regulatory Commission, BNL is involved in establishing seismic fragility levels for various nuclear power plant equipment with emphasis on electrical equipment, by identifying, collecting and analyzing existing test data from various sources. BNL has reviewed approximately seventy test reports to collect fragility or high level test data for switchgears, motor control centers and similar electrical cabinets, valve actuators and numerous electrical and control devices of various manufacturers and models. Through a cooperative agreement, BNL has also obtained test data from EPRI/ANCO. An analysis of the collected data reveals that fragility levels can best be described by a group of curves corresponding to various failure modes. The lower bound curve indicates the initiation of malfunctioning or structural damage, whereas the upper bound curve corresponds to overall failure of the equipment based on known failure modes occurring separately or interactively. For some components, the upper and lower bound fragility levels are observed to vary appreciably depending upon the manufacturers and models. An extensive amount of additional fragility or high level test data exists. If completely collected and properly analyzed, the entire data bank is expected to greatly reduce the need for additional testing to establish fragility levels for most equipment
Component fragilities. Data collection, analysis and interpretation
International Nuclear Information System (INIS)
Bandyopadhyay, K.K.; Hofmayer, C.H.
1985-01-01
As part of the component fragility research program sponsored by the US NRC, BNL is involved in establishing seismic fragility levels for various nuclear power plant equipment, with emphasis on electrical equipment. To date, BNL has reviewed approximately seventy test reports to collect fragility or high level test data for switchgears, motor control centers and similar electrical cabinets, valve actuators and numerous electrical and control devices, e.g., switches, transmitters, potentiometers, indicators, relays, etc., of various manufacturers and models. BNL has also obtained test data from EPRI/ANCO. Analysis of the collected data reveals that fragility levels can best be described by a group of curves corresponding to various failure modes. The lower bound curve indicates the initiation of malfunctioning or structural damage, whereas the upper bound curve corresponds to overall failure of the equipment based on known failure modes occurring separately or interactively. For some components, the upper and lower bound fragility levels are observed to vary appreciably depending upon the manufacturers and models. For some devices, testing even at the shake table vibration limit does not exhibit any failure. Failure of a relay is observed to be a frequent cause of failure of an electrical panel or a system. An extensive amount of additional fragility or high level test data exists.
Integrating Data Transformation in Principal Components Analysis
Maadooliat, Mehdi
2015-01-02
Principal component analysis (PCA) is a popular dimension reduction method to reduce the complexity and obtain the informative aspects of high-dimensional datasets. When the data distribution is skewed, data transformation is commonly used prior to applying PCA. Such transformation is usually obtained from previous studies, prior knowledge, or trial-and-error. In this work, we develop a model-based method that integrates data transformation in PCA and finds an appropriate data transformation using the maximum profile likelihood. Extensions of the method to handle functional data and missing values are also developed. Several numerical algorithms are provided for efficient computation. The proposed method is illustrated using simulated and real-world data examples.
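A toy version of the idea: choose a per-variable Box-Cox transform by maximizing the Gaussian profile log-likelihood on a grid, then run PCA on the transformed, standardized data. This grid search is a simple stand-in for the paper's integrated maximum-profile-likelihood method, and all names are illustrative:

```python
import numpy as np

def boxcox(x, lam):
    return np.log(x) if abs(lam) < 1e-12 else (x ** lam - 1.0) / lam

def boxcox_loglik(x, lam):
    """Profile log-likelihood of Box-Cox(x; lam) under a normal model."""
    y = boxcox(x, lam)
    return -0.5 * len(x) * np.log(y.var()) + (lam - 1.0) * np.log(x).sum()

def transform_then_pca(X, lams=np.linspace(-1.0, 2.0, 61)):
    """Pick the best lambda per column, transform, standardize, run PCA."""
    Y = np.column_stack([
        boxcox(X[:, j], max(lams, key=lambda l: boxcox_loglik(X[:, j], l)))
        for j in range(X.shape[1])
    ])
    Yc = (Y - Y.mean(axis=0)) / Y.std(axis=0)
    _, _, Vt = np.linalg.svd(Yc, full_matrices=False)
    return Yc @ Vt.T, Vt   # principal-component scores and loadings
```

On skewed positive data (e.g. log-normal variables) the selected lambdas land near zero, recovering the familiar log-then-PCA recipe.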
Multi-scale diffuse interface modeling of multi-component two-phase flow with partial miscibility
Kou, Jisheng; Sun, Shuyu
2016-08-01
In this paper, we introduce a diffuse interface model to simulate multi-component two-phase flow with partial miscibility based on a realistic equation of state (e.g. Peng-Robinson equation of state). Because of partial miscibility, thermodynamic relations are used to model not only interfacial properties but also bulk properties, including density, composition, pressure, and realistic viscosity. As far as we know, this effort is the first time to use diffuse interface modeling based on equation of state for modeling of multi-component two-phase flow with partial miscibility. In numerical simulation, the key issue is to resolve the high contrast of scales from the microscopic interface composition to macroscale bulk fluid motion since the interface has a nanoscale thickness only. To efficiently solve this challenging problem, we develop a multi-scale simulation method. At the microscopic scale, we deduce a reduced interfacial equation under reasonable assumptions, and then we propose a formulation of capillary pressure, which is consistent with macroscale flow equations. Moreover, we show that Young-Laplace equation is an approximation of this capillarity formulation, and this formulation is also consistent with the concept of Tolman length, which is a correction of Young-Laplace equation. At the macroscopical scale, the interfaces are treated as discontinuous surfaces separating two phases of fluids. Our approach differs from conventional sharp-interface two-phase flow model in that we use the capillary pressure directly instead of a combination of surface tension and Young-Laplace equation because capillarity can be calculated from our proposed capillarity formulation. A compatible condition is also derived for the pressure in flow equations. Furthermore, based on the proposed capillarity formulation, we design an efficient numerical method for directly computing the capillary pressure between two fluids composed of multiple components. Finally, numerical tests
Economical analysis of the second partial reload for Angra 1 with partial low-leakage
International Nuclear Information System (INIS)
Mascarenhas, H.A.; Teixeira, M.C.C.; Dias, A.M.
1990-01-01
Preliminary results for the Angra 1 second reload design with partial low-leakage were assessed with NUCOST 1.0, a code for nuclear power cost calculation. In the proposed scheme, some partially burned fuel assemblies (FAs) are located at the core boundary, while new FAs occupy more internal positions. The nuclear design - utilizing the code system SAV (from the Siemens/KWU Group, F.R. Germany) - has been performed in detail for the 3rd cycle, while a simpler approach has been utilized for subsequent reloads. Results of NUCOST 1.0 show that the partial low-leakage reload in the 3rd cycle of Angra 1 yields fuel costs 1% lower than the plant's actual reload scheme, which corresponds to savings of about US$190,000. When operation and maintenance and capital costs are also considered, savings on the order of US$2.6 million are obtained. (author) [pt
Vibration analysis of partially cracked plate submerged in fluid
Soni, Shashank; Jain, N. K.; Joshi, P. V.
2018-01-01
The present work proposes an analytical model for vibration analysis of partially cracked rectangular plates coupled with a fluid medium. The governing equation of motion for the isotropic plate, based on classical plate theory, is modified to accommodate a part-through continuous line crack according to the simplified line spring model. The influence of the surrounding fluid medium is incorporated in the governing equation in the form of inertia effects based on the velocity potential function and Bernoulli's equation. Both partially and totally submerged plate configurations are considered. The governing equation also considers the in-plane stretching due to lateral deflection in the form of in-plane forces, which introduces geometric non-linearity into the system. The fundamental frequencies are evaluated by expressing the lateral deflection in terms of modal functions. The assessment of the present results is carried out for an intact submerged plate since, to the best of the authors' knowledge, the literature lacks analytical results for submerged cracked plates. New results for fundamental frequencies are presented as affected by crack length, fluid level, fluid density and immersed depth of the plate. By employing the method of multiple scales, the frequency response and peak amplitude of the cracked structure are analyzed. The non-linear frequency response curves show the phenomenon of bending hardening or softening and the effect of fluid dynamic pressure on the response of the cracked plate.
Poles of the Zagreb analysis partial-wave T matrices
Batinić, M.; Ceci, S.; Švarc, A.; Zauner, B.
2010-09-01
The Zagreb analysis partial-wave T matrices included in the Review of Particle Physics [by the Particle Data Group (PDG)] contain Breit-Wigner parameters only. As the advantages of pole over Breit-Wigner parameters in quantifying scattering matrix resonant states are becoming indisputable, we supplement the original solution with the pole parameters. Because of an already reported numeric error in the S11 analytic continuation [Batinić et al., Phys. Rev. C 57, 1004(E) (1997); arXiv:nucl-th/9703023], we declare the old BATINIC 95 solution, presently included by the PDG, invalid. Instead, we offer two new solutions: (A) corrected BATINIC 95 and (B) a new solution with an improved S11 πN elastic input. We endorse solution (B).
Group-wise Principal Component Analysis for Exploratory Data Analysis
Camacho, J.; Rodriquez-Gomez, Rafael A.; Saccenti, E.
2017-01-01
In this paper, we propose a new framework for matrix factorization based on Principal Component Analysis (PCA) where sparsity is imposed. The structure to impose sparsity is defined in terms of groups of correlated variables found in correlation matrices or maps. The framework is based on three new
Gene set analysis using variance component tests.
Huang, Yen-Tsung; Lin, Xihong
2013-06-28
Gene set analyses have become increasingly important in genomic research, as many complex diseases are driven jointly by alterations of numerous genes. Genes often coordinate together as a functional repertoire, e.g., a biological pathway/network, and are highly correlated. However, most existing gene set analysis methods do not fully account for the correlation among genes. Here we propose to exploit this important feature of a gene set to improve statistical power in gene set analyses. We propose to model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects that assumes a common distribution for the regression coefficients in the multivariate linear regression model, and calculate p-values using permutation and a scaled chi-square approximation. We show using simulations that type I error is protected under different choices of working covariance matrix and that power improves as the working covariance approaches the true covariance. The global test is a special case of TEGS in which correlation among genes in a gene set is ignored. Using both simulated data and a published diabetes dataset, we show that our test outperforms two commonly used approaches, the global test and gene set enrichment analysis (GSEA). In summary, we develop a gene set analysis method (TEGS) under the multivariate regression framework, which directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and the global test, in both simulations and a diabetes microarray dataset.
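The permutation component of such a test can be illustrated with a simplified sketch. Ignoring the working covariance (i.e., the global-test special case the abstract mentions), the gene set effect is scored by a sum of squared per-gene group-mean differences and calibrated by label permutation. The data, effect size, and statistic below are illustrative assumptions, not the authors' TEGS code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated gene set: 40 samples x 10 correlated genes, binary exposure status
n, p = 40, 10
group = np.repeat([0, 1], n // 2)
shared = rng.normal(size=(n, 1)) * 0.5            # latent factor -> gene correlation
expr = shared + rng.normal(size=(n, p))
expr[group == 1] += 0.8                           # true set-wide effect

def set_statistic(X, g):
    """Sum of squared per-gene group-mean differences (global-test-like score)."""
    diff = X[g == 1].mean(axis=0) - X[g == 0].mean(axis=0)
    return float(diff @ diff)

obs = set_statistic(expr, group)

# Permutation null: shuffle exposure labels, recompute the statistic
null = [set_statistic(expr, rng.permutation(group)) for _ in range(999)]
p_value = (1 + sum(s >= obs for s in null)) / (1 + len(null))
```

TEGS itself additionally weights the score through a working covariance matrix and offers a scaled chi-square approximation in place of permutation; this sketch shows only the shared permutation-calibration idea.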
Thermogravimetric analysis of combustible waste components
DEFF Research Database (Denmark)
Munther, Anette; Wu, Hao; Glarborg, Peter
In order to gain fundamental knowledge about the co-combustion of coal and waste derived fuels, the pyrolytic behaviors of coal, four typical waste components and their mixtures have been studied by a simultaneous thermal analyzer (STA). The investigated waste components were wood, paper, polypro...
Application of independent component analysis to H-1 MR spectroscopic imaging exams of brain tumours
Szabo de Edelenyi, F.; Simonetti, A.W.; Postma, G.; Huo, R.; Buydens, L.M.C.
2005-01-01
The low spatial resolution of clinical H-1 MRSI leads to partial volume effects. To overcome this problem, we applied independent component analysis (ICA) to a set of H-1 MRSI exams of brain tumours. With this method, tissue types that yield statistically independent spectra can be separated. Up to
Analysis of failed nuclear plant components
Diercks, D. R.
1993-12-01
Argonne National Laboratory has conducted analyses of failed components from nuclear power-generating stations since 1974. The considerations involved in working with and analyzing radioactive components are reviewed here, and the decontamination of these components is discussed. Analyses of four failed components from nuclear plants are then described to illustrate the kinds of failures seen in service. The failures discussed are (1) intergranular stress-corrosion cracking of core spray injection piping in a boiling water reactor, (2) failure of canopy seal welds in adapter tube assemblies in the control rod drive head of a pressurized water reactor, (3) thermal fatigue of a recirculation pump shaft in a boiling water reactor, and (4) failure of pump seal wear rings by nickel leaching in a boiling water reactor.
Analysis of failed nuclear plant components
International Nuclear Information System (INIS)
Diercks, D.R.
1993-01-01
Argonne National Laboratory has conducted analyses of failed components from nuclear power-generating stations since 1974. The considerations involved in working with and analyzing radioactive components are reviewed here, and the decontamination of these components is discussed. Analyses of four failed components from nuclear plants are then described to illustrate the kinds of failures seen in service. The failures discussed are (1) intergranular stress-corrosion cracking of core spray injection piping in a boiling water reactor, (2) failure of canopy seal welds in adapter tube assemblies in the control rod drive head of a pressurized water reactor, (3) thermal fatigue of a recirculation pump shaft in a boiling water reactor, and (4) failure of pump seal wear rings by nickel leaching in a boiling water reactor
Analysis of failed nuclear plant components
International Nuclear Information System (INIS)
Diercks, D.R.
1992-07-01
Argonne National Laboratory has conducted analyses of failed components from nuclear power generating stations since 1974. The considerations involved in working with and analyzing radioactive components are reviewed here, and the decontamination of these components is discussed. Analyses of four failed components from nuclear plants are then described to illustrate the kinds of failures seen in service. The failures discussed are (a) intergranular stress corrosion cracking of core spray injection piping in a boiling water reactor, (b) failure of canopy seal welds in adapter tube assemblies in the control rod drive head of a pressurized water reactor, (c) thermal fatigue of a recirculation pump shaft in a boiling water reactor, and (d) failure of pump seal wear rings by nickel leaching in a boiling water reactor
A radiographic analysis of implant component misfit.
LENUS (Irish Health Repository)
Sharkey, Seamus
2011-07-01
Radiographs are commonly used to assess the fit of implant components, but there is no clear agreement on the amount of misfit that can be detected by this method. This study investigated the effect of gap size and the relative angle at which a radiograph was taken on the detection of component misfit. Different types of implant connections (internal or external) and radiographic modalities (film or digital) were assessed.
Directory of Open Access Journals (Sweden)
Bhanupriya Dash
2017-09-01
Background: The replenishment policy for an entropic order quantity model with two-component demand and partial backlogging under inflation is an important subject in stock management. Methods: In this paper an inventory model is developed for non-instantaneous deteriorating items with a stock-dependent consumption rate and partial backlogging, additionally considering the effects of inflation and the time value of money on the replenishment policy, with zero lead time. A profit maximization model is formulated by considering the effects of partial backlogging under inflation with cash discounts. A numerical example is presented to evaluate the relative performance of the entropic order quantity and EOQ models separately, to demonstrate the developed model, and to illustrate the procedure. Lingo 13.0 software was used to derive the optimal order quantity and total cost of inventory. Finally, a sensitivity analysis of the optimal solution with respect to different parameters of the system was carried out. Results and conclusions: The obtained inventory model is very useful in retail business. This model can be extended to total backordering.
Lifetime analysis of fusion-reactor components
International Nuclear Information System (INIS)
Mattas, R.F.
1983-01-01
A one-dimensional computer code has been developed to examine the lifetime of first-wall and impurity-control components. The code incorporates the operating and design parameters, the material characteristics, and the appropriate failure criteria for the individual components. The major emphasis of the modelling effort has been to calculate the temperature-stress-strain-radiation effects history of a component so that the synergistic effects among sputtering erosion, swelling, creep, fatigue, and crack growth can be examined. The general forms of the property equations are the same for all materials in order to provide the greatest flexibility for materials selection in the code. The code is capable of determining the behavior of a plate, composed of either a single or dual material structure, that is either totally constrained or constrained from bending but not from expansion. The code has been utilized to analyze the first walls for FED/INTOR and DEMO
Mapping ash properties using principal components analysis
Pereira, Paulo; Brevik, Eric; Cerda, Artemi; Ubeda, Xavier; Novara, Agata; Francos, Marcos; Rodrigo-Comino, Jesus; Bogunovic, Igor; Khaledian, Yones
2017-04-01
In post-fire environments ash has important benefits for soils, such as protection and a source of nutrients, crucial for vegetation recuperation (Jordan et al., 2016; Pereira et al., 2015a; 2016a,b). The thickness and distribution of ash are fundamental aspects for soil protection (Cerdà and Doerr, 2008; Pereira et al., 2015b), and the severity at which it was produced is important for the type and amount of elements released into the soil solution (Bodi et al., 2014). Ash is a very mobile material, and where it is deposited is important. Until the first rainfalls it is very mobile; afterwards it binds to the soil surface and is harder to erode. Mapping ash properties in the immediate period after a fire is complex, since the ash is constantly moving (Pereira et al., 2015b). However, it is an important task, since from the amount and type of ash produced we can identify the degree of soil protection and the nutrients that will be dissolved. The objective of this work is to map ash properties (CaCO3, pH, and selected extractable elements) using a principal component analysis (PCA) in the immediate period after the fire. Four days after the fire we established a grid in a 9x27 m area and took ash samples every 3 meters, for a total of 40 sampling points (Pereira et al., 2017). The PCA identified 5 different factors. Factor 1 had high positive loadings in electrical conductivity, calcium, and magnesium and negative loadings in aluminum and iron, while Factor 2 had high positive loadings in total phosphorous and silica. Factor 3 showed high positive loadings in sodium and potassium, Factor 4 high negative loadings in CaCO3 and pH, and Factor 5 high loadings in sodium and potassium. The experimental variograms of the extracted factors showed that the Gaussian model was the most precise for modelling Factor 1, the linear model for Factor 2, and the wave hole effect model for Factors 3, 4 and 5. The maps produced confirm the patterns observed in the experimental variograms. Factor 1 and 2
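The loading-interpretation step described above can be sketched with synthetic data. The five "properties" below are stand-ins for the measured ash variables (not the study's data), and PCA on the correlation matrix is assumed:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ash grid: 40 sampling points x 5 properties. EC, Ca and Mg
# share one latent factor; Al opposes it; the last property is unrelated.
n = 40
f = rng.normal(size=n)
X = np.column_stack([
    f + 0.2 * rng.normal(size=n),    # electrical conductivity (EC)
    f + 0.2 * rng.normal(size=n),    # calcium
    f + 0.2 * rng.normal(size=n),    # magnesium
    -f + 0.2 * rng.normal(size=n),   # aluminum
    rng.normal(size=n),              # unrelated property
])

# PCA on the correlation matrix: eigendecompose, sort by decreasing eigenvalue
R = np.corrcoef(X, rowvar=False)
eigval, eigvec = np.linalg.eigh(R)
order = np.argsort(eigval)[::-1]
eigval, loadings = eigval[order], eigvec[:, order]

explained = eigval / eigval.sum()   # share of variance per factor
pc1 = loadings[:, 0]                # loadings of each property on factor 1
```

A factor with strong positive loadings on EC, Ca and Mg and a negative loading on Al, as in the abstract's Factor 1, shows up here as opposite-signed entries of `pc1`.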
Growth modeling of Cryptomeria japonica by partial trunk analysis
Directory of Open Access Journals (Sweden)
Vinícius Morais Coutinho
2017-06-01
This study aimed to evaluate the growth pattern of Cryptomeria japonica (L. f.) D. Don increment and to describe its probability distribution in stands established in the municipality of Rio Negro, Paraná State. Twenty trees were sampled in a 34-year-old stand with 3 m x 2 m spacing. Wood disks were taken from each tree at 1.3 m above the ground (DBH) to perform partial stem analysis. Diameter growth series without bark were used to generate the average cumulative growth curves for DBH (cm), mean annual increment (MAI) and current annual increment (CAI). From the increment data, the frequency distribution was evaluated by means of probability density functions (pdfs). The mean annual increment for DBH was 0.78 cm year-1, and the age of intersection of the CAI and MAI curves was between the 7th and 8th years. It was found that nearly 43% of the species' increments are concentrated below 0.5 cm. The results are useful for defining appropriate management strategies for the species in sites similar to the studied region, for example ages of silvicultural intervention, such as thinning.
Principal component analysis of psoriasis lesions images
DEFF Research Database (Denmark)
Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær
2003-01-01
A set of RGB images of psoriasis lesions is used. By visual examination of these images, there seem to be no common pattern that could be used to find and align the lesions within and between sessions. It is expected that the principal components of the original images could be useful during future...
Survival analysis with functional covariates for partial follow-up studies.
Fang, Hong-Bin; Wu, Tong Tong; Rapoport, Aaron P; Tan, Ming
2016-12-01
Predictive or prognostic analysis plays an increasingly important role in the era of personalized medicine to identify subsets of patients who may benefit most from a treatment. Although various time-dependent covariate models are available, such models require that covariates be followed over the whole follow-up period. This article studies a new class of functional survival models where the covariates are only monitored in a time interval that is shorter than the whole follow-up period. The paper is motivated by the analysis of a longitudinal study on advanced myeloma patients who received stem cell transplants and T cell infusions after the transplants. Absolute lymphocyte cell counts were collected serially during hospitalization. These patients are still followed up if they are alive after hospitalization, but their absolute lymphocyte cell counts cannot be measured after that. Another complication is that absolute lymphocyte cell counts are sparsely and irregularly measured. The conventional method using the Cox model with time-varying covariates is not applicable because of the different lengths of observation periods. Analysis based on each single observation obviously underutilizes available information and, more seriously, may yield misleading results. This so-called partial follow-up study design represents an increasingly common predictive modeling problem where we have serial multiple biomarkers up to a certain time point, which is shorter than the total length of follow-up. We therefore propose a solution for the partial follow-up design. The new method combines functional principal components analysis and survival analysis with selection of the functional covariates. It also has the advantage of handling sparse and irregularly measured longitudinal observations of covariates and measurement errors. Our analysis based on functional principal components reveals that it is the patterns of the trajectories of absolute lymphocyte cell counts, instead of
Buckling analysis of partially corroded steel plates with irregular ...
Indian Academy of Sciences (India)
Department of Ocean Engineering, AmirKabir University of Technology, ... could yield some acceptance criteria to assist surveyors or designers in repair and .... Finite element model of a partially both-sided corroded plate (shell elements).
EXAFS and principal component analysis : a new shell game
International Nuclear Information System (INIS)
Wasserman, S.
1998-01-01
The use of principal component (factor) analysis in the analysis of EXAFS spectra is described. The components derived from EXAFS spectra share mathematical properties with the original spectra. As a result, the abstract components can be analyzed using standard EXAFS methodology to yield the bond distances and other coordination parameters. The number of components that must be analyzed is usually less than the number of original spectra. The method is demonstrated using a series of spectra from aqueous solutions of uranyl ions
Numerical Analysis for Stochastic Partial Differential Delay Equations with Jumps
Li, Yan; Hu, Junhao
2013-01-01
We investigate the convergence rate of Euler-Maruyama method for a class of stochastic partial differential delay equations driven by both Brownian motion and Poisson point processes. We discretize in space by a Galerkin method and in time by using a stochastic exponential integrator. We generalize some results of Bao et al. (2011) and Jacob et al. (2009) in finite dimensions to a class of stochastic partial differential delay equations with jumps in infinite dimensions.
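The time stepping underlying such schemes can be shown in one dimension. The following is a scalar jump-diffusion stand-in for the paper's infinite-dimensional setting: plain Euler-Maruyama, with arbitrary coefficients, no delay term, and no Galerkin space discretization:

```python
import numpy as np

rng = np.random.default_rng(2)

# Euler-Maruyama for a scalar jump-diffusion:
#   dX = -a*X dt + sigma dW + gamma dN,   N a Poisson process with rate lam
a, sigma, gamma, lam = 1.0, 0.3, 0.5, 2.0
T, steps = 1.0, 1000
dt = T / steps

x = np.empty(steps + 1)
x[0] = 1.0
for k in range(steps):
    dW = rng.normal(scale=np.sqrt(dt))  # Brownian increment over one step
    dN = rng.poisson(lam * dt)          # number of Poisson jumps in this step
    x[k + 1] = x[k] - a * x[k] * dt + sigma * dW + gamma * dN
```

The paper's setting replaces the scalar drift with a semigroup in infinite dimensions and uses a stochastic exponential integrator rather than this explicit step; the sketch only illustrates the discretize-in-time idea whose convergence rate is being analyzed.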
Comparative videostroboscopic analysis after different external partial laryngectomies
Directory of Open Access Journals (Sweden)
Mumović Gordana M.
2014-01-01
Background/Aim. After external partial laryngectomies, videostroboscopy is very useful in the evaluation of postoperative phonatory mechanisms, showing in "slow motion" the vibrations of the remaining laryngeal structures. The aim of this paper was to compare the videostroboscopic characteristics of vibration and to establish the differences in the phonation mechanisms depending on the type of external partial laryngectomy performed. Methods. This prospective study was conducted during the period 2003-2009 at the Ear, Nose and Throat Clinic, Clinical Center of Vojvodina, Novi Sad, and included 99 patients with laryngeal carcinoma treated with an open surgical approach using different types of vertical and horizontal partial laryngectomy. Videostroboscopy was used to analyse vibrations of the remaining laryngeal structures. Results. The dominant vibrating structure after partial horizontal laryngectomy, chordectomy, frontolateral laryngectomy and three-quarter laryngectomy was the remaining vocal fold; after hemilaryngectomy it was the false vocal fold; and after subtotal and near-total laryngectomy it was the arytenoid. In patients with supracricoid hemilaryngopharyngectomy, many different structures were involved in the vibration. After most of the partial laryngectomies, vibrations can be found in the reconstructed part of the defect. In both horizontal and vertical partial laryngectomies movements of the larynx during phonation were mostly medial, while in cricohyoidoglottopexies they were anterior-posterior. Most of the operated patients (72.7%) had insufficient occlusion of the neoglottis during phonation. Conclusion. Videostroboscopy is a useful method for examining the phonation mechanisms of reconstructed laryngeal structures after partial laryngectomy as well as for planning postoperative voice therapy.
Directory of Open Access Journals (Sweden)
Radović Katarina
2010-01-01
Introduction. Various removable devices are used in the therapy of the unilateral free-end saddle. Unilateral dentures with precision connecting elements are not used frequently. In this paper the applicability and functionality of a unilateral free-end saddle denture without a major connector were taken into consideration. Objective. The aim was to analyze and compare a unilateral RPD (removable partial denture) and a classical RPD by calculating and analyzing stresses under different loads. Methods. 3D models of a unilateral removable partial denture and a classical removable partial denture with cast clasps were made using the computer program CATIA V5; the abutment teeth (canine and first premolar) with crowns and the abutment tissues were also modeled. The models were built in full scale. Stress analyses for both models were performed by applying a force of 300 N on the second premolar, a force of 500 N on the first molar and a force of 700 N on the second molar. Results. The finite element (FEM) analysis and calculation showed the complete behavior of the unilateral removable partial denture and its abutments (canine and first premolar), as well as the behavior of the classical RPD under identical loading conditions. Applied forces with extreme values caused high stress levels on both models and their abutments, but within physiological limits. Conclusion. Having analyzed stresses under the same conditions, we concluded that the unilateral RPD and the classical RPD have similar physiological values.
Incremental Tensor Principal Component Analysis for Handwritten Digit Recognition
Directory of Open Access Journals (Sweden)
Chang Liu
2014-01-01
To overcome the shortcomings of traditional dimensionality reduction algorithms, incremental tensor principal component analysis (ITPCA) based on an updated-SVD technique is proposed in this paper. This paper proves the relationships among PCA, 2DPCA, MPCA, and the graph embedding framework theoretically and derives the incremental learning procedures for adding a single sample and multiple samples in detail. Experiments on handwritten digit recognition have demonstrated that ITPCA achieves better recognition performance than vector-based principal component analysis (PCA), incremental principal component analysis (IPCA), and multilinear principal component analysis (MPCA) algorithms. At the same time, ITPCA also has lower time and space complexity.
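The updated-SVD idea can be sketched for the plain vector (PCA) case; the tensor machinery of ITPCA is omitted, the data are synthetic, and the update below keeps full rank rather than truncating as a practical incremental method would:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: 200 zero-mean samples in 64 dimensions, arriving in two batches
X = rng.normal(size=(200, 64)) @ rng.normal(size=(64, 64))
X -= X.mean(axis=0)
batch1, batch2 = X[:120], X[120:]

# Batch update via SVD: for row data, stacking diag(S1) @ V1t with the new
# batch preserves the Gram matrix X^T X, so the right singular vectors
# (principal axes) and singular values of the stack match those of the full data.
_, S1, V1t = np.linalg.svd(batch1, full_matrices=False)
M = np.vstack([np.diag(S1) @ V1t, batch2])
_, S_inc, Vt_inc = np.linalg.svd(M, full_matrices=False)

# Reference: SVD of the full data in one shot
_, S_full, Vt_full = np.linalg.svd(X, full_matrices=False)

err = np.max(np.abs(S_inc - S_full))   # should be at machine-precision level
```

Practical incremental PCA additionally truncates the retained rank and updates the running mean as samples arrive, trading the exactness above for bounded memory.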
Corriveau, H; Arsenault, A B; Dutil, E; Lepage, Y
1992-01-01
An evaluation based on the Bobath approach to treatment has previously been developed and partially validated. The purpose of the present study was to verify the content validity of this evaluation using the statistical approach known as principal components analysis. Thirty-eight hemiplegic subjects participated in the study. Scores on each of six parameters (sensorium, active movements, muscle tone, reflex activity, postural reactions, and pain) were analyzed on three occasions across a 2-month period. Each analysis produced three factors that together contained 70% of the variation in the data set. The first component mainly reflected variations in mobility, the second mainly variations in muscle tone, and the third mainly variations in sensorium and pain. The results of this exploratory analysis highlight the fact that some of the parameters are not only important but also interrelated. These results seem to partially support the conceptual framework substantiating the Bobath approach to treatment.
Manisera, M.; Kooij, A.J. van der; Dusseldorp, E.
2010-01-01
The component structure of 14 Likert-type items measuring different aspects of job satisfaction was investigated using nonlinear Principal Components Analysis (NLPCA). NLPCA allows for analyzing these items at an ordinal or interval level. The participants were 2066 workers from five types of social
Columbia River Component Data Gap Analysis
Energy Technology Data Exchange (ETDEWEB)
L. C. Hulstrom
2007-10-23
This Data Gap Analysis report documents the results of a study conducted by Washington Closure Hanford (WCH) to compile and review the currently available surface water and sediment data for the Columbia River near and downstream of the Hanford Site. This Data Gap Analysis study was conducted to review the adequacy of the existing surface water and sediment data set from the Columbia River, with specific reference to the use of the data in future site characterization and screening-level risk assessments.
Use of Sparse Principal Component Analysis (SPCA) for Fault Detection
DEFF Research Database (Denmark)
Gajjar, Shriram; Kulahci, Murat; Palazoglu, Ahmet
2016-01-01
Principal component analysis (PCA) has been widely used for data dimension reduction and process fault detection. However, interpreting the principal components and the outcomes of PCA-based monitoring techniques is a challenging task since each principal component is a linear combination of the ...
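One way to make a principal component interpretable for fault detection is to force most of its loadings to zero. The following is a sketch in the spirit of sparse-PCA algorithms (alternating updates with soft-thresholding), not the specific SPCA method of the paper; the sensor data and threshold are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

def sparse_pc1(X, lam, iters=200):
    """First sparse loading vector via alternating updates with soft-thresholding."""
    u = np.linalg.svd(X, full_matrices=False)[0][:, 0]
    v = np.zeros(X.shape[1])
    for _ in range(iters):
        w = X.T @ u                                        # project data onto u
        v = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)  # soft-threshold loadings
        nv = np.linalg.norm(v)
        if nv == 0.0:
            return v                                       # lam too large: all zero
        u = X @ v
        u /= np.linalg.norm(u)
    return v / nv

# Toy fault-detection data: only the first 3 of 10 "sensors" carry the signal
n = 100
t = rng.normal(size=n)
X = 0.05 * rng.normal(size=(n, 10))
X[:, :3] += t[:, None]
X -= X.mean(axis=0)

v = sparse_pc1(X, lam=5.0)
active = np.nonzero(np.abs(v) > 1e-6)[0]   # sensors retained in the component
```

Because the loading vector is exactly zero on the noise-only sensors, a monitoring chart built on this component points directly at the variables involved in a fault, which is the interpretability gain the abstract describes.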
Three-dimensional finite element analysis of implant-assisted removable partial dentures.
Eom, Ju-Won; Lim, Young-Jun; Kim, Myung-Joo; Kwon, Ho-Beom
2017-06-01
Whether the implant abutment in implant-assisted removable partial dentures (IARPDs) functions as a natural removable partial denture (RPD) tooth abutment is unknown. The purpose of this 3-dimensional finite element study was to analyze the biomechanical behavior of implant crown, bone, RPD, and IARPD. Finite element models of the partial maxilla, teeth, and prostheses were generated on the basis of a patient's computed tomographic data. The teeth, surveyed crowns, and RPDs were created in the model. With the generated components, four 3-dimensional finite element models of the partial maxilla were constructed: tooth-supported RPD (TB), implant-supported RPD (IB), tooth-tissue-supported RPD (TT), and implant-tissue-supported RPD (IT) models. Oblique loading of 300 N was applied on the crowns and denture teeth. The von Mises stress and displacement of the denture abutment tooth and implant system were identified. The highest von Mises stress values of both IARPDs occurred on the implants, while those of both natural tooth RPDs occurred on the frameworks of the RPDs. The highest von Mises stress of model IT was about twice that of model IB, while the value of model TT was similar to that of model TB. The maximum displacement was greater in models TB and TT than in models IB and IT. Among the 4 models, the highest maximum displacement value was observed in the model TT and the lowest value was in the model IB. Finite element analysis revealed that the stress distribution pattern of the IARPDs was different from that of the natural tooth RPDs and the stress distribution of implant-supported RPD was different from that of implant-tissue-supported RPD. When implants are used for RPD abutments, more consideration concerning the RPD design and the number or location of the implant is necessary. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
PDASAC, Partial Differential Sensitivity Analysis of Stiff System
International Nuclear Information System (INIS)
Caracotsios, M.; Stewart, W.E.
2001-01-01
1 - Description of program or function: PDASAC solves stiff, nonlinear initial-boundary-value problems in a timelike dimension t and a space dimension x. Plane, circular cylindrical or spherical boundaries can be handled. Mixed-order systems of partial differential and algebraic equations can be analyzed with members of order 0 or 1 in t, and 0, 1 or 2 in x. Parametric sensitivities of the calculated states are computed simultaneously on request, via the Jacobian of the state equations. Initial and boundary conditions are efficiently reconciled. Local error control (in the max-norm or the 2-norm) is provided for the state vector and can include the parametric sensitivities if desired. 2 - Method of solution: The method of lines is used, with a user-selected x-grid and a minimum-bandwidth finite-difference approximation of the x-derivatives. Starting conditions are reconciled with a damped Newton algorithm adapted from Bain and Stewart (1991). Initial step selection is done by the first-order algorithms of Shampine (1987), extended here to differential-algebraic equation systems. The solution is continued with the DASSL predictor-corrector algorithm (Petzold 1983, Brenan et al. 1989) with the initial acceleration phase deleted and with row scaling of the Jacobian added. The predictor and corrector are expressed in divided-difference form, with the fixed-leading-coefficient form of corrector (Jackson and Sacks-Davis 1989; Brenan et al. 1989). Weights for the error tests are updated in each step with the user's tolerances at the predicted state. Sensitivity analysis is performed directly on the corrector equations of Caracotsios and Stewart (1985) and is extended here to the initialization when needed. 3 - Restrictions on the complexity of the problem: This algorithm, like DASSL, performs well on differential-algebraic equation systems of index 0 and 1 but not on higher-index systems; see Brenan et al. (1989). The user assigned the work array lengths and the output
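The method of lines named above can be illustrated on a minimal example: semi-discretize the heat equation in x, then step the resulting ODE system implicitly. This is a generic sketch with backward Euler, not PDASAC's algorithm (which uses DASSL's variable-order BDF corrector and sensitivity equations):

```python
import numpy as np

# Heat equation u_t = u_xx on [0,1] with u(0,t) = u(1,t) = 0
nx, nt = 51, 100
dx, dt = 1.0 / (nx - 1), 1e-3
x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)                      # initial state

# Interior Laplacian: second-order finite-difference stencil on the x-grid
n = nx - 2
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx**2

# Backward (implicit) Euler in t: solve (I - dt*A) u_new = u_old each step
I = np.eye(n)
for _ in range(nt):
    u[1:-1] = np.linalg.solve(I - dt * A, u[1:-1])

# Compare against the exact decaying mode of the continuous problem
exact = np.exp(-np.pi**2 * nt * dt) * np.sin(np.pi * x)
err = np.max(np.abs(u - exact))
```

The implicit solve is what makes the stiff semi-discrete system tractable with step sizes far larger than an explicit method would allow, which is the same motivation behind the BDF machinery in DASSL.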
Projection and analysis of nuclear components
International Nuclear Information System (INIS)
Heeschen, U.
1980-01-01
The classification and the types of analysis carried out in pipings for quality control and safety of nuclear power plants, are presented. The operation and emergency conditions with emphasis of possible simplifications of calculations are described. (author/M.C.K.) [pt
How Many Separable Sources? Model Selection In Independent Components Analysis
DEFF Research Database (Denmark)
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Mixed ICA/PCA is of potential interest in any field where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.
Nonparametric inference in nonlinear principal components analysis : exploration and beyond
Linting, Mariëlle
2007-01-01
In the social and behavioral sciences, data sets often do not meet the assumptions of traditional analysis methods. Therefore, nonlinear alternatives to traditional methods have been developed. This thesis starts with a didactic discussion of nonlinear principal components analysis (NLPCA),
Principal component analysis networks and algorithms
Kong, Xiangyu; Duan, Zhansheng
2017-01-01
This book not only provides a comprehensive introduction to neural-based PCA methods in control science, but also presents many novel PCA algorithms and their extensions and generalizations, e.g., dual purpose, coupled PCA, GED, neural based SVD algorithms, etc. It also discusses in detail various analysis methods for the convergence, stabilizing, self-stabilizing property of algorithms, and introduces the deterministic discrete-time systems method to analyze the convergence of PCA/MCA algorithms. Readers should be familiar with numerical analysis and the fundamentals of statistics, such as the basics of least squares and stochastic algorithms. Although it focuses on neural networks, the book only presents their learning law, which is simply an iterative algorithm. Therefore, no a priori knowledge of neural networks is required. This book will be of interest and serve as a reference source to researchers and students in applied mathematics, statistics, engineering, and other related fields.
Blind source separation dependent component analysis
Xiang, Yong; Yang, Zuyuan
2015-01-01
This book provides readers with a complete and self-contained body of knowledge about dependent source separation, including the latest developments in this field. The book gives an overview of blind source separation, in which three promising blind separation techniques that can tackle mutually correlated sources are presented. The book then focuses on the non-negativity-based methods, the time-frequency-analysis-based methods, and the pre-coding-based methods, respectively.
Kernel principal component analysis for change detection
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Morton, J.C.
2008-01-01
region acquired at two different time points. If change over time does not dominate the scene, the projection of the original two bands onto the second eigenvector will show change over time. In this paper a kernel version of PCA is used to carry out the analysis. Unlike ordinary PCA, kernel PCA...... with a Gaussian kernel successfully finds the change observations in a case where nonlinearities are introduced artificially....
Real Time Engineering Analysis Based on a Generative Component Implementation
DEFF Research Database (Denmark)
Kirkegaard, Poul Henning; Klitgaard, Jens
2007-01-01
The present paper outlines the idea of a conceptual design tool with real time engineering analysis which can be used in the early conceptual design phase. The tool is based on a parametric approach using Generative Components with embedded structural analysis. Each of these components uses the g...
How Many Separable Sources? Model Selection In Independent Components Analysis
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
Radiographic analysis of partial or total vertebral body resection
International Nuclear Information System (INIS)
Whitten, C.G.; Hammer, G.H.; El-Khoury, G.Y.; Hugus, J.; Weinstein, J.N.
1991-01-01
Partial and total vertebrectomies are used in the treatment of primary and metastatic neoplasms of the spine. Serial radiographic studies are crucial in the follow-up of patients with vertebrectomies. This paper presents 33 cases and illustrates radiographic examples of both successful and complicated vertebrectomies, including radiographic signs of local tumor recurrence, loosening, migration or fracture of the hardware or methylmethacrylate, bone graft failure, and progressive spinal instability
Partial wave analysis of ι/η(1430) from DM2
International Nuclear Information System (INIS)
Augustin, J.E.; Cosme, G.; Couchot, F.; Fulda, F.; Grosdidier, G.; Jean-Marie, B.; Lepeltier, V.; Mane, M.; Szklarz, G.; Jousset, J.; Ajaltouni, Z.; Falvard, A.; Michel, B.; Montret, J.C.
1989-12-01
A Partial Wave Analysis of the ι/η(1430) region from the study of the radiative decays J/Ψ → γK_S^0 K^± π^∓ and J/Ψ → γK^± K^∓ π^0 is presented. Pseudoscalar dominance appears clearly, with two dynamical components. The main one, which proceeds via δ/a_0(980)π, is centered at 1460 MeV/c^2, while the second one, with K*(892)K dynamics, is peaked at a lower mass (1420 MeV/c^2) close to its kinematical threshold. In addition, the higher part of the mass spectrum contains a significant contribution from the 1^{++} K*(892)K wave.
Partial wave analysis of DM2 data in the η(1430) energy range
International Nuclear Information System (INIS)
Augustin, J.E.; Cosme, G.; Couchot, F.; Fulda, F.; Grosdidier, G.; Jean-Marie, B.; Lepeltier, V.; Szklarz, G.; Bisello, D.; Busetto, G.; Castro, A.; Pescara, L.; Sartori, P.; Stanco, L.; Ajaltouni, Z.; Falvard, A.; Jousset, J.; Michel, B.; Montret, J.C.
1990-10-01
Partial Wave Analysis of the J/ψ → γK_S^0 K^± π^∓ and γK^± K^∓ π^0 decays in the ι/η(1430) mass range shows a clear pseudoscalar dominance, with two dynamical components. The main one, centered at ~1460 MeV/c^2, proceeds via a_0(980)π dynamics, while the second one, with K*(892)K dynamics, is peaked at ~1420 MeV/c^2, close to its threshold. In addition, the higher part of the mass spectrum contains a significant contribution from the 1^{++} K*(892)K wave. In the PWA of the J/ψ → γηπ^+π^- channel a resonant a_0π production is observed slightly below 1400 MeV/c^2.
Problems of stress analysis of fuelling machine head components
International Nuclear Information System (INIS)
Mathur, D.D.
1975-01-01
The problems of stress analysis of fuelling machine head components are discussed. To fulfil the functional requirements, the components are required to have certain shapes for which the stress problems cannot be matched to a catalogue of pre-determined solutions. In several areas, complex systems of loading due to hydrostatic pressure, weight, moments and temperature gradients, coupled with the intricate shapes of the components, make it difficult to arrive at satisfactory solutions. In particular, the analysis requirements of the magazine housing, end cover, gravloc clamps and centre support are highlighted. An experimental stress analysis programme together with a theoretical finite element analysis is perhaps the answer. (author)
Peng, Ying; Li, Su-Ning; Pei, Xuexue; Hao, Kun
2018-03-01
A multivariate regression statistical strategy was developed to clarify the multi-component content-effect correlation of panax ginseng saponins extract and to predict the pharmacological effect from component content. In example 1, we first compared pharmacological effects between panax ginseng saponins extract and individual saponin combinations. Secondly, we examined the anti-platelet aggregation effect of seven different saponin combinations of ginsenoside Rb1, Rg1, Rh, Rd, Ra3 and notoginsenoside R1. Finally, the correlation between anti-platelet aggregation and the content of multiple components was analyzed by a partial least squares algorithm. In example 2, 18 common peaks were first identified in ten different batches of panax ginseng saponins extracts from different origins. Then, we investigated the anti-myocardial ischemia-reperfusion injury effects of the ten different panax ginseng saponins extracts. Finally, the correlation between the fingerprints and the cardioprotective effects was analyzed by a partial least squares algorithm. In both examples, the relationship between component content and pharmacological effect was modeled well by the partial least squares regression equations. Importantly, the predicted effect curve was close to the observed data points marked on the partial least squares regression model. This study provides evidence that multi-component content is promising information for predicting the pharmacological effects of traditional Chinese medicine.
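The partial least squares step used above can be sketched generically. Below is a minimal NIPALS-style PLS1 regression on synthetic data standing in for a component-content matrix `X` and a pharmacological response `y`; the dimensions, coefficients, noise level, and number of latent variables are illustrative assumptions, not the study's values.

```python
import numpy as np

def pls1(X, y, n_components):
    """PLS1 regression via NIPALS; returns the regression coefficient vector."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    Xk, yk = X.copy(), y.copy()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)           # weight: direction of max covariance with y
        t = Xk @ w                       # latent score
        tt = t @ t
        p = Xk.T @ t / tt                # X loading
        q = (yk @ t) / tt                # y loading
        Xk = Xk - np.outer(t, p)         # deflate X
        yk = yk - q * t                  # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.solve(P.T @ W, Q)

rng = np.random.default_rng(4)
n, p = 60, 8                             # e.g. samples x measured component contents
X = rng.normal(size=(n, p))
b_true = np.array([2.0, -1.0, 0.5, 0, 0, 0, 0, 0])
y = X @ b_true + rng.normal(scale=0.1, size=n)

b = pls1(X, y, n_components=3)
yhat = (X - X.mean(axis=0)) @ b + y.mean()
r2 = 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
```

With three latent variables the fitted effect curve tracks the observations closely here, mirroring the kind of content-effect fit the abstract reports.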
Component reliability analysis for development of component reliability DB of Korean standard NPPs
International Nuclear Information System (INIS)
Choi, S. Y.; Han, S. H.; Kim, S. H.
2002-01-01
The reliability data of Korean NPPs that reflect plant-specific characteristics are necessary for PSA and Risk-Informed Applications. We have performed a project to develop a component reliability DB and to calculate component reliability measures such as failure rate and unavailability. We have collected the component operation data and failure/repair data of Korean standard NPPs. We have analyzed the failure data by developing a data analysis method that incorporates the domestic data situation, and we have then compared the reliability results with generic data for foreign NPPs.
Kou, Jisheng
2016-11-25
A general diffuse interface model with a realistic equation of state (e.g. the Peng-Robinson equation of state) is proposed to describe multi-component two-phase fluid flow based on the principles of the NVT-based framework, a recent alternative to the NPT-based framework for modeling realistic fluids. The proposed model uses the Helmholtz free energy rather than the Gibbs free energy used in the NPT-based framework. Departing from the classical routines, we combine the first law of thermodynamics and related thermodynamical relations to derive the entropy balance equation, and from it we derive a transport equation for the Helmholtz free energy density. Furthermore, by using the second law of thermodynamics, we derive a set of unified equations for both interfaces and bulk phases that can describe the partial miscibility of two fluids. A relation between the pressure gradient and the chemical potential gradients is established, and this relation leads to a new formulation of the momentum balance equation, which demonstrates that chemical potential gradients become the primary driving force of fluid motion. Moreover, we prove that the proposed model satisfies total (free) energy dissipation in time. For numerical simulation of the proposed model, the key difficulties result from the strong nonlinearity of the Helmholtz free energy density and the tight coupling between molar densities and velocity. To resolve these problems, we propose a novel convex-concave splitting of the Helmholtz free energy density and treat the coupling between molar densities and velocity through careful physical observations with mathematical rigor. We prove that the proposed numerical scheme preserves the discrete (free) energy dissipation. Numerical tests are carried out to verify the effectiveness of the proposed method.
Direct Calculation of the Scattering Amplitude Without Partial Wave Analysis
Shertzer, J.; Temkin, A.; Fisher, Richard R. (Technical Monitor)
2001-01-01
Two new developments in scattering theory are reported. We show, in a practical way, how one can calculate the full scattering amplitude without invoking a partial wave expansion. First, the integral expression for the scattering amplitude f(theta) is simplified by an analytic integration over the azimuthal angle. Second, the full scattering wavefunction which appears in the integral expression for f(theta) is obtained by solving the Schrodinger equation with the finite element method (FEM). As an example, we calculate electron scattering from the Hartree potential. With minimal computational effort, we obtain accurate and stable results for the scattering amplitude.
Principal Component Analysis of Body Measurements In Three ...
African Journals Online (AJOL)
This study was conducted to explore the relationships among body measurements in three strains of broiler chicken (Arbor Acre, Marshal and Ross) using principal component analysis, with a view to identifying those components that define body conformation in broilers. A total of 180 birds were used, 60 per strain.
Tomato sorting using independent component analysis on spectral images
Polder, G.; Heijden, van der G.W.A.M.; Young, I.T.
2003-01-01
Independent Component Analysis is one of the most widely used methods for blind source separation. In this paper we use this technique to estimate the most important compounds which play a role in the ripening of tomatoes. Spectral images of tomatoes were analyzed. Two main independent components
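Blind source separation of the kind described here can be sketched with a minimal FastICA implementation (deflation scheme, tanh nonlinearity) on synthetic mixtures; the sources, mixing matrix, seed, and convergence settings below are illustrative assumptions, not the spectral-image pipeline of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
t = np.linspace(0, 8, n)
# Two non-Gaussian sources: a sine wave and uniform noise.
S = np.c_[np.sin(2 * np.pi * t), rng.uniform(-1, 1, n)]
A = np.array([[1.0, 0.5], [0.4, 1.0]])   # mixing matrix, treated as unknown
X = S @ A.T

# Whiten the observed mixtures.
X = X - X.mean(axis=0)
d, E = np.linalg.eigh(X.T @ X / n)
Z = X @ E / np.sqrt(d)                   # identity covariance

# FastICA: w+ = E[z g(w.z)] - E[g'(w.z)] w, with g = tanh.
W = np.zeros((2, 2))
for i in range(2):
    w = rng.normal(size=2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        g = np.tanh(Z @ w)
        w_new = Z.T @ g / n - (1 - g**2).mean() * w
        for j in range(i):               # deflation: stay orthogonal to found rows
            w_new -= (w_new @ W[j]) * W[j]
        w_new /= np.linalg.norm(w_new)
        done = abs(abs(w_new @ w) - 1) < 1e-10
        w = w_new
        if done:
            break
    W[i] = w

Y = Z @ W.T   # estimated sources, up to order, sign, and scale
# Each estimate should correlate strongly with exactly one true source.
C = np.abs(np.corrcoef(Y.T, S.T)[:2, 2:])
match = C.max(axis=1)
```

The permutation and sign ambiguity visible in `C` is inherent to ICA and is why the recovered "compounds" in applications like this must be interpreted, not read off directly.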
Baryon Spectroscopy Through Partial-Wave Analysis and Meson Photoproduction
International Nuclear Information System (INIS)
Manley, D. Mark
2016-01-01
The principal goal of this project is the experimental and phenomenological study of baryon spectroscopy. The PI's group consists of himself and three graduate students. This final report summarizes research activities by the PI's group during the period 03/01/2015 to 08/14/2016. During this period, the PI co-authored 11 published journal papers and one proceedings article and presented three invited talks. The PI's general interest is the investigation of the baryon resonance spectrum up to masses of ~ 2 GeV. More detail is given on two research projects: Neutral Kaon Photoproduction and Partial-Wave Analyses of γp → η p, γn → η n, and γp → K⁺ Λ.
Baryon Spectroscopy Through Partial-Wave Analysis and Meson Photoproduction
Energy Technology Data Exchange (ETDEWEB)
Manley, D. Mark [Kent State Univ., Kent, OH (United States)
2016-09-08
The principal goal of this project is the experimental and phenomenological study of baryon spectroscopy. The PI's group consists of himself and three graduate students. This final report summarizes research activities by the PI's group during the period 03/01/2015 to 08/14/2016. During this period, the PI co-authored 11 published journal papers and one proceedings article and presented three invited talks. The PI's general interest is the investigation of the baryon resonance spectrum up to masses of ~ 2 GeV. More detail is given on two research projects: Neutral Kaon Photoproduction and Partial-Wave Analyses of γp → η p, γn → η n, and γp → K⁺ Λ.
Contrast analysis of the partial splenic artery embolization with splenectomy
International Nuclear Information System (INIS)
Lu Wusheng; He Qing; Zheng Zhiyong; Wu Shaoping; Xu Dawei
2006-01-01
Objective: To analyze the effects and complications of partial splenic artery embolization (PSE) and splenectomy, offering a feasible basis for choosing between therapeutic methods for hypersplenism. Methods: Forty-six patients treated with PSE and thirty-three who underwent splenectomy were compared for effectiveness and complications in treating hypersplenism. Results: Thrombocyte and leucocyte counts increased markedly after both kinds of treatment (P < 0.05). The complication rate of PSE was far higher than that of splenectomy (P < 0.001). Conclusions: Splenectomy is preferable to PSE in patients with a large amount of ascites, serious portal hypertension and splenomegaly. PSE is suitable for patients with poor liver function, blood coagulation disturbance, liver cancer complicated with hypersplenism, and advanced age. (authors)
Stepwise Analysis of Differential Item Functioning Based on Multiple-Group Partial Credit Model.
Muraki, Eiji
1999-01-01
Extended an Item Response Theory (IRT) method for detection of differential item functioning to the partial credit model and applied the method to simulated data using a stepwise procedure. Then applied the stepwise DIF analysis based on the multiple-group partial credit model to writing trend data from the National Assessment of Educational…
Key components of financial-analysis education for clinical nurses.
Lim, Ji Young; Noh, Wonjung
2015-09-01
In this study, we identified key components of financial-analysis education for clinical nurses. We used a literature review, focus group discussions, and a content validity index survey to develop key components of financial-analysis education. First, a wide range of references were reviewed, and 55 financial-analysis education components were gathered. Second, two focus group discussions were performed; the participants were 11 nurses who had worked for more than 3 years in a hospital, and nine components were agreed upon. Third, 12 professionals, including professors, a nurse executive, nurse managers, and an accountant, participated in the content validity index survey. Finally, six key components of financial-analysis education were selected. These key components were as follows: understanding the need for financial analysis, introduction to financial analysis, reading and implementing balance sheets, reading and implementing income statements, understanding the concepts of financial ratios, and interpretation and practice of financial ratio analysis. The results of this study will be used to develop an education program to increase financial-management competency among clinical nurses. © 2015 Wiley Publishing Asia Pty Ltd.
Dynamic Modal Analysis of Vertical Machining Centre Components
Anayet U. Patwari; Waleed F. Faris; A. K. M. Nurul Amin; S. K. Loh
2009-01-01
The paper presents a systematic procedure and details of the use of experimental and analytical modal analysis techniques for the structural dynamic evaluation of a vertical machining centre. The main results deal with assessment of the mode shapes of the different components of the vertical machining centre. A simplified experimental modal analysis of the different components of the milling machine was carried out. This model of the different machine tool's structure is made by design software...
Partial wave analysis for folded differential cross sections
Machacek, J. R.; McEachran, R. P.
2018-03-01
The value of modified effective range theory (MERT) and the connection between differential cross sections and phase shifts in low-energy electron scattering has long been recognized. Recent experimental techniques involving magnetically confined beams have introduced the concept of folded differential cross sections (FDCS), where the forward (θ ≤ π/2) and backward-scattered (θ ≥ π/2) projectiles are unresolved; that is, the value measured at the angle θ is the sum of the signal for particles scattered into the angles θ and π - θ. We have developed an alternative approach to MERT in order to analyse low-energy folded differential cross sections for positrons and electrons. This results in a simplified expression for the FDCS when it is expressed in terms of partial waves, and thereby enables one to extract the first few phase shifts from a fit to an experimental FDCS at low energies. Thus, this method predicts forward- and backward-angle scattering (0 to π) using only experimental FDCS data, and can be used to determine the total elastic cross section solely from experimental results at low energies, which are limited in angular range.
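The relation between partial waves, the folded cross section, and the total elastic cross section can be made concrete. The sketch below assumes a few illustrative phase shifts (the values are arbitrary, not fitted to any experiment) and uses the standard expansion f(θ) = (1/k) Σ_l (2l+1) e^{iδ_l} sin(δ_l) P_l(cos θ); the FDCS at θ is then |f(θ)|² + |f(π - θ)|².

```python
import numpy as np
from numpy.polynomial.legendre import legval

def amplitude(theta, deltas, k=1.0):
    """Scattering amplitude from partial-wave phase shifts (radians)."""
    x = np.cos(theta)
    coef = [(2 * l + 1) * np.exp(1j * d) * np.sin(d) for l, d in enumerate(deltas)]
    re = legval(x, [c.real for c in coef])   # legval sums c_l * P_l(x)
    im = legval(x, [c.imag for c in coef])
    return (re + 1j * im) / k

deltas = [0.8, 0.3, 0.05]            # assumed s-, p-, d-wave phase shifts
theta = np.linspace(0.0, np.pi / 2, 91)
# Folded DCS: forward and backward scattering are unresolved.
fdcs = (np.abs(amplitude(theta, deltas)) ** 2
        + np.abs(amplitude(np.pi - theta, deltas)) ** 2)

# Total elastic cross section from the same phase shifts (k = 1).
sigma = 4 * np.pi * sum((2 * l + 1) * np.sin(d) ** 2
                        for l, d in enumerate(deltas))

# Consistency check: integrating |f|^2 over the full sphere reproduces sigma.
full = np.linspace(0.0, np.pi, 2001)
y = np.abs(amplitude(full, deltas)) ** 2 * np.sin(full)
integ = 2 * np.pi * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(full))
```

This is the direction of the paper's method run in reverse: given phase shifts extracted from a fit to an experimental FDCS, the same expressions return the unfolded angular distribution and the total elastic cross section.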
On the structure of dynamic principal component analysis used in statistical process monitoring
DEFF Research Database (Denmark)
Vanhatalo, Erik; Kulahci, Murat; Bergquist, Bjarne
2017-01-01
When principal component analysis (PCA) is used for statistical process monitoring it relies on the assumption that data are time independent. However, industrial data will often exhibit serial correlation. Dynamic PCA (DPCA) has been suggested as a remedy for high-dimensional and time...... for determining the number of principal components to retain. The number of retained principal components is determined by visual inspection of the serial correlation in the squared prediction error statistic, Q (SPE), together with the cumulative explained variance of the model. The methods are illustrated using...... driven method to determine the maximum number of lags in DPCA with a foundation in multivariate time series analysis. The method is based on the behavior of the eigenvalues of the lagged autocorrelation and partial autocorrelation matrices. Given a specific lag structure we also propose a method...
Lęski, Szymon; Kublik, Ewa; Swiejkowski, Daniel A; Wróbel, Andrzej; Wójcik, Daniel K
2010-12-01
Local field potentials have good temporal resolution but are blurred due to the slow spatial decay of the electric field. For simultaneous recordings on regular grids one can efficiently reconstruct the current sources (CSD) using the inverse Current Source Density method (iCSD). It is possible to decompose the resultant spatiotemporal information about the current dynamics into functional components using Independent Component Analysis (ICA). We show on test data modeling recordings of evoked potentials on a grid of 4 × 5 × 7 points that meaningful results are obtained with spatial ICA decomposition of reconstructed CSD. The components obtained through decomposition of CSD are better defined and allow easier physiological interpretation than the results of similar analysis of corresponding evoked potentials in the thalamus. We show that spatiotemporal ICA decompositions can perform better for certain types of sources, but this does not seem to be the case for the experimental data studied. Having found the appropriate approach to decomposing neural dynamics into functional components, we use the technique to study the somatosensory evoked potentials recorded on a grid spanning a large part of the forebrain. We discuss two example components associated with the first waves of activation of the somatosensory thalamus. We show that the proposed method brings up new, more detailed information on the time and spatial location of specific activity conveyed through various parts of the somatosensory thalamus in the rat.
Mathematical analysis of partial differential equations modeling electrostatic MEMS
Esposito, Pierpaolo; Guo, Yujin
2010-01-01
Micro- and nanoelectromechanical systems (MEMS and NEMS), which combine electronics with miniature-size mechanical devices, are essential components of modern technology. It is the mathematical model describing "electrostatically actuated" MEMS that is addressed in this monograph. Even the simplified models that the authors deal with still lead to very interesting second- and fourth-order nonlinear elliptic equations (in the stationary case) and to nonlinear parabolic equations (in the dynamic case). While nonlinear eigenvalue problems-where the stationary MEMS models fit-are a well-developed
Directory of Open Access Journals (Sweden)
A. Bouhassan
2004-12-01
Full Text Available In detached leaf tests on faba bean (Vicia faba L.), genotypes partially resistant and susceptible to Botrytis fabae were examined. Expression of four components of partial resistance to a virulent isolate of B. fabae differed depending on the plant age and the leaf age of the genotypes. The incubation period of resistant genotypes at the podding stage was longer than that of susceptible genotypes at the same stage. The area under disease progress curve (AUDPC of the lesion size increased from the seedling to the flowering stage but declined at the podding stage in all genotypes. Differences between resistant and susceptible genotypes for lesion size were significant except on old leaves from plants at the podding stage. The latent period decreased, and spore production increased, with increasing growth and leaf age, but there was significant interaction with the genotype. These last two components of partial resistance were more clearly expressed at all growth stages on FRY167 (highly resistant) but were expressed only at the seedling and podding stages on FRY7 (resistant). The resistant line BPL710 was not significantly different from the susceptible genotypes for the latent period at any growth stage, or for spore production at the seedling and flowering stages. Leaf age affected all genotypes, but with a significant interaction between leaf age and growth stage. Components of partial resistance were more strongly expressed on young leaves from plants at the seedling or flowering stage.
System diagnostics using qualitative analysis and component functional classification
International Nuclear Information System (INIS)
Reifman, J.; Wei, T.Y.C.
1993-01-01
A method for detecting and identifying faulty component candidates during off-normal operations of nuclear power plants involves the qualitative analysis of macroscopic imbalances in the conservation equations of mass, energy and momentum in thermal-hydraulic control volumes associated with one or more plant components and the functional classification of components. The qualitative analysis of mass and energy is performed through the associated equations of state, while imbalances in momentum are obtained by tracking mass flow rates which are incorporated into a first knowledge base. The plant components are functionally classified, according to their type, as sources or sinks of mass, energy and momentum, depending upon which of the three balance equations is most strongly affected by a faulty component which is incorporated into a second knowledge base. Information describing the connections among the components of the system forms a third knowledge base. The method is particularly adapted for use in a diagnostic expert system to detect and identify faulty component candidates in the presence of component failures and is not limited to use in a nuclear power plant, but may be used with virtually any type of thermal-hydraulic operating system. 5 figures
Multistage principal component analysis based method for abdominal ECG decomposition
International Nuclear Information System (INIS)
Petrolis, Robertas; Krisciukaitis, Algimantas; Gintautas, Vladas
2015-01-01
Reflection of fetal heart electrical activity is present in registered abdominal ECG signals. However, this signal component has noticeably less energy than concurrent signals, especially the maternal ECG. Therefore the traditionally recommended independent component analysis fails to separate these two ECG signals. Multistage principal component analysis (PCA) is proposed for step-by-step extraction of abdominal ECG signal components. Truncated representation and subsequent subtraction of cardiac cycles of the maternal ECG are the first steps. The energy of the fetal ECG component then becomes comparable to, or even exceeds, the energy of other components in the remaining signal. Second-stage PCA concentrates the energy of the sought signal in one principal component, assuring its maximal amplitude regardless of the orientation of the fetus in multilead recordings. Third-stage PCA is performed on signal excerpts representing detected fetal heart beats, with the aim of producing their truncated representation and reconstructing their shape for further analysis. The algorithm was tested with PhysioNet Challenge 2013 signals and signals recorded in the Department of Obstetrics and Gynecology, Lithuanian University of Health Sciences. Results of our method on the PhysioNet Challenge 2013 open data set were: average score: 341.503 bpm² and 32.81 ms. (paper)
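The truncated-representation-and-subtraction step can be sketched on synthetic data: a strong "maternal" waveform with amplitude jitter is captured by a rank-1 PCA model of the aligned cycles and subtracted, leaving the weaker, asynchronous "fetal" contribution in the residual. All waveforms, scales, and offsets below are illustrative assumptions, not the authors' processing chain.

```python
import numpy as np

rng = np.random.default_rng(6)
L, n_beats = 80, 40
i = np.arange(L)
maternal = np.exp(-0.5 * ((i - 40) / 3.0) ** 2)      # dominant "maternal QRS" shape
fetal = 0.15 * np.exp(-0.5 * ((i - 25) / 2.0) ** 2)  # weaker fetal complex

# Cycles aligned on the maternal QRS; fetal beats fall at random offsets.
amp = 1.0 + 0.1 * rng.normal(size=(n_beats, 1))      # maternal amplitude jitter
shifts = rng.integers(0, L, n_beats)
F = np.array([np.roll(fetal, s) for s in shifts])
beats = amp * maternal + F + 0.02 * rng.normal(size=(n_beats, L))

# Truncated (rank-1) PCA representation of the maternal component...
B = beats - beats.mean(axis=0)
U, s, Vt = np.linalg.svd(B, full_matrices=False)
maternal_est = beats.mean(axis=0) + np.outer(U[:, 0] * s[0], Vt[0])

# ...and subtraction: the residual is dominated by the fetal component.
resid = beats - maternal_est
ratio = np.abs(resid[:, 40]).max() / beats[:, 40].mean()  # suppression at QRS peak
```

Because the fetal beats are not synchronized with the maternal alignment, they contribute little to the leading principal component and therefore survive the subtraction, which is the premise of the multistage scheme.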
Analysis of partial and total inelasticities obtained from inclusive reactions
International Nuclear Information System (INIS)
Bellandi, J.; Covolan, R.; Costa, C.G.; Montanha, J.; Mundim, L.M.
1994-01-01
A model-independent analysis of the energy dependence of the inelasticity is presented, based on experimental data from inclusive reactions of the type pp → cX (c = π^±, K^±, p^±). 6 refs., 2 figs., 1 tab
Functional Principal Components Analysis of Shanghai Stock Exchange 50 Index
Directory of Open Access Journals (Sweden)
Zhiliang Wang
2014-01-01
Full Text Available The main purpose of this paper is to explore the principal components of the Shanghai stock exchange 50 index by means of functional principal component analysis (FPCA). Functional data analysis (FDA) deals with random variables (or processes) with realizations in a smooth functional space. One of the most popular FDA techniques is functional principal component analysis, which was introduced for the statistical analysis of a set of financial time series from an explorative point of view. FPCA is the functional analogue of the well-known dimension reduction technique in multivariate statistical analysis, searching for linear transformations of the random vector with maximal variance. In this paper, we studied the monthly return volatility of the Shanghai stock exchange 50 index (SSE50). Using FPCA to reduce dimension to a finite level, we extracted the most significant components of the data and some relevant statistical features of such related datasets. The calculated results show that regarding the samples as random functions is rational. Compared with ordinary principal component analysis, FPCA can solve the problem of different dimensions in the samples. And FPCA is a convenient approach to extract the main variance factors.
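When curves are observed on a common grid, FPCA reduces to PCA of the discretized curves. The sketch below builds synthetic curves from two known modes of variation (the shapes, scales, and noise are assumptions, not SSE50 data) and shows the first empirical eigenfunction recovering the dominant mode.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 100)                 # common observation grid
# Simulated functional data: two known modes of variation plus noise.
phi1, phi2 = np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)
scores = rng.normal(size=(50, 2)) * [3.0, 1.0]   # mode 1 dominates
curves = (scores[:, [0]] * phi1 + scores[:, [1]] * phi2
          + rng.normal(scale=0.05, size=(50, 100)))

# Functional PCA on the grid: SVD of mean-centered curves.
Xc = curves - curves.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)            # variance explained per component

# The first eigenfunction should align with phi1, the dominant mode.
align = abs(Vt[0] @ phi1) / (np.linalg.norm(Vt[0]) * np.linalg.norm(phi1))
```

In a fuller FPCA one would first smooth each curve with a basis expansion (splines or Fourier) before the eigen-decomposition; on a dense common grid the plain SVD already conveys the idea.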
Sparse Principal Component Analysis in Medical Shape Modeling
DEFF Research Database (Denmark)
Sjöstrand, Karl; Stegmann, Mikkel Bille; Larsen, Rasmus
2006-01-01
Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims...... analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of sufficiently small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA...
Efficacy of the Principal Components Analysis Techniques Using ...
African Journals Online (AJOL)
Second, the paper reports results of principal components analysis after the artificial data were submitted to three commonly used procedures; scree plot, Kaiser rule, and modified Horn's parallel analysis, and demonstrate the pedagogical utility of using artificial data in teaching advanced quantitative concepts. The results ...
Principal Component Clustering Approach to Teaching Quality Discriminant Analysis
Xian, Sidong; Xia, Haibo; Yin, Yubo; Zhai, Zhansheng; Shang, Yan
2016-01-01
Teaching quality is the lifeline of higher education. Many universities have made effective progress in evaluating teaching quality. In this paper, we establish a students' evaluation of teaching (SET) discriminant analysis model and algorithm based on principal component clustering analysis. Additionally, we classify the SET…
Fault Localization for Synchrophasor Data using Kernel Principal Component Analysis
Directory of Open Access Journals (Sweden)
CHEN, R.
2017-11-01
Full Text Available In this paper, a nonlinear method for fault location in complex power systems is proposed, based on Kernel Principal Component Analysis (KPCA) of Phasor Measurement Unit (PMU) data. Resorting to a scaling factor, the derivative for a polynomial kernel is obtained. Then, the contribution of each variable to the T2 statistic is derived to determine whether a bus is the faulty component. Compared to previous Principal Component Analysis (PCA) based methods, the new version can cope with strong nonlinearity and provides precise identification of the fault location. Computer simulations demonstrate the improved performance of the proposed method in recognizing the faulty component and evaluating its propagation across the system.
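The T2-based detection idea can be sketched with a hand-rolled kernel PCA using a polynomial kernel, as in the abstract. The data, the injected fault, and the leverage-style statistic below are illustrative stand-ins, not the authors' derivative-based contribution analysis:

```python
import numpy as np

def poly_kernel(X, Y, degree=3, c=1.0):
    """Polynomial kernel (x.y + c)^degree."""
    return (X @ Y.T + c) ** degree

def kpca_t2(X, n_components=2):
    """Kernel PCA followed by a Hotelling-style T2 statistic per sample."""
    n = X.shape[0]
    K = poly_kernel(X, X)
    # Center the kernel matrix in feature space.
    one_n = np.ones((n, n)) / n
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    eigvals, eigvecs = np.linalg.eigh(Kc)
    # eigh returns ascending eigenvalues; keep the largest components.
    idx = np.argsort(eigvals)[::-1][:n_components]
    lam, alpha = eigvals[idx], eigvecs[:, idx]
    # Normalize so the feature-space eigenvectors have unit norm.
    alpha = alpha / np.sqrt(lam)
    scores = Kc @ alpha                     # nonlinear principal component scores
    # T2: squared scores weighted by the inverse component variances.
    t2 = np.sum(scores**2 / (lam / n), axis=1)
    return t2

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(50, 4))     # 50 "measurements", 4 variables
fault = normal.copy()
fault[-1] += 6.0                            # inject a gross deviation in the last sample
t2 = kpca_t2(fault)
assert np.argmax(t2) == 49                  # the faulty sample has the largest T2
```

A measurement whose T2 exceeds a control limit would then be flagged, and per-variable contributions to that T2 would point at the faulty bus.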
Clinical usefulness of physiological components obtained by factor analysis
International Nuclear Information System (INIS)
Ohtake, Eiji; Murata, Hajime; Matsuda, Hirofumi; Yokoyama, Masao; Toyama, Hinako; Satoh, Tomohiko.
1989-01-01
The clinical usefulness of physiological components obtained by factor analysis was assessed in 99mTc-DTPA renography. Once definite physiological components are available, other dynamic data can be analyzed with them. In this paper, dynamic renal function after ESWL (Extracorporeal Shock Wave Lithotripsy) treatment was examined using physiological components obtained from the kidney before ESWL and/or from a normal kidney. Changes in renal function could easily be evaluated by this method. The usefulness of this new analysis using physiological components is summarized as follows: 1) the change of a dynamic function can be assessed quantitatively as a change in the contribution ratio; 2) the change of a diseased condition can be evaluated morphologically as a change in the functional image. (author)
Transient Analysis of Monopile Foundations Partially Embedded in Liquefied Soil
DEFF Research Database (Denmark)
Barari, Amin; Bayat, Mehdi; Meysam, Saadati
2015-01-01
Fast Lagrangian Analysis of Continua (FLAC), which captured the fundamental mechanisms of the monopiles in saturated granular soil. The effects of inertia and the kinematic flow of soil are investigated separately, to highlight the importance of considering the combined effect of these phenomena on the seismic...
Partial Derivative Games in Thermodynamics: A Cognitive Task Analysis
Kustusch, Mary Bridget; Roundy, David; Dray, Tevian; Manogue, Corinne A.
2014-01-01
Several studies in recent years have demonstrated that upper-division students struggle with the mathematics of thermodynamics. This paper presents a task analysis based on several expert attempts to solve a challenging mathematics problem in thermodynamics. The purpose of this paper is twofold. First, we highlight the importance of cognitive task…
Independent component analysis based filtering for penumbral imaging
International Nuclear Information System (INIS)
Chen Yenwei; Han Xianhua; Nozaki, Shinya
2004-01-01
We propose a filtering method based on independent component analysis (ICA) for Poisson noise reduction. In the proposed filtering, the image is first transformed to the ICA domain and then the noise components are removed by soft thresholding (shrinkage). The proposed filter, used as a preprocessing step for the reconstruction, has been successfully applied to penumbral imaging. Both simulation and experimental results show that the reconstructed image is dramatically improved in comparison to that obtained without the noise-removing filter.
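The ICA-domain shrinkage step can be sketched with scikit-learn's FastICA on a toy mixture; the signals, mixing matrix, and threshold below are invented for illustration (the paper applies this to image data before penumbral reconstruction):

```python
import numpy as np
from sklearn.decomposition import FastICA

def soft_threshold(x, t):
    """Shrinkage operator: moves coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(1)
# Two clean source signals and a noisy linear mixture of them.
t_ax = np.linspace(0, 1, 2000)
S = np.c_[np.sin(2 * np.pi * 5 * t_ax), np.sign(np.sin(2 * np.pi * 3 * t_ax))]
A = np.array([[1.0, 0.5], [0.5, 1.0]])
X = S @ A.T + 0.05 * rng.normal(size=(2000, 2))

ica = FastICA(n_components=2, random_state=0)
C = ica.fit_transform(X)              # coefficients in the ICA domain
C_shrunk = soft_threshold(C, 0.02)    # suppress small (noise-dominated) values
X_denoised = ica.inverse_transform(C_shrunk)

# Shrinkage never increases a coefficient's magnitude.
assert np.all(np.abs(C_shrunk) <= np.abs(C) + 1e-12)
```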
Numerical analysis of magnetoelastic coupled buckling of fusion reactor components
International Nuclear Information System (INIS)
Demachi, K.; Yoshida, Y.; Miya, K.
1994-01-01
For a tokamak fusion reactor, establishing a structural design whose components can withstand the strong magnetic forces induced by a plasma disruption is one of the most important subjects. A number of magnetostructural analyses of fusion reactor components have been performed recently. However, in these studies the structural behavior was calculated based on small-deformation theory, where nonlinearity is neglected. It is known, however, that some kinds of structures readily exhibit geometrical nonlinearity. In this paper, the deflection and the magnetoelastic buckling load of fusion reactor components during plasma disruption were calculated.
Computer compensation for NMR quantitative analysis of trace components
International Nuclear Information System (INIS)
Nakayama, T.; Fujiwara, Y.
1981-01-01
A computer program has been written that determines trace components and separates overlapping components in multicomponent NMR spectra. The program uses the Lorentzian curve as the theoretical line shape of NMR spectra, with the Lorentzian coefficients determined by the method of least squares. Systematic errors such as baseline/phase distortion are compensated, and random errors are smoothed by taking moving averages, so that these processes contribute substantially to decreasing the accumulation time of spectral data. The accuracy of quantitative analysis of trace components has been improved by two significant figures. The program was applied to determining the abundance of 13C and the saponification degree of PVA.
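The least-squares Lorentzian fitting step can be sketched with SciPy's curve_fit on a synthetic two-component spectrum containing a 5% trace peak; the line-shape parameters are invented, and the program's baseline/phase compensation is omitted:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, a, x0, w):
    """Lorentzian line shape: amplitude a, center x0, half-width w."""
    return a * w**2 / ((x - x0)**2 + w**2)

def two_peaks(x, a1, x01, w1, a2, x02, w2):
    """Sum of a major peak and an overlapping trace peak."""
    return lorentzian(x, a1, x01, w1) + lorentzian(x, a2, x02, w2)

rng = np.random.default_rng(2)
x = np.linspace(-5, 5, 500)
# A major component overlapping a 5%-intensity trace component, plus noise.
y = two_peaks(x, 1.0, -0.5, 0.4, 0.05, 1.2, 0.3) + 0.002 * rng.normal(size=x.size)

p0 = [1.0, -0.4, 0.5, 0.04, 1.0, 0.3]     # rough initial guesses
popt, _ = curve_fit(two_peaks, x, y, p0=p0)
assert abs(popt[3] - 0.05) < 0.01         # trace amplitude recovered
```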
Multi-component separation and analysis of bat echolocation calls.
DiCecco, John; Gaudette, Jason E; Simmons, James A
2013-01-01
The vast majority of animal vocalizations contain multiple frequency modulated (FM) components with varying amounts of non-linear modulation and harmonic instability. This is especially true of biosonar sounds where precise time-frequency templates are essential for neural information processing of echoes. Understanding the dynamic waveform design by bats and other echolocating animals may help to improve the efficacy of man-made sonar through biomimetic design. Bats are known to adapt their call structure based on the echolocation task, proximity to nearby objects, and density of acoustic clutter. To interpret the significance of these changes, a method was developed for component separation and analysis of biosonar waveforms. Techniques for imaging in the time-frequency plane are typically limited due to the uncertainty principle and interference cross terms. This problem is addressed by extending the use of the fractional Fourier transform to isolate each non-linear component for separate analysis. Once separated, empirical mode decomposition can be used to further examine each component. The Hilbert transform may then successfully extract detailed time-frequency information from each isolated component. This multi-component analysis method is applied to the sonar signals of four species of bats recorded in-flight by radiotelemetry along with a comparison of other common time-frequency representations.
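The final step, extracting detailed time-frequency information from an isolated component via the Hilbert transform, can be sketched on a synthetic linear FM sweep; the parameters are invented, and real biosonar components are nonlinearly modulated:

```python
import numpy as np
from scipy.signal import hilbert

fs = 100_000                                   # sample rate (Hz)
t = np.arange(0, 0.005, 1 / fs)
# A downward linear FM sweep, loosely resembling one biosonar component.
f0, f1 = 40_000, 20_000
phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / t[-1] * t**2)
x = np.cos(phase)

analytic = hilbert(x)                          # analytic signal x + j*H{x}
inst_phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)

mid = inst_freq[len(inst_freq) // 2]           # frequency at mid-sweep
assert abs(mid - 30_000) < 1_000               # 40 kHz -> 20 kHz passes 30 kHz midway
```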
Condition monitoring with Mean field independent components analysis
DEFF Research Database (Denmark)
Pontoppidan, Niels Henrik; Sigurdsson, Sigurdur; Larsen, Jan
2005-01-01
We discuss condition monitoring based on mean field independent components analysis of acoustic emission energy signals. Within this framework it is possible to formulate a generative model that explains the sources, their mixing, and the noise statistics of the observed signals. By using...... a novelty approach we may detect unseen faulty signals as indeed faulty with high precision, even though the model learns only from normal signals. This is done by evaluating the likelihood that the model generated the signals and adapting a simple threshold for decision. Acoustic emission energy signals...... from a large diesel engine are used to demonstrate this approach. The results show that mean field independent components analysis gives better fault detection than principal components analysis, while at the same time selecting a more compact model...
Independent component analysis for automatic note extraction from musical trills
Brown, Judith C.; Smaragdis, Paris
2004-05-01
The method of principal component analysis, which is based on second-order statistics (or linear independence), has long been used for redundancy reduction of audio data. The more recent technique of independent component analysis, enforcing much stricter statistical criteria based on higher-order statistical independence, is introduced and shown to be far superior in separating independent musical sources. This theory has been applied to piano trills and a database of trill rates was assembled from experiments with a computer-driven piano, recordings of a professional pianist, and commercially available compact disks. The method of independent component analysis has thus been shown to be an outstanding, effective means of automatically extracting interesting musical information from a sea of redundant data.
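The PCA-versus-ICA contrast can be illustrated on a toy two-note mixture; the tones and mixing matrix are invented stand-ins for the piano recordings:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 4000)
# Two "notes" of a trill, here idealized as independent steady tones.
s1 = np.sin(2 * np.pi * 440 * t)
s2 = np.sign(np.sin(2 * np.pi * 554 * t))     # strongly non-Gaussian source
S = np.c_[s1, s2]
A = np.array([[1.0, 0.6], [0.4, 1.0]])        # unknown mixing matrix
X = S @ A.T                                   # two "microphone" channels

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)

# Each recovered component should correlate strongly with one true source
# (up to permutation and sign); PCA's orthogonal rotation, relying only on
# second-order statistics, cannot guarantee this.
corr = np.abs(np.corrcoef(S.T, S_hat.T)[:2, 2:])
assert np.all(corr.max(axis=1) > 0.95)
```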
Signal-dependent independent component analysis by tunable mother wavelets
International Nuclear Information System (INIS)
Seo, Kyung Ho
2006-02-01
The objective of this study is to improve standard independent component analysis when applied to real-world signals. Independent component analysis starts from the assumption that signals from different physical sources are statistically independent. But real-world signals such as EEG, ECG, MEG, and fMRI signals are not perfectly statistically independent. By definition, standard independent component analysis algorithms cannot estimate statistically dependent sources, that is, when the assumption of independence does not hold. Therefore some preprocessing stage is needed before independent component analysis. This paper starts from the simple intuition that source signals transformed by a 'well-tuned' mother wavelet will be simplified sufficiently, so that the source separation will give better results. The tuning between the source signal and the tunable mother wavelet was carried out by the correlation coefficient method. The gamma component of a raw EEG signal was set as the target signal, and the wavelet transform was executed with the tuned mother wavelet and with standard mother wavelets. Simulation results with these wavelets are presented.
Automatic ECG analysis using principal component analysis and wavelet transformation
Khawaja, Antoun
2007-01-01
The main objective of this book is to analyse and detect small changes in ECG waves and complexes that indicate cardiac diseases and disorders. Detecting predisposition to Torsade de Pointes (TdP) by analysing the beat-to-beat variability in T-wave morphology is the core of this work. The second main topic is detecting small changes in the QRS complex and predicting future QRS complexes of patients. The last main topic is clustering similar ECG components into different groups.
Bouhlel, Jihéne; Jouan-Rimbaud Bouveresse, Delphine; Abouelkaram, Said; Baéza, Elisabeth; Jondreville, Catherine; Travel, Angélique; Ratel, Jérémy; Engel, Erwan; Rutledge, Douglas N
2018-02-01
The aim of this work is to compare a novel exploratory chemometrics method, Common Components Analysis (CCA), with Principal Components Analysis (PCA) and Independent Components Analysis (ICA). CCA consists of adapting the multi-block statistical method known as Common Components and Specific Weights Analysis (CCSWA or ComDim) by applying it to a single data matrix, with one variable per block. As an application, the three methods were applied to SPME-GC-MS volatolomic signatures of livers in an attempt to reveal volatile organic compound (VOC) markers of chicken exposure to different types of micropollutants. An application of CCA to the initial SPME-GC-MS data revealed a drift in the sample Scores along CC2, as a function of injection order, probably resulting from time-related evolution in the instrument. This drift was eliminated by orthogonalization of the data set with respect to CC2, and the resulting data were used as the orthogonalized data input into each of the three methods. Since the first step in CCA is to norm-scale all the variables, preliminary data scaling has no effect on the results, so CCA was applied only to the orthogonalized SPME-GC-MS data, while PCA and ICA were applied to the "orthogonalized", "orthogonalized and Pareto-scaled", and "orthogonalized and autoscaled" data. The comparison showed that the PCA results were highly dependent on the scaling of variables, contrary to ICA, where the data scaling did not have a strong influence. Nevertheless, for both PCA and ICA the clearest separations of exposed groups were obtained after autoscaling of variables. The main part of this work was to compare the CCA results using the orthogonalized data with those obtained with PCA and ICA applied to orthogonalized and autoscaled variables. The clearest separations of exposed chicken groups were obtained by CCA. CCA Loadings also clearly identified the variables contributing most to the Common Components giving the separations. The PCA Loadings did not
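The reported sensitivity of PCA to variable scaling can be reproduced on simple synthetic data (a sketch, not the SPME-GC-MS data): without autoscaling, a single high-variance variable dominates PC1, while after autoscaling the correlated structure drives it.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n = 200
z = rng.normal(0, 1, n)
# One variable on a much larger numeric scale, plus two correlated
# unit-scale variables carrying the real structure.
X = np.c_[rng.normal(0, 100, n),
          z + 0.3 * rng.normal(size=n),
          z + 0.3 * rng.normal(size=n)]

pca_raw = PCA(n_components=1).fit(X)
pca_auto = PCA(n_components=1).fit(StandardScaler().fit_transform(X))

# Unscaled: PC1 is dominated by the high-variance column.
assert np.abs(pca_raw.components_[0])[0] > 0.99
# Autoscaled: PC1 instead loads on the two correlated columns.
assert np.abs(pca_auto.components_[0]).max() < 0.9
```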
Partial differential equations with variable exponents variational methods and qualitative analysis
Radulescu, Vicentiu D
2015-01-01
Partial Differential Equations with Variable Exponents: Variational Methods and Qualitative Analysis provides researchers and graduate students with a thorough introduction to the theory of nonlinear partial differential equations (PDEs) with a variable exponent, particularly those of elliptic type. The book presents the most important variational methods for elliptic PDEs described by nonhomogeneous differential operators and containing one or more power-type nonlinearities with a variable exponent. The authors give a systematic treatment of the basic mathematical theory and constructive meth
Fatigue Reliability Analysis of Wind Turbine Cast Components
DEFF Research Database (Denmark)
Rafsanjani, Hesam Mirzaei; Sørensen, John Dalsgaard; Fæster, Søren
2017-01-01
The fatigue life of wind turbine cast components, such as the main shaft in a drivetrain, is generally determined by defects from the casting process. These defects may reduce the fatigue life and they are generally distributed randomly in components. The foundries, cutting facilities and test facilities can affect the verification of properties by testing. Hence, it is important to have a tool to identify which foundry, cutting and/or test facility produces components which, based on the relevant uncertainties, have the largest expected fatigue life or, alternatively, have the largest reliability... and to quantify the relevant uncertainties using available fatigue tests. Illustrative results are presented as obtained by statistical analysis of a large set of fatigue data for casted test components typically used for wind turbines. Furthermore, the SN curves (fatigue life curves based on applied stress...
Independent component analysis in non-hypothesis driven metabolomics
DEFF Research Database (Denmark)
Li, Xiang; Hansen, Jakob; Zhao, Xinjie
2012-01-01
In a non-hypothesis driven metabolomics approach plasma samples collected at six different time points (before, during and after an exercise bout) were analyzed by gas chromatography-time of flight mass spectrometry (GC-TOF MS). Since independent component analysis (ICA) does not need a priori...... information on the investigated process and moreover can separate statistically independent source signals with non-Gaussian distribution, we aimed to elucidate the analytical power of ICA for the metabolic pattern analysis and the identification of key metabolites in this exercise study. A novel approach...... based on descriptive statistics was established to optimize ICA model. In the GC-TOF MS data set the number of principal components after whitening and the number of independent components of ICA were optimized and systematically selected by descriptive statistics. The elucidated dominating independent...
Analysis methods for structure reliability of piping components
International Nuclear Information System (INIS)
Schimpfke, T.; Grebner, H.; Sievers, J.
2004-01-01
In the frame of the German reactor safety research program of the Federal Ministry of Economics and Labour (BMWA) GRS has started to develop an analysis code named PROST (PRObabilistic STructure analysis) for estimating the leak and break probabilities of piping systems in nuclear power plants. The long-term objective of this development is to provide failure probabilities of passive components for probabilistic safety analysis of nuclear power plants. Up to now the code can be used for calculating fatigue problems. The paper mentions the main capabilities and theoretical background of the present PROST development and presents some of the results of a benchmark analysis in the frame of the European project NURBIM (Nuclear Risk Based Inspection Methodologies for Passive Components). (orig.)
The analysis of multivariate group differences using common principal components
Bechger, T.M.; Blanca, M.J.; Maris, G.
2014-01-01
Although it is simple to determine whether multivariate group differences are statistically significant or not, such differences are often difficult to interpret. This article is about common principal components analysis as a tool for the exploratory investigation of multivariate group differences
Principal Component Analysis: Most Favourite Tool in Chemometrics
Indian Academy of Sciences (India)
Abstract. Principal component analysis (PCA) is the most commonly used chemometric technique. It is an unsupervised pattern-recognition technique. PCA has found applications in chemistry, biology, medicine and economics. The present work attempts to understand how PCA works and how we can interpret its results.
Scalable Robust Principal Component Analysis Using Grassmann Averages
DEFF Research Database (Denmark)
Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi
2016-01-01
In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortu...
Reliability Analysis of Fatigue Fracture of Wind Turbine Drivetrain Components
DEFF Research Database (Denmark)
Berzonskis, Arvydas; Sørensen, John Dalsgaard
2016-01-01
in the volume of the casted ductile iron main shaft, on the reliability of the component. The probabilistic reliability analysis conducted is based on fracture mechanics models. Additionally, the utilization of the probabilistic reliability for operation and maintenance planning and quality control is discussed....
Principal component analysis of image gradient orientations for face recognition
Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja
We introduce the notion of Principal Component Analysis (PCA) of image gradient orientations. As image data is typically noisy, and the noise is substantially different from Gaussian, traditional PCA of pixel intensities very often fails to reliably estimate the low-dimensional subspace of a given data
Adaptive tools in virtual environments: Independent component analysis for multimedia
DEFF Research Database (Denmark)
Kolenda, Thomas
2002-01-01
The thesis investigates the role of independent component analysis in the setting of virtual environments, with the purpose of finding properties that reflect human context. A general framework for performing unsupervised classification with ICA is presented in extension to the latent semantic in...... were compared to investigate computational differences and separation results. The ICA properties were finally implemented in a chat room analysis tool and briefly investigated for visualization of search engine results.
Lo, Yen-Li; Pan, Wen-Harn; Hsu, Wan-Lun; Chien, Yin-Chu; Chen, Jen-Yang; Hsu, Mow-Ming; Lou, Pei-Jen; Chen, I-How; Hildesheim, Allan; Chen, Chien-Jen
2016-01-01
Evidence on the association between dietary components, dietary patterns and nasopharyngeal carcinoma (NPC) is scarce. A major challenge is the high degree of correlation among dietary constituents. We aimed to identify a dietary pattern associated with NPC and to illustrate the dose-response relationship between the identified dietary pattern scores and the risk of NPC. Taking advantage of a matched NPC case-control study, data from a total of 319 incident cases and 319 matched controls were analyzed. The dietary pattern was derived by partial least squares discriminant analysis (PLS-DA) performed on energy-adjusted food frequencies derived from a 66-item food-frequency questionnaire. Odds ratios (ORs) and 95% confidence intervals (CIs) were estimated with multiple conditional logistic regression models, linking pattern scores and NPC risk. A high score on the PLS-DA derived pattern was characterized by high intakes of fruits, milk, fresh fish, vegetables, tea, and eggs, ordered by loading values. We observed that a one-unit increase in the score was associated with a significantly lower risk of NPC (ORadj = 0.73, 95% CI = 0.60-0.88) after controlling for potential confounders. Similar results were observed among Epstein-Barr virus seropositive subjects. An NPC-protective diet is indicated, with more phytonutrient-rich plant foods (fruits, vegetables), milk, other protein-rich foods (in particular fresh fish and eggs), and tea. This information may be used to design potential dietary regimens for NPC prevention.
Calculation of partial derivatives of thermophysical properties of sodium for safety analysis
International Nuclear Information System (INIS)
Shan Jianqiang; Qiu Suizhang; Zhu Jizhou; Zhang Guiqin
1997-01-01
In accordance with the requirements of LMFBR safety analysis, formulae for the partial derivatives of selected thermophysical properties of sodium, including single- and two-phase properties, are derived from the basic Maxwell relations and from formulae for the basic thermophysical properties of sodium that were validated abroad. The present study provides a theoretical basis for the safety analysis of LMFBRs.
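As an illustration of the kind of identities involved (the paper's own derivations are not reproduced in the abstract), one of the standard Maxwell relations ties a derived entropy derivative to a measurable pressure derivative, and the Clausius-Clapeyron relation gives the two-phase saturation-line derivative from tabulated properties:

```latex
% One of the four Maxwell relations (from the Helmholtz free energy):
\left(\frac{\partial P}{\partial T}\right)_{v} = \left(\frac{\partial s}{\partial v}\right)_{T}
% Two-phase region: Clausius-Clapeyron relation for the saturation line,
% with h_{fg} the latent heat and v_g - v_f the specific-volume change:
\frac{dP_{\mathrm{sat}}}{dT} = \frac{h_{fg}}{T\,(v_{g} - v_{f})}
```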
PEMBUATAN PERANGKAT LUNAK PENGENALAN WAJAH MENGGUNAKAN PRINCIPAL COMPONENTS ANALYSIS
Directory of Open Access Journals (Sweden)
Kartika Gunadi
2001-01-01
Full Text Available Face recognition is an important research area, and today many applications implement it. Through the development of techniques like Principal Components Analysis (PCA), computers can now outperform humans in many face recognition tasks, particularly those in which large databases of faces must be searched. Principal Components Analysis is used to reduce the facial image dimension into fewer variables, which are easier to observe and handle. Those variables are then fed into an artificial neural network trained with the backpropagation method to recognise the given facial image. The test results show that PCA can provide high face recognition accuracy. For the training faces, a correct identification of 100% was obtained. Of the network combinations tested, the best average correct identification for the test faces was 91.11%, while the worst was 46.67%.
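A minimal eigenface-style pipeline matching the abstract's description (PCA for dimension reduction, then a backpropagation-trained network) can be sketched as below; the "images" are synthetic stand-ins, not a real face database:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(6)
# Stand-in for flattened face images: 3 "identities", 20 images each,
# 256 pixels, where each identity has its own mean image.
n_id, per_id, n_pix = 3, 20, 256
means = rng.normal(0, 1, size=(n_id, n_pix))
X = np.vstack([m + 0.3 * rng.normal(size=(per_id, n_pix)) for m in means])
y = np.repeat(np.arange(n_id), per_id)

pca = PCA(n_components=10).fit(X)       # reduce 256 pixels -> 10 features
Z = pca.transform(X)

# Backpropagation-trained classifier on the reduced representation.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(Z, y)
assert clf.score(Z, y) == 1.0           # training faces: 100% correct
```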
Principal Component Analysis - A Powerful Tool in Computing Marketing Information
Directory of Open Access Journals (Sweden)
Constantin C.
2014-12-01
Full Text Available This paper presents an instrumental research regarding a powerful multivariate data analysis method that researchers can use to obtain valuable information for decision makers who need to solve the marketing problems a company faces. The literature stresses the need to avoid the multicollinearity phenomenon in multivariate analysis, and the capability of Principal Component Analysis (PCA) to reduce a large number of possibly intercorrelated variables to a small number of uncorrelated principal components. In this respect, the paper presents step by step the process of applying PCA in marketing research when a large number of naturally collinear variables are used.
Experimental modal analysis of components of the LHC experiments
Guinchard, M; Catinaccio, A; Kershaw, K; Onnela, A
2007-01-01
Experimental modal analysis of components of the LHC experiments is performed with the purpose of determining their fundamental frequencies, their damping, and the mode shapes of light and fragile detector components. This process makes it possible to confirm or replace finite element analysis in the case of complex structures (with cables and substructure coupling). It helps solve structural mechanical problems to improve operational stability, and it determines the acceleration specifications for transport operations. This paper describes the hardware and software equipment used to perform a modal analysis on particular structures such as a particle detector, and the curve-fitting method used to extract the results of the measurements. The paper also presents the main results obtained for the LHC experiments.
Reliability Analysis of Fatigue Failure of Cast Components for Wind Turbines
Directory of Open Access Journals (Sweden)
Hesam Mirzaei Rafsanjani
2015-04-01
Full Text Available Fatigue failure is one of the main failure modes for wind turbine drivetrain components made of cast iron. The wind turbine drivetrain consists of a variety of heavily loaded components, like the main shaft, the main bearings, the gearbox and the generator. The failure of each component will lead to substantial economic losses such as cost of lost energy production and cost of repairs. During the design lifetime, the drivetrain components are exposed to variable loads from winds and waves and other sources of loads that are uncertain and have to be modeled as stochastic variables. The types of loads are different for offshore and onshore wind turbines. Moreover, uncertainties about the fatigue strength play an important role in modeling and assessment of the reliability of the components. In this paper, a generic stochastic model for fatigue failure of cast iron components based on fatigue test data and a limit state equation for fatigue failure based on the SN-curve approach and Miner’s rule is presented. The statistical analysis of the fatigue data is performed using the Maximum Likelihood Method which also gives an estimate of the statistical uncertainties. Finally, illustrative examples are presented with reliability analyses depending on various stochastic models and partial safety factors.
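The SN-curve and Miner's-rule ingredients of the limit state can be sketched in a few lines; the SN parameters K and m and the load spectrum below are illustrative values, not the paper's fitted data:

```python
import numpy as np

def sn_cycles_to_failure(stress_range, K=1e13, m=3.0):
    """SN curve N = K * S^(-m); K and m here are illustrative values."""
    return K * stress_range**(-m)

def miners_damage(stress_ranges, cycle_counts):
    """Palmgren-Miner linear damage sum D = sum(n_i / N_i)."""
    N = sn_cycles_to_failure(np.asarray(stress_ranges, dtype=float))
    return float(np.sum(np.asarray(cycle_counts, dtype=float) / N))

# A toy load spectrum: stress ranges (MPa) and applied cycles.
D = miners_damage([80, 120, 160], [2e6, 5e5, 1e5])
assert D < 1.0        # damage sum below 1: fatigue failure not predicted
```

In the probabilistic setting, K, m, the loads, and a critical damage level all become stochastic variables, and the limit state g = Delta - D is evaluated for the failure probability.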
APPLICATION OF PRINCIPAL COMPONENT ANALYSIS TO RELAXOGRAPHIC IMAGES
International Nuclear Information System (INIS)
STOYANOVA, R.S.; OCHS, M.F.; BROWN, T.R.; ROONEY, W.D.; LI, X.; LEE, J.H.; SPRINGER, C.S.
1999-01-01
Standard analysis methods for processing inversion recovery MR images have traditionally used single-pixel techniques, in which each pixel is independently fit to an exponential recovery and spatial correlations in the data set are ignored. By analyzing the image as a complete dataset, improved error analysis and automatic segmentation can be achieved. Here, the authors apply principal component analysis (PCA) to a series of relaxographic images. This procedure decomposes the 3-dimensional data set into three separate images and corresponding recovery times. The three images are interpreted as spatial representations of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) content.
Kou, Jisheng; Sun, Shuyu
2016-01-01
A general diffuse interface model with a realistic equation of state (e.g. Peng-Robinson equation of state) is proposed to describe the multi-component two-phase fluid flow based on the principles of the NVT-based framework which is a latest
Sparse logistic principal components analysis for binary data
Lee, Seokho
2010-09-01
We develop a new principal components analysis (PCA) type dimension reduction method for binary data. Different from the standard PCA which is defined on the observed data, the proposed PCA is defined on the logit transform of the success probabilities of the binary observations. Sparsity is introduced to the principal component (PC) loading vectors for enhanced interpretability and more stable extraction of the principal components. Our sparse PCA is formulated as solving an optimization problem with a criterion function motivated from a penalized Bernoulli likelihood. A Majorization-Minimization algorithm is developed to efficiently solve the optimization problem. The effectiveness of the proposed sparse logistic PCA method is illustrated by application to a single nucleotide polymorphism data set and a simulation study. © Institute of Mathematical Statistics, 2010.
Components of Program for Analysis of Spectra and Their Testing
Directory of Open Access Journals (Sweden)
Ivan Taufer
2013-11-01
Full Text Available The spectral analysis of aqueous solutions of multi-component mixtures is used for the identification and distinguishing of individual components in the mixture and the subsequent determination of protonation constants and absorptivities of differently protonated particles in solution at steady state (Meloun and Havel 1985; Leggett 1985). Apart from that, the distribution diagrams are also determined, i.e. the concentration proportions of the individual components at different pH values. The spectra are measured with various concentrations of the basic components (one or several polyvalent weak acids or bases) and various pH values within the chosen range of wavelengths. The obtained absorbance response area has to be analyzed by non-linear regression using specialized algorithms. These algorithms have to meet certain requirements concerning the possibility of calculations and the level of outputs. A typical example is the SQUAD(84) program, which was gradually modified and extended; see, e.g., (Meloun et al. 1986; Meloun et al. 2012).
Directory of Open Access Journals (Sweden)
Rodrigo F Ramos
2011-12-01
Full Text Available CONTEXT: Despite the high incidence of gastroesophageal reflux disease (GERD) in the population, there is much controversy in this topic, especially regarding surgical treatment. The decision to use a total or a partial fundoplication in the treatment of GERD is still a challenge for many surgeons because of the limited evidence found in the literature. OBJECTIVE: To bring clearer evidence to the comparison between total and partial fundoplication. DATA SOURCES: A systematic review of the literature and a metaanalysis of randomized controlled trials accessed from the MEDLINE, LILACS, and Cochrane Controlled Trials databases were performed. The outcomes assessed were: dysphagia, inability to belch, bloating, recurrence of acid reflux, heartburn and esophagitis. For data analysis the odds ratio was used with its corresponding 95% confidence interval. Statistical heterogeneity in the results of the metaanalysis was assessed by calculating a test of heterogeneity. The software Review Manager 5 (Cochrane Collaboration) was utilized for data gathering and statistical analysis. Sensitivity analysis was applied using only trials that included follow-up over 2 years. RESULTS: Ten trials were included, with 1003 patients: 502 in the total fundoplication group and 501 in the partial fundoplication group. The outcomes dysphagia and inability to belch showed statistically significant differences (P = 0.00001) in favor of partial fundoplication. There was no statistically significant difference in outcomes related to treatment failure. There was no heterogeneity in the outcomes dysphagia and recurrence of acid reflux. CONCLUSION: Partial fundoplication has a lower incidence of obstructive side effects.
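The per-outcome statistic, an odds ratio with a Woolf-type 95% confidence interval, can be computed as below; the 2x2 counts are hypothetical, not taken from the included trials:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI from a 2x2 table:
    a = events in group 1, b = non-events in group 1,
    c = events in group 2, d = non-events in group 2."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf's log-OR standard error
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts for one obstructive side effect:
# 30/500 in the partial group vs 60/500 in the total group.
or_, lo, hi = odds_ratio_ci(30, 470, 60, 440)
assert lo < or_ < hi
assert hi < 1.0        # entire CI below 1: significantly favors group 1
```

A meta-analysis then pools such per-trial log-ORs with fixed- or random-effects weights, as Review Manager does.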
Optimization benefits analysis in production process of fabrication components
Prasetyani, R.; Rafsanjani, A. Y.; Rimantho, D.
2017-12-01
The determination of an optimal number of product combinations is important. The main problem at the part and service department of PT. United Tractors Pandu Engineering (abbreviated PT. UTPE) is optimizing the combination of fabrication component products (known as Liner Plates), which influences the profit obtained by the company. A Liner Plate is a fabrication component that serves as a protector of the core structure of heavy-duty attachments, such as the HD Vessel, HD Bucket, HD Shovel, and HD Blade. Liner plate sales from January to December 2016 fluctuated, and no direct conclusion could be drawn about the optimal production mix of these fabrication components. The optimal product combination can be achieved by calculating and plotting the amounts of production output and input appropriately. The method used in this study is linear programming with primal, dual, and sensitivity analysis, using QM software for Windows to obtain the optimal mix of fabrication components. At the optimal combination of components, PT. UTPE gains a profit increase of Rp. 105,285,000.00, for a total of Rp. 3,046,525,000.00 per month, with a total production of 71 units per product variant per month.
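The primal step of such a product-mix optimization can be sketched by enumerating the vertices of the feasible region of a small linear program. All profit coefficients and capacity limits below are illustrative placeholders, not PT. UTPE's actual figures:

```python
from itertools import combinations

# Hypothetical product-mix LP: maximize profit = 3*x1 + 5*x2
profit = (3.0, 5.0)
# Each constraint: a1*x1 + a2*x2 <= b (non-negativity written in <= form)
constraints = [
    (1.0, 0.0, 4.0),    # line 1 capacity
    (0.0, 2.0, 12.0),   # line 2 capacity
    (3.0, 2.0, 18.0),   # shared assembly capacity
    (-1.0, 0.0, 0.0),   # x1 >= 0
    (0.0, -1.0, 0.0),   # x2 >= 0
]

def feasible(x1, x2, eps=1e-9):
    return all(a1 * x1 + a2 * x2 <= b + eps for a1, a2, b in constraints)

# An LP optimum lies at a vertex: intersect every pair of constraint
# boundaries, keep the feasible points, take the best objective value.
best = None
for (a1, a2, b), (c1, c2, d) in combinations(constraints, 2):
    det = a1 * c2 - a2 * c1
    if abs(det) < 1e-12:
        continue                      # parallel boundaries: no vertex
    x1 = (b * c2 - a2 * d) / det      # Cramer's rule for the intersection
    x2 = (a1 * d - b * c1) / det
    if feasible(x1, x2):
        value = profit[0] * x1 + profit[1] * x2
        if best is None or value > best[0]:
            best = (value, x1, x2)

optimum, x1_opt, x2_opt = best        # -> 36.0 at (2, 6) for these numbers
```

A dual or sensitivity analysis would then examine the shadow prices of the binding constraints, which is the kind of output the QM software reports.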
Probabilistic structural analysis methods for select space propulsion system components
Millwater, H. R.; Cruse, T. A.
1989-01-01
The Probabilistic Structural Analysis Methods (PSAM) project developed at the Southwest Research Institute integrates state-of-the-art structural analysis techniques with probability theory for the design and analysis of complex large-scale engineering structures. An advanced, efficient software system (NESSUS) capable of performing complex probabilistic analysis has been developed. NESSUS contains a number of software components to perform probabilistic analysis of structures. These components include an expert system, a probabilistic finite element code, a probabilistic boundary element code and a fast probability integrator. The NESSUS software system is shown. An expert system is included to capture and utilize PSAM knowledge and experience. NESSUS/EXPERT is an interactive menu-driven expert system that provides information to assist in the use of the probabilistic finite element code NESSUS/FEM and the fast probability integrator (FPI). The expert system menu structure is summarized. The NESSUS system contains a state-of-the-art nonlinear probabilistic finite element code, NESSUS/FEM, to determine the structural response and sensitivities. A broad range of analysis capabilities and an extensive element library are present.
Fetal source extraction from magnetocardiographic recordings by dependent component analysis
Energy Technology Data Exchange (ETDEWEB)
Araujo, Draulio B de [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Barros, Allan Kardec [Department of Electrical Engineering, Federal University of Maranhao, Sao Luis, Maranhao (Brazil); Estombelo-Montesco, Carlos [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Zhao, Hui [Department of Medical Physics, University of Wisconsin, Madison, WI (United States); Filho, A C Roque da Silva [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Baffa, Oswaldo [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Wakai, Ronald [Department of Medical Physics, University of Wisconsin, Madison, WI (United States); Ohnishi, Noboru [Department of Information Engineering, Nagoya University (Japan)
2005-10-07
Fetal magnetocardiography (fMCG) has been extensively reported in the literature as a non-invasive, prenatal technique that can be used to monitor various functions of the fetal heart. However, fMCG signals often have low signal-to-noise ratio (SNR) and are contaminated by strong interference from the mother's magnetocardiogram signal. A promising, efficient tool for extracting signals, even under low SNR conditions, is blind source separation (BSS), or independent component analysis (ICA). Herein we propose an algorithm based on a variation of ICA, where the signal of interest is extracted using a time delay obtained from an autocorrelation analysis. We model the system using autoregression, and identify the signal component of interest from the poles of the autocorrelation function. We show that the method is effective in removing the maternal signal, and is computationally efficient. We also compare our results to more established ICA methods, such as FastICA.
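The paper's exact algorithm is not reproduced here, but the core idea, exploiting a time delay chosen from autocorrelation structure, can be sketched with an AMUSE-style separation that diagonalizes a time-lagged covariance after whitening. The signals and mixing matrix below are synthetic stand-ins, not real fMCG data:

```python
import numpy as np

def amuse(X, tau=2):
    """AMUSE-style blind separation: whiten the mixtures, then diagonalize
    the symmetrized covariance at lag tau. Works when the sources have
    distinct autocorrelations at that lag."""
    X = X - X.mean(axis=1, keepdims=True)
    C0 = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(C0)
    W = E @ np.diag(d ** -0.5) @ E.T          # whitening matrix
    Z = W @ X
    Ct = Z[:, :-tau] @ Z[:, tau:].T / (Z.shape[1] - tau)
    Ct = (Ct + Ct.T) / 2                      # symmetrize: real eigenvectors
    _, V = np.linalg.eigh(Ct)
    return V.T @ Z                            # sources, up to order and sign

# Synthetic stand-ins for maternal and fetal traces
t = np.linspace(0, 1, 2000)
s_mother = np.sin(2 * np.pi * 5 * t)
s_fetus = np.sign(np.sin(2 * np.pi * 23 * t))
S = np.vstack([s_mother, s_fetus])
A = np.array([[0.8, 0.4], [0.3, 0.9]])        # unknown sensor mixing
Y = amuse(A @ S)
```

Because the two waveforms have different autocorrelations at the chosen lag, the lagged-covariance eigenvectors recover the unmixing rotation.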
Robust LOD scores for variance component-based linkage analysis.
Blangero, J; Williams, J T; Almasy, L
2000-01-01
The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.
Constrained independent component analysis approach to nonobtrusive pulse rate measurements
Tsouri, Gill R.; Kyal, Survi; Dianat, Sohail; Mestha, Lalit K.
2012-07-01
Nonobtrusive pulse rate measurement using a webcam is considered. We demonstrate how state-of-the-art algorithms based on independent component analysis suffer from a sorting problem which hinders their performance, and propose a novel algorithm based on constrained independent component analysis to improve performance. We present how the proposed algorithm extracts a photoplethysmography signal and resolves the sorting problem. In addition, we perform a comparative study between the proposed algorithm and state-of-the-art algorithms over 45 video streams using a finger probe oximeter for reference measurements. The proposed algorithm provides improved accuracy: the root mean square error is decreased from 20.6 and 9.5 beats per minute (bpm) for existing algorithms to 3.5 bpm for the proposed algorithm. An error of 3.5 bpm is within the inaccuracy expected from the reference measurements. This implies that the proposed algorithm provides accuracy equal to that of the finger probe oximeter.
Saccenti, E.; Camacho, J.
2015-01-01
Principal component analysis is one of the most commonly used multivariate tools to describe and summarize data. Determining the optimal number of components in a principal component model is a fundamental problem in many fields of application. In this paper we compare the performance of several
Research on Air Quality Evaluation based on Principal Component Analysis
Wang, Xing; Wang, Zilin; Guo, Min; Chen, Wei; Zhang, Huan
2018-01-01
Economic growth has led to declining environmental capacity and deteriorating air quality. Air quality evaluation, as a foundation of environmental monitoring and air pollution control, has become increasingly important. Based on principal component analysis (PCA), this paper evaluates the air quality of a large city in the Beijing-Tianjin-Hebei Area over the past 10 years and identifies the influencing factors, in order to provide a reference for air quality management and air pollution control.
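A minimal sketch of the PCA machinery behind such an evaluation, on synthetic indicator data rather than the actual Beijing-Tianjin-Hebei measurements: compute the covariance eigendecomposition, keep enough components to explain 90% of the variance, and form component scores:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical indicator matrix: 2000 observations of 4 pollution indicators
# built from latent factors with very different scales, then rotated
latent = rng.standard_normal((2000, 4)) * [5.0, 2.0, 0.5, 0.1]
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
X = latent @ Q.T

Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(Xc) - 1)
evals, evecs = np.linalg.eigh(cov)
order = np.argsort(evals)[::-1]               # sort descending
evals, evecs = evals[order], evecs[:, order]

ratio = evals / evals.sum()                   # explained-variance ratios
k = int(np.searchsorted(np.cumsum(ratio), 0.90) + 1)
scores = Xc @ evecs[:, :k]                    # component scores for ranking
```

In an actual evaluation the loadings in `evecs[:, :k]` would be inspected to interpret each retained component as an "influencing factor".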
Efficient training of multilayer perceptrons using principal component analysis
International Nuclear Information System (INIS)
Bunzmann, Christoph; Urbanczik, Robert; Biehl, Michael
2005-01-01
A training algorithm for multilayer perceptrons is discussed and studied in detail, which relates to the technique of principal component analysis. The latter is performed with respect to a correlation matrix computed from the example inputs and their target outputs. Typical properties of the training procedure are investigated by means of a statistical physics analysis in models of learning regression and classification tasks. We demonstrate that the procedure requires by far fewer examples for good generalization than traditional online training. For networks with a large number of hidden units we derive the training prescription which achieves, within our model, the optimal generalization behavior
Pond, Mark J.; Errington, Jeffrey R.; Truskett, Thomas M.
2011-09-01
Partial pair-correlation functions of colloidal suspensions with continuous polydispersity can be challenging to characterize from optical microscopy or computer simulation data due to inadequate sampling. As a result, it is common to adopt an effective one-component description of the structure that ignores the differences between particle types. Unfortunately, whether this kind of simplified description preserves or averages out information important for understanding the behavior of the fluid depends on the degree of polydispersity and can be difficult to assess, especially when the corresponding multicomponent description of the pair correlations is unavailable for comparison. Here, we present a computer simulation study that examines the implications of adopting an effective one-component structural description of a polydisperse fluid. The square-well model that we investigate mimics key aspects of the experimental behavior of suspended colloids with short-range, polymer-mediated attractions. To characterize the partial pair-correlation functions and thermodynamic excess entropy of this system, we introduce a Monte Carlo sampling strategy appropriate for fluids with a large number of pseudo-components. The data from our simulations at high particle concentrations, as well as exact theoretical results for dilute systems, show how qualitatively different trends between structural order and particle attractions emerge from the multicomponent and effective one-component treatments, even with systems characterized by moderate polydispersity. We examine consequences of these differences for excess-entropy based scalings of shear viscosity, and we discuss how use of the multicomponent treatment reveals similarities between the corresponding dynamic scaling behaviors of attractive colloids and liquid water that the effective one-component analysis does not capture.
The analysis of linear partial differential operators I distribution theory and Fourier analysis
Hörmander, Lars
2003-01-01
The main change in this edition is the inclusion of exercises with answers and hints. This is meant to emphasize that this volume has been written as a general course in modern analysis on a graduate student level and not only as the beginning of a specialized course in partial differential equations. In particular, it could also serve as an introduction to harmonic analysis. Exercises are given primarily to the sections of general interest; there are none to the last two chapters. Most of the exercises are just routine problems meant to give some familiarity with standard use of the tools introduced in the text. Others are extensions of the theory presented there. As a rule rather complete though brief solutions are then given in the answers and hints. To a large extent the exercises have been taken over from courses or examinations given by Anders Melin or myself at the University of Lund. I am grateful to Anders Melin for letting me use the problems originating from him and for numerous valuable comm...
Functional analytic methods in complex analysis and applications to partial differential equations
International Nuclear Information System (INIS)
Mshimba, A.S.A.; Tutschke, W.
1990-01-01
The volume contains 24 lectures given at the Workshop on Functional Analytic Methods in Complex Analysis and Applications to Partial Differential Equations held in Trieste, Italy, between 8-19 February 1988, at the ICTP. A separate abstract was prepared for each of these lectures. Refs and figs
Logical Specification and Analysis of Fault Tolerant Systems through Partial Model Checking
Gnesi, S.; Etalle, Sandro; Mukhopadhyay, S.; Lenzini, Gabriele; Lenzini, G.; Martinelli, F.; Roychoudhury, A.
2003-01-01
This paper presents a framework for a logical characterisation of fault tolerance and its formal analysis based on partial model checking techniques. The framework requires a fault tolerant system to be modelled using a formal calculus, here the CCS process algebra. To this aim we propose a uniform
Comparative study of various normal mode analysis techniques based on partial Hessians.
Ghysels, An; Van Speybroeck, Veronique; Pauwels, Ewald; Catak, Saron; Brooks, Bernard R; Van Neck, Dimitri; Waroquier, Michel
2010-04-15
Standard normal mode analysis becomes problematic for complex molecular systems, as a result of both the high computational cost and the excessive amount of information when the full Hessian matrix is used. Several partial Hessian methods have been proposed in the literature, yielding approximate normal modes. These methods aim at reducing the computational load and/or calculating only the relevant normal modes of interest in a specific application. Each method has its own (dis)advantages and application field but guidelines for the most suitable choice are lacking. We have investigated several partial Hessian methods, including the Partial Hessian Vibrational Analysis (PHVA), the Mobile Block Hessian (MBH), and the Vibrational Subsystem Analysis (VSA). In this article, we focus on the benefits and drawbacks of these methods, in terms of the reproduction of localized modes, collective modes, and the performance in partially optimized structures. We find that the PHVA is suitable for describing localized modes, that the MBH not only reproduces localized and global modes but also serves as an analysis tool of the spectrum, and that the VSA is mostly useful for the reproduction of the low frequency spectrum. These guidelines are illustrated with the reproduction of the localized amine-stretch, the spectrum of quinine and a bis-cinchona derivative, and the low frequency modes of the LAO binding protein. 2009 Wiley Periodicals, Inc.
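The simplest scheme, PHVA, amounts to holding all environment atoms fixed and diagonalizing only the mass-weighted subsystem block of the Hessian. A toy illustration on a four-atom harmonic chain with unit masses and force constants (eigenvalues are squared frequencies in these units):

```python
import numpy as np

# Hessian of a free 1-D chain of 4 unit masses joined by unit springs
H = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
masses = np.ones(4)
Mi = np.diag(masses ** -0.5)                  # inverse-sqrt mass matrix

# Full normal mode analysis: diagonalize the mass-weighted Hessian;
# the free chain has one zero (translational) mode
full_evals = np.linalg.eigvalsh(Mi @ H @ Mi)

# PHVA-style partial analysis: atoms 2 and 3 held fixed, so only the
# 2x2 subsystem block is mass-weighted and diagonalized
sub = [0, 1]
H_sub = H[np.ix_(sub, sub)]
Mi_sub = Mi[np.ix_(sub, sub)]
part_evals = np.linalg.eigvalsh(Mi_sub @ H_sub @ Mi_sub)
```

Frequencies follow as the square roots of the positive eigenvalues; fixing the environment removes the zero mode and approximates the modes localized on the subsystem.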
Dynamic analysis of the radiolysis of binary component system
International Nuclear Information System (INIS)
Katayama, M.; Trumbore, C.N.
1975-01-01
Dynamic analysis was performed on a variety of combinations of components in the radiolysis of binary systems, taking the hydrogen-producing reaction with hydrocarbon RH2 as an example. A definite rule could be established from this analysis, which is useful for revealing the reaction mechanism. The combinations were as follows: 1) both components A and B do not interact but serve only as diluents; 2) A is a diluent and B is a radical captor; 3) both A and B are radical captors; 4-1) A is a diluent and B decomposes after receiving the excitation energy of A; 4-2) A is a diluent and B does not decompose after receiving the excitation energy of A; 5-1) A is a radical captor and B decomposes after receiving the excitation energy of A; 5-2) A is a radical captor and B does not decompose after receiving the excitation energy of A; 6-1) both A and B decompose after receiving the excitation energy of the partner component; and 6-2) neither A nor B decomposes after receiving the excitation energy of the partner component. According to the dynamical analysis of the above nine combinations, it can be pointed out that when excitation transfer participates, phenomena apparently similar to radical capture are presented. It is therefore desirable to measure the yield of radicals experimentally with a system in which excitation transfer need not be considered in detail. An isotopically substituted mixture system is conceived as one such system. This analytical method was applied to systems containing cyclopentanone, such as the cyclopentanone-cyclohexane system. (Iwakiri, K.)
A component analysis of positive behaviour support plans.
McClean, Brian; Grey, Ian
2012-09-01
Positive behaviour support (PBS) emphasises multi-component interventions by natural intervention agents to help people overcome challenging behaviours. This paper investigates which components are most effective and which factors might mediate effectiveness. Sixty-one staff working with individuals with intellectual disability and challenging behaviours completed longitudinal competency-based training in PBS. Each staff participant conducted a functional assessment and developed and implemented a PBS plan for one prioritised individual. A total of 1,272 interventions were available for analysis. Measures of challenging behaviour were taken at baseline, after 6 months, and at an average of 26 months follow-up. There was a significant reduction in the frequency, management difficulty, and episodic severity of challenging behaviour over the duration of the study. Escape was identified by staff as the most common function, accounting for 77% of challenging behaviours. The most commonly implemented components of intervention were setting event changes and quality-of-life-based interventions. Only treatment acceptability was found to be related to decreases in behavioural frequency. No single intervention component was found to have a greater association with reductions in challenging behaviour.
Representation for dialect recognition using topographic independent component analysis
Wei, Qu
2004-10-01
In dialect speech recognition, the tone feature of a dialect is subject to changes in pitch frequency as well as in the length of the tone. It benefits recognition if a representation can be derived that accounts for these frequency and length changes of tone in an effective and meaningful way. In this paper, we propose a method for learning such a representation from a set of unlabeled speech sentences containing dialect features that vary in pitch frequency and time length. Topographic independent component analysis (TICA) is applied for unsupervised learning to produce an emergent result: a topographic matrix made up of basis components. The dialect speech is topographic in the following sense: the basis components, as the units of the speech, are ordered in the feature matrix such that components of one dialect are grouped along one axis and changes in time windows are accounted for along the other axis. This provides a meaningful set of basis vectors that may be used to construct dialect subspaces for dialect speech recognition.
Probabilistic methods in nuclear power plant component ageing analysis
International Nuclear Information System (INIS)
Simola, K.
1992-03-01
Nuclear power plant ageing research aims to ensure that plant safety and reliability are maintained at a desired level throughout the designed, and possibly extended, lifetime. In ageing studies, the reliability of components, systems and structures is evaluated taking into account the possible time-dependent decrease in reliability. The results of the analyses can be used in the evaluation of the remaining lifetime of components and in the development of preventive maintenance, testing and replacement programmes. The report discusses the use of probabilistic models in evaluating the ageing of nuclear power plant components. The principles of nuclear power plant ageing studies are described and examples of ageing management programmes in foreign countries are given. The use of time-dependent probabilistic models to evaluate the ageing of various components and structures is described and the application of the models is demonstrated with two case studies. In the case study of motor-operated closing valves the analyses are based on failure data obtained from a power plant. In the second example, environmentally assisted crack growth is modelled with a computer code developed in the United States, and the applicability of the model is evaluated on the basis of operating experience
Development of component failure data for seismic risk analysis
International Nuclear Information System (INIS)
Fray, R.R.; Moulia, T.A.
1981-01-01
This paper describes the quantification and utilization of seismic failure data used in the Diablo Canyon Seismic Risk Study. A single variable representation of earthquake severity that uses peak horizontal ground acceleration to characterize earthquake severity was employed. The use of a multiple variable representation would allow direct consideration of vertical accelerations and the spectral nature of earthquakes but would have added such complexity that the study would not have been feasible. Vertical accelerations and spectral nature were indirectly considered because component failure data were derived from design analyses, qualification tests and engineering judgment that did include such considerations. Two types of functions were used to describe component failure probabilities. Ramp functions were used for components, such as piping and structures, qualified by stress analysis. 'Anchor points' for ramp functions were selected by assuming a zero probability of failure at code allowable stress levels and unity probability of failure at ultimate stress levels. The accelerations corresponding to allowable and ultimate stress levels were determined by conservatively assuming a linear relationship between seismic stress and ground acceleration. Step functions were used for components, such as mechanical and electrical equipment, qualified by testing. Anchor points for step functions were selected by assuming a unity probability of failure above the qualification acceleration. (orig./HP)
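The two fragility shapes can be written down directly; the acceleration values in this sketch are arbitrary illustrations, not Diablo Canyon parameters:

```python
def ramp_failure_prob(a, a_allow, a_ult):
    """Ramp fragility for stress-analyzed components (piping, structures):
    zero failure probability at the code-allowable acceleration, rising
    linearly to one at the ultimate-stress acceleration."""
    if a <= a_allow:
        return 0.0
    if a >= a_ult:
        return 1.0
    return (a - a_allow) / (a_ult - a_allow)

def step_failure_prob(a, a_qual):
    """Step fragility for tested equipment (mechanical, electrical):
    failure assumed only above the qualification acceleration."""
    return 1.0 if a > a_qual else 0.0
```

Both functions take peak horizontal ground acceleration as the single severity variable, matching the single-variable representation used in the study.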
International Nuclear Information System (INIS)
McLoughlin, R.F.; Ryan, M.V.; Heuston, P.M.; McCoy, C.T.; Masterson, J.B.
1992-01-01
The purpose of this study was to construct and evaluate a statistical model for the quantitative analysis of computed tomographic brain images. Data were derived from standard sections in 34 normal studies. A model representing the intracranial pure tissue and partial volume areas, with allowance for beam hardening, was developed. The average percentage error in estimation of areas, derived from phantom tests using the model, was 28.47%. We conclude that our model is not sufficiently accurate to be of clinical use, even though allowance was made for partial volume and beam hardening effects. (author)
Reliability Analysis and Calibration of Partial Safety Factors for Redundant Structures
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard
1998-01-01
Redundancy is important to include in the design and analysis of structural systems. In most codes of practice redundancy is not directly taken into account. In the paper various definitions of a deterministic and reliability-based redundancy measure are reviewed. It is described how redundancy can be included in the safety system and how partial safety factors can be calibrated. An example is presented illustrating how redundancy is taken into account in the safety system in e.g. the Danish codes. The example shows how partial safety factors can be calibrated to comply with the safety level...
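Calibration of partial safety factors rests on evaluating a reliability index for a limit state. A minimal sketch for the limit state g = R - S with normally distributed resistance R and load S (all distribution parameters hypothetical), checked against Monte Carlo:

```python
from statistics import NormalDist
import random

# Hypothetical limit state g = R - S: resistance minus load effect
mu_R, sd_R = 10.0, 1.0
mu_S, sd_S = 6.0, 1.5

# For independent normals the reliability index has a closed form
beta = (mu_R - mu_S) / (sd_R ** 2 + sd_S ** 2) ** 0.5
pf_exact = NormalDist().cdf(-beta)            # failure probability P(g < 0)

# Monte Carlo check of the same failure probability
random.seed(1)
n = 200_000
fails = sum(random.gauss(mu_R, sd_R) - random.gauss(mu_S, sd_S) < 0
            for _ in range(n))
pf_mc = fails / n

# A code calibration would adjust the partial safety factors applied to
# characteristic values of R and S until beta reaches its target level.
```

For a redundant system the limit state would be a system event (e.g. failure of several members), but the calibration loop has the same shape.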
Schiesser, William E
2014-01-01
Features a solid foundation of mathematical and computational tools to formulate and solve real-world PDE problems across various fields With a step-by-step approach to solving partial differential equations (PDEs), Differential Equation Analysis in Biomedical Science and Engineering: Partial Differential Equation Applications with R successfully applies computational techniques for solving real-world PDE problems that are found in a variety of fields, including chemistry, physics, biology, and physiology. The book provides readers with the necessary knowledge to reproduce and extend the com
Integrative sparse principal component analysis of gene expression data.
Liu, Mengque; Fan, Xinyan; Fang, Kuangnan; Zhang, Qingzhao; Ma, Shuangge
2017-12-01
In the analysis of gene expression data, dimension reduction techniques have been extensively adopted. The most popular one is perhaps the PCA (principal component analysis). To generate more reliable and more interpretable results, the SPCA (sparse PCA) technique has been developed. With the "small sample size, high dimensionality" characteristic of gene expression data, the analysis results generated from a single dataset are often unsatisfactory. Under contexts other than dimension reduction, integrative analysis techniques, which jointly analyze the raw data of multiple independent datasets, have been developed and shown to outperform "classic" meta-analysis and other multidatasets techniques and single-dataset analysis. In this study, we conduct integrative analysis by developing the iSPCA (integrative SPCA) method. iSPCA achieves the selection and estimation of sparse loadings using a group penalty. To take advantage of the similarity across datasets and generate more accurate results, we further impose contrasted penalties. Different penalties are proposed to accommodate different data conditions. Extensive simulations show that iSPCA outperforms the alternatives under a wide spectrum of settings. The analysis of breast cancer and pancreatic cancer data further shows iSPCA's satisfactory performance. © 2017 WILEY PERIODICALS, INC.
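iSPCA itself is not reproduced here, but its SPCA building block, sparse loadings from a penalized eigenproblem, can be sketched as soft-thresholded power iteration on a single synthetic dataset. The penalty value and data are illustrative:

```python
import numpy as np

def sparse_loading(X, lam=0.3, iters=200):
    """First sparse principal loading via soft-thresholded power iteration;
    a simplified stand-in for penalized SPCA (lam is the sparsity penalty)."""
    Xc = X - X.mean(axis=0)
    C = Xc.T @ Xc / (len(Xc) - 1)
    v = np.ones(C.shape[1]) / np.sqrt(C.shape[1])
    for _ in range(iters):
        u = C @ v
        u = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft threshold
        norm = np.linalg.norm(u)
        if norm == 0:
            break
        v = u / norm
    return v

# Synthetic "expression" data: one latent factor drives the first 3 of 10 genes
rng = np.random.default_rng(1)
z = rng.standard_normal(300)
X = 0.5 * rng.standard_normal((300, 10))
X[:, :3] += 2.0 * z[:, None]

v = sparse_loading(X)   # loading concentrated on the 3 driven genes
```

The integrative step of iSPCA would couple such loadings across datasets with group and contrasted penalties, which this single-dataset sketch omits.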
Source Signals Separation and Reconstruction Following Principal Component Analysis
Directory of Open Access Journals (Sweden)
WANG Cheng
2014-02-01
Full Text Available For the problem of separating and reconstructing source signals from observed signals, the physical significance of the blind source separation model and independent component analysis is not very clear, and its solution is not unique. To address these disadvantages, a new linear instantaneous mixing model and a novel method for separating and reconstructing source signals from observed signals based on principal component analysis (PCA) are put forward. The assumption of this new model is that the source signals are statistically uncorrelated rather than independent, which differs from the traditional blind source separation model. A one-to-one relationship between the linear instantaneous mixing matrix of the new model and the linear compound matrix of PCA, and a one-to-one relationship between the uncorrelated source signals and the principal components, are demonstrated using the concept of a linear separation matrix and the uncorrelatedness of the source signals. Based on this theoretical link, the problem of separating and reconstructing source signals is then changed into PCA of the observed signals. The theoretical derivation and numerical simulation results show that, in spite of Gaussian measurement noise, both the waveform and the amplitude information of an uncorrelated source signal can be separated and reconstructed by PCA when the linear mixing matrix is column orthogonal and normalized; only the waveform information can be separated and reconstructed by PCA when the linear mixing matrix is column orthogonal but not normalized; and an uncorrelated source signal cannot be separated and reconstructed by PCA when the mixing matrix is not column orthogonal.
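The claimed correspondence can be checked numerically: with uncorrelated sources of distinct variances and a column-orthogonal, normalized mixing matrix (a rotation, in this synthetic sketch), the principal components recover the source waveforms up to order and sign:

```python
import numpy as np

t = np.linspace(0, 1, 4000)
# Uncorrelated sources; distinct variances make the PCA rotation unique
s1 = 3.0 * np.sin(2 * np.pi * 7 * t)
s2 = np.sign(np.sin(2 * np.pi * 11 * t))
S = np.vstack([s1, s2])

theta = 0.6                                      # arbitrary mixing angle
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # column orthogonal, normalized
X = A @ S                                        # observed mixtures

Xc = X - X.mean(axis=1, keepdims=True)
_, E = np.linalg.eigh(np.cov(Xc))                # eigenvectors of covariance
Y = E.T @ Xc                                     # principal components
```

With a non-orthogonal mixing matrix the covariance eigenvectors no longer coincide with the mixing directions, which is exactly the failure case the abstract describes.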
International Nuclear Information System (INIS)
Boccard, Julien; Rudaz, Serge
2016-01-01
Many experimental factors may have an impact on chemical or biological systems. A thorough investigation of the potential effects and interactions between the factors is made possible by rationally planning the trials using systematic procedures, i.e. design of experiments. However, assessing factors' influences often remains a challenging task when dealing with hundreds to thousands of correlated variables, whereas only a limited number of samples is available. In that context, most of the existing strategies involve the ANOVA-based partitioning of sources of variation and the separate analysis of ANOVA submatrices using multivariate methods, to account for both the intrinsic characteristics of the data and the study design. However, these approaches lack the ability to summarise the data using a single model and remain somewhat limited for detecting and interpreting subtle perturbations hidden in complex Omics datasets. In the present work, a supervised multiblock algorithm based on the Orthogonal Partial Least Squares (OPLS) framework is proposed for the joint analysis of ANOVA submatrices. This strategy has several advantages: (i) the evaluation of a unique multiblock model accounting for all sources of variation; (ii) the computation of a robust estimator (goodness of fit) for assessing the ANOVA decomposition reliability; (iii) the investigation of an effect-to-residuals ratio to quickly evaluate the relative importance of each effect and (iv) an easy interpretation of the model with appropriate outputs. Case studies from metabolomics and transcriptomics, highlighting the ability of the method to handle Omics data obtained from fixed-effects full factorial designs, are proposed for illustration purposes. Signal variations are easily related to main effects or interaction terms, while relevant biochemical information can be derived from the models. - Highlights: • A new method is proposed for the analysis of Omics data generated using design of experiments
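The ANOVA-based partitioning that such multiblock methods start from, together with an effect-to-residuals ratio, can be illustrated on a simulated two-group design. All data are synthetic, and the full OPLS step is beyond this sketch:

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical Omics-like matrix: 2 treatment groups x 10 samples, 50 variables
groups = np.repeat([0, 1], 10)
X = rng.standard_normal((20, 50))
X[groups == 1, :5] += 2.0            # treatment shifts the first 5 variables

grand = X.mean(axis=0)
Xc = X - grand
# ANOVA partition: the effect matrix repeats each group's mean profile,
# the residual matrix holds the within-group variation
effect = np.vstack([Xc[groups == g].mean(axis=0) for g in groups])
resid = Xc - effect

ss = lambda M: float((M ** 2).sum())  # sum of squares
ratio = ss(effect) / ss(resid)        # effect-to-residuals importance measure
```

The submatrices `effect` and `resid` are orthogonal, so their sums of squares add up exactly to that of the centred data; a multiblock model would then analyse these submatrices jointly rather than one at a time.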
Prestudy - Development of trend analysis of component failure
International Nuclear Information System (INIS)
Poern, K.
1995-04-01
The Bayesian trend analysis model that has been used for the computation of initiating event intensities (I-book) is based on the number of events that have occurred during consecutive time intervals. The model itself is a Poisson process with time-dependent intensity. For the analysis of aging it is often more relevant to use times between failures for a given component as input, where by 'time' is meant a quantity that best characterizes the age of the component (calendar time, operating time, number of activations etc). Therefore, it has been considered necessary to extend the model and the computer code to allow trend analysis of times between events, and also of several sequences of times between events. This report describes this model extension as well as an application on an introductory ageing analysis of centrifugal pumps defined in Table 5 of the T-book. The application in turn directs the attention to the need for further development of both the trend model and the data base. Figs
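For times between failures, a common parametric trend model is the power-law (Crow-AMSAA) process. A sketch of its maximum-likelihood trend estimate on simulated failure times (the T-book pump data are not reproduced here), where beta greater than one indicates ageing:

```python
import math
import random

def powerlaw_trend_mle(times, T):
    """MLE for a power-law NHPP with intensity w(t) = lam*beta*t**(beta-1),
    observed over (0, T]. beta > 1: worsening trend (ageing);
    beta < 1: improving trend; beta = 1: constant intensity."""
    n = len(times)
    beta = n / sum(math.log(T / t) for t in times)
    lam = n / T ** beta
    return beta, lam

# Simulate failure times: conditional on n events in (0, T], power-law
# process arrival times are i.i.d. with CDF (t/T)**beta
random.seed(3)
T, true_beta, n = 1000.0, 2.0, 500
times = sorted(T * random.random() ** (1 / true_beta) for _ in range(n))

beta_hat, lam_hat = powerlaw_trend_mle(times, T)
```

The "time" fed to such a model should be whichever quantity best characterizes component age (calendar time, operating time, number of activations), as the report notes; a Bayesian treatment would place a prior on beta instead of using the point estimate.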
An analysis of the nucleon spectrum from lattice partially-quenched QCD
Energy Technology Data Exchange (ETDEWEB)
Armour, W. [Swansea University, Swansea, SA2 8PP, Wales, U.K.; Allton, C. R. [Swansea University, Swansea, SA2 8PP, Wales, U.K.; Leinweber, Derek B. [Univ. of Adelaide, SA (Australia); Thomas, Anthony W. [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); College of William and Mary, Williamsburg, VA (United States); Young, Ross D. [Argonne National Lab. (ANL), Argonne, IL (United States)
2010-09-01
The chiral extrapolation of the nucleon mass, Mn, is investigated using data coming from 2-flavour partially-quenched lattice simulations. The leading one-loop corrections to the nucleon mass are derived for partially-quenched QCD. A large sample of lattice results from the CP-PACS Collaboration is analysed, with explicit corrections for finite lattice spacing artifacts. The extrapolation is studied using finite range regularised chiral perturbation theory. The analysis also provides a quantitative estimate of the leading finite volume corrections. It is found that the discretisation, finite-volume and partial quenching effects can all be very well described in this framework, producing an extrapolated value of Mn in agreement with experiment. This procedure is also compared with extrapolations based on polynomial forms, where the results are less encouraging.
Aeromagnetic Compensation Algorithm Based on Principal Component Analysis
Directory of Open Access Journals (Sweden)
Peilin Wu
2018-01-01
Full Text Available Aeromagnetic exploration is an important exploration method in geophysics. The data are typically measured by an optically pumped magnetometer mounted on an aircraft, but any aircraft produces significant levels of magnetic interference. Therefore, aeromagnetic compensation is important in aeromagnetic exploration. However, multicollinearity of the aeromagnetic compensation model degrades the performance of the compensation. To address this issue, a novel aeromagnetic compensation method based on principal component analysis is proposed. Using the algorithm, the correlation in the feature matrix is eliminated and the principal components are used to construct the hyperplane that compensates the platform-generated magnetic fields. The algorithm was tested using a helicopter, and the obtained improvement ratio is 9.86. The compensation quality is almost the same as, or slightly better than, that of ridge regression. The validity of the proposed method was experimentally demonstrated.
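As an illustration of the idea (not the authors' exact algorithm), the sketch below decorrelates a hypothetical interference feature matrix with PCA and fits the interference on the resulting orthogonal scores, sidestepping the multicollinearity that plagues a direct least-squares fit. The feature construction is invented for the example:

```python
import numpy as np

def pca_compensate(features, measured, n_components):
    """Remove platform interference via a PCA-decorrelated regression (sketch).

    features : (n_samples, n_features) interference model terms (collinear)
    measured : (n_samples,) total-field readings
    """
    # Standardize, then decorrelate the feature matrix with PCA
    X = (features - features.mean(axis=0)) / features.std(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[:n_components].T          # orthogonal PC scores
    # Least-squares fit of the interference on the decorrelated scores
    coef, *_ = np.linalg.lstsq(scores, measured - measured.mean(), rcond=None)
    interference = scores @ coef
    return measured - interference            # compensated signal
```

Because the PC scores are mutually orthogonal, the normal equations are well conditioned even when the raw feature columns are nearly collinear.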
Fast principal component analysis for stacking seismic data
Wu, Juan; Bai, Min
2018-04-01
Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and enhance the principal components to a great extent. Traditional average-based seismic stacking methods cannot obtain optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data that is not sensitive to the noise level. Considering the computational bottleneck of the classic PCA algorithm in processing massive seismic data, we propose an efficient PCA algorithm to make the proposed method readily applicable for industrial applications. Two numerically designed examples and one real seismic dataset are used to demonstrate the performance of the presented method.
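A minimal way to realize a PCA stack, assuming pre-aligned traces and using a plain SVD rather than the paper's fast algorithm: traces are weighted by their loading on the first principal component, so noisy traces that correlate poorly with the dominant signal contribute less than in a uniform average.

```python
import numpy as np

def pca_stack(traces):
    """Stack traces weighted by the first principal component (sketch).

    traces : (n_traces, n_samples) pre-aligned seismic traces
    """
    X = traces - traces.mean(axis=1, keepdims=True)
    # First left singular vector = per-trace weights of the dominant component
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    w = U[:, 0]
    w = w * np.sign(w.sum())          # resolve the SVD sign ambiguity
    w = np.clip(w, 0, None)           # keep only positively correlated traces
    w = w / w.sum()                   # normalized, nonnegative weights
    return w @ traces
```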
Demixed principal component analysis of neural population data.
Kobak, Dmitry; Brendel, Wieland; Constantinidis, Christos; Feierstein, Claudia E; Kepecs, Adam; Mainen, Zachary F; Qi, Xue-Lian; Romo, Ranulfo; Uchida, Naoshige; Machens, Christian K
2016-04-12
Neurons in higher cortical areas, such as the prefrontal cortex, are often tuned to a variety of sensory and motor variables, and are therefore said to display mixed selectivity. This complexity of single neuron responses can obscure what information these areas represent and how it is represented. Here we demonstrate the advantages of a new dimensionality reduction technique, demixed principal component analysis (dPCA), that decomposes population activity into a few components. In addition to systematically capturing the majority of the variance of the data, dPCA also exposes the dependence of the neural representation on task parameters such as stimuli, decisions, or rewards. To illustrate our method we reanalyze population data from four datasets comprising different species, different cortical areas and different experimental tasks. In each case, dPCA provides a concise way of visualizing the data that summarizes the task-dependent features of the population response in a single figure.
Multigroup Moderation Test in Generalized Structured Component Analysis
Directory of Open Access Journals (Sweden)
Angga Dwi Mulyanto
2016-05-01
Full Text Available Generalized Structured Component Analysis (GSCA) is an alternative method for structural modeling using alternating least squares. GSCA can be used for complex analyses, including multigroup analysis. GSCA can be run with a free software package called GeSCA, but GeSCA provides no multigroup moderation test for comparing effects between groups. In this research we propose using the T test from PLS for multigroup moderation testing in GSCA. The T test only requires the sample size, the estimated path coefficient, and the standard error of each group, all of which are already available in the GeSCA output, and the formula is simple, so the analysis does not take the user long.
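One common parametric form of the multigroup t statistic used in PLS (the pooled-standard-error variant often attributed to Chin) can be sketched as follows; whether this is the exact formula the authors adopt is an assumption:

```python
import math

def multigroup_t(b1, se1, n1, b2, se2, n2):
    """Parametric multigroup comparison of two path coefficients (sketch).

    b1, b2   : estimated path coefficients of group 1 and group 2
    se1, se2 : their bootstrap standard errors
    n1, n2   : group sample sizes
    Returns (t statistic, degrees of freedom); compare |t| against the
    Student-t critical value at df.
    """
    df = n1 + n2 - 2
    # Pooled variance of the coefficient difference
    pooled = ((n1 - 1) ** 2 * se1 ** 2 + (n2 - 1) ** 2 * se2 ** 2) / df
    t = (b1 - b2) / (math.sqrt(pooled) * math.sqrt(1 / n1 + 1 / n2))
    return t, df
```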
International Nuclear Information System (INIS)
Wollmerstädt, J.; Sharma, S.C.; Marsch, M.
1992-01-01
Means for quantifying dendromass components of spruce stands are discussed, and partial models for the modification of radiation and wind by a pure spruce stand were developed. By means of a sampling procedure, the needle dry mass and the needle-free branchwood dry mass of individual trees are recorded. Using the relationship between branch basal diameter and needle or branchwood dry mass, the total needle and branchwood dry mass of trees is estimated. Based on that, stand or regional parameters for the allometric function between diameter at breast height and needle or branchwood dry mass can be determined for defined H/D clusters. Published data from various sources were used in this paper. The lowest coefficients of determination were found in H/D cluster 120 (H/D values over 114); therefore, further differentiation within this range seems necessary. For assimilation models, needle dry mass should be quantified separately by needle age class and by the morphological characteristics of the needles. The basis for the estimate of tree-bole volume is the relationship between H/D value and oven-dry weight. There are unresolved methodological problems in quantifying the subterranean dendromass (e.g. the dynamics of fine roots); this also requires considerable effort. Spatial structure was likewise described by allometric functions (crown length and crown cover in relation to diameter at breast height). For the partial model expressing wind modification by the stand, standardized wind profiles related to crown canopy density were used. The modification of radiation by the stand is closely related to the vertical needle mass distribution (cumulative curves). These two partial models should be considered an approach to describing the modifying effect of the stocking.
Determination of the optimal number of components in independent components analysis.
Kassouf, Amine; Jouan-Rimbaud Bouveresse, Delphine; Rutledge, Douglas N
2018-03-01
Independent components analysis (ICA) may be considered as one of the most established blind source separation techniques for the treatment of complex data sets in analytical chemistry. Like other similar methods, the determination of the optimal number of latent variables, in this case, independent components (ICs), is a crucial step before any modeling. Therefore, validation methods are required in order to decide about the optimal number of ICs to be used in the computation of the final model. In this paper, three new validation methods are formally presented. The first one, called Random_ICA, is a generalization of the ICA_by_blocks method. Its specificity resides in the random way of splitting the initial data matrix into two blocks, and then repeating this procedure several times, giving a broader perspective for the selection of the optimal number of ICs. The second method, called KMO_ICA_Residuals is based on the computation of the Kaiser-Meyer-Olkin (KMO) index of the transposed residual matrices obtained after progressive extraction of ICs. The third method, called ICA_corr_y, helps to select the optimal number of ICs by computing the correlations between calculated proportions and known physico-chemical information about samples, generally concentrations, or between a source signal known to be present in the mixture and the signals extracted by ICA. These three methods were tested using varied simulated and experimental data sets and compared, when necessary, to ICA_by_blocks. Results were relevant and in line with expected ones, proving the reliability of the three proposed methods. Copyright © 2017 Elsevier B.V. All rights reserved.
Robustness analysis of bogie suspension components Pareto optimised values
Mousavi Bideleh, Seyed Milad
2017-08-01
The bogie suspension system of high speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find the Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, robustness analysis of the bogie dynamic response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of the bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, as well as yaw damping are considered as the five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve the computational efficiency. The results showed that the dynamic response of the vehicle with wear/comfort Pareto optimised values of the bogie suspension is robust against uncertainties in the design parameters, and the probability of failure is small for parameter uncertainties with COV up to 0.1.
Sparse principal component analysis in medical shape modeling
Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus
2006-03-01
Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
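The simple-thresholding baseline that the article compares against (zeroing small PCA loadings) can be sketched in a few lines; note this is the baseline, not the SPCA algorithm itself:

```python
import numpy as np

def thresholded_pca(X, n_components, threshold):
    """PCA followed by hard thresholding of small loadings (baseline sketch).

    Returns a (n_vars, n_components) sparse loading matrix with each
    surviving loading vector renormalized to unit length.
    """
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    loadings = Vt[:n_components].T                    # (n_vars, n_components)
    sparse = np.where(np.abs(loadings) >= threshold, loadings, 0.0)
    norms = np.linalg.norm(sparse, axis=0)
    norms[norms == 0] = 1.0                           # avoid division by zero
    return sparse / norms
```

Unlike true SPCA, thresholding ignores the correlation between dropped and kept variables, which is precisely the weakness the article's algorithm addresses.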
Protein structure similarity from principle component correlation analysis
Directory of Open Access Journals (Sweden)
Chou James
2006-01-01
Full Text Available Abstract Background Owing to rapid expansion of protein structure databases in recent years, methods of structure comparison are becoming increasingly effective and important in revealing novel information on functional properties of proteins and their roles in the grand scheme of evolutionary biology. Currently, the structural similarity between two proteins is measured by the root-mean-square-deviation (RMSD in their best-superimposed atomic coordinates. RMSD is the golden rule of measuring structural similarity when the structures are nearly identical; it, however, fails to detect the higher order topological similarities in proteins evolved into different shapes. We propose new algorithms for extracting geometrical invariants of proteins that can be effectively used to identify homologous protein structures or topologies in order to quantify both close and remote structural similarities. Results We measure structural similarity between proteins by correlating the principle components of their secondary structure interaction matrix. In our approach, the Principle Component Correlation (PCC analysis, a symmetric interaction matrix for a protein structure is constructed with relationship parameters between secondary elements that can take the form of distance, orientation, or other relevant structural invariants. When using a distance-based construction in the presence or absence of encoded N to C terminal sense, there are strong correlations between the principle components of interaction matrices of structurally or topologically similar proteins. Conclusion The PCC method is extensively tested for protein structures that belong to the same topological class but are significantly different by RMSD measure. The PCC analysis can also differentiate proteins having similar shapes but different topological arrangements. Additionally, we demonstrate that when using two independently defined interaction matrices, comparison of their maximum
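The core idea, correlating the leading eigenvectors of two structures' interaction matrices, can be sketched as below. Using raw inter-element distance matrices and assuming a one-to-one element correspondence are simplifications of the paper's construction:

```python
import numpy as np

def pcc_similarity(coords_a, coords_b, k=3):
    """Mean absolute correlation of the top-k eigenvectors of two distance
    matrices (sketch of the PCC idea; element correspondence assumed)."""
    def top_eigvecs(coords, k):
        # Symmetric interaction matrix: pairwise Euclidean distances
        d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
        vals, vecs = np.linalg.eigh(d)
        order = np.argsort(np.abs(vals))[::-1]        # by dominance
        return vecs[:, order[:k]]
    Va, Vb = top_eigvecs(coords_a, k), top_eigvecs(coords_b, k)
    # Eigenvector sign is arbitrary, so take absolute correlations
    corrs = [abs(np.corrcoef(Va[:, i], Vb[:, i])[0, 1]) for i in range(k)]
    return float(np.mean(corrs))
```

Because the distance matrix is invariant under rigid rotation, two structures that differ only by orientation and small perturbations score near 1 without any superposition step, unlike RMSD.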
Principal component analysis of FDG PET in amnestic MCI
International Nuclear Information System (INIS)
Nobili, Flavio; Girtler, Nicola; Brugnolo, Andrea; Dessi, Barbara; Rodriguez, Guido; Salmaso, Dario; Morbelli, Silvia; Piccardo, Arnoldo; Larsson, Stig A.; Pagani, Marco
2008-01-01
The purpose of the study is to evaluate the combined accuracy of episodic memory performance and 18 F-FDG PET in identifying patients with amnestic mild cognitive impairment (aMCI) converting to Alzheimer's disease (AD), aMCI non-converters, and controls. Thirty-three patients with aMCI and 15 controls (CTR) were followed up for a mean of 21 months. Eleven patients developed AD (MCI/AD) and 22 remained with aMCI (MCI/MCI). 18 F-FDG PET volumetric regions of interest underwent principal component analysis (PCA) that identified 12 principal components (PC), expressed by coarse component scores (CCS). Discriminant analysis was performed using the significant PCs and episodic memory scores. PCA highlighted relative hypometabolism in PC5, including bilateral posterior cingulate and left temporal pole, and in PC7, including the bilateral orbitofrontal cortex, both in MCI/MCI and MCI/AD vs CTR. PC5 itself plus PC12, including the left lateral frontal cortex (LFC: BAs 44, 45, 46, 47), were significantly different between MCI/AD and MCI/MCI. By a three-group discriminant analysis, CTR were more accurately identified by PET-CCS + delayed recall score (100%), MCI/MCI by PET-CCS + either immediate or delayed recall scores (91%), while MCI/AD was identified by PET-CCS alone (82%). PET increased by 25% the correct allocations achieved by memory scores, while memory scores increased by 15% the correct allocations achieved by PET. Combining memory performance and 18 F-FDG PET yielded a higher accuracy than each single tool in identifying CTR and MCI/MCI. The PC containing bilateral posterior cingulate and left temporal pole was the hallmark of MCI/MCI patients, while the PC including the left LFC was the hallmark of conversion to AD. (orig.)
DEFF Research Database (Denmark)
Ilic, C; Chadwick, A; Helm-Petersen, Jacob
2000-01-01
Recent studies of advanced directional analysis techniques have mainly centred on incident wave fields. In the study of coastal structures, however, partially reflective wave fields are commonly present. In the near structure field, phase locked methods can be successfully applied; in the far field, non-phase locked methods are more appropriate. In this paper, the accuracy of two non-phase locked methods of directional analysis, the maximum likelihood method (MLM) and the Bayesian directional method (BDM), has been quantitatively evaluated using numerical simulations for the case of multidirectional waves with partial reflections. It is shown that the results are influenced by the ratio of the distance from the reflector (L) to the length of the time series (S) used in the spectral analysis. Both methods are found to be capable of determining the incident and reflective wave fields when L/S > 0…
Nuclear analysis techniques as a component of thermoluminescence dating
Energy Technology Data Exchange (ETDEWEB)
Prescott, J.R.; Hutton, J.T.; Habermehl, M.A. [Adelaide Univ., SA (Australia); Van Moort, J. [Tasmania Univ., Sandy Bay, TAS (Australia)
1996-12-31
In luminescence dating, an age is found by first measuring the dose accumulated since the event being dated, then dividing by the annual dose rate. Analyses of minor and trace elements performed by nuclear techniques have long formed an essential component of dating. Results from some Australian sites are reported to illustrate the application of nuclear techniques of analysis in this context. In particular, a variety of methods for finding dose rates are compared, an example is given of a site where radioactive disequilibrium is significant, and a brief summary is given of a problem which was not resolved by nuclear techniques. 5 refs., 2 tabs.
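The age equation stated in the first sentence is simply a dose divided by a dose rate; in the units conventional for luminescence dating (a hedged choice here, not stated in the abstract) it reads:

```python
def luminescence_age(paleodose_gy, dose_rate_gy_per_ka):
    """Age (ka) = accumulated dose (Gy) / annual dose rate (Gy/ka)."""
    return paleodose_gy / dose_rate_gy_per_ka
```

The nuclear analyses described in the abstract feed the denominator: minor and trace element concentrations (U, Th, K) determine the environmental dose rate.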
Principal Component Analysis Based Measure of Structural Holes
Deng, Shiguo; Zhang, Wenqing; Yang, Huijie
2013-02-01
Based upon principal component analysis, a new measure called the compressibility coefficient is proposed to evaluate structural holes in networks. This measure incorporates a new effect from identical patterns in networks. It is found that the compressibility coefficient for Watts-Strogatz small-world networks increases monotonically with the rewiring probability and saturates to that for the corresponding shuffled networks, while the compressibility coefficient for extended Barabasi-Albert scale-free networks decreases monotonically with the preferential effect and is significantly larger than that for the corresponding shuffled networks. This measure is helpful in diverse research fields for evaluating the global efficiency of networks.
Fast and accurate methods of independent component analysis: A survey
Czech Academy of Sciences Publication Activity Database
Tichavský, Petr; Koldovský, Zbyněk
2011-01-01
Roč. 47, č. 3 (2011), s. 426-438 ISSN 0023-5954 R&D Projects: GA MŠk 1M0572; GA ČR GA102/09/1278 Institutional research plan: CEZ:AV0Z10750506 Keywords : Blind source separation * artifact removal * electroencephalogram * audio signal processing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.454, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/tichavsky-fast and accurate methods of independent component analysis a survey.pdf
PRINCIPAL COMPONENT ANALYSIS (PCA) AND ITS APPLICATION WITH SPSS
Directory of Open Access Journals (Sweden)
Hermita Bus Umar
2009-03-01
Full Text Available PCA (Principal Component Analysis) comprises statistical techniques applied to a single set of variables when the researcher is interested in discovering which variables in the set form coherent subsets that are relatively independent of one another. Variables that are correlated with one another but largely independent of other subsets of variables are combined into factors. The goal of PCA is to determine the extent to which each variable is explained by each dimension. The steps in PCA include selecting and measuring a set of variables, preparing the correlation matrix, extracting a set of factors from the correlation matrix, rotating the factors to increase interpretability, and interpreting the results.
Fetal ECG extraction using independent component analysis by Jade approach
Giraldo-Guzmán, Jader; Contreras-Ortiz, Sonia H.; Lasprilla, Gloria Isabel Bautista; Kotas, Marian
2017-11-01
Fetal ECG monitoring is a useful method to assess fetal health and detect abnormal conditions. In this paper we propose an approach to extract the fetal ECG from abdominal and chest signals using independent component analysis based on the joint approximate diagonalization of eigenmatrices (JADE) approach. The JADE approach avoids redundancy, which reduces matrix dimension and computational cost. Signals were filtered with a high-pass filter to eliminate low-frequency noise. Several levels of decomposition were tested until the fetal ECG was recognized in one of the separated source outputs. The proposed method is fast and shows good performance.
Nonlinear Principal Component Analysis Using Strong Tracking Filter
Institute of Scientific and Technical Information of China (English)
无
2007-01-01
The paper analyzes the problem of blind source separation (BSS) based on the nonlinear principal component analysis (NPCA) criterion. An adaptive strong tracking filter (STF) based algorithm was developed, which is immune to system model mismatches. Simulations demonstrate that the algorithm converges quickly and has satisfactory steady-state accuracy. The Kalman filtering algorithm and the recursive least-squares type algorithm are shown to be special cases of the STF algorithm. Since the forgetting factor is adaptively updated by adjustment of the Kalman gain, the STF scheme provides more powerful tracking capability than the Kalman filtering and recursive least-squares algorithms.
Structure analysis of active components of traditional Chinese medicines
DEFF Research Database (Denmark)
Zhang, Wei; Sun, Qinglei; Liu, Jianhua
2013-01-01
Traditional Chinese Medicines (TCMs) have been widely used for the healing of different health problems for thousands of years. They have been used as therapeutic, complementary and alternative medicines. TCMs usually consist of dozens to hundreds of various compounds, which are extracted from raw herbal sources by aqueous or alcoholic solvents. Therefore, it is difficult to correlate the pharmaceutical effect to a specific lead compound in the TCMs. A detailed analysis of the various components in TCMs has been a great challenge for modern analytical techniques in recent decades. In this chapter…
Advances in independent component analysis and learning machines
Bingham, Ella; Laaksonen, Jorma; Lampinen, Jouko
2015-01-01
In honour of Professor Erkki Oja, one of the pioneers of Independent Component Analysis (ICA), this book reviews key advances in the theory and application of ICA, as well as its influence on signal processing, pattern recognition, machine learning, and data mining. Examples of topics covered in the book which have developed from the advances of ICA are: a unifying probabilistic model for PCA and ICA; optimization methods for matrix decompositions; insights into the FastICA algorithm; unsupervised deep learning; machine vision and image retrieval; and a review of developments in the t…
Cross coherence independent component analysis in resting and action states EEG discrimination
International Nuclear Information System (INIS)
Almurshedi, A; Ismail, A K
2014-01-01
The cross coherence time-frequency transform and independent component analysis (ICA) were used to analyse electroencephalogram (EEG) signals in resting and action states under open and closed eyes conditions. From the topographical scalp distributions of the delta, theta, alpha, and beta power spectra, one can clearly discriminate between signals recorded with the eyes open and closed, but it was difficult to distinguish between resting and action states when the eyes were closed. In the open eyes condition, the frontal area (Fp1, Fp2) was activated (higher power) in the delta and theta bands, whilst the occipital (O1, O2) and parietal (P3, P4, Pz) areas of the brain were activated in the alpha band in the closed eyes condition. The cross coherence method of time-frequency analysis is capable of discriminating between rest and action brain signals in the closed eyes condition
Kenett, Dror Y; Tumminello, Michele; Madi, Asaf; Gur-Gershgoren, Gitit; Mantegna, Rosario N; Ben-Jacob, Eshel
2010-12-20
What are the dominant stocks which drive the correlations present among stocks traded in a stock market? Can a correlation analysis provide an answer to this question? In the past, correlation based networks have been proposed as a tool to uncover the underlying backbone of the market. Correlation based networks represent the stocks and their relationships, which are then investigated using different network theory methodologies. Here we introduce a new concept to tackle the above question--the partial correlation network. Partial correlation is a measure of how the correlation between two variables, e.g., stock returns, is affected by a third variable. By using it we define a proxy of stock influence, which is then used to construct partial correlation networks. The empirical part of this study is performed on a specific financial system, namely the set of 300 highly capitalized stocks traded at the New York Stock Exchange, in the time period 2001-2003. By constructing the partial correlation network, unlike the case of standard correlation based networks, we find that stocks belonging to the financial sector and, in particular, to the investment services sub-sector, are the most influential stocks affecting the correlation profile of the system. Using a moving window analysis, we find that the strong influence of the financial stocks is conserved across time for the investigated trading period. Our findings shed a new light on the underlying mechanisms and driving forces controlling the correlation profile observed in a financial market.
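The pairwise building block of such networks is the first-order partial correlation, which can be computed directly from the three pairwise correlations (this is the standard formula, shown here as an illustration rather than the paper's full network construction):

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after removing the influence of z.

    Implements rho(x, y | z) = (r_xy - r_xz * r_yz) /
                               sqrt((1 - r_xz**2) * (1 - r_yz**2)).
    """
    r = np.corrcoef(np.vstack([x, y, z]))
    rxy, rxz, ryz = r[0, 1], r[0, 2], r[1, 2]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))
```

If two stocks co-move only because both follow a third influential stock z, their raw correlation is high while their partial correlation given z collapses toward zero, which is exactly the influence proxy the abstract describes.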
Development of motion image prediction method using principal component analysis
International Nuclear Information System (INIS)
Chhatkuli, Ritu Bhusal; Demachi, Kazuyuki; Kawai, Masaki; Sakakibara, Hiroshi; Kamiaka, Kazuma
2012-01-01
Respiratory motion limits the accuracy of the area irradiated during lung cancer radiation therapy. Many methods have been introduced to minimize the irradiation of healthy tissue due to lung tumor motion. The purpose of this research is to develop an algorithm for the improvement of image-guided radiation therapy by the prediction of motion images. We predict the motion images by using principal component analysis (PCA) and the multi-channel singular spectral analysis (MSSA) method. The images/movies were successfully predicted and verified using the developed algorithm. With the proposed prediction method it is possible to forecast the tumor images over the next breathing period. The implementation of this method in real time is believed to be significant for a higher level of tumor tracking, including the detection of sudden abdominal changes during radiation therapy. (author)
Nonlinear Process Fault Diagnosis Based on Serial Principal Component Analysis.
Deng, Xiaogang; Tian, Xuemin; Chen, Sheng; Harris, Chris J
2018-03-01
Many industrial processes contain both linear and nonlinear parts, and kernel principal component analysis (KPCA), widely used in nonlinear process monitoring, may not offer the most effective means for dealing with these nonlinear processes. This paper proposes a new hybrid linear-nonlinear statistical modeling approach for nonlinear process monitoring by closely integrating linear principal component analysis (PCA) and nonlinear KPCA using a serial model structure, which we refer to as serial PCA (SPCA). Specifically, PCA is first applied to extract PCs as linear features, and to decompose the data into the PC subspace and residual subspace (RS). Then, KPCA is performed in the RS to extract the nonlinear PCs as nonlinear features. Two monitoring statistics are constructed for fault detection, based on both the linear and nonlinear features extracted by the proposed SPCA. To effectively perform fault identification after a fault is detected, an SPCA similarity factor method is built for fault recognition, which fuses both the linear and nonlinear features. Unlike PCA and KPCA, the proposed method takes into account both linear and nonlinear PCs simultaneously, and therefore, it can better exploit the underlying process's structure to enhance fault diagnosis performance. Two case studies involving a simulated nonlinear process and the benchmark Tennessee Eastman process demonstrate that the proposed SPCA approach is more effective than the existing state-of-the-art approach based on KPCA alone, in terms of nonlinear process fault detection and identification.
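A minimal sketch of the serial PCA idea (linear PCA first, then kernel PCA on the residual subspace) might look as follows; the synthetic process data and parameter choices are assumptions for illustration, and the paper's monitoring statistics and similarity factor are omitted:

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

rng = np.random.default_rng(1)
# Hypothetical process data: two linear latent variables plus one
# quadratic (nonlinear) relationship, mimicking a mixed process.
t = rng.normal(size=(500, 2))
X = np.column_stack([
    t[:, 0],
    2.0 * t[:, 0] + t[:, 1],
    t[:, 1] ** 2 + 0.1 * rng.normal(size=500),
])

# Step 1: linear PCA extracts the linear PCs and splits the data into
# the PC subspace and the residual subspace (RS).
pca = PCA(n_components=2).fit(X)
linear_scores = pca.transform(X)
residual = X - pca.inverse_transform(linear_scores)

# Step 2: kernel PCA in the RS extracts the nonlinear PCs.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=1.0)
nonlinear_scores = kpca.fit_transform(residual)
```

Fault-detection statistics would then be built from both `linear_scores` and `nonlinear_scores`, which is the sense in which SPCA uses linear and nonlinear features simultaneously.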
Independent component analysis classification of laser induced breakdown spectroscopy spectra
International Nuclear Information System (INIS)
Forni, Olivier; Maurice, Sylvestre; Gasnault, Olivier; Wiens, Roger C.; Cousin, Agnès; Clegg, Samuel M.; Sirven, Jean-Baptiste; Lasue, Jérémie
2013-01-01
The ChemCam instrument on board the Mars Science Laboratory (MSL) rover uses the laser-induced breakdown spectroscopy (LIBS) technique to remotely analyze Martian rocks. It retrieves spectra up to a distance of seven meters in order to quantitatively analyze the sampled rocks. Like any field application, on-site measurements by LIBS are altered by diverse matrix effects which induce signal variations that are specific to the nature of the sample. Qualitative aspects remain to be studied, particularly LIBS sample identification to determine which samples are of interest for further analysis by ChemCam and other rover instruments. This can be performed with the help of different chemometric methods that model the spectral variance in order to identify a rock from its spectrum. In this paper we test independent component analysis (ICA) rock classification by remote LIBS. We show that using measures of distance in ICA space, namely the Manhattan and the Mahalanobis distance, we can efficiently classify spectra of an unknown rock. The Mahalanobis distance gives better overall performance and is easier to manage than the Manhattan distance, for which the determination of the cut-off distance is not easy. However, these two techniques are complementary and their analytical performance will improve during MSL operations as the quantity of available Martian spectra grows. The analysis accuracy and performance will benefit from a combination of the two approaches. - Highlights: • We use a novel independent component analysis method to classify LIBS spectra. • We demonstrate the usefulness of ICA. • We report the performance of the ICA classification. • We compare it to other classical classification schemes.
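The ICA-plus-Mahalanobis classification scheme can be sketched as below. The "spectra" here are synthetic two-source mixtures, not ChemCam data, and scikit-learn's FastICA stands in for whichever ICA variant the authors use:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
# Synthetic "spectra": two rock classes mixing two spectral sources
# in different proportions (stand-ins for real LIBS channels).
n_ch = 50
s1 = np.sin(np.linspace(0, 6, n_ch))
s2 = np.cos(np.linspace(0, 9, n_ch))
class_a = np.array([2.0 * s1 + 0.5 * s2 + 0.05 * rng.normal(size=n_ch)
                    for _ in range(40)])
class_b = np.array([0.5 * s1 + 2.0 * s2 + 0.05 * rng.normal(size=n_ch)
                    for _ in range(40)])

# Project all spectra into ICA space.
ica = FastICA(n_components=2, random_state=0)
scores = ica.fit_transform(np.vstack([class_a, class_b]))
a_scores, b_scores = scores[:40], scores[40:]

def mahalanobis_to_class(x, cls):
    # Mahalanobis distance from point x to a class cloud in ICA space
    mu = cls.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(cls.T))
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Classify a held-out, class-A-like "unknown" spectrum.
unknown = 2.0 * s1 + 0.5 * s2 + 0.05 * rng.normal(size=n_ch)
u = ica.transform(unknown.reshape(1, -1))[0]
d_a = mahalanobis_to_class(u, a_scores)
d_b = mahalanobis_to_class(u, b_scores)
```

The unknown is assigned to the class with the smaller distance; unlike the Manhattan distance, the Mahalanobis distance is scale-aware, which is one reason it needs no hand-tuned cut-off.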
Energy Technology Data Exchange (ETDEWEB)
Andrews, Nathan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Faucett, Christopher [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Haskin, Troy Christopher [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Luxat, Dave [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Geiger, Garrett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Codella, Brittany [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-10-01
Following the conclusion of the first phase of the crosswalk analysis, one of the key unanswered questions was whether or not the deviations found would persist during a partially recovered accident scenario, similar to the one that occurred at TMI-2. In particular, this analysis aims to compare the impact of core degradation morphology on the quenching models inherent in the two codes and on the coolability of debris during partially recovered accidents. A primary motivation for this study is the development of insights into how uncertainties in core damage progression models affect the ability to assess the potential for recovery of a degraded core. These quench and core recovery models are of the most interest when there is a significant amount of core damage but intact and degraded fuel still remain in the core region or the lower plenum. Accordingly, this analysis presents a spectrum of partially recovered accident scenarios, varying both water injection timing and rate to highlight the impact of core degradation phenomena on recovered accident scenarios. This analysis uses the newly released MELCOR 2.2 rev. 9665 and MAAP5 Version 5.04. These code versions incorporate a significant number of modifications driven by analyses and forensic evidence obtained from the Fukushima-Daiichi reactor site.
International Nuclear Information System (INIS)
Zeng, J.; Li, G.; Sun, J.
2013-01-01
Principal components analysis and cluster analysis were used to investigate the properties of different corn varieties. The chemical compositions and some properties of corn flour processed by dry milling were determined. The results showed that the chemical compositions and physicochemical properties differed significantly among the twenty-six corn varieties. The quality of corn flour was associated with five principal components from the principal component analysis, among which the contribution of starch pasting properties was the most important, accounting for 48.90%. The twenty-six corn varieties could be classified into four groups by cluster analysis. The consistency between principal components analysis and cluster analysis indicated that multivariate analyses were feasible in the study of corn variety properties. (author)
Tang, Kwong-Tin
2007-01-01
Pedagogical insights gained through 30 years of teaching applied mathematics led the author to write this set of student-oriented books. Topics such as complex analysis, matrix theory, vector and tensor analysis, Fourier analysis, integral transforms, and ordinary and partial differential equations are presented in a discursive style that is readable and easy to follow. Numerous clearly stated, completely worked-out examples, together with carefully selected problem sets with answers, are used to enhance students' understanding and manipulative skills. The goal is to make students comfortable and confident in using advanced mathematical tools in junior, senior, and beginning graduate courses.
International Nuclear Information System (INIS)
Nishimura, Norio; Tomiyama, Kohei; Doke, Noriyuki
1980-01-01
The zoosporial component of Phytophthora infestans, which was previously reported to cause a reduction of ³H-leucine uptake by potato tuber disks, was partially purified. A precipitate (A-fraction) was obtained by homogenizing zoospores with acetate buffer at pH 4.5 and centrifuging at 20,000 × g; the A-fraction was then suspended in borate buffer at pH 8.8, boiled for 1 hr and centrifuged again at 20,000 × g, giving a precipitate (B-fraction) and supernatant (C-fraction). Ten ml of 10 mM tris-HCl buffer containing 1 mM CaCl₂ at pH 7.4 was used to suspend the A and B fractions, and the buffer alone was used as a control. The A, B and C fractions obtained from 5-6 × 10⁶ zoospores reduced the uptake of ³H-leucine by the tuber disks of potato cv. Rishiri, but the inhibition rates caused by these fractions differed markedly. However, a very high correlation was found between the inhibition rates of ³H-leucine uptake and the sugar contents of these fractions. There was no difference in the inhibition rates between the zoosporial components of incompatible and compatible races when the activities were expressed in terms of the sugar contents. The mycelial components of P. infestans, extracted by the modified method of Lisker and Kuc that was used to extract the phytoalexin elicitor of P. infestans, also had the same effect as the zoosporial components (A, B, and C fractions) on ³H-leucine uptake by the disks. The C-fraction containing 15 μg of sugar per ml sufficed to inhibit ³H-leucine uptake at the maximum rate, and the maximum rate of inhibition was attained within 2 hr after the zoosporial component (C-fraction containing 30 μg sugar/ml) was administered to the disks. (author)
Analysis of boundary layer flow over a porous nonlinearly stretching sheet with partial slip at the boundary
Directory of Open Access Journals (Sweden)
Swati Mukhopadhyay
2013-12-01
The boundary layer flow of a viscous incompressible fluid toward a porous nonlinearly stretching sheet is considered in this analysis. Velocity slip is considered instead of the no-slip condition at the boundary. Similarity transformations are used to convert the partial differential equation corresponding to the momentum equation into a nonlinear ordinary differential equation. A numerical solution of this equation is obtained by the shooting method. It is found that the horizontal velocity decreases with increasing slip parameter.
Comparative Study of Various Normal Mode Analysis Techniques Based on Partial Hessians
GHYSELS, AN; VAN SPEYBROECK, VERONIQUE; PAUWELS, EWALD; CATAK, SARON; BROOKS, BERNARD R.; VAN NECK, DIMITRI; WAROQUIER, MICHEL
2010-01-01
Standard normal mode analysis becomes problematic for complex molecular systems, as a result of both the high computational cost and the excessive amount of information when the full Hessian matrix is used. Several partial Hessian methods have been proposed in the literature, yielding approximate normal modes. These methods aim at reducing the computational load and/or calculating only the relevant normal modes of interest in a specific application. Each method has its own (dis)advantages and...
PRINCIPAL COMPONENT ANALYSIS STUDIES OF TURBULENCE IN OPTICALLY THICK GAS
Energy Technology Data Exchange (ETDEWEB)
Correia, C.; Medeiros, J. R. De [Departamento de Física Teórica e Experimental, Universidade Federal do Rio Grande do Norte, 59072-970, Natal (Brazil); Lazarian, A. [Astronomy Department, University of Wisconsin, Madison, 475 N. Charter St., WI 53711 (United States); Burkhart, B. [Harvard-Smithsonian Center for Astrophysics, 60 Garden St, MS-20, Cambridge, MA 02138 (United States); Pogosyan, D., E-mail: caioftc@dfte.ufrn.br [Canadian Institute for Theoretical Astrophysics, University of Toronto, Toronto, ON (Canada)
2016-02-20
In this work we investigate the sensitivity of principal component analysis (PCA) to the velocity power spectrum in high-opacity regimes of the interstellar medium (ISM). For our analysis we use synthetic position–position–velocity (PPV) cubes of fractional Brownian motion and magnetohydrodynamics (MHD) simulations, post-processed to include radiative transfer effects from CO. We find that PCA analysis is very different from the tools based on the traditional power spectrum of PPV data cubes. Our major finding is that PCA is also sensitive to the phase information of PPV cubes and this allows PCA to detect the changes of the underlying velocity and density spectra at high opacities, where the spectral analysis of the maps provides the universal −3 spectrum in accordance with the predictions of the Lazarian and Pogosyan theory. This makes PCA a potentially valuable tool for studies of turbulence at high opacities, provided that proper gauging of the PCA index is made. However, we found the latter to not be easy, as the PCA results change in an irregular way for data with high sonic Mach numbers. This is in contrast to synthetic Brownian noise data used for velocity and density fields that show monotonic PCA behavior. We attribute this difference to the PCA's sensitivity to Fourier phase information.
Blasco, H; Błaszczyński, J; Billaut, J C; Nadal-Desbarats, L; Pradat, P F; Devos, D; Moreau, C; Andres, C R; Emond, P; Corcia, P; Słowiński, R
2015-02-01
Metabolomics is an emerging field that includes ascertaining a metabolic profile from a combination of small molecules, and which has health applications. Metabolomic methods are currently applied to discover diagnostic biomarkers and to identify pathophysiological pathways involved in pathology. However, metabolomic data are complex and are usually analyzed by statistical methods. Although the methods have been widely described, most have not been either standardized or validated. Data analysis is the foundation of a robust methodology, so new mathematical methods need to be developed to assess and complement current methods. We therefore applied, for the first time, the dominance-based rough set approach (DRSA) to metabolomics data; we also assessed the complementarity of this method with standard statistical methods. Some attributes were transformed in a way allowing us to discover global and local monotonic relationships between condition and decision attributes. We used previously published metabolomics data (18 variables) for amyotrophic lateral sclerosis (ALS) and non-ALS patients. Principal Component Analysis (PCA) and Orthogonal Partial Least Square-Discriminant Analysis (OPLS-DA) allowed satisfactory discrimination (72.7%) between ALS and non-ALS patients. Some discriminant metabolites were identified: acetate, acetone, pyruvate and glutamine. The concentrations of acetate and pyruvate were also identified by univariate analysis as significantly different between ALS and non-ALS patients. DRSA correctly classified 68.7% of the cases and established rules involving some of the metabolites highlighted by OPLS-DA (acetate and acetone). Some rules identified potential biomarkers not revealed by OPLS-DA (beta-hydroxybutyrate). We also found a large number of common discriminating metabolites after Bayesian confirmation measures, particularly acetate, pyruvate, acetone and ascorbate, consistent with the pathophysiological pathways involved in ALS. DRSA provides
O'Connor, B P
2000-08-01
Popular statistical software packages do not have the proper procedures for determining the number of components in factor and principal components analyses. Parallel analysis and Velicer's minimum average partial (MAP) test are validated procedures, recommended widely by statisticians. However, many researchers continue to use alternative, simpler, but flawed procedures, such as the eigenvalues-greater-than-one rule. Use of the proper procedures might be increased if these procedures could be conducted within familiar software environments. This paper describes brief and efficient programs for using SPSS and SAS to conduct parallel analyses and the MAP test.
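In the same spirit as the paper's SPSS and SAS programs, Horn's parallel analysis can be sketched in a few lines of Python. This is a hedged re-implementation, not the authors' code, and the two-factor data is a synthetic assumption:

```python
import numpy as np

def parallel_analysis(data, n_iter=200, percentile=95, seed=0):
    # Retain leading components whose eigenvalues exceed the chosen
    # percentile of eigenvalues from random data of the same shape.
    rng = np.random.default_rng(seed)
    n, p = data.shape
    real_eig = np.linalg.eigvalsh(np.corrcoef(data.T))[::-1]
    rand_eig = np.empty((n_iter, p))
    for i in range(n_iter):
        r = rng.normal(size=(n, p))
        rand_eig[i] = np.linalg.eigvalsh(np.corrcoef(r.T))[::-1]
    threshold = np.percentile(rand_eig, percentile, axis=0)
    keep = 0
    for real, th in zip(real_eig, threshold):
        if real > th:
            keep += 1
        else:
            break
    return keep

rng = np.random.default_rng(42)
# Illustrative data: two latent factors drive six of nine variables.
n = 300
f1, f2 = rng.normal(size=(2, n))
X = 0.3 * rng.normal(size=(n, 9))
X[:, 0:3] += f1[:, None]
X[:, 3:6] += f2[:, None]

n_keep = parallel_analysis(X)   # about two components should be retained
```

The eigenvalues-greater-than-one rule criticized in the abstract would be the one-liner `np.sum(real_eig > 1)`, which tends to over-extract; parallel analysis calibrates the cut-off against sampling noise instead.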
Failure cause analysis and improvement for magnetic component cabinet
International Nuclear Information System (INIS)
Ge Bing
1999-01-01
The magnetic component cabinet is an important thermal control device fitted in nuclear power plants. Because it uses a self-saturation amplifier as its primary component, the magnetic component cabinet has some limitations. To increase operational safety at the nuclear power plant, the author describes a new scheme: so that the magnetic component cabinet can be replaced, a new type of component cabinet has been developed in which integrated circuits replace the magnetic components of every functional part. The author analyzes the overall failure causes of the magnetic component cabinet and the measures adopted to address them.
Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.
Gupta, Rajarshi
2016-05-01
Electrocardiogram (ECG) compression finds wide application in various patient monitoring purposes. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality aware compression method of single lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, two independent quality criteria, namely, bit rate control (BRC) or error control (EC) criteria were set to select optimal principal components, eigenvectors and their quantization level to achieve desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT Arrhythmia data and 60 normal and 30 sets of diagnostic ECG data from PTB Diagnostic ECG data ptbdb, all at 1 kHz sampling. For BRC with a CR threshold of 40, an average Compression Ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV respectively were obtained. For EC with an upper limit of 5 % PRDN and 0.1 mV MAE, the average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV respectively were obtained. For mitdb data 117, the reconstruction quality could be preserved up to CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality controlled ECG compression.
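The PCA stage of such a compressor can be sketched on synthetic beat data. This is illustrative only: real ECG requires the pre-processing and beat extraction described above, and the quantization and delta/Huffman coding stages are omitted; only the component-selection trade-off between error (PRDN) and retained components is shown.

```python
import numpy as np

rng = np.random.default_rng(3)
# Illustrative beat matrix: 200 aligned "beats" built from a fixed
# template (a spike and a dip) plus small beat-to-beat noise.
n_beats, beat_len = 200, 180
t = np.linspace(0.0, 1.0, beat_len)
template = (np.exp(-((t - 0.45) / 0.02) ** 2)
            - 0.3 * np.exp(-((t - 0.55) / 0.04) ** 2))
beats = template + 0.05 * rng.normal(size=(n_beats, beat_len))

# PCA via SVD of the mean-centred beat matrix.
mean = beats.mean(axis=0)
U, S, Vt = np.linalg.svd(beats - mean, full_matrices=False)

def reconstruct(k):
    # Keep only the k leading principal components.
    return mean + (U[:, :k] * S[:k]) @ Vt[:k]

def prdn(orig, rec):
    # Percentage root-mean-square difference, normalised (PRDN).
    return 100.0 * np.linalg.norm(orig - rec) / np.linalg.norm(orig - orig.mean())

err5 = prdn(beats, reconstruct(5))    # fewer PCs: higher error, higher CR
err20 = prdn(beats, reconstruct(20))  # more PCs: lower error, lower CR
```

The paper's BRC and EC criteria amount to choosing k (and the quantization level) from curves like `err5`/`err20`: BRC fixes the bit budget and accepts the resulting error, EC fixes an error ceiling and spends as many bits as needed.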
Multi-component controllers in reactor physics optimality analysis
International Nuclear Information System (INIS)
Aldemir, T.
1978-01-01
An algorithm is developed for the optimality analysis of thermal reactor assemblies with multi-component control vectors. The neutronics of the system under consideration is assumed to be described by the two-group diffusion equations and constraints are imposed upon the state and control variables. It is shown that if the problem is such that the differential and algebraic equations describing the system can be cast into a linear form via a change of variables, the optimal control components are piecewise constant functions and the global optimal controller can be determined by investigating the properties of the influence functions. Two specific problems are solved utilizing this approach. A thermal reactor consisting of fuel, burnable poison and moderator is found to yield maximal power when the assembly consists of two poison zones and the power density is constant throughout the assembly. It is shown that certain variational relations have to be considered to maintain the activeness of the system equations as differential constraints. The problem of determining the maximum initial breeding ratio for a thermal reactor is solved by treating the fertile and fissile material absorption densities as controllers. The optimal core configurations are found to consist of three fuel zones for a bare assembly and two fuel zones for a reflected assembly. The optimum fissile material density is determined to be inversely proportional to the thermal flux
Constrained Null Space Component Analysis for Semiblind Source Separation Problem.
Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn
2018-02-01
The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on the sources or on how the sources are mixed. The constrained independent component analysis (ICA) approach has been studied to impose constraints on the famous ICA framework. We introduced an alternative approach based on the null space component analysis (NCA) framework and referred to it as the c-NCA approach. We also presented the c-NCA algorithm, which uses signal-dependent semidefinite operators, a bilinear mapping, as signatures for operator design in the c-NCA approach. Theoretically, we showed that the source estimation of the c-NCA algorithm converges, with a convergence rate dependent on the decay of the sequence obtained by applying the estimated operators on the corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization method, and thus it can take advantage of solvers developed in the optimization community for solving the BSS problem. As examples, we demonstrated that electroencephalogram interference rejection problems can be solved by the c-NCA with proximal splitting algorithms, by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.
Autonomous learning in gesture recognition by using lobe component analysis
Lu, Jian; Weng, Juyang
2007-02-01
Gesture recognition is a new human-machine interface method implemented by pattern recognition (PR). In order to assure robot safety when gestures are used in robot control, the interface must be implemented reliably and accurately. As in other PR applications, (1) feature selection (or model establishment) and (2) training from samples largely determine the performance of gesture recognition. For (1), a simple model with six feature points at the shoulders, elbows, and hands is established. The gestures to be recognized are restricted to still arm gestures, and the movement of the arms is not considered. These restrictions reduce misrecognition and are not unreasonable. For (2), a new biological network method, called lobe component analysis (LCA), is used in unsupervised learning. Lobe components, corresponding to high concentrations in the probability density of the neuronal input, are orientation-selective cells that follow the Hebbian rule and lateral inhibition. Because the LCA method balances learning between global and local features, a large number of samples can be used efficiently in learning.
Improvement of retinal blood vessel detection using morphological component analysis.
Imani, Elaheh; Javidi, Malihe; Pourreza, Hamid-Reza
2015-03-01
Detection and quantitative measurement of variations in the retinal blood vessels can help diagnose several diseases, including diabetic retinopathy. Intrinsic characteristics of abnormal retinal images make blood vessel detection difficult. The major problem with traditional vessel segmentation algorithms is producing false positive vessels in the presence of diabetic retinopathy lesions. To overcome this problem, a novel scheme for extracting retinal blood vessels based on the morphological component analysis (MCA) algorithm is presented in this paper. MCA was developed based on sparse representation of signals. This algorithm assumes that each signal is a linear combination of several morphologically distinct components. In the proposed method, the MCA algorithm with appropriate transforms is adopted to separate vessels and lesions from each other. Afterwards, the Morlet wavelet transform is applied to enhance the retinal vessels. The final vessel map is obtained by adaptive thresholding. The performance of the proposed method is measured on the publicly available DRIVE and STARE datasets and compared with several state-of-the-art methods. Accuracies of 0.9523 and 0.9590 have been achieved on the DRIVE and STARE datasets, respectively, which not only exceed those of most methods but are also superior to the second human observer's performance. The results show that the proposed method can achieve improved detection in abnormal retinal images and decrease false positive vessels in pathological regions compared to other methods. The robustness of the method in the presence of noise is also shown via experimental results. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Analysis of tangible and intangible hotel service quality components
Directory of Open Access Journals (Sweden)
Marić Dražen
2016-01-01
The issue of service quality is one of the essential areas of marketing theory and practice, as high quality can lead to customer satisfaction and loyalty, i.e. successful business results. It is vital for any company, especially in the services sector, to understand and grasp consumers' expectations and perceptions pertaining to the broad range of factors affecting consumers' evaluation of services, their satisfaction and loyalty. Hospitality is a service sector where the significance of these elements grows exponentially. The aim of this study is to identify the significance of individual quality components in the hospitality industry. The questionnaire used for gathering data comprised 19 tangible and 14 intangible attributes of service quality, which the respondents rated on a five-point scale. The analysis also identified the factorial structure of the tangible and intangible elements of hotel service. The paper aims to contribute to the existing literature by pointing to the significance of tangible and intangible components of service quality. A very small number of studies conducted in hospitality and hotel management identify the sub-factors within these two dimensions of service quality. The paper also provides useful managerial implications: the obtained results help managers in hospitality to establish the service offers that consumers find most important when choosing a given hotel.
Analysis of European Union Economy in Terms of GDP Components
Directory of Open Access Journals (Sweden)
Simona VINEREAN
2013-12-01
The impact of the crisis on national economies has been a subject of analysis and interest for a wide variety of research studies. Thus, starting from the composition of GDP, the present research provides an analysis of the impact on European economies, at the EU level, of the events that followed the crisis of 2007-2008. Firstly, the research highlighted the existence of two groups of countries in the European Union in 2012, namely segments that were compiled in relation to the structure of the GDP's components. In the second stage of the research, a factor analysis was performed on the resulting segments, which showed that the economies of cluster A are based more on personal consumption compared to the economies of cluster B, while in terms of government consumption the situation is reversed. Thus, between the two groups of countries, a different approach regarding the role of fiscal policy in the economy can be noted, with a greater emphasis on savings in cluster B. Moreover, besides the two resulting groups of countries, Ireland and Luxembourg stood out because these two countries did not fit in either of the resulting segments and their economies are based, to a large extent, on a positive external balance.
Principal component analysis of 1/f^α noise
International Nuclear Information System (INIS)
Gao, J.B.; Cao Yinhe; Lee, J.-M.
2003-01-01
Principal component analysis (PCA) is a popular data analysis method. One of the motivations for using PCA in practice is to reduce the dimension of the original data by projecting the raw data onto a few dominant eigenvectors with large variance (energy). Due to the ubiquity of 1/f^α noise in science and engineering, in this Letter we study the prototypical stochastic model for 1/f^α processes--the fractional Brownian motion (fBm) process--using PCA, and find that the eigenvalues from PCA of fBm processes follow a power law, with the exponent being the key parameter defining the fBm processes. We also study random-walk-type processes constructed from DNA sequences, and find that the eigenvalue spectrum from PCA of those random-walk processes also follows power-law relations, with the exponent characterizing the correlation structures of the DNA sequence. In fact, it is observed that PCA can automatically remove linear trends induced by patchiness in the DNA sequence; hence, PCA has a capability similar to that of detrended fluctuation analysis. Implications of the power-law distributed eigenvalue spectrum are discussed.
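The power-law eigenvalue spectrum is easy to reproduce for the H = 1/2 case (ordinary random walks), for which the Karhunen-Loeve eigenvalues of Brownian motion decay as λ_k ∝ (k - 1/2)⁻²; the ensemble size and fitting range below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)
# Ensemble of ordinary random walks, i.e. fBm with H = 1/2; Brownian
# motion's Karhunen-Loeve eigenvalues fall off as (k - 1/2)^-2.
n_walks, n_steps = 4000, 128
walks = np.cumsum(rng.normal(size=(n_walks, n_steps)), axis=1)

# PCA: eigenvalues of the ensemble covariance matrix, descending.
eig = np.linalg.eigvalsh(np.cov(walks.T))[::-1]

# Power-law fit over the leading eigenvalues.  Regressing log(lambda_k)
# on log(k) gives a slope between -2 and -3 here, because the true
# decay is (k - 1/2)^-2 rather than k^-2.
k = np.arange(1, 11)
slope = np.polyfit(np.log(k), np.log(eig[:10]), 1)[0]
```

For general fBm the exponent of the decay shifts with the Hurst parameter H, which is the sense in which the PCA eigenvalue spectrum encodes the key parameter of the process.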
Surface composition of biomedical components by ion beam analysis
International Nuclear Information System (INIS)
Kenny, M.J.; Wielunski, L.S.; Baxter, G.R.
1991-01-01
Materials used for replacement body parts must satisfy a number of requirements, such as biocompatibility and the mechanical ability to handle the task with regard to strength, wear and durability. When using a CVD-coated carbon fibre reinforced carbon ball, the surface must be ion implanted with a uniform dose of nitrogen ions in order to make it wear resistant. The mechanism by which the wear resistance is improved is one of radiation damage, and the required dose of about 10¹⁶ cm⁻² can have a tolerance of about 20%. Implanting a spherical surface requires manipulation of the sample within the beam and a control system (either computer or manually operated) to deliver a uniform dose all the way from the polar to the equatorial regions of the surface. A manipulator has been designed and built for this purpose. To establish whether the dose is uniform, nuclear reaction analysis using the reaction ¹⁴N(d,α)¹²C is an ideal method of profiling. By taking measurements at a number of points on the surface, the uniformity of the nitrogen dose can be ascertained. It is concluded that both Rutherford backscattering and nuclear reaction analysis can be used for rapid analysis of the surface composition of carbon-based materials used for replacement body components. 2 refs., 2 figs
Recursive Principal Components Analysis Using Eigenvector Matrix Perturbation
Directory of Open Access Journals (Sweden)
Deniz Erdogmus
2004-10-01
Principal components analysis is an important and well-studied subject in statistics and signal processing. The literature has an abundance of algorithms for solving this problem, and most of these algorithms can be grouped into one of three approaches: adaptation based on Hebbian updates and deflation, optimization of a second-order statistical criterion (like reconstruction error or output variance), and fixed-point update rules with deflation. In this paper, we take a completely different approach that avoids deflation and the optimization of a cost function using gradients. The proposed method updates the eigenvector and eigenvalue matrices simultaneously with every new sample such that the estimates approximately track their true values as would be calculated from the current sample estimate of the data covariance matrix. The performance of this algorithm is compared with that of traditional methods like Sanger's rule and APEX, as well as a structurally similar matrix perturbation-based method.
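For contrast with the perturbation approach, a naive recursive PCA baseline simply maintains a running covariance estimate and re-eigendecomposes it. This sketch (synthetic data; the forgetting factor α is an assumed choice, not from the paper) shows the tracking target that an eigenvector/eigenvalue-perturbation update approximates at much lower per-sample cost:

```python
import numpy as np

rng = np.random.default_rng(5)
n_dim, alpha = 5, 0.01                 # alpha: forgetting factor (assumed)
v = np.ones(n_dim) / np.sqrt(n_dim)    # true dominant direction

C = np.zeros((n_dim, n_dim))
for t in range(5000):
    # Each sample has strong variance along v plus isotropic noise.
    x = 3.0 * rng.normal() * v + 0.3 * rng.normal(size=n_dim)
    C = (1.0 - alpha) * C + alpha * np.outer(x, x)

# The naive baseline re-eigendecomposes the tracked covariance; a
# perturbation method instead nudges the previous eigenpair estimates
# toward these same values with every sample, avoiding the full
# decomposition at each step.
eigvals, eigvecs = np.linalg.eigh(C)
top = eigvecs[:, -1]                   # dominant eigenvector estimate
alignment = abs(float(top @ v))        # should approach 1
```

The full eigendecomposition costs O(n³) per refresh, which is exactly the expense that simultaneous per-sample eigenvector and eigenvalue updates are designed to avoid.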
Preliminary study of soil permeability properties using principal component analysis
Yulianti, M.; Sudriani, Y.; Rustini, H. A.
2018-02-01
Soil permeability measurement is undoubtedly important in soil-water research such as rainfall-runoff modelling, irrigation water distribution systems, etc. It is also known that acquiring reliable soil permeability data is laborious, time-consuming, and costly, so a prediction model is desirable. Several empirical equations for predicting permeability have been proposed by many researchers. These studies derived their models from areas whose soil characteristics differ from Indonesian soils, which suggests that these permeability models may be site-specific. The purpose of this study is to identify which soil parameters correspond most strongly to soil permeability and to propose a preliminary model for permeability prediction. Principal component analysis (PCA) was applied to 16 parameters analysed from 37 sites comprising 91 samples obtained from the Batanghari Watershed. Findings indicated five variables that have a strong correlation with soil permeability, and we recommend a preliminary permeability model with potential for further development.
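How PCA flags the parameters that co-vary with permeability can be sketched on synthetic data. All variable names, ranges, and the generating model below are invented for illustration; the study's 16 real parameters are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 91  # same sample count as the study
sand = rng.normal(40, 10, n)                               # sand fraction (%), hypothetical
clay = rng.normal(25, 8, n)                                # clay fraction (%), hypothetical
porosity = 0.30 + 0.004 * sand + rng.normal(0, 0.02, n)
bulk_density = 2.65 * (1 - porosity) + rng.normal(0, 0.05, n)
permeability = 0.05 * sand - 0.04 * clay + rng.normal(0, 0.5, n)

names = ["sand", "clay", "porosity", "bulk_density", "permeability"]
X = np.column_stack([sand, clay, porosity, bulk_density, permeability])

# PCA on the correlation matrix (variables standardized first)
Z = (X - X.mean(0)) / X.std(0)
R = np.corrcoef(Z, rowvar=False)
eigval, eigvec = np.linalg.eigh(R)
order = np.argsort(eigval)[::-1]                           # sort PCs by explained variance
loadings = eigvec[:, order] * np.sqrt(eigval[order])

# the PC on which permeability loads most strongly reveals its companions
pc = np.argmax(np.abs(loadings[names.index("permeability")]))
```

Variables whose loadings share a sign with permeability on that component are the candidates for a regression-based prediction model.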
Principal Component Analysis for Normal-Distribution-Valued Symbolic Data.
Wang, Huiwen; Chen, Meiling; Shi, Xiaojun; Li, Nan
2016-02-01
This paper puts forward a new approach to principal component analysis (PCA) for normal-distribution-valued symbolic data, which has vast potential for applications in economics and management. We derive a full set of numerical characteristics and the variance-covariance structure for such data, which forms the foundation of our analytical PCA approach. Our approach is able to use more of the variance information in the original data than the prevailing representative-type approaches in the literature, which use only centers, vertices, etc. The paper also provides an accurate approach to constructing the observations in a PC space based on the linear additivity property of the normal distribution. The effectiveness of the proposed method is illustrated by simulated numerical experiments. Finally, our method is applied to explain the puzzle of the risk-return tradeoff in China's stock market.
Iris recognition based on robust principal component analysis
Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong
2014-11-01
Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieved competitive performance in both recognition accuracy and computational efficiency.
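The low-rank plus sparse decomposition at the heart of robust PCA can be sketched with the standard inexact augmented-Lagrange-multiplier solver for principal component pursuit. This is a generic RPCA sketch, not necessarily the exact solver or parameters used by the authors:

```python
import numpy as np

def rpca_ialm(M, n_iter=200):
    """Robust PCA via inexact augmented Lagrange multipliers (principal
    component pursuit): decompose M into low-rank L plus sparse S."""
    n1, n2 = M.shape
    lam = 1.0 / np.sqrt(max(n1, n2))
    norm2 = np.linalg.norm(M, 2)
    Y = M / max(norm2, np.abs(M).max() / lam)  # dual-variable initialisation
    mu, rho = 1.25 / norm2, 1.5
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # singular value thresholding gives the low-rank update
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # entrywise soft thresholding gives the sparse update
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Y = Y + mu * (M - L - S)
        mu = min(mu * rho, 1e7)
    return L, S

# demo: recover a rank-2 matrix corrupted by 5% gross outliers,
# standing in for occlusions and specular reflections
rng = np.random.default_rng(2)
L0 = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 60))
mask = rng.random((50, 60)) < 0.05
S0 = np.where(mask, 5.0 * rng.choice([-1.0, 1.0], (50, 60)), 0.0)
M = L0 + S0
L, S = rpca_ialm(M)
```

In the iris setting, each row of M would be a vectorized training image; L then carries the shared appearance used for feature extraction and S absorbs occlusion-like errors.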
Size distribution measurements and chemical analysis of aerosol components
Energy Technology Data Exchange (ETDEWEB)
Pakkanen, T.A.
1995-12-31
The principal aims of this work were to improve the existing methods for size distribution measurements and to draw conclusions about atmospheric and in-stack aerosol chemistry and physics by utilizing the size distributions of the various aerosol components measured. Sample dissolution with dilute nitric acid in an ultrasonic bath and subsequent graphite furnace atomic absorption spectrometric analysis was found to result in low blank values and good recoveries for several elements in atmospheric fine particle size fractions below 2 μm of equivalent aerodynamic particle diameter (EAD). Furthermore, it turned out that a substantial amount of the analytes associated with insoluble material could be recovered, since suspensions were formed. The size distribution measurements of in-stack combustion aerosols indicated bimodal size distributions for most components measured. The existence of the fine particle mode suggests that a substantial fraction of the elements with bimodal size distributions may vaporize and nucleate during the combustion process. In southern Norway, size distributions of atmospheric aerosol components usually exhibited one or two fine particle modes and one or two coarse particle modes. Atmospheric relative humidity values higher than 80% resulted in a significant increase in the mass median diameter of the droplet mode. Important local and/or regional sources of As, Br, I, K, Mn, Pb, Sb, Si and Zn were found to exist in southern Norway. The existence of these sources was reflected in the corresponding size distributions determined, and was utilized in the development of a source identification method based on size distribution data. On the Finnish south coast, atmospheric coarse particle nitrate was found to be formed mostly through an atmospheric reaction of nitric acid with existing coarse particle sea salt, but reactions and/or adsorption of nitric acid with soil-derived particles also occurred. Chloride was depleted when acidic species reacted
Variational Bayesian Learning for Wavelet Independent Component Analysis
Roussos, E.; Roberts, S.; Daubechies, I.
2005-11-01
In an exploratory approach to data analysis, it is often useful to consider the observations as generated from a set of latent generators or "sources" via a generally unknown mapping. For the noisy overcomplete case, where we have more sources than observations, the problem becomes extremely ill-posed. Solutions to such inverse problems can, in many cases, be achieved by incorporating prior knowledge about the problem, captured in the form of constraints. This setting is a natural candidate for the application of the Bayesian methodology, allowing us to incorporate "soft" constraints in a natural manner. The work described in this paper is mainly driven by problems in functional magnetic resonance imaging of the brain, for the neuroscientific goal of extracting relevant "maps" from the data. This can be stated as a 'blind' source separation problem. Recent experiments in the field of neuroscience show that these maps are sparse, in some appropriate sense. The separation problem can be solved by independent component analysis (ICA), viewed as a technique for seeking sparse components, assuming appropriate distributions for the sources. We derive a hybrid wavelet-ICA model, transforming the signals into a domain where the modeling assumption of sparsity of the coefficients with respect to a dictionary is natural. We follow a graphical modeling formalism, viewing ICA as a probabilistic generative model. We use hierarchical source and mixing models and apply Bayesian inference to the problem. This allows us to perform model selection in order to infer the complexity of the representation, as well as automatic denoising. Since exact inference and learning in such a model are intractable, we follow a variational Bayesian mean-field approach in the conjugate-exponential family of distributions, for efficient unsupervised learning in multi-dimensional settings. The performance of the proposed algorithm is demonstrated on some representative experiments.
Partial wave analysis of the ^18O(p,α_0)^15N reaction
International Nuclear Information System (INIS)
Wild, L.W.J.; Spicer, B.M.
1979-01-01
A partial wave analysis of the differential cross sections for the ^18O(p,α_0)^15N reaction has been carried out, applying the formalism of Blatt and Biedenharn (1952) made specific for this reaction. The differential cross sections, measured at 200 keV intervals from 6.6 to 10.4 MeV bombarding energy, were subjected to least-squares fitting to this specific analytic expression. Two resonances were given by the analysis, the corresponding ^19F states being at 14.71±0.07 MeV (1/2^-) and 14.80±0.07 MeV (1/2^+).
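An analysis of this kind ultimately reduces to least-squares fitting the measured angular distribution with a Legendre series, dσ/dΩ(θ) = Σ_L B_L P_L(cos θ). A minimal sketch follows; the coefficients B_L and noise level are invented, not the paper's values:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(3)
theta = np.deg2rad(np.linspace(5, 175, 36))      # hypothetical measurement angles
x = np.cos(theta)
B_true = np.array([1.0, 0.4, -0.25, 0.1])        # hypothetical Legendre coefficients
dsigma = legendre.legval(x, B_true) + rng.normal(0, 0.005, x.size)  # simulated data

B_fit = legendre.legfit(x, dsigma, deg=3)        # least-squares Legendre fit
```

The fitted B_L are then interpreted in terms of interfering partial waves; resonances show up as structure in the energy dependence of these coefficients.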
Weingarten, Toby N; Del Mundo, Serena B; Yeoh, Tze Yeng; Scavonetto, Federica; Leibovich, Bradley C; Sprung, Juraj
2014-10-01
The aim of this retrospective study is to test the hypothesis that the use of spinal analgesia shortens the length of hospital stay after partial nephrectomy. We reviewed all patients undergoing partial nephrectomy for malignancy through a flank incision between January 1, 2008, and June 30, 2011. We excluded patients who underwent tumor thrombectomy, used sustained-release opioids, or had general anesthesia supplemented by epidural analgesia. Patients were grouped into "spinal" (intrathecal opioid injection for postoperative analgesia) versus "general anesthetic" groups, and into an "early" discharge group (within 3 postoperative days) versus a "late" group. Associations between demographics, patient physical status, anesthetic techniques, surgical complexity, and hospital stay were analyzed using multivariable logistic regression analysis. Of 380 patients, 158 (41.6%) were discharged "early" and 151 (39.7%) were "spinal" cases. Both the spinal and early discharge groups had better postoperative pain control and used fewer postoperative systemic opioids. Spinal analgesia was associated with early hospital discharge, odds ratio 1.52 (95% confidence interval 1.00-2.30), P = 0.05, but in adjusted analysis it was no longer associated with early discharge, 1.16 (0.73-1.86), P = 0.52. Early discharge was associated with calendar year, with more recent years being associated with early discharge. Spinal analgesia combined with general anesthesia was associated with improved postoperative pain control during the 1st postoperative day, but not with shorter hospital stay following partial nephrectomy. Therefore, unaccounted practice changes that occurred during more recent times affected hospital stay.
Ioele, Giuseppina; De Luca, Michele; Dinç, Erdal; Oliverio, Filomena; Ragno, Gaetano
2011-01-01
A chemometric approach based on the combined use of principal component analysis (PCA) and an artificial neural network (ANN) was developed for the multicomponent determination of caffeine (CAF), mepyramine (MEP), phenylpropanolamine (PPA) and pheniramine (PNA) in their pharmaceutical preparations without any chemical separation. The predictive ability of the ANN method was compared with the classical linear regression method Partial Least Squares 2 (PLS2). The UV spectral data between 220 and 300 nm of a training set of sixteen quaternary mixtures were processed by PCA to reduce the dimensions of the input data and eliminate the noise coming from the instrumentation. Several spectral ranges and different numbers of principal components (PCs) were tested to find the PCA-ANN and PLS2 models giving the best determination results. A two-layer ANN using the first four PCs was built, with a log-sigmoid transfer function in the first hidden layer and a linear transfer function in the output layer. The standard error of prediction (SEP) was adopted to assess the predictive accuracy of the models when subjected to external validation. PCA-ANN showed better prediction ability in the determination of PPA and PNA in synthetic samples with added excipients and in pharmaceutical formulations. Since both components are characterized by low absorptivity, the better performance of PCA-ANN was ascribed to its ability to capture non-linear information arising from noise or interfering excipients.
An analysis of the nucleon spectrum from lattice partially-quenched QCD.
Energy Technology Data Exchange (ETDEWEB)
Armour, W.; Allton, C. R.; Leinweber, D. B.; Thomas, A. W.; Young, R. D.; Physics; Swansea Univ.; Univ. of Adelaide; Coll. of William and Mary
2010-09-01
The chiral extrapolation of the nucleon mass, M_n, is investigated using data coming from 2-flavour partially-quenched lattice simulations. A large sample of lattice results from the CP-PACS Collaboration is analysed using the leading one-loop corrections, with explicit corrections for finite lattice spacing artifacts. The extrapolation is studied using finite-range regularised chiral perturbation theory. The analysis also provides a quantitative estimate of the leading finite volume corrections. It is found that the discretisation, finite volume and partial quenching effects can all be very well described in this framework, producing an extrapolated value of M_n in agreement with experiment. Furthermore, determinations of the low energy constants of the nucleon mass's chiral expansion are in agreement with previous methods, but with significantly reduced errors. This procedure is also compared with extrapolations based on polynomial forms, where the results are less encouraging.
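The polynomial-form alternative mentioned at the end can be sketched as an ordinary least-squares fit of M_n in powers of m_π², evaluated at the physical pion mass. All numbers below are invented for illustration; this is the naive polynomial extrapolation, not the finite-range-regularised fit:

```python
import numpy as np

rng = np.random.default_rng(4)
mpi2 = np.linspace(0.1, 0.8, 12)            # hypothetical lattice pion masses squared (GeV^2)
a_true = np.array([0.88, 1.05, -0.40])      # invented expansion coefficients (GeV units)
basis = np.array([np.ones_like(mpi2), mpi2, mpi2**2])
M_n = a_true @ basis + rng.normal(0, 0.005, mpi2.size)   # simulated lattice results

A = basis.T
coef, *_ = np.linalg.lstsq(A, M_n, rcond=None)           # polynomial fit in m_pi^2

mpi2_phys = 0.140**2                        # physical pion mass squared (GeV^2)
M_extrap = coef @ np.array([1.0, mpi2_phys, mpi2_phys**2])
```

The fragility the abstract alludes to comes from extrapolating such a polynomial well below the lightest simulated m_π², where the chiral curvature is not constrained by the data.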
Partial wave analysis of anti pp → anti ΛΛ
International Nuclear Information System (INIS)
Bugg, D.V.
2004-01-01
A partial wave analysis of PS185 data for anti-pp → anti-ΛΛ is presented. A ^3S_1 cusp is identified in the inverse process anti-ΛΛ → anti-pp at threshold, using detailed balance to deduce cross sections from anti-pp → anti-ΛΛ. Partial wave amplitudes for anti-pp ^3P_0, ^3F_3, ^3D_3 and ^3G_3 exhibit a behaviour very similar to resonances observed in Crystal Barrel data. With this identification, the anti-pp → anti-ΛΛ data then provide evidence for a new I=0, J^PC = 1^-- resonance with mass M = 2290±20 MeV, Γ = 275±35 MeV, coupling to both ^3S_1 and ^3D_1. (orig.)
Patterns of Failure After MammoSite Brachytherapy Partial Breast Irradiation: A Detailed Analysis
International Nuclear Information System (INIS)
Chen, Sea; Dickler, Adam; Kirk, Michael; Shah, Anand; Jokich, Peter; Solmos, Gene; Strauss, Jonathan; Dowlatshahi, Kambiz; Nguyen, Cam; Griem, Katherine
2007-01-01
Purpose: To report the results of a detailed analysis of treatment failures after MammoSite breast brachytherapy for partial breast irradiation from our single-institution experience. Methods and Materials: Between October 14, 2002 and October 23, 2006, 78 patients with early-stage breast cancer were treated with breast-conserving surgery and accelerated partial breast irradiation using the MammoSite brachytherapy applicator. We identified five treatment failures in the 70 patients with >6 months' follow-up. Pathologic data, breast imaging, and radiation treatment plans were reviewed. For in-breast failures more than 2 cm away from the original surgical bed, the doses delivered to the areas of recurrence by partial breast irradiation were calculated. Results: At a median follow-up time of 26.1 months, five treatment failures were identified. There were three in-breast failures more than 2 cm away from the original surgical bed, one failure directly adjacent to the original surgical bed, and one failure in the axilla with synchronous distant metastases. The crude failure rate was 7.1% (5 of 70), and the crude local failure rate was 5.7% (4 of 70). Estimated progression-free survival at 48 months was 89.8% (standard error 4.5%). Conclusions: Our case series of 70 patients with >6 months' follow-up and a median follow-up of 26 months is the largest single-institution report to date with detailed failure analysis associated with MammoSite brachytherapy. Our failure data emphasize the importance of patient selection when offering partial breast irradiation
International Nuclear Information System (INIS)
Solomonson, L.P.; McCreery, M.J.; Kay, C.J.; Barber, M.J.
1987-01-01
Recently we demonstrated that target sizes for the partial activities of nitrate reductase were considerably smaller than the 100-kDa subunit, which corresponded to the target size of the full (physiologic) activity NADH:nitrate reductase. These results suggested that the partial activities resided on functionally independent domains and that radiation inactivation may be due to localized rather than extensive damage to protein structure. The present study extends these observations and addresses several associated questions. Monophasic plots were observed over a wide range of radiation doses, suggesting a single activity component in each case. No apparent differences were observed over a 10-fold range of concentration for each substrate, suggesting that the observed slopes were not due to marked changes in K_m values. Apparent target sizes estimated for partial activities associated with native enzyme and with limited proteolysis products of native enzyme suggested that the functional size obtained by radiation inactivation analysis is independent of the size of the polypeptide chain. The presence of free radical scavengers during irradiation reduced the apparent target size of both the physiologic and partial activities by an amount ranging from 24 to 43%, suggesting that a free radical mechanism is at least partially responsible for the inactivation. Immunoblot analysis of nitrate reductase irradiated in the presence of free radical scavengers revealed formation of distinct bands at 90, 75, and 40 kDa with increasing doses of irradiation rather than complete destruction of the polypeptide chain.
Directory of Open Access Journals (Sweden)
Habiboallah Khajehsharifi
2017-05-01
Full Text Available Partial least squares (PLS1) and principal component regression (PCR) are two multivariate calibration methods that allow simultaneous determination of several analytes despite their overlapping spectra. In this research, a spectrophotometric method using PLS1 is proposed for the simultaneous determination of ascorbic acid (AA), dopamine (DA) and uric acid (UA). The linear concentration ranges for AA, DA and UA were 1.76–47.55, 0.57–22.76 and 1.68–28.58 μg mL^-1, respectively. PLS1 and PCR were applied to a calibration set built from the absorption spectra in the 250–320 nm range of 36 different mixtures of AA, DA and UA; in all cases, the PLS1 calibration method showed better quantitative prediction ability than the PCR method. Cross-validation was used to select the optimum number of principal components (NPC). The NPC for AA, DA and UA was found to be 4 by PLS1, and 5, 12 and 8 by PCR. The prediction error sum of squares (PRESS) for AA, DA and UA was 1.2461, 1.1144 and 2.3104 for PLS1, and 11.0563, 1.3819 and 4.0956 for PCR, respectively. Satisfactory results were achieved for the simultaneous determination of AA, DA and UA in real samples such as human urine, serum and pharmaceutical formulations.
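The PLS1 calibration can be sketched with the classic NIPALS recursion on mean-centred spectra. This is a generic PLS1 sketch on simulated three-analyte mixtures; the Gaussian band shapes, concentrations, and noise level are invented, not the paper's data:

```python
import numpy as np

def pls1_fit(X, y, ncomp):
    """PLS1 via NIPALS for a single response variable."""
    X = X.astype(float).copy(); y = y.astype(float).copy()
    xm, ym = X.mean(0), y.mean()
    X -= xm; y -= ym
    W, P, q = [], [], []
    for _ in range(ncomp):
        w = X.T @ y; w /= np.linalg.norm(w)   # weights from covariance with y
        t = X @ w; tt = t @ t                 # scores
        p = X.T @ t / tt                      # loadings
        qa = (y @ t) / tt
        X -= np.outer(t, p); y -= qa * t      # deflate
        W.append(w); P.append(p); q.append(qa)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)       # regression vector
    return B, xm, ym

def pls1_predict(model, Xnew):
    B, xm, ym = model
    return (Xnew - xm) @ B + ym

# simulated UV spectra of ternary mixtures (invented band positions)
rng = np.random.default_rng(5)
wl = np.linspace(250, 320, 71)
bands = np.stack([np.exp(-((wl - c) ** 2) / (2 * 12 ** 2)) for c in (265, 285, 300)])
C_train = rng.uniform(0.5, 5.0, (36, 3))      # 36 calibration mixtures, as in the study
X_train = C_train @ bands + rng.normal(0, 0.002, (36, wl.size))
model = pls1_fit(X_train, C_train[:, 0], ncomp=3)

C_test = rng.uniform(0.5, 5.0, (10, 3))
X_test = C_test @ bands + rng.normal(0, 0.002, (10, wl.size))
pred = pls1_predict(model, X_test)
```

PCR differs only in how the latent directions are chosen: principal components of X alone, rather than directions that also maximise covariance with y, which is why PLS1 typically needs fewer components.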
A Principal Component Analysis of 39 Scientific Impact Measures
Bollen, Johan; Van de Sompel, Herbert
2009-01-01
Background The impact of scientific publications has traditionally been expressed in terms of citation counts. However, scientific activity has moved online over the past decade. To better capture scientific impact in the digital era, a variety of new impact measures has been proposed on the basis of social network analysis and usage log data. Here we investigate how these new measures relate to each other, and how accurately and completely they express scientific impact. Methodology We performed a principal component analysis of the rankings produced by 39 existing and proposed measures of scholarly impact that were calculated on the basis of both citation and usage log data. Conclusions Our results indicate that the notion of scientific impact is a multi-dimensional construct that cannot be adequately measured by any single indicator, although some measures are more suitable than others. The commonly used citation Impact Factor is not positioned at the core of this construct, but at its periphery, and should thus be used with caution. PMID:19562078
Analysis of contaminants on electronic components by reflectance FTIR spectroscopy
International Nuclear Information System (INIS)
Griffith, G.W.
1982-09-01
The analysis of electronic component contaminants by infrared spectroscopy is often a difficult process. Most of the contaminants are very small, which necessitates the use of microsampling techniques. Beam condensers will provide the required sensitivity, but most require that the sample be removed from the substrate before analysis. Since removing the sample can be difficult and time-consuming, it is usually an undesirable approach. Micro ATR work can also be exasperating due to the difficulty of positioning the sample at the correct place under the ATR plate in order to record a spectrum. This paper describes a modified reflection beam condenser which has been adapted to a Nicolet 7199 FTIR. The sample beam is directed onto the sample surface and reflected from the substrate back to the detector. A micropositioning XYZ stage and a close-focusing telescope are used to position the contaminant directly under the infrared beam. It is possible to analyze contaminants on 1 mm wide leads surrounded by an epoxy matrix using this device. Typical spectra of contaminants found on small circuit boards are included
CNN-Based Retinal Image Upscaling Using Zero Component Analysis
Nasonov, A.; Chesnakov, K.; Krylov, A.
2017-05-01
The aim of the paper is to obtain high-quality image upscaling for noisy images that are typical in medical image processing. A new training scenario for a convolutional neural network based image upscaling method is proposed. Its main idea is a novel dataset preparation method for deep learning. The dataset contains pairs of noisy low-resolution images and corresponding noiseless high-resolution images. To achieve better results at edges and textured areas, Zero Component Analysis is applied to these images. The upscaling results are compared with other state-of-the-art methods like DCCI, SI-3 and SRCNN on noisy medical ophthalmological images. Objective evaluation of the results confirms the high quality of the proposed method. Visual analysis shows that fine details and structures like blood vessels are preserved, the noise level is reduced, and no artifacts or non-existing details are added. These properties are essential in retinal diagnosis establishment, so the proposed algorithm is recommended for use in real medical applications.
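Zero Component Analysis (ZCA) whitening, which the pipeline applies to the training pairs, can be sketched as follows. This is a generic ZCA sketch; the epsilon and the stand-in data are placeholders:

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA whitening: decorrelate features while staying as close as
    possible to the original space (unlike plain PCA whitening, which rotates)."""
    Xc = X - X.mean(axis=0)
    C = np.cov(Xc, rowvar=False)
    eigval, eigvec = np.linalg.eigh(C)
    W = eigvec @ np.diag(1.0 / np.sqrt(eigval + eps)) @ eigvec.T  # symmetric whitening matrix
    return Xc @ W

# demo on correlated 4-D data standing in for flattened image patches
rng = np.random.default_rng(6)
A = rng.standard_normal((4, 4))
X = rng.standard_normal((2000, 4)) @ A.T
Z = zca_whiten(X)
```

Because the whitening matrix is symmetric, ZCA-whitened patches still look like (contrast-normalized) patches, which is why it is preferred over PCA whitening when preparing image data for a CNN.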
A survival analysis on critical components of nuclear power plants
International Nuclear Information System (INIS)
Durbec, V.; Pitner, P.; Riffard, T.
1995-06-01
Some tubes of heat exchangers of nuclear power plants may be affected by Primary Water Stress Corrosion Cracking (PWSCC) in highly stressed areas. These defects can shorten the lifetime of the component and lead to its replacement. In order to reduce the risk of cracking, a preventive remedial operation called shot peening was applied on the French reactors between 1985 and 1988. To assess and investigate the effects of shot peening, a statistical analysis was carried out on the tube degradation results obtained from in-service inspections, which are regularly conducted using non-destructive tests. The statistical method used is based on the Cox proportional hazards model, a powerful tool in the analysis of survival data, implemented in PROC PHREG, recently available in SAS/STAT. This technique has a number of major advantages, including the ability to deal with censored failure-time data and with the complication of time-dependent covariates. The paper focuses on the modelling and on a presentation of the results given by SAS. They provide estimates of how the relative risk of degradation changes after peening and indicate for which values of the prognostic factors analysed the treatment is likely to be most beneficial. (authors). 2 refs., 3 figs., 6 tabs
Boundary layer noise subtraction in hydrodynamic tunnel using robust principal component analysis.
Amailland, Sylvain; Thomas, Jean-Hugh; Pézerat, Charles; Boucheron, Romuald
2018-04-01
The acoustic study of propellers in a hydrodynamic tunnel is of paramount importance during the design process, but can involve significant difficulties due to the boundary layer noise (BLN). Indeed, advanced denoising methods are needed to recover the acoustic signal in case of poor signal-to-noise ratio. The technique proposed in this paper is based on the decomposition of the wall-pressure cross-spectral matrix (CSM) by taking advantage of both the low-rank property of the acoustic CSM and the sparse property of the BLN CSM. Thus, the algorithm belongs to the class of robust principal component analysis (RPCA), which derives from the widely used principal component analysis. If the BLN is spatially decorrelated, the proposed RPCA algorithm can blindly recover the acoustical signals even for negative signal-to-noise ratio. Unfortunately, in a realistic case, acoustic signals recorded in a hydrodynamic tunnel show that the noise may be partially correlated. A prewhitening strategy is then considered in order to take into account the spatially coherent background noise. Numerical simulations and experimental results show an improvement in terms of BLN reduction in the large hydrodynamic tunnel. The effectiveness of the denoising method is also investigated in the context of acoustic source localization.
Qian, Xi-Yuan; Liu, Ya-Min; Jiang, Zhi-Qiang; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H. Eugene
2015-06-01
When common factors strongly influence two power-law cross-correlated time series recorded in complex natural or social systems, using detrended cross-correlation analysis (DCCA) without considering these common factors will bias the results. We use detrended partial cross-correlation analysis (DPXA) to uncover the intrinsic power-law cross correlations between two simultaneously recorded time series in the presence of nonstationarity after removing the effects of other time series acting as common forces. The DPXA method is a generalization of the detrended cross-correlation analysis that takes into account partial correlation analysis. We demonstrate the method by using bivariate fractional Brownian motions contaminated with a fractional Brownian motion. We find that the DPXA is able to recover the analytical cross Hurst indices, and thus the multiscale DPXA coefficients are a viable alternative to the conventional cross-correlation coefficient. We demonstrate the advantage of the DPXA coefficients over the DCCA coefficients by analyzing contaminated bivariate fractional Brownian motions. We calculate the DPXA coefficients and use them to extract the intrinsic cross correlation between crude oil and gold futures by taking into consideration the impact of the U.S. dollar index. We develop the multifractal DPXA (MF-DPXA) method in order to generalize the DPXA method and investigate multifractal time series. We analyze multifractal binomial measures masked with strong white noises and find that the MF-DPXA method quantifies the hidden multifractal nature while the multifractal DCCA method fails.
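The partialling idea can be sketched by computing the detrended cross-correlation coefficient (Zebende's rho_DCCA) and comparing it with the same coefficient after regressing out a common factor. This is a simplified stand-in, using global OLS residuals rather than the paper's window-wise partial-correlation machinery, on invented test signals:

```python
import numpy as np

def _window_residuals(profile, s):
    """Linear detrending residuals in non-overlapping windows of s+1 points."""
    res = []
    for start in range(0, len(profile) - s, s):
        seg = profile[start:start + s + 1]
        t = np.arange(len(seg), dtype=float)
        res.append(seg - np.polyval(np.polyfit(t, seg, 1), t))
    return res

def dcca_coeff(x, y, s):
    """DCCA cross-correlation coefficient at scale s."""
    X = np.cumsum(x - np.mean(x))            # profiles
    Y = np.cumsum(y - np.mean(y))
    rx, ry = _window_residuals(X, s), _window_residuals(Y, s)
    fxy = np.mean([np.mean(a * b) for a, b in zip(rx, ry)])
    fxx = np.mean([np.mean(a * a) for a in rx])
    fyy = np.mean([np.mean(b * b) for b in ry])
    return fxy / np.sqrt(fxx * fyy)

def dpxa_coeff(x, y, z, s):
    """Partial variant: remove the common factor z by OLS before DCCA."""
    def resid(v):
        A = np.column_stack([z, np.ones_like(z)])
        beta, *_ = np.linalg.lstsq(A, v, rcond=None)
        return v - A @ beta
    return dcca_coeff(resid(x), resid(y), s)

# two series correlated only through a common factor z
rng = np.random.default_rng(7)
z = rng.standard_normal(4096)
x = z + rng.standard_normal(4096)
y = z + rng.standard_normal(4096)
rho_plain = dcca_coeff(x, y, 32)
rho_partial = dpxa_coeff(x, y, z, 32)
```

The plain coefficient is inflated by z, while the partial coefficient drops toward zero once the common force is removed, which is the qualitative behaviour the DPXA method exploits at every scale.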
Failure characteristic analysis of a component on standby state
International Nuclear Information System (INIS)
Shin, Sungmin; Kang, Hyungook
2013-01-01
Periodic operation of certain types of components, however, can accelerate aging effects, which increases component unavailability. For other types of components, the aging effect caused by operation can be ignored, so frequent operation can decrease component unavailability. Thus, to obtain optimal unavailability, the proper operation period and method should be studied considering the failure characteristics of each component. Component failure information is given according to the main causes of failure as they evolve over time. However, to obtain the optimal unavailability, the proper interval of operation for inspection should be decided considering the time-dependent and time-independent causes together. According to this study, a gradually shortening inspection interval yields better component unavailability than a fixed period.
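The trade-off described can be made concrete with a toy unavailability model: a longer inspection interval T raises the chance of an undetected standby failure (≈ λT/2), while a shorter one costs more test downtime and test-induced wear. All rates below are hypothetical and the model is deliberately simplified:

```python
import numpy as np

lam = 1e-4    # standby failure rate per hour (hypothetical)
tau = 2.0     # hours of unavailability per inspection (hypothetical)
wear = 1.0    # unavailability-equivalent cost of test-induced wear per test (hypothetical)

def mean_unavailability(T):
    """Average unavailability for inspection interval T (hours):
    undetected-failure term plus per-test downtime and wear terms."""
    return 0.5 * lam * T + (tau + wear) / T

T = np.linspace(10, 2000, 2000)
U = mean_unavailability(T)
T_opt = float(T[np.argmin(U)])
# analytic optimum: dU/dT = 0  ->  T* = sqrt(2 (tau + wear) / lam)
```

Component aging makes λ grow over the component's life, which is why a gradually shortening interval, tracking the rising failure rate, beats a single fixed optimum.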
Genetic analysis of partial resistance to basal stem rot (Sclerotinia sclerotiorum in sunflower
Directory of Open Access Journals (Sweden)
Amouzadeh Masoumeh
2013-01-01
Full Text Available Basal stem rot, caused by Sclerotinia sclerotiorum (Lib.) de Bary, is one of the major diseases of sunflower (Helianthus annuus L.) in the world. Quantitative trait loci (QTLs) implicated in partial resistance to basal stem rot disease were identified using 99 recombinant inbred lines (RILs) from the cross between the sunflower parental lines PAC2 and RHA266. The study was undertaken in a completely randomized design with three replications under controlled conditions. The RILs and their parental lines were inoculated with a moderately aggressive isolate of S. sclerotiorum (SSKH41). Resistance to disease was evaluated by measuring the percentage of necrotic area three days after inoculation. QTLs were mapped using an updated high-density SSR and SNP linkage map. ANOVA showed significant differences among sunflower lines for resistance to basal stem rot (P≤0.05). The frequency distribution of lines for susceptibility to disease showed a continuous pattern. Composite interval mapping analysis revealed 5 QTLs for percentage of necrotic area, localized on linkage groups 1, 3, 8, 10 and 17. The sign of the additive effect was positive for all 5 QTLs, suggesting that the additive allele for partial resistance to basal stem rot came from the paternal line (RHA266). The phenotypic variance explained by the QTLs (R^2) ranged from 0.5 to 3.16%. The identified genes (HUCL02246_1, GST and POD) and SSR markers (ORS338 and SSL3) encompassing the QTLs for partial resistance to basal stem rot could be good candidates for marker-assisted selection.
Nucleon-nucleon partial-wave analysis to 1100 MeV
International Nuclear Information System (INIS)
Arndt, R.A.; Hyslop, J.S. III; Roper, L.D.
1987-01-01
Comprehensive analyses of nucleon-nucleon elastic-scattering data below 1100 MeV laboratory kinetic energy are presented. The data base from which an energy-dependent solution and 22 single-energy solutions are obtained consists of 7223 pp and 5474 np data. A resonancelike structure is found to occur in the ¹D₂, ³F₃, ³P₂-³F₂, and ³F₄-³H₄ partial waves; this behavior is associated with poles in the complex energy plane. The pole positions and residues are obtained by analytic continuation of the "production" piece of the T matrix obtained in the energy-dependent solution. The new phases differ somewhat from previously published VPI&SU solutions, especially in I = 0 waves above 500 MeV, where np data are very sparse. The partial waves are, however, based upon a significantly larger data base and reflect correspondingly smaller errors. The full data base and solution files can be obtained through a computer scattering analysis interactive dial-in (SAID) system at VPI&SU, which also exists at many institutions around the world and which can be transferred to any site with a suitable computer system. The SAID system can be used to modify solutions, plan experiments, and obtain any of the multitude of predictions which derive from partial-wave analyses of the world data base.
Izadi Najafabadi, Mohammad
2017-11-06
A relatively high level of stratification (qualitatively: lack of homogeneity) is one of the main advantages of partially premixed combustion over the homogeneous charge compression ignition concept. Stratification can smooth the heat release rate and improve the controllability of combustion. In order to compare stratification levels of different partially premixed combustion strategies or other combustion concepts, an objective and meaningful definition of “stratification level” is required. Such a definition is currently lacking; qualitative/quantitative definitions in the literature cannot properly distinguish various levels of stratification. The main purpose of this study is to objectively define combustion stratification (not to be confused with fuel stratification) based on high-speed OH* chemiluminescence imaging, which is assumed to provide spatial information regarding heat release. Stratification essentially being equivalent to spatial structure, we base our definition on two-dimensional Fourier transforms of photographs of OH* chemiluminescence. OH* bandpass imaging was performed on a light-duty optical diesel engine. Four experimental points are evaluated, with injection timings in the homogeneous regime as well as in the stratified partially premixed combustion regime. Two-dimensional Fourier transforms translate these chemiluminescence images into a range of spatial frequencies. The frequency information is used to define combustion stratification, using a novel normalization procedure. The results indicate that this new definition, based on Fourier analysis of OH* bandpass images, overcomes the drawbacks of previous definitions used in the literature and is a promising method to compare the level of combustion stratification between different experiments.
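The Fourier-based idea described above can be illustrated with a toy sketch (this is not the paper's normalization procedure; the synthetic images, the cutoff frequency, and the index definition are invented for illustration): spectral energy concentrated at low spatial frequencies indicates large-scale structure, i.e. stratification.

```python
import numpy as np

def stratification_index(image, cutoff=0.1):
    # Share of (mean-removed) spectral energy at radial spatial
    # frequencies below `cutoff` (cycles/pixel). Hypothetical metric,
    # not the paper's normalization.
    f = np.fft.fftshift(np.fft.fft2(image - image.mean()))
    power = np.abs(f) ** 2
    ky = np.fft.fftshift(np.fft.fftfreq(image.shape[0]))
    kx = np.fft.fftshift(np.fft.fftfreq(image.shape[1]))
    r = np.hypot(*np.meshgrid(kx, ky))  # radial spatial frequency grid
    total = power.sum()
    return 0.0 if total == 0 else power[r <= cutoff].sum() / total

# A "stratified" field: one large-scale sinusoidal band structure
y = np.linspace(0, 2 * np.pi, 64)
stratified = 1 + 0.5 * np.sin(y)[:, None] * np.ones((1, 64))
# A fine-grained field: white noise spreads energy over all frequencies
rng = np.random.default_rng(0)
noisy = rng.normal(size=(64, 64))

si_strat = stratification_index(stratified)
si_noise = stratification_index(noisy)
```

A field with only large-scale structure scores close to 1, while a spatially uncorrelated field scores roughly the area fraction of the low-frequency disk.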
International Nuclear Information System (INIS)
Leli, D.A.; Katholi, C.R.; Hazelrig, J.B.; Falgout, J.C.; Hannay, H.J.; Wilson, E.M.; Wills, E.L.; Halsey, J.H. Jr.
1985-01-01
An initial assessment of the differential sensitivity of total versus partial curve analysis in estimating task-related focal changes in cortical blood flow measured by the ¹³³Xe inhalation technique was accomplished by comparing the patterns during the performance of two sensorimotor tasks by normal subjects. The validity of these patterns was evaluated by comparing them to the activation patterns expected from activation studies with the intra-arterial technique and the patterns expected from neuropsychological research literature. Subjects were 10 young adult nonsmoking healthy male volunteers. They were administered two tasks having identical sensory and cognitive components but different response requirements (oral versus manual). The regional activation patterns produced by the tasks varied with the method of curve analysis. The activation produced by the two tasks was very similar to that predicted from the research literature only for total curve analysis. To the extent that the predictions are correct, these data suggest that the ¹³³Xe inhalation technique is more sensitive to regional flow changes when flow parameters are estimated from the total head curve. The utility of the total head curve analysis will be strengthened if similar sensitivity is demonstrated in future studies assessing normal subjects and patients with neurological and psychiatric disorders.
DEFF Research Database (Denmark)
Garcia, Emanuel; Klaas, Ilka Christine; Amigo Rubio, Jose Manuel
2014-01-01
Lameness is prevalent in dairy herds. It causes decreased animal welfare and leads to higher production costs. This study explored data from an automatic milking system (AMS) to model on-farm gait scoring from a commercial farm. A total of 88 cows were gait scored once per week, for two 5-wk periods......). The reference gait scoring error was estimated in the first week of the study and was, on average, 15%. Two partial least squares discriminant analysis models were fitted to parity 1 and parity 2 groups, respectively, to assign the lameness class according to the predicted probability of being lame (score 3...
On continuous ambiguities in model-independent partial wave analysis - 1
International Nuclear Information System (INIS)
Nikitin, I.N.
1995-01-01
The problem of reconstructing an amplitude from a given angular distribution is considered. The solution of this problem is not unique. The class of amplitudes corresponding to one and the same angular distribution forms a region in projection onto a finite set of spherical harmonics. An explicit parametrization of the boundary of the region is obtained. The shape of the region of ambiguities is studied in a particular example. A scheme of partial-wave analysis, which describes all solutions within the limits of the region, is proposed. 5 refs., 5 figs
Thermodynamic Equilibria and Extrema Analysis of Attainability Regions and Partial Equilibria
Gorban, Alexander N; Kaganovich, Boris M; Keiko, Alexandre V; Shamansky, Vitaly A; Shirkalin, Igor A
2006-01-01
This book discusses mathematical models that are based on the concepts of classical equilibrium thermodynamics. They are intended for the analysis of possible results of diverse natural and production processes. Unlike the traditional models, these allow one to view the achievable set of partial equilibria subject to constraints on kinetics and on energy and mass exchange, and to determine states of the studied systems that are of interest to the researcher. Application of the suggested models in chemical technology, energy and ecology is illustrated with examples.
Major component analysis of dynamic networks of physiologic organ interactions
International Nuclear Information System (INIS)
Liu, Kang K L; Ma, Qianli D Y; Ivanov, Plamen Ch; Bartsch, Ronny P
2015-01-01
The human organism is a complex network of interconnected organ systems, where the behavior of one system affects the dynamics of other systems. Identifying and quantifying dynamical networks of diverse physiologic systems under varied conditions is a challenge due to the complexity in the output dynamics of the individual systems and the transient and nonlinear characteristics of their coupling. We introduce a novel computational method based on the concept of time delay stability and major component analysis to investigate how organ systems interact as a network to coordinate their functions. We analyze a large database of continuously recorded multi-channel physiologic signals from healthy young subjects during night-time sleep. We identify a network of dynamic interactions between key physiologic systems in the human organism. Further, we find that each physiologic state is characterized by a distinct network structure with different relative contribution from individual organ systems to the global network dynamics. Specifically, we observe a gradual decrease in the strength of coupling of heart and respiration to the rest of the network with transition from wake to deep sleep, and in contrast, an increased relative contribution to network dynamics from chin and leg muscle tone and eye movement, demonstrating a robust association between network topology and physiologic function. (paper)
Sensor Failure Detection of FASSIP System using Principal Component Analysis
Sudarno; Juarsa, Mulya; Santosa, Kussigit; Deswandri; Sunaryo, Geni Rina
2018-02-01
In the Fukushima Daiichi nuclear reactor accident in Japan, the damage to the core and pressure vessel was caused by the failure of the active cooling system (the diesel generators were inundated by the tsunami). Research on passive cooling systems for nuclear power plants is therefore being performed to improve the safety of nuclear reactors. The FASSIP system (Passive System Simulation Facility) is an installation used to study the characteristics of passive cooling systems at nuclear power plants. Accurate sensor measurement in the FASSIP system is essential, because it is the basis for determining the characteristics of a passive cooling system. In this research, a sensor failure detection method for the FASSIP system is developed, so that indications of sensor failure can be detected early. The method used is Principal Component Analysis (PCA) to reduce the dimension of the sensor data, with the Squared Prediction Error (SPE) and Hotelling's T² statistic as criteria for detecting sensor failure indications. The results show that the PCA method is capable of detecting a failure in any sensor.
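A minimal sketch of PCA-based sensor fault detection with SPE and Hotelling's T², as described in the abstract (the data, channel count, and percentile-based control limits are invented for illustration; practical implementations usually use theoretical control limits):

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated "normal operation": 5 correlated sensor channels driven
# by 2 latent process variables plus measurement noise
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 5))
X = latent @ mixing + 0.05 * rng.normal(size=(500, 5))

mu = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
k = 2                                # retained principal components
P = Vt[:k].T                         # loadings (5 x k)
lam = s[:k] ** 2 / (len(X) - 1)      # variances of the retained PCs

def spe_t2(x):
    # SPE = squared residual outside the PC subspace;
    # T^2 = Mahalanobis distance inside it
    xc = x - mu
    t = xc @ P
    resid = xc - t @ P.T
    return np.sum(resid ** 2), np.sum(t ** 2 / lam)

# Empirical 99th-percentile control limits from the training data
train_stats = np.array([spe_t2(x) for x in X])
spe_lim, t2_lim = np.percentile(train_stats, 99, axis=0)

normal_spe, _ = spe_t2(X[0])
faulty = X[0].copy()
faulty[3] += 5.0                     # biased/stuck reading on sensor 3
fault_spe, fault_t2 = spe_t2(faulty)
```

A single-sensor bias breaks the learned correlation structure, so it shows up mainly in the SPE.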
A meta-analysis of executive components of working memory.
Nee, Derek Evan; Brown, Joshua W; Askren, Mary K; Berman, Marc G; Demiralp, Emre; Krawitz, Adam; Jonides, John
2013-02-01
Working memory (WM) enables the online maintenance and manipulation of information and is central to intelligent cognitive functioning. Much research has investigated executive processes of WM in order to understand the operations that make WM "work." However, there is yet little consensus regarding how executive processes of WM are organized. Here, we used quantitative meta-analysis to summarize data from 36 experiments that examined executive processes of WM. Experiments were categorized into 4 component functions central to WM: protecting WM from external distraction (distractor resistance), preventing irrelevant memories from intruding into WM (intrusion resistance), shifting attention within WM (shifting), and updating the contents of WM (updating). Data were also sorted by content (verbal, spatial, object). Meta-analytic results suggested that rather than dissociating into distinct functions, 2 separate frontal regions were recruited across diverse executive demands. One region was located dorsally in the caudal superior frontal sulcus and was especially sensitive to spatial content. The other was located laterally in the midlateral prefrontal cortex and showed sensitivity to nonspatial content. We propose that dorsal-"where"/ventral-"what" frameworks that have been applied to WM maintenance also apply to executive processes of WM. Hence, WM can largely be simplified to a dual selection model.
Principal Component Analysis of Process Datasets with Missing Values
Directory of Open Access Journals (Sweden)
Kristen A. Severson
2017-07-01
Full Text Available Datasets with missing values arising from causes such as sensor failure, inconsistent sampling rates, and merging data from different systems are common in the process industry. Methods for handling missing data typically operate during data pre-processing, but can also occur during model building. This article considers missing data within the context of principal component analysis (PCA), which is a method originally developed for complete data that has widespread industrial application in multivariate statistical process control. Due to the prevalence of missing data and the success of PCA for handling complete data, several PCA algorithms that can act on incomplete data have been proposed. Here, algorithms for applying PCA to datasets with missing values are reviewed. A case study is presented to demonstrate the performance of the algorithms and suggestions are made with respect to choosing which algorithm is most appropriate for particular settings. An alternating algorithm based on the singular value decomposition achieved the best results in the majority of test cases involving process datasets.
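The alternating SVD idea can be sketched as follows (a simplified illustration, not the article's exact algorithm: missing entries are initialized with column means, then repeatedly overwritten with a low-rank reconstruction until the fill values converge):

```python
import numpy as np

def pca_with_missing(X, n_components, n_iter=200, tol=1e-8):
    # Alternating SVD imputation: (1) rank-k SVD of the filled,
    # mean-centered matrix, (2) refill missing entries from the
    # low-rank reconstruction, repeat until convergence.
    X = np.array(X, dtype=float)
    mask = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[mask] = col_means[np.where(mask)[1]]
    prev = X[mask].copy()
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
        X_hat = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components] + mu
        X[mask] = X_hat[mask]
        if np.max(np.abs(X[mask] - prev)) < tol:
            break
        prev = X[mask].copy()
    return X, Vt[:n_components]

# Exactly rank-1 data with one entry removed: the iteration should
# recover the deleted value
rng = np.random.default_rng(2)
u, v = rng.normal(size=10), rng.normal(size=4)
full = np.outer(u, v)
obs = full.copy()
obs[3, 2] = np.nan
filled, loadings = pca_with_missing(obs, n_components=1)
```

Observed entries are never modified; only the missing positions are iteratively re-estimated.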
Finite element elastic-plastic analysis of LMFBR components
International Nuclear Information System (INIS)
Levy, A.; Pifko, A.; Armen, H. Jr.
1978-01-01
The present effort involves the development of computationally efficient finite element methods for accurately predicting the isothermal elastic-plastic three-dimensional response of thick and thin shell structures subjected to mechanical and thermal loads. This work will be used as the basis for further development of analytical tools to be used to verify the structural integrity of liquid metal fast breeder reactor (LMFBR) components. The methods presented here have been implemented into the three-dimensional solid element module (HEX) of the Grumman PLANS finite element program. These methods include the use of optimal stress points as well as a variable number of stress points within an element. This allows monitoring the stress history at many points within an element and hence provides an accurate representation of the elastic-plastic boundary using a minimum number of degrees of freedom. Also included is an improved thermal stress analysis capability in which the temperature variation and corresponding thermal strain variation are represented by the same functional form as the displacement variation. Various problems are used to demonstrate these improved capabilities. (Auth.)
National Research Council Canada - National Science Library
Qi, Yuan
2000-01-01
In this thesis, we propose two new machine learning schemes, a subband-based Independent Component Analysis scheme and a hybrid Independent Component Analysis/Support Vector Machine scheme, and apply...
Ma, W; Zhang, T-F; Lu, P; Lu, S H
2014-01-01
Breast cancer is categorized into two broad groups: estrogen receptor positive (ER+) and ER negative (ER-) groups. A previous study proposed that, under trastuzumab-based neoadjuvant chemotherapy, tumor initiating cell (TIC) featured ER- tumors respond better than ER+ tumors. Exploration of the molecular difference between these two groups may help in developing new therapeutic strategies, especially for ER- patients. Using gene expression profiles from the Gene Expression Omnibus (GEO) database, we performed partial least squares (PLS) based analysis, which is more sensitive than common variance/regression analysis. We acquired 512 differentially expressed genes. Four pathways were found to be enriched with differentially expressed genes, involving the immune system, metabolism, and genetic information processing. Network analysis identified five hub genes with degrees higher than 10, including APP, ESR1, SMAD3, HDAC2, and PRKAA1. Our findings provide new understanding of the molecular difference between TIC featured ER- and ER+ breast tumors, with the hope of supporting therapeutic studies.
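As a rough illustration of PLS-based screening of differentially expressed genes (synthetic data; the first PLS weight vector reduces to the normalized covariance between the centered expression matrix and the class label, which is all this sketch uses — the study's actual pipeline is more elaborate):

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_genes = 60, 200
y = np.repeat([0.0, 1.0], n_samples // 2)  # 0 = ER+, 1 = ER- (illustrative labels)
X = rng.normal(size=(n_samples, n_genes))  # synthetic expression matrix
X[:, :5] += y[:, None] * 1.5               # first 5 genes truly differential

# First NIPALS PLS weight vector: normalized covariance X^T y after
# centering. Ranking genes by |w| is a minimal PLS-based screen.
Xc = X - X.mean(axis=0)
yc = y - y.mean()
w = Xc.T @ yc
w /= np.linalg.norm(w)
top10 = set(np.argsort(np.abs(w))[::-1][:10].tolist())
```

Genes carrying a real class difference receive the largest first-component weights and dominate the top of the ranking.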
Partial wave analysis of the KK̄π system in the D and E/iota region
International Nuclear Information System (INIS)
Chung, S.U.; Fernow, R.; Kirk, H.
1985-01-01
A partial wave analysis and a Dalitz plot analysis of high-statistics data from the reaction π⁻p → K⁺K_Sπ⁻n at 8.0 GeV/c show that the D(1285) is a J^PG = 1⁺⁺ state and the E(1420) a J^PG = 0⁻⁺ state, both with a substantial δπ decay mode. The 1⁺⁺ K*K̄ wave exhibits a rapid rise near threshold but no evidence of a resonance in the E region. The assignment of J^PG = 0⁻⁺ to the E is confirmed by a Dalitz-plot analysis of the reaction pp → K⁺K_Sπ⁻X⁰. 11 refs., 5 figs
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy
Energy Technology Data Exchange (ETDEWEB)
Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.jp; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro [Department of Radiation Oncology and Image-applied Therapy, Kyoto University, 54 Shogoin-Kawaharacho, Sakyo, Kyoto 606-8507 (Japan)
2016-09-15
Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
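The ANOVA-based computation for the balanced one-factor random effect model can be sketched as follows (synthetic setup errors; the "conventional" estimate shown for comparison is the standard deviation of per-patient means, which inflates the systematic component by the random variance divided by the number of fractions):

```python
import numpy as np

rng = np.random.default_rng(4)
a, n = 200, 5                  # patients, fractions per patient (balanced design)
sigma_sys, sigma_rand = 2.0, 3.0   # true components (mm), invented for the demo
b = rng.normal(0, sigma_sys, size=(a, 1))        # per-patient systematic error
x = b + rng.normal(0, sigma_rand, size=(a, n))   # per-fraction setup error

patient_means = x.mean(axis=1)
msb = n * patient_means.var(ddof=1)                          # between-patient mean square
msw = ((x - patient_means[:, None]) ** 2).sum() / (a * (n - 1))  # within-patient mean square

rand_anova = np.sqrt(msw)                          # random component estimate
sys_anova = np.sqrt(max((msb - msw) / n, 0.0))     # systematic component estimate
sys_conventional = patient_means.std(ddof=1)       # SD of means: biased upward
```

Since sys_conventional² = sys_anova² + msw/n, the conventional estimate always exceeds the ANOVA one, and the gap grows as the number of fractions per patient shrinks (the hypofractionated case noted in the abstract).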
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy
International Nuclear Information System (INIS)
Matsuo, Yukinori; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro
2016-01-01
Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
Trimming of mammalian transcriptional networks using network component analysis
Directory of Open Access Journals (Sweden)
Liao James C
2010-10-01
Full Text Available Abstract Background Network Component Analysis (NCA) has been used to deduce the activities of transcription factors (TFs) from gene expression data and the TF-gene binding relationship. However, the TF-gene interaction varies in different environmental conditions and tissues, but such information is rarely available and cannot be predicted simply by motif analysis. Thus, it is beneficial to identify key TF-gene interactions under the experimental condition based on transcriptome data. Such information would be useful in identifying key regulatory pathways and gene markers of TFs in further studies. Results We developed an algorithm to trim network connectivity such that the important regulatory interactions between the TFs and the genes were retained and the regulatory signals were deduced. Theoretical studies demonstrated that the regulatory signals were accurately reconstructed even in the case where only three independent transcriptome datasets were available. At least 80% of the main target genes were correctly predicted in the extreme condition of high noise level and small number of datasets. Our algorithm was tested with transcriptome data taken from mice under rapamycin treatment. The initial network topology from the literature contains 70 TFs, 778 genes, and 1423 edges between the TFs and genes. Our method retained 1074 edges (i.e., 75% of the original edge number) and identified 17 TFs as being significantly perturbed under the experimental condition. Twelve of these TFs are involved in MAPK signaling or myeloid leukemia pathways defined in the KEGG database, or are known to physically interact with each other. Additionally, four of these TFs, which are Hif1a, Cebpb, Nfkb1, and Atf1, are known targets of rapamycin. Furthermore, the trimmed network was able to predict Eno1 as an important target of Hif1a; this key interaction could not be detected without trimming the regulatory network. Conclusions The advantage of our new algorithm
Flood Frequency Analysis For Partial Duration Series In Ganjiang River Basin
zhangli, Sun; xiufang, Zhu; yaozhong, Pan
2016-04-01
Accurate estimation of flood frequency is key to effective, nationwide flood damage abatement programs. The partial duration series (PDS) method is widely used in hydrologic studies because it considers all events above a certain threshold level as compared to the annual maximum series (AMS) method, which considers only the annual maximum value. However, the PDS has a drawback in that it is difficult to define the thresholds and maintain an independent and identical distribution of the partial duration time series; this drawback is discussed in this paper. The Ganjiang River is the seventh largest tributary of the Yangtze River, the longest river in China. The Ganjiang River covers a drainage area of 81,258 km2 at the Wanzhou hydrologic station as the basin outlet. In this work, 56 years of daily flow data (1954-2009) from the Wanzhou station were used to analyze flood frequency, and the Pearson-III model was employed as the hydrologic probability distribution. Generally, three tasks were accomplished: (1) the threshold of PDS by percentile rank of daily runoff was obtained; (2) trend analysis of the flow series was conducted using PDS; and (3) flood frequency analysis was conducted for partial duration flow series. The results showed a slight upward trend of the annual runoff in the Ganjiang River basin. The maximum flow with a 0.01 exceedance probability (corresponding to a 100-year flood peak under stationary conditions) was 20,000 m3/s, while that with a 0.1 exceedance probability was 15,000 m3/s. These results will serve as a guide to hydrological engineering planning, design, and management for policymakers and decision makers associated with hydrology.
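The threshold-by-percentile step and a crude declustering of the partial duration series can be sketched as follows (synthetic daily flows; for brevity an empirical quantile stands in for the Pearson-III fit used in the paper, and the 7-day independence gap is an invented rule of thumb):

```python
import numpy as np

def pot_events(flow, threshold, min_gap=7):
    # Peaks-over-threshold extraction: split exceedance days into
    # clusters separated by at least `min_gap` days, keep each
    # cluster's maximum as one independent flood event.
    idx = np.where(flow > threshold)[0]
    if idx.size == 0:
        return np.array([])
    peaks, start = [], 0
    for k in range(1, idx.size + 1):
        if k == idx.size or idx[k] - idx[k - 1] > min_gap:
            peaks.append(flow[idx[start:k]].max())
            start = k
    return np.array(peaks)

rng = np.random.default_rng(5)
years = 56
daily = rng.gamma(shape=2.0, scale=500.0, size=years * 365)  # synthetic flow, m3/s

threshold = np.percentile(daily, 99)   # threshold by percentile rank of daily runoff
peaks = pot_events(daily, threshold)

# With lambda events/year, the T-year flood is the peak quantile with
# exceedance probability 1/(lambda*T); here taken empirically.
lam = peaks.size / years
T = 100
level_100yr = np.quantile(peaks, 1.0 - 1.0 / (lam * T))
```

In practice the peak sample would be fitted with a parametric distribution (Pearson-III in the paper) rather than read off empirically, since the 100-year level extrapolates beyond a 56-year record.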
Directory of Open Access Journals (Sweden)
Marvin R Diaz
Full Text Available Cerebellar granule cells (CGNs) are one of many neurons that express phasic and tonic GABAergic conductances. Although it is well established that Golgi cells (GoCs) mediate phasic GABAergic currents in CGNs, their role in mediating tonic currents in CGNs (CGN-I(tonic)) is controversial. Earlier studies suggested that GoCs mediate a component of CGN-I(tonic) that is present only in preparations from immature rodents. However, more recent studies have detected a GoC-dependent component of CGN-I(tonic) in preparations from mature rodents. In addition, acute exposure to ethanol was shown to potentiate the GoC component of CGN-I(tonic) and to induce a parallel increase in spontaneous inhibitory postsynaptic current frequency at CGNs. Here, we tested the hypothesis that these effects of ethanol on GABAergic transmission in CGNs are mediated by inhibition of the Na(+)/K(+)-ATPase. We used whole-cell patch-clamp electrophysiology techniques in cerebellar slices of male rats (postnatal day 23-30). Under these conditions, we reliably detected a GoC-dependent component of CGN-I(tonic) that could be blocked with tetrodotoxin. Further analysis revealed a positive correlation between basal sIPSC frequency and the magnitude of the GoC-dependent component of CGN-I(tonic). Inhibition of the Na(+)/K(+)-ATPase with a submaximal concentration of ouabain partially mimicked the ethanol-induced potentiation of both phasic and tonic GABAergic currents in CGNs. Modeling studies suggest that selective inhibition of the Na(+)/K(+)-ATPase in GoCs can, in part, explain these effects of ethanol. These findings establish a novel mechanism of action of ethanol on GABAergic transmission in the central nervous system.
DNA damage focus analysis in blood samples of minipigs reveals acute partial body irradiation.
Directory of Open Access Journals (Sweden)
Andreas Lamkowski
Full Text Available Radiation accidents frequently involve acute high dose partial body irradiation leading to victims with radiation sickness and cutaneous radiation syndrome that involves radiation-induced cell death. Cells that are not lethally hit seek to repair ionizing radiation (IR)-induced damage, albeit at the expense of an increased risk of mutation and tumor formation due to misrepair of IR-induced DNA double strand breaks (DSBs). The response to DNA damage includes phosphorylation of histone H2AX in the vicinity of DSBs, creating foci in the nucleus whose enumeration can serve as a radiation biodosimeter. Here, we investigated γH2AX and DNA repair foci in peripheral blood lymphocytes of Göttingen minipigs that experienced acute partial body irradiation (PBI) of the upper lumbar region with 49 Gy (± 6%) Co-60 γ-rays. Blood samples taken 4, 24 and 168 hours post PBI were subjected to γ-H2AX, 53BP1 and MRE11 focus enumeration. Peripheral blood lymphocytes (PBL) of the 49 Gy partial body irradiated minipigs were found to display 1-8 DNA damage foci/cell. These PBL values fall significantly below the high foci numbers observed in keratinocyte nuclei of the directly γ-irradiated minipig skin regions, indicating a limited residence time of PBL in the exposed tissue volume. Nonetheless, PBL samples obtained 4 h post IR contained on average 2.2% of cells displaying a pan-γH2AX signal, suggesting that these received a higher IR dose. Moreover, dispersion analysis indicated partial body irradiation for all 13 minipigs at 4 h post IR. While dose reconstruction using γH2AX DNA repair foci in lymphocytes after in vivo PBI represents a challenge, the DNA damage focus assay may serve as a rapid, first line indicator of radiation exposure. The occurrence of PBLs with pan-γH2AX staining and of cells with relatively high foci numbers that skew a Poisson distribution may be taken as an indicator of acute high dose partial body irradiation, particularly when samples are available
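The dispersion analysis mentioned above rests on the fact that foci counts after homogeneous exposure are approximately Poisson distributed (variance ≈ mean), whereas a mixture of exposed and unexposed lymphocytes after partial body irradiation is overdispersed. A toy sketch (all counts, doses, and mixing fractions are invented for illustration):

```python
import numpy as np

def dispersion_index(foci):
    # Variance-to-mean ratio of per-cell foci counts: ~1 for a Poisson
    # sample (uniform exposure), >1 for an overdispersed mixture of
    # exposed and unexposed cells (partial body irradiation).
    return foci.var(ddof=1) / foci.mean()

rng = np.random.default_rng(6)
n_cells = 500
# Uniform whole-body exposure: every cell draws from the same mean
homogeneous = rng.poisson(3.0, size=n_cells)
# Partial-body exposure: ~40% of sampled lymphocytes passed through
# the irradiated volume (high mean), the rest were spared (low mean)
exposed = rng.random(n_cells) < 0.4
partial = rng.poisson(np.where(exposed, 6.0, 0.2))

di_homo = dispersion_index(homogeneous)
di_part = dispersion_index(partial)
```

A dispersion index far above 1 (formally tested against the chi-square distribution of (n-1)·s²/x̄ in biodosimetry practice) flags inhomogeneous, i.e. partial-body, exposure.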
Analysis of Minor Component Segregation in Ternary Powder Mixtures
Directory of Open Access Journals (Sweden)
Asachi Maryam
2017-01-01
Full Text Available In many powder handling operations, inhomogeneity in powder mixtures caused by segregation could have a significant adverse impact on the quality as well as the economics of production. Segregation of a minor component consisting of a highly active substance could have serious deleterious effects; an example is the segregation of enzyme granules in detergent powders. In this study, the effects of particle properties and bulk cohesion on the segregation tendency of the minor component are analysed. The minor component is made sticky without adversely affecting the flowability of the samples. The extent of segregation is evaluated using image processing of photographic records taken of the front face of the heap after the pouring process. The optimum average sieve cut size of the components, for which segregation can be reduced, is reported. It is also shown that the extent of segregation is significantly reduced by applying a thin layer of liquid to the surfaces of the minor component, promoting an ordered mixture.
Kovács, Endre R; Benko, Mária
2009-03-01
Partial genome characterisation of a novel adenovirus, found recently in organ samples of multiple species of dead birds of prey, was carried out by sequence analysis of PCR-amplified DNA fragments. The virus, named raptor adenovirus 1 (RAdV-1), was originally detected by a nested PCR method with consensus primers targeting the adenoviral DNA polymerase gene. Phylogenetic analysis of the deduced amino acid sequence of the small PCR product implied that a new siadenovirus type was present in the samples. Since virus isolation attempts remained unsuccessful, further characterisation of this putative novel siadenovirus was carried out using PCR on the infected organ samples. The DNA sequence of the central part of the RAdV-1 genome, encompassing nine full (pTP, 52K, pIIIa, III, pVII, pX, pVI, hexon, protease) and two partial (DNA polymerase and DBP) genes and exceeding 12 kilobase pairs in size, was determined. Phylogenetic tree reconstructions, based on several genes, unambiguously confirmed the preliminary classification of RAdV-1 as a new species within the genus Siadenovirus. Further study of RAdV-1 is of interest since it represents a rare adenovirus genus of as yet undetermined host origin.
International Nuclear Information System (INIS)
Chen, S.; Liu, H.-L.; Yang Yihong; Hsu, Y.-Y.; Chuang, K.-S.
2006-01-01
Quantification of cerebral blood flow (CBF) with dynamic susceptibility contrast (DSC) magnetic resonance imaging (MRI) requires the determination of the arterial input function (AIF). The segmentation of surrounding tissue by manual selection is error-prone due to partial volume artifacts. Independent component analysis (ICA) has the advantage of automatically decomposing the signals into interpretable components. Recently, the group ICA technique has been applied to fMRI studies and showed reduced variance caused by motion artifacts and noise. In this work, we investigated the feasibility and efficacy of using the group ICA technique to extract the AIF. Both simulated and in vivo data were analyzed in this study. The simulation data of eight phantoms were generated using randomized lesion locations and time-activity curves. The clinical data were obtained from spin-echo EPI MR scans performed in seven normal subjects. The group ICA technique was applied by concatenating the data across the seven subjects. The AIFs were calculated from the weighted average of the signals in the region selected by ICA. Preliminary results of this study showed that the group ICA technique could not extract accurate AIF information from regions around the vessel. The mismatched locations of vessels within the group reduced the benefits of the group study.
Through-flow analysis of steam turbines operating under partial admission
International Nuclear Information System (INIS)
Delabriere, H.; Werthe, J.M.
1993-05-01
In order to produce electric energy with improved efficiency, Electricite de France has to check the performance of equipment proposed by manufacturers. In the specific field of steam turbines, one of the main tools of analysis is the quasi 3D through-flow computer code CAPTUR, which enables the calculation of all the aerothermodynamic parameters in a steam turbine. The latest development of CAPTUR is its extension to the calculation of the flow within a turbine operating under partial admission. For such turbines, it is now possible to calculate an internal flow field, and determine the efficiency, much more accurately than with previous methods, which consist of an arbitrary efficiency correction applied to an averaged 1D flow calculation. From the aerodynamic point of view, partial admission involves specific losses in the first stage, then expansion and turbulent mixing just downstream of the first stage. Losses in the first stage are of very different types: windage, pumping, and expansion at the ends of an admission sector. Their values have been estimated with the help of experimental results and then expressed as a slow-down coefficient applied to the relative velocity at the blade outlet. As for the flow downstream of the first stage, a computational analysis has been made with specific 2D and 3D codes; this analysis led to the numerical treatment established in the CAPTUR code. Some problems had to be solved to make a quasi 3D formulation, which averages in the azimuthal direction and uses a streamline curvature method, compatible with an inherently 3D phenomenon. Certain limitations on the operating conditions were adopted at first, but a generalization is at hand. The calculation of a nuclear HP steam turbine operating under partial admission has been performed. Calculation results are in good agreement with test results, especially as regards the expansion line along the stages. The code CAPTUR will be particularly useful for the calculation
Organizational Design Analysis of Fleet Readiness Center Southwest Components Department
National Research Council Canada - National Science Library
Montes, Jose F
2007-01-01
.... The purpose of this MBA Project is to analyze the proposed organizational design elements of the FRCSW Components Department that resulted from the integration of the Naval Aviation Depot at North Island (NADEP N.I...
Hvilshøj, S.; Jensen, K. H.; Barlebo, H. C.; Madsen, B.
1999-08-01
Inverse numerical modeling was applied to analyze pumping tests of partially penetrating wells carried out in three wells established in an unconfined aquifer in Vejen, Denmark, where extensive field investigations had previously been carried out, including tracer tests, mini-slug tests, and other hydraulic tests. Drawdown data from multiple piezometers located at various horizontal and vertical distances from the pumping well were included in the optimization. Horizontal and vertical hydraulic conductivities, specific storage, and specific yield were estimated, assuming that the aquifer was either a homogeneous system with vertical anisotropy or composed of two or three layers of different hydraulic properties. In two out of three cases, a more accurate interpretation was obtained for a multi-layer model defined on the basis of lithostratigraphic information obtained from geological descriptions of sediment samples, gamma logs, and flow-meter tests. Analysis of the pumping tests resulted in values for horizontal hydraulic conductivities that are in good agreement with those obtained from slug tests and mini-slug tests. Besides the horizontal hydraulic conductivity, it is possible to determine the vertical hydraulic conductivity, specific yield, and specific storage based on a pumping test of a partially penetrating well. The study demonstrates that pumping tests of partially penetrating wells can be analyzed using inverse numerical models. The model used in the study was a finite-element flow model combined with a non-linear regression model. Such a model can accommodate more geological information and complex boundary conditions, and the parameter-estimation procedure can be formalized to obtain optimum estimates of hydraulic parameters and their standard deviations.
Analysis of pumping tests: Significance of well diameter, partial penetration, and noise
Heidari, M.; Ghiassi, K.; Mehnert, E.
1999-01-01
The nonlinear least squares (NLS) method was applied to pumping and recovery aquifer test data in confined and unconfined aquifers with finite-diameter and partially penetrating pumping wells, and with partially penetrating piezometers or observation wells. It was demonstrated that noiseless and moderately noisy drawdown data from observation points located less than two saturated thicknesses of the aquifer from the pumping well produced an exact or acceptable set of parameters when the diameter of the pumping well was included in the analysis. The accuracy of the estimated parameters, particularly that of specific storage, decreased with increases in the noise level of the observed drawdown data. With consideration of the well radii, the noiseless drawdown data from the pumping well in an unconfined aquifer produced good estimates of horizontal and vertical hydraulic conductivities and specific yield, but the estimated specific storage was unacceptable. When noisy data from the pumping well were used, an acceptable set of parameters was not obtained. Further experiments with noisy drawdown data in an unconfined aquifer revealed that when the well diameter was included in the analysis, hydraulic conductivity, specific yield, and vertical hydraulic conductivity could be estimated rather effectively from piezometers located over a range of distances from the pumping well. Estimation of specific storage became less reliable for piezometers located at distances greater than the initial saturated thickness of the aquifer. Application of the NLS method to field pumping and recovery data from a confined aquifer showed that the estimated parameters from the two tests were in good agreement only when the well diameter was included in the analysis. Without consideration of well radii, the estimated values of hydraulic conductivity from the pumping and recovery tests differed by a factor of four.
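The nonlinear least squares idea behind both pumping-test abstracts can be sketched in a few lines. The sketch below fits transmissivity and storativity to synthetic drawdown data with damped Gauss-Newton iterations; it uses the simple Cooper-Jacob late-time approximation rather than the papers' partially penetrating well models, and all pumping rates, distances, and parameter values are hypothetical.

```python
import math

def drawdown(t, logT, logS, Q=0.01, r=10.0):
    """Cooper-Jacob late-time approximation to the Theis solution.
    Parameters are fitted in log space so T and S stay positive."""
    T, S = math.exp(logT), math.exp(logS)
    return (Q / (4 * math.pi * T)) * math.log(2.25 * T * t / (r ** 2 * S))

# Synthetic noiseless observations from hypothetical "true" parameters
LOGT_TRUE, LOGS_TRUE = math.log(5e-3), math.log(2e-4)
times = [600.0 * k for k in range(1, 21)]
obs = [drawdown(t, LOGT_TRUE, LOGS_TRUE) for t in times]

def fit(times, obs, theta, iters=200, step=0.5, h=1e-6):
    """Damped Gauss-Newton with a forward-difference Jacobian (2 parameters)."""
    for _ in range(iters):
        res = [obs[i] - drawdown(times[i], *theta) for i in range(len(obs))]
        J = []
        for t in times:
            f0 = drawdown(t, *theta)
            J.append([(drawdown(t, theta[0] + h, theta[1]) - f0) / h,
                      (drawdown(t, theta[0], theta[1] + h) - f0) / h])
        # Normal equations J^T J delta = J^T res, solved in closed form (2x2)
        a = sum(j[0] * j[0] for j in J)
        b = sum(j[0] * j[1] for j in J)
        d = sum(j[1] * j[1] for j in J)
        g0 = sum(J[i][0] * res[i] for i in range(len(res)))
        g1 = sum(J[i][1] * res[i] for i in range(len(res)))
        det = a * d - b * b
        theta = [theta[0] + step * (d * g0 - b * g1) / det,
                 theta[1] + step * (a * g1 - b * g0) / det]
    return theta

logT, logS = fit(times, obs, [math.log(1e-3), math.log(1e-3)])
```

With noiseless data the fit recovers the generating parameters, mirroring the papers' observation that noiseless drawdown data yield an exact parameter set; adding noise to `obs` degrades the specific-storage-like parameter first.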
Lee, Won Chan; Hoffmann, Marc S; Arcona, Steve; D'Souza, Joseph; Wang, Qin; Pashos, Chris L
2005-10-01
department visit during the period after the failure of initial monotherapy compared with the OXC monotherapy cohort (odds ratio = 1.52; P < 0.05). Despite limitations, the results of retrospective analysis of claims data suggest that the care of patients with treatment-refractory partial seizure disorder is costly and may vary significantly based on the pattern of care.
Yan, Zhengbing; Kuang, Te-Hui; Yao, Yuan
2017-09-01
In recent years, multivariate statistical monitoring of batch processes has become a popular research topic, wherein multivariate fault isolation is an important step that aims to identify the faulty variables contributing most to the detected process abnormality. Although contribution plots have been commonly used in statistical fault isolation, such methods suffer from the smearing effect between correlated variables. In particular, in batch process monitoring, the high autocorrelations and cross-correlations that exist in variable trajectories make the smearing effect unavoidable. To address this problem, a variable selection-based fault isolation method is proposed in this research, which transforms the fault isolation problem into a variable selection problem in partial least squares discriminant analysis and solves it by calculating a sparse partial least squares model. Unlike traditional methods, the proposed method emphasizes the relative importance of each process variable. Such information may help process engineers in conducting root-cause diagnosis. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Dual stacked partial least squares for analysis of near-infrared spectra
Energy Technology Data Exchange (ETDEWEB)
Bi, Yiming [Institute of Automation, Chinese Academy of Sciences, 100190 Beijing (China); Xie, Qiong, E-mail: yimbi@163.com [Institute of Automation, Chinese Academy of Sciences, 100190 Beijing (China); Peng, Silong; Tang, Liang; Hu, Yong; Tan, Jie [Institute of Automation, Chinese Academy of Sciences, 100190 Beijing (China); Zhao, Yuhui [School of Economics and Business, Northeastern University at Qinhuangdao, 066000 Qinhuangdao City (China); Li, Changwen [Food Research Institute of Tianjin Tasly Group, 300410 Tianjin (China)
2013-08-20
Graphical abstract: -- Highlights: •Dual stacking steps are used for multivariate calibration of near-infrared spectra. •A selective weighting strategy is introduced in which only a subset of all available sub-models is used for model fusion. •Using two public near-infrared datasets, the proposed method achieved competitive results. •The method can be widely applied in many fields, such as mid-infrared spectra data and Raman spectra data. -- Abstract: A new ensemble learning algorithm is presented for quantitative analysis of near-infrared spectra. The algorithm contains two steps of stacked regression and Partial Least Squares (PLS), termed the Dual Stacked Partial Least Squares (DSPLS) algorithm. First, several sub-models were generated from the whole calibration set. The inner-stack step was implemented on sub-intervals of the spectrum. Then the outer-stack step was used to combine these sub-models. Several combination rules for the outer-stack step were analyzed for the proposed DSPLS algorithm. In addition, a novel selective weighting rule was introduced to select a subset of all available sub-models. Experiments on two public near-infrared datasets demonstrate that the proposed DSPLS with the selective weighting rule provided superior prediction performance and outperformed the conventional PLS algorithm. Compared with a single model, the new ensemble model can provide more robust prediction results and can be considered an alternative choice for quantitative analytical applications.
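The selective weighting rule in the outer-stack step can be sketched as follows: rank the sub-models by validation error, keep only the best few, and weight the survivors inversely to their error before fusing their predictions. This is an illustrative sketch of the general idea, not the paper's exact rule; the function names and the `keep=3` cutoff are hypothetical.

```python
def selective_weights(val_errors, keep=3):
    """Keep the `keep` sub-models with the lowest validation error,
    weight them proportionally to 1/error, give the rest zero weight."""
    ranked = sorted(range(len(val_errors)), key=lambda i: val_errors[i])
    kept = set(ranked[:keep])
    inv = [1.0 / val_errors[i] if i in kept else 0.0
           for i in range(len(val_errors))]
    total = sum(inv)
    return [w / total for w in inv]

def fuse(predictions, weights):
    """Combine per-sub-model prediction vectors with the stacked weights."""
    return [sum(w * p[i] for w, p in zip(weights, predictions))
            for i in range(len(predictions[0]))]

# Hypothetical per-sub-model validation RMSEs (one per spectral sub-interval)
errors = [0.8, 0.2, 0.5, 1.5, 0.4]
w = selective_weights(errors, keep=3)
fused = fuse([[1.0, 2.0]] * 5, w)   # if all sub-models agree, fusion preserves them
```

The weights sum to one and the worst sub-models are dropped entirely, which is the sense in which the rule is "selective" rather than a plain inverse-error ensemble.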
Dual stacked partial least squares for analysis of near-infrared spectra
International Nuclear Information System (INIS)
Bi, Yiming; Xie, Qiong; Peng, Silong; Tang, Liang; Hu, Yong; Tan, Jie; Zhao, Yuhui; Li, Changwen
2013-01-01
Graphical abstract: -- Highlights: •Dual stacking steps are used for multivariate calibration of near-infrared spectra. •A selective weighting strategy is introduced in which only a subset of all available sub-models is used for model fusion. •Using two public near-infrared datasets, the proposed method achieved competitive results. •The method can be widely applied in many fields, such as mid-infrared spectra data and Raman spectra data. -- Abstract: A new ensemble learning algorithm is presented for quantitative analysis of near-infrared spectra. The algorithm contains two steps of stacked regression and Partial Least Squares (PLS), termed the Dual Stacked Partial Least Squares (DSPLS) algorithm. First, several sub-models were generated from the whole calibration set. The inner-stack step was implemented on sub-intervals of the spectrum. Then the outer-stack step was used to combine these sub-models. Several combination rules for the outer-stack step were analyzed for the proposed DSPLS algorithm. In addition, a novel selective weighting rule was introduced to select a subset of all available sub-models. Experiments on two public near-infrared datasets demonstrate that the proposed DSPLS with the selective weighting rule provided superior prediction performance and outperformed the conventional PLS algorithm. Compared with a single model, the new ensemble model can provide more robust prediction results and can be considered an alternative choice for quantitative analytical applications
Thermodynamic analysis of a coal-based polygeneration system with partial gasification
International Nuclear Information System (INIS)
Li, Yuanyuan; Zhang, Guoqiang; Yang, Yongping; Zhai, Dailong; Zhang, Kai; Xu, Gang
2014-01-01
This study proposed a polygeneration system based on coal partial gasification, in which methanol and power were generated. The proposed system, comprising chemical and power islands, was designed, and its characteristics were analyzed. The commercial software Aspen Plus was used to perform the system analysis. In the case study, the energy and exergy efficiency values of the proposed polygeneration system were 51.16% and 50.58%, which are 2.34% and 2.10% higher, respectively, than those of the reference system. Energy-Utilization Diagram analysis showed that removing composition adjustment and recycling 72.7% of the unreacted gas could reduce the exergy destruction during methanol synthesis by 46.85%, and that using the char to preheat the compressed air could reduce the exergy destruction during combustion by 10.28%. Sensitivity analysis was also performed. At the same capacity ratio, the energy and exergy efficiency values of the proposed system were 1.30%–2.48% and 1.21%–2.30% higher than those of the reference system, respectively. The range of the chemical-to-power capacity ratio in the proposed system was 0.41–1.40, which was narrower than that in the reference system. However, the range of 1.04–1.40 was not recommended because the energy-saving potential in methanol synthesis disappears there. - Highlights: • A novel polygeneration system based on coal partial gasification is proposed. • The efficient conversion method for methanol and power is explored. • The exergy destruction in chemical energy conversion processes is decreased. • Thermodynamic performance and system characteristics are analyzed
Pregabalin versus gabapentin in partial epilepsy: a meta-analysis of dose-response relationships
Directory of Open Access Journals (Sweden)
Thompson Sally
2010-11-01
Full Text Available Abstract Background To compare the efficacy of pregabalin and gabapentin at comparable effective dose levels in patients with refractory partial epilepsy. Methods Eight randomized placebo controlled trials investigating the efficacy of pregabalin (4 studies) and gabapentin (4 studies) over 12 weeks were identified with a systematic literature search. The endpoints of interest were "responder rate" (where response was defined as at least a 50% reduction from baseline in the number of seizures) and "change from baseline in seizure-free days over the last 28 days" (SFD). Results of all trials were analyzed using an indirect comparison approach with placebo as the common comparator. The base-case analysis used the intention-to-treat last observation carried forward method. Two sensitivity analyses were conducted among completer and responder populations. Results The base-case analysis revealed statistically significant differences in response rate in favor of pregabalin 300 mg versus gabapentin 1200 mg (odds ratio, 1.82; 95% confidence interval, 1.02, 3.25) and pregabalin 600 mg versus gabapentin 1800 mg (odds ratio, 2.52; 95% confidence interval, 1.21, 5.27). Both sensitivity analyses supported the findings of the base-case analysis, although statistical significance was not demonstrated. All dose levels of pregabalin (150 mg to 600 mg) were more efficacious than corresponding dosages of gabapentin (900 mg to 2400 mg) in terms of SFD over the last 28 days. Conclusion In patients with refractory partial epilepsy, pregabalin is likely to be more effective than gabapentin at comparable effective doses, based on clinical response and the number of SFD.
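The indirect comparison through a common placebo comparator used above is usually computed with the Bucher method: subtract log odds ratios and add their variances, recovering standard errors from the reported confidence intervals. A minimal sketch, with hypothetical input numbers rather than the trial values:

```python
import math

def bucher_indirect(or_a, ci_a, or_b, ci_b, z=1.959964):
    """Indirect odds ratio of A vs B through a shared placebo arm.
    Standard errors are recovered from the 95% CIs on the log scale."""
    se_a = (math.log(ci_a[1]) - math.log(ci_a[0])) / (2 * z)
    se_b = (math.log(ci_b[1]) - math.log(ci_b[0])) / (2 * z)
    log_or = math.log(or_a) - math.log(or_b)       # log(OR_A / OR_B)
    se = math.sqrt(se_a ** 2 + se_b ** 2)          # variances add
    return math.exp(log_or), (math.exp(log_or - z * se),
                              math.exp(log_or + z * se))

# Hypothetical placebo-controlled results: drug A vs placebo, drug B vs placebo
or_ab, ci_ab = bucher_indirect(2.4, (1.4, 4.1), 1.3, (0.8, 2.1))
```

The point estimate is simply the ratio of the two placebo-controlled odds ratios; the widened confidence interval reflects the added uncertainty of comparing across trials.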
Analysis of the frequency components of X-ray images
International Nuclear Information System (INIS)
Matsuo, Satoru; Komizu, Mitsuru; Kida, Tetsuo; Noma, Kazuo; Hashimoto, Keiji; Onishi, Hideo; Masuda, Kazutaka
1997-01-01
We examined the relation between the frequency components of x-ray images of the chest and phalanges and the read sizes used for digitizing them. Images of the chest and phalanges were radiographed using three types of screens and films, and the noise images at background density were digitized with a drum scanner at varying read sizes. The frequency components of these images were evaluated by two-dimensional Fourier transformation to obtain the power spectrum and signal-to-noise ratio (SNR). After varying the cut-off frequency of a low-pass filter applied to the power spectrum, we also examined the frequency components of the images in terms of the normalized mean square error (NMSE) between the inverse-transformed image and the original image. Results showed that the frequency components extended to 2.0 cycles/mm for the chest image and 6.0 cycles/mm for the phalanges. Therefore, it is necessary to collect data with read sizes of 200 μm and 50 μm for the chest and phalangeal images, respectively, in order to digitize these images without loss of their frequency components. (author)
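The link between read size and recoverable frequency content is the Nyquist criterion: a sampling interval Δ can represent frequencies only up to 1/(2Δ). A quick check that the recommended read sizes cover the measured frequency content:

```python
def nyquist_cycles_per_mm(read_size_um):
    """Nyquist frequency (cycles/mm) for a sampling interval given in micrometres."""
    delta_mm = read_size_um / 1000.0
    return 1.0 / (2.0 * delta_mm)

# Frequency content reported in the abstract (cycles/mm)
chest_required, phalanx_required = 2.0, 6.0

# 200 um sampling -> 2.5 cycles/mm; 50 um sampling -> 10 cycles/mm
assert nyquist_cycles_per_mm(200) >= chest_required
assert nyquist_cycles_per_mm(50) >= phalanx_required
```

A 200 μm read size leaves only modest headroom above the 2.0 cycles/mm chest content, while 50 μm comfortably covers the 6.0 cycles/mm phalangeal content, consistent with the authors' recommendation.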
Reactor modeling and process analysis for partial oxidation of natural gas
Albrecht, B.A.
2004-01-01
This thesis analyses a novel process of partial oxidation of natural gas and develops a numerical tool for the partial oxidation reactor modeling. The proposed process generates syngas in an integrated plant of a partial oxidation reactor, a syngas turbine and an air separation unit. This is called
Abstract Interfaces for Data Analysis Component Architecture for Data Analysis Tools
Barrand, G; Dönszelmann, M; Johnson, A; Pfeiffer, A
2001-01-01
The fast turnover of software technologies, in particular in the domain of interactivity (covering user interface and visualisation), makes it difficult for a small group of people to produce complete and polished software-tools before the underlying technologies make them obsolete. At the HepVis '99 workshop, a working group has been formed to improve the production of software tools for data analysis in HENP. Beside promoting a distributed development organisation, one goal of the group is to systematically design a set of abstract interfaces based on using modern OO analysis and OO design techniques. An initial domain analysis has come up with several categories (components) found in typical data analysis tools: Histograms, Ntuples, Functions, Vectors, Fitter, Plotter, Analyzer and Controller. Special emphasis was put on reducing the couplings between the categories to a minimum, thus optimising re-use and maintainability of any component individually. The interfaces have been defined in Java and C++ and i...
Directory of Open Access Journals (Sweden)
Stefania Salvatore
2016-07-01
Full Text Available Abstract Background Wastewater-based epidemiology (WBE) is a novel approach in drug use epidemiology which aims to monitor the extent of use of various drugs in a community. In this study, we investigate functional principal component analysis (FPCA) as a tool for analysing WBE data and compare it to traditional principal component analysis (PCA) and to wavelet principal component analysis (WPCA), which is more flexible temporally. Methods We analysed temporal wastewater data from 42 European cities collected daily over one week in March 2013. The main temporal features of ecstasy (MDMA) were extracted using FPCA with both Fourier and B-spline basis functions and three different smoothing parameters, along with PCA and WPCA with different mother wavelets and shrinkage rules. The stability of FPCA was explored through bootstrapping and analysis of sensitivity to missing data. Results The first three principal components (PCs), functional principal components (FPCs) and wavelet principal components (WPCs) explained 87.5–99.6% of the temporal variation between cities, depending on the choice of basis and smoothing. The extracted temporal features from PCA, FPCA and WPCA were consistent. FPCA using a Fourier basis and common-optimal smoothing was the most stable and least sensitive to missing data. Conclusion FPCA is a flexible and analytically tractable method for analysing temporal changes in wastewater data, and is robust to missing data. WPCA did not reveal any rapid temporal changes in the data not captured by FPCA. Overall the results suggest FPCA with Fourier basis functions and a common-optimal smoothing parameter as the most accurate approach when analysing WBE data.
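The common core of PCA, FPCA, and WPCA is extracting dominant temporal modes from city-by-day matrices. A minimal sketch of plain PCA (the baseline method above) on toy weekly profiles, with the leading component found by power iteration on the covariance matrix; the data and city count are invented for illustration:

```python
import math

# Toy data: 6 "cities" x 7 daily loads sharing a weekend-peak weekly shape
base = [1, 1, 1, 1, 2, 5, 4]
data = [[b * s + 0.1 * i for i, b in enumerate(base)]
        for s in (1, 2, 3, 1.5, 2.5, 0.5)]

def first_pc(data, iters=500):
    """Leading principal component of mean-centred data via power iteration."""
    n, p = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(p)]
    X = [[row[j] - means[j] for j in range(p)] for row in data]
    # sample covariance matrix (p x p)
    C = [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1)
          for b in range(p)] for a in range(p)]
    v = [1.0] * p
    for _ in range(iters):
        w = [sum(C[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    eigval = sum(v[a] * sum(C[a][b] * v[b] for b in range(p))
                 for a in range(p))
    total_var = sum(C[a][a] for a in range(p))
    return v, eigval / total_var   # component and explained-variance fraction

pc1, explained = first_pc(data)
```

FPCA differs in that each city's profile is first smoothed onto a Fourier or B-spline basis, so the components are functions of time rather than 7-vectors; the eigen-decomposition step is the same in spirit.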
Independent component analysis of edge information for face recognition
Karande, Kailash Jagannath
2013-01-01
The book presents research work on face recognition using edge information as features for face recognition with ICA algorithms. The independent components are extracted from edge information. These independent components are used with classifiers to match the facial images for recognition purposes. In their study, the authors explored Canny and LOG edge detectors as standard edge detection methods. The Oriented Laplacian of Gaussian (OLOG) method is explored to extract edge information with different orientations of the Laplacian pyramid. A multiscale wavelet model for edge detection is also proposed.
Eliminating the Influence of Harmonic Components in Operational Modal Analysis
DEFF Research Database (Denmark)
Jacobsen, Niels-Jørgen; Andersen, Palle; Brincker, Rune
2007-01-01
structures, in contrast, are subject inherently to deterministic forces due to the rotating parts in the machinery. These forces are seen as harmonic components in the responses, and their influence should be eliminated before extracting the modes in their vicinity. This paper describes a new method based...... on the well-known Enhanced Frequency Domain Decomposition (EFDD) technique for eliminating these harmonic components in the modal parameter extraction process. For assessing the quality of the method, various experiments were carried out where the results were compared with those obtained with pure stochastic...
Value Added Tax and price stability in Nigeria: A partial equilibrium analysis
Directory of Open Access Journals (Sweden)
Marius Ikpe
2013-12-01
Full Text Available The economic impact of Value Added Tax (VAT), implemented in Nigeria in 1994, has generated much debate in recent times, especially with respect to its effect on the level of aggregate prices. This study empirically examines the influence of VAT on price stability in Nigeria using partial equilibrium analysis. We introduced the VAT variable in the framework of a combination of structuralist, monetarist and fiscalist approaches to inflation modelling. The analysis was carried out by applying multiple regression analysis in static form to data for the 1994-2010 period. The results reveal that VAT exerts a strong upward pressure on price levels, most likely due to the burden of VAT on intermediate outputs. The study rules out the option of VAT exemptions for intermediate outputs as a solution, due to the difficulty of distinguishing between intermediate and final outputs. Instead, it recommends a detailed post-VAT cost-benefit analysis to assess the social desirability of VAT policy in Nigeria.
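"Multiple regression analysis in static form" amounts to ordinary least squares on a price equation with a VAT term among the regressors. A self-contained sketch via the normal equations; the variable names and coefficient values are hypothetical, not the study's estimates:

```python
def ols(X, y):
    """Ordinary least squares via normal equations and Gaussian elimination.
    Each row of X must include a leading 1 for the intercept."""
    k = len(X[0])
    A = [[sum(X[i][a] * X[i][b] for i in range(len(X))) for b in range(k)]
         for a in range(k)]                                   # X'X
    g = [sum(X[i][a] * y[i] for i in range(len(X))) for a in range(k)]  # X'y
    for c in range(k):                      # elimination with partial pivoting
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        g[c], g[p] = g[p], g[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for j in range(c, k):
                A[r][j] -= f * A[c][j]
            g[r] -= f * g[c]
    beta = [0.0] * k                        # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (g[r] - sum(A[r][j] * beta[j]
                              for j in range(r + 1, k))) / A[r][r]
    return beta

# Hypothetical data: inflation = 2.0 + 0.8*money_growth + 1.5*vat_dummy
rows = [[1.0, m, v] for m in (1.0, 2.0, 3.0, 4.0) for v in (0.0, 1.0)]
y = [2.0 + 0.8 * r[1] + 1.5 * r[2] for r in rows]
beta = ols(rows, y)
```

A positive, significant coefficient on the VAT dummy in such a specification is what the study interprets as upward pressure of VAT on the price level.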
Regional frequency analysis of extreme rainfalls using partial L moments method
Zakaria, Zahrahtul Amani; Shabri, Ani
2013-07-01
An approach based on regional frequency analysis using L moments and LH moments are revisited in this study. Subsequently, an alternative regional frequency analysis using the partial L moments (PL moments) method is employed, and a new relationship for homogeneity analysis is developed. The results were then compared with those obtained using the method of L moments and LH moments of order two. The Selangor catchment, consisting of 37 sites and located on the west coast of Peninsular Malaysia, is chosen as a case study. PL moments for the generalized extreme value (GEV), generalized logistic (GLO), and generalized Pareto distributions were derived and used to develop the regional frequency analysis procedure. PL moment ratio diagram and Z test were employed in determining the best-fit distribution. Comparison between the three approaches showed that GLO and GEV distributions were identified as the suitable distributions for representing the statistical properties of extreme rainfall in Selangor. Monte Carlo simulation used for performance evaluation shows that the method of PL moments would outperform L and LH moments methods for estimation of large return period events.
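The L-moment machinery underlying the study can be sketched from the unbiased probability-weighted moment estimators b0, b1, b2: λ1 = b0, λ2 = 2b1 − b0, λ3 = 6b2 − 6b1 + b0, with L-skewness t3 = λ3/λ2. The sketch below computes plain sample L-moments; PL moments generalize these by censoring observations below a threshold, which is not shown here.

```python
def sample_l_moments(data):
    """First two sample L-moments and the L-skewness ratio t3,
    from the unbiased probability-weighted moments b0, b1, b2."""
    x = sorted(data)                 # order statistics x[0] <= ... <= x[n-1]
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(i * x[i] for i in range(n)) / (n * (n - 1))
    b2 = sum(i * (i - 1) * x[i] for i in range(n)) / (n * (n - 1) * (n - 2))
    l1 = b0                          # L-location (mean)
    l2 = 2 * b1 - b0                 # L-scale
    l3 = 6 * b2 - 6 * b1 + b0        # third L-moment
    return l1, l2, l3 / l2           # mean, L-scale, t3

# A symmetric sample should give t3 = 0
l1, l2, t3 = sample_l_moments([1, 2, 3, 4, 5, 6, 7, 8, 9])
```

Ratios such as t3 are what populate the L-moment ratio diagram used in the study to pick between the GEV, GLO, and generalized Pareto candidates.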
Abstract interfaces for data analysis - component architecture for data analysis tools
International Nuclear Information System (INIS)
Barrand, G.; Binko, P.; Doenszelmann, M.; Pfeiffer, A.; Johnson, A.
2001-01-01
The fast turnover of software technologies, in particular in the domain of interactivity (covering user interface and visualisation), makes it difficult for a small group of people to produce complete and polished software-tools before the underlying technologies make them obsolete. At the HepVis'99 workshop, a working group has been formed to improve the production of software tools for data analysis in HENP. Beside promoting a distributed development organisation, one goal of the group is to systematically design a set of abstract interfaces based on using modern OO analysis and OO design techniques. An initial domain analysis has come up with several categories (components) found in typical data analysis tools: Histograms, Ntuples, Functions, Vectors, Fitter, Plotter, Analyzer and Controller. Special emphasis was put on reducing the couplings between the categories to a minimum, thus optimising re-use and maintainability of any component individually. The interfaces have been defined in Java and C++ and implementations exist in the form of libraries and tools using C++ (Anaphe/Lizard, OpenScientist) and Java (Java Analysis Studio). A special implementation aims at accessing the Java libraries (through their Abstract Interfaces) from C++. The authors give an overview of the architecture and design of the various components for data analysis as discussed in AIDA
PyPWA: A partial-wave/amplitude analysis software framework
Salgado, Carlos
2016-05-01
The PyPWA project aims to develop a software framework for partial-wave and amplitude analysis of data, providing the user with software tools to identify resonances from multi-particle final states in photoproduction. Most of the code is written in Python. The software is divided into two main branches: one general shell where amplitude parameters (or any parametric model) are estimated from the data. This branch also includes software to produce simulated data-sets using the fitted amplitudes. A second branch contains a specific realization of the isobar model (with room to include Deck-type and other isobar model extensions) to perform PWA with an interface into the computer resources at Jefferson Lab. We are currently implementing parallelism and vectorization using Intel's Xeon Phi family of coprocessors.
Structural studies of formic acid using partial form-factor analysis
International Nuclear Information System (INIS)
Swan, G.; Dore, J.C.; Bellissent-Funel, M.C.
1993-01-01
Neutron diffraction measurements have been made of liquid formic acid using H/D isotopic substitution. Data are recorded for samples of DCOOD, HCOOD and an (H/D)COOD mixture (α_D = 0.36). A first-order difference method is used to determine the intra-molecular contribution through the introduction of a partial form-factor analysis technique incorporating a hydrogen-bond term. The method improves the sensitivity of the parameters defining the molecular geometry and avoids some of the ambiguities arising from terms involving spatial overlap of inter- and intra-molecular features. The possible application to other systems is briefly reviewed. (authors). 8 figs., 2 tabs., 8 refs
Stress analysis of partial sphere used for bottom shell of off-shore structure
International Nuclear Information System (INIS)
Nishimaki, Ko; Matsumoto, Kohei; Hori, Tohru; Takeshita, Haruyuki; Iwata, Setsuo
1976-01-01
In the near future, various huge off-shore structures will be constructed. Concrete is likely to become a leading material in such structures, owing to its versatile properties. One of the limitations of concrete is its low tensile strength. This problem is dealt with mainly by two different methods: by applying prestressing, and by designing the structural configuration so that no tensile stresses appear. In this paper, the authors discuss the application of a partially spherical shell to huge off-shore structures. Structural analyses using the finite element method were carried out in order to investigate the feasibility of the structure. The results were organized with respect to certain parameters to derive design charts from which the stresses at check points can be estimated. The optimum shape is also discussed. (auth.)
Fit Analysis of Different Framework Fabrication Techniques for Implant-Supported Partial Prostheses.
Spazzin, Aloísio Oro; Bacchi, Atais; Trevisani, Alexandre; Farina, Ana Paula; Dos Santos, Mateus Bertolini
2016-01-01
This study evaluated the vertical misfit of implant-supported frameworks made using different techniques to obtain passive fit. Thirty three-unit fixed partial dentures were fabricated in cobalt-chromium alloy (n = 10) using three fabrication methods: one-piece casting, framework cemented on prepared abutments, and laser welding. The vertical misfit between the frameworks and the abutments was evaluated with an optical microscope using the single-screw test. Data were analyzed using one-way analysis of variance and the Tukey test (α = .05). The one-piece casted frameworks presented significantly higher vertical misfit values than those found for the framework-cemented-on-prepared-abutments and laser welding techniques (P < .05). Laser welding and cementing the framework on prepared abutments are effective techniques to improve the adaptation of three-unit implant-supported prostheses. These techniques presented similar fit.
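The one-way analysis of variance used above reduces to one statistic: the ratio of between-group to within-group mean squares. A minimal sketch with hypothetical misfit readings (not the study's measurements):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA:
    between-group mean square over within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((v - sum(g) / len(g)) ** 2 for v in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical vertical misfit readings (micrometres), one list per technique
f = one_way_anova_f([[1, 2, 3], [2, 3, 4], [7, 8, 9]])
```

A large F relative to the F(k−1, n−k) critical value at α = .05 leads to the rejection of equal group means, after which a post hoc test such as Tukey's identifies which technique pairs differ.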
Alfriend, K. T.
1973-01-01
A ring partially filled with a viscous fluid has been analyzed as a nutation damper for a spinning satellite. The fluid has been modelled as a rigid slug of finite length moving in a tube and resisted by a linear viscous force. It is shown that there are two distinct modes of motion, called the spin synchronous mode and the nutation synchronous mode. Time constants for each mode are obtained for both the symmetric and asymmetric satellite. The effects of a stop in the tube and an offset of the ring from the spin axis are also investigated. An analysis of test results is also given including a determination of the effect of gravity on the time constants in the two modes.
Analysis of the flamelet concept in the numerical simulation of laminar partially premixed flames
Energy Technology Data Exchange (ETDEWEB)
Consul, R.; Oliva, A.; Perez-Segarra, C.D.; Carbonell, D. [Centre Tecnologic de Transferencia de Calor (CTTC), Universitat Politecnica de Catalunya (UPC), Colom 11, E-08222, Terrassa, Barcelona (Spain); de Goey, L.P.H. [Eindhoven University of Technology, Department of Mechanical Engineering, P.O. Box 513, 5600 MB Eindhoven (Netherlands)
2008-04-15
The aim of this work is to analyze the application of flamelet models based on the mixture fraction variable and its dissipation rate to the numerical simulation of partially premixed flames. Although the main application of these models is the computation of turbulent flames, this work focuses on the performance of the flamelet concept in laminar flame simulations, removing, in this way, turbulence closure interactions. A well-known coflow methane/air laminar flame is selected. Five levels of premixing are taken into account, from an equivalence ratio φ = ∞ (nonpremixed) to φ = 2.464. Results obtained using the flamelet approaches are compared to data obtained from the detailed solution of the complete transport equations using primitive variables. Numerical simulations of a counterflow flame are also presented to support the discussion of the results. Special emphasis is given to the analysis of the scalar dissipation rate modeling. (author)
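The premixing levels above can be restated in mixture-fraction terms: for a fuel stream at equivalence ratio φ, the standard relation φ = (Z/(1−Z))·((1−Z_st)/Z_st) inverts to give the fuel-stream mixture fraction. A small sketch under the assumption Z_st ≈ 0.055 for methane-air (a commonly quoted approximate value, not taken from the paper):

```python
Z_ST = 0.055   # assumed stoichiometric mixture fraction, methane-air

def mixture_fraction(phi, z_st=Z_ST):
    """Fuel-stream mixture fraction for a given equivalence ratio phi,
    inverting phi = (Z / (1 - Z)) * ((1 - z_st) / z_st)."""
    if phi == float("inf"):
        return 1.0              # pure fuel: the nonpremixed limit
    return phi * z_st / (1 - z_st + phi * z_st)

# The paper's premixing levels run from nonpremixed down toward phi = 2.464
levels = [float("inf"), 12.0, 6.0, 3.0, 2.464]
fracs = [mixture_fraction(p) for p in levels]
```

Decreasing φ pulls the fuel-stream mixture fraction down from 1 toward Z_st, which is exactly what moves a flame from the nonpremixed regime into partial premixing.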
International Nuclear Information System (INIS)
Shah, Anand P.; Dickler, Adam; Kirk, Michael C.; Chen, Sea S.; Strauss, Jonathan B.; Coon, Alan B.; Turian, Julius V.; Siziopikou, Kalliopi; Dowlat, Kambiz; Griem, Katherine L.
2008-01-01
Partial breast irradiation (PBI) was designed in part to decrease overall treatment times associated with whole breast radiation therapy (WBRT). WBRT treats the entire breast and usually portions of the axilla. The goal of PBI is to treat a smaller volume of breast tissue in less time, focusing the dose around the lumpectomy cavity. The following is a case of a 64-year-old woman with early-stage breast cancer treated with PBI who failed regionally in the ipsilateral axilla. With our dosimetric analysis, we found that the entire area of this axillary failure would have likely received at least 45 Gy if WBRT had been used, enough to sterilize microscopic disease. With PBI, this area received a mean dose of only 2.8 Gy, which raises the possibility that this regional failure may have been prevented had WBRT been used instead of PBI
Xu, Yue; Wu, Yining; Deng, Shimin; Wei, Shirang
2004-02-01
The partial coal gasification air pre-heating coal-fired combined cycle (PGACC) is a clean coal power system that integrates coal gasification technology, circulating fluidized bed technology, and combined cycle technology. It offers high efficiency and simple construction, and is a new option among clean coal power systems. A thermodynamic analysis of the PGACC is carried out. The effects of the coal gasifying rate, pre-heating air temperature, and coal gas temperature on the performance of the power system are studied. A conceptual design is suggested for repowering a power plant rated at 100 MW using the PGACC. The computational results show that the PGACC is feasible both for modernizing old steam power plants and for building new clean coal power plants.
Design and analysis of automobile components using industrial procedures
Kedar, B.; Ashok, B.; Rastogi, Nisha; Shetty, Siddhanth
2017-11-01
Today’s automobiles depend upon mechanical systems that are crucial for aiding the movement and safety features of the vehicle. Various safety systems such as the Antilock Braking System (ABS) and passenger restraint systems have been developed to ensure that in the event of a collision, be it head-on or any other type, the safety of the passenger is ensured. On the other hand, manufacturers also want their customers to have a good experience while driving and thus aim to improve the handling and drivability of the vehicle. Electronic systems such as cruise control and active suspension systems are designed to ensure passenger comfort. Finally, to ensure optimum and safe driving, the various components of a vehicle must be manufactured using the latest state-of-the-art processes and must be tested and inspected with utmost care, so that any defective component can be prevented from being sent out right at the beginning of the supply chain. Therefore, processes which can improve the lifetime of their respective components are in high demand, and much research and development is done on these processes. With a solid base of research conducted, these processes can be used in a much more versatile manner for different components, made of different materials and under different input conditions. This will help increase the profitability of the process and also upgrade its value to the industry.
Analysis of soft rock mineral components and roadway failure mechanism
Institute of Scientific and Technical Information of China (English)
陈杰
2001-01-01
The mineral components and microstructure of soft rock sampled from the roadway floor in Xiagou pit are determined by X-ray diffraction and scanning electron microscopy. Combined with tests of the expansion and water-softening properties of the soft rock, the roadway failure mechanism is analyzed, and a reasonable repair supporting principle for the roadway is put forward.
Analysis Of The Executive Components Of The Farmer Field School ...
African Journals Online (AJOL)
The purpose of this study was to investigate the executive components of the Farmer Field School (FFS) project in Uromieh county of West Azerbaijan Province, Iran. All the members and non-members (as control group) of FFS pilots in Uromieh county (N= 98) were included in the study. Data were collected by use of ...
Principal Components Analysis of Job Burnout and Coping ...
African Journals Online (AJOL)
The key component structure of job burnout were feelings of disgust, insomnia, headaches, weight loss or gain feeling of omniscient, pain of unexplained origin, hopelessness, agitation and workaholics, while the factor structure of coping strategies were development of self realistic picture, retaining hope, asking for help ...
Phenolic components, antioxidant activity, and mineral analysis of ...
African Journals Online (AJOL)
In addition to being consumed as food, caper (Capparis spinosa L.) fruits are also used in folk medicine to treat inflammatory disorders, such as rheumatism. C. spinosa L. is rich in phenolic compounds, making it increasingly popular because of its components' potential benefits to human health. We analyzed a number of ...
3-D fracture analysis using a partial-reduced integration scheme
International Nuclear Information System (INIS)
Leitch, B.W.
1987-01-01
This paper presents details of 3-D elastic-plastic analyses of an axially oriented external surface flaw in an internally pressurized thin-walled cylinder and discusses the variation of the J-integral values around the crack tip. A partial-reduced-integration-penalty method is introduced to minimize this variation of the J-integral near the crack tip. Utilizing 3-D symmetry, an eighth segment of a tube containing an elliptically shaped external surface flaw is modelled using 20-noded isoparametric elements. The crack-tip elements are collapsed to form a 1/r stress singularity about the curved crack front. The finite element model is subjected to internal pressure and axial pressure-generated loads. The virtual crack extension method is used to determine linear elastic stress intensity factors from the J-integral results at various points around the crack front. Despite the different material constants and the thinner wall thickness in this analysis, the elastic results compare favourably with those obtained by other researchers. The nonlinear stress-strain behaviour of the tube material is modelled using an incremental theory of plasticity. Variations of the J-integral values around the curved crack front of the 3-D flaw were seen. These variations could not be resolved by neglecting the immediate crack-tip elements' J-integral results in favour of the more remote contour paths, nor smoothed out when all the path results were averaged. Numerical incompatibilities in the 20-noded 3-D finite elements used to model the surface flaw were found. A partial-reduced integration scheme, using a combination of full and reduced integration elements, is proposed to determine J-integral results for 3-D fracture analyses. This procedure is applied to the analysis of an external semicircular surface flaw projecting halfway into the tube wall thickness. Examples of the J-integral values, before and after the partial-reduced integration method is employed, are given around the crack front.
Analysis of water hammer in two-component two-phase flows
International Nuclear Information System (INIS)
Warde, H.; Marzouk, E.; Ibrahim, S.
1989-01-01
The water hammer phenomena caused by a sudden valve closure in air-water two-phase flows must be clarified for the safety analysis of LOCA in reactors, and further for the safety of boilers, chemical plants, and pipe transport of fluids such as petroleum and natural gas. In the present work, water hammer phenomena caused by sudden valve closure in two-component two-phase flows are investigated theoretically and experimentally. The phenomena are more complicated than in single-phase flows due to the presence of a compressible component. Basic partial differential equations based on a one-dimensional homogeneous flow model are solved by the method of characteristics. The analysis is extended to include friction in a two-phase mixture depending on the local flow pattern. The profiles of the pressure transients, the propagation velocity of pressure waves, and the effect of valve closure on the transient pressure are found. Different two-phase flow pattern and frictional pressure drop correlations were used, including the Baker, Chisholm, and Beggs and Brill correlations. The effect of the flow pattern on the characteristics of wave propagation is discussed, primarily to indicate the effect of void fraction on the velocity of wave propagation and on the attenuation of pressure waves. Transient pressures in the mixture were recorded at different air void fractions, rates of uniform valve closure, and liquid flow velocities with the aid of pressure transducers and transient waveform recorders interfaced with an on-line PC. The results are compared with computation, and good agreement was obtained within experimental accuracy.
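The homogeneous-flow assumption in the abstract above can be illustrated with a textbook estimate of how void fraction depresses the pressure-wave speed (Wood's formula for a no-slip mixture) and hence the surge from an instantaneous closure (the Joukowsky relation). This is a minimal sketch; the fluid properties and the 1% void fraction below are illustrative values, not data from the paper:

```python
def mixture_wave_speed(alpha, rho_l=1000.0, a_l=1450.0, rho_g=1.2, a_g=340.0):
    """Homogeneous (no-slip) two-phase sound speed, Wood's formula.

    alpha is the air void fraction; the defaults are rough ambient
    properties of water and air (illustrative, not from the paper).
    """
    rho_m = alpha * rho_g + (1.0 - alpha) * rho_l            # mixture density
    kappa = alpha / (rho_g * a_g**2) + (1.0 - alpha) / (rho_l * a_l**2)
    return (rho_m * kappa) ** -0.5

def joukowsky_surge(rho_m, a_m, delta_v):
    """Pressure rise for an instantaneous valve closure: dP = rho * a * dV."""
    return rho_m * a_m * delta_v

# Even 1% entrained air collapses the wave speed, and with it the surge,
# which is the void-fraction effect on wave propagation discussed above.
a_liquid = mixture_wave_speed(0.0)   # pure liquid, ~1450 m/s
a_mix = mixture_wave_speed(0.01)     # 1% air: on the order of 100 m/s
```

The same mixture sound speed is what the method-of-characteristics solution propagates along its characteristic lines, so the attenuation seen in the experiments follows directly from this dependence on void fraction.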
Energy Technology Data Exchange (ETDEWEB)
Newson, E; Mizsey, P; Hottinger, P; Truong, T B; Roth, F von; Schucan, Th H [Paul Scherrer Inst. (PSI), Villigen (Switzerland)
1999-08-01
Methanol partial oxidation (pox) to produce hydrogen for mobile fuel cell applications has proved initially more successful than hydrocarbon pox. Recent results of catalyst screening and kinetic studies with methanol show that hydrogen production rates have reached 7000 litres/hour/(litre reactor volume) for the dry pox route and 12,000 litres/hour/(litre reactor volume) for wet pox. These rates are equivalent to 21 and 35 kW(th)/(litre reactor volume) respectively. The reaction engineering problems remain to be solved for dry pox due to the significant exotherm of the reaction (hot spots of 100-200 °C), but wet pox is essentially isothermal in operation. Analyses of the integrated fuel processor - fuel cell systems show that two routes are available to satisfy the sensitivity of the fuel cell catalysts to carbon monoxide, i.e. a preferential oxidation reactor or a membrane separator. Targets for individual system components are evaluated for the base and best case systems for both routes to reach the combined 40% efficiency required for the integrated fuel processor - fuel cell system. (author) 2 figs., 1 tab., 3 refs.
Non-conformal contact mechanical characteristic analysis on spherical components
Energy Technology Data Exchange (ETDEWEB)
Zhen-zhi, G.; Bin, H.; Zheng-ming, G.; Feng-mei, Y.; Jin, Q [The Second Artillery Engineering Univ., Xi'an (China)]
2017-03-15
Non-conformal spherical contact is a three-dimensional conforming or nearly conforming contact problem. Owing to the complexity of spherical contact and the difficulty of solving the higher-order partial differential equations involved, three-dimensional conforming or nearly conforming spherical contact still has no exact analytical solution. Starting from the three-dimensional taper model, a model based on the contour surface of the spherical contact is proposed, a formula for the contact pressure is derived, and a finite element model is constructed from the contact pressure distribution under the non-conformal sphere. The results show that the spherical contact model represents non-conformal spherical-contact mechanical problems better than the taper-contact model and is applicable to practical engineering.
Analysis and test of insulated components for rotary engine
Badgley, Patrick R.; Doup, Douglas; Kamo, Roy
1989-01-01
The direct-injection stratified-charge (DISC) rotary engine, while attractive for aviation applications due to its light weight, multifuel capability, and potentially low fuel consumption, has until now required a bulky and heavy liquid-cooling system. NASA-Lewis has undertaken the development of a cooling system-obviating, thermodynamically superior adiabatic rotary engine employing state-of-the-art thermal barrier coatings to thermally insulate engine components. The thermal barrier coating material for the cast aluminum, stainless steel, and ductile cast iron components was plasma-sprayed zirconia. DISC engine tests indicate effective thermal barrier-based heat loss reduction, but call for superior coefficient-of-thermal-expansion matching of materials and better tribological properties in the coatings used.
COMPONENTS OF THE UNEMPLOYMENT ANALYSIS IN CONTEMPORARY ECONOMIES
Directory of Open Access Journals (Sweden)
Ion Enea-SMARANDACHE
2010-03-01
Full Text Available Unemployment is a permanent phenomenon in most countries of the world, whether in advanced or developing economies, and its implications and consequences are increasingly complex, so that, in practice, the fight against unemployment has become a fundamental objective of economic policy. In this context, the authors set out the essential components of unemployment analysis, with the aim of identifying countermeasures and instruments.
Analysis of Femtosecond Timing Noise and Stability in Microwave Components
International Nuclear Information System (INIS)
2011-01-01
To probe chemical dynamics, X-ray pump-probe experiments trigger a change in a sample with an optical laser pulse, followed by an X-ray probe. At the Linac Coherent Light Source, LCLS, timing differences between the optical pulse and X-ray probe have been observed with an accuracy as low as 50 femtoseconds. This sets a lower bound on the number of frames one can arrange over a time scale to recreate a 'movie' of the chemical reaction. The timing system is based on phase measurements from signals corresponding to the two laser pulses; these measurements are done by using a double-balanced mixer for detection. To increase the accuracy of the system, this paper studies parameters affecting phase detection systems based on mixers, such as signal input power, noise levels, temperature drift, and the effect these parameters have on components such as the mixers, splitters, amplifiers, and phase shifters. Noise data taken with a spectrum analyzer show that splitters based on ferrite cores perform with less noise than strip-line splitters. The data also show that noise in specific mixers does not correspond with the changes in sensitivity per input power level. Temperature drift is seen to exist on a scale between 1 and 27 fs/°C for all of the components tested. Results show that any components using more metallic conductor tend to exhibit more noise as well as more temperature drift. The scale of these effects is large enough that specific care should be given when choosing components and designing the housing of high precision microwave mixing systems for use in detection systems such as the LCLS. With these improvements, the timing accuracy can be improved to lower than currently possible.
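The mixer-based detection described above converts a phase difference between two RF signals into a voltage; the equivalent timing offset scales inversely with the carrier frequency, which is why milliradian-level phase stability matters at the tens-of-femtoseconds scale. A minimal sketch of that conversion, assuming an S-band reference near 2.856 GHz (an assumption for illustration, not a value stated in the abstract):

```python
import math

def phase_to_time(delta_phi_rad, f_hz):
    """Timing offset equivalent to a phase difference at carrier f_hz.

    A double-balanced mixer measures delta_phi; dividing by the angular
    carrier frequency converts it to a time offset.
    """
    return delta_phi_rad / (2.0 * math.pi * f_hz)

# A 1 mrad phase error at an assumed 2.856 GHz reference corresponds to
# roughly 56 fs of timing error, comparable to the 50 fs accuracy above.
jitter_s = phase_to_time(1e-3, 2.856e9)
```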
Analysis of the Components of Economic Potential of Agricultural Enterprises
Vyacheslav Skobara; Volodymyr Podkopaev
2014-01-01
Problems of efficiency of enterprises are increasingly associated with the use of the economic potential of the company. This article addresses the structural components of the economic potential of agricultural enterprises, and the development and substantiation of a model of economic potential with due account of the peculiarities of agricultural production. Based on the study of various approaches to the potential structure, the definition of production, labour, financial and man...
International Nuclear Information System (INIS)
Lue Kunhan; Lin Hsinhon; Chuang Kehshih; Kao Chihhao, K.; Hsieh Hungjen; Liu Shuhsin
2014-01-01
In positron emission tomography (PET) of the dopaminergic system, quantitative measurements of nigrostriatal dopamine function are useful for differential diagnosis. A subregional analysis of striatal uptake enables the diagnostic performance to be more powerful. However, the partial volume effect (PVE) induces an underestimation of the true radioactivity concentration in small structures. This work proposes a simple algorithm for subregional analysis of striatal uptake with partial volume correction (PVC) in dopaminergic PET imaging. The PVC algorithm analyzes the separate striatal subregions and takes into account the PVE based on the recovery coefficient (RC). The RC is defined as the ratio of the PVE-uncorrected to PVE-corrected radioactivity concentration, and is derived from a combination of the traditional volume of interest (VOI) analysis and the large VOI technique. The clinical studies, comprising 11 patients with Parkinson's disease (PD) and 6 healthy subjects, were used to assess the impact of PVC on the quantitative measurements. Simulations on a numerical phantom that mimicked realistic healthy and neurodegenerative situations were used to evaluate the performance of the proposed PVC algorithm. In both the clinical and the simulation studies, the striatal-to-occipital ratio (SOR) values for the entire striatum and its subregions were calculated with and without PVC. In the clinical studies, the SOR values in each structure (caudate, anterior putamen, posterior putamen, putamen, and striatum) were significantly higher by using PVC in contrast to those without. Among the PD patients, the SOR values in each structure and quantitative disease severity ratings were shown to be significantly related only when PVC was used. For the simulation studies, the average absolute percentage error of the SOR estimates before and after PVC were 22.74% and 1.54% in the healthy situation, respectively; those in the neurodegenerative situation were 20.69% and 2
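The recovery-coefficient correction described above reduces, in its simplest form, to dividing the measured concentration by the structure's RC before forming the striatal-to-occipital ratio. A minimal sketch with illustrative numbers (the RC of 0.7 and the count values are hypothetical, and the occipital reference region is assumed large enough that its own RC is approximately 1):

```python
def corrected_sor(striatal_measured, occipital_measured, rc):
    """Striatal-to-occipital ratio with recovery-coefficient PVC.

    rc = PVE-uncorrected / PVE-corrected concentration for a structure
    of the given size (derived from phantom or large-VOI measurements).
    Dividing the measured striatal value by rc undoes the partial-volume
    underestimation; the occipital reference is assumed to have rc ~ 1.
    """
    return (striatal_measured / rc) / occipital_measured

# Illustrative (hypothetical) numbers: a small subregion with rc = 0.7
# measured at 1.4 relative to an occipital reference of 1.0.
sor_uncorrected = 1.4 / 1.0
sor_corrected = corrected_sor(1.4, 1.0, 0.7)
```

The gap between the two values mirrors the ~20% underestimation reported in the phantom simulations before PVC.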
Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao
2015-01-01
Due to the advancement in sensor technology, the growing large medical image data have the ability to visualize the anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. But in the meantime, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing the irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which in practice are difficult to determine. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both the liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383
Directory of Open Access Journals (Sweden)
Shuai Yao
Full Text Available Chinese patent medicines (CPM), generally prepared from several traditional Chinese medicines (TCMs) in accordance with a specific process, are the typical delivery form of TCMs in Asia. To date, quality control of CPMs has typically focused on the evaluation of the final products using fingerprint techniques and multi-component quantification, but rarely on monitoring the whole preparation process, which is considered more important to ensure the quality of CPMs. In this study, a novel and effective "retracing" strategy based on HPLC fingerprints and chemometric analysis was proposed, with Shenkang injection (SKI) serving as an example, to achieve quality control of the whole preparation process. The chemical fingerprints were established initially and then analyzed by similarity, principal component analysis (PCA) and partial least squares-discriminant analysis (PLS-DA) to evaluate the quality and to explore discriminatory components. As a result, the holistic inconsistencies of ninety-three batches of SKIs were identified, and five discriminatory components including emodic acid, gallic acid, caffeic acid, chrysophanol-O-glucoside, and p-coumaroyl-O-galloyl-glucose were labeled as the representative targets to explain the retracing strategy. Through analysis of the variation of these targets in the corresponding semi-products (ninety-three batches), intermediates (thirty-three batches), and the raw materials, successively, the origins of the discriminatory components were determined and some crucial influencing factors were proposed, including the raw materials, the coextraction temperature, the sterilizing conditions, and so on. Meanwhile, a reference fingerprint was established and subsequently applied to the guidance of manufacturing. It was suggested that the production process should be standardized by taking the concentration of the discriminatory components as the diagnostic marker to ensure stable and consistent quality for multi
Directory of Open Access Journals (Sweden)
Renata Dornelles Morgental
2012-04-01
Full Text Available OBJECTIVE: The presence of periapical radiolucency has been used as a criterion for endodontic treatment failure. However, in addition to the inherent limitations of radiographic examinations, radiographic interpretations are extremely subjective. Thus, this study investigated the effect of partial analysis of root filling quality and periapical status on retreatment decisions by general dentists. MATERIAL AND METHODS: Twelve digitized periapical radiographs were analyzed by 10 observers. The study was conducted at three time points at 1-week intervals. Radiographs edited with the Adobe Photoshop CS4 software were analyzed at three time points: first, only root filling quality was analyzed; second, only the periapical areas of the teeth under study were visualized; finally, observers analyzed the unedited radiographic image. Spearman's coefficient was used to analyze the correlations between the scores assigned when the periapical area was not visible and when the unedited radiograph was analyzed, as well as between the scores assigned when root fillings were not visible and when the unedited radiograph was analyzed. Sensitivity, specificity, and positive and negative predictive values between partial images and unedited radiographs were also used to analyze retreatment decisions. The level of significance was set at 5%. RESULTS: The visualization of the root filling on the unedited radiograph affected the interpretation of the periapical status, and the technical quality of the fillings has a greater influence on the general dentist's decision to prescribe endodontic retreatment than the periapical condition. CONCLUSION: In order to make an endodontic diagnosis, the radiographic interpretation process should not only emphasize technical aspects, but also consider biological factors.
Probabilistic structural analysis of aerospace components using NESSUS
Shiao, Michael C.; Nagpal, Vinod K.; Chamis, Christos C.
1988-01-01
Probabilistic structural analysis of a Space Shuttle main engine turbopump blade is conducted using the computer code NESSUS (numerical evaluation of stochastic structures under stress). The goal of the analysis is to derive probabilistic characteristics of blade response given probabilistic descriptions of uncertainties in blade geometry, material properties, and temperature and pressure distributions. Probability densities are derived for critical blade responses. Risk assessment and failure life analysis is conducted assuming different failure models.
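NESSUS itself uses fast probability integration, but the core idea above, deriving probability densities of a structural response from probabilistic descriptions of the inputs, can be sketched with plain Monte Carlo sampling. The bending-type stress relation and every nominal value and scatter below are hypothetical stand-ins, not the SSME turbopump blade model:

```python
import numpy as np

def response_distribution(n_samples=100_000, seed=1):
    """Monte Carlo propagation of input scatter through a toy stress model.

    Inputs (thickness, pressure) are sampled from assumed normal
    distributions; the response percentiles summarize the resulting
    probability density of the stress response.
    """
    rng = np.random.default_rng(seed)
    thickness = rng.normal(5.0, 0.1, n_samples)   # blade thickness, mm (assumed)
    pressure = rng.normal(2.0, 0.2, n_samples)    # surface pressure, MPa (assumed)
    stress = 300.0 * pressure / thickness**2      # toy bending-type response
    return np.percentile(stress, [5, 50, 95])     # density summary

p5, p50, p95 = response_distribution()
```

Risk assessment then amounts to comparing such response percentiles against a capacity distribution, which is the step the fragility-style analyses in this collection formalize.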
Compressive Online Robust Principal Component Analysis with Multiple Prior Information
DEFF Research Database (Denmark)
Van Luong, Huynh; Deligiannis, Nikos; Seiler, Jürgen
low-rank components. Unlike conventional batch RPCA, which processes all the data directly, our method considers a small set of measurements taken per data vector (frame). Moreover, our method incorporates multiple prior information signals, namely previous reconstructed frames, to improve the separation ... and thereafter, update the prior information for the next frame. Using experiments on synthetic data, we evaluate the separation performance of the proposed algorithm. In addition, we apply the proposed algorithm to online video foreground and background separation from compressive measurements. The results show
Seismic fragility analysis of structural components for HFBR facilities
International Nuclear Information System (INIS)
Park, Y.J.; Hofmayer, C.H.
1992-01-01
The paper presents a summary of recently completed seismic fragility analyses of the HFBR facilities. Based on a detailed review of past PRA studies, various refinements were made regarding the strength and ductility evaluation of structural components. Available laboratory test data were analysed to evaluate the formulations used to predict the ultimate strength and deformation capacities of steel, reinforced concrete and masonry structures. The biases and uncertainties were evaluated within the framework of the fragility evaluation methods widely accepted in the nuclear industry. A few examples of fragility calculations are also included to illustrate the use of the presented formulations.
The ethical component of professional competence in nursing: an analysis.
Paganini, Maria Cristina; Yoshikawa Egry, Emiko
2011-07-01
The purpose of this article is to initiate a philosophical discussion about the ethical component of professional competence in nursing from the perspective of Brazilian nurses. Specifically, this article discusses professional competence in nursing practice in the Brazilian health context, based on two different conceptual frameworks. The first framework is derived from the idealistic and traditional approach while the second views professional competence through the lens of historical and dialectical materialism theory. The philosophical analyses show that the idealistic view of professional competence differs greatly from practice. Combining nursing professional competence with philosophical perspectives becomes a challenge when ideals are opposed by the reality and implications of everyday nursing practice.
Energy Technology Data Exchange (ETDEWEB)
Kang, Ho Yang [Korea Research Institute of Standards and Science, Daejeon (Korea, Republic of); Kim, Ki Bok [Chungnam National University, Daejeon (Korea, Republic of)
2003-06-15
In this study, acoustic emission (AE) signals due to surface cracking and moisture movement in flat-sawn boards of oak (Quercus variabilis) during drying under ambient conditions were analyzed and classified using principal component analysis. The AE signals corresponding to surface cracking showed higher peak amplitudes and peak frequencies, and shorter rise times, than those corresponding to moisture movement. To reduce the multicollinearity among AE features and to extract the significant AE parameters, correlation analysis was performed. Over 99% of the variance of the AE parameters could be accounted for by the first four principal components. The classification feasibility and success rate were investigated for two statistical classifiers, one having six independent variables (AE parameters) and the other six principal components. As a result, the statistical classifier having AE parameters showed a success rate of 70.0%, while the statistical classifier having principal components showed a success rate of 87.5%, considerably higher than that of the classifier having AE parameters.
International Nuclear Information System (INIS)
Kang, Ho Yang; Kim, Ki Bok
2003-01-01
In this study, acoustic emission (AE) signals due to surface cracking and moisture movement in flat-sawn boards of oak (Quercus variabilis) during drying under ambient conditions were analyzed and classified using principal component analysis. The AE signals corresponding to surface cracking showed higher peak amplitudes and peak frequencies, and shorter rise times, than those corresponding to moisture movement. To reduce the multicollinearity among AE features and to extract the significant AE parameters, correlation analysis was performed. Over 99% of the variance of the AE parameters could be accounted for by the first four principal components. The classification feasibility and success rate were investigated for two statistical classifiers, one having six independent variables (AE parameters) and the other six principal components. As a result, the statistical classifier having AE parameters showed a success rate of 70.0%, while the statistical classifier having principal components showed a success rate of 87.5%, considerably higher than that of the classifier having AE parameters.
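The dimensionality-reduction step described above (standardizing the AE features, then extracting principal components that concentrate the variance in a few directions) can be sketched as follows; the synthetic feature matrix is illustrative, not the oak-drying data:

```python
import numpy as np

def principal_components(features):
    """PCA of standardized features via the correlation matrix.

    features: (n_signals, n_features) array of AE parameters
    (peak amplitude, peak frequency, rise time, ...).  Standardizing
    first puts features measured in different units on a common scale,
    which addresses the multicollinearity issue mentioned above.
    """
    Z = (features - features.mean(axis=0)) / features.std(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(Z, rowvar=False))
    order = np.argsort(eigval)[::-1]            # largest variance first
    eigval, eigvec = eigval[order], eigvec[:, order]
    scores = Z @ eigvec                         # principal-component scores
    explained = eigval / eigval.sum()           # variance fraction per PC
    return scores, explained

# Six synthetic "AE features" driven by two underlying factors, so the
# first few components account for nearly all of the variance, as in
# the paper's "over 99% in the first four components" observation.
rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=(2, 300))
feats = np.column_stack([
    f1, f1 + 0.05 * rng.normal(size=300),
    f2, f2 + 0.05 * rng.normal(size=300),
    f1 + f2, f1 - f2,
])
scores, explained = principal_components(feats)
```

The `scores` columns (uncorrelated by construction) are what a downstream statistical classifier would consume in place of the raw AE parameters.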
The Blame Game: Performance Analysis of Speaker Diarization System Components
Huijbregts, M.A.H.; Wooters, Chuck
2007-01-01
In this paper we discuss the performance analysis of a speaker diarization system similar to the system that was submitted by ICSI at the NIST RT06s evaluation benchmark. The analysis, which is based on a series of oracle experiments, provides a good understanding of the performance of each system component.
Partial Molar Volumes of Air-Component Gases in Several Liquid n-Alkanes and 1-Alkanols at 313.15 K
Czech Academy of Sciences Publication Activity Database
Izák, Pavel; Cibulka, I.; Heintz, A.
1995-01-01
Roč. 109, č. 2 (1995), s. 227-234 ISSN 0378-3812 Keywords: density data * partial molar volume * gas-liquid mixture Subject RIV: CI - Industrial Chemistry, Chemical Engineering Impact factor: 1.024, year: 1995
Duning, Thomas; Kellinghaus, Christoph; Mohammadi, Siawoosh; Schiffbauer, Hagen; Keller, Simon; Ringelstein, E Bernd; Knecht, Stefan; Deppe, Michael
2010-02-01
Conventional structural MRI fails to identify a cerebral lesion in 25% of patients with cryptogenic partial epilepsy (CPE). Diffusion tensor imaging is an MRI technique sensitive to microstructural abnormalities of cerebral white matter (WM) through quantification of fractional anisotropy (FA). The objective of the present study was to identify focal FA abnormalities in patients with CPE who were deemed MRI-negative during routine presurgical evaluation. Diffusion tensor imaging at 3 T was performed in 12 patients with CPE and normal conventional MRI and in 67 age-matched healthy volunteers. WM integrity was compared between groups on the basis of automated voxel-wise statistics of FA maps using an analysis of covariance. Volumetric measurements from high-resolution T1-weighted images were also performed. Significant FA reductions in WM regions encompassing diffuse areas of the brain were observed when all patients as a group were compared with controls. On an individual basis, voxel-based analyses revealed widespread symmetrical FA reduction in CPE patients. Furthermore, asymmetrical temporal lobe FA reduction was consistently ipsilateral to the electroclinical focus. No significant correlations were found between FA alterations and clinical data. There were no differences in brain volumes of CPE patients compared with controls. Despite normal conventional MRI, WM integrity abnormalities in CPE patients extend far beyond the epileptogenic zone. Given that unilateral temporal lobe FA abnormalities were consistently observed ipsilateral to the seizure focus, analysis of temporal FA may provide an informative in vivo investigation into the localisation of the epileptogenic zone in MRI-negative patients.
Directory of Open Access Journals (Sweden)
M.O.Baba Sheikh
2017-09-01
Full Text Available Canine parvovirus (CPV) remains the most significant viral cause of haemorrhagic enteritis and bloody diarrhoea in puppies over the age of 12 weeks. The objective of the present study was to detect and genotype CPV-2 by polymerase chain reaction (PCR) and to perform phylogenetic analysis using partial VP2 gene sequences. We analysed eight faecal samples of unvaccinated dogs with signs of vomiting and bloody diarrhoea during the period from December 2013 to May 2014 in different locations in Sulaimani, Kurdistan, Iraq. After PCR detection, we found that all viral sequences in our study were CPV-2b variants, which differed genetically by 0.8% to 3.6% from five commercially available vaccines. Alignment of the eight field virus sequences showed 95% to 99.5% similarity. The phylogenetic analysis of the eight field sequences formed two distinct clusters, with two sequences belonging to strains from China and Thailand and the other six to a strain from Egypt. Molecular characterisation and CPV typing are crucial in epidemiological studies for future prevention and control of the disease.
Pretreatment of wastewater: Optimal coagulant selection using Partial Order Scaling Analysis (POSA)
International Nuclear Information System (INIS)
Tzfati, Eran; Sein, Maya; Rubinov, Angelika; Raveh, Adi; Bick, Amos
2011-01-01
Jar-testing is a well-known tool for chemical selection for physical-chemical wastewater treatment. Jar-test results show the treatment efficiency in terms of suspended matter and organic matter removal. However, in spite of having all these results, coagulant selection is not an easy task, because one coagulant can remove the suspended solids efficiently but at the same time increase the conductivity. This makes the final selection of coagulants very dependent on the relative importance assigned to each measured parameter. In this paper, the use of Partial Order Scaling Analysis (POSA) and multi-criteria decision analysis is proposed to help the selection of the coagulant and its concentration in a sequencing batch reactor (SBR). Therefore, starting from the parameters fixed by the jar-test results, these techniques allow these parameters to be weighted, according to the judgments of wastewater experts, and priorities to be established among coagulants. An evaluation of two commonly used coagulation/flocculation aids (alum and ferric chloride) was conducted and, based on the jar tests and the POSA model, ferric chloride (100 ppm) was the best choice. The results obtained show that POSA and multi-criteria techniques are useful tools to select the optimal chemicals for the physical-chemical treatment.
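POSA combines expert weights with partial-order comparisons of the jar-test criteria. As a simplified stand-in, a weighted-sum ranking over min-max normalized criteria illustrates how one coagulant dose can dominate once the weights are fixed; the dose labels, removal percentages, and weights below are all hypothetical, not the paper's data:

```python
import numpy as np

# Hypothetical jar-test outcomes (not from the paper).  Criteria:
# TSS removal %, COD removal % (benefits), added conductivity (a cost).
candidates = {
    "alum 100 ppm":  [78.0, 55.0, 320.0],
    "FeCl3 100 ppm": [85.0, 60.0, 290.0],
}
weights = np.array([0.5, 0.3, 0.2])       # assumed expert judgments
benefit = np.array([True, True, False])   # conductivity: lower is better

def rank(cands):
    """Weighted-sum ranking over min-max normalized criteria."""
    names = list(cands)
    M = np.array([cands[n] for n in names], dtype=float)
    lo, hi = M.min(axis=0), M.max(axis=0)
    N = (M - lo) / np.where(hi > lo, hi - lo, 1.0)   # scale to [0, 1]
    N[:, ~benefit] = 1.0 - N[:, ~benefit]            # invert cost criteria
    scores = N @ weights
    return sorted(zip(names, scores), key=lambda pair: -pair[1])

ranked = rank(candidates)   # best coagulant/dose first
```

The point of POSA proper is that it also exposes which candidates are incomparable before any weights are imposed; the weighted sum here only shows the final aggregation step.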
Yang, J-J; Yoon, U; Yun, H J; Im, K; Choi, Y Y; Lee, K H; Park, H; Hough, M G; Lee, J-M
2013-08-29
A number of imaging studies have reported neuroanatomical correlates of human intelligence with various morphological characteristics of the cerebral cortex. However, it is not yet clear whether these morphological properties of the cerebral cortex account for human intelligence. We assumed that the complex structure of the cerebral cortex could be explained effectively by considering cortical thickness, surface area, sulcal depth and absolute mean curvature together. In 78 young healthy adults (age range: 17-27, male/female: 39/39), we used the full-scale intelligence quotient (FSIQ) and the cortical measurements calculated in native space from each subject to determine how much the combination of various cortical measures explained human intelligence. Since the cortical measures are not independent but highly inter-related, we applied partial least squares (PLS) regression, one of the most promising multivariate analysis approaches, to overcome multicollinearity among the cortical measures. Our results showed that 30% of the variance in FSIQ was explained by the first latent variable extracted from the PLS regression analysis. Although it is difficult to relate the first latent variable to specific anatomy, we found that cortical thickness measures had a substantial impact on the PLS model, making them the most significant factor accounting for FSIQ. Our results strongly suggest that a new predictor combining different morphometric properties of the complex cortical structure is well suited for predicting human intelligence. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.
[MEG]PLS: A pipeline for MEG data analysis and partial least squares statistics.
Cheung, Michael J; Kovačević, Natasa; Fatima, Zainab; Mišić, Bratislav; McIntosh, Anthony R
2016-01-01
The emphasis of modern neurobiological theories has recently shifted from the independent function of brain areas to their interactions in the context of whole-brain networks. As a result, neuroimaging methods and analyses have also increasingly focused on network discovery. Magnetoencephalography (MEG) is a neuroimaging modality that captures neural activity with a high degree of temporal specificity, providing detailed, time-varying maps of neural activity. Partial least squares (PLS) analysis is a multivariate framework that can be used to isolate distributed spatiotemporal patterns of neural activity that differentiate groups or cognitive tasks, to relate neural activity to behavior, and to capture large-scale network interactions. Here we introduce [MEG]PLS, a MATLAB-based platform that streamlines MEG data preprocessing, source reconstruction and PLS analysis in a single unified framework. [MEG]PLS facilitates MRI preprocessing (segmentation and coregistration), MEG preprocessing (filtering, epoching, and artifact correction), MEG sensor analysis in both time and frequency domains, and MEG source analysis with multiple head models and beamforming algorithms, and combines these with a suite of PLS analyses. The pipeline is open-source and modular, utilizing functions from FieldTrip (Donders, NL), AFNI (NIMH, USA), SPM8 (UCL, UK) and PLScmd (Baycrest, CAN), which are extensively supported and continually developed by their respective communities. [MEG]PLS is flexible, providing both a graphical user interface and command-line options, depending on the needs of the user. A visualization suite allows multiple types of data and analyses to be displayed and includes 4-D montage functionality. [MEG]PLS is freely available under the GNU public license (http://meg-pls.weebly.com). Copyright © 2015 Elsevier Inc. All rights reserved.
International Nuclear Information System (INIS)
Genreith, Christoph
2015-01-01
Nuclear waste needs to be characterized for its safe handling and storage. In particular, long-lived actinides render waste characterization challenging. The results described in this thesis demonstrate that Prompt Gamma Neutron Activation Analysis (PGAA) with cold neutrons is a reliable tool for the non-destructive analysis of actinides. Nuclear data required for accurate identification and quantification of actinides were acquired. To this end, a sample design suitable for accurate and precise measurements of prompt γ-ray energies and partial cross sections of long-lived actinides at existing PGAA facilities was presented. Using the developed sample design, the fundamental prompt γ-ray data on ²³⁷Np, ²⁴¹Am and ²⁴²Pu were measured. The data were validated by repeated analysis of different samples at two individual irradiation and counting facilities, the BRR in Budapest and the FRM II in Garching near Munich. By employing cold neutrons, resonance neutron capture by low-energy resonances was avoided during the experiments, an improvement over older neutron-activation-based work at thermal reactor neutron energies. 152 prompt γ-rays of ²³⁷Np were identified, as well as 19 of ²⁴¹Am and 127 of ²⁴²Pu. In all cases, both high- and lower-energy prompt γ-rays were identified. The most intense line of ²³⁷Np was observed at an energy of Eγ = 182.82(10) keV, associated with a partial capture cross section of σγ = 22.06(39) b. The most intense prompt γ-ray lines of ²⁴¹Am and ²⁴²Pu were observed at Eγ = 154.72(7) keV with σγ = 72.80(252) b and Eγ = 287.69(8) keV with σγ = 7.07(12) b, respectively. The measurements described in this thesis provide the first reported quantifications of partial radiative capture cross sections for ²³⁷Np, ²⁴¹Am and ²⁴²Pu measured simultaneously over the large energy range from 45 keV to 12 MeV. Detailed uncertainty assessments were performed and the validity of the given uncertainties was
Nesakumar, Noel; Baskar, Chanthini; Kesavan, Srinivasan; Rayappan, John Bosco Balaguru; Alwarappan, Subbiah
2018-05-22
The moisture content of beetroot varies during long-term cold storage. In this work, we propose a strategy to identify the moisture content and age of beetroot using principal component analysis coupled with Fourier transform infrared (FTIR) spectroscopy. Frequent FTIR measurements were recorded directly from the beetroot sample surface over a period of 34 days to analyse its moisture content, employing attenuated total reflectance in the spectral ranges of 2614-4000 and 1465-1853 cm⁻¹ with a spectral resolution of 8 cm⁻¹. In order to estimate the transmittance peak height (T_p) and the area under the transmittance curve [Formula: see text] over the spectral ranges of 2614-4000 and 1465-1853 cm⁻¹, a Gaussian curve-fitting algorithm was applied to the FTIR data. Principal component and nonlinear regression analyses were used for FTIR data analysis. The score plot over the ranges of 2614-4000 and 1465-1853 cm⁻¹ allowed beetroot quality discrimination. Beetroot quality predictive models were developed by employing a biphasic dose-response function. Validation experiments confirmed that the accuracy of the beetroot quality predictive model reached 97.5%. This work shows that FTIR spectroscopy in combination with principal component analysis and beetroot quality predictive models can serve as an effective tool for discriminating moisture content in fresh, half-spoiled and completely spoiled beetroot samples and for providing status alerts.
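The Gaussian curve-fitting step can be sketched with scipy.optimize.curve_fit: fit a Gaussian to one band, then read the peak height directly from the amplitude and the band area from the closed-form Gaussian integral. The band position, amplitude, and noise level below are invented, not the paper's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

# Synthetic transmittance band on the 2614-4000 cm^-1 range, 8 cm^-1 steps
wavenumber = np.arange(2614.0, 4000.0, 8.0)
rng = np.random.default_rng(1)
signal = gaussian(wavenumber, 0.42, 3300.0, 180.0) + 0.01 * rng.normal(size=wavenumber.size)

(amp, mu, sigma), _ = curve_fit(gaussian, wavenumber, signal, p0=[0.5, 3200.0, 150.0])
peak_height = amp                                # T_p
area = amp * abs(sigma) * np.sqrt(2.0 * np.pi)  # closed-form area under the Gaussian
print(round(peak_height, 2), round(mu, 1))
```

Using the analytic area amp·|σ|·√(2π) avoids numerically integrating the noisy measured curve.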
Dynamic analysis and qualification test of nuclear components
International Nuclear Information System (INIS)
Kim, B.K.; Lee, C.H.; Park, S.H.; Kim, Y.M.; Kim, B.S.; Kim, I.G.; Chung, C.W.; Kim, Y.M.
1981-01-01
This report contains the study on the dynamic characteristics of the Wolsung fuel rod and on the dynamic balancing of rotating machinery to evaluate the performance of nuclear reactor components. The study on the dynamic characteristics of the Wolsung fuel rod was carried out by both experimental and theoretical methods. Forced vibration testing of an actual Wolsung fuel rod using sine-sweep and sine-dwell excitation was conducted to find the dynamic and nonlinear characteristics of the fuel rod. The data obtained from the test were used to analyze the nonlinear impact characteristics of the fuel rod, which has a motion-constraint stop in the center of the rod. The parameters used in the test were the input force level of the exciter, the clearance gap between the fuel rod and the motion constraints, and the excitation frequencies. Test results were in good agreement with the analytical results.
Dissolution And Analysis Of Yellowcake Components For Fingerprinting UOC Sources
International Nuclear Information System (INIS)
Hexel, Cole R.; Bostick, Debra A.; Kennedy, Angel K.; Begovich, John M.; Carter, Joel A.
2012-01-01
There are a number of chemical and physical parameters that might be used to help elucidate the ore body from which uranium ore concentrate (UOC) was derived. It is the variation in the concentration and isotopic composition of these components that can provide information as to the identity of the ore body from which the UOC was mined and the type of subsequent processing that has been undertaken. Oak Ridge National Laboratory (ORNL), in collaboration with Lawrence Livermore and Los Alamos National Laboratories, is surveying ore characteristics of yellowcake samples of known geologic origin. The data sets are being incorporated into a national database to help in sourcing interdicted material, as well as to aid in safeguards and nonproliferation activities. Geologic age and attributes from chemical processing are site-specific. Isotopic abundances of lead, neodymium, and strontium provide insight into the geologic provenance of the ore material. Variations in lead isotopes are due to the radioactive decay of uranium in the ore. Likewise, neodymium isotopic abundances are skewed due to the radiogenic decay of samarium. Rubidium decay similarly alters the strontium isotopic composition in ores. This paper will discuss the chemical processing of yellowcake performed at ORNL. Variations in lead, neodymium, and strontium isotopic abundances are being analyzed in UOC from two geologic sources. Chemical separation and instrumental protocols will be summarized. The data will be correlated with chemical signatures (such as elemental composition and uranium, carbon, and nitrogen isotopic content) to demonstrate the utility of principal component and cluster analyses to aid in the determination of UOC provenance.
Al-Gburi, A.; Freeman, C. T.; French, M. C.
2018-06-01
This paper uses gap metric analysis to derive robustness and performance margins for feedback linearising controllers. Distinct from previous robustness analysis, it incorporates the case of output unstructured uncertainties, and is shown to yield general stability conditions which can be applied to both stable and unstable plants. It then expands on existing feedback linearising control schemes by introducing a more general robust feedback linearising control design which classifies the system nonlinearity into stable and unstable components and cancels only the unstable plant nonlinearities. This is done in order to preserve the stabilising action of the inherently stabilising nonlinearities. Robustness and performance margins are derived for this control scheme, and are expressed in terms of bounds on the plant nonlinearities and the accuracy of the cancellation of the unstable plant nonlinearity by the controller. Case studies then confirm reduced conservatism compared with standard methods.
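The idea of cancelling only the destabilising part of the nonlinearity can be seen in a scalar toy plant. This is an illustration of the general principle only, not the paper's controller or its gap-metric margins: for ẋ = −x³ + 2x + u, a robust design keeps the stabilising −x³ term and cancels only the +2x term.

```python
# Scalar toy plant: x' = f_s(x) + f_u(x) + u
f_s = lambda x: -x**3      # inherently stabilising nonlinearity: preserve it
f_u = lambda x: 2.0 * x    # destabilising term: cancel it in the controller

def simulate(cancel_all, x0=1.0, k=1.0, dt=1e-3, steps=5000):
    """Euler simulation of the closed loop; returns the final state."""
    x = x0
    for _ in range(steps):
        if cancel_all:
            u = -(f_s(x) + f_u(x)) - k * x   # full feedback linearisation
        else:
            u = -f_u(x) - k * x              # cancel only the unstable part
        x += dt * (f_s(x) + f_u(x) + u)
    return x

# Both loops drive the state to the origin; the partial-cancellation controller
# additionally benefits from the natural -x^3 damping at large amplitudes.
print(abs(simulate(cancel_all=False)), abs(simulate(cancel_all=True)))
```

The partial-cancellation loop leaves the −x³ term acting, so large excursions decay faster without extra control effort, which is the motivation the abstract gives for preserving inherently stabilising nonlinearities.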
Liu, Xiao-Fang; Xue, Chang-Hu; Wang, Yu-Ming; Li, Zhao-Jie; Xue, Yong; Xu, Jie
2011-11-01
The present study investigates the feasibility of multi-element analysis for determining the geographical origin of the sea cucumber Apostichopus japonicus, and selects effective tracers for geographical origin assessment. The contents of the elements Al, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, As, Se, Mo, Cd, Hg and Pb in sea cucumber samples from seven geographical origins were determined by ICP-MS, and the results were used to develop an element database. Cluster analysis (CA) and principal component analysis (PCA) were applied to differentiate geographical origin. Three principal components accounting for over 89% of the total variance were extracted from the standardized data. Q-type cluster analysis showed that the 26 samples could be clustered reasonably into five groups, and the classification was significantly associated with the marine distribution of the samples. CA and PCA were effective methods for element analysis of sea cucumber samples, and the mineral element contents proved to be good chemical descriptors for differentiating geographical origin.
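The CA/PCA workflow reads, in outline, like the sketch below: standardize the element concentrations, extract principal components, and cut a Ward dendrogram into flat groups. The element table here is simulated from three hypothetical origins rather than the paper's 26 samples:

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Simulated concentrations of 15 elements (cf. the Al ... Pb panel) for
# samples drawn from three hypothetical geographical origins, 9 samples each.
centers = rng.uniform(1.0, 10.0, size=(3, 15))
X = np.vstack([c + 0.2 * rng.normal(size=(9, 15)) for c in centers])

Z = (X - X.mean(axis=0)) / X.std(axis=0)              # standardize, as in the paper
pca = PCA(n_components=3).fit(Z)
print(round(pca.explained_variance_ratio_.sum(), 2))  # cf. the ~89% reported

labels = fcluster(linkage(Z, method="ward"), t=3, criterion="maxclust")
print(sorted(set(labels)))
```

With well-separated origin centroids the three components carry nearly all variance and the Ward cut recovers the three simulated groups.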
Directory of Open Access Journals (Sweden)
Zhiqiang Shen
2012-01-01
Full Text Available Deformation of partially composite beams under distributed loading and free vibrations of partially composite beams under various boundary conditions are examined in this paper. The weak-form quadrature element method, which is characterized by direct evaluation of the integrals involved in the variational description of a problem, is used. One quadrature element is normally sufficient for a partially composite beam, regardless of the magnitude of the shear connection stiffness. The number of integration points in a quadrature element is adjustable in accordance with convergence requirements. Results are compared with those of various finite element formulations. It is shown that the weak-form quadrature element solution for partially composite beams is free of slip locking, and high computational accuracy is achieved with a smaller number of degrees of freedom. In addition, it is found that the longitudinal inertia of motion cannot simply be neglected in assessing the dynamic behavior of partially composite beams.
Ramakrishaniah, Ravikumar; Al Kheraif, Abdulaziz A; Elsharawy, Mohamed A; Alsaleh, Ayman K; Ismail Mohamed, Karem M; Rehman, Ihtesham Ur
2015-05-01
The purpose of this study was to investigate and compare the load distribution and displacement of cantilever prostheses with and without a glass abutment by three-dimensional finite element analysis. Micro-computed tomography was used to study the relationship between the glass abutment and the ridge. The external surface of the maxilla was scanned, and a simplified finite element model was constructed. The ZX-27 glass abutment and the maxillary first and second premolars were created and modified. The solid model of the three-unit cantilever fixed partial denture was scanned, and the fitting surface was modified with reference to the created abutments using the 3D CAD system. The finite element analysis was completed in ANSYS. The fit and total gap volume between the glass abutment and the dental model were determined by Skyscan 1173 high-energy spiral micro-CT scanning. The finite element analysis showed that the cantilever prosthesis supported by the glass abutment demonstrated significantly less stress on the terminal abutment and less overall deformation of the prosthesis under vertical and oblique loads. Micro-computed tomography determined a gap volume of 6.74162 mm³. By contacting the mucosa, glass abutments transfer some of the masticatory load to the residual alveolar ridge, thereby preventing damage to the periodontal microstructures of the terminal abutment. The passive contact of the glass abutment with the mucosa not only preserves the health of the mucosa covering the ridge but also permits easy cleaning. It is possible to increase the success rate of cantilever FPDs by supporting the cantilevered pontic with glass abutments. Copyright © 2015 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Topochemical Analysis of Cell Wall Components by TOF-SIMS.
Aoki, Dan; Fukushima, Kazuhiko
2017-01-01
Time-of-flight secondary ion mass spectrometry (TOF-SIMS) is a developing analytical tool and a type of imaging mass spectrometry. TOF-SIMS provides mass spectral information with a lateral resolution on the order of submicrons and has widespread applicability. It is sometimes described as a surface analysis method that requires no sample pretreatment; however, several points need to be taken into account to fully exploit the capabilities of TOF-SIMS. In this chapter, we introduce methods for TOF-SIMS sample preparation, as well as basic knowledge for the analysis of TOF-SIMS spectra and images of wood samples.
Study of NΣ cusp in p+p → p+K{sup +}+Λ with partial wave analysis
Energy Technology Data Exchange (ETDEWEB)
Lu, S.; Muenzer, R.; Epple, E.; Fabbietti, L. [Excellenz Cluster Universe, Technische Universitaet Muenchen (Germany); Ritman, J.; Roderburg, E.; Hauenstein, F. [FZ Juelich (Germany); Collaboration: Hades and FOPI Collaboration
2016-07-01
In recent years, an analysis of the exclusive reaction p+p → p+K{sup +}+Λ has been carried out using the Bonn-Gatchina Partial Wave Analysis. In a combined analysis of data from Hades, Fopi, Disto and Cosy-TOF, an energy-dependent production process was determined. This analysis has shown that a sufficient description of p+p → p+K{sup +}+Λ is quite challenging due to the presence of N* resonances and interference effects, which requires partial wave analysis. A pronounced narrow structure is observed in the projection onto the pΛ invariant mass. This peak structure, which appears around the NΣ threshold, is strongly asymmetric and is interpreted as a NΣ cusp effect. In this talk, the results from the combined analysis will be shown, with a special focus on the NΣ cusp structure and a description using the Flatté parametrization.
Mulyana, Cukup; Muhammad, Fajar; Saad, Aswad H.; Mariah, Riveli, Nowo
2017-03-01
The storage tank is the most critical component in an LNG regasification terminal. It carries a risk of failure and accidents with impacts on human health and the environment. Risk assessment is conducted to detect and reduce the risk of failure in the storage tank. The aim of this research is to determine and calculate the probability of failure in a regasification unit of LNG, where failure is caused by Boiling Liquid Expanding Vapor Explosion (BLEVE) and jet fire in the LNG storage tank component. The failure probability is determined using Fault Tree Analysis (FTA), and the impact of the generated heat radiation is calculated. Fault trees for BLEVE and jet fire on the storage tank component were determined, yielding a failure probability of 5.63 × 10⁻¹⁹ for BLEVE and 9.57 × 10⁻³ for jet fire. The failure probability for jet fire is high enough that it needs to be reduced by customizing the PID scheme of the LNG regasification unit in pipeline number 1312 and unit 1. The failure probability after customization is 4.22 × 10⁻⁶.
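Fault tree probabilities combine basic-event probabilities through AND/OR gates. The fragment below shows the gate arithmetic only; the tree structure and the event probabilities are invented, not the ones behind the 9.57 × 10⁻³ figure:

```python
def or_gate(*probs):
    """P(at least one of several independent basic events occurs)."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def and_gate(*probs):
    """P(all of several independent basic events occur)."""
    out = 1.0
    for p in probs:
        out *= p
    return out

# Hypothetical minimal tree: a jet fire needs a release AND ignition,
# and a release occurs if a flange fails OR a pipe ruptures.
release = or_gate(1e-3, 5e-4)
jet_fire = and_gate(release, 0.2)  # assumed ignition probability given release
print(f"P(jet fire) = {jet_fire:.2e}")
```

Note the OR gate uses 1 − Π(1 − pᵢ) rather than a plain sum, which matters once basic-event probabilities stop being tiny.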
Multilevel component analysis of time-resolved metabolic fingerprinting data
Jansen, J.J.; Hoefsloot, H.C.J.; Greef, J. van der; Timmerman, M.E.; Smilde, A.K.
2005-01-01
Genomics-based technologies in systems biology have gained a lot of popularity in recent years. These technologies generate large amounts of data. To obtain information from this data, multivariate data analysis methods are required. Many of the datasets generated in genomics are multilevel
A Principal Components Analysis of the Rathus Assertiveness Schedule.
Law, H. G.; And Others
1979-01-01
Investigated the adequacy of the Rathus Assertiveness Schedule (RAS) as a global measure of assertiveness. Analysis indicated that the RAS does not provide a unidimensional index of assertiveness, but rather measures a number of factors including situation-specific assertive behavior, aggressiveness, and a more general assertiveness. (Author)
Interoperability Assets for Patient Summary Components: A Gap Analysis.
Heitmann, Kai U; Cangioli, Giorgio; Melgara, Marcello; Chronaki, Catherine
2018-01-01
The International Patient Summary (IPS) standards aim to define the specifications for a minimal and non-exhaustive patient summary, which is specialty-agnostic and condition-independent, but still clinically relevant. Meanwhile, health systems are developing and implementing their own variations of a patient summary, while the eHealth Digital Services Infrastructure (eHDSI) initiative is deploying patient summary services across countries in Europe. In the spirit of co-creation, flexible governance, and continuous alignment advocated by eStandards, the Trillum-II initiative promotes adoption of the patient summary by engaging standards organizations and interoperability practitioners in a community of practice for digital health to share best practices, tools, data, specifications, and experiences. This paper compares operational aspects of patient summaries in 14 case studies in Europe, the United States, and across the world, focusing on how patient summary components are used in practice, to promote alignment and a joint understanding that will improve the quality of standards and lower the costs of interoperability.
Growth and proteomic analysis of tomato fruit under partial root-zone drying.
Marjanović, Milena; Stikić, Radmila; Vucelić-Radović, Biljana; Savić, Sladjana; Jovanović, Zorica; Bertin, Nadia; Faurobert, Mireille
2012-06-01
The effects of partial root-zone drying (PRD) on tomato fruit growth and proteome in the pericarp of cultivar Ailsa Craig were investigated. The PRD treatment was 70% of water applied to fully irrigated (FI) plants. PRD reduced the fruit number and slightly increased the fruit diameter, whereas the total fruit fresh weight (FW) and dry weight (DW) per plant did not change. Although the growth rate was higher in FI than in PRD fruits, the longer period of cell expansion resulted in bigger PRD fruits. Proteins were extracted from pericarp tissue at two fruit growth stages (15 and 30 days post-anthesis [dpa]), and submitted to proteomic analysis including two-dimensional gel electrophoresis and mass spectrometry for identification. Proteins related to carbon and amino acid metabolism indicated that slower metabolic flux in PRD fruits may be the cause of a slower growth rate compared to FI fruits. The increase in expression of the proteins related to cell wall, energy, and stress defense could allow PRD fruits to increase the duration of fruit growth compared to FI fruits. Upregulation of some of the antioxidative enzymes during the cell expansion phase of PRD fruits appears to be related to their role in protecting fruits against the mild stress induced by PRD.
International Nuclear Information System (INIS)
Kim, Jong-Yun; Choi, Yong Suk; Park, Yong Joon; Jung, Sung-Hee
2009-01-01
Neutron spectrometry, based on the scattering of high-energy fast neutrons from a radioisotope source and their slowing down by light hydrogen atoms, is a useful technique for non-destructive, quantitative measurement of hydrogen content because it has a large measuring volume and is not affected by temperature, pressure, pH value or color. The most common choices of radioisotope neutron source are ²⁵²Cf and ²⁴¹Am-Be. In this study, ²⁵²Cf with a neutron flux of 6.3×10⁶ n/s was used as an attractive neutron source because of its high neutron flux and weak radioactivity. Pulse-height neutron spectra were obtained using an in-house built radioisotopic neutron spectrometric system equipped with a ³He detector and a multi-channel analyzer, including a neutron shield. As a preliminary study, a polyethylene block (density ∼0.947 g/cc, area 40 cm × 25 cm) was used for the determination of hydrogen content using multivariate calibration models, depending on the thickness of the block. Compared with the results obtained from a simple linear calibration model, the partial least-squares regression (PLSR) method offered better performance in quantitative data analysis. It also revealed that the PLSR method in a neutron spectrometric system can be promising for real-time, online monitoring of powder processes to determine the content of any type of molecule containing hydrogen nuclei.
International Nuclear Information System (INIS)
Xia, X.H.; Chen, Y.B.; Li, J.S.; Tasawar, H.; Alsaedi, A.; Chen, G.Q.
2014-01-01
To cope with the excessive growth of energy consumption, the Chinese government has been trying to strengthen the energy regulation system by introducing new initiatives that aim at controlling the total amount of energy consumption. A partial frontier analysis is performed in this paper to make a comparative assessment of the combinations of possible energy conservation objectives, new constraints and regulation strategies. According to the characteristics of the coordination of existing regulation structure and the optimality of regulation strategy, four scenarios are constructed and regional responsibilities are reasonably divided by fully considering the production technology in the economy. The relative importance of output objectives and the total amount controlling is compared and the impacts on the regional economy caused by the changes of regulation strategy are also evaluated for updating regulation policy. - Highlights: • New initiatives to control the total amount of energy consumption are evaluated. • Twenty-four regulation strategies and four scenarios are designed and compared. • Crucial regions for each sector and regional potential are identified. • The national goals of energy abatement are decomposed into regional responsibilities. • The changes of regulation strategy are evaluated for updating regulation policy
General partial wave analysis of the decay of a hyperon of spin 1/2
International Nuclear Information System (INIS)
Lee, T.D.; Yang, C.N.
1983-01-01
This note considers the general problem of the decay of a hyperon of spin 1/2 into a pion and a nucleon under the general assumption of possible violations of parity conservation, charge-conjugation invariance, and time-reversal invariance. The discussion is in essence a partial wave analysis of the decay phenomena and is independent of the dynamics of the decay. Nonrelativistic approximations are not made on either of the decay products. In the reference system in which the hyperon is at rest there are two possible final states of the pion-nucleon system: s1/2 and p1/2. Denoting the amplitudes of these two states by A and B, one observes that the decay is physically characterized by three real constants specifying the magnitudes of, and the relative phase between, these amplitudes. One of these constants can be taken to be |A|² + |B|², which is evidently proportional to the decay probability per unit time. The other two constants are best defined in terms of experimentally measurable quantities. Three types of experiments are discussed: (a) the angular distribution of the decay pion from a completely polarized hyperon at rest; (b) the longitudinal polarization of the nucleon emitted in the decay of unpolarized hyperons at rest; (c) the transverse polarization of the nucleon emitted in a given direction in the decay of a polarized hyperon.
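In the now-standard notation (sketched here from the usual textbook treatment, not spelled out in this abstract, and up to sign conventions), the two remaining constants are packaged with the rate into decay parameters built from the s- and p-wave amplitudes A and B:

```latex
\alpha = \frac{2\,\operatorname{Re}(A^{*}B)}{|A|^{2}+|B|^{2}}, \qquad
\beta  = \frac{2\,\operatorname{Im}(A^{*}B)}{|A|^{2}+|B|^{2}}, \qquad
\gamma = \frac{|A|^{2}-|B|^{2}}{|A|^{2}+|B|^{2}}, \qquad
\alpha^{2}+\beta^{2}+\gamma^{2}=1 .
```

Experiment (a) then measures α through the pion angular distribution dN/dΩ ∝ 1 + α P·n̂, experiment (b) gives the nucleon longitudinal polarization α for unpolarized hyperons, and experiment (c) is sensitive to β and γ; a nonzero β would signal time-reversal violation in the absence of final-state interactions.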
Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques
2012-09-01
The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, which makes the approach difficult to characterize and evaluate. In this paper, we propose, in the 2-D case, an alternative implementation to the algorithmic definition of the so-called "sifting process" used in the original Huang EMD method. This approach, based on partial differential equations (PDEs), was presented by Niang in previous works, in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean-envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method compared to the original EMD algorithmic version was illustrated in a recent paper. Several 2-D extensions of the EMD method have been proposed recently; despite this effort, 2-D versions of EMD perform poorly and are very time-consuming. In this paper, an extension of the PDE-based approach to 2-D space is therefore described in detail and applied to both signal and image decomposition. The obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data, and some results are provided for image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.
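For readers unfamiliar with the algorithmic sifting that the PDE formulation replaces, a minimal 1-D version (cubic-spline envelopes through local extrema, fixed iteration count; the signal and parameters are arbitrary) looks like:

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(t, x):
    """One sifting step of classical 1-D EMD: subtract the mean envelope."""
    imax = argrelextrema(x, np.greater)[0]
    imin = argrelextrema(x, np.less)[0]
    if imax.size < 2 or imin.size < 2:
        return x  # too few extrema to build spline envelopes
    upper = CubicSpline(t[imax], x[imax])(t)
    lower = CubicSpline(t[imin], x[imin])(t)
    return x - 0.5 * (upper + lower)

t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 25 * t) + 0.5 * np.sin(2 * np.pi * 3 * t)  # fast + slow
h = x.copy()
for _ in range(10):   # fixed-iteration sifting (real EMD uses a stopping criterion)
    h = sift_once(t, h)

# Away from the spline end effects, h approximates the fast oscillation (first IMF)
interior = (t > 0.1) & (t < 0.9)
err = np.abs(h - np.sin(2 * np.pi * 25 * t))[interior].mean()
print(round(err, 3))
```

The mean-envelope subtraction in sift_once is exactly the step the paper replaces with a nonlinear diffusion PDE, trading the ad hoc spline interpolation for a filtering process with analyzable properties.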
International Nuclear Information System (INIS)
Moon, Seong In; Cho, Il Je; Woo, Chang Su; Kim, Wan Doo
2011-01-01
Rubber components, which have been widely used in the automotive industry as anti-vibration components for many years, are subjected to fluctuating loads and often fail due to the nucleation and growth of defects or cracks. To prevent such failures, it is necessary to understand the fatigue failure mechanism of rubber materials and to evaluate the fatigue life of rubber components. The objective of this study is to develop a durability analysis process for vulcanized rubber components that can predict fatigue life at the initial product design step. A method for determining the nonlinear material constants for FE analysis is proposed. To investigate the applicability of commonly used damage parameters, fatigue tests and corresponding finite element analyses were carried out, and normal and shear strains were proposed as the fatigue damage parameters for rubber components. Fatigue analysis of automotive rubber components was performed and the durability analysis process was reviewed.
Predictors of Local Recurrence Following Accelerated Partial Breast Irradiation: A Pooled Analysis
International Nuclear Information System (INIS)
Shah, Chirag; Wilkinson, John Ben; Lyden, Maureen; Beitsch, Peter; Vicini, Frank A.
2012-01-01
Purpose: To analyze a pooled set of nearly 2,000 patients treated on the American Society of Breast Surgeons (ASBS) MammoSite Registry Trial and at William Beaumont Hospital (WBH) to identify factors associated with local recurrence following accelerated partial breast irradiation (APBI). Methods and Materials: A total of 1,961 women underwent partial breast irradiation between April 1993 and November 2010 as part of the ASBS Registry Trial or at WBH. Rates of ipsilateral breast tumor recurrence (IBTR), regional recurrence (RR), distant metastases (DM), disease-free survival (DFS), cause-specific survival (CSS), and overall survival (OS) were analyzed for each group and for the pooled cohort. Clinical, pathologic, and treatment-related variables were analyzed, including age, tumor stage/size, estrogen receptor status, surgical margins, and lymph node status, to determine their association with IBTR. Results: The two groups were similar, but WBH patients were more frequently node-positive, had positive margins, and were less likely to be within the American Society for Radiation Oncology unsuitable group. At 5 years, the rates of IBTR, RR, DM, DFS, CSS, and OS for the pooled group were 2.9%, 0.5%, 2.4%, 89.1%, 98.5%, and 91.8%, respectively. The 5-year rate of true recurrence/marginal miss was 0.8%. Univariate analysis found that negative estrogen receptor status (odds ratio [OR] 2.83, 95% confidence interval 1.55–5.13, p = 0.0007) was the only factor significantly associated with IBTR, while a trend was seen for age less than 50 (OR 1.80, 95% confidence interval 0.90–3.58, p = 0.10). Conclusions: Excellent 5-year outcomes were seen following APBI in over 1,900 patients. Estrogen receptor negativity was the only factor associated with IBTR, while a trend for age less than 50 was noted. Significant differences in factors associated with IBTR were noted between cohorts, suggesting that factors driving IBTR may be predicated based on the risk
PV System Component Fault and Failure Compilation and Analysis.
Energy Technology Data Exchange (ETDEWEB)
Klise, Geoffrey Taylor; Lavrova, Olga; Gooding, Renee Lynne
2018-02-01
This report describes data collection and analysis of solar photovoltaic (PV) equipment events, which consist of faults and failures that occur during the normal operation of a distributed PV system or PV power plant. We present summary statistics from locations where maintenance data is being collected at various intervals, as well as reliability statistics gathered from that data, consisting of fault/failure distributions and repair distributions for a wide range of PV equipment types.
INTEGRATION OF SYSTEM COMPONENTS AND UNCERTAINTY ANALYSIS - HANFORD EXAMPLES
International Nuclear Information System (INIS)
Wood, M.I.
2009-01-01
• Deterministic 'One Off' analyses as basis for evaluating sensitivity and uncertainty relative to reference case • Spatial coverage identical to reference case • Two types of analysis assumptions: Minimax parameter values around reference case conditions; 'What If' cases that change reference case conditions and associated parameter values • No conclusions about likelihood of estimated result other than qualitative expectation that actual outcome should tend toward reference case estimate
Nanni, Arthur Schmidt; Roisenberg, Ari; de Hollanda, Maria Helena Bezerra Maia; Marimon, Maria Paula Casagrande; Viero, Antonio Pedro; Scheibe, Luiz Fernando
2013-01-01
Groundwater with anomalous fluoride content and water mixture patterns were studied in the fractured Serra Geral Aquifer System, a basaltic to rhyolitic geological unit, using a principal component analysis interpretation of groundwater chemical data from 309 deep wells distributed in the Rio Grande do Sul State, Southern Brazil. A four-component model that explains 81% of the total variance in the Principal Component Analysis is suggested. Six hydrochemical groups were identified. δ18O and δ...
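As a sketch of the workflow this abstract describes (standardize the well-chemistry variables, diagonalize the correlation matrix, retain the components explaining most of the variance), the following minimal PCA uses random numbers as a stand-in for the 309-well data table; the variable count and data are illustrative assumptions, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for the well-chemistry table: 309 wells x 8 measured ions.
X = rng.normal(size=(309, 8))

# Standardize each variable, then diagonalize the correlation matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
C = np.corrcoef(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]           # eigh returns ascending order
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()         # fraction of variance per component
scores = Z @ eigvecs[:, :4]                 # sample scores on a 4-component model
cum4 = explained[:4].sum()                  # variance retained by the 4-PC model
```

Hydrochemical groups would then be identified by clustering or inspecting `scores`.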
Principle Component Analysis of AIRS and CrIS Data
Aumann, H. H.; Manning, Evan
2015-01-01
Synthetic Eigen Vectors (EV) used for the statistical analysis of the PC reconstruction residual of large ensembles of data are a novel tool for the analysis of data from hyperspectral infrared sounders like the Atmospheric Infrared Sounder (AIRS) on the EOS Aqua and the Cross-track Infrared Sounder (CrIS) on the SUOMI polar orbiting satellites. Unlike empirical EV, which are derived from the observed spectra, the synthetic EV are derived from a large ensemble of spectra which are calculated assuming that, given a state of the atmosphere, the spectra created by the instrument can be accurately calculated. The synthetic EV are then used to reconstruct the observed spectra. The analysis of the differences between the observed spectra and the reconstructed spectra for Simultaneous Nadir Overpasses of tropical oceans reveals unexpected differences at the more than 200 mK level under relatively clear conditions, particularly in the mid-wave water vapor channels of CrIS. The repeatability of these differences using independently trained SEV and results from different years appears to rule out inconsistencies in the radiative transfer algorithm or the data simulation. The reasons for these discrepancies are under evaluation.
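The reconstruction-residual idea can be sketched as follows: derive eigenvectors from a training ensemble playing the role of the calculated spectra, reconstruct a set of "observed" spectra from the leading eigenvectors, and analyze the residual. All dimensions and data below are hypothetical stand-ins, not AIRS/CrIS radiances:

```python
import numpy as np

rng = np.random.default_rng(1)
n_chan, n_train, n_obs, k = 200, 500, 50, 10

# Hypothetical training ensemble (calculated spectra) and observed spectra.
train = rng.normal(size=(n_train, n_chan))
obs = rng.normal(size=(n_obs, n_chan))

# Eigenvectors of the training ensemble play the role of the "synthetic EV".
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
EV = Vt[:k]                              # leading k eigenvectors (rows, orthonormal)

# Project observations onto the EV basis and reconstruct.
coeff = (obs - mean) @ EV.T
recon = mean + coeff @ EV
residual = obs - recon                   # PC reconstruction residual
rms = np.sqrt((residual ** 2).mean(axis=0))   # per-channel residual statistics
```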
Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.
2014-03-01
Different chemometric models were applied for the quantitative analysis of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in ternary mixture, namely, Partial Least Squares (PLS) as traditional chemometric model and Artificial Neural Networks (ANN) as advanced model. PLS and ANN were applied with and without variable selection procedure (Genetic Algorithm GA) and data compression procedure (Principal Component Analysis PCA). The chemometric methods applied are PLS-1, GA-PLS, ANN, GA-ANN and PCA-ANN. The methods were used for the quantitative analysis of the drugs in raw materials and pharmaceutical dosage form via handling the UV spectral data. A 3-factor 5-level experimental design was established resulting in 25 mixtures containing different ratios of the drugs. Fifteen mixtures were used as a calibration set and the other ten mixtures were used as validation set to validate the prediction ability of the suggested methods. The validity of the proposed methods was assessed using the standard addition technique.
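A minimal PLS-1 regression in the spirit of the calibration described above can be written with the NIPALS algorithm; the "absorbance" matrix, concentrations, and sizes below are synthetic stand-ins, not the paper's spectral data:

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical calibration set: 15 mixtures x 3 wavelengths, concentrations
# exactly linear in the spectra (illustrative, noiseless case).
X = rng.normal(size=(15, 3))
y = X @ np.array([1.0, 2.0, 3.0])

def pls1(X, y, n_comp):
    """PLS-1 via NIPALS; returns regression coefficients for centered data."""
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, q = [], [], []
    Xk, yk = Xc.copy(), yc.copy()
    for _ in range(n_comp):
        w = Xk.T @ yk                       # weight vector
        w /= np.linalg.norm(w)
        t = Xk @ w                          # score vector
        tt = t @ t
        p = Xk.T @ t / tt                   # X loading
        W.append(w); P.append(p); q.append((yk @ t) / tt)
        Xk = Xk - np.outer(t, p)            # deflate X
        yk = yk - q[-1] * t                 # deflate y
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)

B = pls1(X, y, 3)
yhat = (X - X.mean(0)) @ B + y.mean()       # with all 3 components, PLS = OLS fit
```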
Component Analysis of Long-Lag, Wide-Pulse Gamma-Ray Burst ...
Indian Academy of Sciences (India)
Principal Component Analysis of Long-Lag, Wide-Pulse Gamma-Ray. Burst Data. Zhao-Yang Peng. ∗. & Wen-Shuai Liu. Department of Physics, Yunnan Normal University, Kunming 650500, China. ∗ e-mail: pzy@ynao.ac.cn. Abstract. We have carried out a Principal Component Analysis (PCA) of the temporal and spectral ...
Tsivian, Matvey; Ulusoy, Said; Abern, Michael; Wandel, Ayelet; Sidi, A Ami; Tsivian, Alexander
2012-10-01
Anatomic parameters determining renal mass complexity have been used in a number of proposed scoring systems despite lack of a critical analysis of their independent contributions. We sought to assess the independent contribution of anatomic parameters on perioperative outcomes of laparoscopic partial nephrectomy (LPN). Preoperative imaging studies were reviewed for 147 consecutive patients undergoing LPN for a single renal mass. Renal mass anatomy was recorded: Size, growth pattern (endo-/meso-/exophytic), centrality (central/hilar/peripheral), anterior/posterior, lateral/medial, polar location. Multivariable models were used to determine associations of anatomic parameters with warm ischemia time (WIT), operative time (OT), estimated blood loss (EBL), intra- and postoperative complications, as well as renal function. All models were adjusted for the learning curve and relevant confounders. Median (range) tumor size was 3.3 cm (1.5-11 cm); 52% were central and 14% hilar. While 44% were exophytic, 23% and 33% were mesophytic and endophytic, respectively. Anatomic parameters did not uniformly predict perioperative outcomes. WIT was associated with tumor size (P=0.068), centrality (central, P=0.016; hilar, P=0.073), and endophytic growth pattern (P=0.017). OT was only associated with tumor size; no anatomic parameter predicted EBL. Tumor centrality increased the odds of overall and intraoperative complications, without reaching statistical significance. Postoperative renal function was not associated with any of the anatomic parameters considered after adjustment for baseline function and WIT. Learning curve, considered as a confounder, was independently associated with reduced WIT and OT as well as reduced odds of intraoperative complications. This study provides a detailed analysis of the independent impact of renal mass anatomic parameters on perioperative outcomes. Our findings suggest diverse independent contributions of the anatomic parameters to the
Simoens, Steven
2011-01-01
This study aims to compute the budget impact of lacosamide, a new adjunctive therapy for partial-onset seizures in epilepsy patients from 16 years of age who are uncontrolled and have previously used at least three anti-epileptic drugs, from a Belgian healthcare payer perspective. The budget impact analysis compared the 'world with lacosamide' to the 'world without lacosamide' and calculated how a change in the mix of anti-epileptic drugs used to treat uncontrolled epilepsy would impact drug spending from 2008 to 2013. Data on the number of patients and on the market shares of anti-epileptic drugs were taken from Belgian sources and from the literature. Unit costs of anti-epileptic drugs originated from Belgian sources. The budget impact was calculated from two scenarios about the market uptake of lacosamide. The Belgian target population is expected to increase from 5333 patients in 2008 to 5522 patients in 2013. Assuming that the market share of lacosamide increases linearly over time and is taken evenly from all other anti-epileptic drugs (AEDs), the budget impact of adopting adjunctive therapy with lacosamide increases from €5249 (0.1% of reference drug budget) in 2008 to €242,700 (4.7% of reference drug budget) in 2013. Assuming that 10% of patients use standard AED therapy plus lacosamide, the budget impact of adopting adjunctive therapy with lacosamide is around €800,000-900,000 per year (or 16.7% of the reference drug budget). Adjunctive therapy with lacosamide would raise drug spending for this patient population by as much as 16.7% per year. However, this budget impact analysis did not consider the fact that lacosamide reduces costs of seizure management and withdrawal. The literature suggests that, if savings in other healthcare costs are taken into account, adjunctive therapy with lacosamide may be cost saving.
Nuclear plant components: mechanical analysis and lifetime evaluation
International Nuclear Information System (INIS)
Chator, T.
1993-09-01
This paper concerns the methodology adopted by the Research and Development Division to handle mechanical problems found in structures and machines. Usually, these often very complex studies (3-D structures, complex loadings, non-linear behavior laws) call for advanced tools and calculation means. In order to perform these complex studies, the R and D Division is developing software that handles very complex thermo-mechanical analysis using the Finite Element Method. It enables us to analyse static, dynamic and elasto-plastic problems, as well as contact problems, and to evaluate damage and lifetime of structures. This paper is illustrated by actual industrial case examples. The major ones deal with: 1. Analysis of a new impeller/shaft assembly of a primary coolant pump. The 3D mesh is submitted simultaneously to thermal load, pressure, hydraulic, centrifugal and axial forces and clamping of studs; contacts between shaft/impeller, nuts bearing side/shaft bearing side. For this study, we have developed a new method to handle the clamping of studs. The stud elongation value is given to the software, which automatically computes the distortions between both structures in contact and then the final position of bearing areas (using an iterative non-linear algorithm of modified Newton-Raphson type). 2. Analysis of the stress intensity factor of a crack. The 3D mesh (representing the crack) is submitted simultaneously to axial and radial forces. In this case, we use the Theta method to calculate the energy restitution rate in order to determine the stress intensity factors. (authors). 7 figs., 1 tab., 3 refs
Blind Component Separation in Wavelet Space: Application to CMB Analysis
Directory of Open Access Journals (Sweden)
J. Delabrouille
2005-09-01
Full Text Available It is a recurrent issue in astronomical data analysis that observations are incomplete maps with missing patches or intentionally masked parts. In addition, many astrophysical emissions are nonstationary processes over the sky. All these effects impair data processing techniques which work in the Fourier domain. Spectral matching ICA (SMICA is a source separation method based on spectral matching in Fourier space designed for the separation of diffuse astrophysical emissions in cosmic microwave background observations. This paper proposes an extension of SMICA to the wavelet domain and demonstrates the effectiveness of wavelet-based statistics for dealing with gaps in the data.
Importance Analysis of In-Service Testing Components for Ulchin Unit 3
International Nuclear Information System (INIS)
Dae-Il Kan; Kil-Yoo Kim; Jae-Joo Ha
2002-01-01
We performed an importance analysis of In-Service Testing (IST) components for Ulchin Unit 3 using the integrated evaluation method for categorizing component safety significance developed in this study. The importance analysis using the developed method is initiated by ranking the component importance using quantitative PSA information. The importance analysis of the IST components not modeled in the PSA is performed through engineering judgment, based on PSA expertise and on the quantitative and qualitative information for the IST components. The PSA scope for importance analysis includes not only the Level 1 and 2 internal PSA but also the Level 1 external and shutdown/low power operation PSA. The importance analysis results for valves show that 167 (26.55%) of the 629 IST valves are HSSCs and 462 (73.45%) are LSSCs. Those for pumps show that 28 (70%) of the 40 IST pumps are HSSCs and 12 (30%) are LSSCs. (authors)
Sronsri, Chuchai; Boonchom, Banjong
2018-04-01
A simple precipitation method was used to effectively synthesize a partially metal-doped phosphate hydrate (Mn0.9Mg0.1HPO4·3H2O), whereas the thermal decomposition process of the above hydrate precursor was used to obtain Mn1.8Mg0.2P2O7 and LiMn0.9Mg0.1PO4 compounds under different conditions. To separate the overlapping thermal decomposition peaks, a deconvolution technique was used, and the separated peak was applied to calculate the water content. The factor group splitting analysis was used to exemplify their vibrational spectra obtained from normal vibrations of HPO42-, H2O, P2O74- and PO43- functional groups. Further, the deconvoluted bending mode of water was clearly observed. Mn0.9Mg0.1HPO4·3H2O was observed in the orthorhombic crystal system with the space group of Pbca (D2h15). The formula units per unit cell were found to be eight (Z = 8), and the site symmetry type of HPO42- was observed as Cs. For the HPO42- unit, the correlation field splitting analysis of type C3v - Cs - D2h15 was calculated and had 96 internal modes, whereas H2O in the above hydrate was symbolized as C2v - Cs - D2h15 and had 24 modes. The symbol C2v - Cs - C2h3 was used for the correlation field splitting analysis of P2O74- in Mn1.8Mg0.2P2O7 (monoclinic, C2/m (C2h3), Z = 2, and 42 modes). Finally, the symbol Td - Cs - D2h16 was used for the correlation field splitting analysis of PO43- in LiMn0.9Mg0.1PO4 (orthorhombic, Pnma (D2h16), Z = 4, and 36 modes).
Modeling and Analysis of Component Faults and Reliability
DEFF Research Database (Denmark)
Le Guilly, Thibaut; Olsen, Petur; Ravn, Anders Peter
2016-01-01
This chapter presents a process to design and validate models of reactive systems in the form of communicating timed automata. The models are extended with faults associated with probabilities of occurrence. This enables a fault tree analysis of the system using minimal cut sets that are automatically generated. The stochastic information on the faults is used to estimate the reliability of the fault affected system. The reliability is given with respect to properties of the system state space. We illustrate the process on a concrete example using the Uppaal model checker for validating the ideal system model and the fault modeling. Then the statistical version of the tool, UppaalSMC, is used to find reliability estimates.
Directory of Open Access Journals (Sweden)
R. Ramaprabha
2015-06-01
Full Text Available Mismatching effects due to partially shaded conditions are the major drawback existing in today’s photovoltaic (PV) systems. These mismatch effects are greatly reduced in distributed PV system architecture, where each panel is effectively decoupled from its neighboring panel. To obtain the optimal operation of the PV panels, maximum power point tracking (MPPT) techniques are used. In partially shaded conditions, detecting the maximum operating point is difficult, as the characteristic curves are complex, with multiple peaks. In this paper, a neural network control technique is employed for MPPT. Detailed analyses were carried out on MPPT controllers in centralized and distributed architecture under partially shaded environments. The efficiency of the MPPT controllers and the effectiveness of the proposed control technique under partially shaded environments were examined using MATLAB software. The results were validated through experimentation.
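The multiple-peak difficulty can be illustrated with a toy two-peak P-V curve: a global sweep finds the true maximum power point, while a naive perturb-and-observe climber started on the shaded peak stalls there. The curve below is an arbitrary analytic stand-in, not a PV panel model, and is unrelated to the paper's neural-network controller:

```python
import numpy as np

# Hypothetical two-peak P-V curve of a partially shaded string (illustrative only).
v = np.linspace(0.0, 40.0, 2001)
p = 60.0 * np.exp(-((v - 12.0) / 5.0) ** 2) + 90.0 * np.exp(-((v - 30.0) / 4.0) ** 2)

# Global-scan MPPT: sweep the full voltage range and keep the best operating point.
i_mpp = int(np.argmax(p))
v_mpp, p_mpp = v[i_mpp], p[i_mpp]          # true (global) maximum power point

# A naive hill-climber started near the shaded peak stalls on the local maximum.
j = int(np.argmin(np.abs(v - 12.0)))
while 0 < j < len(v) - 1 and p[j + 1] > p[j]:
    j += 1
```

The scan recovers the peak near 30 V, while the climber stays at the 12 V local peak.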
Izadi Najafabadi, Mohammad; Somers, Bart; Johansson, Bengt; Dam, Nico
2017-01-01
A relatively high level of stratification (qualitatively: lack of homogeneity) is one of the main advantages of partially premixed combustion over the homogeneous charge compression ignition concept. Stratification can smooth the heat release rate
The Coopersmith Self-Esteem Inventory: analysis and partial validation of a modified adult form.
Myhill, J; Lorr, M
1978-01-01
Determined the factor structure of an adult form of the Coopersmith Self-Esteem Inventory (SEI), tested several hypotheses related to its content, and assessed the utility of the five derived scores for differentiating psychiatric outpatients from normals. The modified Self-Esteem Inventory and six other scales were completed by 200 local-government employees. A principal components analysis of correlations among 58 SEI items and two marker variables revealed five factors. The rotated dimensions were labelled (1) anxiety; (2) defensiveness; (3) negative social attitude; (4) rejection of self; and (5) inadequacy of self. Fifty psychiatric outpatients were compared with 100 normals with respect to the five derived factor scores. Tests of significance indicated that the two groups differed significantly on all measures except the defensiveness or lie scale factor. It is concluded that the Coopersmith Inventory is complex and measures several characteristics in addition to self-esteem.
Henseler, Jorg; Chin, Wynne W.
2010-01-01
In social and business sciences, the importance of the analysis of interaction effects between manifest as well as latent variables steadily increases. Researchers using partial least squares (PLS) to analyze interaction effects between latent variables need an overview of the available approaches as well as their suitability. This article…
International Nuclear Information System (INIS)
Diaz Sanchidrian, C.
1989-01-01
The present paper applies dimensional analysis with spatial discrimination to transform the partial differential equations developed in the theory of heat transmission into ordinary ones. The effectiveness of the method is comparable to that of methods based on transformations of uni- or multiparametric groups, with the advantage of being more direct and simple. (Author)
Lopes da Silva, F.H.; Vos, J.E.; Mooibroek, J.; Rotterdam, A. van
1980-01-01
The thalamo-cortical relationships of alpha rhythms have been analysed in dogs using partial coherence function analysis. The objective was to clarify how far the large intracortical coherence commonly recorded between different cortical sites could depend on a common thalamic site. It was found
Thermal Analysis of Fermilab Mu2e Beamstop and Structural Analysis of Beamline Components
Energy Technology Data Exchange (ETDEWEB)
Narug, Colin S. [Northern Illinois U.
2018-01-01
The Mu2e project at Fermi National Accelerator Laboratory aims to observe the unique conversion of muons to electrons. The success or failure of the experiment to observe this conversion will further the understanding of the standard model of physics. Using the particle accelerator, protons will be accelerated and sent to the Mu2e experiment, which will separate the muons from the beam. The muons will then be observed to determine their momentum and the particle interactions that occur. At the end of the Detector Solenoid, the internal components will need to absorb the remaining particles of the experiment using polymer absorbers. Because the internal structure of the beamline is in a vacuum, the heat transfer mechanisms that can disperse the energy generated by the particle absorption are limited to conduction and radiation. To determine the extent to which the absorbers will heat up over one year of operation, a transient thermal finite element analysis has been performed on the Muon Beam Stop. The levels of energy absorption were adjusted to determine the thermal limit for the current design. Structural finite element analysis has also been performed to determine the safety factors of the Axial Coupler, which connects and moves segments of the beamline. The safety factor of the trunnion of the Instrument Feed Through Bulk Head has also been determined for when it is supporting the Muon Beam Stop. The results of the analysis further refine the design of the beamline components prior to testing, fabrication, and installation.
Data-Parallel Mesh Connected Components Labeling and Analysis
Energy Technology Data Exchange (ETDEWEB)
Harrison, Cyrus; Childs, Hank; Gaither, Kelly
2011-04-10
We present a data-parallel algorithm for identifying and labeling the connected sub-meshes within a domain-decomposed 3D mesh. The identification task is challenging in a distributed-memory parallel setting because connectivity is transitive and the cells composing each sub-mesh may span many or all processors. Our algorithm employs a multi-stage application of the Union-find algorithm and a spatial partitioning scheme to efficiently merge information across processors and produce a global labeling of connected sub-meshes. Marking each vertex with its corresponding sub-mesh label allows us to isolate mesh features based on topology, enabling new analysis capabilities. We briefly discuss two specific applications of the algorithm and present results from a weak scaling study. We demonstrate the algorithm at concurrency levels up to 2197 cores and analyze meshes containing up to 68 billion cells.
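The core merging primitive, Union-find with path compression, can be sketched serially on a toy adjacency list; the mesh below is hypothetical, and the paper's contribution is the data-parallel, multi-stage version of this idea:

```python
def find(parent, x):
    """Return the root label of x, compressing the path as we go."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]      # path halving
        x = parent[x]
    return x

def union(parent, a, b):
    """Merge the sets containing cells a and b."""
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

# Hypothetical cell-adjacency list for a tiny mesh of 6 cells.
edges = [(0, 1), (1, 2), (3, 4)]
parent = list(range(6))
for a, b in edges:
    union(parent, a, b)

labels = [find(parent, x) for x in range(6)]
n_components = len(set(labels))            # -> 3: {0,1,2}, {3,4}, {5}
```

In the distributed setting, each processor labels its local cells this way and the per-processor results are then merged across domain boundaries.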
A Bayesian Analysis of Unobserved Component Models Using Ox
Directory of Open Access Journals (Sweden)
Charles S. Bos
2011-05-01
Full Text Available This article details a Bayesian analysis of the Nile river flow data, using a similar state space model as other articles in this volume. For this data set, Metropolis-Hastings and Gibbs sampling algorithms are implemented in the programming language Ox. These Markov chain Monte Carlo methods only provide output conditioned upon the full data set. For filtered output, conditioning only on past observations, the particle filter is introduced. The sampling methods are flexible, and this advantage is used to extend the model to incorporate a stochastic volatility process. The volatility changes both in the Nile data and also in daily S&P 500 return data are investigated. The posterior density of parameters and states is found to provide information on which elements of the model are easily identifiable, and which elements are estimated with less precision.
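A minimal random-walk Metropolis sampler of the kind discussed can be sketched by reducing the problem to estimating a single constant level from noisy data under a flat prior; the series, noise level, and step size below are illustrative assumptions, not the article's Nile state space model:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical stand-in for a flow series: noisy observations of an unknown level.
y = 900.0 + 50.0 * rng.normal(size=200)

def log_post(mu):
    # Flat prior on mu; Gaussian likelihood with known sigma = 50.
    return -0.5 * np.sum(((y - mu) / 50.0) ** 2)

# Random-walk Metropolis: propose mu' = mu + step, accept with prob min(1, ratio).
mu, chain = 500.0, []
for _ in range(5000):
    prop = mu + 10.0 * rng.normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop
    chain.append(mu)

post_mean = float(np.mean(chain[1000:]))   # discard burn-in
```

With a flat prior the posterior mean should land close to the sample mean of `y`; the full article replaces this scalar target with the state space model and adds Gibbs steps and a particle filter.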
Determination of inorganic component in plastics by neutron activation analysis
International Nuclear Information System (INIS)
Mateus, Sandra Fonseca; Saiki, Mitiko
1995-01-01
In order to identify possible sources of heavy metals in municipal solid waste incinerator ashes, plastic materials originating mainly from household waste were analyzed using the instrumental neutron activation analysis method. Plastic samples and synthetic standards of elements were irradiated at the IEA-R1 nuclear reactor for 8 h under a thermal neutron flux of about 10^13 n cm^-2 s^-1. After an adequate decay time, counting was carried out using a hyperpure Ge detector and the concentrations of the elements As, Ba, Br, Cd, Co, Cr, Fe, Sb, Sc, Se, Sn, Ti and Zn were determined. For some samples, not all of these elements were detected. Moreover, the range of concentrations determined in samples of similar type and color varied from a few ppb to the percent range. In general, colored and opaque plastic samples presented higher concentrations of the elements than transparent and milky plastics. The precision of the results was also evaluated. (author). 3 refs., 2 tabs
Component Analysis of Bee Venom from June to September
Directory of Open Access Journals (Sweden)
Ki Rok Kwon
2007-06-01
Full Text Available Objectives : The aim of this study was to observe the variation of Bee Venom content by collection period. Methods : Content analysis of Bee Venom was rendered using an HPLC method against a melittin standard. Results : Analyzing melittin content using HPLC, Bee Venom contained 478.97 mg/g in June, 493.89 mg/g in July, 468.18 mg/g in August and 482.15 mg/g in September, so the change in melittin content was not significant from June to September. Conclusion : From these results, we conclude cautiously that collection time is not an important factor for the quality control of Bee Venom, within the period from June to September.
Institute of Scientific and Technical Information of China (English)
谢佑卿
2011-01-01
In the framework of systematic science of alloys, the average molar property (volume and potential energy) functions of disordered alloys were established. From these functions, the average molar volume functions, partial molar volume functions, derivative functions with respect to composition, the general equation relating partial and average molar properties of components, the difference equation, the constraining equation between partial and average molar properties, as well as the general Gibbs-Duhem formula were derived. It was proved that the partial molar properties calculated from various combinative functions of the average molar properties of alloys are equal, but in general the partial molar properties are not equal to the average molar properties of a given component; that is, the partial molar properties cannot represent the corresponding molar properties of the component. All the equations and functions established in this work were verified by calculating the partial and average atomic volumes of the components, as well as the average atomic volumes of the alloys, in the Au-Ni system.
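The central relations this abstract describes can be written compactly for a binary alloy. The following is a standard-thermodynamics sketch in textbook form, not the paper's specific combinative functions:

```latex
% Average molar property Q(x_B) of a binary A-B alloy; partial molar properties:
\bar{Q}_A = Q - x_B \frac{\mathrm{d}Q}{\mathrm{d}x_B}, \qquad
\bar{Q}_B = Q + (1 - x_B) \frac{\mathrm{d}Q}{\mathrm{d}x_B},
% which recombine to Q = x_A \bar{Q}_A + x_B \bar{Q}_B,
% and satisfy the Gibbs-Duhem relation:
x_A \, \mathrm{d}\bar{Q}_A + x_B \, \mathrm{d}\bar{Q}_B = 0 .
```

Substituting the first two expressions back (with x_A = 1 - x_B) recovers Q identically, which is the sense in which partial and average molar properties are mutually consistent yet not equal.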
International Nuclear Information System (INIS)
Bakalis, Diamantis P.; Stamatis, Anastassios G.
2012-01-01
Highlights: ► Hybrid SOFC/GT system based on existing components. ► Exergy analysis using AspenPlus™ software. ► Greenhouse gases emission is significantly affected by SOFC stack temperature. ► Comparison with a conventional GT of similar power. ► SOFC/GT is almost twice as efficient in terms of second law efficiency and CO2 emission. - Abstract: The paper deals with the examination of a hybrid system consisting of a pre-commercially available high temperature solid oxide fuel cell and an existing recuperated microturbine. The irreversibilities and thermodynamic inefficiencies of the system are evaluated after examining the full and partial load exergetic performance and estimating the amount of exergy destruction and the efficiency of each hybrid system component. At full load operation the system achieves an exergetic efficiency of 59.8%, which increases during partial load operation, as a variable speed control method is utilized. Furthermore, the effects of various performance parameters such as fuel cell stack temperature and fuel utilization factor are assessed. The results showed that the components in which chemical reactions occur have the highest exergy destruction rates. The exergetic performance of the system is affected significantly by the stack temperature. Based on the exergetic analysis, suggestions are given for reducing the overall system irreversibility. Finally, the environmental impact of the operation of the hybrid system is evaluated and compared with a similarly rated conventional gas turbine plant. From the comparison it is apparent that the hybrid system achieves nearly double the exergetic efficiency with about half the greenhouse gas emissions of the conventional plant.
Competition analysis on the operating system market using principal component analysis
Directory of Open Access Journals (Sweden)
Brătucu, G.
2011-01-01
Full Text Available The operating system market has evolved greatly. The largest software producer in the world, Microsoft, dominates the operating systems segment. With three operating systems: Windows XP, Windows Vista and Windows 7, the company held a market share of 87.54% in January 2011. Over time, open source operating systems have begun to penetrate the market, strongly affecting other manufacturers. Companies such as Apple Inc. and Google Inc. have penetrated the operating system market. This paper aims to compare the best-selling operating systems on the market in terms of defining characteristics. To this purpose the principal components analysis method was used.
DEFF Research Database (Denmark)
Rasmussen, Peter Mondrup; Abrahamsen, Trine Julie; Madsen, Kristoffer Hougaard
2012-01-01
We investigate the use of kernel principal component analysis (PCA) and the inverse problem known as pre-image estimation in neuroimaging: i) We explore kernel PCA and pre-image estimation as a means for image denoising as part of the image preprocessing pipeline. Evaluation of the denoising procedure is performed within a data-driven split-half evaluation framework. ii) We introduce manifold navigation for exploration of a nonlinear data manifold, and illustrate how pre-image estimation can be used to generate brain maps in the continuum between experimentally defined brain states/classes.
Independent component analysis of dynamic contrast-enhanced computed tomography images
Energy Technology Data Exchange (ETDEWEB)
Koh, T S [School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Ave, Singapore 639798 (Singapore); Yang, X [School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Ave, Singapore 639798 (Singapore); Bisdas, S [Department of Diagnostic and Interventional Radiology, Johann Wolfgang Goethe University Hospital, Theodor-Stern-Kai 7, D-60590 Frankfurt (Germany); Lim, C C T [Department of Neuroradiology, National Neuroscience Institute, 11 Jalan Tan Tock Seng, Singapore 308433 (Singapore)
2006-10-07
Independent component analysis (ICA) was applied on dynamic contrast-enhanced computed tomography images of cerebral tumours to extract spatial component maps of the underlying vascular structures, which correspond to different haemodynamic phases as depicted by the passage of the contrast medium. The locations of arteries, veins and tumours can be separately identified on these spatial component maps. As the contrast enhancement behaviour of the cerebral tumour differs from the normal tissues, ICA yields a tumour component map that reveals the location and extent of the tumour. Tumour outlines can be generated using the tumour component maps, with relatively simple segmentation methods. (note)
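The separation step can be sketched with a miniature FastICA: whiten the observed mixtures, then run a symmetric fixed-point iteration with the tanh nonlinearity. The two "time courses" and the mixing matrix below are hypothetical stand-ins for contrast-enhancement curves, not CT data:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
# Hypothetical enhancement time courses standing in for vessel/tumour signals.
S = np.vstack([np.sin(2 * np.pi * t), np.sign(np.sin(3 * np.pi * t))])
A = np.array([[1.0, 0.6], [0.4, 1.0]])     # unknown mixing matrix
X = A @ S                                  # observed pixel time courses (mixtures)

# Whiten the mixtures.
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(Xc))
Z = E @ np.diag(d ** -0.5) @ E.T @ Xc

# Symmetric FastICA with the tanh nonlinearity.
W = rng.normal(size=(2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    W = (G @ Z.T) / Z.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W)
    W = U @ Vt                             # symmetric decorrelation

recovered = W @ Z                          # estimated sources (order/sign ambiguous)
```

In the imaging application, each recovered component has an associated spatial map whose weights localize the corresponding vascular or tumour structure.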
Progress Towards Improved Analysis of TES X-ray Data Using Principal Component Analysis
Busch, S. E.; Adams, J. S.; Bandler, S. R.; Chervenak, J. A.; Eckart, M. E.; Finkbeiner, F. M.; Fixsen, D. J.; Kelley, R. L.; Kilbourne, C. A.; Lee, S.-J.;
2015-01-01
The traditional method of applying a digital optimal filter to measure X-ray pulses from transition-edge sensor (TES) devices does not achieve the best energy resolution when the signals have a highly non-linear response to energy, or the noise is non-stationary during the pulse. We present an implementation of a method to analyze X-ray data from TESs, which is based upon principal component analysis (PCA). Our method separates the X-ray signal pulse into orthogonal components that have the largest variance. We typically recover pulse height, arrival time, differences in pulse shape, and the variation of pulse height with detector temperature. These components can then be combined to form a representation of pulse energy. An added value of this method is that by reporting information on more descriptive parameters (as opposed to a single number representing energy), we generate a much more complete picture of the pulse received. Here we report on progress in developing this technique for future implementation on X-ray telescopes. We used an 55Fe source to characterize Mo/Au TESs. On the same dataset, the PCA method recovers a spectral resolution that is better by a factor of two than achievable with digital optimal filters.
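The decomposition into orthogonal components of largest variance can be sketched with an SVD of simulated pulse records; here the leading-component score serves as a pulse-height estimate. The pulse shape, noise level, and record count are invented for illustration and are not the instrument's parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(512)
# Hypothetical double-exponential pulse template with per-pulse amplitudes.
template = np.exp(-t / 80.0) - np.exp(-t / 10.0)
amps = 1.0 + 0.1 * rng.normal(size=300)              # stand-in "energies"
pulses = amps[:, None] * template + 0.01 * rng.normal(size=(300, 512))

# PCA via SVD of the mean-subtracted pulse records.
mean = pulses.mean(axis=0)
U, s, Vt = np.linalg.svd(pulses - mean, full_matrices=False)
scores = U[:, 0] * s[0]      # projection of each record on the leading component

# Up to sign and offset, the leading-PC score tracks the pulse amplitude.
corr = abs(np.corrcoef(scores, amps)[0, 1])
```

In practice several leading components (height, arrival time, shape, temperature drift) are combined into a single energy estimate.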
Crude oil price analysis and forecasting based on variational mode decomposition and independent component analysis
E, Jianwei; Bao, Yanling; Ye, Jimin
2017-10-01
As one of the most vital energy resources in the world, crude oil plays a significant role in the international economic market, and the fluctuation of its price has attracted both academic and commercial attention. Many methods exist for forecasting the trend of the crude oil price, but traditional models often fail to predict it accurately. This paper therefore proposes a hybrid method that combines variational mode decomposition (VMD), independent component analysis (ICA) and the autoregressive integrated moving average (ARIMA), called VMD-ICA-ARIMA. The purpose of this study is to analyze the factors that influence the crude oil price and to predict its future values. The major steps are as follows. First, the VMD model is applied to the original signal (the crude oil price) to decompose it adaptively into mode functions. Second, independent components are separated by ICA, and their influence on the crude oil price is analyzed. Finally, the price is forecast with the ARIMA model; the forecast trend shows that the crude oil price declines periodically. Compared with the benchmark ARIMA and EEMD-ICA-ARIMA models, VMD-ICA-ARIMA forecasts the crude oil price more accurately.
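A hedged sketch of the final forecasting step: in place of a full ARIMA implementation, an AR(p) model is fitted by least squares on the differenced series, i.e. an ARIMA(p,1,0) approximation. The synthetic series and all parameters are invented, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(9)
# Toy stand-in for one decomposed price component: trend + cycle + noise
n = 300
series = 50 + 0.05 * np.arange(n) + 2 * np.sin(np.arange(n) / 10) \
         + 0.3 * rng.standard_normal(n)

# Fit AR(p) by least squares on the differenced data (ARIMA(p,1,0) sketch)
p = 5
d = np.diff(series)
Xlag = np.column_stack([d[i:len(d) - p + i] for i in range(p)])  # lagged diffs
ylag = d[p:]                                                     # next diff
coef, *_ = np.linalg.lstsq(Xlag, ylag, rcond=None)

# One-step-ahead forecast: predicted next difference added to the last level
next_diff = d[-p:] @ coef
forecast = series[-1] + next_diff
```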
Decoding the auditory brain with canonical component analysis.
de Cheveigné, Alain; Wong, Daniel D E; Di Liberto, Giovanni M; Hjortkjær, Jens; Slaney, Malcolm; Lalor, Edmund
2018-05-15
The relation between a stimulus and the evoked brain response can shed light on perceptual processes within the brain. Signals derived from this relation can also be harnessed to control external devices for Brain Computer Interface (BCI) applications. While the classic event-related potential (ERP) is appropriate for isolated stimuli, more sophisticated "decoding" strategies are needed to address continuous stimuli such as speech, music or environmental sounds. Here we describe an approach based on Canonical Correlation Analysis (CCA) that finds the optimal transform to apply to both the stimulus and the response to reveal correlations between the two. Compared to prior methods based on forward or backward models for stimulus-response mapping, CCA finds significantly higher correlation scores, thus providing increased sensitivity to relatively small effects, and supports classifier schemes that yield higher classification scores. CCA strips the brain response of variance unrelated to the stimulus, and the stimulus representation of variance that does not affect the response, and thus improves observations of the relation between stimulus and response. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
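The core CCA computation described above, finding projections of the stimulus and the response that maximize their mutual correlation, can be sketched with the standard QR/SVD construction. The toy stimulus and response data are invented (a shared latent signal buried in noise channels); only the `cca` routine reflects the general method:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
latent = np.sin(np.linspace(0, 40, n))            # shared "stimulus-driven" signal
# Hypothetical stimulus features and multichannel response, both mixing the latent
stim = np.column_stack([latent, rng.standard_normal(n)])
resp = np.column_stack([0.5 * latent, rng.standard_normal(n),
                        rng.standard_normal(n)])
stim = stim + 0.1 * rng.standard_normal(stim.shape)
resp = resp + 0.1 * rng.standard_normal(resp.shape)

def cca(X, Y):
    """First canonical correlation and variate pair via QR + SVD."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    Qx, Rx = np.linalg.qr(X)
    Qy, Ry = np.linalg.qr(Y)
    U, s, Vt = np.linalg.svd(Qx.T @ Qy)           # singular values = canonical corrs
    a = np.linalg.solve(Rx, U[:, 0])              # stimulus weights
    b = np.linalg.solve(Ry, Vt[0])                # response weights
    return s[0], X @ a, Y @ b

rho, u_var, v_var = cca(stim, resp)
```

`rho` is the first canonical correlation; `u_var` and `v_var` are the maximally correlated stimulus and response variates, the quantities that the decoding and classification schemes in the abstract operate on.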
Using principal component analysis for selecting network behavioral anomaly metrics
Gregorio-de Souza, Ian; Berk, Vincent; Barsamian, Alex
2010-04-01
This work addresses new approaches to behavioral analysis of networks and hosts for the purposes of security monitoring and anomaly detection. Most commonly used approaches simply implement anomaly detectors for one, or a few, simple metrics, and those metrics can exhibit unacceptable false alarm rates. For instance, if the anomaly score of network communication is defined as the reciprocal of the likelihood that a given host uses a particular protocol (or destination), this definition may result in an unrealistically high alerting threshold to avoid being flooded by false positives. We demonstrate that selecting and adapting the metrics and thresholds on a host-by-host or protocol-by-protocol basis can be done by established multivariate analyses such as PCA. We show how to determine, for each network host, one or more metrics that capture the most information about baseline behavior and reliably reveal relevant deviations. We describe the methodology used to pick from a large selection of available metrics, and illustrate a method for comparing the resulting classifiers. Using our approach we are able to reduce the resources required to properly identify misbehaving hosts, protocols, or networks, by dedicating system resources to only those metrics that actually matter in detecting network deviations.
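A minimal sketch of the selection idea: standardize candidate per-host metrics, run PCA, and rank metrics by the magnitude of their loading on the leading component. The four metrics and their generative model below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
base = rng.standard_normal(500)                   # shared "activity level" per sample
# Hypothetical per-host metrics: packets/s, unique destinations, bytes/s, DNS q/s
metrics = np.column_stack([
    3.0 * base + 0.3 * rng.standard_normal(500),  # tracks activity strongly
    1.0 * base + 0.3 * rng.standard_normal(500),  # also informative
    0.2 * base + 1.0 * rng.standard_normal(500),  # mostly noise
    0.1 * base + 1.0 * rng.standard_normal(500),  # mostly noise
])

Z = (metrics - metrics.mean(0)) / metrics.std(0)  # standardize before PCA
eigvals, eigvecs = np.linalg.eigh(np.cov(Z.T))
pc1 = eigvecs[:, -1]                              # loadings of the largest component
ranked = np.argsort(-np.abs(pc1))                 # metrics ranked by PC1 loading
```

Metrics with the largest absolute PC1 loadings carry the most baseline-behavior information and are the ones worth dedicating detector resources to.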
Wavelet decomposition based principal component analysis for face recognition using MATLAB
Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish
2016-03-01
For the realization of face recognition systems, in the static as well as the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach to face recognition. Principal component analysis is chosen over the other algorithms for its relative simplicity, efficiency, and robustness. Face recognition means identifying a person from facial features, and it resembles factor analysis in the sense that both extract the principal components of an image. Principal component analysis suffers from some drawbacks, mainly poor discriminatory power and, in particular, the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification. The experimental results suggest that this face recognition method achieves a significant percentage improvement in recognition rate as well as better computational efficiency.
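The combination can be sketched as a one-level Haar approximation (the LL subband, computed here by 2x2 block averaging) followed by PCA on the reduced features. The "face" images below are random prototypes plus noise, not real data, and the one-level Haar step stands in for a full wavelet decomposition:

```python
import numpy as np

rng = np.random.default_rng(4)

def haar_approx(img):
    """One-level 2D Haar approximation: mean of each 2x2 block (LL subband)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Hypothetical dataset: 3 'identity' prototypes, 60 noisy observations of them
protos = [rng.standard_normal((32, 32)) for _ in range(3)]
faces = np.array([protos[i % 3] + 0.1 * rng.standard_normal((32, 32))
                  for i in range(60)])

feats = np.array([haar_approx(f).ravel() for f in faces])  # 60 x 256 features
fc = feats - feats.mean(0)
U, s, Vt = np.linalg.svd(fc, full_matrices=False)
eigenfaces = Vt[:2]                       # principal "eigenface" directions
scores = fc @ eigenfaces.T                # low-dimensional face representation
```

The wavelet step shrinks the eigenvector problem from 1024 to 256 dimensions, which is exactly the computational-load reduction the abstract describes; classification then proceeds in the `scores` space.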
Gourdol, L.; Hissler, C.; Pfister, L.
2012-04-01
The Luxembourg Sandstone aquifer is of major relevance for the national supply of drinking water in Luxembourg. The city of Luxembourg (20% of the country's population) gets almost two thirds of its drinking water from this aquifer. As a consequence, the study of the groundwater hydrochemistry, as well as of its spatial and temporal variations, is considered a top priority. Since 2005, a monitoring network has been implemented by the Water Department of Luxembourg City, with a view to a more sustainable management of this strategic water resource. The data collected to date form a large and complex dataset describing spatial and temporal variations of many hydrochemical parameters. Data treatment is a central issue in this kind of water monitoring program and its complex databases. Standard multivariate statistical techniques, such as principal components analysis and hierarchical cluster analysis, have been widely used as unbiased methods for extracting meaningful information from groundwater quality data and are now classically used in many hydrogeological studies, in particular to characterize temporal or spatial hydrochemical variations induced by natural and anthropogenic factors. But these classical multivariate methods deal with two-way matrices, usually parameters/sites or parameters/time, while the dataset resulting from a water-quality monitoring program should often be seen as a datacube parameters/sites/time. Three-way matrices, such as the one we propose here, are difficult to handle and to analyse with classical multivariate statistical tools and should thus be treated with approaches designed for three-way data structures. One possible approach is partial triadic analysis (PTA). PTA has previously been used with success in many ecological studies but never to date in the domain of hydrogeology. Applied to the dataset of the Luxembourg Sandstone aquifer, the PTA appears as a promising new statistical
Dong, Jianghu J; Wang, Liangliang; Gill, Jagbir; Cao, Jiguo
2017-01-01
This article is motivated by some longitudinal clinical data of kidney transplant recipients, where kidney function progression is recorded as the estimated glomerular filtration rates at multiple time points post kidney transplantation. We propose to use the functional principal component analysis method to explore the major source of variations of glomerular filtration rate curves. We find that the estimated functional principal component scores can be used to cluster glomerular filtration rate curves. Ordering functional principal component scores can detect abnormal glomerular filtration rate curves. Finally, functional principal component analysis can effectively estimate missing glomerular filtration rate values and predict future glomerular filtration rate values.
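On a regular time grid, the functional PCA steps described above reduce to ordinary PCA on the discretized curves: scores cluster the trajectories, and the rank-reduced reconstruction is the basis for imputing missing values. The sketch below uses invented eGFR-like trajectories with a stable and a declining group, not the clinical data:

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 3, 30)                     # years post-transplant (toy grid)
# Hypothetical eGFR trajectories: 20 declining patients, 20 stable patients
curves = np.array([60 + (-10 if i < 20 else 0) * t + 3 * rng.standard_normal(30)
                   for i in range(40)])

mean_curve = curves.mean(0)
C = curves - mean_curve
U, s, Vt = np.linalg.svd(C, full_matrices=False)
scores = C @ Vt[0]                            # first FPC score per patient
labels = scores > np.median(scores)           # score-based clustering of curves
recon = mean_curve + np.outer(scores, Vt[0])  # rank-1 smooth fit (imputation basis)
```

The first functional principal component captures the decline-vs-stable mode of variation, so thresholding its scores separates the two groups, and evaluating `recon` at an unobserved time point gives a simple missing-value estimate.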
International Nuclear Information System (INIS)
Choi, S. Y.; Han, S. H.
2004-01-01
Reliability data that reflect plant-specific characteristics are necessary for the PSA of Korean nuclear power plants. We have performed a study to develop a component reliability database and software for component reliability analysis. Based on this system, we collected component operation data and failure/repair data covering plant operation up to 1998 for YGN 3,4 and up to 2000 for UCN 3,4. Recently, we upgraded the database by collecting additional data up to 2002 for the Korean standard nuclear power plants and performed component reliability analysis and Bayesian analysis again. In this paper, we present a summary of the component reliability data for the probabilistic safety analysis of the Korean standard nuclear power plant and describe the plant-specific characteristics compared to generic data
Analysis Components of the Digital Consumer Behavior in Romania
Directory of Open Access Journals (Sweden)
Cristian Bogdan Onete
2016-08-01
This article investigates Romanian consumer behavior in the context of the evolution of online shopping. Given that online stores are a profitable business model in electronic commerce, and because the relationship between the Romanian digital consumer and the decision to purchase products or services on the Internet has not been sufficiently explored, this study aims to identify specific features of the new type of consumer and to examine the level of online shopping in Romania. A documentary study was therefore carried out on statistical data regarding the volume and number of online shopping transactions in Romania during 2010-2014, the types of products and services that Romanians search the Internet for, and the demographics of these consumers. In addition, to study online consumer behavior more closely and to interpret the detailed secondary data, an exploratory survey was performed as a structured questionnaire with five closed questions: the gender category of the respondent (male or female); the decision to purchase products/services in the virtual environment in the past year; the source of the goods/services purchased (Romanian or foreign sites); the factors that determined the consumers to buy products from foreign sites; and the categories of products purchased through online transactions from foreign merchants. The questionnaire was distributed electronically to users of the Facebook social network, and the collected data were processed directly in the official Facebook survey application. The results of this research, correlated with the official data, reveal the following characteristics of the digital consumer in Romania: an atypical European consumer, more interested in online purchases from abroad, influenced by the quality and price of the purchase. This paper assumed a careful analysis of the online acquisitions phenomenon and also
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2001-01-01
) and non-negative least squares (NNLS), and the partial unmixing methods orthogonal subspace projection (OSP), constrained energy minimization (CEM) and an eigenvalue formulation alternative are dealt with. The solution to the eigenvalue formulation alternative proves to be identical to the CEM solution....... The matrix inversion involved in CEM can be avoided by working on (a subset of) orthogonally transformed data such as signal maximum autocorrelation factors, MAFs, or signal minimum noise fractions, MNFs. This will also cause the partial unmixing result to be independent of the noise isolated in the MAF....../MNFs not included in the analysis. CEM and the eigenvalue formulation alternative enable us to perform partial unmixing when we know one desired end-member spectrum only and not the full set of end-member spectra. This is an advantage over full unmixing and OSP. The eigenvalue formulation of CEM inspires us...
Consideration on the partial moderation in criticality safety analysis of LWR fresh fuel storage
International Nuclear Information System (INIS)
Tanaka, S.; Tanimoto, R.; Suzuki, K.; Ishitobi, M.
1987-01-01
In criticality safety analyses of fuel fabrication facilities, the neutron effective multiplication factor (k_eff) of a storage vault has been calculated assuming 'partial moderation' in the whole space (hereafter referred to as unlimited partial moderation). Where the enrichment of the fuels to be stored is about 3.5% or less, the calculated k_eff is usually low enough to show subcriticality even under unlimited partial moderation. However, LWR fuel enrichment is scheduled to be raised for more economical, higher burnup, and the unlimited partial moderation assumption would then require introducing neutron absorbers to maintain subcriticality. This clearly causes economic disadvantages, and hence we reconsidered this assumption to avoid such a condition. Reconsideration of the unlimited partial moderation was carried out in the following steps. (1) The water quantity that must be assumed in the atmosphere to reach criticality was shown to be too large to be realistic. (2) A typical realistic water quantity in the atmosphere was estimated and applied as an alternative assumption. (3) A fresh fuel assembly storage was chosen as a model array, and calculations with the lattice code WIMS-D and the Monte Carlo code KENO-IV were performed to compare the new alternative assumption with the unlimited one. As a result of these calculations, the maximum k_eff of the array under the new assumption was markedly reduced to a value less than 0.95, whereas the maximum k_eff under the unlimited assumption was higher than 1.0. (author)
Research on criticality analysis method of CNC machine tools components under fault rate correlation
Gui-xiang, Shen; Xian-zhuo, Zhao; Zhang, Ying-zhi; Chen-yu, Han
2018-02-01
In order to determine the key components of CNC machine tools under fault rate correlation, a system component criticality analysis method is proposed. Based on fault mechanism analysis, the component fault relations are determined, and an adjacency matrix is introduced to describe them. The fault structure relations are then organized hierarchically using the interpretive structural model (ISM). Assuming that the propagation of faults obeys a Markov process, the fault association matrix is described and transformed, and the PageRank algorithm is used to determine the relative influence values; combined with the component fault rate under time correlation, a comprehensive fault rate can be obtained. Based on the fault mode frequency and fault influence, the criticality of the components under fault rate correlation is determined, and the key components are identified, providing a sound basis for formulating reliability assurance measures. Finally, taking machining centers as an example, the effectiveness of the method is verified.
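The PageRank step can be sketched directly in NumPy. The four-component fault-propagation graph below is invented for illustration, and whether to rank fault sources or fault sinks determines whether you run the iteration on the adjacency matrix or its transpose:

```python
import numpy as np

# Hypothetical fault-propagation adjacency: A[i, j] = 1 if a fault in
# component i can induce a fault in component j
A = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
], dtype=float)
n = len(A)

# Row-normalize; dangling rows (no outgoing faults) spread uniformly
deg = A.sum(axis=1, keepdims=True)
P = np.divide(A, deg, out=np.full_like(A, 1.0 / n), where=deg > 0).T

damp = 0.85                                  # standard PageRank damping factor
r = np.full(n, 1.0 / n)
for _ in range(100):
    r = (1 - damp) / n + damp * P @ r        # power iteration
r = r / r.sum()                              # relative influence values
ranking = np.argsort(-r)                     # components ordered by criticality
```

With this toy graph, the component that accumulates the most fault influence (node 3, the common sink) ranks first; the paper then combines such influence values with time-correlated fault rates to obtain the comprehensive fault rate.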
Experimental and simulation analysis of hydrogen production by partial oxidation of methanol
Energy Technology Data Exchange (ETDEWEB)
Sikander, U. [National Univ. of Science and Technology, Islamabad (Pakistan)
2014-10-15
Partial oxidation of methanol is the only self-sustaining process for onboard production of hydrogen. For this purpose a fixed-bed catalytic reactor was designed, based on a heterogeneous catalytic reaction. To develop an optimized process, a simulation was carried out using ASPEN HYSYS v7.1. The reaction kinetics were developed on the basis of a Langmuir-Hinshelwood model. A 45:55:5 CuO:ZnO:Al2O3 catalyst was used. The simulation results are studied in detail to understand the phenomenon of partial oxidation of methanol inside the reactor. An experimental rig was developed for hydrogen production through partial oxidation of methanol, and the results obtained from the process simulation and the experimental work are compared with each other. (author)
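For reference, the overall partial-oxidation reaction around which such a reactor is designed is commonly written as follows (standard stoichiometry; the enthalpy is the commonly quoted value for this reaction, not a figure taken from this paper):

```latex
\mathrm{CH_3OH} + \tfrac{1}{2}\,\mathrm{O_2}
  \;\longrightarrow\; \mathrm{CO_2} + 2\,\mathrm{H_2},
\qquad \Delta H^{\circ} \approx -192\ \mathrm{kJ\,mol^{-1}}
```

The mild exothermicity is what makes the process self-sustaining for onboard hydrogen production, in contrast to strongly endothermic steam reforming.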
Ebert, Marcelo R
2018-01-01
This book provides an overview of different topics related to the theory of partial differential equations. Selected exercises are included at the end of each chapter to prepare readers for the “research project for beginners” proposed at the end of the book. It is a valuable resource for advanced graduate and undergraduate students who are interested in specializing in this area. The book is organized in five parts: In Part 1 the authors review the basics and the mathematical prerequisites, presenting two of the most fundamental results in the theory of partial differential equations: the Cauchy-Kovalevskaja theorem and Holmgren's uniqueness theorem in its classical and abstract form. It also introduces the method of characteristics in detail and applies this method to the study of Burgers' equation. Part 2 focuses on qualitative properties of solutions to basic partial differential equations, explaining the usual properties of solutions to elliptic, parabolic and hyperbolic equations for the archetypes...
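The method of characteristics applied to Burgers' equation, mentioned for Part 1, can be summarized in a few standard lines (a textbook sketch, not quoted from the book):

```latex
% Inviscid Burgers' equation and its characteristics
\begin{align}
  u_t + u\,u_x &= 0, \qquad u(x,0) = f(x), \\
  \frac{dx}{dt} &= u
    \quad\Rightarrow\quad u \text{ is constant along each characteristic}, \\
  u(x,t) &= f\bigl(x - u(x,t)\,t\bigr) \quad \text{(implicit solution)}, \\
  t^{\ast} &= -\frac{1}{\min_x f'(x)}
    \quad \text{(first shock time, if } f'<0 \text{ somewhere)}.
\end{align}
```

Characteristics are straight lines carrying the initial value $f(x_0)$; where faster values overtake slower ones the characteristics cross and the classical solution breaks down at $t^{\ast}$, motivating the weak-solution theory treated later in such texts.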
International Nuclear Information System (INIS)
Tovey, S.N.; Hansen, J.D.; Paler, K.; Shah, T.P.; Borg, A.; Denegri, D.; Pons, Y.; Spiro, M.
1975-01-01
The reactions K⁻p → K⁻π⁺π⁻ and K⁻p → K̄⁰π⁻π⁰p at 14.3 GeV/c have been studied using 15992 and 3723 events, respectively. Partial wave analysis of the region above 1.0 … shows that the partial wave substates have very different branching ratios into ρ and K*π, the K*π component of the 1⁺ state being similar to the 1⁺ state of the 3π system produced in the reaction πp → (3π)p.
DEFF Research Database (Denmark)
Kjølby, Birgitte Fuglsang; Mikkelsen, Irene Klærke; Pedersen, Michael
2009-01-01
of an AIF voxel including the relaxation properties of blood and tissue. Artery orientations parallel and perpendicular to the main magnetic field were investigated and AIF voxels were modeled to either include or be situated close to a large artery. The impact of partial volume effects on quantitative...... perfusion metrics was investigated for the gradient echo pulse sequence at 1.5 T and 3.0 T. It is shown that the tissue contribution broadens and introduces fluctuations in the AIF. Furthermore, partial volume effects bias perfusion metrics in a nonlinear fashion, compromising quantitative perfusion...
Chen, Shuming; Wang, Dengfeng; Liu, Bo
This paper investigates optimization of the thickness of the sound package of a passenger automobile. The performance indexes selected to evaluate the process are the SPL of the exterior noise and the weight of the sound package; the corresponding design parameters are the thicknesses of the three layers: glass wool with aluminum foil (first layer), glass fiber (second layer), and PE foam (third layer). Because the process involves multiple performance characteristics, grey relational analysis, which uses the grey relational grade as the performance index, is employed to determine the optimal combination of layer thicknesses for the designed sound package. In addition, principal component analysis is used to evaluate the weighting values corresponding to the various performance characteristics, reflecting their relative importance properly and objectively. The results of the confirmation experiments show that grey relational analysis coupled with principal component analysis can successfully find the optimal combination of thicknesses for each layer of the sound package material. The presented method is therefore an effective tool to improve vehicle exterior noise and lower the weight of the sound package, and it should also be helpful for other applications in the automotive industry, such as First Automobile Works and Changan Automobile in China.
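The grey relational grade computation with PCA-derived weights can be sketched as follows. The four trials and the two smaller-the-better responses are invented numbers, not the paper's data; the normalization, the grey relational coefficient formula, and the squared-loading weighting are the standard forms:

```python
import numpy as np

# Hypothetical trials: rows = experiments, cols = [exterior SPL (dB), mass (kg)]
# Both responses are smaller-the-better.
Y = np.array([[72.1,  9.5],
              [70.4, 10.2],
              [71.3,  8.8],
              [69.8, 11.0]])

# Smaller-the-better normalization to [0, 1], then deviation sequences
N = (Y.max(0) - Y) / (Y.max(0) - Y.min(0))
delta = 1 - N
zeta = 0.5                                     # distinguishing coefficient
xi = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# PCA on the coefficient matrix gives objective response weights
corr = np.corrcoef(xi.T)
eigvals, eigvecs = np.linalg.eigh(corr)
w = eigvecs[:, -1] ** 2                        # squared loadings of first PC
w = w / w.sum()

grade = xi @ w                                 # grey relational grade per trial
best = int(grade.argmax())                     # trial closest to the ideal
```

The trial with the highest grade is the recommended thickness combination; the squared PC1 loadings replace the subjective equal weighting often used in plain grey relational analysis.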
Directory of Open Access Journals (Sweden)
S. Prabhu
2014-06-01
A carbon nanotube (CNT) mixed grinding wheel has been used in the electrolytic in-process dressing (ELID) grinding process to analyze the surface characteristics of AISI D2 tool steel. The CNT grinding wheel has excellent thermal conductivity and good mechanical properties, which are used to improve the surface finish of the workpiece. Multiobjective optimization by grey relational analysis coupled with principal component analysis has been used to optimize the process parameters of the ELID grinding process. Based on the Taguchi design of experiments, an L9 orthogonal array was chosen for the experiments. The confirmation experiment verifies that the proposed grey-based Taguchi method is able to find the optimal process parameters for the multiple quality characteristics of surface roughness and metal removal rate. Analysis of variance (ANOVA) has been used to verify and validate the model. An empirical model for the prediction of the output parameters has been developed using regression analysis, and the results with and without the CNT grinding wheel in the ELID grinding process were compared.
Bridge Diagnosis by Using Nonlinear Independent Component Analysis and Displacement Analysis
Zheng, Juanqing; Yeh, Yichun; Ogai, Harutoshi
A daily diagnosis system for bridge monitoring and maintenance is developed based on wireless sensors, signal processing, structural analysis, and displacement analysis. The vibration acceleration data of a bridge are first collected through the wireless sensor network. Nonlinear independent component analysis (ICA) and spectral analysis are used to extract the vibration frequencies of the bridge. Then, through a band-pass filter and Simpson's rule, the vibration displacement is calculated and the vibration model is obtained to diagnose the bridge. Since linear ICA algorithms work efficiently only in linear mixing environments, a nonlinear ICA model, though more complicated, is more practical for bridge diagnosis systems. In this paper, we first use the post-nonlinear method to transform the signal data, then perform linear separation by FastICA, and finally calculate the vibration displacement of the bridge. The processed data can be used to understand phenomena like corrosion and cracking, and to evaluate the health condition of the bridge. We apply this system to the Nakajima Bridge in Yahata, Kitakyushu, Japan.
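The displacement-recovery step can be sketched with a frequency-domain variant: band-pass around the dominant mode, then integrate the acceleration twice by dividing by -ω². This stands in for the paper's band-pass filter plus Simpson's-rule time-domain integration; the sampling rate, mode frequency, and noise level are invented:

```python
import numpy as np

fs = 100.0                            # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
f0 = 2.0                              # dominant bridge mode in Hz (assumed)
true_disp = 0.003 * np.sin(2 * np.pi * f0 * t)       # 3 mm sway
acc = -(2 * np.pi * f0) ** 2 * true_disp             # analytic 2nd derivative
acc = acc + 0.01 * np.random.default_rng(6).standard_normal(t.size)

# Brick-wall band-pass around the mode in the frequency domain
F = np.fft.rfft(acc)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
F[(freqs < 1.0) | (freqs > 4.0)] = 0.0

# Double integration in the frequency domain: displacement = -acc / omega^2
omega = 2 * np.pi * freqs
D = np.where(omega > 0, -F / np.maximum(omega, 1e-12) ** 2, 0.0)
disp = np.fft.irfft(D, n=t.size)
```

Dividing by ω² strongly attenuates high-frequency noise, which is why the band-pass step before integration matters: without it, low-frequency drift would dominate the recovered displacement.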
Directory of Open Access Journals (Sweden)
S. Mahmoudishadi
2017-09-01
Image processing techniques in the transform domain are employed as analysis tools for enhancing the detection of mineral deposits. Decomposing the image into important components increases the probability of mineral extraction. In this study, the performance of Principal Component Analysis (PCA) and Independent Component Analysis (ICA) has been evaluated for the visible and near-infrared (VNIR) and shortwave infrared (SWIR) subsystems of ASTER data. Ardestan is located in part of the Central Iranian Volcanic Belt, which hosts many well-known porphyry copper deposits. This research investigated the propylitic and argillic alteration zones and the outer mineralogy zone in part of the Ardestan region. The two approaches were applied to discriminate alteration zones from igneous bedrock using the major absorptions of indicator minerals from the alteration and mineralogy zones in the spectral range of the ASTER bands. Specialized PC components (PC2, PC3 and PC6) were used to identify pyrite and the argillic and propylitic zones and to distinguish them from igneous bedrock in an RGB color composite image. According to the eigenvalues, components 2, 3 and 6 account for 4.26%, 0.9% and 0.09% of the total variance of the data for the Ardestan scene, respectively. For discriminating the alteration and mineralogy zones of the porphyry copper deposit from the bedrock, the corresponding ICA independent components (IC2, IC3 and IC6) separate these zones more accurately than the noisier PCA bands. The results of the ICA method also conform to the locations of the lithological units of the Ardestan region.
Mahmoudishadi, S.; Malian, A.; Hosseinali, F.
2017-09-01
Image processing techniques in the transform domain are employed as analysis tools for enhancing the detection of mineral deposits. Decomposing the image into important components increases the probability of mineral extraction. In this study, the performance of Principal Component Analysis (PCA) and Independent Component Analysis (ICA) has been evaluated for the visible and near-infrared (VNIR) and shortwave infrared (SWIR) subsystems of ASTER data. Ardestan is located in part of the Central Iranian Volcanic Belt, which hosts many well-known porphyry copper deposits. This research investigated the propylitic and argillic alteration zones and the outer mineralogy zone in part of the Ardestan region. The two approaches were applied to discriminate alteration zones from igneous bedrock using the major absorptions of indicator minerals from the alteration and mineralogy zones in the spectral range of the ASTER bands. Specialized PC components (PC2, PC3 and PC6) were used to identify pyrite and the argillic and propylitic zones and to distinguish them from igneous bedrock in an RGB color composite image. According to the eigenvalues, components 2, 3 and 6 account for 4.26%, 0.9% and 0.09% of the total variance of the data for the Ardestan scene, respectively. For discriminating the alteration and mineralogy zones of the porphyry copper deposit from the bedrock, the corresponding ICA independent components (IC2, IC3 and IC6) separate these zones more accurately than the noisier PCA bands. The results of the ICA method also conform to the locations of the lithological units of the Ardestan region.
Selective principal component regression analysis (SPCR) uses a subset of the original image bands for principal component transformation and regression. For optimal band selection before the transformation, this paper used genetic algorithms (GA). In this case, the GA process used the regression co...
Polder, G.; Heijden, van der G.W.A.M.
2003-01-01
Independent Component Analysis (ICA) is one of the most widely used methods for blind source separation. In this paper we use this technique to estimate the important compounds which play a role in the ripening of tomatoes. Spectral images of tomatoes were analyzed. Two main independent components
A Note on McDonald's Generalization of Principal Components Analysis
Shine, Lester C., II
1972-01-01
It is shown that McDonald's generalization of Classical Principal Components Analysis to groups of variables maximally channels the total variance of the original variables through the groups of variables acting as groups. An equation is obtained for determining the vectors of correlations of the L2 components with the original variables.…
Mellard, Daryl F.; Anthony, Jason L.; Woods, Kari L.
2012-01-01
This study extends the literature on the component skills involved in oral reading fluency. Dominance analysis was applied to assess the relative importance of seven reading-related component skills in the prediction of the oral reading fluency of 272 adult literacy learners. The best predictors of oral reading fluency when text difficulty was…
Directory of Open Access Journals (Sweden)
Anna Maria Stellacci
2012-07-01
Hyperspectral (HS) data represent an extremely powerful means for rapidly detecting crop stress and thus aiding the rational management of natural resources in agriculture. However, the large volume of data poses a challenge for processing and for extracting crucial information. Multivariate statistical techniques can play a key role in the analysis of HS data, as they allow both the elimination of redundant information and the identification of synthetic indices that maximize differences among levels of stress. In this paper we propose an integrated approach, based on the combined use of Principal Component Analysis (PCA) and Canonical Discriminant Analysis (CDA), to investigate the HS plant response and discriminate plant status. The approach was preliminarily evaluated on a data set collected on durum wheat plants grown under different nitrogen (N) stress levels. Hyperspectral measurements were performed at anthesis with a high-resolution field spectroradiometer, ASD FieldSpec HandHeld, covering the 325-1075 nm region. Reflectance data were first restricted to the interval 510-1000 nm and then divided into five bands of the electromagnetic spectrum [green: 510-580 nm; yellow: 581-630 nm; red: 631-690 nm; red-edge: 705-770 nm; near-infrared (NIR): 771-1000 nm]. PCA was applied to each spectral interval. CDA was performed on the extracted components to identify the factors maximizing the differences among plants fertilised with increasing N rates. Within the green, yellow and red intervals only the first principal component (PC) had an eigenvalue greater than 1 and explained more than 95% of the total variance; within the red-edge and NIR ranges, the first two PCs had eigenvalues higher than 1. Two canonical variables explained cumulatively more than 81% of the total variance, and the first was able to discriminate wheat plants fertilised differently, as confirmed also by the significant correlation with aboveground biomass and grain yield parameters. The combined
Rajagopal, K. R.
1992-01-01
The technical effort and computer code development are summarized. Several formulations for Probabilistic Finite Element Analysis (PFEA) are described, with emphasis on the selected formulation. The strategies being implemented in the first-version computer code to perform linear, elastic PFEA are described. The results of a series of select Space Shuttle Main Engine (SSME) component surveys are presented. These results identify the critical components and provide the information necessary for probabilistic structural analysis. Volume 2 is a summary of critical SSME components.
International Nuclear Information System (INIS)
Hicks, D.L.; Walsh, R.T.
1976-06-01
Discrete methods for the solution of the partial differential equations arising in hydrocodes and wavecodes are presented in a tutorial fashion. By discrete methods is meant, for example, the methods of finite differences, finite elements, discretized characteristics, etc. The concepts of stability, consistency, convergence, order of accuracy, true accuracy, etc., and their relevance to the hydrocodes and wavecodes are discussed
Gomez, Rapson
2012-01-01
Objective: Generalized partial credit model, which is based on item response theory (IRT), was used to test differential item functioning (DIF) for the "Diagnostic and Statistical Manual of Mental Disorders" (4th ed.), inattention (IA), and hyperactivity/impulsivity (HI) symptoms across boys and girls. Method: To accomplish this, parents completed…
Directory of Open Access Journals (Sweden)
Korobeynikov S.M.
2017-08-01
In this paper, we consider problems related to measuring and analyzing the characteristics of partial discharges, which are the main instrument for diagnosing oil-filled high-voltage electrical equipment. Experiments recording partial discharges in transformer oil were carried out in a point-plane electrode system under alternating current. The instantaneous voltage and the apparent charge were measured as functions of the root-mean-square voltage and the phase angle of the partial discharges. This paper aims to carry out a statistical analysis of the experimental results, in particular the construction of a parametric probabilistic model of the dependence of the partial-discharge inception voltage distribution on the root-mean-square voltage. A class of the observed discharges differs from the usual discharges that occur in liquid dielectrics in a sharply inhomogeneous electrode system. It has been suggested that these discharges of a different type are discharges in gas bubbles that appear when partial discharges occur in the liquid. This assumption is supported by the fact that the number of such discharges increases with increasing root-mean-square voltage, which corresponds to the nature of the occurrence of such discharges and is the main novelty of this paper. After rejecting the observations corresponding to discharges in gas bubbles, a parametric probabilistic model was constructed. The model makes it possible to determine the probability of partial-discharge occurrence in the liquid at a given instantaneous voltage, depending on the root-mean-square voltage.
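A hedged sketch of one possible parametric probabilistic model: a logistic form P(PD | v, U) in the instantaneous voltage v and root-mean-square voltage U, fitted by gradient ascent to synthetic inception records. The functional form, data, and every parameter below are assumptions for illustration, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic records: RMS level and candidate instantaneous voltage (kV, assumed)
u_rms = rng.uniform(10, 30, 400)
v_inst = rng.uniform(0.0, 1.5, 400) * u_rms
# Toy ground truth: PD more likely at high v, with a threshold scaling with U
p_true = 1 / (1 + np.exp(-(v_inst - 0.9 * u_rms) / 2.0))
y = (rng.random(400) < p_true).astype(float)   # observed PD / no-PD outcomes

# Fit P(PD | v, U) = sigmoid(w0*v + w1*U + b) by gradient ascent on log-likelihood
X = np.column_stack([v_inst, u_rms])
w = np.zeros(2)
b = 0.0
lr = 0.005
for _ in range(5000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    g = y - p                                  # gradient of the log-likelihood
    w += lr * (X.T @ g) / len(y)
    b += lr * g.mean()
```

The fitted signs are the physically meaningful part: probability rises with the instantaneous voltage (w[0] > 0) while a higher RMS level shifts the inception threshold upward (w[1] < 0), mirroring the kind of dependence the paper's model captures.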
International Nuclear Information System (INIS)
Basta, C.; Olive, W.J.; Antunes, J.S.
1990-01-01
An analysis of the cost of each component of small hydroelectric power plants, taking into account the real costs of these projects, is presented. A global equation is also given that allows a preliminary cost estimate for each construction. (author)
DEFF Research Database (Denmark)
Malmquist, Linus M.V.; Olsen, Rasmus R.; Hansen, Asger B.
2007-01-01
…weathering state and to distinguish between various weathering processes is investigated and discussed. The method is based on comprehensive and objective chromatographic data processing followed by principal component analysis (PCA) of concatenated sections of gas chromatography–mass spectrometry…
Northeast Puerto Rico and Culebra Island Principle Component Analysis - NOAA TIFF Image
National Oceanic and Atmospheric Administration, Department of Commerce — This GeoTiff is a representation of seafloor topography in Northeast Puerto Rico derived from a bathymetry model with a principal component analysis (PCA). The area...
International Nuclear Information System (INIS)
Jesse, Stephen; Kalinin, Sergei V
2009-01-01
An approach for the analysis of multi-dimensional, spectroscopic-imaging data based on principal component analysis (PCA) is explored. PCA selects and ranks relevant response components based on variance within the data. It is shown that for examples with small relative variations between spectra, the first few PCA components closely coincide with results obtained using model fitting, and this is achieved at rates approximately four orders of magnitude faster. For cases with strong response variations, PCA allows an effective approach to rapidly process, de-noise, and compress data. The prospects for PCA combined with correlation function analysis of component maps as a universal tool for data analysis and representation in microscopy are discussed.
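The variance-based ranking of components described above can be sketched with a plain SVD-based PCA. The synthetic "spectra" below (a common spectral shape with small per-pixel amplitude variations plus noise) are illustrative stand-ins, not data from the paper:

```python
import numpy as np

# Minimal PCA sketch for spectroscopic-imaging data: each row is one
# spectrum acquired at one pixel. Sizes and signals are illustrative.
rng = np.random.default_rng(0)
n_pixels, n_energy = 200, 64
base = np.sin(np.linspace(0, np.pi, n_energy))          # common response
weights = rng.normal(1.0, 0.05, size=(n_pixels, 1))     # small variations
data = weights * base + rng.normal(0, 0.01, size=(n_pixels, n_energy))

# Center the data, then rank components by variance via SVD.
centered = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)   # fraction of variance per component

scores = U * s                    # per-pixel component maps
components = Vt                   # spectral shapes, ranked by variance
```

For small relative variations between spectra, as in the abstract's first case, nearly all structured variance lands in the leading component.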
International Nuclear Information System (INIS)
Nigran, K.S.; Barber, D.C.
1985-01-01
A method is proposed for the automatic analysis of dynamic radionuclide studies using the mathematical technique of principal-components factor analysis, as a possible alternative to the conventional manual regions-of-interest method in wide use. The method emphasises the importance of introducing a priori information into the analysis about the physiology of at least one of the functional structures in a study; this information is added by using suitable mathematical models to describe the underlying physiological processes. A single physiological factor is extracted representing the particular dynamic structure of interest. Two spaces, 'study space, S' and 'theory space, T', are defined in forming the concept of the intersection of spaces, and a one-dimensional intersection space is computed. An example from a dynamic 99mTc-DTPA kidney study is used to demonstrate the principle inherent in the proposed method. The method requires no correction for blood background activity, which is necessary when processing by the manual method, and careful isolation of the kidney by means of a region of interest is not required. The method is therefore less prone to operator influence and can be automated. (author)
Directory of Open Access Journals (Sweden)
Ida Vajčnerová
2016-01-01
Full Text Available The objective of the paper is to explore the possibilities of evaluating the quality of a tourist destination by means of principal components analysis (PCA) and cluster analysis. Both types of analysis are compared on the basis of the results they provide, the aim being to identify the advantages and limits of each method and to provide methodological suggestions for their further use in tourism research. The analysis is based on primary data from a customer satisfaction survey covering the key quality factors of a destination. The output of the two statistical methods is the creation of groups or clusters of quality factors that are similar in terms of respondents' evaluations, in order to facilitate the evaluation of the quality of tourist destinations. The results show that both tested methods can be used. The paper was elaborated within the frame of a wider research project aimed at developing a methodology for the quality evaluation of tourist destinations, especially in the context of customer satisfaction and loyalty.
Elsawy, Amr S; Eldawlatly, Seif; Taher, Mohamed; Aly, Gamal M
2014-01-01
The current trend to use Brain-Computer Interfaces (BCIs) with mobile devices mandates the development of efficient EEG data processing methods. In this paper, we demonstrate the performance of a Principal Component Analysis (PCA) ensemble classifier for P300-based spellers. We recorded EEG data from multiple subjects using the Emotiv neuroheadset in the context of a classical oddball P300 speller paradigm. We compare the performance of the proposed ensemble classifier to the performance of traditional feature extraction and classifier methods. Our results demonstrate the capability of the PCA ensemble classifier to classify P300 data recorded using the Emotiv neuroheadset with an average accuracy of 86.29% on cross-validation data. In addition, offline testing of the recorded data reveals an average classification accuracy of 73.3% that is significantly higher than that achieved using traditional methods. Finally, we demonstrate the effect of the parameters of the P300 speller paradigm on the performance of the method.
Tripathy, Manoj
2012-01-01
This paper describes a new approach to power transformer differential protection based on the wave-shape recognition technique. An algorithm based on neural network principal component analysis (NNPCA) with back-propagation learning is proposed for the digital differential protection of power transformers. The principal component analysis is used to preprocess the data from the power system in order to eliminate redundant information and enhance the hidden patterns of the differential current to disc...
Sensitivity analysis on the component cooling system of the Angra 1 NPP
International Nuclear Information System (INIS)
Castro Silva, Luiz Euripedes Massiere de
1995-01-01
The component cooling system has been studied within the scope of the Probabilistic Safety Analysis of the Angra 1 NPP in order to ensure that the proposed model matches the functioning of the system and its availability aspects as closely as possible. To this end, a sensitivity analysis was performed on the equivalence between the operating modes of the component cooling system, and its results show the fitness of the model. (author). 4 refs, 3 figs, 3 tabs
Geroukis, Asterios; Brorson, Erik
2014-01-01
In this study, we compare the two statistical techniques logistic regression and discriminant analysis to see how well they classify companies into clusters – formed from the solvency ratio – using principal components as independent variables. The principal components are constructed from different financial ratios. We use cluster analysis to find groups with low, medium and high solvency ratios among 1200 different companies on the NASDAQ stock market and use this as an a priori definition of ...
Root cause analysis in support of reliability enhancement of engineering components
International Nuclear Information System (INIS)
Kumar, Sachin; Mishra, Vivek; Joshi, N.S.; Varde, P.V.
2014-01-01
Reliability-based methods have been widely used for the safety assessment of plant systems, structures and components. These methods provide a quantitative estimate of system reliability but give no insight into the failure mechanism. Understanding the failure mechanism is essential to avoid the recurrence of events and to enhance system reliability. Root cause analysis provides a tool for gaining detailed insight into the causes of component failure, with particular attention to identifying faults in component design, operation, surveillance, maintenance, training, procedures and policies that must be improved to prevent the repetition of incidents. Root cause analysis also helps in developing Probabilistic Safety Analysis models. A probabilistic precursor study complements the root cause analysis approach to event analysis by focusing on how an event might have developed adversely. This paper discusses root cause analysis methodologies and their application in specific case studies for the enhancement of system reliability. (author)
Time-domain ultra-wideband radar, sensor and components theory, analysis and design
Nguyen, Cam
2014-01-01
This book presents the theory, analysis, and design of ultra-wideband (UWB) radar and sensor systems (in short, UWB systems) and their components. UWB systems find numerous applications in the military, security, civilian, commercial and medicine fields. This book addresses five main topics of UWB systems: System Analysis, Transmitter Design, Receiver Design, Antenna Design and System Integration and Test. The developments of a practical UWB system and its components using microwave integrated circuits, as well as various measurements, are included in detail to demonstrate the theory, analysis and design technique. Essentially, this book will enable the reader to design their own UWB systems and components. In the System Analysis chapter, the UWB principle of operation as well as the power budget analysis and range resolution analysis are presented. In the UWB Transmitter Design chapter, the design, fabrication and measurement of impulse and monocycle pulse generators are covered. The UWB Receiver Design cha...
Development of computational methods of design by analysis for pressure vessel components
International Nuclear Information System (INIS)
Bao Shiyi; Zhou Yu; He Shuyan; Wu Honglin
2005-01-01
Stress classification is not only one of the key steps when a pressure vessel component is designed by analysis, but also a difficulty that has always puzzled engineers and designers. At present, several computational design-by-analysis methods for calculating and categorizing the stress field of pressure vessel components have been developed and applied, such as Stress Equivalent Linearization, the Two-Step Approach, the Primary Structure method, the Elastic Compensation method and the GLOSS R-Node method. Moreover, the ASME code also gives an inelastic design-by-analysis method for limiting gross plastic deformation only. When pressure vessel components are designed by analysis, there are sometimes large differences between the results obtained with the different calculation and analysis methods mentioned above; this is the main reason limiting the wide application of the design-by-analysis approach. Recently, a new approach, presented in the proposal for a European Standard, CEN's unfired pressure vessel standard EN 13445-3, tries to avoid the problems of stress classification by directly analyzing the various failure mechanisms of the pressure vessel structure on the basis of elastic-plastic theory. In this paper, some of the stress classification methods mentioned above are described briefly, and the computational methods cited in the European pressure vessel standard, such as the Deviatoric Map and nonlinear analysis methods (plastic analysis and limit analysis), are outlined. Furthermore, the characteristics of the computational design-by-analysis methods are summarized to aid selection of the proper computational method when designing a pressure vessel component by analysis. (authors)
Directory of Open Access Journals (Sweden)
Glogovac Svetlana
2012-01-01
Full Text Available This study investigates the variability of tomato genotypes based on morphological and biochemical fruit traits. The experimental material is part of the tomato genetic collection of the Institute of Field and Vegetable Crops in Novi Sad, Serbia. Genotypes were analyzed for fruit mass, locule number, index of fruit shape, fruit colour, dry matter content, total sugars, total acidity, lycopene and vitamin C. Minimum, maximum and average values and the main indicators of variability (CV and σ) were calculated. Principal component analysis was performed to determine the structure of the sources of variability. Four principal components, which together account for 93.75% of the total variability, were selected for analysis. The first principal component is defined by vitamin C, locule number and index of fruit shape; the second is determined by dry matter content and total acidity; the third by lycopene, fruit mass and fruit colour. Total sugars contributed most to the fourth component.
The role of damage analysis in the assessment of service-exposed components
International Nuclear Information System (INIS)
Bendick, W.; Muesch, H.; Weber, H.
1987-01-01
Components in power stations are subjected to service conditions under which creep processes take place, limiting the component's lifetime through material exhaustion. To ensure safe and economic plant operation it is necessary to obtain information about the degree of exhaustion of single components as well as of the whole plant. A comprehensive lifetime assessment requires complete knowledge of the service parameters, the component's deformation behavior, and the change in material properties caused by long-term exposure to high service temperatures. A basis for evaluation is given by: 1) determination of material exhaustion by calculation, 2) investigation of the material properties, and 3) damage analysis. The purpose of this report is to show the role which damage analysis can play in the assessment of service-exposed components. As an example, the test results of a damaged pipe bend are discussed. (orig./MM)
Principal Component Analysis of Working Memory Variables during Child and Adolescent Development.
Barriga-Paulino, Catarina I; Rodríguez-Martínez, Elena I; Rojas-Benjumea, María Ángeles; Gómez, Carlos M
2016-10-03
Correlation and Principal Component Analysis (PCA) of behavioral measures from two experimental tasks (Delayed Match-to-Sample and Oddball), and standard scores from a neuropsychological test battery (Working Memory Test Battery for Children) was performed on data from participants between 6-18 years old. The correlation analysis (p 1), the scores of the first extracted component were significantly correlated (p < .05) to most behavioral measures, suggesting some commonalities of the processes of age-related changes in the measured variables. The results suggest that this first component would be related to age but also to individual differences during the cognitive maturation process across childhood and adolescence stages. The fourth component would represent the speed-accuracy trade-off phenomenon as it presents loading components with different signs for reaction times and errors.
Reliability analysis and component functional allocations for the ESF multi-loop controller design
International Nuclear Information System (INIS)
Hur, Seop; Kim, D.H.; Choi, J.K.; Park, J.C.; Seong, S.H.; Lee, D.Y.
2006-01-01
This paper deals with reliability analysis and component functional allocation to ensure enhanced system reliability and availability. In the Engineered Safety Features, functionally dependent components are controlled by a multi-loop controller. The system reliability of the Engineered Safety Features-Component Control System, and especially of the multi-loop controller, which differs from conventional controllers, is an important factor for the Probabilistic Safety Assessment in the nuclear field. To evaluate the multi-loop controller's failure rate as a k-out-of-m redundant system, the binomial process is used. In addition, the component functional allocation is performed so as to tolerate a single multi-loop controller failure without the loss of vital operation, within the constraints of the piping and component configuration, and to ensure that mechanically redundant components remain functional. (author)
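The binomial evaluation of a k-out-of-m redundant system mentioned above can be sketched as follows; the controller count and per-component failure probability are invented illustrative values, not figures from the paper:

```python
from math import comb

def k_out_of_m_unreliability(m: int, k: int, p_fail: float) -> float:
    """Probability that a k-out-of-m system fails, i.e. that fewer than
    k of the m identical components survive, assuming independent
    failures with per-component failure probability p_fail."""
    p_ok = 1.0 - p_fail
    # Sum P(exactly j survivors) over the failed states j = 0 .. k-1.
    return sum(comb(m, j) * p_ok**j * p_fail**(m - j) for j in range(k))

# Illustrative numbers: a 2-out-of-3 controller group, each controller
# with a 1e-3 failure probability over the mission time.
print(k_out_of_m_unreliability(3, 2, 1e-3))
```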
Reliability Analysis of 6-Component Star Markov Repairable System with Spatial Dependence
Directory of Open Access Journals (Sweden)
Liying Wang
2017-01-01
Full Text Available Star repairable systems with spatial dependence consist of a center component and several peripheral components. The peripheral components are arranged around the center component, and the performance of each component depends on its spatial “neighbors.” A vector Markov process is adopted to describe the performance of the system. The state space and transition rate matrix corresponding to the 6-component star Markov repairable system with spatial dependence are presented via a probability analysis method. Several reliability indices, such as the availability and the probabilities of visiting the safety, degradation, alert, and failed state sets, are obtained by the Laplace transform method, and a numerical example is provided to illustrate the results.
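As a minimal illustration of deriving an availability index from a transition rate matrix, the sketch below solves the steady-state balance equations for a 2-state repairable component; the rates are invented, and the paper's 6-component star system and Laplace-transform treatment are considerably more elaborate:

```python
import numpy as np

# Steady-state availability of a Markov repairable system from its
# transition rate matrix. A 2-state sketch (up = 0, down = 1) with
# illustrative failure rate lam and repair rate mu.
lam, mu = 0.01, 0.5
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])

# Solve pi @ Q = 0 with sum(pi) = 1 by replacing one balance
# equation with the normalization condition.
A = np.vstack([Q.T[:-1], np.ones(2)])
b = np.array([0.0, 1.0])
pi = np.linalg.solve(A, b)
availability = pi[0]   # long-run fraction of time in the up state
```

For this 2-state case the result matches the textbook closed form mu / (lam + mu).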
Partial-wave analysis of π-π0π0 events at 18 GeV/c
International Nuclear Information System (INIS)
Brown, D.S.
1998-01-01
A partial-wave analysis has been performed on 170K π⁻π⁰π⁰ events produced in the reaction π⁻p → pπ⁻π⁰π⁰, and the results of the mass-independent fits are presented. The objective was to confirm the existence of the π(1800) and the exotic J^PC = 1⁻⁺ object reported by VES. copyright 1998 American Institute of Physics
Directory of Open Access Journals (Sweden)
Dumitrescu Sorin
2016-08-01
Full Text Available The paper shows the importance of trending partial discharge activity in assessing insulation condition. The principle of the measurement method is presented, together with the quantities that characterize partial discharges and the criteria used for assessing the insulation condition of hydrogenerators. Results of measurements made on several hydrogenerators are presented, such as the variation over a period of about 10 years of the two main quantities that characterize partial discharges: the maximum magnitude, Qm, and the normalized quantity, NQN. Further, a classification of the insulation condition into 3 main and 2 intermediate categories is given, together with the definition of these categories. The criteria used for the assessment of the insulation condition are presented in the form of a table: quantitative criteria based on the ±NQN and ±Qm values, and qualitative criteria for the analysis of the 2D and 3D diagrams. For each set of measurements, the annual evolution of the insulation condition is analyzed, a verdict is given and recommendations are made concerning maintenance and the decisions to be taken. The paper ends with several considerations on the on-line partial discharge method and, especially, on the conditions for valid trending activity over time.
Failure trend analysis for safety related components of Korean standard NPPs
International Nuclear Information System (INIS)
Choi, Sun Yeong; Han, Sang Hoon
2005-01-01
Component reliability data that reflect plant-specific characteristics are a necessity for the PSA of Korean nuclear power plants. We have carried out a project to develop a component reliability database (KIND, Korea Integrated Nuclear Reliability Database) and software for database management and component reliability analysis. Based on this system, we have collected component operation data and failure/repair data from the start of plant operation to 2002 for the YGN 3,4 and UCN 3,4 plants. Recently, we provided component failure rate data from KIND for the UCN 3,4 standard PSA model. We evaluated the components with the highest failure rates using the component reliability data from the start of plant operation to 1998 for YGN 3,4 and to 2000 for UCN 3,4, and identified their most frequent failure modes. In this study, we analyze component failure trends and compare the sites against the generic data, using the component reliability data extended to 2002 for UCN 3,4 and YGN 3,4. We focus on the major safety-related rotating components such as pumps, EDGs, etc.
SNIa detection in the SNLS photometric analysis using Morphological Component Analysis
Energy Technology Data Exchange (ETDEWEB)
Möller, A.; Ruhlmann-Kleider, V.; Neveu, J.; Palanque-Delabrouille, N. [Irfu, SPP, CEA Saclay, F-91191 Gif sur Yvette cedex (France); Lanusse, F.; Starck, J.-L., E-mail: anais.moller@cea.fr, E-mail: vanina.ruhlmann-kleider@cea.fr, E-mail: francois.lanusse@cea.fr, E-mail: jeremy.neveu@cea.fr, E-mail: nathalie.palanque-delabrouille@cea.fr, E-mail: jstarck@cea.fr [Laboratoire AIM, UMR CEA-CNRS-Paris 7, Irfu, SAp, CEA Saclay, F-91191 Gif sur Yvette cedex (France)
2015-04-01
Detection of supernovae (SNe) and, more generally, of transient events in large surveys can provide numerous false detections. In the case of a deferred processing of survey images, this implies reconstructing complete light curves for all detections, requiring sizable processing time and resources. Optimizing the detection of transient events is thus an important issue for both present and future surveys. We present here the optimization done in the SuperNova Legacy Survey (SNLS) for the 5-year data deferred photometric analysis. In this analysis, detections are derived from stacks of subtracted images with one stack per lunation. The 3-year analysis provided 300,000 detections dominated by signals of bright objects that were not perfectly subtracted. Allowing these artifacts to be detected leads not only to a waste of resources but also to possible signal coordinate contamination. We developed a subtracted image stack treatment to reduce the number of non SN-like events using morphological component analysis. This technique exploits the morphological diversity of objects to be detected to extract the signal of interest. At the level of our subtraction stacks, SN-like events are rather circular objects while most spurious detections exhibit different shapes. A two-step procedure was necessary to have a proper evaluation of the noise in the subtracted image stacks and thus a reliable signal extraction. We also set up a new detection strategy to obtain coordinates with good resolution for the extracted signal. SNIa Monte-Carlo (MC) generated images were used to study detection efficiency and coordinate resolution. When tested on SNLS 3-year data this procedure decreases the number of detections by a factor of two, while losing only 10% of SN-like events, almost all faint ones. MC results show that SNIa detection efficiency is equivalent to that of the original method for bright events, while the coordinate resolution is improved.
International Nuclear Information System (INIS)
Lee, Jun Shin; Lee, Wook Ryun; Oh, Ki Yong; Kim, Bong Ki
2010-01-01
Understanding water hammer is very important to the prevention of excessive pressure build-up in pipelines. Many researchers have studied this phenomenon, drawing effective solutions through the time- and frequency-domain approaches. For the purposes of enhancing the advantages of the frequency-domain approach and, thereby, rendering investigations of the dynamic characteristics of pipelines more effective, we propose partial fraction expansion of the transfer function between the unsteady flow source and a given section. We simulate the proposed approach using a vibration element inserted into a simple pipeline, deducing much useful physical information pertaining to pipeline design. We conclude that locating the resonance of the vibration element between the first and second resonances of the pipeline can mitigate the excessive pressure build-up attendant on the occurrence of water hammer. Our method of partial fraction expansion is expected to be useful and effective in analyses of unsteady flows in pipelines
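The partial-fraction step can be illustrated for a rational transfer function with simple poles, using the residue formula r_i = N(p_i) / D'(p_i); the polynomial below is a toy example, not a pipeline model from the paper:

```python
import numpy as np

# Partial-fraction expansion of a transfer function with simple poles:
# H(s) = N(s)/D(s) = sum_i r_i / (s - p_i). Toy coefficients only.
num = np.array([1.0])              # N(s) = 1
den = np.array([1.0, 3.0, 2.0])    # D(s) = s^2 + 3s + 2 = (s+1)(s+2)

poles = np.roots(den)
dden = np.polyder(den)             # D'(s), for the residue formula
residues = np.polyval(num, poles) / np.polyval(dden, poles)
# Each (r_i, p_i) pair is one first-order mode of the system response,
# which is what makes resonance placement arguments tractable.
```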
Assessment of drinking water quality using principal component ...
African Journals Online (AJOL)
Assessment of drinking water quality using principal component analysis and partial least square discriminant analysis: a case study at water treatment plants, ... water and to detect the source of pollution for the most revealing parameters.
Pan, Suwen; Fadiga, Mohamadou L.; Mohanty, Samarendu; Welch, Mark
2006-01-01
This paper analyzed the effects of trade liberalizing reforms in the world cotton market using a partial equilibrium model. The simulation results indicated that a removal of domestic subsidies and border tariffs for cotton would increase the amount of world cotton trade by an average of 4% in the next five years and world cotton prices by an average of 12% over the same time horizon. The findings indicated that under the liberalization policy, the United States would lose part of its export ...
Group-wise partial least square regression
Camacho, José; Saccenti, Edoardo
2018-01-01
This paper introduces the group-wise partial least squares (GPLS) regression. GPLS is a new sparse PLS technique where the sparsity structure is defined in terms of groups of correlated variables, similarly to what is done in the related group-wise principal component analysis. These groups are
Analysis of Insulating Material of XLPE Cables considering Innovative Patterns of Partial Discharges
Directory of Open Access Journals (Sweden)
Fernando Figueroa Godoy
2017-01-01
Full Text Available This paper aims to analyze the quality of the insulation of XLPE high-voltage underground cables using a prototype that classifies the following usual types of partial discharge (PD) patterns: (1) internal PD, (2) superficial PD, (3) corona discharge in air, and (4) corona discharge in oil, in addition to considering two new PD patterns: (1) false contact and (2) floating ground. The tests and measurements used to obtain the patterns and case studies of partial discharges were performed at the Testing Laboratory for Equipment and Materials (LEPEM) of the Federal Electricity Commission of Mexico (CFE), using LDIC measuring equipment and the IEC 60270 standard. To classify the six partial discharge patterns mentioned above, a Modified Bayesian Probabilistic Neural Network (PNNBM) is used; it has the advantage of handling a large amount of data without saturating. In addition, the PNN converges, always finding a solution in a short time at low computational cost. The insulation of two high-voltage cables with different characteristics was analyzed. The test results allow us to conclude which cable has the better insulation.
Singh, S; Gupta, R
2012-06-01
To evaluate the utility of image analysis using textural parameters obtained from a co-occurrence matrix in differentiating the three components of fibroadenoma of the breast in fine needle aspirate smears. Sixty cases of histologically proven fibroadenoma were included in this study. Of these, 40 cases were used as a training set and 20 cases were taken as a test set for the discriminant analysis. Digital images were acquired from cytological preparations of all the cases and three components of fibroadenoma (namely, monolayered cell clusters, stromal fragments and background with bare nuclei) were selected for image analysis. A co-occurrence matrix was generated and a texture parameter vector (sum mean, energy, entropy, contrast, cluster tendency and homogeneity) was calculated for each pixel. The percentage of pixels correctly classified to a component of fibroadenoma on discriminant analysis was noted. The textural parameters, when considered in isolation, showed considerable overlap in their values across the three cytological components of fibroadenoma. However, the stepwise discriminant analysis revealed that all six textural parameters contributed significantly to the discriminant functions. Discriminant analysis using all six parameters showed that the percentages of pixels correctly classified in the training and test sets were 96.7% and 93.0%, respectively. Textural analysis using a co-occurrence matrix appears to be useful in differentiating the three cytological components of fibroadenoma. These results could further be utilized in developing algorithms for image segmentation and automated diagnosis, but need to be confirmed in further studies. © 2011 Blackwell Publishing Ltd.
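The co-occurrence matrix and several of the texture parameters named above (energy, entropy, contrast, homogeneity) can be sketched as follows; the tiny 4-level image and the single horizontal displacement are illustrative choices, not the paper's setup:

```python
import numpy as np

# Gray-level co-occurrence matrix for one displacement (dx, dy) = (1, 0).
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
levels = 4

glcm = np.zeros((levels, levels))
for y in range(img.shape[0]):
    for x in range(img.shape[1] - 1):        # horizontal neighbor pairs
        glcm[img[y, x], img[y, x + 1]] += 1
glcm /= glcm.sum()                           # joint probability P(i, j)

i, j = np.indices(glcm.shape)
energy = np.sum(glcm**2)
contrast = np.sum(glcm * (i - j)**2)
homogeneity = np.sum(glcm / (1.0 + (i - j)**2))
nz = glcm[glcm > 0]
entropy = -np.sum(nz * np.log2(nz))
```

In practice these scalars would be computed in a sliding window around each pixel to build the per-pixel feature vector used by the discriminant analysis.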
Directory of Open Access Journals (Sweden)
Han-Jui Lee
Full Text Available Current time-density curve analysis of digital subtraction angiography (DSA) provides intravascular flow information but requires manual vasculature selection. We developed an angiographic marker that represents cerebral perfusion by using automatic independent component analysis. We retrospectively analyzed the data of 44 patients with unilateral carotid stenosis higher than 70% according to the North American Symptomatic Carotid Endarterectomy Trial criteria. For all patients, magnetic resonance perfusion (MRP) was performed one day before DSA. Fixed contrast injection protocols and DSA acquisition parameters were used before stenting. The cerebral circulation time (CCT) was defined as the difference in the time to peak between the parietal vein and the cavernous internal carotid artery in a lateral angiogram. Both anterior-posterior and lateral DSA views were processed using independent component analysis, and the capillary angiogram was extracted automatically. The full width at half maximum of the time-density curve in the capillary phase in the anterior-posterior and lateral DSA views was defined as the angiographic mean transit time (aMTT), i.e., aMTT-AP and aMTT-Lat. The correlations between the degree of stenosis, CCT, aMTT-AP, aMTT-Lat and the MRP parameters were evaluated. The degree of stenosis showed no correlation with CCT, aMTT-AP, aMTT-Lat, or any MRP parameter. CCT showed a strong correlation with aMTT-AP (r = 0.67) and aMTT-Lat (r = 0.72). Among the MRP parameters, CCT showed only a moderate correlation with MTT (r = 0.67) and Tmax (r = 0.40). aMTT-AP showed a moderate correlation with Tmax (r = 0.42) and a strong correlation with MTT (r = 0.77). aMTT-Lat showed similar correlations with Tmax (r = 0.59) and MTT (r = 0.73). Apart from vascular anatomy, aMTT estimates brain parenchyma hemodynamics from DSA and is concordant with MRP. This process is completely automatic and provides immediate measurement of quantitative peritherapeutic brain parenchyma…
International Nuclear Information System (INIS)
Lu, Wei-Zhen; He, Hong-Di; Dong, Li-yun
2011-01-01
This study aims to evaluate the performance of two statistical methods, principal component analysis and cluster analysis, for the management of the air quality monitoring network of Hong Kong and the reduction of the associated expenses. The specific objectives are: (i) to identify city areas with similar air pollution behavior; and (ii) to locate emission sources. The statistical methods were applied to the mass concentrations of sulphur dioxide (SO2), respirable suspended particulates (RSP) and nitrogen dioxide (NO2), collected in the monitoring network of Hong Kong from January 2001 to December 2007. The results demonstrate that, for each pollutant, the monitoring stations are grouped into different classes based on their air pollution behavior. Monitoring stations located in nearby areas share the same specific air pollution characteristics, suggesting that the air quality monitoring system could be managed more effectively: redundant equipment could be transferred to other monitoring stations to allow further enlargement of the monitored area. Additionally, the existence of different air pollution behaviors within the monitoring network is explained by the variability of wind directions across the region. The results imply that the air quality problem in Hong Kong is not only a local problem arising mainly from street-level pollution, but also a regional problem involving the Pearl River Delta region. (author)
Reliability analysis of nuclear component cooling water system using semi-Markov process model
International Nuclear Information System (INIS)
Veeramany, Arun; Pandey, Mahesh D.
2011-01-01
Research highlights: → A semi-Markov process (SMP) model is used to evaluate the system failure probability of the nuclear component cooling water (NCCW) system. → SMP is used because it can solve a reliability block diagram with a mixture of redundant repairable and non-repairable components. → The primary objective is to demonstrate that SMP can consider a Weibull failure time distribution for components while a Markov model cannot. → Result: the variability in component failure time is directly proportional to the NCCW system failure probability. → The result can be utilized as an initiating event probability in probabilistic safety assessment projects. - Abstract: A reliability analysis of the nuclear component cooling water (NCCW) system is carried out. A semi-Markov process model is used in the analysis because it has the potential to solve a reliability block diagram with a mixture of repairable and non-repairable components. With Markov models it is only possible to assume an exponential profile for component failure times; an advantage of the proposed model is the ability to assume a Weibull distribution for the failure times of components. In an attempt to reduce the number of states in the model, it is shown that the poly-Weibull distribution arises. The objective of the paper is to determine the system failure probability under these assumptions. Monte Carlo simulation is used to validate the model result, which can be utilized as an initiating event probability in probabilistic safety assessment projects.
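The Weibull failure-time assumption that the semi-Markov model permits can be illustrated with a small Monte Carlo cross-check of mission unreliability against the closed-form Weibull CDF; the shape, scale, and mission time below are invented values, not parameters from the paper:

```python
import numpy as np

# Monte Carlo sketch of mission unreliability for a single
# non-repairable component with Weibull-distributed failure time.
rng = np.random.default_rng(42)
shape, scale = 1.5, 1000.0    # hours; shape > 1 models wear-out
mission = 100.0               # hours

times = scale * rng.weibull(shape, size=200_000)
p_fail_mc = np.mean(times < mission)

# Closed form for cross-checking: F(t) = 1 - exp(-(t/scale)**shape)
p_fail_exact = 1.0 - np.exp(-(mission / scale)**shape)
```

The same sampling idea extends to validating a multi-state semi-Markov model, where each sojourn time is drawn from its own distribution.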
Kairov, Ulykbek; Cantini, Laura; Greco, Alessandro; Molkenov, Askhat; Czerwinska, Urszula; Barillot, Emmanuel; Zinovyev, Andrei
2017-09-11
Independent Component Analysis (ICA) is a method that models gene expression data as the action of a set of statistically independent hidden factors. The output of ICA depends on a fundamental parameter: the number of components (factors) to compute. The optimal choice of this parameter, related to determining the effective data dimension, remains an open question in the application of blind source separation techniques to transcriptomic data. Here we address the question of how to choose the number of statistically independent components in transcriptomic data analysis so that the components are reproducible across multiple runs of ICA (within the same or varying effective dimensions) and across multiple independent datasets. To this end, we introduce a ranking of independent components based on their stability in multiple ICA computation runs and define a distinguished number of components (Most Stable Transcriptome Dimension, MSTD) corresponding to the point of qualitative change in the stability profile. Based on a large body of data, we demonstrate that a sufficient number of dimensions is required for biological interpretability of the ICA decomposition and that the most stable components, with ranks below MSTD, are more likely to be reproduced in independent studies than the less stable ones. At the same time, we show that a transcriptomics dataset can be reduced to a relatively high number of dimensions without losing the interpretability of ICA, even though higher dimensions give rise to components driven by small gene sets. We suggest a protocol for applying ICA to transcriptomics data, with the possibility of prioritizing components with respect to their reproducibility, which strengthens the biological interpretation. Computing too few components (far fewer than MSTD) is not optimal for interpretability of the results. Components ranked within the MSTD range are more likely to be reproduced in independent studies.
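The stability-based ranking can be sketched with a toy blind source separation example. The minimal FastICA routine, the synthetic sources, and the mixing matrix below are all illustrative assumptions, not the authors' pipeline; real transcriptomic data and the full MSTD protocol involve many more components and datasets:

```python
import numpy as np

def fastica(X, n_comp, seed, n_iter=200):
    """Minimal symmetric FastICA with a tanh contrast (X: samples x signals)."""
    Xc = X - X.mean(0)
    U, _, _ = np.linalg.svd(Xc, full_matrices=False)
    Z = U[:, :n_comp] * np.sqrt(len(X))    # whitened data
    W = np.random.default_rng(seed).standard_normal((n_comp, n_comp))
    for _ in range(n_iter):
        G = np.tanh(Z @ W.T)
        W = G.T @ Z / len(Z) - np.diag((1 - G ** 2).mean(0)) @ W
        u, _, vt = np.linalg.svd(W)
        W = u @ vt                          # symmetric decorrelation
    return Z @ W.T                          # estimated independent components

# three synthetic "expression programs" mixed into observed data
t = np.linspace(0, 8, 2000)
S_true = np.column_stack([np.sin(7 * t), np.sign(np.sin(11 * t)),
                          np.random.default_rng(0).laplace(size=t.size)])
X = S_true @ np.array([[1.0, 0.5, 0.3], [0.6, 1.0, 0.4], [0.4, 0.7, 1.0]]).T

# stability index: average best absolute correlation against a reference run
runs = [fastica(X, 3, seed) for seed in range(4)]
ref = runs[0]
stability = np.zeros(3)
for other in runs[1:]:
    C = np.abs(np.corrcoef(ref.T, other.T)[:3, 3:])
    stability += C.max(axis=1)
stability /= len(runs) - 1
ranking = np.argsort(stability)[::-1]       # most stable components first
```

Components that reappear (up to sign and permutation) in every run receive a stability index near 1; the MSTD is then read off where this profile drops qualitatively.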
Advanced BWR core component designs and the implications for SFD analysis
International Nuclear Information System (INIS)
Ott, L.J.
1997-01-01
Prior to the DF-4 boiling water reactor (BWR) severe fuel damage (SFD) experiment conducted at the Sandia National Laboratories in 1986, no experimental data base existed for guidance in modeling core component behavior under postulated severe accident conditions in commercial BWRs. This paper presents the lessons learned from the DF-4 experiment (and subsequent German CORA BWR SFD tests) and their impact on core models in the current generation of SFD codes. The DF-4 and CORA BWR test assemblies were modeled on the core component designs of circa 1985; that is, the 8 x 8 fuel assembly with two water rods and a cruciform control blade constructed of B 4 C-filled tubelets. Within the past ten years, the state of the art in BWR core component development has outdistanced the current SFD experimental data base and SFD code capabilities. For example, modern BWR control blade designs include hafnium at the tips and top of each control blade wing for longer blade operating lifetimes; water rods have been replaced by larger water channels for better neutronic economy; and fuel assemblies now contain partial-length fuel rods, again for better neutronic economy. This paper also discusses the implications of these advanced fuel assembly and core component designs for severe accident progression and for current SFD code capabilities.
MULTI-COMPONENT ANALYSIS OF POSITION-VELOCITY CUBES OF THE HH 34 JET
International Nuclear Information System (INIS)
Rodríguez-González, A.; Esquivel, A.; Raga, A. C.; Cantó, J.; Curiel, S.; Riera, A.; Beck, T. L.
2012-01-01
We present an analysis of Hα spectra of the HH 34 jet with two-dimensional spectral resolution. We carry out multi-Gaussian fits to the spatially resolved line profiles and derive maps of the intensity, radial velocity, and velocity width of each of the components. We find that close to the outflow source there are three components: a high (negative) radial velocity component with a well-collimated, jet-like morphology; an intermediate velocity component with a broader morphology; and a positive radial velocity component with a non-collimated morphology and large linewidth. We suggest that this positive velocity component is associated with jet emission scattered by stationary dust in the circumstellar environment. Farther from the outflow source, we find only two components: a high, negative radial velocity component with a narrower spatial distribution, and an intermediate velocity component. The fitting procedure was carried out with the new AGA-V1 code, which is available online and is described in detail in this paper.
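The multi-Gaussian fitting step can be sketched on a synthetic Hα profile; the component velocities, widths, and noise level below are invented for illustration, not the HH 34 measurements, and the AGA-V1 genetic-algorithm fitter is replaced here by a plain least-squares fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(v, a1, v1, s1, a2, v2, s2):
    """Sum of two Gaussian velocity components."""
    return (a1 * np.exp(-0.5 * ((v - v1) / s1) ** 2) +
            a2 * np.exp(-0.5 * ((v - v2) / s2) ** 2))

v = np.linspace(-300, 100, 400)          # radial velocity axis (km/s)
rng = np.random.default_rng(3)
# synthetic profile: a collimated high-velocity jet component plus a
# broader intermediate-velocity component (all parameters illustrative)
y = two_gauss(v, 1.0, -200.0, 15.0, 0.5, -60.0, 40.0)
y += 0.01 * rng.standard_normal(v.size)

p0 = [0.8, -180.0, 20.0, 0.4, -50.0, 30.0]   # rough initial guesses
popt, _ = curve_fit(two_gauss, v, y, p0=p0)
```

Repeating such a fit at every spatial position yields the maps of intensity, radial velocity, and velocity width described in the abstract.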
Reliability Analysis of Load-Sharing K-out-of-N System Considering Component Degradation
Directory of Open Access Journals (Sweden)
Chunbo Yang
2015-01-01
Full Text Available The K-out-of-N configuration is a typical form of redundancy used to improve system reliability, in which at least K of N components must work for successful operation of the system. When the components degrade, more components are needed to meet the system requirement, which means that the value of K has to increase. Current reliability analysis methods overestimate reliability because using a constant K ignores the degradation effect. In a load-sharing system with degrading components, the workload shared by each surviving component increases after a random component failure, resulting in a higher failure rate and an increased performance degradation rate. This paper proposes a method combining a tampered failure rate model with a performance degradation model to analyze the reliability of a load-sharing K-out-of-N system with degrading components. The proposed method treats K as a variable derived from the performance degradation model, while the load-sharing effect is evaluated by the tampered failure rate model. A Monte Carlo simulation procedure is used to estimate the discrete probability distribution of K. The case of a solar panel is studied in this paper, and the result shows that the reliability considering component degradation is lower than that ignoring component degradation.
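A minimal Monte Carlo sketch of the core argument follows: with a tampered failure rate for load sharing, letting the requirement K grow over the mission (a stand-in for degradation) lowers the estimated reliability relative to a constant K. All parameters, the step-up rule for K, and the exponential hazards are invented simplifications of the paper's models:

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 6, 1000.0        # components and mission time (illustrative)
base_rate = 1e-4        # per-hour hazard at nominal per-component load

def mission_survives(k_of_t):
    """One Monte Carlo history; k_of_t(t) = components required at time t."""
    t, alive = 0.0, N
    while True:
        if alive < k_of_t(t):
            return False
        # tampered failure rate: each survivor carries N/alive of nominal load
        total_rate = alive * base_rate * (N / alive)
        dt = rng.exponential(1.0 / total_rate)
        if t + dt >= T:
            return alive >= k_of_t(T)    # re-check in case K stepped up
        t += dt
        alive -= 1

def reliability(k_of_t, trials=20_000):
    return sum(mission_survives(k_of_t) for _ in range(trials)) / trials

R_constant_K = reliability(lambda t: 4)                     # fixed requirement
R_degrading = reliability(lambda t: 4 if t < T / 2 else 5)  # K grows over time
```

As in the solar panel case study, the degrading-K reliability comes out lower than the constant-K estimate.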
Directory of Open Access Journals (Sweden)
Peter Glen
Full Text Available There is no consensus as to what extent of "wrap" is required in a fundoplication for correction of gastroesophageal reflux disease (GERD). The objective was to evaluate whether a complete (360-degree) or partial fundoplication gives better control of GERD. A systematic search of MEDLINE and Scopus identified interventional and observational studies of fundoplication in children. Screening identified those comparing techniques. The primary outcome was recurrence of GERD following surgery. Dysphagia and complications were secondary outcomes of interest. Meta-analysis was performed when appropriate. Study quality was assessed using the Cochrane Risk of Bias Tool. 2289 abstracts were screened, yielding 2 randomized controlled trials (RCTs) and 12 retrospective cohort studies. The RCTs were pooled. There was no difference in surgical success between partial and complete fundoplication, OR 1.33 [0.67, 2.66]. In the 12 cohort studies, 3 (25% used an objective assessment of the surgery, one of which showed improved outcomes with complete fundoplication. Twenty-five different complications were reported; the most common were dysphagia and gas-bloat syndrome. Overall study quality was poor. The comparison of partial fundoplication with complete fundoplication warrants further study. The evidence does not demonstrate superiority of one technique. The lack of high-quality RCTs and the methodological heterogeneity of observational studies limits a powerful meta-analysis.
Directory of Open Access Journals (Sweden)
Yung-Kun Chuang
2014-09-01
Full Text Available Independent component (IC) analysis was applied to near-infrared spectroscopy for analysis of gentiopicroside and swertiamarin, the two bioactive components of Gentiana scabra Bunge. ICs highly correlated with the two bioactive components were selected for the analysis of tissue cultures, shoots and roots, which were found to occupy three different positions within the two-dimensional (2D) and 3D domains constructed by the ICs. This setup could be used for quantitative determination of the respective contents of gentiopicroside and swertiamarin within the plants. For gentiopicroside, the spectral calibration model based on the second derivative spectra produced the best results in the wavelength ranges of 600–700 nm, 1600–1700 nm, and 2000–2300 nm (correlation coefficient of calibration = 0.847, standard error of calibration = 0.865%, and standard error of validation = 0.909%). For swertiamarin, the spectral calibration model based on the first derivative spectra produced the best results in the wavelength ranges of 600–800 nm and 2200–2300 nm (correlation coefficient of calibration = 0.948, standard error of calibration = 0.168%, and standard error of validation = 0.216%). Both models showed satisfactory predictability. This study successfully established qualitative and quantitative correlations of gentiopicroside and swertiamarin with near-infrared spectra, enabling rapid and accurate inspection of the bioactive components of G. scabra Bunge at different growth stages.
The derivative assay--an analysis of two fast components of DNA rejoining kinetics
International Nuclear Information System (INIS)
Sandstroem, B.E.
1989-01-01
The DNA rejoining kinetics of human U-118 MG cells were studied after gamma-irradiation with 4 Gy. The analysis of the sealing rate of the induced DNA strand breaks was made with a modification of the DNA unwinding technique. The modification meant that rather than just monitoring the number of existing breaks at each time of analysis, the velocity at which the rejoining process proceeded was determined. Two apparent first-order components of single-strand break repair could be identified during the 25 min of analysis. The half-times for the two components were 1.9 and 16 min, respectively.
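Two first-order components correspond to a biexponential decay of the remaining breaks. The sketch below fits such a model to synthetic data; the amplitudes and noise are invented, while the half-times of 1.9 and 16 min are taken from the abstract:

```python
import numpy as np
from scipy.optimize import curve_fit

LN2 = np.log(2.0)

def remaining_breaks(t, a_fast, th_fast, a_slow, th_slow):
    """Two first-order rejoining components, parameterized by half-times."""
    return (a_fast * np.exp(-LN2 * t / th_fast) +
            a_slow * np.exp(-LN2 * t / th_slow))

# synthetic strand-break counts over the 25 min analysis window
t = np.linspace(0.0, 25.0, 26)
rng = np.random.default_rng(6)
y = remaining_breaks(t, 60.0, 1.9, 40.0, 16.0)
y *= 1 + 0.01 * rng.standard_normal(t.size)

popt, _ = curve_fit(remaining_breaks, t, y, p0=[50.0, 1.0, 50.0, 10.0])
half_times = sorted([popt[1], popt[3]])   # fast component first
```

The derivative assay of the paper instead measures the rejoining velocity, i.e. the time derivative of this curve, but the two half-times appear in either representation.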
DEFF Research Database (Denmark)
Tian, Fang; Rades, Thomas; Sandler, Niklas
2008-01-01
The purpose of this research is to gain a greater insight into the hydrate formation processes of different carbamazepine (CBZ) anhydrate forms in aqueous suspension, where principal component analysis (PCA) was applied for data analysis. The capability of PCA to visualize and to reveal simplified...
Khodasevich, M. A.; Sinitsyn, G. V.; Gres'ko, M. A.; Dolya, V. M.; Rogovaya, M. V.; Kazberuk, A. V.
2017-07-01
A study of 153 brands of commercial vodka products showed that counterfeit samples could be identified by introducing a unified additive at the minimum concentration acceptable for instrumental detection and multivariate analysis of UV-Vis transmission spectra. Counterfeit products were detected with 100% probability by using hierarchical cluster analysis or the C-means method in two-dimensional principal-component space.
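The screening scheme can be sketched as clustering in two-dimensional principal-component space; a plain k-means split stands in for the paper's hierarchical clustering or C-means, and the synthetic spectra, taggant absorption band, and sample counts are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
wl = np.linspace(250, 500, 120)                # wavelength grid (nm)
marker = np.exp(-0.5 * ((wl - 320) / 8) ** 2)  # taggant band (illustrative)

def spectrum(has_marker):
    base = 1.0 - 0.001 * (wl - 250)            # smooth baseline transmission
    s = base - (0.2 * marker if has_marker else 0.0)
    return s + 0.005 * rng.standard_normal(wl.size)

# 30 genuine (tagged) and 10 counterfeit (untagged) samples
X = np.array([spectrum(True) for _ in range(30)] +
             [spectrum(False) for _ in range(10)])

# project to 2-D principal-component space
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc = Xc @ Vt[:2].T

# 2-means clustering in PC space (initialized from one sample of each kind)
centers = pc[[0, -1]].copy()
for _ in range(20):
    d = np.linalg.norm(pc[:, None] - centers[None], axis=2)
    assign = d.argmin(axis=1)
    centers = np.array([pc[assign == j].mean(axis=0) for j in (0, 1)])
```

Because the taggant band dominates the spectral variance, the two clusters separate cleanly in PC space, mirroring the reported 100% detection.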
A review of the reliability analysis of LPRS including the components repairs
International Nuclear Information System (INIS)
Oliveira, L.F.S. de; Fleming, P.V.; Frutuoso e Melo, P.F.F.; Tayt-Sohn, L.C.
1983-01-01
The reliability analysis of the low pressure recirculation system in its long-term recirculation phase before 24 h is presented. The possibility of repairing the components outside the containment is included. A general review of the analysis of the short-term recirculation phase is also given. (author) [pt
Analysis of chromosomal abnormalities: a study of partial exposure to X-rays
International Nuclear Information System (INIS)
Andrade, Aida M.G. de; Mendes, Mariana E.; Mendonça, Julyanne C.G.; Melo, Laís; Hwang, Suy; Santos, Neide; Lima, Fabiana F. de; Centro Regional de Ciências Nucleares; Universidade Federal de Pernambuco
2017-01-01
Biological dosimetry is used in cases of suspected accidental overexposure. The biomarkers most commonly used for assessing the absorbed dose are unstable chromosomal abnormalities. In a partial body exposure, the frequencies of those abnormalities vary according to the fraction of the body exposed and may differ substantially from a total body exposure at an identical dose. The present study aimed to evaluate the frequency of chromosomal changes by simulating, with blood samples, partial (25%, 50%) and full body (100%) irradiation in an X-ray beam. The irradiation was performed at the Metrology Service (CRCN-NE/CNEN) with a 250 kVp X-ray beam, resulting in an absorbed dose of 1.0 Gy. Prior to obtaining the metaphases, irradiated blood was mixed with non-irradiated blood; the mitotic metaphases for the chromosomal analyses were then obtained by culturing lymphocytes, and the slides were stained with 5% Giemsa. An increase in dicentric frequency was observed as the dose percentage increased in both subjects (0.024 and 0.049 in subject 1 and 0.016 and 0.038 in subject 2) after irradiation. The cellular distribution was 'contaminated' only at the 25% dose for the first individual, who showed a prolonged distribution. The Qdr and Dolphin methods were used to estimate the partial absorbed dose; the Qdr method was not efficient, whereas the Dolphin method was efficient when the individual had a prolonged cell distribution. It is necessary to increase the number of observations to confirm the observed behaviors. (author)
International Nuclear Information System (INIS)
Tan Yiqing; Wang Yase; Dai Hongxiu; Li Haitao; Deng Yi; Xiong Liqin
2011-01-01
Objective: To evaluate selective salpingo-catheterization recanalization therapy in treating partial fallopian tube obstruction by comparing its clinical effectiveness with that of non-interventional radiology methods. Methods: During the period from January 2008 to October 2010, a total of 186 infertile women with partial fallopian tube obstruction, confirmed by hysterosalpingography, were seen in the authors' hospital. The study protocol was approved by the hospital ethics committee, and informed consent was obtained from all patients. According to the treatment method received, the 186 patients were divided into two groups. Patients in group A (n=78) received non-interventional radiology methods, including hydrotubation, enema, laparoscopy, physical therapy and traditional Chinese medication, while patients in group B (n=108) received selective salpingo-catheterization recanalization therapy. All 186 patients were followed up for more than six months. The pregnancy rate after the different treatments and fallopian tube patency were closely observed, and the clinical findings were documented. The results were statistically analyzed using the χ2 test. Results: Half a year after the procedures, the pregnancy rate in group A was 12.82% (n=10), and different degrees of fallopian tube obstruction were found in 31.58% of patients. In group B, during the same period of observation, the pregnancy rate was 58.33% (n=63), and partial occlusion in the cornual region was seen in one patient (0.47%). Significant differences in both pregnancy rate and fallopian tube occlusion rate existed between the two groups (P<0.05). Conclusion: Because of its minimal invasiveness, high effectiveness and safety, selective salpingography together with fallopian tube recanalization is well accepted by patients. Selective salpingo-catheterization recanalization therapy is superior to non-interventional radiology methods in
Priority of VHS Development Based in Potential Area using Principal Component Analysis
Meirawan, D.; Ana, A.; Saripudin, S.
2018-02-01
The current condition of VHS (vocational high schools) is still inadequate in quality, quantity and relevance. The purpose of this research is to analyse the development of VHS based on regional potential by using principal component analysis (PCA) in Bandung, Indonesia. The study used descriptive analysis of secondary data based on component reduction. The method used is PCA with the Minitab statistics software. The results indicate that the areas with the lowest scores should be the priority for the construction of new VHS, with study programs chosen in accordance with the development of regional potential. Based on the PCA scores, the main priority for VHS development in Bandung is Saguling, which has the lowest PCA value of 416.92 in area 1, followed by Cihampelas with the lowest PCA value in region 2 and Padalarang with the lowest PCA value.
Ghosh, Debasree; Chattopadhyay, Parimal
2012-06-01
The objective of the work was to use the method of quantitative descriptive analysis (QDA) to describe the sensory attributes of fermented food products prepared with the incorporation of lactic cultures. Panellists were selected and trained to evaluate various attributes, especially color and appearance, body texture, flavor, overall acceptability and acidity, of fermented food products such as cow milk curd and soymilk curd, idli, sauerkraut and probiotic ice cream. Principal component analysis (PCA) identified six significant principal components that accounted for more than 90% of the variance in the sensory attribute data. Overall product quality was modelled as a function of the principal components using multiple least squares regression (R² = 0.8). The results from PCA were statistically analyzed by analysis of variance (ANOVA). These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring the fermented food product attributes that are important for consumer acceptability.
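The modelling step, regressing overall quality on principal-component scores, can be sketched on invented panel data (the attribute count, latent structure, and noise levels are illustrative assumptions, not the study's sensory scores):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 40                                 # hypothetical panelist x product ratings
latent = rng.standard_normal((n, 2))   # two underlying quality factors
# six sensory attributes driven by the latent factors plus panelist noise
A = latent @ rng.standard_normal((2, 6)) + 0.3 * rng.standard_normal((n, 6))
overall = latent @ np.array([1.0, -0.5]) + 0.2 * rng.standard_normal(n)

# PCA scores of the standardized attribute matrix
As = (A - A.mean(0)) / A.std(0)
U, S, Vt = np.linalg.svd(As, full_matrices=False)
scores = U[:, :2] * S[:2]              # first two principal components

# multiple least squares regression of overall quality on the PC scores
X = np.column_stack([np.ones(n), scores])
beta, *_ = np.linalg.lstsq(X, overall, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((overall - pred) ** 2) / np.sum((overall - overall.mean()) ** 2)
```

Regressing on a few orthogonal PC scores instead of the raw, correlated attributes avoids multicollinearity, which is the usual motivation for this two-stage approach.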
Zirari, M.; Abdellah El-Hadj, A.; Bacha, N.
2010-03-01
A finite element method is used to simulate the deposition step of the thermal spray coating process. The set of governing equations is solved by a volume of fluid method. For the solidification phenomenon, we use the specific heat method (SHM). We begin by comparing the present model with experimental results and numerical models available in the literature. In this study, a completely molten or semi-molten aluminum particle impacting an H13 tool steel substrate is considered. We then investigate the effect of the impact inclination of a partially molten particle on a flat substrate. It was found that the melting state of the particle has a great effect on the splat morphology.
Water pollution and income relationships: A seemingly unrelated partially linear analysis
Pandit, Mahesh; Paudel, Krishna P.
2016-10-01
We used a seemingly unrelated partially linear model (SUPLM) to address a potential correlation between pollutants (nitrogen, phosphorous, dissolved oxygen and mercury) in an environmental Kuznets curve study. Simulation studies show that the SUPLM performs well to address potential correlation among pollutants. We find that the relationship between income and pollution follows an inverted U-shaped curve for nitrogen and dissolved oxygen and a cubic shaped curve for mercury. Model specification tests suggest that a SUPLM is better specified compared to a parametric model to study the income-pollution relationship. Results suggest a need to continually assess policy effectiveness of pollution reduction as income increases.
Directory of Open Access Journals (Sweden)
Bilčík Matúš
2018-03-01
Full Text Available With the expanding use of photovoltaics in ordinary households, the question arises how operating conditions affect the electric power of photovoltaic modules. The article analyses the electric power of photovoltaic modules as a function of two important factors: partial shading and the intensity of reflected radiation. To determine the dependence of module power on these parameters, a measurement system was prepared under laboratory conditions. To identify the effect of reflected radiation on the power of the photovoltaic module, a series of measurements was performed on 7 different surfaces with the same radiation source. The experimental results show that the contribution of reflected irradiation to the solar module power is 1.29%. Simulated partial shading of the photovoltaic module reduced its output power by 86.15%.
Health status monitoring for ICU patients based on locally weighted principal component analysis.
Ding, Yangyang; Ma, Xin; Wang, Youqing
2018-03-01
Intelligent status monitoring for critically ill patients can help medical staff quickly detect and assess changes in a patient's condition and then devise an appropriate treatment strategy. However, the general-purpose monitoring models now widely used cannot easily adapt to changes in the status of intensive care unit (ICU) patients because of their fixed structure; a more robust, efficient and fast monitoring model tailored to the individual is needed. A data-driven learning approach combining locally weighted projection regression (LWPR) and principal component analysis (PCA) is first proposed and applied to monitor the nonlinear process of patients' health status in the ICU. LWPR is used to approximate the complex nonlinear process with local linear models, to which PCA can be applied for status monitoring; finally a global weighted statistic is computed for detecting possible abnormalities. Moreover, improved versions such as LWPR-MPCA and LWPR-JPCA were developed, which also show superior performance. Eighteen subjects were selected from the PhysioBank Multi-parameter Intelligent Monitoring for Intensive Care II (MIMIC II) database, and two vital signs of each subject were chosen for online monitoring. The proposed method was compared with several existing methods including traditional PCA, partial least squares (PLS), just-in-time learning combined with modified PCA (L-PCA), and kernel PCA (KPCA). The experimental results demonstrated that the mean fault detection rate (FDR) of PCA can be improved by 41.7% after adding LWPR. The mean FDR of LWPR-MPCA was increased by 8.3% compared with the latest reported method, L-PCA. Meanwhile, LWPR required less training time than the others, especially KPCA. LWPR is here first introduced into ICU patient monitoring and achieves the best monitoring performance, including adaptability to changes in patient status, sensitivity for abnormality detection, fast learning speed and low computational complexity. The algorithm
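The PCA-based detection step can be sketched with a Hotelling T² statistic on two synthetic, correlated vital signs; the signal models, retained-component count, and control limit below are illustrative assumptions, and the paper's LWPR local-model layer is omitted:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 500
# training data: two correlated vital signs under a stable condition
hr = 80 + 5 * rng.standard_normal(n)                       # heart rate
bp = 120 + 0.8 * (hr - 80) + 3 * rng.standard_normal(n)    # systolic pressure
X = np.column_stack([hr, bp])

mu, sd = X.mean(0), X.std(0)
Z = (X - mu) / sd
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
P = Vt[:1]                          # retain one principal component
lam = (S[:1] ** 2) / (n - 1)        # its eigenvalue

def t2(x):
    """Hotelling T² of one observation in the retained PC subspace."""
    z = (x - mu) / sd
    t = z @ P.T
    return float(t @ np.diag(1.0 / lam) @ t)

limit = np.quantile([t2(x) for x in X], 0.99)   # empirical 99% control limit
normal_sample = np.array([82.0, 122.0])
abnormal_sample = np.array([130.0, 165.0])
```

An observation whose T² exceeds the control limit is flagged as a possible status change; LWPR in the paper makes `mu`, `P`, and `lam` local to the current operating region instead of global.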
Yang, Yan-Qin; Yin, Hong-Xu; Yuan, Hai-Bo; Jiang, Yong-Wen; Dong, Chun-Wang; Deng, Yu-Liang
2018-01-01
In the present work, a novel infrared-assisted extraction coupled to headspace solid-phase microextraction (IRAE-HS-SPME) followed by gas chromatography-mass spectrometry (GC-MS) was developed for rapid determination of the volatile components in green tea. The extraction parameters, such as fiber type, sample amount, infrared power, extraction time, and infrared lamp distance, were optimized by orthogonal experimental design. Under optimum conditions, a total of 82 volatile compounds were identified in 21 green tea samples from different geographical origins. Compared with classical water-bath heating, the proposed technique has the remarkable advantages of considerably reduced analysis time and high efficiency. In addition, an effective classification of green teas based on their volatile profiles was achieved by partial least squares-discriminant analysis (PLS-DA) and hierarchical clustering analysis (HCA). Furthermore, the application of a dual criterion based on the variable importance in the projection (VIP) values of the PLS-DA models and on the category from one-way analysis of variance (ANOVA) allowed the identification of 12 potential volatile markers, which were considered to make the most important contribution to the discrimination of the samples. The results suggest that the IRAE-HS-SPME/GC-MS technique combined with multivariate analysis offers a valuable tool for assessing the geographical traceability of different tea varieties.
Fault Diagnosis Method Based on Information Entropy and Relative Principal Component Analysis
Directory of Open Access Journals (Sweden)
Xiaoming Xu
2017-01-01
Full Text Available In traditional principle component analysis (PCA, because of the neglect of the dimensions influence between different variables in the system, the selected principal components (PCs often fail to be representative. While the relative transformation PCA is able to solve the above problem, it is not easy to calculate the weight for each characteristic variable. In order to solve it, this paper proposes a kind of fault diagnosis method based on information entropy and Relative Principle Component Analysis. Firstly, the algorithm calculates the information entropy for each characteristic variable in the original dataset based on the information gain algorithm. Secondly, it standardizes every variable’s dimension in the dataset. And, then, according to the information entropy, it allocates the weight for each standardized characteristic variable. Finally, it utilizes the relative-principal-components model established for fault diagnosis. Furthermore, the simulation experiments based on Tennessee Eastman process and Wine datasets demonstrate the feasibility and effectiveness of the new method.