Snaterse, Marjolein; Dobber, Jos; Jepma, Patricia; Peters, Ron J. G.; ter Riet, Gerben; Boekholdt, S. Matthijs; Buurman, Bianca M.; Scholte op Reimer, Wilma J. M.
2016-01-01
Current guidelines on secondary prevention of cardiovascular disease recommend nurse-coordinated care (NCC) as an effective intervention. However, NCC programmes differ widely and the efficacy of NCC components has not been studied. To investigate the efficacy of NCC and its components in secondary
Backhouse, Amy; Ukoumunne, Obioha C; Richards, David A; McCabe, Rose; Watkins, Ross; Dickens, Chris
2017-11-13
Interventions aiming to coordinate services for the community-based dementia population vary in components, organisation and implementation. In this review we aimed to evaluate the effectiveness of community-based care coordinating interventions on health outcomes and investigate whether specific components of interventions influence their effects. We searched four databases from inception to April 2017: Medline, The Cochrane Library, EMBASE and PsycINFO. This was aided by a search of four grey literature databases, and backward and forward citation tracking of included papers. Title and abstract screening was followed by a full text screen by two independent reviewers, and quality was assessed using the CASP appraisal tool. We then conducted meta-analyses and subgroup analyses. A total of 14 randomised controlled trials (RCTs) involving 10,372 participants were included in the review. Altogether we carried out 12 meta-analyses and 19 subgroup analyses. Meta-analyses found coordinating interventions showed a statistically significant improvement in both patient behaviour measured using the Neuropsychiatric Inventory (NPI) (mean difference (MD) = -9.5; 95% confidence interval (CI): -18.1 to -1.0; p = 0.03; number of studies (n) = 4; I² = 88%) and caregiver burden (standardised mean difference (SMD) = -0.54; 95% CI: -1.01 to -0.07; p = 0.02; n = 5; I² = 92%) compared to the control group. Subgroup analyses found interventions using a case manager with a nursing background showed a greater positive effect on caregiver quality of life than those that used case managers from other professional backgrounds (SMD = 0.94 versus 0.03, respectively; p < 0.001). Interventions that did not provide supervision for the case managers showed greater effectiveness for reducing the percentage of patients that are institutionalised compared to those that provided supervision (odds ratio (OR) = 0.27 versus 0.96 respectively; p = 0.02). There was little
Towards Cognitive Component Analysis
Hansen, Lars Kai; Ahrendt, Peter; Larsen, Jan
2005-01-01
Cognitive component analysis (COCA) is here defined as the process of unsupervised grouping of data such that the ensuing group structure is well-aligned with that resulting from human cognitive activity. We have earlier demonstrated that independent components analysis is relevant for representing...
Multiscale principal component analysis
Akinduko, A A; Gorban, A N
2014-01-01
Principal component analysis (PCA) is an important tool in exploring data. The conventional approach to PCA leads to a solution which favours the structures with large variances. This is sensitive to outliers and could obfuscate interesting underlying structures. One of the equivalent definitions of PCA is that it seeks the subspaces that maximize the sum of squared pairwise distances between data projections. This definition opens up more flexibility in the analysis of principal components which is useful in enhancing PCA. In this paper we introduce scales into PCA by maximizing only the sum of pairwise distances between projections for pairs of datapoints with distances within a chosen interval of values [l,u]. The resulting principal component decompositions in Multiscale PCA depend on point (l,u) on the plane and for each point we define projectors onto principal components. Cluster analysis of these projectors reveals the structures in the data at various scales. Each structure is described by the eigenvectors at the medoid point of the cluster which represent the structure. We also use the distortion of projections as a criterion for choosing an appropriate scale especially for data with outliers. This method was tested on both artificial distribution of data and real data. For data with multiscale structures, the method was able to reveal the different structures of the data and also to reduce the effect of outliers in the principal component analysis
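The pairwise-distance formulation described in the abstract above can be sketched in a few lines. The following is a hypothetical illustration (the function name and the brute-force pair loop are assumptions for clarity), not the authors' implementation: directions maximizing the sum of squared pairwise distances of projections, restricted to pairs whose distance lies in [l, u], are the top eigenvectors of the scatter matrix of the selected pairwise differences.

```python
import numpy as np

def multiscale_pca(X, l, u, n_components=1):
    """Sketch of Multiscale PCA: find directions maximizing the sum of
    squared pairwise distances between projections, restricted to pairs
    of points whose mutual distance lies in the interval [l, u]."""
    n, d = X.shape
    # Accumulate the scatter matrix of pairwise differences within [l, u]
    S = np.zeros((d, d))
    for i in range(n):
        for j in range(i + 1, n):
            diff = X[i] - X[j]
            dist = np.linalg.norm(diff)
            if l <= dist <= u:
                S += np.outer(diff, diff)
    # Top eigenvectors of S are the multiscale principal components
    vals, vecs = np.linalg.eigh(S)
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:n_components]], vals[order[:n_components]]
```

With l = 0 and u = infinity every pair is selected and this reduces to the pairwise-distance formulation of ordinary PCA mentioned in the abstract.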
Coordinated Analyses of Diverse Components in Whole Stardust Cometary Tracks
Nakamura-Messenger, K.; Keller, L. P.; Messenger, S. R.; Clemett, S. J.; Nguyen, L. N.; Frank, D.
2011-12-01
Analyses of samples returned from Comet 81P/Wild-2 by the Stardust spacecraft have resulted in a number of surprising findings that show the origins of comets are more complex than previously suspected. However, these samples pose new experimental challenges because they are diverse and suffered fragmentation, thermal alteration, and fine scale mixing with aerogel. Questions remain about the nature of Wild-2 materials, such as the abundances of organic matter, crystalline materials, and presolar grains. To overcome these challenges, we have developed new sample preparation and analytical techniques tailored for entire aerogel tracks [Nakamura-Messenger et al. 2011]. We have successfully ultramicrotomed entire "carrot" and "bulbous" type tracks along their axis while preserving their original shapes. This innovation allowed us to examine the distribution of fragments along the track from the entrance hole all the way to the terminal particle (TP). We will present results of our coordinated analysis of the "carrot" type aerogel tracks #112 and #148, and the "bulbous" type aerogel tracks #113, #147 and #168 from the nanometer to the millimeter scale. Scanning TEM (STEM) was used for elemental and detailed mineralogy characterization, NanoSIMS was used for isotopic analyses, and ultrafast two-step laser mass spectrometry (ultra L2MS) was used to investigate the nature and distribution of organic phases. The isotopic measurements were performed following detailed TEM characterization for coordinated mineralogy. This approach also enabled spatially resolving the target sample from fine-scale mixtures of compressed aerogel and melt. Eight of the TPs of track #113 are dominated by coarse-grained enstatite (En90) that is largely orthoenstatite with minor, isolated clinoenstatite lamellae. One TP contains minor forsterite (Fo88) and small inclusions of diopside with % levels of Al, Cr and Fe. Two of the TPs contain angular regions of fine-grained nepheline surrounded by
Feng, Ling
2008-01-01
This dissertation concerns the investigation of the consistency of statistical regularities in a signaling ecology and human cognition, while inferring appropriate actions for a speech-based perceptual task. It is based on unsupervised Independent Component Analysis providing a rich spectrum...... of audio contexts along with pattern recognition methods to map components to known contexts. It also involves looking for the right representations for auditory inputs, i.e. the data analytic processing pipelines invoked by human brains. The main ideas refer to Cognitive Component Analysis, defined...... as the process of unsupervised grouping of generic data such that the ensuing group structure is well-aligned with that resulting from human cognitive activity. Its hypothesis runs ecologically: features which are essentially independent in a context defined ensemble, can be efficiently coded as sparse...
Euler principal component analysis
Liwicki, Stephan; Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja
Principal Component Analysis (PCA) is perhaps the most prominent learning tool for dimensionality reduction in pattern recognition and computer vision. However, the ℓ 2-norm employed by standard PCA is not robust to outliers. In this paper, we propose a kernel PCA method for fast and robust PCA,
Bayesian Independent Component Analysis
Winther, Ole; Petersen, Kaare Brandt
2007-01-01
In this paper we present an empirical Bayesian framework for independent component analysis. The framework provides estimates of the sources, the mixing matrix and the noise parameters, and is flexible with respect to choice of source prior and the number of sources and sensors. Inside the engine...
Integrating the Joint Force: Improving Coordination Among The Component Commanders
Krogman, Kenneth
2003-01-01
.... By examining one aspect of joint fire support, the Fire Support Coordination Line (FSCL), the operational level implications of doctrine, and implications regarding horizontal integration and coordination become clear...
Independent component analysis: recent advances
Hyvärinen, Aapo
2013-01-01
Independent component analysis is a probabilistic method for learning a linear transform of a random vector. The goal is to find components that are maximally independent and non-Gaussian (non-normal). Its fundamental difference to classical multi-variate statistical methods is in the assumption of non-Gaussianity, which enables the identification of original, underlying components, in contrast to classical methods. The basic theory of independent component analysis was mainly developed in th...
LIM, M.; PARK, Y.; Jung, H.; SHIN, Y.; Rim, H.; PARK, C.
2017-12-01
Measuring all components of a physical property, for example the magnetic field, is more useful for subsequent interpretation and application than measuring its magnitude alone. To convert the three components measured on an arbitrary coordinate system (for example, that of a moving magnetic sensor body) into three components on a fixed coordinate system (for example, the geographical one) by rotations of the coordinate system through Euler angles, we need the attitude of the sensor body as a time series, which could be acquired by an INS-GNSS system whose axes are installed coincident with those of the sensor body. But if we want to install magnetic sensors in an array at the sea floor without any attitude acquisition facility, and to monitor the variation of the magnetic field in time, we also need some way to estimate the relation between the geographical coordinate system and each sensor body's coordinate system by comparing only the vectors measured on the two coordinate systems, on the assumption that the directions of the measured magnetic field are the same in both. At least three estimation methods are available. The first is to calculate the three Euler angles phi, theta, psi from the equation Vgeograph = Rx(phi) Ry(theta) Rz(psi) Vrandom, where Vgeograph is the vector on the geographical coordinate system, and Rx(phi) is the rotation matrix around the x axis by the angle phi, and so on. The second is to calculate the difference of inclination and declination between the two vectors on a spherical coordinate system. The third, used by us for this study, is to calculate the direction of a rotation axis and the angle of rotation along a great circle around that axis. We installed no. 1 and no. 2 FVM-400 fluxgate magnetometers in an array near Cheongyang Geomagnetic Observatory (IAGA code CYG) and acquired time series of magnetic fields for CYG and for
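The third method described above, finding the axis and angle of a single great-circle rotation carrying the sensor-frame field vector onto the geographic-frame field vector, can be sketched with Rodrigues' rotation formula. This is an illustrative assumption of one way to do it (function name included), not the authors' code:

```python
import numpy as np

def rotation_aligning(v_sensor, v_geo):
    """Find the rotation axis and angle that carry the field vector
    measured in the sensor body's frame onto the same field vector in
    geographic coordinates, then build the rotation matrix via
    Rodrigues' formula."""
    a = v_sensor / np.linalg.norm(v_sensor)
    b = v_geo / np.linalg.norm(v_geo)
    axis = np.cross(a, b)
    s = np.linalg.norm(axis)   # sin(angle)
    c = np.dot(a, b)           # cos(angle)
    angle = np.arctan2(s, c)
    if s < 1e-12:              # parallel vectors: no unique axis
        return np.eye(3), 0.0
    k = axis / s               # unit rotation axis
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    # Rodrigues' rotation formula
    R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
    return R, angle
```

Note that a single vector pair only constrains two of the three rotational degrees of freedom; the great-circle rotation is the minimal rotation consistent with the pair.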
Shifted Independent Component Analysis
Mørup, Morten; Madsen, Kristoffer Hougaard; Hansen, Lars Kai
2007-01-01
Delayed mixing is a problem of theoretical interest and practical importance, e.g., in speech processing, bio-medical signal analysis and financial data modelling. Most previous analyses have been based on models with integer shifts, i.e., shifts by a number of samples, and have often been carried...
Anatomic breast coordinate system for mammogram analysis
Karemore, Gopal; Brandt, S.; Karssemeijer, N.
2011-01-01
Purpose Many researchers have investigated measures other than density in the mammogram, such as measures based on texture, to improve breast cancer risk assessment. However, parenchymal texture characteristics are highly dependent on the orientation of vasculature structure and fibrous tissue...... was represented by geodesic distance (s) from nipple and parametric angle (¿) as shown in figure 1. The scoring technique called MTR (mammographic texture resemblance marker) used this breast coordinate system to extract Gaussian derivative features. The features extracted using the (x,y) and the curve...... methodologies as seen from table 2 in given temporal study. Conclusion The curve-linear anatomical breast coordinate system facilitated computerized analysis of mammograms. The proposed coordinate system slightly improved the risk segregation by Mammographic Texture Resemblance and minimized the geometrical......
Multiview Bayesian Correlated Component Analysis
Kamronn, Simon Due; Poulsen, Andreas Trier; Hansen, Lars Kai
2015-01-01
are identical. Here we propose a hierarchical probabilistic model that can infer the level of universality in such multiview data, from completely unrelated representations, corresponding to canonical correlation analysis, to identical representations as in correlated component analysis. This new model, which...... we denote Bayesian correlated component analysis, evaluates favorably against three relevant algorithms in simulated data. A well-established benchmark EEG data set is used to further validate the new model and infer the variability of spatial representations across multiple subjects....
EXAFS and principal component analysis : a new shell game
Wasserman, S.
1998-01-01
The use of principal component (factor) analysis in the analysis of EXAFS spectra is described. The components derived from EXAFS spectra share mathematical properties with the original spectra. As a result, the abstract components can be analyzed using standard EXAFS methodology to yield bond distances and other coordination parameters. The number of components that must be analyzed is usually less than the number of original spectra. The method is demonstrated using a series of spectra from aqueous solutions of uranyl ions.
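As a minimal illustration of the point that the number of abstract components is usually smaller than the number of spectra, one can count the significant singular values of the spectra matrix: mixtures of m pure species span only m components. The function name and tolerance below are assumptions, not part of the paper:

```python
import numpy as np

def significant_components(spectra, tol=1e-8):
    """Estimate how many abstract components underlie a set of spectra
    (one spectrum per row) by counting singular values above a relative
    tolerance. For noiseless mixtures of m pure species this returns m."""
    s = np.linalg.svd(spectra, compute_uv=False)
    if s.size == 0 or s[0] == 0:
        return 0
    return int(np.sum(s > tol * s[0]))
```

With experimental noise the tolerance must be chosen against the noise floor rather than machine precision.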
Functional Generalized Structured Component Analysis.
Suk, Hye Won; Hwang, Heungsun
2016-12-01
An extension of Generalized Structured Component Analysis (GSCA), called Functional GSCA, is proposed to analyze functional data that are considered to arise from an underlying smooth curve varying over time or other continua. GSCA has been geared for the analysis of multivariate data. Accordingly, it cannot deal with functional data that often involve different measurement occasions across participants and a large number of measurement occasions that exceed the number of participants. Functional GSCA addresses these issues by integrating GSCA with spline basis function expansions that represent infinite-dimensional curves onto a finite-dimensional space. For parameter estimation, functional GSCA minimizes a penalized least squares criterion by using an alternating penalized least squares estimation algorithm. The usefulness of functional GSCA is illustrated with gait data.
Feng, Lei; Zhang, Yugui
2017-08-01
Dispersion analysis is an important part of in-seam seismic data processing, and the calculation accuracy of the dispersion curve directly influences pickup errors of channel wave travel time. To extract an accurate channel wave dispersion curve from in-seam seismic two-component signals, we proposed a time-frequency analysis method based on single-trace signal processing; in addition, we formulated a dispersion calculation equation, based on S-transform, with a freely adjusted filter window width. To unify the azimuth of seismic wave propagation received by a two-component geophone, the original in-seam seismic data undergoes coordinate rotation. The rotation angle can be calculated based on P-wave characteristics, with high energy in the wave propagation direction and weak energy in the vertical direction. With this angle acquisition, a two-component signal can be converted to horizontal and vertical directions. Because Love channel waves have a particle vibration track perpendicular to the wave propagation direction, the signal in the horizontal and vertical directions is mainly Love channel waves. More accurate dispersion characters of Love channel waves can be extracted after the coordinate rotation of two-component signals.
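The coordinate-rotation step described above, estimating the propagation azimuth from the P-wave particle motion (high energy along the propagation direction, weak energy perpendicular to it) and rotating the two components accordingly, might be sketched as follows. This is a simplified assumption using covariance-based polarization analysis, not the authors' exact procedure:

```python
import numpy as np

def estimate_rotation_angle(x, y):
    """Estimate the propagation azimuth from two-component records:
    the direction maximizing energy is the dominant eigenvector of the
    2x2 covariance matrix of the traces (P-wave polarization)."""
    C = np.cov(np.vstack([x, y]))
    vals, vecs = np.linalg.eigh(C)
    principal = vecs[:, np.argmax(vals)]
    return np.arctan2(principal[1], principal[0])

def rotate_two_component(x, y, theta):
    """Rotate the two components by theta so one trace lies along the
    propagation direction (radial) and the other perpendicular to it
    (transverse), where the Love channel wave is concentrated."""
    radial = np.cos(theta) * x + np.sin(theta) * y
    transverse = -np.sin(theta) * x + np.cos(theta) * y
    return radial, transverse
```

The covariance eigenvector is determined only up to sign, i.e. up to a 180-degree ambiguity in azimuth, which does not affect the separation of radial and transverse energy.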
Interpretable functional principal component analysis.
Lin, Zhenhua; Wang, Liangliang; Cao, Jiguo
2016-09-01
Functional principal component analysis (FPCA) is a popular approach to explore major sources of variation in a sample of random curves. These major sources of variation are represented by functional principal components (FPCs). The intervals where the values of FPCs are significant are interpreted as where sample curves have major variations. However, these intervals are often hard for naïve users to identify, because of the vague definition of "significant values". In this article, we develop a novel penalty-based method to derive FPCs that are only nonzero precisely in the intervals where the values of FPCs are significant, whence the derived FPCs possess better interpretability than the FPCs derived from existing methods. To compute the proposed FPCs, we devise an efficient algorithm based on projection deflation techniques. We show that the proposed interpretable FPCs are strongly consistent and asymptotically normal under mild conditions. Simulation studies confirm that with a competitive performance in explaining variations of sample curves, the proposed FPCs are more interpretable than the traditional counterparts. This advantage is demonstrated by analyzing two real datasets, namely, electroencephalography data and Canadian weather data. © 2015, The International Biometric Society.
On Bayesian Principal Component Analysis
Šmídl, Václav; Quinn, A.
2007-01-01
Vol. 51, No. 9 (2007), pp. 4101-4123. ISSN 0167-9473. R&D Projects: GA MŠk(CZ) 1M0572. Institutional research plan: CEZ:AV0Z10750506. Keywords: Principal component analysis (PCA); Variational Bayes (VB); von Mises-Fisher distribution. Subject RIV: BC - Control Systems Theory. Impact factor: 1.029, year: 2007. http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V8V-4MYD60N-6&_user=10&_coverDate=05%2F15%2F2007&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=b8ea629d48df926fe18f9e5724c9003a
Structural analysis of nuclear components
Ikonen, K.; Hyppoenen, P.; Mikkola, T.; Noro, H.; Raiko, H.; Salminen, P.; Talja, H.
1983-05-01
The report describes the activities accomplished in the project 'Structural Analysis Project of Nuclear Power Plant Components' during the years 1974-1982 in the Nuclear Engineering Laboratory at the Technical Research Centre of Finland. The objective of the project has been to develop Finnish expertise in structural mechanics related to nuclear engineering. The report describes the starting point of the research work, the organization of the project and the research activities in various subareas. Further, the work done with computer codes is described, as well as the problems to which the developed expertise has been applied. Finally, the diploma works, publications and work reports, which are mainly in Finnish, are listed to give a view of the content of the project. (author)
Principal component regression analysis with SPSS.
Liu, R X; Kuang, J; Gong, Q; Hou, X L
2003-06-01
The paper introduces the indices for multicollinearity diagnosis, the basic principle of principal component regression, and the method for determining the 'best' equation. An example describes how to carry out principal component regression analysis with SPSS 10.0, including all calculation steps of the principal component regression and the operations of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity. A simplified, faster and accurate statistical result is reached through principal component regression analysis with SPSS.
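Although the paper works entirely in SPSS, the principle of principal component regression can be sketched in a few lines of Python; the function and variable names here are illustrative assumptions, not the paper's procedure:

```python
import numpy as np

def pcr(X, y, k):
    """Principal component regression sketch: regress y on the first k
    principal component scores of X, then map the coefficients back to
    the original predictors. Using k < rank(X) discards the small-variance
    directions responsible for multicollinearity."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    # PCA of the centered predictors via SVD
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:k].T                 # component scores
    gamma, *_ = np.linalg.lstsq(scores, yc, rcond=None)
    beta = Vt[:k].T @ gamma                # back-transform to X space
    intercept = y.mean() - X.mean(axis=0) @ beta
    return beta, intercept
```

When k equals the number of predictors (full rank), PCR coincides with ordinary least squares; shrinking k trades a little bias for stability under collinearity.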
Model reduction by weighted Component Cost Analysis
Kim, Jae H.; Skelton, Robert E.
1990-01-01
Component Cost Analysis considers any given system driven by a white noise process as an interconnection of different components, and assigns a metric called 'component cost' to each component. These component costs measure the contribution of each component to a predefined quadratic cost function. A reduced-order model of the given system may be obtained by deleting those components that have the smallest component costs. The theory of Component Cost Analysis is extended to include finite-bandwidth colored noises. The results also apply when actuators have dynamics of their own. Closed-form analytical expressions of component costs are also derived for a mechanical system described by its modal data. This is very useful to compute the modal costs of very high order systems. A numerical example for MINIMAST system is presented.
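A minimal sketch of the idea, assuming a stable continuous-time system driven by unit-intensity white noise and a quadratic state cost (the function name and the Kronecker-based Lyapunov solve are illustrative choices, not the paper's):

```python
import numpy as np

def component_costs(A, B, Q):
    """Component Cost Analysis sketch for dx/dt = A x + B w with unit
    white noise w and cost J = E[x^T Q x]. Solves the Lyapunov equation
    A X + X A^T + B B^T = 0 for the steady-state covariance X, then
    assigns state component i the cost contribution (Q X)_ii; the
    components with the smallest costs are candidates for deletion."""
    n = A.shape[0]
    # Vectorize the Lyapunov equation: (I(x)A + A(x)I) vec(X) = -vec(B B^T)
    M = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    x = np.linalg.solve(M, -(B @ B.T).flatten('F'))
    X = x.reshape((n, n), order='F')
    return np.diag(Q @ X)
```

The component costs sum to the total cost trace(Q X), so they decompose J exactly across components.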
C2 Network Analysis: Insights into Coordination & Understanding
Hansberger, Jeffrey T; Schreiber, Craig; Spain, Randall D
2008-01-01
...) workload management. This paper will address recent efforts, tools, and approaches on measuring and analyzing two of these distributed cognitive attributes through network analysis, coordination across agents and mental models...
Fusion-component lifetime analysis
Mattas, R.F.
1982-09-01
A one-dimensional computer code has been developed to examine the lifetime of first-wall and impurity-control components. The code incorporates the operating and design parameters, the material characteristics, and the appropriate failure criteria for the individual components. The major emphasis of the modeling effort has been to calculate the temperature-stress-strain-radiation effects history of a component so that the synergistic effects between sputtering erosion, swelling, creep, fatigue, and crack growth can be examined. The general forms of the property equations are the same for all materials in order to provide the greatest flexibility for materials selection in the code. The individual coefficients within the equations are different for each material. The code is capable of determining the behavior of a plate, composed of either a single or dual material structure, that is either totally constrained or constrained from bending but not from expansion. The code has been utilized to analyze the first walls for FED/INTOR and DEMO and to analyze the limiter for FED/INTOR.
Component of the risk analysis
Martinez, I.; Campon, G.
2013-01-01
The PowerPoint presentation reviews topics such as risk analysis (Codex), risk management, the preliminary activities of the risk manager, the relationship between government and industry, microbiological hazards, and risk communication.
Gene set analysis using variance component tests.
Huang, Yen-Tsung; Lin, Xihong
2013-06-28
Gene set analyses have become increasingly important in genomic research, as many complex diseases are contributed jointly by alterations of numerous genes. Genes often coordinate together as a functional repertoire, e.g., a biological pathway/network and are highly correlated. However, most of the existing gene set analysis methods do not fully account for the correlation among the genes. Here we propose to tackle this important feature of a gene set to improve statistical power in gene set analyses. We propose to model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects by assuming a common distribution for regression coefficients in multivariate linear regression models, and calculate the p-values using permutation and a scaled chi-square approximation. We show using simulations that type I error is protected under different choices of working covariance matrices and power is improved as the working covariance approaches the true covariance. The global test is a special case of TEGS when correlation among genes in a gene set is ignored. Using both simulation data and a published diabetes dataset, we show that our test outperforms the commonly used approaches, the global test and gene set enrichment analysis (GSEA). We develop a gene set analyses method (TEGS) under the multivariate regression framework, which directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and global test in both simulation and a diabetes microarray data.
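The permutation step described above can be illustrated with a deliberately simplified score. This is not the TEGS variance component statistic; the score, function name and parameters below are assumptions for illustration only:

```python
import numpy as np

def permutation_gene_set_test(E, x, n_perm=999, seed=0):
    """Toy gene set test: score the association of a binary exposure x
    with the expression matrix E (samples x genes) as the sum of squared
    per-gene group-mean differences, and obtain a p-value by permuting
    the sample labels."""
    rng = np.random.default_rng(seed)

    def score(xv):
        d = E[xv == 1].mean(axis=0) - E[xv == 0].mean(axis=0)
        return float(np.sum(d ** 2))

    obs = score(x)
    perms = [score(rng.permutation(x)) for _ in range(n_perm)]
    # Permutation p-value with the standard +1 correction
    p = (1 + sum(s >= obs for s in perms)) / (n_perm + 1)
    return obs, p
```

Unlike this toy score, TEGS models the inter-gene correlation explicitly through a working covariance matrix, which is what drives its power advantage.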
1993-06-30
...judgments were performed under the added cognitive load of the coordination task. Method. Subjects. A total of eighty subjects were tested, with one
Quantum functional analysis non-coordinate approach
Helemskii, A Ya
2010-01-01
This book contains a systematic presentation of quantum functional analysis, a mathematical subject also known as operator space theory. Created in the 1980s, it nowadays is one of the most prominent areas of functional analysis, both as a field of active research and as a source of numerous important applications. The approach taken in this book differs significantly from the standard approach used in studying operator space theory. Instead of viewing "quantized coefficients" as matrices in a fixed basis, in this book they are interpreted as finite rank operators in a fixed Hilbert space. This allows the author to replace matrix computations with algebraic techniques of module theory and tensor products, thus achieving a more invariant approach to the subject. The book can be used by graduate students and research mathematicians interested in functional analysis and related areas of mathematics and mathematical physics. Prerequisites include standard courses in abstract algebra and functional analysis.
Edinger, Janick; Pai, Dinesh K; Spering, Miriam
2017-01-01
The neural control of pursuit eye movements to visual textures that simultaneously translate and rotate has largely been neglected. Here we propose that pursuit of such targets-texture pursuit-is a fully three-dimensional task that utilizes all three degrees of freedom of the eye, including torsion. Head-fixed healthy human adults (n = 8) tracked a translating and rotating random dot pattern, shown on a computer monitor, with their eyes. Horizontal, vertical, and torsional eye positions were recorded with a head-mounted eye tracker. The torsional component of pursuit is a function of the rotation of the texture, aligned with its visual properties. We observed distinct behaviors between those trials in which stimulus rotation was in the same direction as that of a rolling ball ("natural") in comparison to those with the opposite rotation ("unnatural"): Natural rotation enhanced and unnatural rotation reversed torsional velocity during pursuit, as compared to torsion triggered by a nonrotating random dot pattern. Natural rotation also triggered pursuit with a higher horizontal velocity gain and fewer and smaller corrective saccades. Furthermore, we show that horizontal corrective saccades are synchronized with torsional corrective saccades, indicating temporal coupling of horizontal and torsional saccade control. Pursuit eye movements have a torsional component that depends on the visual stimulus. Horizontal and torsional eye movements are separated in the motor periphery. Our findings suggest that translational and rotational motion signals might be coordinated in descending pursuit pathways.
COPD phenotype description using principal components analysis
Roy, Kay; Smith, Jacky; Kolsum, Umme
2009-01-01
BACKGROUND: Airway inflammation in COPD can be measured using biomarkers such as induced sputum and Fe(NO). This study set out to explore the heterogeneity of COPD using biomarkers of airway and systemic inflammation and pulmonary function by principal components analysis (PCA). SUBJECTS...... AND METHODS: In 127 COPD patients (mean FEV1 61%), pulmonary function, Fe(NO), plasma CRP and TNF-alpha, sputum differential cell counts and sputum IL8 (pg/ml) were measured. Principal components analysis as well as multivariate analysis was performed. RESULTS: PCA identified four main components (% variance...... associations between the variables within components 1 and 2. CONCLUSION: COPD is a multi dimensional disease. Unrelated components of disease were identified, including neutrophilic airway inflammation which was associated with systemic inflammation, and sputum eosinophils which were related to increased Fe...
Integrating Data Transformation in Principal Components Analysis
Maadooliat, Mehdi; Huang, Jianhua Z.; Hu, Jianhua
2015-01-01
Principal component analysis (PCA) is a popular dimension reduction method to reduce the complexity and obtain the informative aspects of high-dimensional datasets. When the data distribution is skewed, data transformation is commonly used prior
NEPR Principle Component Analysis - NOAA TIFF Image
National Oceanic and Atmospheric Administration, Department of Commerce — This GeoTiff is a representation of seafloor topography in Northeast Puerto Rico derived from a bathymetry model with a principle component analysis (PCA). The area...
Structured Performance Analysis for Component Based Systems
Salmi , N.; Moreaux , Patrice; Ioualalen , M.
2012-01-01
The Component Based System (CBS) paradigm is now widely used to design software systems. In addition, performance and behavioural analysis remains a required step for the design and construction of efficient systems. This is especially the case for CBS, which involve interconnected components running concurrent processes. This paper proposes a compositional method for modeling and structured performance analysis of CBS. Modeling is based on Stochastic Well-formed...
Constrained principal component analysis and related techniques
Takane, Yoshio
2013-01-01
In multivariate data analysis, regression techniques predict one set of variables from another while principal component analysis (PCA) finds a subspace of minimal dimensionality that captures the largest variability in the data. How can regression analysis and PCA be combined in a beneficial way? Why and when is it a good idea to combine them? What kind of benefits are we getting from them? Addressing these questions, Constrained Principal Component Analysis and Related Techniques shows how constrained PCA (CPCA) offers a unified framework for these approaches.The book begins with four concre
Time Series Analysis of 3D Coordinates Using Nonstochastic Observations
Velsink, H.
2016-01-01
Adjustment and testing of a combination of stochastic and nonstochastic observations is applied to the deformation analysis of a time series of 3D coordinates. Nonstochastic observations are constant values that are treated as if they were observations. They are used to formulate constraints on
Time Series Analysis of 3D Coordinates Using Nonstochastic Observations
Hiddo Velsink
2016-01-01
Adjustment and testing of a combination of stochastic and nonstochastic observations is applied to the deformation analysis of a time series of 3D coordinates. Nonstochastic observations are constant values that are treated as if they were observations. They are used to ...
Analysis Method for Integrating Components of Product
Choi, Jun Ho [Inzest Co. Ltd, Seoul (Korea, Republic of)]; Lee, Kun Sang [Kookmin Univ., Seoul (Korea, Republic of)]
2017-04-15
This paper presents methods for integrating the parts constituting a product. A new relation function concept and its structure are introduced to analyze the relationships of component parts. This relation function carries three types of information, which can be used to establish a relation function structure. The relation function structure of the analysis criteria was established to analyze and present the data. The priority components determined by the analysis criteria can be integrated. The analysis criteria were divided based on their number and orientation, as well as their direct or indirect character. This paper presents a design algorithm for component integration. The algorithm was applied to actual products, and the components inside the products were integrated. The proposed algorithm was then used in a study to improve brake discs for bicycles. As a result, an improved product consistent with the relation function structure was actually created.
Component evaluation testing and analysis algorithms.
Hart, Darren M.; Merchant, Bion John
2011-10-01
The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.
Principal component analysis of NEXAFS spectra for molybdenum speciation in hydrotreating catalysts
Faro Junior, Arnaldo da C.; Rodrigues, Victor de O.; Eon, Jean-G.; Rocha, Angela S.
2010-01-01
Bulk and supported molybdenum-based catalysts, modified by nickel, phosphorus or tungsten, were studied by NEXAFS spectroscopy at the Mo L-III and L-II edges. The techniques of principal component analysis (PCA) together with linear combination analysis (LCA) allowed the detection and quantification of molybdenum atoms in two different coordination states in the oxide form of the catalysts, namely tetrahedral and octahedral coordination.
Look Together: Analyzing Gaze Coordination with Epistemic Network Analysis
Sean Andrist
2015-07-01
When conversing and collaborating in everyday situations, people naturally and interactively align their behaviors with each other across various communication channels, including speech, gesture, posture, and gaze. Having access to a partner's referential gaze behavior has been shown to be particularly important in achieving collaborative outcomes, but the process by which people's gaze behaviors unfold over the course of an interaction and become tightly coordinated is not well understood. In this paper, we present work to develop a deeper and more nuanced understanding of coordinated referential gaze in collaborating dyads. We recruited 13 dyads to participate in a collaborative sandwich-making task and used dual mobile eye tracking to synchronously record each participant's gaze behavior. We used a relatively new analysis technique, epistemic network analysis, to jointly model the gaze behaviors of both conversational participants. In this analysis, network nodes represent gaze targets for each participant, and edge strengths convey the likelihood of simultaneous gaze to the connected target nodes during a given time-slice. We divided collaborative task sequences into discrete phases to examine how the networks of shared gaze evolved over longer time windows. We conducted three separate analyses of the data to reveal (1) properties and patterns of how gaze coordination unfolds throughout an interaction sequence, (2) optimal time lags of gaze alignment within a dyad at different phases of the interaction, and (3) differences in gaze coordination patterns for interaction sequences that lead to breakdowns and repairs. In addition to contributing to the growing body of knowledge on the coordination of gaze behaviors in joint activities, this work has implications for the design of future technologies that engage in situated interactions with human users.
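The edge-strength idea described above can be sketched in a few lines: given two time-aligned gaze streams, the strength of an edge between a pair of targets is the fraction of time-slices in which the partners simultaneously look at those targets. This is an illustrative sketch, not the epistemic network analysis toolkit's API; the gaze labels and streams below are invented:

```python
from collections import Counter

def gaze_edge_strengths(gaze_a, gaze_b):
    """Estimate edge strengths between gaze targets: for two aligned gaze
    streams (one target label per time-slice), return the fraction of
    time-slices with each simultaneous (target_a, target_b) pair."""
    assert len(gaze_a) == len(gaze_b)
    pair_counts = Counter(zip(gaze_a, gaze_b))
    n = len(gaze_a)
    return {pair: count / n for pair, count in pair_counts.items()}

# Two hypothetical gaze streams over 8 time-slices of a joint task
a = ["bread", "bread", "knife", "partner", "bread", "knife", "knife", "bread"]
b = ["bread", "knife", "knife", "partner", "bread", "knife", "bread", "bread"]
edges = gaze_edge_strengths(a, b)
print(edges[("bread", "bread")])  # → 0.375 (joint gaze at bread in 3 of 8 slices)
```

Computing such strengths per task phase, as the paper does, would then show how the network of shared gaze evolves over the interaction.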
Common Data Format (CDF) and Coordinated Data Analysis Web (CDAWeb)
Candey, Robert M.
2010-01-01
The Coordinated Data Analysis Web (CDAWeb) data browsing system provides plotting, listing and open access via FTP, HTTP, and web services (REST, SOAP, OPeNDAP) for data from most NASA Heliophysics missions and is heavily used by the community. Combining data from many instruments and missions enables broad research analysis and correlation and coordination with other experiments and missions. Crucial to its effectiveness is the use of a standard self-describing data format, in this case the Common Data Format (CDF), also developed at the Space Physics Data Facility, and the use of metadata standards (easily edited with SKTeditor). CDAWeb is based on a set of IDL routines, CDAWlib. The CDF project also maintains software and services for translating between many standard formats (CDF, netCDF, HDF, FITS, XML).
Principal components analysis in clinical studies.
Zhang, Zhongheng; Castelló, Adela
2017-09-01
In multivariate analysis, independent variables are usually correlated with each other, which can introduce multicollinearity in the regression models. One approach to solve this problem is to apply principal components analysis (PCA) over these variables. This method uses an orthogonal transformation to represent sets of potentially correlated variables with principal components (PC) that are linearly uncorrelated. PCs are ordered so that the first PC has the largest possible variance, and only some components are selected to represent the correlated variables. As a result, the dimension of the variable space is reduced. This tutorial illustrates how to perform PCA in the R environment; the example is a simulated dataset in which two PCs are responsible for the majority of the variance in the data. Furthermore, the visualization of PCA is highlighted.
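As a minimal illustration of the orthogonal transformation the tutorial describes, the following pure-Python sketch (illustrative only; a real analysis would use R's prcomp or a linear algebra library) performs PCA on two correlated variables via the closed-form eigendecomposition of their 2x2 covariance matrix:

```python
import math, random

def pca_2d(xs, ys):
    """PCA for two correlated variables via eigendecomposition of the
    2x2 sample covariance matrix (closed form; illustrative sketch)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    xc = [x - mx for x in xs]
    yc = [y - my for y in ys]
    # Sample covariance matrix [[sxx, sxy], [sxy, syy]]
    sxx = sum(v * v for v in xc) / (n - 1)
    syy = sum(v * v for v in yc) / (n - 1)
    sxy = sum(a * b for a, b in zip(xc, yc)) / (n - 1)
    # Eigenvalues of a symmetric 2x2 matrix (quadratic formula)
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = tr / 2 + disc, tr / 2 - disc   # l1 >= l2: PC variances
    # Unit eigenvector for the leading eigenvalue
    if abs(sxy) > 1e-12:
        v1 = (l1 - syy, sxy)
    else:
        v1 = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(v1[0], v1[1])
    v1 = (v1[0] / norm, v1[1] / norm)
    v2 = (-v1[1], v1[0])                    # orthogonal second axis
    # Project centred data onto the two principal axes
    pc1 = [v1[0] * a + v1[1] * b for a, b in zip(xc, yc)]
    pc2 = [v2[0] * a + v2[1] * b for a, b in zip(xc, yc)]
    return l1, l2, pc1, pc2

random.seed(0)
x = [random.gauss(0, 1) for _ in range(500)]
y = [0.8 * v + random.gauss(0, 0.3) for v in x]   # strongly correlated pair
l1, l2, pc1, pc2 = pca_2d(x, y)
cross = sum(a * b for a, b in zip(pc1, pc2)) / (len(pc1) - 1)
print(l1 >= l2, abs(cross) < 1e-9)
```

The projected components pc1 and pc2 are linearly uncorrelated, and their variances are the eigenvalues, ordered so that the first PC carries the largest share of the variance, exactly the property the tutorial exploits against multicollinearity.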
Experimental and principal component analysis of waste ...
The present study is aimed at determining through principal component analysis the most important variables affecting bacterial degradation in ponds. Data were collected from literature. In addition, samples were also collected from the waste stabilization ponds at the University of Nigeria, Nsukka and analyzed to ...
Principal Component Analysis as an Efficient Performance ...
This paper uses the principal component analysis (PCA) to examine the possibility of using few explanatory variables (X's) to explain the variation in Y. It applied PCA to assess the performance of students in Abia State Polytechnic, Aba, Nigeria. This was done by estimating the coefficients of eight explanatory variables in a ...
Independent component analysis for understanding multimedia content
Kolenda, Thomas; Hansen, Lars Kai; Larsen, Jan
2002-01-01
Independent component analysis of combined text and image data from Web pages has potential for search and retrieval applications by providing more meaningful and context dependent content. It is demonstrated that ICA of combined text and image features has a synergistic effect, i.e., the retrieval...
Impact of Inter- and Intra-Regional Coordination in Markets With a Large Renewable Component
Delikaraoglou, Stefanos; Morales González, Juan Miguel; Pinson, Pierre
2016-01-01
The establishment of the single European day-ahead market has accomplished a crucial step towards the spatial integration of the European power system. However, this new arrangement does not consider any intra-regional coordination of day-ahead and balancing markets and thus may become counterproductive or inefficient under uncertain supply, e.g., from weather-driven renewable power generation. In the absence of a specific target model for the common balancing market in Europe, we introduce a framework to compare different coordination schemes and market organizations. The proposed models are formulated as stochastic equilibrium problems and compared against an optimal market setup. The simulation results reveal significant efficiency loss in case of partial coordination and diversity of market structure among regional power systems.
Normal co-ordinate analysis of 1,8-dibromooctane
Singh, Devinder; Jaggi, Neena; Singh, Nafa
2010-02-01
The organic compound 1,8-dibromooctane (1,8-DBO) exists in the liquid phase at ambient temperatures and has versatile synthetic applications. In its liquid phase, 1,8-DBO is expected to exist in four most probable conformations, with all its carbon atoms in the same plane, having symmetries C2h, Ci, C2 and C1. In the present study, a detailed vibrational analysis in terms of the assignment of the Fourier transform infrared (FT-IR) and Raman bands of this molecule using normal co-ordinate calculations has been carried out. A systematic set of symmetry co-ordinates has been constructed for this molecule, and normal co-ordinate analysis is carried out using the computer program MOLVIB. The force field transferred from previously studied shorter-chain bromoalkanes is refined so as to fit the observed infrared and Raman frequencies to the calculated ones. The potential energy distribution (PED) has also been calculated for each vibrational mode of the molecule for the assumed conformations.
Probabilistic Principal Component Analysis for Metabolomic Data.
Nyamundanda, Gift
2010-11-23
Background: Data from metabolomic studies are typically complex and high-dimensional. Principal component analysis (PCA) is currently the most widely used statistical technique for analyzing metabolomic data. However, PCA is limited by the fact that it is not based on a statistical model. Results: Here, probabilistic principal component analysis (PPCA), which addresses some of the limitations of PCA, is reviewed and extended. A novel extension of PPCA, called probabilistic principal component and covariates analysis (PPCCA), is introduced, which provides a flexible approach to jointly model metabolomic data and additional covariate information. The use of a mixture of PPCA models for discovering the number of inherent groups in metabolomic data is demonstrated. The jackknife technique is employed to construct confidence intervals for estimated model parameters throughout. The optimal number of principal components is determined through the use of the Bayesian Information Criterion model selection tool, which is modified to address the high dimensionality of the data. Conclusions: The methods presented are illustrated through an application to metabolomic data sets. Jointly modeling metabolomic data and covariates was successfully achieved and has the potential to provide deeper insight into the underlying data structure. Examination of confidence intervals for the model parameters, such as loadings, allows for principled and clear interpretation of the underlying data structure. A software package called MetabolAnalyze, freely available through the R statistical software, has been developed to facilitate implementation of the presented methods in the metabolomics field.
PCA: Principal Component Analysis for spectra modeling
Hurley, Peter D.; Oliver, Seb; Farrah, Duncan; Wang, Lingyu; Efstathiou, Andreas
2012-07-01
The mid-infrared spectra of ultraluminous infrared galaxies (ULIRGs) contain a variety of spectral features that can be used as diagnostics to characterize the spectra. However, such diagnostics are biased by our prior prejudices about the origin of the features. Moreover, by using only part of the spectrum they do not utilize the full information content of the spectra. Blind statistical techniques such as principal component analysis (PCA) consider the whole spectrum, find correlated features and separate them out into distinct components. This code, written in IDL, classifies principal components of IRS spectra to define a new classification scheme using 5D Gaussian mixture modelling. The five PCs and the average spectra for the four classifications used to classify objects are made available with the code.
BUSINESS PROCESS MANAGEMENT SYSTEMS TECHNOLOGY COMPONENTS ANALYSIS
Andrea Giovanni Spelta
2007-05-01
The information technology that supports the implementation of the business process management approach is called a Business Process Management System (BPMS). The main components of the BPMS solution framework are the process definition repository, process instances repository, transaction manager, connectors framework, process engine and middleware. In this paper we define and characterize the role and importance of the components of the BPMS framework. The research method adopted was the case study, through the analysis of the implementation of the BPMS solution in an insurance company called Chubb do Brasil. In the case study, the process "Manage Coinsured Events" is described and characterized, as well as the components of the BPMS solution adopted and implemented by Chubb do Brasil for managing this process.
ANOVA-principal component analysis and ANOVA-simultaneous component analysis: a comparison.
Zwanenburg, G.; Hoefsloot, H.C.J.; Westerhuis, J.A.; Jansen, J.J.; Smilde, A.K.
2011-01-01
ANOVA-simultaneous component analysis (ASCA) is a recently developed tool to analyze multivariate data. In this paper, we enhance the explorative capability of ASCA by introducing a projection of the observations on the principal component subspace to visualize the variation among the measurements.
Improvement of Binary Analysis Components in Automated Malware Analysis Framework
2017-02-21
AFRL-AFOSR-JP-TR-2017-0018. Keiji Takeda, Keio University. Final report; dates covered: 26 May 2015 to 25 Nov 2016. ... a framework to analyze malicious software (malware) with minimum human interaction. The system autonomously analyzes malware samples by analyzing the malware binary program ...
Fault tree analysis with multistate components
Caldarola, L.
1979-02-01
A general analytical theory has been developed which allows one to calculate the occurrence probability of the top event of a fault tree with multistate (more than two states) components. It is shown that, in order to correctly describe a system with multistate components, a special type of Boolean algebra is required. This is called 'Boolean algebra with restrictions on variables' and its basic rules are the same as those of the traditional Boolean algebra, with some additional restrictions on the variables. These restrictions are extensively discussed in the paper. Important features of the method are the identification of the complete base and of the smallest irredundant base of a Boolean function, which does not necessarily need to be coherent. It is shown that the identification of the complete base of a Boolean function requires the application of some algorithms which are not used in today's computer programmes for fault tree analysis. The problem of statistical dependence among primary components is discussed. The paper includes a small demonstrative example to illustrate the method. The example also includes statistically dependent components.
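For a small system, the quantity the theory computes, the occurrence probability of the top event over multistate components, can be illustrated by brute-force enumeration of all component-state combinations (the paper's Boolean machinery exists precisely to avoid this exponential enumeration; the component names, state sets and probabilities below are invented for illustration):

```python
from itertools import product

def top_event_probability(state_probs, top):
    """Exact top-event probability for independent multistate components
    by exhaustive enumeration. state_probs[i][s] is the probability that
    component i is in state s; top(states) says whether the top event occurs."""
    total = 0.0
    for combo in product(*[range(len(p)) for p in state_probs]):
        pr = 1.0
        for comp, state in enumerate(combo):
            pr *= state_probs[comp][state]
        if top(combo):
            total += pr
    return total

# Hypothetical 3-state valve (0=ok, 1=stuck-open, 2=stuck-closed)
# and a 2-state pump (0=ok, 1=failed)
probs = [[0.9, 0.06, 0.04], [0.95, 0.05]]
top = lambda s: s[0] == 2 or (s[0] == 1 and s[1] == 1)
print(round(top_event_probability(probs, top), 6))  # prints 0.043
```

This sketch assumes independent components; handling the statistically dependent components discussed in the paper would require joint state probabilities instead of the per-component product.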
Multilevel sparse functional principal component analysis.
Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S
2014-01-29
We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and the data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both the between- and within-subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations, the proposed method is able to discover dominating modes of variation and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.
Correlation analysis of respiratory signals by using parallel coordinate plots.
Saatci, Esra
2018-01-01
Understanding the relationships between respiratory signals, i.e. the airflow, the mouth pressure, the relative temperature and the relative humidity during breathing, may improve measurement methods in respiratory mechanics and sensor design, or open several possible applications in the analysis of respiratory disorders. Therefore, the main objective of this study was to propose a new combination of methods to determine the relationship between respiratory signals as multidimensional data. In order to reveal the coupling between the processes, two very different methods were used: the well-known statistical correlation analysis (i.e. Pearson's correlation and cross-correlation coefficients) and parallel coordinate plots (PCPs). Curve bundling with the number of intersections for the correlation analysis, a Least Mean Square Time Delay Estimator (LMS-TDE) for point-delay detection, and visual metrics for the recognition of visual structures were proposed and utilized in the PCP. The number of intersections increased when the correlation coefficient changed from high positive to high negative correlation between the respiratory signals, especially if the whole breath was processed. LMS-TDE coefficients plotted in the PCP indicated point-delay results well matched to the findings of the correlation analysis. Visual inspection of the PCP by visual metrics showed ranges, dispersions, entropy comparisons, and linear and sinusoidal-like relationships between the respiratory signals. It is demonstrated that basic correlation analysis together with parallel coordinate plots perceptually motivates the visual metrics in the display and thus can be considered an aid to user analysis by providing meaningful views of the data.
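As a minimal illustration of the correlation side of such an analysis, the following pure-Python sketch (the signals are synthetic stand-ins, not the study's respiratory recordings) computes Pearson's correlation coefficient for an in-phase and an inverted signal pair:

```python
import math

def pearson(a, b):
    """Pearson product-moment correlation coefficient of two
    equal-length sequences (illustrative sketch)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    da = [x - ma for x in a]
    db = [y - mb for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den

# Airflow-like sinusoid with an in-phase and an inverted "pressure" trace
flow = [math.sin(2 * math.pi * t / 50) for t in range(200)]
in_phase = [2.0 * v + 0.5 for v in flow]   # linear transform: r = +1
inverted = [-v for v in flow]              # sign flip: r = -1
print(round(pearson(flow, in_phase), 6), round(pearson(flow, inverted), 6))
```

In a PCP view of the same data, the inverted (negatively correlated) pair would produce many crossing line segments between adjacent axes, which is the intersection-counting effect the study reports.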
A Genealogical Interpretation of Principal Components Analysis
McVean, Gil
2009-01-01
Principal components analysis, PCA, is a statistical method commonly used in population genetics to identify structure in the distribution of genetic variation across geographical location and ethnic background. However, while the method is often used to inform about historical demographic processes, little is known about the relationship between fundamental demographic parameters and the projection of samples onto the primary axes. Here I show that for SNP data the projection of samples onto the principal components can be obtained directly from considering the average coalescent times between pairs of haploid genomes. The result provides a framework for interpreting PCA projections in terms of underlying processes, including migration, geographical isolation, and admixture. I also demonstrate a link between PCA and Wright's fst and show that SNP ascertainment has a largely simple and predictable effect on the projection of samples. Using examples from human genetics, I discuss the application of these results to empirical data and the implications for inference. PMID:19834557
Radar fall detection using principal component analysis
Jokanovic, Branka; Amin, Moeness; Ahmad, Fauzia; Boashash, Boualem
2016-05-01
Falls are a major cause of fatal and nonfatal injuries in people aged 65 years and older. Radar has the potential to become one of the leading technologies for fall detection, thereby enabling the elderly to live independently. Existing techniques for fall detection using radar are based on manual feature extraction and require significant parameter tuning in order to provide successful detections. In this paper, we employ principal component analysis for fall detection, wherein eigen images of observed motions are employed for classification. Using real data, we demonstrate that the PCA based technique provides performance improvement over the conventional feature extraction methods.
Independent Component Analysis in Multimedia Modeling
Larsen, Jan
2003-01-01
Modeling of multimedia and multimodal data becomes increasingly important with the digitalization of the world. The objective of this paper is to demonstrate the potential of independent component analysis and blind source separation methods for modeling and understanding of multimedia data, which largely refers to text, images/video, audio and combinations of such data. We review a number of applications within single and combined media with the hope that this might provide inspiration for further research in this area. Finally, we provide a detailed presentation of our own recent work on modeling...
Backhouse, Amy; Richards, David A; McCabe, Rose; Watkins, Ross; Dickens, Chris
2017-11-22
Interventions aiming to coordinate services for the community-based dementia population vary in components, organisation and implementation. In this review we aimed to investigate the views of stakeholders on the key components of community-based interventions coordinating care in dementia. We searched four databases from inception to June 2015: Medline, The Cochrane Library, EMBASE and PsycINFO. This was aided by a search of four grey literature databases, and backward and forward citation tracking of included papers. Title and abstract screening was followed by a full text screen by two independent reviewers, and quality was assessed using the CASP appraisal tool. We then conducted thematic synthesis on extracted data. A total of seven papers from five independent studies were included in the review, and encompassed the views of over 100 participants from three countries. Through thematic synthesis we identified 32 initial codes that were grouped into 5 second-order themes: (1) case manager had four associated codes and described preferences for the case manager's personal and professional attributes, including a sound knowledge of dementia and of the availability of local services; (2) communication had five associated codes and emphasized the importance stakeholders placed on multichannel communication with service users, as well as between multidisciplinary teams and across organisations; (3) intervention had 11 associated codes which focused primarily on the practicalities of implementation, such as the contact type and frequency between case managers and service users, and the importance of case manager training and service evaluation; (4) resources had five associated codes which outlined stakeholder views on the required resources for coordinating interventions and potential overlap with existing resources, as well as issues arising when available resources do not meet those required for successful implementation; and (5) support had seven associated codes that ...
Analysis of spiral components in 16 galaxies
Considere, S.; Athanassoula, E.
1988-01-01
A Fourier analysis of the intensity distributions in the planes of 16 spiral galaxies of morphological types from 1 to 7 is performed. The galaxies processed are NGC 300, 598, 628, 2403, 2841, 3031, 3198, 3344, 5033, 5055, 5194, 5247, 6946, 7096, 7217, and 7331. The method, mathematically based upon a decomposition of a distribution into a superposition of individual logarithmic spiral components, is first used to determine for each galaxy the position angle PA and the inclination ω of the galaxy plane onto the sky plane. Our results are discussed and are in good agreement with those obtained from the usual methods in the literature. The decomposition of the deprojected galaxies into individual spiral components reveals that the two-armed component is dominant everywhere. Our pitch angles are then compared with previously published ones, and their quality is checked by drawing each individual logarithmic spiral on the corresponding deprojected galaxy image. Finally, the surface intensities for angular periodicities of interest are calculated. A few of the most important ones are used to construct a composite image that represents well the main spiral features observed in the deprojected galaxies.
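The decomposition into logarithmic spiral components amounts to projecting the deprojected distribution onto basis functions of the form exp(i(mθ + p ln r)). The following pure-Python sketch (not the authors' code; the synthetic spiral and its pitch parameter are invented for illustration) builds a two-armed logarithmic spiral and shows that its m = 2 amplitude dominates:

```python
import cmath, math

def spiral_amplitude(points, m, p):
    """Amplitude of the logarithmic-spiral component with angular frequency m
    and radial wavenumber p for points (r, theta):
    A(m, p) = sum_j exp(-i * (m*theta_j + p*ln(r_j)))."""
    return sum(cmath.exp(-1j * (m * th + p * math.log(r))) for r, th in points)

# Synthetic two-armed logarithmic spiral: m*theta + p*ln(r) = const on each arm
p_true, m_true = 5.0, 2
points = []
for k in range(400):
    u = 0.01 + 0.005 * k                        # u = ln(r)
    th = (1.0 - p_true * u) / m_true            # first arm
    points.append((math.exp(u), th))
    points.append((math.exp(u), th + math.pi))  # second arm, offset by pi

amps = {m: abs(spiral_amplitude(points, m, p_true)) for m in (1, 2, 3)}
print(max(amps, key=amps.get))  # prints 2
```

On the m = 2 component every point contributes with the same phase, so the terms add coherently, while for odd m the two arms (offset by π) cancel exactly, mirroring the dominance of the two-armed component reported for the real galaxies.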
Structural analysis of NPP components and structures
Saarenheimo, A.; Keinaenen, H.; Talja, H.
1998-01-01
Capabilities for effective structural integrity assessment have been created and extended in several important cases. The applications presented in this paper deal with pressurised thermal shock (PTS) loading and severe dynamic loading cases of containment, reinforced concrete structures and piping components. Hydrogen combustion within the containment is considered in some severe accident scenarios. Can a steel containment withstand the postulated hydrogen detonation loads and still maintain its integrity? This is the topic of Chapter 2. The following Chapter 3 deals with a reinforced concrete floor subjected to jet impingement caused by a postulated rupture of a nearby high-energy pipe, and Chapter 4 deals with the dynamic loading resistance of pipelines under postulated pressure transients due to water hammer. The reliability of the structural integrity analysis methods and capabilities which have been developed for application in NPP component assessment shall be evaluated and verified. The resources available within the RATU2 programme alone do not allow performing the large-scale experiments needed for that purpose. Thus, the verification of the PTS analysis capabilities has been conducted by participation in international co-operative programmes. Participation in the European Network for Evaluating Steel Components (NESC) is the topic of a parallel paper in this symposium. The results obtained in two other international programmes are summarised in Chapters 5 and 6 of this paper, where PTS tests with a model vessel and a benchmark assessment of RPV nozzle integrity are described.
Reformulating Component Identification as Document Analysis Problem
Gross, H.G.; Lormans, M.; Zhou, J.
2007-01-01
One of the first steps of component procurement is the identification of required component features in large repositories of existing components. On the highest level of abstraction, component requirements as well as component descriptions are usually written in natural language. Therefore, we can
Nonlinear principal component analysis and its applications
Mori, Yuichi; Makino, Naomichi
2016-01-01
This book expounds the principle and related applications of nonlinear principal component analysis (PCA), a useful method for analyzing data with mixed measurement levels. In the part dealing with the principle, after a brief introduction to ordinary PCA, a PCA for categorical data (nominal and ordinal) is introduced as nonlinear PCA, in which an optimal scaling technique is used to quantify the categorical variables. Alternating least squares (ALS) is the main algorithm in the method. Multiple correspondence analysis (MCA), a special case of nonlinear PCA, is also introduced. All formulations in these methods are integrated in the same manner as matrix operations. Because data of any measurement level can be treated consistently as numerical data and ALS is a very powerful tool for estimation, the methods can be utilized in a variety of fields such as biometrics, econometrics, psychometrics, and sociology. In the applications part of the book, four applications are introduced: variable selection for mixed...
Principal Component Analysis In Radar Polarimetry
A. Danklmayer
2005-01-01
Second order moments of multivariate (often Gaussian) joint probability density functions can be described by the covariance or normalised correlation matrices, or by the Kennaugh matrix (Kronecker matrix). In radar polarimetry the application of the covariance matrix is known as target decomposition theory, which is a special application of the extremely versatile Principal Component Analysis (PCA). The basic idea of PCA is to convert a data set consisting of correlated random variables into a new set of uncorrelated variables and to order the new variables according to the value of their variances. It is important to stress that uncorrelatedness does not necessarily mean independence, the much stronger concept used in Independent Component Analysis (ICA). Both concepts agree for multivariate Gaussian distribution functions, which represent the most random and least structured distributions. In this contribution, we propose a new approach to applying the concept of PCA to radar polarimetry. New uncorrelated random variables are introduced by means of linear transformations with well determined loading coefficients. This, in turn, allows the decomposition of the original random backscattering target variables into three point targets with new random uncorrelated variables whose variances agree with the eigenvalues of the covariance matrix. This permits a new interpretation of existing decomposition theorems.
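The basic PCA recipe described here, diagonalising the covariance matrix and ordering the new variables by their variances, can be sketched with toy data (unrelated to any polarimetric measurement):

```python
import numpy as np

# Minimal PCA sketch: turn correlated variables into uncorrelated
# principal components ordered by variance (invented toy data).
rng = np.random.default_rng(1)
n = 5000
z = rng.standard_normal((n, 2))
X = np.column_stack([z[:, 0], 0.9 * z[:, 0] + 0.4 * z[:, 1], z[:, 1]])

Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
order = np.argsort(eigvals)[::-1]             # reorder: descending variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

scores = Xc @ eigvecs                         # the new uncorrelated variables

# The covariance of the scores is diagonal, with the eigenvalues as
# variances -- exactly the "uncorrelated, ordered by variance" property.
score_cov = np.cov(scores, rowvar=False)
```

In the decomposition described in the abstract, the three eigenvalues of the polarimetric covariance matrix would play the role of `eigvals` here.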
Component fragilities - data collection, analysis and interpretation
Bandyopadhyay, K.K.; Hofmayer, C.H.
1986-01-01
As part of the component fragility research program sponsored by the US Nuclear Regulatory Commission, BNL is involved in establishing seismic fragility levels for various nuclear power plant equipment, with emphasis on electrical equipment, by identifying, collecting and analyzing existing test data from various sources. BNL has reviewed approximately seventy test reports to collect fragility or high level test data for switchgears, motor control centers and similar electrical cabinets, valve actuators and numerous electrical and control devices of various manufacturers and models. Through a cooperative agreement, BNL has also obtained test data from EPRI/ANCO. An analysis of the collected data reveals that fragility levels can best be described by a group of curves corresponding to various failure modes. The lower bound curve indicates the initiation of malfunctioning or structural damage, whereas the upper bound curve corresponds to overall failure of the equipment based on known failure modes occurring separately or interactively. For some components, the upper and lower bound fragility levels are observed to vary appreciably depending upon the manufacturers and models. An extensive amount of additional fragility or high level test data exists. If completely collected and properly analyzed, the entire data bank is expected to greatly reduce the need for additional testing to establish fragility levels for most equipment.
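A common parametric form for such fragility curves (not stated in this abstract, but standard in seismic probabilistic risk assessment) is the lognormal CDF, with one curve per failure mode; a hedged sketch with invented numbers:

```python
from math import erf, sqrt, log

# Hedged sketch: fragility curves are commonly modelled as lognormal CDFs,
#   P(failure | a) = Phi(ln(a / A_m) / beta),
# with median capacity A_m and log-standard deviation beta. The "lower
# bound" (onset of malfunction) and "upper bound" (overall failure) curves
# can be represented by two such curves; all numbers below are invented.

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def fragility(a, a_median, beta):
    return phi(log(a / a_median) / beta)

a = 1.5                                                # demand level, e.g. in g
p_malfunction = fragility(a, a_median=1.0, beta=0.4)   # lower-bound curve
p_failure = fragility(a, a_median=2.5, beta=0.4)       # upper-bound curve
```

At the median capacity the curve passes through 0.5 by construction, and the lower-bound curve always sits above the upper-bound curve for a given demand, matching the malfunction-before-failure ordering described in the abstract.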
Component fragilities. Data collection, analysis and interpretation
Bandyopadhyay, K.K.; Hofmayer, C.H.
1985-01-01
As part of the component fragility research program sponsored by the US NRC, BNL is involved in establishing seismic fragility levels for various nuclear power plant equipment with emphasis on electrical equipment. To date, BNL has reviewed approximately seventy test reports to collect fragility or high level test data for switchgears, motor control centers and similar electrical cabinets, valve actuators and numerous electrical and control devices, e.g., switches, transmitters, potentiometers, indicators, relays, etc., of various manufacturers and models. BNL has also obtained test data from EPRI/ANCO. Analysis of the collected data reveals that fragility levels can best be described by a group of curves corresponding to various failure modes. The lower bound curve indicates the initiation of malfunctioning or structural damage, whereas the upper bound curve corresponds to overall failure of the equipment based on known failure modes occurring separately or interactively. For some components, the upper and lower bound fragility levels are observed to vary appreciably depending upon the manufacturers and models. For some devices, testing even at the shake table vibration limit does not exhibit any failure. Failure of a relay is observed to be a frequent cause of failure of an electrical panel or a system. An extensive amount of additional fragility or high level test data exists.
Integrating Data Transformation in Principal Components Analysis
Maadooliat, Mehdi
2015-01-02
Principal component analysis (PCA) is a popular dimension reduction method to reduce the complexity and obtain the informative aspects of high-dimensional datasets. When the data distribution is skewed, data transformation is commonly used prior to applying PCA. Such transformation is usually obtained from previous studies, prior knowledge, or trial-and-error. In this work, we develop a model-based method that integrates data transformation in PCA and finds an appropriate data transformation using the maximum profile likelihood. Extensions of the method to handle functional data and missing values are also developed. Several numerical algorithms are provided for efficient computation. The proposed method is illustrated using simulated and real-world data examples.
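One concrete instance of coupling the transformation to a likelihood is the classical Box-Cox family, where the exponent λ is chosen by maximising the profile log-likelihood before PCA is applied. The paper's model-based method is more general, but the idea can be sketched as:

```python
import numpy as np

# Hedged sketch: pick a Box-Cox transform per column by maximising its
# profile log-likelihood, then run PCA on the transformed data. Toy
# lognormal data, for which the optimal lambda is close to 0 (log).

rng = np.random.default_rng(2)
X = np.exp(rng.standard_normal((500, 3)))     # skewed positive columns

def boxcox(x, lam):
    return np.log(x) if abs(lam) < 1e-9 else (x**lam - 1.0) / lam

def boxcox_loglik(x, lam):
    """Profile log-likelihood of the Box-Cox parameter (up to a constant)."""
    n = len(x)
    y = boxcox(x, lam)
    return -0.5 * n * np.log(y.var()) + (lam - 1.0) * np.log(x).sum()

lams = np.linspace(-2, 2, 401)
best = [max(lams, key=lambda l: boxcox_loglik(X[:, j], l)) for j in range(3)]

# PCA on the transformed (and centred) data via SVD.
Y = np.column_stack([boxcox(X[:, j], l) for j, l in enumerate(best)])
Yc = Y - Y.mean(axis=0)
_, s, Vt = np.linalg.svd(Yc, full_matrices=False)
explained = s**2 / np.sum(s**2)               # variance ratios per component
```

Here the transformation and the PCA are still two sequential steps; the abstract's contribution is to fold the choice of transformation into the PCA model itself.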
Protein structure similarity from principle component correlation analysis
Chou James
2006-01-01
Background: Owing to the rapid expansion of protein structure databases in recent years, methods of structure comparison are becoming increasingly effective and important in revealing novel information on the functional properties of proteins and their roles in the grand scheme of evolutionary biology. Currently, the structural similarity between two proteins is measured by the root-mean-square deviation (RMSD) of their best-superimposed atomic coordinates. RMSD is the gold standard for measuring structural similarity when the structures are nearly identical; it fails, however, to detect higher order topological similarities in proteins that have evolved into different shapes. We propose new algorithms for extracting geometrical invariants of proteins that can be effectively used to identify homologous protein structures or topologies in order to quantify both close and remote structural similarities. Results: We measure structural similarity between proteins by correlating the principal components of their secondary structure interaction matrices. In our approach, Principle Component Correlation (PCC) analysis, a symmetric interaction matrix for a protein structure is constructed with relationship parameters between secondary elements that can take the form of distance, orientation, or other relevant structural invariants. When using a distance-based construction, in the presence or absence of encoded N-to-C terminal sense, there are strong correlations between the principal components of the interaction matrices of structurally or topologically similar proteins. Conclusion: The PCC method has been extensively tested on protein structures that belong to the same topological class but differ significantly by the RMSD measure. PCC analysis can also differentiate proteins having similar shapes but different topological arrangements. Additionally, we demonstrate that when using two independently defined interaction matrices, comparison of their maximum
Piepel, Gregory F.; Cooley, Scott K.; Jones, Bradley
2005-01-01
This paper describes the solution to a unique and challenging mixture experiment design problem involving: (1) 19 and 21 components for two different parts of the design, (2) many single-component and multi-component constraints, (3) augmentation of existing data, (4) a layered design developed in stages, and (5) a no-candidate-point optimal design approach. The problem involved studying the liquidus temperature of spinel crystals as a function of nuclear waste glass composition. The statistical objective was to develop an experimental design by augmenting existing glasses with new nonradioactive and radioactive glasses chosen to cover the designated nonradioactive and radioactive experimental regions. The existing 144 glasses were expressed as 19-component nonradioactive compositions and then augmented with 40 new nonradioactive glasses. These included 8 glasses on the outer layer of the region, 27 glasses on an inner layer, 2 replicate glasses at the centroid, and one replicate each of three existing glasses. Then, the 144 + 40 = 184 glasses were expressed as 21-component radioactive compositions, and augmented with 5 radioactive glasses. A D-optimal design algorithm was used to select the new outer layer, inner layer, and radioactive glasses. Several statistical software packages can generate D-optimal experimental designs, but nearly all of them require a set of candidate points (e.g., vertices) from which to select design points. The large number of components (19 or 21) and many constraints made it impossible to generate the huge number of vertices and other typical candidate points. JMP was used to select design points without candidate points. JMP uses a coordinate-exchange algorithm modified for mixture experiments, which is discussed and illustrated in the paper.
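The coordinate-exchange idea, improving one component of one design run at a time without ever enumerating candidate vertices, can be illustrated on a toy 3-component mixture with a first-order Scheffé model (JMP's production algorithm is considerably more sophisticated):

```python
import numpy as np

# Toy coordinate-exchange search for a D-optimal mixture design:
# 3 components, 6 runs, first-order Scheffe model (the model matrix is the
# design itself). No candidate list is needed -- each coordinate is swept
# over a grid and the mixture constraint (components sum to 1) is kept by
# rescaling the remaining components, as in a Cox-direction adjustment.

rng = np.random.default_rng(3)

def d_crit(X):
    """D-optimality criterion: det of the information matrix X'X."""
    return np.linalg.det(X.T @ X)

def adjust(row, i, value):
    """Set component i to `value`; rescale the others so the row sums to 1."""
    row = row.copy()
    rest = 1.0 - row[i]
    if rest > 1e-12:
        row *= (1.0 - value) / rest
    else:
        row[:] = (1.0 - value) / (len(row) - 1)
    row[i] = value
    return row

n_runs, q = 6, 3
X = rng.dirichlet(np.ones(q), size=n_runs)      # random start on the simplex
levels = np.linspace(0.0, 1.0, 11)              # grid for each coordinate

for _ in range(50):                              # exchange passes
    improved = False
    for r in range(n_runs):
        for i in range(q):
            for v in levels:
                trial = X.copy()
                trial[r] = adjust(X[r], i, v)
                if d_crit(trial) > d_crit(X) + 1e-12:
                    X, improved = trial, True
    if not improved:
        break
```

For this toy model the D-optimal design places runs at the simplex vertices (pure blends), balanced across the three components; the exchange loop finds that without any precomputed vertex list, which is the point of the no-candidate-point approach mentioned in the abstract.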
Geomorphometric analysis of selected Martian craters using polar coordinate transformation
Magyar, Zoltán; Koma, Zsófia; Székely, Balázs
2016-04-01
Centrally symmetric landform elements are very common features on the surface of the planet Mars. The most conspicuous of them are the impact craters of various sizes. A closer look at these features, however, reveals that they often show asymmetric patterns as well. These are partially related to the geometry of the trajectory of the impacting body, but sometimes they result from surface processes (e.g., freeze/thaw cycles, mass movements). Geomorphometric studies have already been carried out to reveal these peculiarities. Our approach, the application of polar coordinate transformation (PCT), very sensitively enhances non-radial and non-circular shapes. We used digital terrain models (DTMs) derived from ESA Mars Express HRSC imagery. The original DTM or its derivatives (e.g. slope angle or aspect) are PCT transformed. We analyzed the craters inter alia with scattergrams in polar coordinates. The resulting point cloud can be used directly for the analysis, but in some cases an interpolation should be applied to enhance certain non-circular features (especially in the case of smaller craters). Visual inspection of the crater slopes, coloured by aspect, reveals smaller features. Some of them are processing artefacts, but many are related to local undulations in the topography or are indications of mass movements. In many cases the undulations of the crater rim are due to erosional processes. The drawbacks of the technique are related to the uneven resolution of the projected image: features in the crater centre should be left out of the analysis because PCT has low resolution around the projection centre. Furthermore, the success of the PCT depends on the correct definition of the projection centre: erroneously centred images are not suitable for analysis. The PCT-transformed images are also suitable for radial averaging and the calculation of standard deviations, resulting in typical, comparable crater shapes. These studies may lead to a deeper
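The core of the PCT, resampling a DTM from Cartesian to (r, θ) around a chosen centre so that azimuthal asymmetries become visible, can be sketched on a synthetic, perfectly circular crater (real HRSC DTMs and proper interpolation are not used here):

```python
import numpy as np

# Sketch of a polar coordinate transformation (PCT) used to "unroll" a
# crater: resample a DTM from (x, y) to (r, theta) around a chosen centre.
# Synthetic ring-shaped depression as a toy crater; nearest-neighbour
# sampling for brevity (a real pipeline would interpolate).

n = 200
y, x = np.mgrid[0:n, 0:n]
cx = cy = n // 2
r_img = np.hypot(x - cx, y - cy)
dtm = -np.exp(-(r_img - 40.0)**2 / 200.0)       # circular depression at r = 40

n_r, n_t = 80, 360
rs = np.linspace(0, 79, n_r)
ts = np.radians(np.arange(n_t))
R, T = np.meshgrid(rs, ts, indexing="ij")
xi = np.clip(np.round(cx + R * np.cos(T)).astype(int), 0, n - 1)
yi = np.clip(np.round(cy + R * np.sin(T)).astype(int), 0, n - 1)
polar = dtm[yi, xi]                             # unrolled DTM, shape (n_r, n_t)

# Radial averaging and azimuthal standard deviation: a circular crater
# has near-zero std along theta at every radius, so any sizeable std
# flags a non-circular feature.
profile = polar.mean(axis=1)
asymmetry = polar.std(axis=1)
```

This also makes the drawbacks in the abstract concrete: rows at small `rs` repeatedly sample the same few pixels near the centre, and an offset centre would smear the ring across several radii.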
Value-Driven Risk Analysis of Coordination Models
Ionita, Dan; Gordijn, Jaap; Yesuf, Ahmed Seid; Wieringa, Roelf J.
2016-01-01
Coordination processes are business processes that involve independent profit-and-loss responsible business actors who collectively provide something of value to a customer. Coordination processes are meant to be profitable for the business actors that execute them. However, because business actors
Facilitation of the PED analysis of large molecules by using global coordinates.
Jamróz, Michał H; Ostrowski, Sławomir; Dobrowolski, Jan Cz
2015-10-05
Global coordinates have been found to be useful in the potential energy distribution (PED) analyses of the following large molecules: [13]-acene and [33]-helicene. A global coordinate is defined based on widely separated fragments of the analysed molecule, whereas the coordinates used so far in such analyses were based on stretchings, bendings, or torsions of adjacent atoms. It has been shown that PED analyses performed using a global coordinate and the classical ones can lead to exactly the same PED contributions. Global coordinates may significantly improve the facility of the analysis of the vibrational spectra of large molecules. Copyright © 2015 Elsevier B.V. All rights reserved.
Pastore, G.; Tosi, M.P.
1995-11-01
Earlier work has identified the metal ion size R_M as a relevant parameter in determining the evolution of the liquid structure of trivalent metal chlorides across the series from LaCl_3 (R_M ≈ 1.4 Å) to AlCl_3 (R_M ≈ 0.8 Å). Here we highlight the structural role of the chlorines by contrasting the structure of fully equilibrated melts with that of disordered systems obtained by quenching the chlorine component. Main attention is given to how the suppression of screening of the polyvalent ions by the chlorines changes trends in the local liquid structure (first neighbour coordination and partial radial distribution functions) and in the intermediate range order (first sharp diffraction peak in the partial structure factors). The main microscopic consequences of structural quenching of the chlorine component are a reduction in short range order and an enhancement of intermediate range order in the metal ion component, as well as the suppression of a tendency to molecular-type states at the lower end of the range of R_M. (author). 23 refs, 6 figs
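The partial radial distribution functions mentioned here can be computed from particle coordinates by histogramming pair distances and normalising by the ideal-gas expectation; a minimal sketch on a toy random configuration (where g(r) ≈ 1), not a simulated melt:

```python
import numpy as np

# Sketch: radial distribution function g(r) from particle coordinates in
# a cubic box with periodic boundaries. For a toy uniform configuration
# g(r) fluctuates around 1; in a melt it would show coordination peaks.

rng = np.random.default_rng(4)
L, n = 10.0, 800
pos = rng.uniform(0, L, size=(n, 3))

def rdf(pos, L, n_bins=50, r_max=None):
    n = len(pos)
    r_max = r_max or L / 2
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)                 # minimum-image convention
    dist = np.sqrt((d**2).sum(-1))[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(dist, bins=n_bins, range=(0, r_max))
    rho = n / L**3
    shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = 0.5 * n * rho * shell            # expected pair counts, ideal gas
    return 0.5 * (edges[1:] + edges[:-1]), hist / ideal

r, g = rdf(pos, L)
```

A *partial* g(r), as in the abstract, is obtained by restricting the pair loop to a chosen species pair (e.g. metal-chlorine) instead of all pairs.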
Group-wise Principal Component Analysis for Exploratory Data Analysis
Camacho, J.; Rodriquez-Gomez, Rafael A.; Saccenti, E.
2017-01-01
In this paper, we propose a new framework for matrix factorization based on Principal Component Analysis (PCA) where sparsity is imposed. The structure to impose sparsity is defined in terms of groups of correlated variables found in correlation matrices or maps. The framework is based on three new
Thermogravimetric analysis of combustible waste components
Munther, Anette; Wu, Hao; Glarborg, Peter
In order to gain fundamental knowledge about the co-combustion of coal and waste derived fuels, the pyrolytic behaviors of coal, four typical waste components and their mixtures have been studied by a simultaneous thermal analyzer (STA). The investigated waste components were wood, paper, polypro...
A NEW THREE-DIMENSIONAL SOLAR WIND MODEL IN SPHERICAL COORDINATES WITH A SIX-COMPONENT GRID
Feng, Xueshang; Zhang, Man; Zhou, Yufen, E-mail: fengx@spaceweather.ac.cn [SIGMA Weather Group, State Key Laboratory for Space Weather, Center for Space Science and Applied Research, Chinese Academy of Sciences, Beijing 100190 (China)
2014-09-01
In this paper, we introduce a new three-dimensional magnetohydrodynamics numerical model to simulate the steady state ambient solar wind from the solar surface to 215 R_s or beyond. The model adopts a splitting finite-volume scheme based on a six-component grid system in spherical coordinates. By splitting the magnetohydrodynamics equations into a fluid part and a magnetic part, a finite volume method can be used for the fluid part and a constrained-transport method, able to maintain the divergence-free constraint on the magnetic field, can be used for the magnetic induction part. This new model, second-order in space and time, is validated by modeling the large-scale structure of the solar wind. The numerical results for Carrington rotation 2064 show its ability to produce structured solar wind in agreement with observations.
Motion Intention Analysis-Based Coordinated Control for Amputee-Prosthesis Interaction
Fei Wang
2010-01-01
To study amputee-prosthesis (AP) interaction, a novel reconfigurable biped robot was designed and fabricated. In the homogeneous configuration, two identical artificial legs (ALs) were used to simulate the symmetrical lower limbs of a healthy person. A linear inverted pendulum model combined with the ZMP stability criterion was used to generate the gait trajectories of the ALs. To acquire the interjoint coordination of healthy gait, rate gyroscopes were mounted on the CoGs of the thigh and shank of both legs. By employing principal component analysis, the measured angular velocities were processed and the motion synergy was finally obtained. Then, one of the two ALs was replaced by a bionic leg (BL), and the biped robot was changed into the heterogeneous configuration to simulate the AP coupling system. To realize symmetrical stable walking, a master/slave coordinated control strategy is proposed. According to the information acquired by the gyroscopes, the BL recognized the motion intention of the AL and reconstructed its kinematic variables based on interjoint coordination. By employing iterative learning control, gait tracking of the BL to the AL was achieved. Real-environment robot walking experiments validated the correctness and effectiveness of the proposed scheme.
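A minimal sketch of the iterative learning control (ILC) idea, repeating a trial and feeding the tracking error back into the input, on a toy first-order plant (the prosthesis controller in the paper is far richer; the plant parameters and learning gain below are invented):

```python
import numpy as np

# Toy P-type ILC: u_{k+1}[t] = u_k[t] + L * e_k[t], where e_k[t] is the
# tracking error at the sample directly driven by u_k[t]. Plant:
#   y[t+1] = a*y[t] + b*u[t]
# The gain is chosen so that |1 - L*b/(1 - a*exp(-i*w))| < 1 at all
# frequencies, which guarantees the trial-to-trial error contracts.

T = 100
t = np.arange(T)
ref = np.sin(2 * np.pi * t / T)                # desired trajectory

a, b = 0.5, 0.5                                # invented plant parameters
L_gain = 1.8                                   # L*b = 0.9 < 2*(1 - a)

def run_trial(u):
    y = np.zeros(T + 1)
    for k in range(T):
        y[k + 1] = a * y[k] + b * u[k]
    return y[1:]                               # y[k+1] responds to u[k]

u = np.zeros(T)
errors = []
for trial in range(40):
    y = run_trial(u)
    e = ref - y
    errors.append(np.abs(e).max())
    u = u + L_gain * e                         # ILC update between trials
```

Each repetition of the same trajectory shrinks the tracking error, which is why ILC suits the cyclic gait-tracking task described in the abstract.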
SNIa detection in the SNLS photometric analysis using Morphological Component Analysis
Möller, A.; Ruhlmann-Kleider, V.; Neveu, J.; Palanque-Delabrouille, N. [Irfu, SPP, CEA Saclay, F-91191 Gif sur Yvette cedex (France); Lanusse, F.; Starck, J.-L., E-mail: anais.moller@cea.fr, E-mail: vanina.ruhlmann-kleider@cea.fr, E-mail: francois.lanusse@cea.fr, E-mail: jeremy.neveu@cea.fr, E-mail: nathalie.palanque-delabrouille@cea.fr, E-mail: jstarck@cea.fr [Laboratoire AIM, UMR CEA-CNRS-Paris 7, Irfu, SAp, CEA Saclay, F-91191 Gif sur Yvette cedex (France)
2015-04-01
Detection of supernovae (SNe) and, more generally, of transient events in large surveys can provide numerous false detections. In the case of a deferred processing of survey images, this implies reconstructing complete light curves for all detections, requiring sizable processing time and resources. Optimizing the detection of transient events is thus an important issue for both present and future surveys. We present here the optimization done in the SuperNova Legacy Survey (SNLS) for the 5-year data deferred photometric analysis. In this analysis, detections are derived from stacks of subtracted images with one stack per lunation. The 3-year analysis provided 300,000 detections dominated by signals of bright objects that were not perfectly subtracted. Allowing these artifacts to be detected leads not only to a waste of resources but also to possible signal coordinate contamination. We developed a subtracted image stack treatment to reduce the number of non SN-like events using morphological component analysis. This technique exploits the morphological diversity of objects to be detected to extract the signal of interest. At the level of our subtraction stacks, SN-like events are rather circular objects while most spurious detections exhibit different shapes. A two-step procedure was necessary to have a proper evaluation of the noise in the subtracted image stacks and thus a reliable signal extraction. We also set up a new detection strategy to obtain coordinates with good resolution for the extracted signal. SNIa Monte-Carlo (MC) generated images were used to study detection efficiency and coordinate resolution. When tested on SNLS 3-year data this procedure decreases the number of detections by a factor of two, while losing only 10% of SN-like events, almost all faint ones. MC results show that SNIa detection efficiency is equivalent to that of the original method for bright events, while the coordinate resolution is improved.
Quantifying identifiability in independent component analysis
Sokol, Alexander; Maathuis, Marloes H.; Falkeborg, Benjamin
2014-01-01
We are interested in consistent estimation of the mixing matrix in the ICA model, when the error distribution is close to (but different from) Gaussian. In particular, we consider $n$ independent samples from the ICA model $X = A\epsilon$, where we assume that the coordinates of $\epsilon$ are independent and identically distributed according to a contaminated Gaussian distribution, and the amount of contamination is allowed to depend on $n$. We then investigate how the ability to consistently estimate the mixing matrix depends on the amount of contamination. Our results suggest...
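The contaminated-Gaussian setup can be sketched by simulating $X = A\epsilon$ and checking the departure from Gaussianity (excess kurtosis) that makes the mixing matrix identifiable; the contamination scale and mixing matrix below are arbitrary choices:

```python
import numpy as np

# Sketch of the setup: samples from X = A @ eps, where each coordinate of
# eps follows a contaminated Gaussian: N(0, 1) w.p. 1 - delta and
# N(0, 9) w.p. delta. As delta -> 0 the sources approach Gaussianity and
# the mixing matrix becomes unidentifiable; excess kurtosis measures the
# departure that identifiability relies on.

rng = np.random.default_rng(5)
n, delta = 100000, 0.1

def contaminated(size):
    mask = rng.random(size) < delta
    return np.where(mask, rng.normal(0, 3.0, size), rng.normal(0, 1.0, size))

eps = np.column_stack([contaminated(n), contaminated(n)])
A = np.array([[1.0, 0.5], [0.2, 1.0]])        # arbitrary mixing matrix
X = eps @ A.T

def excess_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return np.mean(z**4) - 3.0

kurt = excess_kurtosis(eps[:, 0])   # > 0: heavier tails than Gaussian
```

For this mixture the population excess kurtosis is about 5.3; shrinking `delta` toward 0 drives it to 0, which is the regime the abstract studies.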
Coordination Analysis Using Global Structural Constraints and Alignment-based Local Features
Hara, Kazuo; Shimbo, Masashi; Matsumoto, Yuji
We propose a hybrid approach to coordinate structure analysis that combines a simple grammar, to ensure a consistent global structure of coordinations in a sentence, with features based on sequence alignment, to capture the local symmetry of conjuncts. The weight of the alignment-based features, which in turn determines the score of coordinate structures, is optimized by perceptron training on a given corpus. A bottom-up chart parsing algorithm efficiently finds the best-scoring structure, taking both nested and non-overlapping flat coordinations into account. We demonstrate that our approach outperforms existing parsers in coordination scope detection on the Genia corpus.
Analysis of failed nuclear plant components
Diercks, D. R.
1993-12-01
Argonne National Laboratory has conducted analyses of failed components from nuclear power-generating stations since 1974. The considerations involved in working with and analyzing radioactive components are reviewed here, and the decontamination of these components is discussed. Analyses of four failed components from nuclear plants are then described to illustrate the kinds of failures seen in service. The failures discussed are (1) intergranular stress-corrosion cracking of core spray injection piping in a boiling water reactor, (2) failure of canopy seal welds in adapter tube assemblies in the control rod drive head of a pressurized water reactor, (3) thermal fatigue of a recirculation pump shaft in a boiling water reactor, and (4) failure of pump seal wear rings by nickel leaching in a boiling water reactor.
Analysis of failed nuclear plant components
Diercks, D.R.
1993-01-01
Argonne National Laboratory has conducted analyses of failed components from nuclear power-generating stations since 1974. The considerations involved in working with and analyzing radioactive components are reviewed here, and the decontamination of these components is discussed. Analyses of four failed components from nuclear plants are then described to illustrate the kinds of failures seen in service. The failures discussed are (1) intergranular stress-corrosion cracking of core spray injection piping in a boiling water reactor, (2) failure of canopy seal welds in adapter tube assemblies in the control rod drive head of a pressurized water reactor, (3) thermal fatigue of a recirculation pump shaft in a boiling water reactor, and (4) failure of pump seal wear rings by nickel leaching in a boiling water reactor.
Analysis of failed nuclear plant components
Diercks, D.R.
1992-07-01
Argonne National Laboratory has conducted analyses of failed components from nuclear power generating stations since 1974. The considerations involved in working with and analyzing radioactive components are reviewed here, and the decontamination of these components is discussed. Analyses of four failed components from nuclear plants are then described to illustrate the kinds of failures seen in service. The failures discussed are (a) intergranular stress corrosion cracking of core spray injection piping in a boiling water reactor, (b) failure of canopy seal welds in adapter tube assemblies in the control rod drive head of a pressurized water reactor, (c) thermal fatigue of a recirculation pump shaft in a boiling water reactor, and (d) failure of pump seal wear rings by nickel leaching in a boiling water reactor.
A radiographic analysis of implant component misfit.
Sharkey, Seamus
2011-07-01
Radiographs are commonly used to assess the fit of implant components, but there is no clear agreement on the amount of misfit that can be detected by this method. This study investigated the effect of gap size and the relative angle at which a radiograph was taken on the detection of component misfit. Different types of implant connections (internal or external) and radiographic modalities (film or digital) were assessed.
Lifetime analysis of fusion-reactor components
Mattas, R.F.
1983-01-01
A one-dimensional computer code has been developed to examine the lifetime of first-wall and impurity-control components. The code incorporates the operating and design parameters, the material characteristics, and the appropriate failure criteria for the individual components. The major emphasis of the modelling effort has been to calculate the temperature-stress-strain-radiation effects history of a component so that the synergistic effects between sputtering erosion, swelling, creep, fatigue, and crack growth can be examined. The general forms of the property equations are the same for all materials in order to provide the greatest flexibility for materials selection in the code. The code is capable of determining the behavior of a plate, composed of either a single or dual material structure, that is either totally constrained or constrained from bending but not from expansion. The code has been utilized to analyze the first walls for FED/INTOR and DEMO.
Mapping ash properties using principal components analysis
Pereira, Paulo; Brevik, Eric; Cerda, Artemi; Ubeda, Xavier; Novara, Agata; Francos, Marcos; Rodrigo-Comino, Jesus; Bogunovic, Igor; Khaledian, Yones
2017-04-01
In post-fire environments ash has important benefits for soils, such as protection and a source of nutrients, crucial for vegetation recuperation (Jordan et al., 2016; Pereira et al., 2015a; 2016a,b). The thickness and distribution of ash are fundamental aspects for soil protection (Cerdà and Doerr, 2008; Pereira et al., 2015b), and the severity at which it was produced is important for the type and amount of elements released into the soil solution (Bodi et al., 2014). Ash is a very mobile material, and where it is deposited is important. Until the first rainfalls it is very mobile; afterwards it binds to the soil surface and is harder to erode. Mapping ash properties in the immediate period after a fire is complex, since the ash is constantly moving (Pereira et al., 2015b). However, it is an important task, since from the amount and type of ash produced we can identify the degree of soil protection and the nutrients that will be dissolved. The objective of this work is to map ash properties (CaCO3, pH, and selected extractable elements) using principal component analysis (PCA) in the immediate period after the fire. Four days after the fire we established a grid in a 9x27 m area and took ash samples every 3 meters, for a total of 40 sampling points (Pereira et al., 2017). The PCA identified 5 different factors. Factor 1 had high positive loadings in electrical conductivity, calcium, and magnesium and negative loadings in aluminum and iron, while Factor 2 had high positive loadings in total phosphorous and silica. Factor 3 showed high positive loadings in sodium and potassium, Factor 4 high negative loadings in CaCO3 and pH, and Factor 5 high loadings in sodium and potassium. The experimental variograms of the extracted factors showed that the Gaussian model was the most precise for modelling Factor 1, the linear model for Factor 2, and the wave hole effect model for Factors 3, 4 and 5. The maps produced confirm the patterns observed in the experimental variograms. Factor 1 and 2
Generalized structured component analysis a component-based approach to structural equation modeling
Hwang, Heungsun
2014-01-01
Winner of the 2015 Sugiyama Meiko Award (Publication Award) of the Behaviormetric Society of Japan Developed by the authors, generalized structured component analysis is an alternative to two longstanding approaches to structural equation modeling: covariance structure analysis and partial least squares path modeling. Generalized structured component analysis allows researchers to evaluate the adequacy of a model as a whole, compare a model to alternative specifications, and conduct complex analyses in a straightforward manner. Generalized Structured Component Analysis: A Component-Based Approach to Structural Equation Modeling provides a detailed account of this novel statistical methodology and its various extensions. The authors present the theoretical underpinnings of generalized structured component analysis and demonstrate how it can be applied to various empirical examples. The book enables quantitative methodologists, applied researchers, and practitioners to grasp the basic concepts behind this new a...
Major component analysis of dynamic networks of physiologic organ interactions
Liu, Kang K L; Ma, Qianli D Y; Ivanov, Plamen Ch; Bartsch, Ronny P
2015-01-01
The human organism is a complex network of interconnected organ systems, where the behavior of one system affects the dynamics of other systems. Identifying and quantifying dynamical networks of diverse physiologic systems under varied conditions is a challenge due to the complexity in the output dynamics of the individual systems and the transient and nonlinear characteristics of their coupling. We introduce a novel computational method based on the concept of time delay stability and major component analysis to investigate how organ systems interact as a network to coordinate their functions. We analyze a large database of continuously recorded multi-channel physiologic signals from healthy young subjects during night-time sleep. We identify a network of dynamic interactions between key physiologic systems in the human organism. Further, we find that each physiologic state is characterized by a distinct network structure with different relative contribution from individual organ systems to the global network dynamics. Specifically, we observe a gradual decrease in the strength of coupling of heart and respiration to the rest of the network with transition from wake to deep sleep, and in contrast, an increased relative contribution to network dynamics from chin and leg muscle tone and eye movement, demonstrating a robust association between network topology and physiologic function. (paper)
Principal component analysis of psoriasis lesions images
Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær
2003-01-01
A set of RGB images of psoriasis lesions is used. By visual examination of these images, there seems to be no common pattern that could be used to find and align the lesions within and between sessions. It is expected that the principal components of the original images could be useful during future...
Generator Coordinate Method Analysis of Xe and Ba Isotopes
Higashiyama, Koji; Yoshinaga, Naotaka; Teruya, Eri
Nuclear structure of Xe and Ba isotopes is studied in terms of the quantum-number projected generator coordinate method (GCM). The GCM reproduces well the energy levels of high-spin states as well as low-lying states. The structure of the low-lying states is analyzed through the GCM wave functions.
An in-depth analysis of theoretical frameworks for the study of care coordination
Sabine Van Houdt
2013-06-01
Introduction: Complex chronic conditions often require long-term care from various healthcare professionals; thus, maintaining quality care requires care coordination. Concepts for the study of care coordination require clarification in order to develop, study and evaluate coordination strategies. In 2007, the Agency for Healthcare Research and Quality defined care coordination and proposed five theoretical frameworks for exploring care coordination. This study aimed to update current theoretical frameworks and clarify key concepts related to care coordination. Methods: We performed a literature review to update existing theoretical frameworks. An in-depth analysis of these theoretical frameworks was conducted to formulate key concepts related to care coordination. Results: Our literature review found seven previously unidentified theoretical frameworks for studying care coordination. The in-depth analysis identified fourteen key concepts that the theoretical frameworks addressed. These were ‘external factors’, ‘structure’, ‘task characteristics’, ‘cultural factors’, ‘knowledge and technology’, ‘need for coordination’, ‘administrative operational processes’, ‘exchange of information’, ‘goals’, ‘roles’, ‘quality of relationship’, ‘patient outcome’, ‘team outcome’, and ‘(inter)organizational outcome’. Conclusion: These 14 interrelated key concepts provide a base from which to develop or choose a framework for studying care coordination. The relational coordination theory and the multi-level framework are the most interesting, as they are the most comprehensive.
An anatomically oriented breast coordinate system for mammogram analysis
Brandt, Sami; Karemore, Gopal; Karssemeijer, Nico
2011-01-01
and the shape of the breast boundary because these are the most robust features independent of the breast size and shape. On the basis of these landmarks, we have constructed a nonlinear mapping between the parameter frame and the breast region in the mammogram. This mapping makes it possible to identify...... the corresponding positions and orientations among all of the ML or MLO mammograms, which facilitates an implicit use of the registration, i.e., no explicit image warping is needed. We additionally show how the coordinate transform can be used to extract Gaussian derivative features so that the feature positions...... and orientations are registered and extracted without non-linearly deforming the images. We use the proposed breast coordinate transform in a cross-sectional breast cancer risk assessment study of 490 women, in which we attempt to learn breast cancer risk factors from mammograms that were taken prior to when...
Incremental Tensor Principal Component Analysis for Handwritten Digit Recognition
Chang Liu
2014-01-01
To overcome the shortcomings of traditional dimensionality reduction algorithms, an incremental tensor principal component analysis (ITPCA) algorithm based on an updated-SVD technique is proposed in this paper. The paper proves the relationship between PCA, 2DPCA, MPCA, and the graph embedding framework theoretically, and derives the incremental learning procedure for adding a single sample and multiple samples in detail. Experiments on handwritten digit recognition have demonstrated that ITPCA achieves better recognition performance than vector-based principal component analysis (PCA), incremental principal component analysis (IPCA), and multilinear principal component analysis (MPCA) algorithms. At the same time, ITPCA also has lower time and space complexity.
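The incremental idea behind such methods can be illustrated with a much simpler, vector-valued sketch: maintain a running mean and scatter matrix with Welford-style single-sample updates and eigendecompose on demand. This is a toy stand-in, not the paper's updated-SVD tensor algorithm; the class name and data are invented for illustration.

```python
import numpy as np

class IncrementalPCA:
    """Toy incremental PCA: keep a running mean and scatter matrix,
    updated one sample at a time, and eigendecompose when components
    are requested. A stand-in sketch, not the ITPCA algorithm."""
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.M2 = np.zeros((dim, dim))   # scatter of deviations (Welford)

    def partial_fit(self, x):
        self.n += 1
        delta = x - self.mean            # deviation from the old mean
        self.mean += delta / self.n
        self.M2 += np.outer(delta, x - self.mean)

    def components(self, k):
        cov = self.M2 / max(self.n - 1, 1)
        vals, vecs = np.linalg.eigh(cov)
        return vecs[:, np.argsort(vals)[::-1][:k]].T

rng = np.random.default_rng(3)
# 500 samples lying (up to small noise) along the direction [3, 1, 0]
data = rng.normal(size=(500, 1)) @ np.array([[3.0, 1.0, 0.0]]) \
       + 0.1 * rng.normal(size=(500, 3))
ipca = IncrementalPCA(3)
for row in data:
    ipca.partial_fit(row)
top = ipca.components(1)                 # should align with [3, 1, 0]
```

The appeal of incremental schemes, as the abstract notes, is exactly this: each new sample updates small sufficient statistics instead of forcing a full recomputation over all data seen so far.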
Manisera, M.; Kooij, A.J. van der; Dusseldorp, E.
2010-01-01
The component structure of 14 Likert-type items measuring different aspects of job satisfaction was investigated using nonlinear Principal Components Analysis (NLPCA). NLPCA allows for analyzing these items at an ordinal or interval level. The participants were 2066 workers from five types of social
Columbia River Component Data Gap Analysis
L. C. Hulstrom
2007-10-23
This Data Gap Analysis report documents the results of a study conducted by Washington Closure Hanford (WCH) to compile and review the currently available surface water and sediment data for the Columbia River near and downstream of the Hanford Site. The study was conducted to review the adequacy of the existing surface water and sediment data set from the Columbia River, with specific reference to the use of the data in future site characterization and screening-level risk assessments.
Use of Sparse Principal Component Analysis (SPCA) for Fault Detection
Gajjar, Shriram; Kulahci, Murat; Palazoglu, Ahmet
2016-01-01
Principal component analysis (PCA) has been widely used for data dimension reduction and process fault detection. However, interpreting the principal components and the outcomes of PCA-based monitoring techniques is a challenging task since each principal component is a linear combination of the ...
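For context, plain (non-sparse) PCA can be written in a few lines via the singular value decomposition; the sketch below is a generic illustration of the dimension-reduction step, not the SPCA method the paper proposes, and the data are invented.

```python
import numpy as np

def pca(X, n_components):
    """Plain PCA via SVD: center the data, then project onto the
    top right-singular vectors (the principal components)."""
    Xc = X - X.mean(axis=0)                      # center each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]               # loading vectors
    scores = Xc @ components.T                   # reduced representation
    explained = s[:n_components] ** 2 / np.sum(s ** 2)
    return scores, components, explained

rng = np.random.default_rng(0)
# 100 observations of 5 variables that are really driven by 2 factors
X = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 5)) \
    + 0.01 * rng.normal(size=(100, 5))
scores, comps, ratio = pca(X, 2)
```

Because each principal component here is a dense linear combination of all five variables, interpretation is hard; sparse PCA trades a little explained variance for loadings with many exact zeros, which is the interpretability issue the abstract raises.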
Langley, R.A.
1995-12-01
The proceedings and results of the 1st IAEA Research Coordination Meeting on ''Tritium Retention in Fusion Reactor Plasma Facing Components'', held on October 5 and 6, 1995 at the IAEA Headquarters in Vienna, are briefly described. This report includes a summary of presentations made by the meeting participants, the results of a data survey and needs assessment for the retention, release and removal of tritium from plasma facing components, a summary of data evaluation, and recommendations regarding future work. (author). 4 tabs
Projection and analysis of nuclear components
Heeschen, U.
1980-01-01
The classification and the types of analysis carried out on piping for quality control and safety of nuclear power plants are presented. The operation and emergency conditions, with emphasis on possible simplifications of the calculations, are described. (author/M.C.K.)
Nonparametric inference in nonlinear principal components analysis : exploration and beyond
Linting, Mariëlle
2007-01-01
In the social and behavioral sciences, data sets often do not meet the assumptions of traditional analysis methods. Therefore, nonlinear alternatives to traditional methods have been developed. This thesis starts with a didactic discussion of nonlinear principal components analysis (NLPCA),
Principal component analysis networks and algorithms
Kong, Xiangyu; Duan, Zhansheng
2017-01-01
This book not only provides a comprehensive introduction to neural-based PCA methods in control science, but also presents many novel PCA algorithms and their extensions and generalizations, e.g., dual purpose, coupled PCA, GED, neural based SVD algorithms, etc. It also discusses in detail various analysis methods for the convergence, stabilizing, self-stabilizing property of algorithms, and introduces the deterministic discrete-time systems method to analyze the convergence of PCA/MCA algorithms. Readers should be familiar with numerical analysis and the fundamentals of statistics, such as the basics of least squares and stochastic algorithms. Although it focuses on neural networks, the book only presents their learning law, which is simply an iterative algorithm. Therefore, no a priori knowledge of neural networks is required. This book will be of interest and serve as a reference source to researchers and students in applied mathematics, statistics, engineering, and other related fields.
Blind source separation dependent component analysis
Xiang, Yong; Yang, Zuyuan
2015-01-01
This book provides readers a complete and self-contained set of knowledge about dependent source separation, including the latest development in this field. The book gives an overview on blind source separation where three promising blind separation techniques that can tackle mutually correlated sources are presented. The book further focuses on the non-negativity based methods, the time-frequency analysis based methods, and the pre-coding based methods, respectively.
Kernel principal component analysis for change detection
Nielsen, Allan Aasbjerg; Morton, J.C.
2008-01-01
region acquired at two different time points. If change over time does not dominate the scene, the projection of the original two bands onto the second eigenvector will show change over time. In this paper a kernel version of PCA is used to carry out the analysis. Unlike ordinary PCA, kernel PCA...... with a Gaussian kernel successfully finds the change observations in a case where nonlinearities are introduced artificially....
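The kernel step described above can be sketched generically: form a Gaussian (RBF) kernel matrix, double-center it in feature space, and take the leading eigenvectors. This is an illustrative implementation of Gaussian kernel PCA on invented toy data, not the authors' change-detection pipeline.

```python
import numpy as np

def gaussian_kernel_pca(X, n_components, sigma=1.0):
    """Kernel PCA with a Gaussian (RBF) kernel: build the kernel matrix,
    center it in feature space, and take the leading eigenvectors."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma ** 2))
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                               # double-centering
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_components]
    # training-point scores: eigenvector scaled by sqrt(eigenvalue)
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

rng = np.random.default_rng(1)
# two concentric rings: linearly inseparable, but separable by radius
t = rng.uniform(0.0, 2.0 * np.pi, 200)
r = np.r_[np.ones(100), 3.0 * np.ones(100)]
X = np.c_[r * np.cos(t), r * np.sin(t)] + 0.05 * rng.normal(size=(200, 2))
Z = gaussian_kernel_pca(X, 2, sigma=1.0)
```

Unlike ordinary PCA, the leading kernel components can pick up nonlinear (here, radial) structure, which is the property the abstract exploits for detecting nonlinear change.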
Real Time Engineering Analysis Based on a Generative Component Implementation
Kirkegaard, Poul Henning; Klitgaard, Jens
2007-01-01
The present paper outlines the idea of a conceptual design tool with real time engineering analysis which can be used in the early conceptual design phase. The tool is based on a parametric approach using Generative Components with embedded structural analysis. Each of these components uses the g...
How Many Separable Sources? Model Selection In Independent Components Analysis
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
Analysis of coordination polyhedra symmetry in pyrochlore and zirconolite structures
Troole, A.Y.; Stefanovsky, S.V.
1999-01-01
Zirconolite and pyrochlore are considered promising host phases for high level waste (HLW). However, correct information on substitution mechanisms, the forms in which dopants are incorporated in their structures, and distortions in the coordination polyhedra is presently unavailable. To clarify these points the authors use electron paramagnetic resonance (EPR). Pyrochlore and three zirconolite polytypes are considered: zirconolite-2M, zirconolite-3T, and zirconolite-3O. Pyrochlore is the parent structure for zirconolite, since any zirconolite variety is produced by distortion of the initial pyrochlore structure. Space groups of pyrochlore and the basic polymorphous zirconolite varieties found from XRD and TEM data, as well as interatomic distances and angles, were taken from reference data. This allows determination of the most probable sites for impurities, substitution mechanisms, and the initial local symmetry of the coordination polyhedra. The ions chosen for EPR were Gd(III), as an analog of the trivalent rare earth and actinide elements that also occur in HLW, and Fe(III), a typical corrosion product that occurs in all HLW. For Gd(III) a strong ligand field approximation is suggested, and a theoretical computation using perturbation theory in this approximation has been carried out. All the non-diagonal terms plus the magnetic field were chosen as the perturbation, and formulae for transition frequencies and estimates of the fine-structure and g-factor parameters in the given approximation have been obtained
Tyobeka, Bismark; Reitsma, Frederik; Ivanov, Kostadin
2011-01-01
The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of HTGR design and safety features with reliable high fidelity physics models and robust, efficient, and accurate codes. The predictive capability of coupled neutronics/thermal hydraulics and depletion simulations for reactor design and safety analysis can be assessed with sensitivity analysis and uncertainty analysis methods. In order to benefit from recent advances in modeling and simulation and the availability of new covariance data (nuclear data uncertainties) extensive sensitivity and uncertainty studies are needed for quantification of the impact of different sources of uncertainties on the design and safety parameters of HTGRs. Uncertainty and sensitivity studies are an essential component of any significant effort in data and simulation improvement. In February 2009, the Technical Working Group on Gas-Cooled Reactors recommended that the proposed IAEA Coordinated Research Project (CRP) on the HTGR Uncertainty Analysis in Modeling be implemented. In the paper the current status and plan are presented. The CRP will also benefit from interactions with the currently ongoing OECD/NEA Light Water Reactor (LWR) UAM benchmark activity by taking into consideration the peculiarities of HTGR designs and simulation requirements. (author)
Im, Han Su; Lee, Eunji; Lee, Shim Sung; Kim, Tae Ho; Park, Ki Min [Research Institute of Natural Science and Dept. of Chemistry, Gyeongsang National University, Jinju (Korea, Republic of); Moon, Suk Hee [Dept. of Food and Nutrition, Kyungnam College of Information and Technology, Busan (Korea, Republic of)
2017-01-15
In supramolecular chemistry, many mechanically poly-threaded coordination polymers, such as polyrotaxanes, based on the self-assembly of organic ligands and transition metal ions have attracted great attention over the past two decades because of their fascinating architectures as well as their potential applications in material science. Among them, the 1D + 2D → 3D pseudo-polyrotaxane, constructed by the penetration of 1D coordination polymer chains into 1D channels formed by parallel stacking of 2D porous coordination layers, is a quite rare topology. Until now, only a few examples of 1D + 2D → 3D pseudo-polyrotaxanes have been reported.
Problems of stress analysis of fuelling machine head components
Mathur, D.D.
1975-01-01
The problems of stress analysis of fuelling machine head components are discussed. To fulfil the functional requirements, the components are required to have certain shapes for which the stress problems cannot be matched to a catalogue of pre-determined solutions. Areas are identified where complex systems of loading due to hydrostatic pressure, weight, moments and temperature gradients, coupled with the intricate shapes of the components, make it difficult to arrive at satisfactory solutions. In particular, the analysis requirements of the magazine housing, end cover, gravloc clamps and centre support are highlighted. An experimental stress analysis programme together with a theoretical finite element analysis is perhaps the answer. (author)
Coordination Frictions and Job Heterogeneity: A Discrete Time Analysis
Kennes, John; Le Maire, Christian Daniel
This paper develops and extends a dynamic, discrete time, job to worker matching model in which jobs are heterogeneous in equilibrium. The key assumptions of this economic environment are (i) matching is directed and (ii) coordination frictions lead to heterogeneous local labor markets. We derive a number of new theoretical results, which are essential for the empirical application of this type of model to matched employer-employee microdata. First, we offer a robust equilibrium concept in which there is a continuous dispersion of job productivities and wages. Second, we show that our model can… of these results preserve the essential tractability of the baseline model with aggregate shocks. Therefore, we offer a parsimonious, general equilibrium framework in which to study the process by which the continuous dispersion of wages and productivities varies over the business cycle for a large population…
Modelling and analysis of real-time coordination patterns
Kemper, Stephanie
2011-01-01
Present-day embedded software systems need to support an increasing number of features and formalisms, the two most important being the handling of real time and the possibility of developing the system in a modular, component-based way. To ensure that the behaviour of the final system is correct
Component reliability analysis for development of component reliability DB of Korean standard NPPs
Choi, S. Y.; Han, S. H.; Kim, S. H.
2002-01-01
Reliability data for Korean NPPs that reflect plant-specific characteristics are necessary for PSA and risk-informed applications. We performed a project to develop a component reliability DB and to calculate component reliability measures such as failure rate and unavailability. We collected component operation data and failure/repair data for Korean standard NPPs, and analyzed the failure data with a data analysis method developed to reflect the domestic data situation. We then compared the reliability results with generic data for foreign NPPs
Definition of coordinate system for three-dimensional data analysis in the foot and ankle.
Green, Connor
2012-02-01
BACKGROUND: Three-dimensional data is required for advanced knowledge of foot and ankle kinematics and morphology. However, studies have been difficult to compare due to the lack of a common coordinate system. Therefore, we present a means to define a coordinate frame in the foot and ankle and its clinical application. MATERIALS AND METHODS: We carried out ten CT scans of anatomically normal feet and segmented them in a general purpose segmentation program for grey-value images. 3D binary formatted stereolithography files were then created and imported into a shape analysis program for biomechanics, which was used to define a coordinate frame and carry out morphological analysis of the forefoot. RESULTS: The coordinate frame had axes standard deviations of 2.36, which are comparable to the axes variability of other joint coordinate systems. We showed a strong correlation between the lengths of the metatarsals within and between the columns of the foot, and also among the lesser metatarsal lengths. CONCLUSION: We present a reproducible method for construction of a coordinate system for the foot and ankle with low axes variability. CLINICAL RELEVANCE: To conduct meaningful comparisons between multiple subjects the coordinate system must be constant. This system enables such comparison and will therefore aid morphological data collection and improve preoperative planning accuracy.
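The general recipe, three landmarks defining an origin, a primary axis and a reference plane, can be sketched as follows. The construction and names below are illustrative, not the paper's specific foot-and-ankle definition.

```python
import numpy as np

def landmark_frame(origin, p_axis, p_plane):
    """Right-handed orthonormal frame from three landmark points:
    x along origin->p_axis, z normal to the plane of the three
    points, y completing the right-handed set."""
    x = p_axis - origin
    x = x / np.linalg.norm(x)
    v = p_plane - origin
    z = np.cross(x, v)
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)
    return np.stack([x, y, z])        # rows are the axis directions

def to_local(R, origin, points):
    """Express world-space points in the landmark-defined frame."""
    return (points - origin) @ R.T

origin = np.array([0.0, 0.0, 0.0])
R = landmark_frame(origin, np.array([2.0, 0.0, 0.0]),
                   np.array([0.0, 1.0, 0.0]))
```

Expressing every subject's segmented bone surfaces in such a frame is what makes lengths and orientations directly comparable across scans without explicit image warping.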
Mercado-Martínez, Francisco J; Díaz-Medina, Blanca A; Hernández-Ibarra, Eduardo
2013-09-01
Donation coordinators play an important role in the success or failure of organ donation and transplant programs. Nevertheless, these professionals' perspectives and practices have hardly been explored, particularly in low- and middle-income countries. To examine donation coordinators' discourse on the organ donation process and the barriers they perceive. A critical qualitative study was carried out in Guadalajara, Mexico. Twelve donation coordinators from public and private hospitals participated. DATA GATHERING AND ANALYSIS: Data were gathered by using semistructured interviews and critical discourse analysis. Participants indicated that partial results have been achieved in deceased organ donation. Concomitantly, multiple obstacles have adversely affected the process and outcomes: at the structural level, the fragmentation of the health system and the scarcity of financial and material resources; at the relational level, nonegalitarian relationships between coordinators and hospital personnel; at the ideational level, the transplant domain and its specialists overshadow the donation domain and its coordinators. Negative images are associated with donation coordinators. Organ donation faces structural, relational, and ideational barriers; hence, complex interventions should be undertaken. Donation coordinators also should be recognized by the health system.
Principal Component Analysis of Body Measurements In Three ...
This study was conducted to explore the relationships among body measurements in three strains of broiler chickens (Arbor Acre, Marshal and Ross) using principal component analysis, with the view of identifying those components that define body conformation in broilers. A total of 180 birds were used, 60 per strain.
Tomato sorting using independent component analysis on spectral images
Polder, G.; Heijden, van der G.W.A.M.; Young, I.T.
2003-01-01
Independent Component Analysis is one of the most widely used methods for blind source separation. In this paper we use this technique to estimate the most important compounds which play a role in the ripening of tomatoes. Spectral images of tomatoes were analyzed. Two main independent components
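The blind source separation step can be illustrated with a minimal FastICA on synthetic 1-D signals: whiten the mixtures, then iterate a fixed-point update that maximises non-Gaussianity. This is a generic sketch with invented data, not the spectral-image pipeline of the paper.

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Minimal FastICA (tanh nonlinearity, symmetric decorrelation):
    whiten the mixed signals, then rotate to maximise non-Gaussianity."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))            # whitening transform
    W_white = E @ np.diag(d ** -0.5) @ E.T
    Z = W_white @ X
    n = Z.shape[0]
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n, n))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        Gp = 1.0 - G ** 2
        # fixed-point update: E[z g(w'z)] - E[g'(w'z)] w, per row
        W_new = G @ Z.T / Z.shape[1] - np.diag(Gp.mean(axis=1)) @ W
        u, s, vt = np.linalg.svd(W_new)         # symmetric decorrelation
        W = u @ vt
    return W @ Z                                # estimated sources

rng = np.random.default_rng(2)
t = np.linspace(0.0, 8.0, 2000)
S = np.vstack([np.sign(np.sin(3 * t)),          # square wave
               np.sin(5 * t)])                  # sine wave
X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ S      # observed mixtures
S_hat = fastica(X)
```

Up to sign, scale and permutation, the estimated rows of `S_hat` should match the original sources; in the paper's setting the "mixtures" are spectral bands and the recovered sources correspond to absorbing compounds.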
Key components of financial-analysis education for clinical nurses.
Lim, Ji Young; Noh, Wonjung
2015-09-01
In this study, we identified key components of financial-analysis education for clinical nurses. We used a literature review, focus group discussions, and a content validity index survey to develop the key components. First, a wide range of references was reviewed, and 55 financial-analysis education components were gathered. Second, two focus group discussions were performed; the participants were 11 nurses who had worked for more than 3 years in a hospital, and nine components were agreed upon. Third, 12 professionals, including professors, a nurse executive, nurse managers, and an accountant, participated in the content validity index. Finally, six key components of financial-analysis education were selected: understanding the need for financial analysis, introduction to financial analysis, reading and implementing balance sheets, reading and implementing income statements, understanding the concepts of financial ratios, and interpretation and practice of financial ratio analysis. The results of this study will be used to develop an education program to increase financial-management competency among clinical nurses.
A coordination class analysis of college students' judgments about animated motion
Thaden-Koch, Thomas Christian
The coordination class construct was invented by diSessa and Sherin to clarify what it means to learn and use scientific concepts. A coordination class is defined to consist of readout strategies, which guide observation, and the causal net, which contains knowledge necessary for making inferences from observations. A coordination class, as originally specified, reliably extracts a certain class of information from a variety of situations. The coordination class construct is relatively new. To examine its utility, transcripts of interviews with college students were analyzed in terms of the coordination class construct. In the interviews, students judged the realism of several computer animations depicting balls rolling on a pair of tracks. When shown animations with only one ball, students made judgments consistent with focusing on the ball's speed changes. Adding a second ball to each animation strongly affected judgments made by students taking introductory physics courses, but had a smaller effect on judgments made by students taking a psychology course. Reasoning was described in this analysis as the coordination of readouts about animations with causal net elements related to realistic motion. Decision-making was characterized both for individual students and for groups by the causal net elements expressed, by the types of readouts reported, and by the coordination processes involved. The coordination class construct was found useful for describing the elements and processes of student decision-making, but little evidence was found to suggest that the students studied possessed reliable coordination classes. Students' causal nets were found to include several appropriate expectations about realistic motion. Several students reached judgments that appeared contrary to their expectations and reported mutually incompatible expectations. Descriptions of students' decision-making processes often included faulty readouts, or feedback loops in which causal net
Dynamic Modal Analysis of Vertical Machining Centre Components
Anayet U. Patwari; Waleed F. Faris; A. K. M. Nurul Amin; S. K. Loh
2009-01-01
The paper presents a systematic procedure and details of the use of experimental and analytical modal analysis techniques for the structural dynamic evaluation of a vertical machining centre. The main results deal with assessment of the mode shapes of the different components of the vertical machining centre. A simplified experimental modal analysis of the different components of the milling machine was carried out. A model of the different machine tool structures was made using design software...
Pursuit-evasion game analysis in a line of sight coordinate system
Shinar, J.; Davidovitz, A.
1985-01-01
The paper proposes to use line-of-sight coordinates for the analysis of pursuit-evasion games. The advantage of this method for two-target games is demonstrated. As an illustrative example, the game of two identical cars is formulated and solved in such a coordinate system. A new type of singular surface, overlooked in a previous study of the same problem, is discovered as a consequence of the simplicity of the solution.
Coordinate based random effect size meta-analysis of neuroimaging studies.
Tench, C R; Tanasescu, Radu; Constantinescu, C S; Auer, D P; Cottam, W J
2017-06-01
Low power in neuroimaging studies can make them difficult to interpret, and coordinate based meta-analysis (CBMA) may go some way to mitigating this issue. CBMA has been used in many analyses to detect where published functional MRI or voxel-based morphometry studies testing similar hypotheses report significant summary results (coordinates) consistently. Only the reported coordinates and possibly t statistics are analysed, and statistical significance of clusters is determined by coordinate density. Here a method of performing coordinate based random effect size meta-analysis and meta-regression is introduced. The algorithm (ClusterZ) analyses both coordinates and the reported t statistic or Z score, standardised by the number of subjects. Statistical significance is determined not by coordinate density, but by random-effects meta-analyses of reported effects performed cluster-wise using standard statistical methods and taking account of the censoring inherent in the published summary results. Type 1 error control is achieved using the false cluster discovery rate (FCDR), which is based on the false discovery rate. This controls both the family-wise error rate under the null hypothesis that coordinates are randomly drawn from a standard stereotaxic space, and the proportion of significant clusters that are expected under the null. Such control is necessary to avoid propagating, and even amplifying, the very issues motivating the meta-analysis in the first place. ClusterZ is demonstrated on both numerically simulated data and on real data from reports of grey matter loss in multiple sclerosis (MS) and syndromes suggestive of MS, and of painful stimulus in healthy controls. The software implementation is available to download and use freely.
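The cluster-wise random-effects step can be sketched with the standard DerSimonian-Laird estimator. This is a generic illustration with made-up effect sizes, not the ClusterZ algorithm itself, and it ignores the censoring correction the paper adds.

```python
import math

def random_effects_meta(effects, variances):
    """DerSimonian-Laird random-effects pooling: estimate the
    between-study variance tau^2 from Cochran's Q, then combine
    study effects with weights 1 / (v_i + tau^2)."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                # truncate at zero
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2

# three hypothetical standardised study effects with their variances
pooled, se, tau2 = random_effects_meta([0.3, 0.5, 0.8], [0.04, 0.05, 0.06])
```

In a CBMA setting, each significant cluster would receive such a pooled effect and standard error, from which cluster-wise significance follows.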
Lęski, Szymon; Kublik, Ewa; Swiejkowski, Daniel A; Wróbel, Andrzej; Wójcik, Daniel K
2010-12-01
Local field potentials have good temporal resolution but are blurred due to the slow spatial decay of the electric field. For simultaneous recordings on regular grids one can reconstruct efficiently the current sources (CSD) using the inverse Current Source Density method (iCSD). It is possible to decompose the resultant spatiotemporal information about the current dynamics into functional components using Independent Component Analysis (ICA). We show on test data modeling recordings of evoked potentials on a grid of 4 × 5 × 7 points that meaningful results are obtained with spatial ICA decomposition of reconstructed CSD. The components obtained through decomposition of CSD are better defined and allow easier physiological interpretation than the results of similar analysis of corresponding evoked potentials in the thalamus. We show that spatiotemporal ICA decompositions can perform better for certain types of sources but it does not seem to be the case for the experimental data studied. Having found the appropriate approach to decomposing neural dynamics into functional components we use the technique to study the somatosensory evoked potentials recorded on a grid spanning a large part of the forebrain. We discuss two example components associated with the first waves of activation of the somatosensory thalamus. We show that the proposed method brings up new, more detailed information on the time and spatial location of specific activity conveyed through various parts of the somatosensory thalamus in the rat.
Wang, J.; Pu, Z. Y.; Fu, S. Y.; Wang, X. G.; Xiao, C. J.; Dunlop, M. W.; Wei, Y.; Bogdanova, Y. V.; Zong, Q. G.; Xie, L.
2011-05-01
Previous theoretical and simulation studies have suggested that anti-parallel and component magnetic reconnection (MR) can occur simultaneously on the dayside magnetopause, and certain observations have been reported to support such a globally conjoint reconnection pattern. Here we show direct evidence for the conjunction of anti-parallel and component MR using coordinated observations by Double Star TC-1 and Cluster under the same IMF conditions on 6 April 2004. The global MR X-line configuration constructed is in good agreement with the “S-shape” model.
Error Analysis on Plane-to-Plane Linear Approximate Coordinate ...
Abstract. In this paper, an error analysis has been carried out for the linear approximate transformation between two tangent planes on the celestial sphere in a simple case. The results demonstrate that the error from the linear transformation does not meet the requirements of high-precision astrometry under some conditions, so the ...
System diagnostics using qualitative analysis and component functional classification
Reifman, J.; Wei, T.Y.C.
1993-01-01
A method for detecting and identifying faulty component candidates during off-normal operations of nuclear power plants involves the qualitative analysis of macroscopic imbalances in the conservation equations of mass, energy and momentum in thermal-hydraulic control volumes associated with one or more plant components and the functional classification of components. The qualitative analysis of mass and energy is performed through the associated equations of state, while imbalances in momentum are obtained by tracking mass flow rates which are incorporated into a first knowledge base. The plant components are functionally classified, according to their type, as sources or sinks of mass, energy and momentum, depending upon which of the three balance equations is most strongly affected by a faulty component which is incorporated into a second knowledge base. Information describing the connections among the components of the system forms a third knowledge base. The method is particularly adapted for use in a diagnostic expert system to detect and identify faulty component candidates in the presence of component failures and is not limited to use in a nuclear power plant, but may be used with virtually any type of thermal-hydraulic operating system. 5 figures
Multistage principal component analysis based method for abdominal ECG decomposition
Petrolis, Robertas; Krisciukaitis, Algimantas; Gintautas, Vladas
2015-01-01
Fetal heart electrical activity is reflected in recorded abdominal ECG signals. However, this signal component has noticeably less energy than concurrent signals, especially the maternal ECG, so the traditionally recommended independent component analysis fails to separate the two ECG signals. Multistage principal component analysis (PCA) is proposed for step-by-step extraction of abdominal ECG signal components. Truncated representation and subsequent subtraction of the cardio cycles of the maternal ECG are the first steps. The energy of the fetal ECG component then becomes comparable to, or even exceeds, the energy of the other components in the remaining signal. Second-stage PCA concentrates the energy of the sought signal in one principal component, assuring its maximal amplitude regardless of the orientation of the fetus in multilead recordings. Third-stage PCA is performed on signal excerpts representing detected fetal heart beats, in order to obtain a truncated representation that reconstructs their shape for further analysis. The algorithm was tested with PhysioNet Challenge 2013 signals and with signals recorded in the Department of Obstetrics and Gynecology, Lithuanian University of Health Sciences. The results of our method on the PhysioNet Challenge 2013 open data set were an average score of 341.503 bpm² and 32.81 ms.
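A toy sketch of the first stage, truncated PCA representation and subtraction of the dominant maternal beats, on synthetic data; the rank, signal shapes, and noise level are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for aligned maternal cardio cycles: rows are
# beats, columns are samples within a beat.  A rank-1 "maternal" shape
# dominates; the small residual plays the role of the fetal component.
n_beats, n_samp = 50, 120
maternal_shape = np.exp(-0.5 * ((np.arange(n_samp) - 60) / 5.0) ** 2)
fetal = 0.1 * rng.normal(size=(n_beats, n_samp))
beats = np.outer(rng.normal(1.0, 0.05, n_beats), maternal_shape) + fetal

# Stage 1: truncated (rank-k) PCA representation of the maternal beats,
# whose subtraction leaves the low-energy fetal residual.
k = 1
mean = beats.mean(axis=0)
u, s, vt = np.linalg.svd(beats - mean, full_matrices=False)
maternal_est = mean + (u[:, :k] * s[:k]) @ vt[:k]
residual = beats - maternal_est

print(residual.std() < beats.std())  # maternal energy removed
```

After subtraction the fetal component's relative energy rises, which is what makes the later PCA stages feasible.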
Functional Principal Components Analysis of Shanghai Stock Exchange 50 Index
Zhiliang Wang
2014-01-01
The main purpose of this paper is to explore the principal components of the Shanghai stock exchange 50 index by means of functional principal component analysis (FPCA). Functional data analysis (FDA) deals with random variables (or processes) with realizations in a smooth functional space. One of the most popular FDA techniques is functional principal component analysis, which was introduced for the statistical analysis of a set of financial time series from an explorative point of view. FPCA is the functional analogue of the well-known dimension reduction technique in multivariate statistical analysis, searching for linear transformations of the random vector with maximal variance. In this paper, we studied the monthly return volatility of the Shanghai stock exchange 50 index (SSE50). Using FPCA to reduce the dimension to a finite level, we extracted the most significant components of the data and some relevant statistical features of the related datasets. The calculated results show that regarding the samples as random functions is rational. Compared with ordinary principal component analysis, FPCA can solve the problem of different dimensions in the samples, and it is a convenient approach for extracting the main variance factors.
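On a common sampling grid, FPCA reduces to PCA of the discretised curves. A minimal sketch with synthetic volatility-like curves (all data hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical monthly volatility curves: each row is one "functional"
# observation sampled on a common grid of 12 months.
t = np.linspace(0, 1, 12)
n_curves = 40
scores = rng.normal(size=(n_curves, 2))
basis = np.vstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
curves = scores @ basis + 0.05 * rng.normal(size=(n_curves, 12))

# FPCA on a fixed grid: the right singular vectors of the centred data
# are the discretised principal component functions, and the squared
# singular values rank them by explained variance.
centered = curves - curves.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
print(explained[:2].sum() > 0.9)   # two components dominate
```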
Sparse Principal Component Analysis in Medical Shape Modeling
Sjöstrand, Karl; Stegmann, Mikkel Bille; Larsen, Rasmus
2006-01-01
Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims...... analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of sufficiently small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA...
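A small illustration of the sparsity property with scikit-learn's SparsePCA on synthetic data (the data, penalty, and dimensions are assumptions for demonstration, not the paper's algorithm or shape models):

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(3)

# Toy "shape" data: 100 samples, 10 variables, where only the first
# three variables co-vary strongly (one localised mode of variation).
latent = rng.normal(size=(100, 1))
X = 0.05 * rng.normal(size=(100, 10))
X[:, :3] += latent * np.array([1.0, 0.8, 0.6])

# Sparse PCA penalises the L1 norm of the loadings, driving small ones
# to exactly zero, so the recovered component is supported only on the
# variables that actually vary.
spca = SparsePCA(n_components=1, alpha=1.0, random_state=0)
spca.fit(X)
loadings = spca.components_[0]
print(loadings.shape, int(np.count_nonzero(loadings)))
```

Unlike simple thresholding of a standard PCA loading vector, the sparsity here enters the optimisation itself.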
Efficacy of the Principal Components Analysis Techniques Using ...
Second, the paper reports results of principal components analysis after the artificial data were submitted to three commonly used procedures; scree plot, Kaiser rule, and modified Horn's parallel analysis, and demonstrate the pedagogical utility of using artificial data in teaching advanced quantitative concepts. The results ...
Principal Component Clustering Approach to Teaching Quality Discriminant Analysis
Xian, Sidong; Xia, Haibo; Yin, Yubo; Zhai, Zhansheng; Shang, Yan
2016-01-01
Teaching quality is the lifeline of higher education, and many universities have made effective achievements in evaluating it. In this paper, we establish a Students' Evaluation of Teaching (SET) discriminant analysis model and algorithm based on principal component clustering analysis. Additionally, we classify the SET…
Barton, Ellen J.; Sparks, David L.
2013-01-01
Constant frequency microstimulation of the paramedian pontine reticular formation (PPRF) in head-restrained monkeys evokes a constant velocity eye movement. Since the PPRF receives significant projections from structures that control coordinated eye-head movements, we asked whether stimulation of the pontine reticular formation in the head-unrestrained animal generates a combined eye-head movement or only an eye movement. Microstimulation of most sites yielded a constant-velocity gaze shift executed as a coordinated eye-head movement, although eye-only movements were evoked from some sites. The eye and head contributions to the stimulation-evoked movements varied across stimulation sites and were drastically different from the lawful relationship observed for visually-guided gaze shifts. These results indicate that the microstimulation activated elements that issued movement commands to the extraocular and, for most sites, neck motoneurons. In addition, the stimulation-evoked changes in gaze were similar in the head-restrained and head-unrestrained conditions despite the assortment of eye and head contributions, suggesting that the vestibulo-ocular reflex (VOR) gain must be near unity during the coordinated eye-head movements evoked by stimulation of the PPRF. These findings contrast with the attenuation of VOR gain associated with visually-guided gaze shifts and suggest that the vestibulo-ocular pathway processes volitional and PPRF-stimulation-evoked gaze shifts differently. PMID:18458891
Fault Localization for Synchrophasor Data using Kernel Principal Component Analysis
CHEN, R.
2017-11-01
In this paper, a nonlinear method for fault location in complex power systems is proposed, based on Kernel Principal Component Analysis (KPCA) of Phasor Measurement Unit (PMU) data. Resorting to the scaling factor, the derivative for a polynomial kernel is obtained. Then, the contribution of each variable to the T² statistic is derived to determine whether a bus is the fault component. Compared to previous Principal Component Analysis (PCA) based methods, the new version can cope with strong nonlinearity and provide precise identification of the fault location. Computer simulations are conducted to demonstrate the improved performance of the proposed method in recognizing the fault component and evaluating its propagation across the system.
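A simplified sketch of the idea, fault detection via a Hotelling-style T² statistic in a polynomial-kernel principal subspace, using scikit-learn's KernelPCA on synthetic bus measurements (the paper's per-variable contribution derivation is not reproduced here):

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(4)

# Training data: normal operating measurements from 6 hypothetical buses.
X_train = rng.normal(size=(200, 6))
kpca = KernelPCA(n_components=3, kernel="poly", degree=2)
scores_train = kpca.fit_transform(X_train)
var = scores_train.var(axis=0)

def t2(x):
    """Hotelling-style T^2 statistic in the kernel principal subspace."""
    s = kpca.transform(x.reshape(1, -1))[0]
    return float(np.sum(s ** 2 / var))

# A faulted sample: bus 2 deviates far from its normal operating range,
# inflating T^2 well beyond that of a normal sample.
normal = rng.normal(size=6)
fault = normal.copy()
fault[2] += 8.0
print(t2(fault) > t2(normal))
```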
Clinical usefulness of physiological components obtained by factor analysis
Ohtake, Eiji; Murata, Hajime; Matsuda, Hirofumi; Yokoyama, Masao; Toyama, Hinako; Satoh, Tomohiko.
1989-01-01
The clinical usefulness of physiological components obtained by factor analysis was assessed in 99mTc-DTPA renography. Using definite physiological components, other dynamic data could be analyzed. In this paper, the dynamic renal function after ESWL (Extracorporeal Shock Wave Lithotripsy) treatment was examined using physiological components in the kidney before ESWL and/or a normal kidney. We could easily evaluate the change of renal function by this method. The usefulness of the new analysis using physiological components can be summarized as follows: 1) the change of a dynamic function can be assessed quantitatively as the change of the contribution ratio; 2) the change of a disease condition can be evaluated morphologically as the change of the functional image.
Independent component analysis based filtering for penumbral imaging
Chen Yenwei; Han Xianhua; Nozaki, Shinya
2004-01-01
We propose a filtering method based on independent component analysis (ICA) for Poisson noise reduction. In the proposed filtering, the image is first transformed to the ICA domain and then the noise components are removed by soft thresholding (shrinkage). The proposed filter, which is used as a preprocessing step for the reconstruction, has been successfully applied to penumbral imaging. Both simulation and experimental results show that the reconstructed image is dramatically improved in comparison to that obtained without the noise-removing filter.
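The shrinkage step can be sketched as follows; the "learned" basis here is a random stand-in for an ICA basis trained on image patches, so only the soft-thresholding mechanics are illustrative:

```python
import numpy as np

def soft_threshold(coeffs, thr):
    """Soft shrinkage: move coefficients toward zero by thr, clip at zero."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)

# ICA-domain denoising sketch: transform with a (pre-learned) unmixing
# matrix W, shrink the component coefficients, transform back.
rng = np.random.default_rng(5)
W = rng.normal(size=(8, 8))        # hypothetical stand-in for a learned basis
W_inv = np.linalg.inv(W)

patch = rng.poisson(5.0, size=8).astype(float)   # noisy 1-D "patch"
coeffs = W @ patch
denoised = W_inv @ soft_threshold(coeffs, thr=0.5)
print(denoised.shape)
```

Coefficients below the threshold, which tend to be noise in a sparsifying basis, are zeroed, while large signal coefficients survive with a small bias.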
Numerical analysis of magnetoelastic coupled buckling of fusion reactor components
Demachi, K.; Yoshida, Y.; Miya, K.
1994-01-01
For a tokamak fusion reactor, one of the most important subjects is to establish a structural design whose components can withstand the strong magnetic forces induced by plasma disruption. A number of magnetostructural analyses of fusion reactor components have been done recently. However, in these studies the structural behavior was calculated based on small-deformation theory, where nonlinearity is neglected, even though it is known that some kinds of structures easily enter the geometrically nonlinear regime. In this paper, the deflection and the magnetoelastic buckling load of fusion reactor components during plasma disruption were calculated.
Computer compensation for NMR quantitative analysis of trace components
Nakayama, T.; Fujiwara, Y.
1981-01-01
A computer program has been written that determines trace components and separates overlapping components in multicomponent NMR spectra. The program uses the Lorentzian curve as the theoretical line shape of NMR spectra. The coefficients of the Lorentzian are determined by the method of least squares. Systematic errors such as baseline and phase distortion are compensated, and random errors are smoothed by taking moving averages; these processes contribute substantially to decreasing the accumulation time of spectral data. The accuracy of quantitative analysis of trace components has been improved by two significant figures. The program was applied to determining the abundance of 13C and the degree of saponification of PVA.
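A minimal sketch of Lorentzian least-squares fitting of overlapping peaks with SciPy's curve_fit (peak positions, widths, and noise are hypothetical, not the program's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, x0, gamma):
    """Lorentzian line shape used as the theoretical NMR peak model."""
    return amp * gamma ** 2 / ((x - x0) ** 2 + gamma ** 2)

def two_peaks(x, a1, c1, g1, a2, c2, g2):
    """Sum of two Lorentzians for overlapping components."""
    return lorentzian(x, a1, c1, g1) + lorentzian(x, a2, c2, g2)

# Synthetic spectrum: a large peak overlapping a small "trace" peak.
x = np.linspace(-10, 10, 400)
truth = lorentzian(x, 10.0, -1.0, 1.0) + lorentzian(x, 0.5, 2.5, 0.8)
rng = np.random.default_rng(6)
y = truth + 0.02 * rng.normal(size=x.size)

# Least-squares fit recovers both components, including the trace peak.
popt, _ = curve_fit(two_peaks, x, y, p0=[8, -1.5, 1, 1, 2, 1])
print(np.round(popt[3], 1))  # recovered trace-peak amplitude
```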
Multi-component separation and analysis of bat echolocation calls.
DiCecco, John; Gaudette, Jason E; Simmons, James A
2013-01-01
The vast majority of animal vocalizations contain multiple frequency modulated (FM) components with varying amounts of non-linear modulation and harmonic instability. This is especially true of biosonar sounds where precise time-frequency templates are essential for neural information processing of echoes. Understanding the dynamic waveform design by bats and other echolocating animals may help to improve the efficacy of man-made sonar through biomimetic design. Bats are known to adapt their call structure based on the echolocation task, proximity to nearby objects, and density of acoustic clutter. To interpret the significance of these changes, a method was developed for component separation and analysis of biosonar waveforms. Techniques for imaging in the time-frequency plane are typically limited due to the uncertainty principle and interference cross terms. This problem is addressed by extending the use of the fractional Fourier transform to isolate each non-linear component for separate analysis. Once separated, empirical mode decomposition can be used to further examine each component. The Hilbert transform may then successfully extract detailed time-frequency information from each isolated component. This multi-component analysis method is applied to the sonar signals of four species of bats recorded in-flight by radiotelemetry along with a comparison of other common time-frequency representations.
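For a single isolated component, the Hilbert-transform step can be sketched with SciPy: the analytic signal's unwrapped phase yields the instantaneous frequency of a synthetic FM sweep (the sweep parameters are illustrative, not measured bat calls):

```python
import numpy as np
from scipy.signal import hilbert

# Hypothetical FM sweep standing in for one isolated biosonar component:
# frequency falls linearly from 80 kHz to 40 kHz over 5 ms.
fs = 500_000
t = np.arange(0, 0.005, 1 / fs)
f_inst_true = 80_000 + (40_000 - 80_000) * t / t[-1]
phase = 2 * np.pi * np.cumsum(f_inst_true) / fs
sig = np.cos(phase)

# The analytic signal gives the amplitude envelope and unwrapped phase;
# differentiating the phase recovers the instantaneous frequency.
analytic = hilbert(sig)
inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)

mid = len(inst_freq) // 2
print(round(inst_freq[mid] / 1000))  # mid-sweep frequency in kHz
```

This only works cleanly on a mono-component signal, which is why the component-separation step (fractional Fourier transform, empirical mode decomposition) must come first.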
Towards successful coordination of electronic health record based-referrals: a qualitative analysis.
Hysong, Sylvia J; Esquivel, Adol; Sittig, Dean F; Paul, Lindsey A; Espadas, Donna; Singh, Simran; Singh, Hardeep
2011-07-27
Successful subspecialty referrals require considerable coordination and interactive communication among the primary care provider (PCP), the subspecialist, and the patient, which may be challenging in the outpatient setting. Even when referrals are facilitated by electronic health records (EHRs) (i.e., e-referrals), lapses in patient follow-up might occur. Although compelling reasons exist why referral coordination should be improved, little is known about which elements of the complex referral coordination process should be targeted for improvement. Using Okhuysen & Bechky's coordination framework, this paper aims to understand the barriers, facilitators, and suggestions for improving communication and coordination of EHR-based referrals in an integrated healthcare system. We conducted a qualitative study to understand coordination breakdowns related to e-referrals in an integrated healthcare system and examined work-system factors that affect the timely receipt of subspecialty care. We conducted interviews with seven subject matter experts and six focus groups with a total of 30 PCPs and subspecialists at two tertiary care Department of Veterans Affairs (VA) medical centers. Using techniques from grounded theory and content analysis, we identified organizational themes that affected the referral process. Four themes emerged: lack of an institutional referral policy, lack of standardization in certain referral procedures, ambiguity in roles and responsibilities, and inadequate resources to adapt and respond to referral requests effectively. Marked differences in PCPs' and subspecialists' communication styles and individual mental models of the referral processes likely precluded the development of a shared mental model to facilitate coordination and successful referral completion. Notably, very few barriers related to the EHR were reported. Despite facilitating information transfer between PCPs and subspecialists, e-referrals remain prone to coordination breakdowns.
Condition monitoring with Mean field independent components analysis
Pontoppidan, Niels Henrik; Sigurdsson, Sigurdur; Larsen, Jan
2005-01-01
We discuss condition monitoring based on mean field independent components analysis of acoustic emission energy signals. Within this framework it is possible to formulate a generative model that explains the sources, their mixing and also the noise statistics of the observed signals. By using...... a novelty approach we may detect unseen faulty signals as indeed faulty with high precision, even though the model learns only from normal signals. This is done by evaluating the likelihood that the model generated the signals and adapting a simple threshold for decision. Acoustic emission energy signals...... from a large diesel engine are used to demonstrate this approach. The results show that mean field independent components analysis gives better fault detection than principal components analysis, while at the same time selecting a more compact model...
Independent component analysis for automatic note extraction from musical trills
Brown, Judith C.; Smaragdis, Paris
2004-05-01
The method of principal component analysis, which is based on second-order statistics (or linear independence), has long been used for redundancy reduction of audio data. The more recent technique of independent component analysis, enforcing much stricter statistical criteria based on higher-order statistical independence, is introduced and shown to be far superior in separating independent musical sources. This theory has been applied to piano trills and a database of trill rates was assembled from experiments with a computer-driven piano, recordings of a professional pianist, and commercially available compact disks. The method of independent component analysis has thus been shown to be an outstanding, effective means of automatically extracting interesting musical information from a sea of redundant data.
Signal-dependent independent component analysis by tunable mother wavelets
Seo, Kyung Ho
2006-02-01
The objective of this study is to improve standard independent component analysis when applied to real-world signals. Independent component analysis starts from the assumption that signals from different physical sources are statistically independent. But real-world signals such as EEG, ECG, MEG, and fMRI signals are not perfectly statistically independent. By definition, standard independent component analysis algorithms are not able to estimate statistically dependent sources, that is, when the assumption of independence does not hold. Therefore some preprocessing stage is needed before independent component analysis. This paper started from the simple intuition that source signals wavelet-transformed by a 'well-tuned' mother wavelet will be simplified sufficiently, so that the source separation will show better results. The tuning between the source signal and the tunable mother wavelet was carried out using the correlation coefficient method. The gamma component of a raw EEG signal was set as the target signal, and the wavelet transform was executed with the tuned mother wavelet and with standard mother wavelets. Simulation results for these wavelets are shown.
Contact- and distance-based principal component analysis of protein dynamics
Ernst, Matthias; Sittel, Florian; Stock, Gerhard, E-mail: stock@physik.uni-freiburg.de [Biomolecular Dynamics, Institute of Physics, Albert Ludwigs University, 79104 Freiburg (Germany)
2015-12-28
To interpret molecular dynamics simulations of complex systems, systematic dimensionality reduction methods such as principal component analysis (PCA) represent a well-established and popular approach. Apart from Cartesian coordinates, internal coordinates, e.g., backbone dihedral angles or various kinds of distances, may be used as input data in a PCA. Adopting two well-known model problems, folding of villin headpiece and the functional dynamics of BPTI, a systematic study of PCA using distance-based measures is presented which employs distances between Cα-atoms as well as distances between inter-residue contacts including side chains. While this approach seems prohibitive for larger systems due to the quadratic scaling of the number of distances with the size of the molecule, it is shown that it is sufficient (and sometimes even better) to include only relatively few selected distances in the analysis. The quality of the PCA is assessed by considering the resolution of the resulting free energy landscape (to identify metastable conformational states and barriers) and the decay behavior of the corresponding autocorrelation functions (to test the time scale separation of the PCA). By comparing results obtained with distance-based, dihedral angle, and Cartesian coordinates, the study shows that the choice of input variables may drastically influence the outcome of a PCA.
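A toy version of distance-based PCA on a synthetic "trajectory" (the residue count, frame count, and two-state hop are invented for illustration, not the villin or BPTI systems):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)

# Toy trajectory: 300 frames of 5 "residues" in 3-D; residue 4 hops
# between two conformational states (a two-state folding caricature).
n_frames, n_res = 300, 5
coords = 0.1 * rng.normal(size=(n_frames, n_res, 3))
coords += np.arange(n_res)[None, :, None]          # extended backbone
state = rng.integers(0, 2, n_frames)
coords[:, 4, 0] += 2.0 * state                      # hopping residue

# Distance-based PCA input: all inter-residue distances per frame.
pairs = list(combinations(range(n_res), 2))
dists = np.array([[np.linalg.norm(c[i] - c[j]) for i, j in pairs]
                  for c in coords])

centered = dists - dists.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = centered @ vt[0]
# PC1 should separate the two conformational states.
print(abs(pc1[state == 1].mean() - pc1[state == 0].mean()) > 1.0)
```

Because distances are internal coordinates, no rotational/translational alignment of frames is needed, one practical advantage over Cartesian PCA.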
Automatic ECG analysis using principal component analysis and wavelet transformation
Khawaja, Antoun
2007-01-01
The main objective of this book is to analyse and detect small changes in ECG waves and complexes that indicate cardiac diseases and disorders. Detecting predisposition to Torsade de Pointes (TdP) by analysing the beat-to-beat variability in T-wave morphology is the core of this work. The second main topic is detecting small changes in the QRS complex and predicting future QRS complexes of patients. The last main topic is clustering similar ECG components into different groups.
Bouhlel, Jihéne; Jouan-Rimbaud Bouveresse, Delphine; Abouelkaram, Said; Baéza, Elisabeth; Jondreville, Catherine; Travel, Angélique; Ratel, Jérémy; Engel, Erwan; Rutledge, Douglas N
2018-02-01
The aim of this work is to compare a novel exploratory chemometrics method, Common Components Analysis (CCA), with Principal Components Analysis (PCA) and Independent Components Analysis (ICA). CCA consists in adapting the multi-block statistical method known as Common Components and Specific Weights Analysis (CCSWA or ComDim) by applying it to a single data matrix, with one variable per block. As an application, the three methods were applied to SPME-GC-MS volatolomic signatures of livers in an attempt to reveal volatile organic compound (VOC) markers of chicken exposure to different types of micropollutants. An application of CCA to the initial SPME-GC-MS data revealed a drift in the sample Scores along CC2, as a function of injection order, probably resulting from time-related evolution in the instrument. This drift was eliminated by orthogonalization of the data set with respect to CC2, and the resulting data are used as the orthogonalized data input into each of the three methods. Since the first step in CCA is to norm-scale all the variables, preliminary data scaling has no effect on the results, so that CCA was applied only to orthogonalized SPME-GC-MS data, while PCA and ICA were applied to the "orthogonalized", "orthogonalized and Pareto-scaled", and "orthogonalized and autoscaled" data. The comparison showed that PCA results were highly dependent on the scaling of variables, contrary to ICA where the data scaling did not have a strong influence. Nevertheless, for both PCA and ICA the clearest separations of exposed groups were obtained after autoscaling of variables. The main part of this work was to compare the CCA results using the orthogonalized data with those obtained with PCA and ICA applied to orthogonalized and autoscaled variables. The clearest separations of exposed chicken groups were obtained by CCA. CCA Loadings also clearly identified the variables contributing most to the Common Components giving separations. The PCA Loadings did not
Fatigue Reliability Analysis of Wind Turbine Cast Components
Rafsanjani, Hesam Mirzaei; Sørensen, John Dalsgaard; Fæster, Søren
2017-01-01
The fatigue life of wind turbine cast components, such as the main shaft in a drivetrain, is generally determined by defects from the casting process. These defects may reduce the fatigue life and they are generally distributed randomly in components. The foundries, cutting facilities and test facilities can affect the verification of properties by testing. Hence, it is important to have a tool to identify which foundry, cutting and/or test facility produces components which, based on the relevant uncertainties, have the largest expected fatigue life or, alternatively, have the largest reliability, and to quantify the relevant uncertainties using available fatigue tests. Illustrative results are presented as obtained by statistical analysis of a large set of fatigue data for cast test components typically used for wind turbines. Furthermore, the SN curves (fatigue life curves based on applied stress) ...
Independent component analysis in non-hypothesis driven metabolomics
Li, Xiang; Hansen, Jakob; Zhao, Xinjie
2012-01-01
In a non-hypothesis driven metabolomics approach plasma samples collected at six different time points (before, during and after an exercise bout) were analyzed by gas chromatography-time of flight mass spectrometry (GC-TOF MS). Since independent component analysis (ICA) does not need a priori...... information on the investigated process and moreover can separate statistically independent source signals with non-Gaussian distribution, we aimed to elucidate the analytical power of ICA for the metabolic pattern analysis and the identification of key metabolites in this exercise study. A novel approach...... based on descriptive statistics was established to optimize ICA model. In the GC-TOF MS data set the number of principal components after whitening and the number of independent components of ICA were optimized and systematically selected by descriptive statistics. The elucidated dominating independent...
Kumar, Keshav; Cava, Felipe
2018-04-10
In the present work, principal coordinate analysis (PCoA) is introduced to develop a robust model to classify chromatographic data sets of peptidoglycan samples. PCoA captures the heterogeneity present in the data sets by using the dissimilarity matrix as input. Thus, in principle, it can capture even subtle differences in bacterial peptidoglycan composition and can provide a more robust and fast approach for classifying bacterial collections and identifying novel cell wall targets for further biological and clinical studies. The utility of the proposed approach is demonstrated by analysing two different kinds of bacterial collections. The first set comprised peptidoglycan samples belonging to different subclasses of Alphaproteobacteria. The second set, which is more intricate for chemometric analysis, consists of different wild-type Vibrio cholerae strains and mutants having subtle differences in their peptidoglycan composition. The present work proposes a useful approach that can classify chromatographic data sets of peptidoglycan samples having subtle differences, and suggests that PCoA can be a method of choice in any data analysis workflow.
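Classical PCoA can be sketched in a few lines: double-centre the squared dissimilarity matrix and take the top eigenvectors. The dissimilarity values below are hypothetical, not measured peptidoglycan data:

```python
import numpy as np

def pcoa(D, k=2):
    """Classical PCoA: embed a dissimilarity matrix D into k coordinates."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                    # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1]                 # largest eigenvalues first
    vals, vecs = vals[order[:k]], vecs[:, order[:k]]
    return vecs * np.sqrt(np.maximum(vals, 0.0))

# Hypothetical pairwise dissimilarities between 4 chromatographic
# profiles: samples {0,1} and {2,3} form two groups far apart.
D = np.array([[0.0, 0.1, 1.0, 1.1],
              [0.1, 0.0, 1.1, 1.0],
              [1.0, 1.1, 0.0, 0.1],
              [1.1, 1.0, 0.1, 0.0]])
coords = pcoa(D)
# The first coordinate separates the two groups.
print(np.sign(coords[0, 0]) == np.sign(coords[1, 0]))
print(np.sign(coords[0, 0]) != np.sign(coords[2, 0]))
```

Because only a dissimilarity matrix is required, any distance appropriate to the chromatograms can be plugged in, which is the flexibility the abstract emphasises.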
Analysis methods for structure reliability of piping components
Schimpfke, T.; Grebner, H.; Sievers, J.
2004-01-01
In the framework of the German reactor safety research program of the Federal Ministry of Economics and Labour (BMWA), GRS has started to develop an analysis code named PROST (PRObabilistic STructure analysis) for estimating the leak and break probabilities of piping systems in nuclear power plants. The long-term objective of this development is to provide failure probabilities of passive components for probabilistic safety analyses of nuclear power plants. Up to now the code can be used for calculating fatigue problems. The paper outlines the main capabilities and theoretical background of the present PROST development and presents some results of a benchmark analysis in the frame of the European project NURBIM (Nuclear Risk Based Inspection Methodologies for Passive Components).
Steed, Chad A; Fitzpatrick, Patrick J; Jankun-Kelly, T. J; Swan II, J. E
2008-01-01
... for a particular dependent variable. These capabilities are combined into a unique visualization system that is demonstrated via a North Atlantic hurricane climate study using a systematic workflow. This research corroborates the notion that enhanced parallel coordinates coupled with statistical analysis can be used for more effective knowledge discovery and confirmation in complex, real-world data sets.
The analysis of multivariate group differences using common principal components
Bechger, T.M.; Blanca, M.J.; Maris, G.
2014-01-01
Although it is simple to determine whether multivariate group differences are statistically significant or not, such differences are often difficult to interpret. This article is about common principal components analysis as a tool for the exploratory investigation of multivariate group differences.
Principal Component Analysis: Most Favourite Tool in Chemometrics
Abstract. Principal component analysis (PCA) is the most commonly used chemometric technique. It is an unsupervised pattern recognition technique. PCA has found applications in chemistry, biology, medicine and economics. The present work attempts to explain how PCA works and how its results can be interpreted.
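A minimal PCA via SVD of the mean-centred data matrix, on synthetic correlated variables (a generic sketch, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy chemometric table: 50 samples, 4 correlated variables driven by
# one latent factor plus measurement noise.
latent = rng.normal(size=(50, 1))
X = latent @ np.array([[1.0, 0.9, 0.8, 0.1]]) + 0.1 * rng.normal(size=(50, 4))

# PCA by SVD of the mean-centred matrix: rows of Vt are loadings,
# U * S are scores, and S^2 gives the variance explained per component.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S ** 2 / np.sum(S ** 2)
print(explained[0] > 0.9)   # one latent factor dominates
```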
Scalable Robust Principal Component Analysis Using Grassmann Averages
Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi
2016-01-01
In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortu...
Reliability Analysis of Fatigue Fracture of Wind Turbine Drivetrain Components
Berzonskis, Arvydas; Sørensen, John Dalsgaard
2016-01-01
in the volume of the cast ductile iron main shaft, on the reliability of the component. The probabilistic reliability analysis conducted is based on fracture mechanics models. Additionally, the use of probabilistic reliability analysis for operation and maintenance planning and quality control is discussed....
Principal component analysis of image gradient orientations for face recognition
Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja
We introduce the notion of Principal Component Analysis (PCA) of image gradient orientations. As image data is typically noisy, and the noise is substantially non-Gaussian, traditional PCA of pixel intensities very often fails to reliably estimate the low-dimensional subspace of a given data
Adaptive tools in virtual environments: Independent component analysis for multimedia
Kolenda, Thomas
2002-01-01
The thesis investigates the role of independent component analysis in the setting of virtual environments, with the purpose of finding properties that reflect human context. A general framework for performing unsupervised classification with ICA is presented in extension to the latent semantic in...... were compared to investigate computational differences and separation results. The ICA properties were finally implemented in a chat room analysis tool and briefly investigated for visualization of search engines results....
PEMBUATAN PERANGKAT LUNAK PENGENALAN WAJAH MENGGUNAKAN PRINCIPAL COMPONENTS ANALYSIS
Kartika Gunadi
2001-01-01
Face recognition is one of many important research areas, and today many applications implement it. Through the development of techniques like Principal Components Analysis (PCA), computers can now outperform humans in many face recognition tasks, particularly those in which a large database of faces must be searched. Principal Components Analysis was used to reduce the facial image dimension into fewer variables, which are easier to observe and handle. Those variables were then fed into an artificial neural network using the backpropagation method to recognise the given facial image. The test results show that PCA can provide high face recognition accuracy. For the training faces, a correct identification of 100% could be obtained. From the network combinations that were tested, a best average correct identification of 91.11% could be obtained for the test faces, while the worst average result was 46.67% correct identification.
Omar, Mohamed A
2014-01-01
Initial transient oscillations exhibited in the dynamic simulation responses of multibody systems can lead to inaccurate results, unrealistic load predictions, or simulation failure. These transients can result from incompatible initial conditions, initial constraint violations, or inadequate kinematic assembly. Performing a static equilibrium analysis before the dynamic simulation can eliminate these transients and lead to a stable simulation. Most existing multibody formulations determine the static equilibrium position by minimizing the system potential energy. This paper presents a new general-purpose approach for solving the static equilibrium of large-scale articulated multibody systems. The proposed approach introduces an energy drainage mechanism based on the Baumgarte constraint stabilization approach to determine the static equilibrium position. The spatial algebra operator is used to express the kinematic and dynamic equations of the closed-loop multibody system. The proposed multibody system formulation uses the joint coordinates and modal elastic coordinates as the system generalized coordinates. The recursive nonlinear equations of motion are formulated using the Cartesian coordinates and the joint coordinates to form an augmented set of differential algebraic equations. The system connectivity matrix is then derived from the system topological relations and used to project the Cartesian quantities into the joint subspace, leading to a minimum set of differential equations.
Principal Component Analysis - A Powerful Tool in Computing Marketing Information
Constantin C.
2014-12-01
This paper reports instrumental research on a powerful multivariate data analysis method that researchers can use to obtain valuable information for decision makers who need to solve the marketing problems a company faces. The literature stresses the need to avoid multicollinearity in multivariate analysis, and the ability of Principal Component Analysis (PCA) to reduce a set of variables that may be correlated with each other to a small number of uncorrelated principal components. In this respect, the paper presents step by step the process of applying PCA in marketing research when a large number of naturally collinear variables is used.
Experimental modal analysis of components of the LHC experiments
Guinchard, M; Catinaccio, A; Kershaw, K; Onnela, A
2007-01-01
Experimental modal analysis of components of the LHC experiments is performed with the purpose of determining their fundamental frequencies, their damping, and the mode shapes of light and fragile detector components. This process makes it possible to confirm or replace finite element analysis in the case of complex structures (with cables and substructure coupling). It helps solve structural mechanical problems to improve operational stability and to determine the acceleration specifications for transport operations. This paper describes the hardware and software equipment used to perform a modal analysis on particular structures, such as a particle detector, and the curve-fitting method used to extract the results of the measurements. The paper also presents the main results obtained for the LHC experiments.
Vibrational spectra and normal co-ordinate analysis of 2-aminopyridine and 2-amino picoline.
Jose, Sujin P; Mohan, S
2006-05-01
The Fourier transform infrared (FT-IR) and Raman (FT-R) spectra of 2-aminopyridine and 2-amino picoline were recorded and the observed frequencies were assigned to various modes of vibration in terms of fundamentals by assuming Cs point group symmetry. A normal co-ordinate analysis was also carried out for the proper assignment of the vibrational frequencies using simple valence force field. A complete vibrational analysis is presented here for the molecules and the results are briefly discussed.
APPLICATION OF PRINCIPAL COMPONENT ANALYSIS TO RELAXOGRAPHIC IMAGES
STOYANOVA, R.S.; OCHS, M.F.; BROWN, T.R.; ROONEY, W.D.; LI, X.; LEE, J.H.; SPRINGER, C.S.
1999-01-01
Standard analysis methods for processing inversion recovery MR images have traditionally used single-pixel techniques. In these techniques each pixel is independently fit to an exponential recovery, and spatial correlations in the data set are ignored. By analyzing the image as a complete dataset, improved error analysis and automatic segmentation can be achieved. Here, the authors apply principal component analysis (PCA) to a series of relaxographic images. This procedure decomposes the 3-dimensional data set into three separate images and corresponding recovery times. They attribute the three images to spatial representations of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) content
Sparse logistic principal components analysis for binary data
Lee, Seokho
2010-09-01
We develop a new principal components analysis (PCA) type dimension reduction method for binary data. Different from the standard PCA which is defined on the observed data, the proposed PCA is defined on the logit transform of the success probabilities of the binary observations. Sparsity is introduced to the principal component (PC) loading vectors for enhanced interpretability and more stable extraction of the principal components. Our sparse PCA is formulated as solving an optimization problem with a criterion function motivated from a penalized Bernoulli likelihood. A Majorization-Minimization algorithm is developed to efficiently solve the optimization problem. The effectiveness of the proposed sparse logistic PCA method is illustrated by application to a single nucleotide polymorphism data set and a simulation study. © Institute of Mathematical Statistics, 2010.
Components of Program for Analysis of Spectra and Their Testing
Ivan Taufer
2013-11-01
The spectral analysis of aqueous solutions of multi-component mixtures is used for identifying and distinguishing individual components in the mixture and subsequently determining the protonation constants and absorptivities of differently protonated species in the solution at steady state (Meloun and Havel 1985; Leggett 1985). Apart from that, distribution diagrams are also determined, i.e. the concentration proportions of the individual components at different pH values. The spectra are measured with various concentrations of the basic components (one or several polyvalent weak acids or bases) and various pH values within the chosen range of wavelengths. The obtained absorbance response area has to be analyzed by non-linear regression using specialized algorithms. These algorithms have to meet certain requirements concerning the possibility of calculations and the level of outputs. A typical example is the SQUAD(84) program, which was gradually modified and extended; see, e.g., (Meloun et al. 1986; Meloun et al. 2012).
Optimization benefits analysis in production process of fabrication components
Prasetyani, R.; Rafsanjani, A. Y.; Rimantho, D.
2017-12-01
The determination of an optimal number of product combinations is important. The main problem at the part and service department of PT. United Tractors Pandu Engineering (PT. UTPE) is the optimization of the combination of fabrication component products (known as liner plates), which influences the profit obtained by the company. A liner plate is a fabrication component that serves as a protector of the core structure of heavy-duty attachments, such as the HD Vessel, HD Bucket, HD Shovel, and HD Blade. Liner plate sales from January to December 2016 fluctuated, and no direct conclusion could be drawn about the optimal production of these fabrication components. The optimal product combination can be achieved by calculating and plotting the amount of production output and input appropriately. The method used in this study is linear programming with primal, dual, and sensitivity analysis, using QM software for Windows to obtain the optimal fabrication components. With the optimal combination of components, PT. UTPE could increase profit by Rp. 105,285,000.00, to a total of Rp. 3,046,525,000.00 per month, with a combined production of 71 units across unit variants per month.
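The product-mix optimization described above can be sketched with a standard linear programming solver. The profits and capacities below are invented for illustration; they are not PT. UTPE's actual data:

```python
from scipy.optimize import linprog

# Hypothetical product-mix problem for two liner-plate variants:
# maximize 40*x1 + 30*x2 subject to machining-hour and material limits.
profit = [-40.0, -30.0]          # negated: linprog minimizes
A_ub = [[2.0, 1.0],              # machining hours per unit of each variant
        [1.0, 3.0]]              # material sheets per unit of each variant
b_ub = [100.0, 90.0]             # monthly capacities for hours and sheets
res = linprog(profit, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * 2, method="highs")
x1, x2 = res.x                   # optimal production quantities
max_profit = -res.fun            # undo the negation
```

The dual values and sensitivity ranges that the abstract mentions are also available from the HiGHS solution (e.g. via `res.ineqlin.marginals` in recent SciPy versions), which indicate how much an extra hour or sheet of capacity would be worth.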
Multi-spectrometer calibration transfer based on independent component analysis.
Liu, Yan; Xu, Hao; Xia, Zhenzhen; Gong, Zhiyong
2018-02-26
Calibration transfer is indispensable for practical applications of near infrared (NIR) spectroscopy due to the need for precise and consistent measurements across different spectrometers. In this work, a method for multi-spectrometer calibration transfer based on independent component analysis (ICA) is described. A spectral matrix is first obtained by aligning the spectra measured on different spectrometers. Then, using independent component analysis, the aligned spectral matrix is decomposed into the mixing matrix and the independent components of the different spectrometers. The differing measurements between spectrometers can then be standardized by correcting the coefficients within the independent components. Two NIR datasets, of corn and edible oil samples measured with three and four spectrometers respectively, were used to test the reliability of this method. The results for both datasets reveal that spectra measured across different spectrometers can be transferred simultaneously, and that partial least squares (PLS) models built with the measurements on one spectrometer can correctly predict spectra transferred from another.
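The decomposition step underlying this kind of method can be illustrated with a miniature symmetric FastICA (tanh nonlinearity). This is a generic ICA sketch on synthetic signals, not the authors' exact procedure or data:

```python
import numpy as np

def fast_ica(X, n_iter=300, seed=0):
    """Tiny symmetric FastICA for illustration.
    X: (n_signals, n_samples) matrix of mixed observations."""
    n, m = X.shape
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten via eigendecomposition of the covariance matrix
    d, E = np.linalg.eigh(np.cov(X))
    Z = (E @ np.diag(d ** -0.5) @ E.T) @ X
    W = np.random.default_rng(seed).normal(size=(n, n))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        # FastICA fixed-point update: E[g(y)z^T] - E[g'(y)] W
        W_new = (G @ Z.T) / m - np.diag((1 - G ** 2).mean(axis=1)) @ W
        # Symmetric decorrelation: W <- (W W^T)^{-1/2} W
        u, _, vt = np.linalg.svd(W_new)
        W = u @ vt
    return W @ Z                     # estimated independent components

t = np.linspace(0, 8, 2000)
S = np.vstack([np.sin(2 * t), np.sign(np.cos(3 * t))])  # two sources
A = np.array([[1.0, 0.6], [0.5, 1.0]])                  # mixing matrix
est = fast_ica(A @ S)
```

The recovered components match the sources up to sign and permutation, which is the well-known ICA ambiguity that calibration-transfer schemes must handle when matching components across spectrometers.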
Monzón-Sandoval, Jimena; Castillo-Morales, Atahualpa; Crampton, Sean; McKelvey, Laura; Nolan, Aoife; O'Keeffe, Gerard; Gutierrez, Humberto
2015-01-01
During development, the nervous system (NS) is assembled and sculpted through a concerted series of neurodevelopmental events orchestrated by a complex genetic programme. While neural-specific gene expression plays a critical part in this process, in recent years, a number of immune-related signaling and regulatory components have also been shown to play key physiological roles in the developing and adult NS. While the involvement of individual immune-related signaling components in neural functions may reflect their ubiquitous character, it may also reflect a much wider, as yet undescribed, genetic network of immune-related molecules acting as an intrinsic component of the neural-specific regulatory machinery that ultimately shapes the NS. In order to gain insights into the scale and wider functional organization of immune-related genetic networks in the NS, we examined the large scale pattern of expression of these genes in the brain. Our results show a highly significant correlated expression and transcriptional clustering among immune-related genes in the developing and adult brain, and this correlation was the highest in the brain when compared to muscle, liver, kidney and endothelial cells. We experimentally tested the regulatory clustering of immune system (IS) genes by using microarray expression profiling in cultures of dissociated neurons stimulated with the pro-inflammatory cytokine TNF-alpha, and found a highly significant enrichment of immune system-related genes among the resulting differentially expressed genes. Our findings strongly suggest a coherent recruitment of entire immune-related genetic regulatory modules by the neural-specific genetic programme that shapes the NS.
Probabilistic structural analysis methods for select space propulsion system components
Millwater, H. R.; Cruse, T. A.
1989-01-01
The Probabilistic Structural Analysis Methods (PSAM) project developed at the Southwest Research Institute integrates state-of-the-art structural analysis techniques with probability theory for the design and analysis of complex large-scale engineering structures. An advanced efficient software system (NESSUS) capable of performing complex probabilistic analysis has been developed. NESSUS contains a number of software components to perform probabilistic analysis of structures. These components include: an expert system, a probabilistic finite element code, a probabilistic boundary element code and a fast probability integrator. The NESSUS software system is shown. An expert system is included to capture and utilize PSAM knowledge and experience. NESSUS/EXPERT is an interactive menu-driven expert system that provides information to assist in the use of the probabilistic finite element code NESSUS/FEM and the fast probability integrator (FPI). The expert system menu structure is summarized. The NESSUS system contains a state-of-the-art nonlinear probabilistic finite element code, NESSUS/FEM, to determine the structural response and sensitivities. A broad range of analysis capabilities and an extensive element library is present.
Fetal source extraction from magnetocardiographic recordings by dependent component analysis
Araujo, Draulio B de [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Barros, Allan Kardec [Department of Electrical Engineering, Federal University of Maranhao, Sao Luis, Maranhao (Brazil); Estombelo-Montesco, Carlos [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Zhao, Hui [Department of Medical Physics, University of Wisconsin, Madison, WI (United States); Filho, A C Roque da Silva [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Baffa, Oswaldo [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Wakai, Ronald [Department of Medical Physics, University of Wisconsin, Madison, WI (United States); Ohnishi, Noboru [Department of Information Engineering, Nagoya University (Japan)
2005-10-07
Fetal magnetocardiography (fMCG) has been extensively reported in the literature as a non-invasive, prenatal technique that can be used to monitor various functions of the fetal heart. However, fMCG signals often have low signal-to-noise ratio (SNR) and are contaminated by strong interference from the mother's magnetocardiogram signal. A promising, efficient tool for extracting signals, even under low SNR conditions, is blind source separation (BSS), or independent component analysis (ICA). Herein we propose an algorithm based on a variation of ICA, where the signal of interest is extracted using a time delay obtained from an autocorrelation analysis. We model the system using autoregression, and identify the signal component of interest from the poles of the autocorrelation function. We show that the method is effective in removing the maternal signal, and is computationally efficient. We also compare our results to more established ICA methods, such as FastICA.
Robust LOD scores for variance component-based linkage analysis.
Blangero, J; Williams, J T; Almasy, L
2000-01-01
The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.
Constrained independent component analysis approach to nonobtrusive pulse rate measurements
Tsouri, Gill R.; Kyal, Survi; Dianat, Sohail; Mestha, Lalit K.
2012-07-01
Nonobtrusive pulse rate measurement using a webcam is considered. We demonstrate how state-of-the-art algorithms based on independent component analysis suffer from a sorting problem that hinders their performance, and propose a novel algorithm based on constrained independent component analysis to improve performance. We show how the proposed algorithm extracts a photoplethysmography signal and resolves the sorting problem. In addition, we perform a comparative study between the proposed algorithm and state-of-the-art algorithms over 45 video streams, using a finger probe oximeter for reference measurements. The proposed algorithm provides improved accuracy: the root mean square error decreases from 20.6 and 9.5 beats per minute (bpm) for existing algorithms to 3.5 bpm for the proposed algorithm. An error of 3.5 bpm is within the inaccuracy expected from the reference measurements. This implies that the proposed algorithm provides accuracy equal to that of the finger probe oximeter.
Response spectrum analysis of coupled structural response to a three component seismic disturbance
Boulet, J.A.M.; Carley, T.G.
1977-01-01
The work discussed herein is a comparison and evaluation of several response spectrum analysis (RSA) techniques as applied to the same structural model with seismic excitation having three spatial components. Lagrange's equations of motion for the system were written in matrix form and uncoupled with the modal matrix. Numerical integration (fourth-order Runge-Kutta) of the resulting modal equations produced time histories of system displacements in response to the simultaneous application of three orthogonal components of ground motion, and displacement response spectra for each modal coordinate in response to each of the three ground motion components. Five different RSA techniques were used to combine the spectral displacements and the modal matrix to give approximations of the maximum system displacements. These approximations were then compared with the maximum system displacements taken from the time histories. The RSA techniques used are the method of absolute sums, the square root of the sum of the squares, the double sum approach, the method of closely spaced modes, and Lin's method. The vectors of maximum system displacements as computed by the time history analysis and the five response spectrum analysis methods are presented.
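Two of the combination rules compared in this study, the absolute-sum and SRSS methods, can be sketched directly. The modal peak values below are illustrative placeholders, not results from the study:

```python
import numpy as np

def srss(modal_maxima):
    """Square Root of the Sum of the Squares (SRSS) combination of
    peak modal responses, applied per degree of freedom."""
    return np.sqrt(np.sum(np.asarray(modal_maxima) ** 2, axis=0))

def abs_sum(modal_maxima):
    """Absolute-sum combination: a conservative upper bound on the
    true peak response."""
    return np.sum(np.abs(np.asarray(modal_maxima)), axis=0)

# Peak displacement of 2 DOFs in each of 3 modes (illustrative values)
peaks = np.array([[1.2, 0.8],
                  [0.5, 0.9],
                  [0.2, 0.1]])
d_srss = srss(peaks)     # per-DOF SRSS estimate
d_abs = abs_sum(peaks)   # per-DOF absolute-sum bound
```

The absolute-sum rule always bounds the SRSS estimate from above, which is why it tends to be overly conservative for well-separated modes; the double-sum and closely-spaced-modes methods mentioned above add cross-modal correlation terms between these two extremes.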
Saccenti, E.; Camacho, J.
2015-01-01
Principal component analysis is one of the most commonly used multivariate tools to describe and summarize data. Determining the optimal number of components in a principal component model is a fundamental problem in many fields of application. In this paper we compare the performance of several
Research on Air Quality Evaluation based on Principal Component Analysis
Wang, Xing; Wang, Zilin; Guo, Min; Chen, Wei; Zhang, Huan
2018-01-01
Economic growth has led to a decline in environmental capacity and the deterioration of air quality. Air quality evaluation, as a fundamental part of environmental monitoring and air pollution control, has become increasingly important. Based on principal component analysis (PCA), this paper evaluates the air quality of a large city in the Beijing-Tianjin-Hebei Area over the past 10 years and identifies influencing factors, in order to provide a reference for air quality management and air pollution control.
Efficient training of multilayer perceptrons using principal component analysis
Bunzmann, Christoph; Urbanczik, Robert; Biehl, Michael
2005-01-01
A training algorithm for multilayer perceptrons is discussed and studied in detail, which relates to the technique of principal component analysis. The latter is performed with respect to a correlation matrix computed from the example inputs and their target outputs. Typical properties of the training procedure are investigated by means of a statistical physics analysis in models of learning regression and classification tasks. We demonstrate that the procedure requires by far fewer examples for good generalization than traditional online training. For networks with a large number of hidden units we derive the training prescription which achieves, within our model, the optimal generalization behavior
Dynamic analysis of the radiolysis of binary component system
Katayama, M.; Trumbore, C.N.
1975-01-01
Dynamic analysis was performed on a variety of combinations of components in the radiolysis of binary systems, taking the hydrogen-producing reaction with hydrocarbon RH2 as an example. A definite rule could be established from this analysis, which is useful for revealing the reaction mechanism. The combinations were as follows: 1) both components A and B do not interact but serve only as diluents; 2) A is a diluent and B is a radical captor; 3) both A and B are radical captors; 4-1) A is a diluent and B decomposes after receiving the excitation energy of A; 4-2) A is a diluent and B does not decompose after receiving the excitation energy of A; 5-1) A is a radical captor and B decomposes after receiving the excitation energy of A; 5-2) A is a radical captor and B does not decompose after receiving the excitation energy of A; 6-1) both A and B decompose after receiving the excitation energy of the partner component; and 6-2) neither A nor B decomposes after receiving the excitation energy of the partner component. The dynamical analysis of the above nine combinations indicates that if excitation transfer participates, phenomena similar to radical capture appear. It is desirable to measure the yield of radicals experimentally with a system in which excitation transfer requires little consideration. An isotope-substituted mixture is conceived as one such system. This analytical method was applied to systems containing cyclopentanone, such as the cyclopentanone-cyclohexane system.
Identification of a cis-regulatory element by transient analysis of co-ordinately regulated genes
Allan Andrew C
2008-07-01
Background: Transcription factors (TFs) co-ordinately regulate target genes that are dispersed throughout the genome. This co-ordinate regulation is achieved, in part, through the interaction of transcription factors with conserved cis-regulatory motifs in close proximity to the target genes. While much is known about the families of transcription factors that regulate gene expression in plants, there are few well characterised cis-regulatory motifs. In Arabidopsis, over-expression of the MYB transcription factor PAP1 (PRODUCTION OF ANTHOCYANIN PIGMENT 1) leads to transgenic plants with elevated anthocyanin levels due to the co-ordinated up-regulation of genes in the anthocyanin biosynthetic pathway. In addition to the anthocyanin biosynthetic genes, a number of unassociated genes also change in expression level. This may be a direct or indirect consequence of the over-expression of PAP1. Results: Oligo array analysis of PAP1 over-expressing Arabidopsis plants identified genes co-ordinately up-regulated in response to the elevated expression of this transcription factor. Transient assays on the promoter regions of 33 of these up-regulated genes identified eight promoter fragments that were transactivated by PAP1. Bioinformatic analysis of these promoters revealed a common cis-regulatory motif that we showed is required for PAP1-dependent transactivation. Conclusion: Co-ordinated gene regulation by individual transcription factors is a complex collection of both direct and indirect effects. Transient transactivation assays provide a rapid method to distinguish direct target genes from indirect target genes. Bioinformatic analysis of the promoters of these direct target genes can locate motifs that are common to this sub-set of promoters, which would be impossible to identify with the larger set of direct and indirect target genes. While this type of analysis does not prove a direct interaction between protein and DNA
A component analysis of positive behaviour support plans.
McClean, Brian; Grey, Ian
2012-09-01
Positive behaviour support (PBS) emphasises multi-component interventions by natural intervention agents to help people overcome challenging behaviours. This paper investigates which components are most effective and which factors might mediate effectiveness. Sixty-one staff working with individuals with intellectual disability and challenging behaviours completed longitudinal competency-based training in PBS. Each staff participant conducted a functional assessment and developed and implemented a PBS plan for one prioritised individual. A total of 1,272 interventions were available for analysis. Measures of challenging behaviour were taken at baseline, after 6 months, and at an average of 26 months follow-up. There was a significant reduction in the frequency, management difficulty, and episodic severity of challenging behaviour over the duration of the study. Escape was identified by staff as the most common function, accounting for 77% of challenging behaviours. The most commonly implemented components of intervention were setting event changes and quality-of-life-based interventions. Only treatment acceptability was found to be related to decreases in behavioural frequency. No single intervention component was found to have a greater association with reductions in challenging behaviour.
Representation for dialect recognition using topographic independent component analysis
Wei, Qu
2004-10-01
In dialect speech recognition, the feature of tone in one dialect is subject to changes in pitch frequency as well as the length of tone. It is beneficial for the recognition if a representation can be derived to account for the frequency and length changes of tone in an effective and meaningful way. In this paper, we propose a method for learning such a representation from a set of unlabeled speech sentences containing the features of the dialect changed from various pitch frequencies and time length. Topographic independent component analysis (TICA) is applied for the unsupervised learning to produce an emergent result that is a topographic matrix made up of basis components. The dialect speech is topographic in the following sense: the basis components as the units of the speech are ordered in the feature matrix such that components of one dialect are grouped in one axis and changes in time windows are accounted for in the other axis. This provides a meaningful set of basis vectors that may be used to construct dialect subspaces for dialect speech recognition.
Probabilistic methods in nuclear power plant component ageing analysis
Simola, K.
1992-03-01
Nuclear power plant ageing research aims to ensure that plant safety and reliability are maintained at a desired level through the designed, and possibly extended, lifetime. In ageing studies, the reliability of components, systems and structures is evaluated taking into account the possible time-dependent decrease in reliability. The results of the analyses can be used in evaluating the remaining lifetime of components and in developing preventive maintenance, testing and replacement programmes. The report discusses the use of probabilistic models in evaluating the ageing of nuclear power plant components. The principles of nuclear power plant ageing studies are described and examples of ageing management programmes in foreign countries are given. The use of time-dependent probabilistic models to evaluate the ageing of various components and structures is described, and the application of the models is demonstrated with two case studies. In the case study of motor-operated closing valves, the analyses are based on failure data obtained from a power plant. In the second example, environmentally assisted crack growth is modelled with a computer code developed in the United States, and the applicability of the model is evaluated on the basis of operating experience.
Development of component failure data for seismic risk analysis
Fray, R.R.; Moulia, T.A.
1981-01-01
This paper describes the quantification and utilization of seismic failure data used in the Diablo Canyon Seismic Risk Study. Earthquake severity was characterized by a single variable: peak horizontal ground acceleration. A multiple-variable representation would have allowed direct consideration of vertical accelerations and the spectral nature of earthquakes, but would have added so much complexity that the study would not have been feasible. Vertical accelerations and spectral content were considered indirectly, because component failure data were derived from design analyses, qualification tests and engineering judgment that did include such considerations. Two types of functions were used to describe component failure probabilities. Ramp functions were used for components, such as piping and structures, qualified by stress analysis. Anchor points for ramp functions were selected by assuming a zero probability of failure at code-allowable stress levels and a unity probability of failure at ultimate stress levels. The accelerations corresponding to allowable and ultimate stress levels were determined by conservatively assuming a linear relationship between seismic stress and ground acceleration. Step functions were used for components, such as mechanical and electrical equipment, qualified by testing. Anchor points for step functions were selected by assuming a unity probability of failure above the qualification acceleration. (orig./HP)
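The two fragility shapes described above are straightforward to express. A minimal sketch with illustrative anchor-point accelerations (the actual Diablo Canyon values are not reproduced here):

```python
import numpy as np

def ramp_fragility(a, a_allow, a_ult):
    """Ramp failure probability for stress-analysed components: zero at the
    code-allowable acceleration, unity at the ultimate acceleration."""
    return np.clip((a - a_allow) / (a_ult - a_allow), 0.0, 1.0)

def step_fragility(a, a_qual):
    """Step failure probability for tested equipment: certain failure
    above the qualification acceleration."""
    return np.where(a > a_qual, 1.0, 0.0)

# Illustrative peak horizontal ground accelerations (g) and anchor points.
a = np.array([0.2, 0.5, 0.8])
print(ramp_fragility(a, a_allow=0.4, a_ult=0.8))  # 0, 0.25, 1
print(step_fragility(a, a_qual=0.7))              # 0, 0, 1
```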
Kellett, M.A.
2009-12-01
The third meeting of the Co-ordinated Research Project on 'Reference Database for Neutron Activation Analysis' was held at the IAEA, Vienna, from 17-19 November 2008. A summary is given of the presentations made by participants, of reports on specific tasks, and of the subsequent discussions. With the aim of finalising the work of this CRP and meeting its initial objectives, outputs were discussed and detailed task assignments were agreed upon. (author)
Dynamic Analysis of Offshore Oil Pipe Installation Using the Absolute Nodal Coordinate Formulation
Nielsen, Jimmy D; Madsen, Søren B; Hyldahl, Per Christian
2013-01-01
The Absolute Nodal Coordinate Formulation (ANCF) has shown promising results in the dynamic analysis of structures that undergo large deformation. The method relaxes the assumption of infinitesimal rotations. Being based in a fixed inertial reference frame leads to a constant mass matrix and zero …, are included to mimic the external forces acting on the pipe during installation. The scope of this investigation is to demonstrate the ability of the ANCF to analyze the dynamic behavior of an offshore oil pipe during installation…
Integrative sparse principal component analysis of gene expression data.
Liu, Mengque; Fan, Xinyan; Fang, Kuangnan; Zhang, Qingzhao; Ma, Shuangge
2017-12-01
In the analysis of gene expression data, dimension reduction techniques have been extensively adopted, the most popular perhaps being PCA (principal component analysis). To generate more reliable and more interpretable results, the SPCA (sparse PCA) technique has been developed. With the "small sample size, high dimensionality" characteristic of gene expression data, the analysis results generated from a single dataset are often unsatisfactory. In contexts other than dimension reduction, integrative analysis techniques, which jointly analyze the raw data of multiple independent datasets, have been developed and shown to outperform classic meta-analysis, other multi-dataset techniques, and single-dataset analysis. In this study, we conduct integrative analysis by developing the iSPCA (integrative SPCA) method. iSPCA achieves the selection and estimation of sparse loadings using a group penalty. To take advantage of the similarity across datasets and generate more accurate results, we further impose contrasted penalties. Different penalties are proposed to accommodate different data conditions. Extensive simulations show that iSPCA outperforms the alternatives under a wide spectrum of settings. The analysis of breast cancer and pancreatic cancer data further demonstrates iSPCA's satisfactory performance. © 2017 WILEY PERIODICALS, INC.
Source Signals Separation and Reconstruction Following Principal Component Analysis
WANG Cheng
2014-02-01
For the problem of separating and reconstructing source signals from observed signals, the physical significance of the blind source separation model and of independent component analysis is not very clear, and the solution is not unique. To address these disadvantages, a new linear, instantaneous mixing model and a novel method for separating and reconstructing source signals from observed signals based on principal component analysis (PCA) are put forward. The assumption of the new model is that the source signals are statistically uncorrelated rather than independent, which differs from the traditional blind source separation model. A one-to-one relationship between the linear, instantaneous mixing matrix of the new model and the linear compound matrix of PCA, and a one-to-one relationship between the uncorrelated source signals and the principal components, are demonstrated using the concept of the linear separation matrix and the uncorrelatedness of the source signals. Based on this theoretical link, the problem of source signal separation and reconstruction is then transformed into a PCA of the observed signals. Theoretical derivation and numerical simulation show that, despite Gaussian measurement noise, both the waveform and amplitude information of uncorrelated source signals can be separated and reconstructed by PCA when the linear mixing matrix is column-orthogonal and normalized; only the waveform information can be recovered when the mixing matrix is column-orthogonal but not normalized; and the source signals cannot be separated and reconstructed by PCA when the mixing matrix is not column-orthogonal or the mixing is not linear.
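The central claim above, that uncorrelated sources mixed by a column-orthogonal, normalized matrix are recovered by PCA, can be checked numerically. A sketch with two synthetic sources (the signals and rotation angle are invented for illustration):

```python
import numpy as np

n = 5000
t = np.arange(n)
# Two zero-mean, (nearly) uncorrelated sources with distinct variances:
# a sine wave (variance 0.5) and a square wave (variance 1.0).
s1 = np.sin(2 * np.pi * t / 50)
s2 = np.sign(np.sin(2 * np.pi * t / 173))
S = np.vstack([s1, s2])

# Column-orthogonal, normalized mixing matrix (the model's key assumption).
theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X = A @ S  # observed signals

# PCA of the observations: eigen-decomposition of the covariance matrix.
w, V = np.linalg.eigh(np.cov(X))
S_hat = V.T @ X  # principal components = recovered sources (up to order/sign)

# Correlate each recovered component with its best-matching true source.
corr = np.abs(np.corrcoef(np.vstack([S, S_hat]))[:2, 2:])
print(corr.max(axis=1))
```

Both correlations come out close to 1, i.e. each principal component reproduces one source up to order and sign, as the theory predicts.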
Prestudy - Development of trend analysis of component failure
Poern, K.
1995-04-01
The Bayesian trend analysis model that has been used for the computation of initiating event intensities (the I-book) is based on the number of events that have occurred during consecutive time intervals. The model itself is a Poisson process with time-dependent intensity. For the analysis of ageing it is often more relevant to use the times between failures of a given component as input, where 'time' means whatever quantity best characterizes the age of the component (calendar time, operating time, number of activations, etc.). It has therefore been considered necessary to extend the model and the computer code to allow trend analysis of times between events, and also of several sequences of times between events. This report describes this model extension as well as an application to an introductory ageing analysis of the centrifugal pumps defined in Table 5 of the T-book. The application in turn directs attention to the need for further development of both the trend model and the data base.
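A classical (non-Bayesian) counterpart to the trend model described above is the Laplace trend test, which operates directly on the event times; it is shown here only as an illustration of trend analysis on failure times, not as the report's Bayesian model, and the data are hypothetical:

```python
import numpy as np

# Hypothetical cumulative failure times (operating hours) of one pump,
# observed over a fixed window [0, T].
failure_times = np.array([800., 2100., 3300., 4200., 4900., 5400., 5800., 6100.])
T = 6500.0

# Laplace trend statistic: compares the mean failure time with T/2.
# u >> 0 suggests an increasing failure intensity (ageing),
# u << 0 a reliability improvement, u near 0 no trend.
n = len(failure_times)
u = (failure_times.mean() - T / 2) / (T * np.sqrt(1.0 / (12 * n)))
print(round(u, 2))
```

Here the failures cluster late in the window (the times between failures shrink), so the statistic is positive, pointing at ageing.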
Aeromagnetic Compensation Algorithm Based on Principal Component Analysis
Peilin Wu
2018-01-01
Aeromagnetic exploration is an important exploration method in geophysics. The data are typically measured by an optically pumped magnetometer mounted on an aircraft, but any aircraft produces significant levels of magnetic interference, so aeromagnetic compensation is essential. However, multicollinearity in the aeromagnetic compensation model degrades the performance of the compensation. To address this issue, a novel aeromagnetic compensation method based on principal component analysis is proposed. Using this algorithm, the correlation in the feature matrix is eliminated and the principal components are used to construct the hyperplane that compensates the platform-generated magnetic fields. The algorithm was tested using a helicopter, and the obtained improvement ratio is 9.86. The compensation quality is almost the same as, or slightly better than, that of ridge regression. The validity of the proposed method was experimentally demonstrated.
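The scheme described, decorrelating the feature matrix with PCA and then fitting the interference in the decorrelated basis, amounts to principal-component regression. A sketch with invented, deliberately collinear features (the real compensation model would use the aircraft's attitude-derived terms):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Invented, deliberately collinear feature matrix and the interference
# it generates (standing in for platform-generated magnetic fields).
f1 = np.sin(np.linspace(0, 20, n))
f2 = 0.99 * f1 + 0.01 * rng.standard_normal(n)   # near-duplicate column
f3 = rng.standard_normal(n)
F = np.column_stack([f1, f2, f3])
interference = F @ np.array([2.0, -1.5, 0.7]) + 0.01 * rng.standard_normal(n)

# Principal-component regression: project the centred features onto their
# principal components, then fit the interference in that basis.
Fc = F - F.mean(axis=0)
U, s, Vt = np.linalg.svd(Fc, full_matrices=False)
k = int(np.sum(s / s[0] > 1e-3))     # drop near-degenerate directions
Z = Fc @ Vt[:k].T                    # decorrelated scores
y = interference - interference.mean()
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
compensated = y - Z @ coef

print(np.std(compensated) / np.std(interference))
```

Despite the near-duplicate column, the fit in the decorrelated basis is stable and removes almost all of the modelled interference.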
Fast principal component analysis for stacking seismic data
Wu, Juan; Bai, Min
2018-04-01
Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and enhance the principal components to a great extent. Traditional average-based seismic stacking methods cannot obtain optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data that is insensitive to the noise level. Considering the computational bottleneck of the classic PCA algorithm on massive seismic data, we propose an efficient PCA algorithm that makes the method readily applicable for industrial applications. Two numerically designed examples and one real seismic dataset are used to demonstrate the performance of the presented method.
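The idea of PCA stacking, taking the rank-1 SVD approximation of the gather instead of a plain average, can be sketched as follows (synthetic gather; the paper's fast PCA variant is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)
n_traces, n_samples = 50, 500

# Synthetic common-midpoint gather: every trace carries the same wavelet,
# buried in strong independent noise.
t = np.linspace(0, 1, n_samples)
signal = 2.0 * np.exp(-((t - 0.4) ** 2) / 0.002) * np.sin(60 * t)
gather = np.outer(np.ones(n_traces), signal) + 0.5 * rng.standard_normal((n_traces, n_samples))

# Conventional stack: plain average over traces.
mean_stack = gather.mean(axis=0)

# PCA stack: rank-1 SVD approximation; the first right singular vector
# carries the coherent part shared by all traces.
U, s, Vt = np.linalg.svd(gather, full_matrices=False)
pca_stack = s[0] * U[:, 0].mean() * Vt[0]  # rescaled to trace amplitude

for est in (mean_stack, pca_stack):
    print(round(np.corrcoef(est, signal)[0, 1], 3))
```

Both stacks correlate strongly with the clean wavelet on this easy example; the PCA stack additionally weights traces by their contribution to the coherent component, which is what gives it its advantage at low signal-to-noise ratios.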
Demixed principal component analysis of neural population data.
Kobak, Dmitry; Brendel, Wieland; Constantinidis, Christos; Feierstein, Claudia E; Kepecs, Adam; Mainen, Zachary F; Qi, Xue-Lian; Romo, Ranulfo; Uchida, Naoshige; Machens, Christian K
2016-04-12
Neurons in higher cortical areas, such as the prefrontal cortex, are often tuned to a variety of sensory and motor variables, and are therefore said to display mixed selectivity. This complexity of single neuron responses can obscure what information these areas represent and how it is represented. Here we demonstrate the advantages of a new dimensionality reduction technique, demixed principal component analysis (dPCA), that decomposes population activity into a few components. In addition to systematically capturing the majority of the variance of the data, dPCA also exposes the dependence of the neural representation on task parameters such as stimuli, decisions, or rewards. To illustrate our method we reanalyze population data from four datasets comprising different species, different cortical areas and different experimental tasks. In each case, dPCA provides a concise way of visualizing the data that summarizes the task-dependent features of the population response in a single figure.
Nicolle F. Som
2017-06-01
Streptomyces bacteria make numerous secondary metabolites, including half of all known antibiotics. Production of antibiotics is usually coordinated with the onset of sporulation, but the cross-regulation of these processes is not fully understood. This is important because most Streptomyces antibiotics are produced at low levels, or not at all, under laboratory conditions, which makes large-scale production of these compounds very challenging. Here, we characterize the highly conserved actinobacterial two-component system MtrAB in the model organism Streptomyces venezuelae and provide evidence that it coordinates production of the antibiotic chloramphenicol with sporulation. MtrAB is known to coordinate DNA replication and cell division in Mycobacterium tuberculosis, where TB-MtrA is essential for viability but MtrB is dispensable. We deleted mtrB in S. venezuelae, and this resulted in a global shift in the metabolome, including constitutive, higher-level production of chloramphenicol. We found that chloramphenicol is detectable in the wild-type strain, but only at very low levels and only after it has sporulated. ChIP-seq showed that MtrA binds upstream of DNA replication and cell division genes and of genes required for chloramphenicol production. dnaA, dnaN, oriC, and wblE (whiB1) are DNA-binding targets for MtrA in both M. tuberculosis and S. venezuelae. Intriguingly, over-expression of TB-MtrA and of gain-of-function TB- and Sv-MtrA proteins in S. venezuelae also switched on higher-level production of chloramphenicol. Given the conservation of MtrAB, these constructs might be useful tools for manipulating antibiotic production in other filamentous actinomycetes.
Multigroup Moderation Test in Generalized Structured Component Analysis
Angga Dwi Mulyanto
2016-05-01
Generalized Structured Component Analysis (GSCA) is an alternative method for structural modeling using alternating least squares. GSCA can be used for complex analyses, including multigroup analysis. GSCA can be run with free software called GeSCA, but GeSCA provides no multigroup moderation test to compare effects between groups. In this research we propose using the T test from PLS for testing multigroup moderation in GSCA. The T test requires only the sample size, estimated path coefficient, and standard error of each group, all of which are already available in the GeSCA output, and the formula is simple, so the analysis does not take the user long.
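The proposed test needs only the quantities reported per group in the GeSCA output. A minimal sketch of one common large-sample form of the two-group t statistic (the paper may pool the standard errors slightly differently):

```python
import math

def multigroup_t(b1, se1, n1, b2, se2, n2):
    """Large-sample t statistic for comparing one path coefficient across
    two groups, using only coefficient, standard error and sample size."""
    t = (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)
    df = n1 + n2 - 2
    return t, df

# Hypothetical estimates for the same path in two groups.
t, df = multigroup_t(b1=0.45, se1=0.08, n1=120, b2=0.20, se2=0.10, n2=150)
print(round(t, 3), df)  # t ≈ 1.952 on 268 degrees of freedom
```

Comparing t against the t distribution with df degrees of freedom then gives the moderation p-value.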
Determination of the optimal number of components in independent components analysis.
Kassouf, Amine; Jouan-Rimbaud Bouveresse, Delphine; Rutledge, Douglas N
2018-03-01
Independent components analysis (ICA) may be considered as one of the most established blind source separation techniques for the treatment of complex data sets in analytical chemistry. Like other similar methods, the determination of the optimal number of latent variables, in this case, independent components (ICs), is a crucial step before any modeling. Therefore, validation methods are required in order to decide about the optimal number of ICs to be used in the computation of the final model. In this paper, three new validation methods are formally presented. The first one, called Random_ICA, is a generalization of the ICA_by_blocks method. Its specificity resides in the random way of splitting the initial data matrix into two blocks, and then repeating this procedure several times, giving a broader perspective for the selection of the optimal number of ICs. The second method, called KMO_ICA_Residuals is based on the computation of the Kaiser-Meyer-Olkin (KMO) index of the transposed residual matrices obtained after progressive extraction of ICs. The third method, called ICA_corr_y, helps to select the optimal number of ICs by computing the correlations between calculated proportions and known physico-chemical information about samples, generally concentrations, or between a source signal known to be present in the mixture and the signals extracted by ICA. These three methods were tested using varied simulated and experimental data sets and compared, when necessary, to ICA_by_blocks. Results were relevant and in line with expected ones, proving the reliability of the three proposed methods. Copyright © 2017 Elsevier B.V. All rights reserved.
Robustness analysis of bogie suspension components Pareto optimised values
Mousavi Bideleh, Seyed Milad
2017-08-01
The bogie suspension system of high-speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find the Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, robustness analysis of the bogie dynamic response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of the bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, and yaw damping are considered as the five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve the computational efficiency. The results showed that the dynamic response of the vehicle with wear/comfort Pareto optimised values of the bogie suspension is robust against uncertainties in the design parameters, and the probability of failure is small for parameter uncertainties with COV up to 0.1.
Sparse principal component analysis in medical shape modeling
Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus
2006-03-01
Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
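The thresholding baseline mentioned above, ordinary PCA followed by zeroing small loadings, is easy to sketch (synthetic data; this is the comparison method, not the article's SPCA algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 10

# Synthetic data in which the dominant mode involves only 3 variables.
scores = rng.standard_normal(n)
X = 0.2 * rng.standard_normal((n, p))
X[:, :3] += np.outer(scores, [1.0, 0.8, 0.6])
X -= X.mean(axis=0)

# Ordinary PCA loading of the first component.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
loading = Vt[0]

# "Sparse PCA by simple thresholding": zero small loadings, renormalise.
sparse_loading = np.where(np.abs(loading) > 0.2, loading, 0.0)
sparse_loading /= np.linalg.norm(sparse_loading)

print(np.count_nonzero(sparse_loading))  # only the truly active variables survive
```

On this toy example the thresholded loading keeps exactly the three active variables; dedicated SPCA algorithms aim to achieve such sparsity in a principled way rather than by an ad hoc cut-off.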
Principal component analysis of FDG PET in amnestic MCI
Nobili, Flavio; Girtler, Nicola; Brugnolo, Andrea; Dessi, Barbara; Rodriguez, Guido; Salmaso, Dario; Morbelli, Silvia; Piccardo, Arnoldo; Larsson, Stig A.; Pagani, Marco
2008-01-01
The purpose of the study is to evaluate the combined accuracy of episodic memory performance and 18F-FDG PET in identifying patients with amnestic mild cognitive impairment (aMCI) converting to Alzheimer's disease (AD), aMCI non-converters, and controls. Thirty-three patients with aMCI and 15 controls (CTR) were followed up for a mean of 21 months. Eleven patients developed AD (MCI/AD) and 22 remained with aMCI (MCI/MCI). 18F-FDG PET volumetric regions of interest underwent principal component analysis (PCA) that identified 12 principal components (PC), expressed by coarse component scores (CCS). Discriminant analysis was performed using the significant PCs and episodic memory scores. PCA highlighted relative hypometabolism in PC5, including bilateral posterior cingulate and left temporal pole, and in PC7, including the bilateral orbitofrontal cortex, both in MCI/MCI and MCI/AD vs CTR. PC5 itself plus PC12, including the left lateral frontal cortex (LFC: BAs 44, 45, 46, 47), were significantly different between MCI/AD and MCI/MCI. By a three-group discriminant analysis, CTR were more accurately identified by PET-CCS + delayed recall score (100%), MCI/MCI by PET-CCS + either immediate or delayed recall scores (91%), while MCI/AD was identified by PET-CCS alone (82%). PET increased by 25% the correct allocations achieved by memory scores, while memory scores increased by 15% the correct allocations achieved by PET. Combining memory performance and 18F-FDG PET yielded a higher accuracy than each single tool in identifying CTR and MCI/MCI. The PC containing bilateral posterior cingulate and left temporal pole was the hallmark of MCI/MCI patients, while the PC including the left LFC was the hallmark of conversion to AD. (orig.)
Stoney, David A; Stoney, Paul L
2015-08-01
An effective trace evidence capability is defined as one that exploits all useful particle types, chooses appropriate technologies to do so, and directly integrates the findings with case-specific problems. Limitations of current approaches inhibit the attainment of an effective capability and it has been strongly argued that a new approach to trace evidence analysis is essential. A hypothetical case example is presented to illustrate and analyze how forensic particle analysis can be used as a powerful practical tool in forensic investigations. The specifics in this example, including the casework investigation, laboratory analyses, and close professional interactions, provide focal points for subsequent analysis of how this outcome can be achieved. This leads to the specification of five key elements that are deemed necessary and sufficient for effective forensic particle analysis: (1) a dynamic forensic analytical approach, (2) concise and efficient protocols addressing particle combinations, (3) multidisciplinary capabilities of analysis and interpretation, (4) readily accessible external specialist resources, and (5) information integration and communication. A coordinating role, absent in current approaches to trace evidence analysis, is essential to achieving these elements. However, the level of expertise required for the coordinating role is readily attainable. Some additional laboratory protocols are also essential. However, none of these has greater staffing requirements than those routinely met by existing forensic trace evidence practitioners. The major challenges that remain are organizational acceptance, planning and implementation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Nuclear analysis techniques as a component of thermoluminescence dating
Prescott, J.R.; Hutton, J.T.; Habermehl, M.A. [Adelaide Univ., SA (Australia); Van Moort, J. [Tasmania Univ., Sandy Bay, TAS (Australia)
1996-12-31
In luminescence dating, an age is found by first measuring dose accumulated since the event being dated, then dividing by the annual dose rate. Analyses of minor and trace elements performed by nuclear techniques have long formed an essential component of dating. Results from some Australian sites are reported to illustrate the application of nuclear techniques of analysis in this context. In particular, a variety of methods for finding dose rates are compared, an example of a site where radioactive disequilibrium is significant and a brief summary is given of a problem which was not resolved by nuclear techniques. 5 refs., 2 tabs.
Principal Component Analysis Based Measure of Structural Holes
Deng, Shiguo; Zhang, Wenqing; Yang, Huijie
2013-02-01
Based upon principal component analysis, a new measure called the compressibility coefficient is proposed to evaluate structural holes in networks. This measure incorporates a new effect from identical patterns in networks. It is found that the compressibility coefficient for Watts-Strogatz small-world networks increases monotonically with the rewiring probability and saturates to that of the corresponding shuffled networks, while the compressibility coefficient for extended Barabasi-Albert scale-free networks decreases monotonically with the preferential effect and is significantly large compared with that of the corresponding shuffled networks. This measure is helpful in diverse research fields for evaluating the global efficiency of networks.
Fast and accurate methods of independent component analysis: A survey
Tichavský, Petr; Koldovský, Zbyněk
2011-01-01
Roč. 47, č. 3 (2011), s. 426-438 ISSN 0023-5954 R&D Projects: GA MŠk 1M0572; GA ČR GA102/09/1278 Institutional research plan: CEZ:AV0Z10750506 Keywords : Blind source separation * artifact removal * electroencephalogram * audio signal processing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.454, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/tichavsky-fast and accurate methods of independent component analysis a survey.pdf
PRINCIPAL COMPONENT ANALYSIS (PCA) AND ITS APPLICATION WITH SPSS
Hermita Bus Umar
2009-03-01
PCA (Principal Component Analysis) comprises statistical techniques applied to a single set of variables when the researcher is interested in discovering which variables in the set form coherent subsets that are relatively independent of one another. Variables that are correlated with one another but largely independent of other subsets of variables are combined into factors. The goal of PCA is to determine the extent to which each variable is explained by each dimension. Steps in PCA include selecting and measuring a set of variables, preparing the correlation matrix, extracting a set of factors from the correlation matrix, rotating the factors to increase interpretability, and interpreting the results.
Fetal ECG extraction using independent component analysis by Jade approach
Giraldo-Guzmán, Jader; Contreras-Ortiz, Sonia H.; Lasprilla, Gloria Isabel Bautista; Kotas, Marian
2017-11-01
Fetal ECG monitoring is a useful method to assess fetal health and detect abnormal conditions. In this paper we propose an approach to extract the fetal ECG from abdominal and chest signals using independent component analysis based on the joint approximate diagonalization of eigenmatrices (JADE) approach. The JADE approach avoids redundancy, which reduces matrix dimension and computational cost. Signals were filtered with a high-pass filter to eliminate low-frequency noise. Several levels of decomposition were tested until the fetal ECG was recognized in one of the separated source outputs. The proposed method shows fast and good performance.
Nonlinear Principal Component Analysis Using Strong Tracking Filter
[Author not listed]
2007-01-01
The paper analyzes the problem of blind source separation (BSS) based on the nonlinear principal component analysis (NPCA) criterion. An adaptive strong tracking filter (STF) based algorithm was developed, which is immune to system model mismatches. Simulations demonstrate that the algorithm converges quickly and has satisfactory steady-state accuracy. The Kalman filtering algorithm and the recursive least-squares type algorithm are shown to be special cases of the STF algorithm. Since the forgetting factor is adaptively updated by adjustment of the Kalman gain, the STF scheme provides more powerful tracking capability than the Kalman filtering and recursive least-squares algorithms.
Structure analysis of active components of traditional Chinese medicines
Zhang, Wei; Sun, Qinglei; Liu, Jianhua
2013-01-01
Traditional Chinese Medicines (TCMs) have been widely used for healing of different health problems for thousands of years. They have been used as therapeutic, complementary and alternative medicines. TCMs usually consist of dozens to hundreds of various compounds, which are extracted from raw herbal sources by aqueous or alcoholic solvents. Therefore, it is difficult to correlate the pharmaceutical effect to a specific lead compound in the TCMs. A detailed analysis of the various components in TCMs has been a great challenge for modern analytical techniques in recent decades. In this chapter…
Advances in independent component analysis and learning machines
Bingham, Ella; Laaksonen, Jorma; Lampinen, Jouko
2015-01-01
In honour of Professor Erkki Oja, one of the pioneers of Independent Component Analysis (ICA), this book reviews key advances in the theory and application of ICA, as well as its influence on signal processing, pattern recognition, machine learning, and data mining. Examples of topics covered in the book, which have developed from the advances of ICA, are: a unifying probabilistic model for PCA and ICA; optimization methods for matrix decompositions; insights into the FastICA algorithm; unsupervised deep learning; machine vision and image retrieval; and a review of developments in the t…
Vickridge, I.; Schwerer, O.
2006-01-01
A summary is given of the First Research Coordination Meeting on the Development of a Reference Database for Ion Beam Analysis, including background information, objectives, recommendations for measurements, and a list of tasks assigned to participants. The next research co-ordination meeting will be held in May 2007. (author)
Development of motion image prediction method using principal component analysis
Chhatkuli, Ritu Bhusal; Demachi, Kazuyuki; Kawai, Masaki; Sakakibara, Hiroshi; Kamiaka, Kazuma
2012-01-01
Respiratory motion can induce the limit in the accuracy of area irradiated during lung cancer radiation therapy. Many methods have been introduced to minimize the impact of healthy tissue irradiation due to the lung tumor motion. The purpose of this research is to develop an algorithm for the improvement of image guided radiation therapy by the prediction of motion images. We predict the motion images by using principal component analysis (PCA) and multi-channel singular spectral analysis (MSSA) method. The images/movies were successfully predicted and verified using the developed algorithm. With the proposed prediction method it is possible to forecast the tumor images over the next breathing period. The implementation of this method in real time is believed to be significant for higher level of tumor tracking including the detection of sudden abdominal changes during radiation therapy. (author)
Non-conformal contact mechanical characteristic analysis on spherical components
Zhen-zhi, G.; Bin, H.; Zheng-ming, G.; Feng-mei, Y.; Jin, Q [The 2. Artillery Engineering Univ., Xi'an (China)
2017-03-15
Non-conformal spherical contact is a three-dimensional conforming (or nearly conforming) contact problem. Owing to the complexity of spherical contact and the difficulty of solving the higher-order partial differential equations involved, no exact analytical method yet exists for three-dimensional conforming or nearly conforming spherical contact. Starting from the three-dimensional taper model, a model based on the contour surface of the spherical contact is proposed, a formula for the contact pressure is derived, and a finite element model is constructed from the contact pressure distribution under non-conformal spherical contact. The results show that the spherical contact model reflects non-conformal spherical contact mechanics better than the taper contact model, and it applies to practical engineering.
Fast grasping of unknown objects using principal component analysis
Lei, Qujiang; Chen, Guangming; Wisse, Martijn
2017-09-01
Fast grasping of unknown objects has a crucial impact on the efficiency of robot manipulation, especially in unfamiliar environments. In order to accelerate the grasping of unknown objects, principal component analysis is utilized to direct the grasping process. In particular, a single-view partial point cloud is constructed and grasp candidates are allocated along the principal axis. Force balance optimization is employed to analyze possible graspable areas. The obtained graspable area with the minimal resultant force is the best zone for the final grasping execution. It is shown that an unknown object can be grasped more quickly provided that the principal axis is determined from the single-view partial point cloud. To cope with grasp uncertainty, robot motion is used to obtain a new viewpoint. Virtual exploration and experimental tests are carried out to verify this fast grasping algorithm. Both simulation and experimental tests demonstrated excellent performance in grasping a series of unknown objects. To minimize grasping uncertainty, the robot hardware's two 3D cameras can be utilized to complete the partial point cloud. As a result, grasping reliability is greatly enhanced. Therefore, this research demonstrates practical significance for increasing grasping speed and thus robot efficiency in unpredictable environments.
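The principal-axis step described in the abstract can be sketched with a plain eigen-decomposition of the point cloud's covariance matrix. This is a minimal illustration, not the authors' implementation; the synthetic elongated cloud stands in for a real single-view scan:

```python
import numpy as np

def principal_axis(points):
    """Return the centroid and dominant principal axis of an N x 3 point cloud."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # The eigenvector of the covariance matrix with the largest eigenvalue
    # gives the direction of greatest spread, i.e. the object's long axis.
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)
    return centroid, eigvecs[:, np.argmax(eigvals)]

# Synthetic elongated "object"; a grasping pipeline would then sample grasp
# candidates in slices along this axis and test force balance in each.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3)) * np.array([10.0, 1.0, 1.0])
centroid, axis = principal_axis(cloud)
```

Here the recovered axis is (up to sign) the x-direction along which the cloud was stretched.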
Nonlinear Process Fault Diagnosis Based on Serial Principal Component Analysis.
Deng, Xiaogang; Tian, Xuemin; Chen, Sheng; Harris, Chris J
2018-03-01
Many industrial processes contain both linear and nonlinear parts, and kernel principal component analysis (KPCA), widely used in nonlinear process monitoring, may not offer the most effective means for dealing with these nonlinear processes. This paper proposes a new hybrid linear-nonlinear statistical modeling approach for nonlinear process monitoring by closely integrating linear principal component analysis (PCA) and nonlinear KPCA using a serial model structure, which we refer to as serial PCA (SPCA). Specifically, PCA is first applied to extract PCs as linear features, and to decompose the data into the PC subspace and residual subspace (RS). Then, KPCA is performed in the RS to extract the nonlinear PCs as nonlinear features. Two monitoring statistics are constructed for fault detection, based on both the linear and nonlinear features extracted by the proposed SPCA. To effectively perform fault identification after a fault is detected, an SPCA similarity factor method is built for fault recognition, which fuses both the linear and nonlinear features. Unlike PCA and KPCA, the proposed method takes into account both linear and nonlinear PCs simultaneously, and therefore, it can better exploit the underlying process's structure to enhance fault diagnosis performance. Two case studies involving a simulated nonlinear process and the benchmark Tennessee Eastman process demonstrate that the proposed SPCA approach is more effective than the existing state-of-the-art approach based on KPCA alone, in terms of nonlinear process fault detection and identification.
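The serial PCA-then-KPCA structure described above can be sketched with scikit-learn. The two-component split, the RBF kernel, and the toy data are illustrative assumptions, and the paper's monitoring statistics and fault-identification stage are omitted:

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

def fit_spca(X, n_linear=2, n_nonlinear=2):
    """Serial PCA sketch: linear PCA first, then kernel PCA on the residual subspace."""
    pca = PCA(n_components=n_linear).fit(X)
    linear_feats = pca.transform(X)
    # Residual subspace: the part of X the linear PCs cannot reconstruct.
    residual = X - pca.inverse_transform(linear_feats)
    kpca = KernelPCA(n_components=n_nonlinear, kernel="rbf").fit(residual)
    nonlinear_feats = kpca.transform(residual)
    return linear_feats, nonlinear_feats

# Toy process with a linear part (x2 = 2*x1) and a nonlinear part (x3 = x1**2).
rng = np.random.default_rng(1)
t = rng.uniform(-1, 1, size=(200, 1))
X = np.hstack([t, 2 * t, t ** 2]) + 0.01 * rng.normal(size=(200, 3))
lin, nonlin = fit_spca(X)
```

In a monitoring setting, T-squared and SPE statistics would then be computed from both feature sets for fault detection.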
Independent component analysis classification of laser induced breakdown spectroscopy spectra
Forni, Olivier; Maurice, Sylvestre; Gasnault, Olivier; Wiens, Roger C.; Cousin, Agnès; Clegg, Samuel M.; Sirven, Jean-Baptiste; Lasue, Jérémie
2013-01-01
The ChemCam instrument on board the Mars Science Laboratory (MSL) rover uses the laser-induced breakdown spectroscopy (LIBS) technique to remotely analyze Martian rocks. It retrieves spectra at distances of up to seven meters to quantitatively analyze the sampled rocks. Like any field application, on-site measurements by LIBS are altered by diverse matrix effects which induce signal variations that are specific to the nature of the sample. Qualitative aspects remain to be studied, particularly LIBS sample identification to determine which samples are of interest for further analysis by ChemCam and other rover instruments. This can be performed with the help of different chemometric methods that model the spectra variance in order to identify the rock from its spectrum. In this paper we test independent component analysis (ICA) for rock classification by remote LIBS. We show that using measures of distance in ICA space, namely the Manhattan and the Mahalanobis distance, we can efficiently classify spectra of an unknown rock. The Mahalanobis distance gives overall better performances and is easier to manage than the Manhattan distance, for which the determination of the cut-off distance is not easy. However these two techniques are complementary and their analytical performances will improve with time during MSL operations as the quantity of available Martian spectra grows. The analysis accuracy and performances will benefit from a combination of the two approaches. - Highlights: • We use a novel independent component analysis method to classify LIBS spectra. • We demonstrate the usefulness of ICA. • We report the performances of the ICA classification. • We compare it to other classical classification schemes
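The two distance measures compared in the abstract can be sketched as follows, assuming spectra have already been projected into ICA score space; the class statistics and the "unknown" score vector are invented for illustration:

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of score vector x from a class with given mean/covariance."""
    d = x - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

def manhattan(x, mean):
    """Manhattan (L1) distance of x from a class mean; classifying with it
    additionally requires choosing a cut-off distance."""
    return float(np.abs(x - mean).sum())

# Two hypothetical rock classes in a 2-D ICA score space.
class_means = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
class_covs = [np.eye(2), np.eye(2)]
unknown = np.array([4.6, 5.3])

# Assign the unknown spectrum to the class at minimal Mahalanobis distance.
label = int(np.argmin([mahalanobis(unknown, m, C)
                       for m, C in zip(class_means, class_covs)]))
```

Because the Mahalanobis distance is scaled by each class's covariance, it yields a self-contained nearest-class rule, which matches the abstract's observation that it is easier to manage than the Manhattan distance.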
Zeng, J.; Li, G.; Sun, J.
2013-01-01
Principal component analysis and cluster analysis were used to investigate the properties of different corn varieties. The chemical compositions and some properties of corn flour processed by dry milling were determined. The results showed that the chemical compositions and physicochemical properties differed significantly among the twenty-six corn varieties. The quality of corn flour was captured by five principal components from the principal component analysis; the contribution of starch pasting properties was the most important, accounting for 48.90%. The twenty-six corn varieties could be classified into four groups by cluster analysis. The consistency between the principal component analysis and the cluster analysis indicated that multivariate analyses are feasible in the study of corn variety properties. (author)
Benson, Alexandra S.; Elinski, Meagan B.; Ohnsorg, Monica L.; Beaudoin, Christopher K.; Alexander, Kyle A.; Peaslee, Graham F.; DeYoung, Paul A.; Anderson, Mary E., E-mail: meanderson@hope.edu
2015-09-01
Metal–organic coordinated multilayers are self-assembled thin films fabricated by alternating solution–phase deposition of bifunctional organic molecules and metal ions. The multilayer film composed of α,ω-mercaptoalkanoic acid and Cu (II) has been the focus of fundamental and applied research with its robust reproducibility and seemingly simple hierarchical architecture. However, internal structure and composition have not been unambiguously established. The composition of films up to thirty layers thick was investigated using Rutherford backscattering spectrometry and particle induced X-ray emission. Findings show these films are copper enriched, elucidating a 2:1 ratio for the ion to molecule complexation at the metal–organic interface. Results also reveal that these films have an average layer density similar to literature values established for a self-assembled monolayer, indicating a robust and stable structure. The surface structures of multilayer films have been characterized by contact angle goniometry, ellipsometry, and scanning probe microscopy. A morphological transition is observed as film thickness increases from the first few foundational layers to films containing five or more layers. Surface roughness analysis quantifies this evolution as the film initially increases in roughness before obtaining a lower roughness comparable to the underlying gold substrate. Quantitative analysis of topographical structure and internal composition for metal–organic coordinated multilayers as a function of number of deposited layers has implications for their incorporation in the fields of photonics and nanolithography. - Highlights: • Layer-by-layer deposition is examined by scanning probe microscopy and ion beam analysis. • Film growth undergoes morphological evolution during foundational layer deposition. • Image analysis quantified surface features such as roughness, grain size, and coverage. • Molecular density of each film layer is found to
The application of principal component analysis to quantify technique in sports.
Federolf, P; Reid, R; Gilgien, M; Haugen, P; Smith, G
2014-06-01
Analyzing an athlete's "technique," sport scientists often focus on preselected variables that quantify important aspects of movement. In contrast, coaches and practitioners typically describe movements in terms of basic postures and movement components using subjective and qualitative features. A challenge for sport scientists is finding an appropriate quantitative methodology that incorporates the holistic perspective of human observers. Using alpine ski racing as an example, this study explores principal component analysis (PCA) as a mathematical method to decompose a complex movement pattern into its main movement components. Ski racing movements were recorded by determining the three-dimensional coordinates of 26 points on each skier which were subsequently interpreted as a 78-dimensional posture vector at each time point. PCA was then used to determine the mean posture and principal movements (PMk ) carried out by the athletes. The first four PMk contained 95.5 ± 0.5% of the variance in the posture vectors which quantified changes in body inclination, vertical or fore-aft movement of the trunk, and distance between skis. In summary, calculating PMk offered a data-driven, quantitative, and objective method of analyzing human movement that is similar to how human observers such as coaches or ski instructors would describe the movement. © 2012 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
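The posture-vector decomposition described above can be sketched via an SVD of the stacked posture matrix. The frame count, the 78-coordinate dimension, and the synthetic single-mode movement are illustrative assumptions, not the study's data:

```python
import numpy as np

def principal_movements(postures, k=4):
    """postures: (n_frames, n_coords) matrix of stacked posture vectors.
    Returns the mean posture, the first k principal movements, and the
    share of posture variance those k movements explain."""
    mean_posture = postures.mean(axis=0)
    centered = postures - mean_posture
    # Right singular vectors of the centered matrix are the principal movements.
    _, S, Vt = np.linalg.svd(centered, full_matrices=False)
    var = S ** 2
    return mean_posture, Vt[:k], float(var[:k].sum() / var.sum())

# Synthetic "skier": 78 coordinates oscillating along one movement mode plus noise.
rng = np.random.default_rng(2)
frames, mode = 300, rng.normal(size=78)
t = np.linspace(0, 4 * np.pi, frames)
postures = np.outer(np.sin(t), mode) + 0.05 * rng.normal(size=(frames, 78))
mean_p, pms, share = principal_movements(postures, k=1)
```

With a single underlying movement mode, one principal movement already explains nearly all posture variance, mirroring the paper's finding that a few PMk capture most of the skiers' movement.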
Miller, Nicholas; MacDowell, Jason; Chmiel, Gary; Konopinski, Ryan; Gautam, Durga [GE Energy, Schenectady, NY (United States); Laughter, Grant; Hagen, Dave [PacifiCorp., Salt Lake City, UT (United States)
2012-07-01
At high levels of wind power penetration, multiple wind plants may be the predominant generation resource over large geographic areas. Thus, not only do wind plants need to provide a high level of functionality, they must coordinate properly with each other. This paper describes the analysis and field testing of wind plant voltage controllers designed to improve system voltage performance through passive coordination. The described wind power plant controls can coordinate the real and reactive power response of multiple wind turbines and thereby make the plant function as a single ''grid friendly'' power generation source. For this application, involving seven large wind plants with predominantly GE wind turbines in Eastern Wyoming, the voltage portion of the controllers were configured and tuned to allow the collective reactive power response of multiple wind plants in the region to work well together. This paper presents the results of the initial configuration and tuning study, and the results of the subsequent field tuning and testing of the modified controls. The paper also presents some comparisons of the measured field performance with the stability simulation models, which show that the available wind plant models provide accurate, high fidelity results for actual operating conditions of commercial wind power plants. (orig.)
Spatial Bayesian latent factor regression modeling of coordinate-based meta-analysis data.
Montagna, Silvia; Wager, Tor; Barrett, Lisa Feldman; Johnson, Timothy D; Nichols, Thomas E
2018-03-01
Now over 20 years old, functional MRI (fMRI) has a large and growing literature that is best synthesised with meta-analytic tools. As most authors do not share image data, only the peak activation coordinates (foci) reported in the article are available for Coordinate-Based Meta-Analysis (CBMA). Neuroimaging meta-analysis is used to (i) identify areas of consistent activation; and (ii) build a predictive model of task type or cognitive process for new studies (reverse inference). To simultaneously address these aims, we propose a Bayesian point process hierarchical model for CBMA. We model the foci from each study as a doubly stochastic Poisson process, where the study-specific log intensity function is characterized as a linear combination of a high-dimensional basis set. A sparse representation of the intensities is guaranteed through latent factor modeling of the basis coefficients. Within our framework, it is also possible to account for the effect of study-level covariates (meta-regression), significantly expanding the capabilities of the current neuroimaging meta-analysis methods available. We apply our methodology to synthetic data and neuroimaging meta-analysis datasets. © 2017, The International Biometric Society.
Spatial Bayesian Latent Factor Regression Modeling of Coordinate-based Meta-analysis Data
Montagna, Silvia; Wager, Tor; Barrett, Lisa Feldman; Johnson, Timothy D.; Nichols, Thomas E.
2017-01-01
Summary Now over 20 years old, functional MRI (fMRI) has a large and growing literature that is best synthesised with meta-analytic tools. As most authors do not share image data, only the peak activation coordinates (foci) reported in the paper are available for Coordinate-Based Meta-Analysis (CBMA). Neuroimaging meta-analysis is used to 1) identify areas of consistent activation; and 2) build a predictive model of task type or cognitive process for new studies (reverse inference). To simultaneously address these aims, we propose a Bayesian point process hierarchical model for CBMA. We model the foci from each study as a doubly stochastic Poisson process, where the study-specific log intensity function is characterised as a linear combination of a high-dimensional basis set. A sparse representation of the intensities is guaranteed through latent factor modeling of the basis coefficients. Within our framework, it is also possible to account for the effect of study-level covariates (meta-regression), significantly expanding the capabilities of the current neuroimaging meta-analysis methods available. We apply our methodology to synthetic data and neuroimaging meta-analysis datasets. PMID:28498564
Meltz, Bertrand
2015-01-01
This thesis deals with the mathematical and numerical analysis of the systems of compressible hydrodynamics and radiative transfer. More precisely, we study the derivation of numerical methods with 2D polar coordinates (one for the radius, one for the angle) where equations are discretized on regular polar grids. On one hand, these methods are well-suited for the simulation of flows with polar symmetries since they preserve these symmetries by construction. On the other hand, such coordinates systems introduce geometrical singularities as well as geometrical source terms which must be carefully treated. The first part of this document is devoted to the study of hydrodynamics equations, or Euler equations. We propose a new class of arbitrary high-order numerical schemes in both space and time and rely on directional splitting methods for the resolution of 2D equations. Each sub-system is solved using a Lagrange+Remap solver. We study the influence of the r=0 geometrical singularities of the cylindrical and spherical coordinates systems on the precision of the 2D numerical solutions. The second part of this document is devoted to the study of radiative transfer equations. In these equations, the unknowns depend on a large number of variables and a stiff source term is involved. The main difficulty consists in capturing the correct asymptotic behavior on coarse grids. We first construct a class of models where the radiative intensity is projected on a truncated spherical harmonics basis in order to lower the number of mathematical dimensions. Then we propose an Asymptotic Preserving scheme built in polar coordinates and we show that the scheme capture the correct diffusion limit in the radial direction as well as in the polar direction. (author) [fr
PRINCIPAL COMPONENT ANALYSIS STUDIES OF TURBULENCE IN OPTICALLY THICK GAS
Correia, C.; Medeiros, J. R. De [Departamento de Física Teórica e Experimental, Universidade Federal do Rio Grande do Norte, 59072-970, Natal (Brazil); Lazarian, A. [Astronomy Department, University of Wisconsin, Madison, 475 N. Charter St., WI 53711 (United States); Burkhart, B. [Harvard-Smithsonian Center for Astrophysics, 60 Garden St, MS-20, Cambridge, MA 02138 (United States); Pogosyan, D., E-mail: caioftc@dfte.ufrn.br [Canadian Institute for Theoretical Astrophysics, University of Toronto, Toronto, ON (Canada)
2016-02-20
In this work we investigate the sensitivity of principal component analysis (PCA) to the velocity power spectrum in high-opacity regimes of the interstellar medium (ISM). For our analysis we use synthetic position–position–velocity (PPV) cubes of fractional Brownian motion and magnetohydrodynamics (MHD) simulations, post-processed to include radiative transfer effects from CO. We find that PCA analysis is very different from the tools based on the traditional power spectrum of PPV data cubes. Our major finding is that PCA is also sensitive to the phase information of PPV cubes and this allows PCA to detect the changes of the underlying velocity and density spectra at high opacities, where the spectral analysis of the maps provides the universal −3 spectrum in accordance with the predictions of the Lazarian and Pogosyan theory. This makes PCA a potentially valuable tool for studies of turbulence at high opacities, provided that proper gauging of the PCA index is made. However, we found the latter to not be easy, as the PCA results change in an irregular way for data with high sonic Mach numbers. This is in contrast to synthetic Brownian noise data used for velocity and density fields that show monotonic PCA behavior. We attribute this difference to the PCA's sensitivity to Fourier phase information.
PRINCIPAL COMPONENT ANALYSIS STUDIES OF TURBULENCE IN OPTICALLY THICK GAS
Correia, C.; Medeiros, J. R. De; Lazarian, A.; Burkhart, B.; Pogosyan, D.
2016-01-01
In this work we investigate the sensitivity of principal component analysis (PCA) to the velocity power spectrum in high-opacity regimes of the interstellar medium (ISM). For our analysis we use synthetic position–position–velocity (PPV) cubes of fractional Brownian motion and magnetohydrodynamics (MHD) simulations, post-processed to include radiative transfer effects from CO. We find that PCA analysis is very different from the tools based on the traditional power spectrum of PPV data cubes. Our major finding is that PCA is also sensitive to the phase information of PPV cubes and this allows PCA to detect the changes of the underlying velocity and density spectra at high opacities, where the spectral analysis of the maps provides the universal −3 spectrum in accordance with the predictions of the Lazarian and Pogosyan theory. This makes PCA a potentially valuable tool for studies of turbulence at high opacities, provided that proper gauging of the PCA index is made. However, we found the latter to not be easy, as the PCA results change in an irregular way for data with high sonic Mach numbers. This is in contrast to synthetic Brownian noise data used for velocity and density fields that show monotonic PCA behavior. We attribute this difference to the PCA's sensitivity to Fourier phase information
Coupling between the solar wind and the magnetosphere - CDAW 6. [Coordinated Data Analysis Workshop
Tsurutani, B. T.; Slavin, J. A.; Kamide, Y.; Zwickl, R. D.; King, J. H.; Russell, C. T.
1985-01-01
It is pointed out that an extensive study of the causes and manifestations of geomagnetic activity has been carried out as part of the sixth Coordinated Data Analysis Workshop, CDAW 6. The present investigation has the objective to determine the coupling between the solar wind and the magnetosphere for the two selected analysis intervals, taking into account, as a basis for the study, the interplanetary field and plasma observations from ISEE 3 and IMP 8 and the geomagnetic activity indicators developed by CDAW 6 participants. The method of analysis employed is discussed, giving attention to geomagnetic indices, upstream parameters, and a cross-correlation analysis. In a description of the obtained results, the March 22, 1979 event is considered along with the March 31 to April 1, 1979 event, and an intercomparison of the events. The relationship between interplanetary indices and the resulting geomagnetic activity for the two CDAW 6 intervals is illustrated.
Borio Di Tigliole, A.; Schaaf, Van Der; Barnea, Y.; Bradley, E.; Morris, C.; Rao, D. V. H. [Research Reactor Section, Vienna (Austria); Shokr, A. [Research Reactor Safety Section, Vienna (Austria); Zeman, A. [International Atomic Energy Agency, Vienna (Austria)]
2013-07-01
Today more than 50% of operating Research Reactors (RRs) are over 45 years old. Thus, ageing management is one of the most important issues to face in order to ensure availability (including life extension), reliability and safe operation of these facilities for the future. Management of the ageing process requires, amongst others, predictions of the behavior of structural materials of primary components subjected to irradiation, such as the reactor vessel and core support structures, many of which are extremely difficult or impossible to replace. In fact, age-related material degradation mechanisms resulted in high profile, unplanned and lengthy shutdowns and unique regulatory processes of relicensing the facilities in recent years. These could likely have been prevented by utilizing available data for the implementation of appropriate maintenance and surveillance programmes. This IAEA Coordinated Research Project (CRP) will provide an international forum to establish a material properties Database for irradiated core structural materials and components. It is expected that this Database will be used by research reactor operators and regulators to help predict ageing related degradation. This would be useful to minimize unpredicted outages due to ageing processes of primary components and to mitigate lengthy and costly shutdowns. The Database will be a compilation of data from RRs operators' inputs, comprehensive literature reviews and experimental data from RRs. Moreover, the CRP will specify further activities needed to be addressed in order to bridge the gaps in the newly created Database, for potential follow-on activities. As of today, 13 Member States (MS) confirmed their agreement to contribute to the development of the Database, covering a wide number of materials and properties. The present publication incorporates two parts: the first part includes details on the pre-CRP Questionnaire, including the conclusions drawn from the answers received from
Borio Di Tigliole, A.; Schaaf, Van Der; Barnea, Y.; Bradley, E.; Morris, C.; Rao, D. V. H.; Shokr, A.; Zeman, A.
2013-01-01
Today more than 50% of operating Research Reactors (RRs) are over 45 years old. Thus, ageing management is one of the most important issues to face in order to ensure availability (including life extension), reliability and safe operation of these facilities for the future. Management of the ageing process requires, amongst others, predictions of the behavior of structural materials of primary components subjected to irradiation, such as the reactor vessel and core support structures, many of which are extremely difficult or impossible to replace. In fact, age-related material degradation mechanisms resulted in high profile, unplanned and lengthy shutdowns and unique regulatory processes of relicensing the facilities in recent years. These could likely have been prevented by utilizing available data for the implementation of appropriate maintenance and surveillance programmes. This IAEA Coordinated Research Project (CRP) will provide an international forum to establish a material properties Database for irradiated core structural materials and components. It is expected that this Database will be used by research reactor operators and regulators to help predict ageing related degradation. This would be useful to minimize unpredicted outages due to ageing processes of primary components and to mitigate lengthy and costly shutdowns. The Database will be a compilation of data from RRs operators' inputs, comprehensive literature reviews and experimental data from RRs. Moreover, the CRP will specify further activities needed to be addressed in order to bridge the gaps in the newly created Database, for potential follow-on activities. As of today, 13 Member States (MS) confirmed their agreement to contribute to the development of the Database, covering a wide number of materials and properties. The present publication incorporates two parts: the first part includes details on the pre-CRP Questionnaire, including the conclusions drawn from the answers received from the MS
Detecting coordinated regulation of multi-protein complexes using logic analysis of gene expression
Yeates Todd O
2009-12-01
Abstract. Background: Many of the functional units in cells are multi-protein complexes such as RNA polymerase, the ribosome, and the proteasome. For such units to work together, one might expect a high level of regulation to enable co-appearance or repression of sets of complexes at the required time. However, this type of coordinated regulation between whole complexes is difficult to detect by existing methods for analyzing mRNA co-expression. We propose a new methodology that is able to detect such higher order relationships. Results: We detect coordinated regulation of multiple protein complexes using logic analysis of gene expression data. Specifically, we identify gene triplets composed of genes whose expression profiles are found to be related by various types of logic functions. In order to focus on complexes, we associate the members of a gene triplet with the distinct protein complexes to which they belong. In this way, we identify complexes related by specific kinds of regulatory relationships. For example, we may find that the transcription of complex C is increased only if the transcription of both complex A AND complex B is repressed. We identify hundreds of examples of coordinated regulation among complexes under various stress conditions. Many of these examples involve the ribosome. Some of our examples have been previously identified in the literature, while others are novel. One notable example is the relationship between the transcription of the ribosome, RNA polymerase and mannosyltransferase II, which is involved in N-linked glycan processing in the Golgi. Conclusions: The analysis proposed here focuses on relationships among triplets of genes that are not evident when genes are examined in a pairwise fashion as in typical clustering methods. By grouping gene triplets, we are able to decipher coordinated regulation among sets of three complexes. Moreover, using all triplets that involve coordinated regulation with the ribosome
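The kind of logic relationship the abstract describes (complex C transcribed only when both A AND B are repressed) can be sketched on binarized expression profiles. The agreement threshold and the toy condition vectors are invented for illustration:

```python
import numpy as np

def matches_and_of_nots(a, b, c, min_agreement=0.9):
    """Test whether binarized profiles satisfy c = (NOT a) AND (NOT b),
    i.e. complex C is transcribed only when A and B are both repressed.
    a, b, c: boolean arrays over conditions (True = highly expressed)."""
    predicted = np.logical_and(~a, ~b)
    return bool(np.mean(predicted == c) >= min_agreement)

# Toy expression profiles over eight stress conditions.
a = np.array([1, 1, 0, 0, 1, 0, 0, 1], dtype=bool)
b = np.array([1, 0, 0, 1, 1, 0, 0, 0], dtype=bool)
c = np.array([0, 0, 1, 0, 0, 1, 1, 0], dtype=bool)
hit = matches_and_of_nots(a, b, c)
```

A triplet search would scan all gene triples for such logic matches and then map each gene onto the protein complex it belongs to.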
Failure cause analysis and improvement for magnetic component cabinet
Ge Bing
1999-01-01
The magnetic component cabinet is an important thermal control device used in nuclear power plants. Because it uses a self-saturation amplifier as its primary component, the magnetic component cabinet has some limitations. To increase operational safety at the nuclear power plant, the author describes a new scheme: so that the magnetic component cabinet can be replaced, a new type of component cabinet has been developed in which integrated circuits replace the magnetic components of every functional part. The author analyzes the overall failure causes of the magnetic component cabinet and the measures adopted.
Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.
Gupta, Rajarshi
2016-05-01
Electrocardiogram (ECG) compression finds wide application in various patient monitoring purposes. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality aware compression method of single lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, two independent quality criteria, namely, bit rate control (BRC) or error control (EC) criteria were set to select optimal principal components, eigenvectors and their quantization level to achieve desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT Arrhythmia data and 60 normal and 30 sets of diagnostic ECG data from PTB Diagnostic ECG data ptbdb, all at 1 kHz sampling. For BRC with a CR threshold of 40, an average Compression Ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV respectively were obtained. For EC with an upper limit of 5 % PRDN and 0.1 mV MAE, the average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV respectively were obtained. For mitdb data 117, the reconstruction quality could be preserved up to CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality controlled ECG compression.
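The core PCA compress/reconstruct step of such a scheme can be sketched as below. The synthetic beats are illustrative, the PRDN here is normalized to the mean-removed signal, and the paper's quantization, delta and Huffman encoding stages are omitted:

```python
import numpy as np

def pca_compress(beats, n_keep):
    """Compress aligned ECG beats (n_beats x n_samples) by keeping n_keep
    principal components; returns the reconstruction and PRDN in percent."""
    mean = beats.mean(axis=0)
    centered = beats - mean
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    coeffs = centered @ Vt[:n_keep].T        # compact representation to encode
    recon = coeffs @ Vt[:n_keep] + mean
    # PRDN: residual energy relative to the mean-removed signal energy.
    prdn = 100 * np.linalg.norm(beats - recon) / np.linalg.norm(centered)
    return recon, prdn

# Synthetic beats: one dominant morphology, small beat-to-beat variation, noise.
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 250)
base = np.exp(-((t - 0.5) ** 2) / 0.002)     # crude QRS-like bump
beats = base + 0.1 * np.outer(rng.normal(size=40), np.sin(2 * np.pi * t))
beats += 0.005 * rng.normal(size=beats.shape)
recon, prdn = pca_compress(beats, n_keep=2)
```

A quality-aware variant would iterate n_keep (and the quantizer step) until the measured PRDN or bit rate meets the chosen EC or BRC criterion.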
Multi-component controllers in reactor physics optimality analysis
Aldemir, T.
1978-01-01
An algorithm is developed for the optimality analysis of thermal reactor assemblies with multi-component control vectors. The neutronics of the system under consideration is assumed to be described by the two-group diffusion equations and constraints are imposed upon the state and control variables. It is shown that if the problem is such that the differential and algebraic equations describing the system can be cast into a linear form via a change of variables, the optimal control components are piecewise constant functions and the global optimal controller can be determined by investigating the properties of the influence functions. Two specific problems are solved utilizing this approach. A thermal reactor consisting of fuel, burnable poison and moderator is found to yield maximal power when the assembly consists of two poison zones and the power density is constant throughout the assembly. It is shown that certain variational relations have to be considered to maintain the activeness of the system equations as differential constraints. The problem of determining the maximum initial breeding ratio for a thermal reactor is solved by treating the fertile and fissile material absorption densities as controllers. The optimal core configurations are found to consist of three fuel zones for a bare assembly and two fuel zones for a reflected assembly. The optimum fissile material density is determined to be inversely proportional to the thermal flux
Constrained Null Space Component Analysis for Semiblind Source Separation Problem.
Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn
2018-02-01
The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on the sources or on how they are mixed. The constrained independent component analysis (ICA) approach has been studied as a way to impose constraints on the well-known ICA framework. We introduce an alternative approach based on the null space component analysis (NCA) framework, referred to as the c-NCA approach. We also present the c-NCA algorithm, which uses signal-dependent semidefinite operators, a bilinear mapping, as signatures for operator design in the c-NCA approach. Theoretically, we show that the source estimates of the c-NCA algorithm converge, with a convergence rate that depends on the decay of the sequence obtained by applying the estimated operators to the corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization problem, and thus it can take advantage of solvers developed in the optimization community for solving the BSS problem. As examples, we demonstrate that electroencephalogram interference-rejection problems can be solved by the c-NCA with proximal splitting algorithms, by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.
Autonomous learning in gesture recognition by using lobe component analysis
Lu, Jian; Weng, Juyang
2007-02-01
Gesture recognition is a new human-machine interface method implemented by pattern recognition (PR). To assure robot safety when gestures are used in robot control, the interface must be implemented reliably and accurately. As in other PR applications, two factors largely determine the performance of gesture recognition: 1) feature selection (or model establishment) and 2) training from samples. For 1), a simple model with six feature points at the shoulders, elbows, and hands is established. The gestures to be recognized are restricted to still arm gestures, and the movement of the arms is not considered. These restrictions reduce misrecognition and are not unreasonable. For 2), a new biological network method, called lobe component analysis (LCA), is used for unsupervised learning. Lobe components, corresponding to high concentrations in the probability density of the neuronal input, are orientation-selective cells that follow the Hebbian rule with lateral inhibition. Owing to the advantage of the LCA method in balancing learning between global and local features, a large number of samples can be used efficiently in learning.
Improvement of retinal blood vessel detection using morphological component analysis.
Imani, Elaheh; Javidi, Malihe; Pourreza, Hamid-Reza
2015-03-01
Detection and quantitative measurement of variations in the retinal blood vessels can help diagnose several diseases, including diabetic retinopathy. Intrinsic characteristics of abnormal retinal images make blood vessel detection difficult. The major problem with traditional vessel segmentation algorithms is that they produce false positive vessels in the presence of diabetic retinopathy lesions. To overcome this problem, a novel scheme for extracting retinal blood vessels based on the morphological component analysis (MCA) algorithm is presented in this paper. MCA was developed based on sparse representation of signals. This algorithm assumes that each signal is a linear combination of several morphologically distinct components. In the proposed method, the MCA algorithm with appropriate transforms is adopted to separate vessels and lesions from each other. Afterwards, the Morlet wavelet transform is applied to enhance the retinal vessels. The final vessel map is obtained by adaptive thresholding. The performance of the proposed method is measured on the publicly available DRIVE and STARE datasets and compared with several state-of-the-art methods. Accuracies of 0.9523 and 0.9590 were achieved on the DRIVE and STARE datasets, respectively, which not only exceed those of most methods but are also superior to the second human observer's performance. The results show that the proposed method can achieve improved detection in abnormal retinal images and decrease false positive vessels in pathological regions compared to other methods. The robustness of the method in the presence of noise is also shown via experimental results.
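The sparsity-driven separation at the heart of MCA can be illustrated with a toy one-dimensional analogue: a Fourier dictionary for the smooth component and a Dirac (identity) dictionary for isolated spikes, separated by alternating thresholding with a decreasing threshold. The paper uses transforms adapted to vessels and lesions, so this is only a sketch of the mechanism, with parameters chosen by us.

```python
import numpy as np

def mca_separate(y, n_iter=30):
    """Toy morphological component analysis: split a 1-D signal into a
    component sparse in the Fourier basis (smooth oscillation) and a
    component sparse in the Dirac basis (isolated spikes), by
    alternately hard-thresholding each representation while the
    threshold decreases geometrically."""
    n = len(y)
    spikes = np.zeros(n)
    smooth = np.zeros(n)
    delta = np.abs(y).max()
    for it in range(n_iter):
        thr = delta * 0.75 ** it
        # Smooth part: hard-threshold Fourier coefficients of the residual
        coef = np.fft.rfft(y - spikes)
        coef[np.abs(coef) < thr * n / 2] = 0
        smooth = np.fft.irfft(coef, n)
        # Spike part: hard-threshold the residual in the sample domain
        resid = y - smooth
        spikes = np.where(np.abs(resid) > thr, resid, 0.0)
    return smooth, spikes
```

In the retinal setting, "smooth" plays the role of one morphology (e.g. lesions/background captured by one transform) and "spikes" the other (vessel-like structures in a vessel-adapted transform).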
Analysis of tangible and intangible hotel service quality components
Marić Dražen
2016-01-01
Full Text Available The issue of service quality is one of the essential areas of marketing theory and practice, as high quality can lead to customer satisfaction and loyalty, i.e. successful business results. It is vital for any company, especially in services sector, to understand and grasp the consumers' expectations and perceptions pertaining to the broad range of factors affecting consumers' evaluation of services, their satisfaction and loyalty. Hospitality is a service sector where the significance of these elements grows exponentially. The aim of this study is to identify the significance of individual quality components in hospitality industry. The questionnaire used for gathering data comprised 19 tangible and 14 intangible attributes of service quality, which the respondents rated on a five-degree scale. The analysis also identified the factorial structure of the tangible and intangible elements of hotel service. The paper aims to contribute to the existing literature by pointing to the significance of tangible and intangible components of service quality. A very small number of studies conducted in hospitality and hotel management identify the sub-factors within these two dimensions of service quality. The paper also provides useful managerial implications. The obtained results help managers in hospitality to establish the service offers that consumers find the most important when choosing a given hotel.
Analysis of European Union Economy in Terms of GDP Components
Simona VINEREAN
2013-12-01
Full Text Available The impact of the crises on national economies represented a subject of analysis and interest for a wide variety of research studies. Thus, starting from the GDP composition, the present research exhibits an analysis of the impact of European economies, at an EU level, of the events that followed the crisis of 2007 – 2008. Firstly, the research highlighted the existence of two groups of countries in 2012 in European Union, namely segments that were compiled in relation to the structure of the GDP’s components. In the second stage of the research, a factor analysis was performed on the resulted segments, that showed that the economies of cluster A are based more on personal consumption compared to the economies of cluster B, and in terms of government consumption, the situation is reversed. Thus, between the two groups of countries, a different approach regarding the role of fiscal policy in the economy can be noted, with a greater emphasis on savings in cluster B. Moreover, besides the two groups of countries resulted, Ireland and Luxembourg stood out because these two countries did not fit in either of the resulted segments and their economies are based, to a large extent, on the positive balance of the external balance.
Principal component analysis of 1/f^α noise
Gao, J.B.; Cao Yinhe; Lee, J.-M.
2003-01-01
Principal component analysis (PCA) is a popular data analysis method. One of the motivations for using PCA in practice is to reduce the dimension of the original data by projecting the raw data onto a few dominant eigenvectors with large variance (energy). Due to the ubiquity of 1/f^α noise in science and engineering, in this Letter we study the prototypical stochastic model for 1/f^α processes, the fractional Brownian motion (fBm) process, using PCA, and find that the eigenvalues from PCA of fBm processes follow a power law, with the exponent being the key parameter defining the fBm process. We also study random-walk-type processes constructed from DNA sequences, and find that the eigenvalue spectrum from PCA of those random-walk processes also follows power-law relations, with the exponent characterizing the correlation structure of the DNA sequence. In fact, it is observed that PCA can automatically remove linear trends induced by patchiness in the DNA sequence; hence, PCA has a capability similar to that of detrended fluctuation analysis. Implications of the power-law-distributed eigenvalue spectrum are discussed
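The central observation, that the PCA eigenvalues of 1/f^α processes follow a power law, can be reproduced numerically for ordinary Brownian motion (fBm with H = 0.5), whose Karhunen-Loeve eigenvalues decay roughly as k^-2. A sketch under those assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Many realizations of ordinary Brownian motion, built as
# cumulative sums of white Gaussian noise.
n_real, n_pts = 500, 256
walks = np.cumsum(rng.standard_normal((n_real, n_pts)), axis=1)

# PCA: eigenvalues of the sample covariance across realizations.
X = walks - walks.mean(axis=0)
eigvals = np.linalg.eigvalsh(X.T @ X / (n_real - 1))[::-1]

# For a 1/f^alpha process the eigenvalue spectrum follows a power
# law; fit log(lambda_k) ~ -beta * log(k) over the leading
# eigenvalues.  For Brownian motion beta is close to 2 (somewhat
# steeper at the very smallest k, since lambda_k ~ (k - 1/2)^-2).
k = np.arange(1, 21)
beta = -np.polyfit(np.log(k), np.log(eigvals[:20]), 1)[0]
```

Repeating this with fBm generators at other Hurst exponents would change the fitted slope, which is the paper's point: the decay exponent encodes the process parameter.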
Surface composition of biomedical components by ion beam analysis
Kenny, M.J.; Wielunski, L.S.; Baxter, G.R.
1991-01-01
Materials used for replacement body parts must satisfy a number of requirements, such as biocompatibility and the mechanical ability to handle the task with regard to strength, wear and durability. When using a CVD-coated carbon-fibre-reinforced carbon ball, the surface must be ion implanted with a uniform dose of nitrogen ions in order to make it wear resistant. The mechanism by which the wear resistance is improved is one of radiation damage, and the required dose of about 10^16 cm^-2 can have a tolerance of about 20%. Implanting a spherical surface requires manipulation of the sample within the beam, and a control system (either computer or manually operated) to achieve a uniform dose all the way from the polar to the equatorial regions of the surface. A manipulator has been designed and built for this purpose. To establish whether the dose is uniform, nuclear reaction analysis using the reaction ^14N(d,α)^12C is an ideal method of profiling. By taking measurements at a number of points on the surface, the uniformity of the nitrogen dose can be ascertained. It is concluded that both Rutherford backscattering and nuclear reaction analysis can be used for rapid analysis of the surface composition of carbon-based materials used for replacement body components.
Recursive Principal Components Analysis Using Eigenvector Matrix Perturbation
Deniz Erdogmus
2004-10-01
Principal components analysis is an important and well-studied subject in statistics and signal processing. The literature has an abundance of algorithms for solving this problem, most of which can be grouped into one of the following three approaches: adaptation based on Hebbian updates and deflation, optimization of a second-order statistical criterion (like reconstruction error or output variance), and fixed-point update rules with deflation. In this paper, we take a completely different approach that avoids deflation and the optimization of a cost function using gradients. The proposed method updates the eigenvector and eigenvalue matrices simultaneously with every new sample, such that the estimates approximately track their true values as would be calculated from the current sample estimate of the data covariance matrix. The performance of this algorithm is compared with that of traditional methods like Sanger's rule and APEX, as well as a structurally similar matrix perturbation-based method.
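For contrast with the paper's perturbation-based update, the naive recursive baseline keeps a running covariance estimate and fully re-diagonalizes it at every sample; the perturbation method's whole point is to avoid that repeated eigendecomposition. A minimal sketch of the baseline (class and method names are ours):

```python
import numpy as np

class RecursivePCA:
    """Naive recursive PCA baseline: rank-one (Welford-style) update
    of the running covariance per sample, followed by a full
    eigendecomposition.  Illustrates the tracking setup only; the
    paper replaces the eigendecomposition with a first-order
    eigenvector/eigenvalue perturbation update."""

    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.cov = np.zeros((dim, dim))

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        # Rank-one update of the running covariance estimate
        self.cov += (np.outer(delta, x - self.mean) - self.cov) / self.n
        w, V = np.linalg.eigh(self.cov)
        return w[::-1], V[:, ::-1]    # eigenvalues/eigenvectors, descending
```

The O(d^3) `eigh` per sample is exactly the cost the perturbation approach amortizes away.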
Preliminary study of soil permeability properties using principal component analysis
Yulianti, M.; Sudriani, Y.; Rustini, H. A.
2018-02-01
Soil permeability measurement is undoubtedly important in carrying out soil-water research such as rainfall-runoff modelling, irrigation water distribution systems, etc. It is also known that acquiring reliable soil permeability data is laborious, time-consuming, and costly. It is therefore desirable to develop a prediction model. Several studies of empirical equations for predicting permeability have been undertaken. These studies derived their models from areas whose soil characteristics differ from Indonesian soils, which suggests that such permeability models may be site-specific. The purpose of this study is to identify which soil parameters correspond most strongly to soil permeability and to propose a preliminary model for permeability prediction. Principal component analysis (PCA) was applied to 16 parameters analysed at 37 sites comprising 91 samples obtained from the Batanghari Watershed. The findings indicated five variables that correlate strongly with soil permeability, and we recommend a preliminary permeability model with potential for further development.
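The PCA step of such a screening study is usually run on the correlation matrix, since soil parameters carry different units. A generic sketch (not the study's 16-parameter data; function name is ours):

```python
import numpy as np

def pca_loadings(X):
    """PCA on the correlation matrix (i.e. variables implicitly
    standardized), the usual choice when parameters have different
    units.  Returns explained-variance ratios and loadings
    (columns = components)."""
    R = np.corrcoef(X, rowvar=False)
    w, V = np.linalg.eigh(R)
    order = np.argsort(w)[::-1]
    w, V = w[order], V[:, order]
    return w / w.sum(), V
```

Variables with large absolute loadings on the components that also load permeability are the candidates that "correspond strongly" to it in the sense used above.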
Principal Component Analysis for Normal-Distribution-Valued Symbolic Data.
Wang, Huiwen; Chen, Meiling; Shi, Xiaojun; Li, Nan
2016-02-01
This paper puts forward a new approach to principal component analysis (PCA) for normal-distribution-valued symbolic data, which has vast potential for applications in economics and management. We derive a full set of numerical characteristics and the variance-covariance structure for such data, which forms the foundation for our analytical PCA approach. Our approach is able to use all of the variance information in the original data, rather than only the centers, vertices, etc. used by the prevailing representative-type approaches in the literature. The paper also provides an accurate approach to constructing the observations in PC space based on the linear additivity property of the normal distribution. The effectiveness of the proposed method is illustrated by simulated numerical experiments. Finally, our method is applied to explain the puzzle of the risk-return tradeoff in China's stock market.
Iris recognition based on robust principal component analysis
Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong
2014-11-01
Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieved competitive performance in both recognition accuracy and computational efficiency.
Size distribution measurements and chemical analysis of aerosol components
Pakkanen, T.A.
1995-12-31
The principal aims of this work were to improve existing methods for size distribution measurements and to draw conclusions about atmospheric and in-stack aerosol chemistry and physics by utilizing the measured size distributions of various aerosol components. Sample dissolution with dilute nitric acid in an ultrasonic bath and subsequent graphite furnace atomic absorption spectrometric analysis was found to result in low blank values and good recoveries for several elements in atmospheric fine particle size fractions below 2 µm equivalent aerodynamic diameter (EAD). Furthermore, it turned out that a substantial amount of the analytes associated with insoluble material could be recovered, since suspensions were formed. The size distribution measurements of in-stack combustion aerosols indicated bimodal size distributions for most components measured. The existence of the fine particle mode suggests that a substantial fraction of the elements with bimodal size distributions may vaporize and nucleate during the combustion process. In southern Norway, size distributions of atmospheric aerosol components usually exhibited one or two fine particle modes and one or two coarse particle modes. Atmospheric relative humidity values higher than 80% resulted in a significant increase in the mass median diameters of the droplet mode. Important local and/or regional sources of As, Br, I, K, Mn, Pb, Sb, Si and Zn were found to exist in southern Norway. The existence of these sources was reflected in the corresponding size distributions determined, and was utilized in the development of a source identification method based on size distribution data. On the Finnish south coast, atmospheric coarse particle nitrate was found to be formed mostly through an atmospheric reaction of nitric acid with existing coarse particle sea salt, but reactions and/or adsorption of nitric acid with soil-derived particles also occurred. Chloride was depleted when acidic species reacted
Variational Bayesian Learning for Wavelet Independent Component Analysis
Roussos, E.; Roberts, S.; Daubechies, I.
2005-11-01
In an exploratory approach to data analysis, it is often useful to consider the observations as generated from a set of latent generators or "sources" via a generally unknown mapping. For the noisy overcomplete case, where we have more sources than observations, the problem becomes extremely ill-posed. Solutions to such inverse problems can, in many cases, be achieved by incorporating prior knowledge about the problem, captured in the form of constraints. This setting is a natural candidate for the application of the Bayesian methodology, allowing us to incorporate "soft" constraints in a natural manner. The work described in this paper is mainly driven by problems in functional magnetic resonance imaging of the brain, for the neuro-scientific goal of extracting relevant "maps" from the data. This can be stated as a 'blind' source separation problem. Recent experiments in the field of neuroscience show that these maps are sparse, in some appropriate sense. The separation problem can be solved by independent component analysis (ICA), viewed as a technique for seeking sparse components, assuming appropriate distributions for the sources. We derive a hybrid wavelet-ICA model, transforming the signals into a domain where the modeling assumption of sparsity of the coefficients with respect to a dictionary is natural. We follow a graphical modeling formalism, viewing ICA as a probabilistic generative model. We use hierarchical source and mixing models and apply Bayesian inference to the problem. This allows us to perform model selection in order to infer the complexity of the representation, as well as automatic denoising. Since exact inference and learning in such a model is intractable, we follow a variational Bayesian mean-field approach in the conjugate-exponential family of distributions, for efficient unsupervised learning in multi-dimensional settings. The performance of the proposed algorithm is demonstrated on some representative experiments.
A Principal Component Analysis of 39 Scientific Impact Measures
Bollen, Johan; Van de Sompel, Herbert
2009-01-01
Background: The impact of scientific publications has traditionally been expressed in terms of citation counts. However, scientific activity has moved online over the past decade. To better capture scientific impact in the digital era, a variety of new impact measures has been proposed on the basis of social network analysis and usage log data. Here we investigate how these new measures relate to each other, and how accurately and completely they express scientific impact. Methodology: We performed a principal component analysis of the rankings produced by 39 existing and proposed measures of scholarly impact that were calculated on the basis of both citation and usage log data. Conclusions: Our results indicate that the notion of scientific impact is a multi-dimensional construct that cannot be adequately measured by any single indicator, although some measures are more suitable than others. The commonly used citation Impact Factor is not positioned at the core of this construct, but at its periphery, and should thus be used with caution. PMID:19562078
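The rank-then-PCA idea can be sketched as follows: rank-transform each measure, take the eigenvalues of the resulting (Spearman-type) correlation matrix, and read multi-dimensionality off the spread of explained variance. Illustrative only, not the paper's 39-measure dataset; the function name is ours.

```python
import numpy as np

def ranking_pca(scores):
    """PCA of rankings: rank-transform each impact measure (one
    column per measure), then return the explained-variance ratios of
    the rank correlation matrix.  Variance spread beyond PC1 is the
    signal that 'impact' is multi-dimensional."""
    ranks = scores.argsort(axis=0).argsort(axis=0).astype(float)
    R = np.corrcoef(ranks, rowvar=False)
    w = np.sort(np.linalg.eigvalsh(R))[::-1]
    return w / w.sum()
```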
Analysis of contaminants on electronic components by reflectance FTIR spectroscopy
Griffith, G.W.
1982-09-01
The analysis of electronic component contaminants by infrared spectroscopy is often a difficult process. Most of the contaminants are very small, which necessitates the use of microsampling techniques. Beam condensers will provide the required sensitivity, but most require that the sample be removed from the substrate before analysis. Since removal can be difficult and time-consuming, this is usually an undesirable approach. Micro-ATR work can also be exasperating, owing to the difficulty of positioning the sample at the correct place under the ATR plate in order to record a spectrum. This paper describes a modified reflection beam condenser which has been adapted to a Nicolet 7199 FTIR. The sample beam is directed onto the sample surface and reflected from the substrate back to the detector. A micropositioning XYZ stage and a close-focusing telescope are used to position the contaminant directly under the infrared beam. With this device it is possible to analyze contaminants on 1 mm wide leads surrounded by an epoxy matrix. Typical spectra of contaminants found on small circuit boards are included
CNN-Based Retinal Image Upscaling Using Zero Component Analysis
Nasonov, A.; Chesnakov, K.; Krylov, A.
2017-05-01
The aim of the paper is to obtain high-quality image upscaling for noisy images, which are typical in medical image processing. A new training scenario for a convolutional neural network-based image upscaling method is proposed. Its main idea is a novel dataset preparation method for deep learning. The dataset contains pairs of noisy low-resolution images and corresponding noiseless high-resolution images. To achieve better results at edges and in textured areas, Zero Component Analysis is applied to these images. The upscaling results are compared with other state-of-the-art methods like DCCI, SI-3 and SRCNN on noisy medical ophthalmological images. Objective evaluation of the results confirms the high quality of the proposed method. Visual analysis shows that fine details and structures like blood vessels are preserved, the noise level is reduced, and no artifacts or non-existing details are added. These properties are essential in establishing a retinal diagnosis, so the proposed algorithm is recommended for use in real medical applications.
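Zero Component Analysis here refers to ZCA whitening, which decorrelates features while staying as close as possible to the original data (the whitening matrix is symmetric, W = U diag(1/√(s+ε)) Uᵀ). A minimal sketch; ε is a small regularizer of our choosing.

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA whitening of a data matrix X (rows = samples): rotate to
    the eigenbasis of the covariance, rescale each direction to unit
    variance, rotate back.  eps guards against division by tiny
    eigenvalues."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    s, U = np.linalg.eigh(cov)
    W = U @ np.diag(1.0 / np.sqrt(s + eps)) @ U.T
    return Xc @ W
```

Applied to training patches, this equalizes low- and high-frequency content, which is why it helps the network at edges and textures.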
A survival analysis on critical components of nuclear power plants
Durbec, V.; Pitner, P.; Riffard, T.
1995-06-01
Some tubes in heat exchangers of nuclear power plants may be affected by primary water stress corrosion cracking (PWSCC) in highly stressed areas. These defects can shorten the lifetime of the component and lead to its replacement. In order to reduce the risk of cracking, a preventive remedial operation called shot peening was applied on the French reactors between 1985 and 1988. To assess and investigate the effects of shot peening, a statistical analysis was carried out on the tube degradation results obtained from the in-service inspections that are regularly conducted using non-destructive tests. The statistical method used is based on the Cox proportional hazards model, a powerful tool in the analysis of survival data, implemented in PROC PHREG, recently made available in SAS/STAT. This technique has a number of major advantages, including the ability to deal with censored failure-time data and with the complication of time-dependent covariables. The paper focuses on the modelling and on a presentation of the results given by SAS. These provide estimates of how the relative risk of degradation changes after peening, and indicate for which values of the analyzed prognostic factors the treatment is likely to be most beneficial.
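The Cox model at the heart of this analysis can be sketched in a few lines for a single covariate (e.g. peened vs. not peened). This is a didactic gradient-ascent version of what PROC PHREG fits, with Breslow-style risk sets for ties; real analyses should use a vetted implementation.

```python
import numpy as np

def cox_beta(time, event, x, n_steps=500, lr=0.1):
    """Gradient ascent on the Cox partial log-likelihood for a single
    covariate x.  event is True where a failure was observed, False
    where the observation is censored; the risk set at an event time
    is everyone with time >= that event time."""
    time = np.asarray(time, float)
    event = np.asarray(event, bool)
    x = np.asarray(x, float)
    beta = 0.0
    for _ in range(n_steps):
        w = np.exp(beta * x)                  # relative risks
        grad = 0.0
        for i in np.flatnonzero(event):
            at_risk = time >= time[i]
            grad += x[i] - np.dot(w[at_risk], x[at_risk]) / w[at_risk].sum()
        beta += lr * grad / event.sum()       # averaged gradient step
    return beta
```

A fitted β < 0 for the peening indicator corresponds to the paper's finding of a reduced relative risk of degradation after treatment; exp(β) is the hazard ratio.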
Pi Ting; Zhang Yunqing; Chen Liping
2012-01-01
Design sensitivity analysis of flexible multibody systems is important in optimizing the performance of mechanical systems. The choice of coordinates to describe the motion of multibody systems has a great influence on the efficiency and accuracy of both the dynamic and the sensitivity analysis. In flexible multibody system dynamics, both the floating frame of reference formulation (FFRF) and the absolute nodal coordinate formulation (ANCF) are frequently used to describe flexibility; however, only the former has been used in design sensitivity analysis. In this article, ANCF, which has been developed recently and focuses on modeling beams and plates in large deformation problems, is extended to the design sensitivity analysis of flexible multibody systems. The motion equations of a constrained flexible multibody system are expressed as a set of index-3 differential algebraic equations (DAEs), in which the element elastic forces are defined using nonlinear strain-displacement relations. Both the direct differentiation method and the adjoint variable method are used for the sensitivity analysis, and the related dynamic and sensitivity equations are integrated with the HHT-I3 algorithm. In this paper, a new method to derive the system sensitivity equations is proposed. With this approach, the system sensitivity equations are constructed by assembling the element sensitivity equations with the help of invariant matrices, with the advantage that complex symbolic differentiation of the dynamic equations is avoided when the flexible multibody system model is changed. In addition, the dynamic and sensitivity equations formed with the proposed method can be efficiently integrated using the HHT-I3 method, which makes the efficiency of the direct differentiation method comparable to that of the adjoint variable method when the number of design variables is not extremely large. All these improvements greatly enhance the application value of the direct differentiation method.
Response spectrum analysis of coupled structural response to a three-component seismic disturbance
Boulet, J.A.M.; Carley, T.G.
1977-01-01
The work discussed herein is a comparison and evaluation of several response spectrum analysis (RSA) techniques as applied to the same structural model with seismic excitation having three spatial components. The structural model includes five lumped masses (floors) connected by four elastic members. The base is supported by three translational springs and two horizontal torsional springs. In general, the mass center and shear center of a building floor are distinct locations; hence, inertia forces, which act at the mass center, induce twisting in the structure. Through this induced torsion, the lateral (x and y) displacements of the mass elements are coupled. The ground motion components used for this study are artificial earthquake records generated from recorded accelerograms by a spectrum modification technique. The accelerograms have response spectra compatible with U.S. NRC Regulatory Guide 1.60. Lagrange's equations of motion for the system were written in matrix form and uncoupled with the modal matrix. Numerical integration (fourth-order Runge-Kutta) of the resulting modal equations produced time histories of system displacements in response to simultaneous application of three orthogonal components of ground motion, and displacement response spectra for each modal coordinate in response to each of the three ground motion components. Five different RSA techniques were used to combine the spectral displacements and the modal matrix to give approximations of the maximum system displacements. These approximations were then compared with the maximum system displacements taken from the time histories. The RSA techniques used are the method of absolute sums, the square root of the sum of the squares, the double sum approach, the method of closely spaced modes, and Lin's method
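Three of the five combination rules compared are straightforward to state in code; the closely-spaced-modes method and Lin's method need additional modal data and are omitted from this sketch.

```python
import numpy as np

def combine_abs_sum(r):
    """Absolute-sum rule: conservative upper bound on the peak response."""
    return np.sum(np.abs(r))

def combine_srss(r):
    """Square root of the sum of the squares, appropriate when the
    modal frequencies are well separated."""
    return np.sqrt(np.sum(np.square(r)))

def combine_double_sum(r, rho):
    """Double-sum rule with a modal correlation matrix rho; rho = I
    recovers SRSS, while off-diagonal terms account for closely
    spaced modes."""
    r = np.asarray(r, float)
    return np.sqrt(r @ rho @ r)
```

The time-history maxima described in the abstract play the role of the ground truth against which these combined estimates are judged.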
Wenxia Zhai
2017-06-01
The relationship between transportation infrastructure investment and regional economic growth has been a focus of domestic and foreign academic research. Using coupling degree and coupling coordination degree models, this paper calculates the coupling degree and coupling coordination degree between the comprehensive level of transportation infrastructure investment and economic development in Hubei province and its 17 cities, and analyzes their temporal and spatial characteristics. The results show that, from 2001 to 2013, the coupling and coupling coordination between transportation infrastructure investment and economic development in Hubei province rose steadily over time, progressing through the uncoordinated, nearly uncoordinated, barely coordinated, and intermediately coordinated stages. In 2013, the coupling and coupling coordination of transportation infrastructure investment and economic development in the 17 prefecture-level cities of Hubei province showed a very uneven spatial distribution: good coordination, primary coordination, barely coordinated, and barely uncoordinated cities are all found across the province. The average coordination degree of the 17 prefecture-level cities in Hubei is relatively low, and there is a negative trend of widening differences. This study confirms that transportation infrastructure investment and economic development stand in an interactive coupling and coordination relationship, but in different regions and at different stages the degree of coordination shows obvious spatial and temporal differences.
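In this literature the coupling degree C and coupling coordination degree D for two subsystems with normalized development indices u1 and u2 are usually computed as C = 2√(u1·u2)/(u1+u2), T = a·u1 + b·u2, D = √(C·T); the paper's exact weighting may differ, so treat the formulas and default weights below as the common convention rather than the authors' model.

```python
import numpy as np

def coupling_coordination(u1, u2, a=0.5, b=0.5):
    """Two-system coupling degree C and coupling coordination degree D
    for development indices u1, u2 in (0, 1].  C measures how balanced
    the two subsystems are (C = 1 when u1 == u2); T is their weighted
    overall level; D combines both."""
    C = 2.0 * np.sqrt(u1 * u2) / (u1 + u2)
    T = a * u1 + b * u2
    D = np.sqrt(C * T)
    return C, D
```

Thresholds on D (e.g. "barely coordinated", "intermediately coordinated") then yield the stage labels used in the abstract.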
Chattopadhyay, Goutami; Chattopadhyay, Surajit; Chakraborthy, Parthasarathi
2012-07-01
The present study deals with daily total ozone concentration time series over four metro cities of India, namely Kolkata, Mumbai, Chennai, and New Delhi, in a multivariate setting. Using the Kaiser-Meyer-Olkin measure, it is established that the data set under consideration is suitable for principal component analysis. Subsequently, by introducing the rotated component matrix for the principal components, the predictors suitable for generating an artificial neural network (ANN) for daily total ozone prediction are identified; multicollinearity is removed in this way. ANN models in the form of multilayer perceptrons trained through backpropagation learning are generated for all of the study zones, and the model outcomes are assessed statistically. Measuring various statistics such as Pearson correlation coefficients, Willmott's indices, percentage errors of prediction, and mean absolute errors, it is observed that the proposed ANN model generates very good predictions for Mumbai and Kolkata. The results are supported by the linearly distributed coordinates in the scatterplots.
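The "rotated component matrix" step is typically a varimax rotation of the PCA loadings, which drives each variable toward loading on a single component and so yields interpretable, nearly uncorrelated predictors. The classic SVD-based varimax iteration can be sketched as follows; this is a generic implementation, not the study's software.

```python
import numpy as np

def varimax(Phi, gamma=1.0, n_iter=100, tol=1e-8):
    """Varimax rotation of a loading matrix Phi (variables x factors).
    Iteratively finds the orthogonal rotation R maximizing the
    variance of the squared loadings (Kaiser's criterion) and returns
    the rotated loadings Phi @ R."""
    p, k = Phi.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(n_iter):
        Lam = Phi @ R
        u, s, vh = np.linalg.svd(
            Phi.T @ (Lam ** 3 - (gamma / p) * Lam @ np.diag(np.diag(Lam.T @ Lam))))
        R = u @ vh
        d_new = s.sum()
        if d_new < d * (1 + tol):   # converged: criterion stopped improving
            break
        d = d_new
    return Phi @ R
```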
Failure characteristic analysis of a component on standby state
Shin, Sungmin; Kang, Hyungook
2013-01-01
Periodic operation of certain types of components can accelerate aging effects, which increases component unavailability. For other types of components, the aging effect caused by operation can be ignored, so frequent operation can decrease component unavailability. Thus, to obtain optimal unavailability, the proper operation period and method should be studied considering the failure characteristics of each component. Information on component failure is given according to the main causes of failure as they depend on time. However, to obtain the optimal unavailability, the proper interval of operation for inspection should be decided considering the time-dependent and time-independent causes together. According to this study, a gradually shortening operation interval for inspection yields better optimal component unavailability than a fixed period
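The trade-off described above, where more frequent testing catches latent failures sooner but adds test-caused downtime (and, for some components, aging), is often illustrated with a first-order standby-unavailability model. A textbook sketch under assumed parameter values (`lam` and `t_test` are our illustrative choices, not figures from the paper):

```python
import numpy as np

def mean_unavailability(T, lam, t_test):
    """Average unavailability of a periodically tested standby
    component with test interval T: roughly lam*T/2 from undetected
    random failures, plus t_test/T downtime spent in testing.  A
    first-order textbook model; the paper's model additionally
    includes test-induced aging."""
    return lam * T / 2.0 + t_test / T

# Minimizing lam*T/2 + t_test/T analytically gives T* = sqrt(2*t_test/lam);
# here we locate it on a grid instead.
lam, t_test = 1e-4, 2.0          # per-hour failure rate, hours per test
T = np.linspace(10, 2000, 20000)
q = mean_unavailability(T, lam, t_test)
T_opt = T[np.argmin(q)]
```

With these numbers the optimum sits near 200 h; adding an aging term that grows with the number of tests is what pushes the optimal schedule toward the gradually changing intervals the study recommends.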
Lin, Blossom Yen-Ju; Lin, Yung-Kai; Lin, Cheng-Chieh
2010-01-01
Previous empirical and managerial studies have ignored the effectiveness of integrated health networks. It has been argued that the varying definitions and strategic imperatives of integrated organizations may have complicated the assessment of the outcomes/performance of varying models, particularly when their market structures and contexts differed. This study aimed to empirically verify a theoretical perspective on the coordination infrastructure designs and the effectiveness of the primary community care networks (PCCNs) formed and funded by the Bureau of National Health Insurance since March 2003. The PCCNs present a model to replace the traditional fragmented providers in Taiwan's health care. The study used a cross-sectional mailed survey designed to ascertain partnership coordination infrastructure and integration of governance, clinical care, bonding, finances, and information. The outcome indicators were PCCNs' perceived performance and willingness to remain within the network. Structural equation modeling examined the causal relationships, controlling for organizational and environmental factors. Primary data collection occurred from February through December 2005, via structured questionnaires sent to 172 PCCNs. Using the individual PCCN as the unit of analysis, the results found that a network's efforts regarding coordination infrastructures were positively related to the PCCN's perceived performance and willingness to remain within the network. In addition, PCCNs practicing in rural areas and in areas with a higher density of medical resources had better perceived effectiveness and willingness to cooperate in the network. Practical implication: the lack of both an operational definition and information about system-wide integration may have obstructed understanding of integrated health networks' organizational dynamics. This study empirically examined individual PCCNs and offers new insights on how to improve networks' organizational design and
Letícia Carrillo Maronesi
2015-07-01
Introduction: Children’s motor skills evolve according to age and the continuing influence of intrinsic and extrinsic factors that cause variations from one child to another; this makes the course of development unique in each child. Objective: To develop an intervention for a child with delays in fine motor coordination, gross motor coordination and balance, and to analyze its impact on the child’s development. Methods: Pre- and post-test quasi-experimental design. The instrument used was the Motor Development Scale applied to a 4-year-old child. An intervention plan was developed based on the results obtained through the tests. The plan consists of activities designed to stimulate the aforementioned acquisitions. The implementation of the intervention plan lasted two months. The child was tested at the beginning and at the end of the intervention to determine whether there was gain in the stimulated acquisitions. The JT method was adopted for data analysis and verification of the occurrence of reliable and clinically relevant positive changes. Results: The results of this study demonstrate that reliable positive changes occurred with respect to the psychomotor items that underwent stimulation. Conclusion: It is possible to infer that this intervention had a positive effect on the child’s development. Hence, this study contributes to improving the care provided to children with delayed psychomotor development, illustrating possible strategies and activities. It also allows the recognition of the action of occupational therapists as one of the professionals who compose the multidisciplinary team focused on early intervention.
Zhou Yu; Yu Zuguo; Anh, Vo
2007-01-01
The 3-dimensional coordinates of the alpha-carbon atoms of proteins are used to distinguish protein structural classes based on recurrence quantification analysis (RQA). We consider two independent variables from RQA of the alpha-carbon coordinates, %determ1 and %determ2, which were defined by Webber et al. [C.L. Webber Jr., A. Giuliani, J.P. Zbilut, A. Colosimo, Proteins Struct. Funct. Genet. 44 (2001) 292]. The variable %determ2 is used to define two new variables, %determ2_1 and %determ2_2. Then the three variables %determ1, %determ2_1 and %determ2_2 are used to construct a 3-dimensional variable space. Each protein is represented by a point in this variable space. The points corresponding to proteins from the α, β, α+β and α/β structural classes fall into different areas of this variable space. In order to give a quantitative assessment of our clustering on the selected proteins, Fisher's discriminant algorithm is used. Numerical results indicate that the discriminant accuracies are very high and satisfactory.
Coordinated analysis of age, sex, and education effects on change in MMSE scores.
Piccinin, Andrea M; Muniz-Terrera, Graciela; Clouston, Sean; Reynolds, Chandra A; Thorvaldsson, Valgeir; Deary, Ian J; Deeg, Dorly J H; Johansson, Boo; Mackinnon, Andrew; Spiro, Avron; Starr, John M; Skoog, Ingmar; Hofer, Scott M
2013-05-01
We describe and compare the expected performance trajectories of older adults on the Mini-Mental State Examination (MMSE) across six independent studies from four countries, in the context of a collaborative network of longitudinal studies of aging. A coordinated analysis approach is used to compare patterns of change conditional on sample composition differences related to age, sex, and education. Such coordination accelerates evaluation of particular hypotheses. In particular, we focus on the effect of educational attainment on cognitive decline. Regular and Tobit mixed models were fit to MMSE scores from each study separately. The effects of age, sex, and education were examined based on more than one centering point. Findings were relatively consistent across studies. On average, MMSE scores were lower for older individuals and declined over time. Education predicted MMSE score but, with two exceptions, was not associated with decline in MMSE over time. A straightforward association between educational attainment and rate of cognitive decline was not supported. Thoughtful consideration is needed when synthesizing evidence across studies, as the methodologies adopted and sample characteristics, such as educational attainment, invariably differ.
Dynamic exposure model analysis of continuous laser direct writing in Polar-coordinate
Zhang, Shan; Lv, Yingjun; Mao, Wenjie
2018-01-01
In order to accurately predict continuous laser direct writing quality in polar coordinates, we take into consideration the photoresist's absorption of beam energy, the Gaussian profile of the writing beam, and the dynamic exposure process, and establish a dynamic exposure model to describe the influence of the tangential velocity of the normally incident facular center and the laser power on the line width and sidewall angle. Numerical simulation results indicate that, while the writing velocity remains unchanged, the line width and sidewall angle both increase as the laser power increases; while the laser power remains unchanged, the line width and sidewall angle both decrease as the writing velocity increases. At the same time, the line profile in the exposure section is asymmetric, and the center of the line is shifted slightly toward the polar-coordinate origin relative to the facular center. It is therefore necessary to choose the right writing velocity and laser power to obtain the ideal line profile. The model overcomes the shortcomings of traditional models, which can only predict line width or estimate the profile of the written line in the absence of photoresist absorption, and can be considered an effective analysis method for optimizing the fabrication parameters of laser direct writing.
Sensor Failure Detection of FASSIP System using Principal Component Analysis
Sudarno; Juarsa, Mulya; Santosa, Kussigit; Deswandri; Sunaryo, Geni Rina
2018-02-01
In the Fukushima Daiichi nuclear reactor accident in Japan, the damage to the core and pressure vessel was caused by the failure of the active cooling system (the diesel generators were inundated by the tsunami). Research on passive cooling systems for nuclear power plants is therefore performed to improve the safety aspects of nuclear reactors. The FASSIP system (Passive System Simulation Facility) is an installation used to study the characteristics of passive cooling systems at nuclear power plants. The accuracy of sensor measurement in the FASSIP system is essential, because it is the basis for determining the characteristics of a passive cooling system. In this research, a sensor failure detection method for the FASSIP system is developed, so that indications of sensor failure can be detected early. The method used is Principal Component Analysis (PCA) to reduce the dimension of the sensor data, with the Squared Prediction Error (SPE) and Hotelling's T² statistic as criteria for detecting sensor failure indications. The results show that the PCA method is capable of detecting the occurrence of a failure at any sensor.
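A minimal sketch of PCA-based sensor fault detection with the SPE and Hotelling's T² criteria, on synthetic data. The sensor count, noise levels, and fault magnitude below are illustrative assumptions, not FASSIP values, and real monitoring would add control limits derived from the training distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: 5 correlated "sensors" in normal operation (synthetic)
n, k = 200, 2                      # samples, retained principal components
latent = rng.normal(size=(n, 2))
W = rng.normal(size=(2, 5))
X = latent @ W + 0.05 * rng.normal(size=(n, 5))

mu, sd = X.mean(0), X.std(0)
Z = (X - mu) / sd
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
P = Vt[:k].T                       # loadings of the retained PCs
lam = (S[:k] ** 2) / (n - 1)       # variances of the retained PCs

def spe_t2(x):
    """SPE (squared residual) and Hotelling's T^2 for one new sample."""
    z = (x - mu) / sd
    t = z @ P                      # scores in the PCA subspace
    resid = z - t @ P.T            # part not explained by the model
    return resid @ resid, np.sum(t ** 2 / lam)

# A normal sample vs. one with an offset (failed) sensor
normal = latent[0] @ W + 0.05 * rng.normal(size=5)
faulty = normal.copy()
faulty[3] += 5.0                   # simulated sensor failure
spe_ok, t2_ok = spe_t2(normal)
spe_bad, t2_bad = spe_t2(faulty)
print(spe_ok, spe_bad)             # the faulty sample breaks the correlation
```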
A meta-analysis of executive components of working memory.
Nee, Derek Evan; Brown, Joshua W; Askren, Mary K; Berman, Marc G; Demiralp, Emre; Krawitz, Adam; Jonides, John
2013-02-01
Working memory (WM) enables the online maintenance and manipulation of information and is central to intelligent cognitive functioning. Much research has investigated executive processes of WM in order to understand the operations that make WM "work." However, there is yet little consensus regarding how executive processes of WM are organized. Here, we used quantitative meta-analysis to summarize data from 36 experiments that examined executive processes of WM. Experiments were categorized into 4 component functions central to WM: protecting WM from external distraction (distractor resistance), preventing irrelevant memories from intruding into WM (intrusion resistance), shifting attention within WM (shifting), and updating the contents of WM (updating). Data were also sorted by content (verbal, spatial, object). Meta-analytic results suggested that rather than dissociating into distinct functions, 2 separate frontal regions were recruited across diverse executive demands. One region was located dorsally in the caudal superior frontal sulcus and was especially sensitive to spatial content. The other was located laterally in the midlateral prefrontal cortex and showed sensitivity to nonspatial content. We propose that dorsal-"where"/ventral-"what" frameworks that have been applied to WM maintenance also apply to executive processes of WM. Hence, WM can largely be simplified to a dual selection model.
Principal Component Analysis of Process Datasets with Missing Values
Kristen A. Severson
2017-07-01
Datasets with missing values, arising from causes such as sensor failure, inconsistent sampling rates, and merging data from different systems, are common in the process industry. Methods for handling missing data typically operate during data pre-processing, but can also occur during model building. This article considers missing data within the context of principal component analysis (PCA), a method originally developed for complete data that has widespread industrial application in multivariate statistical process control. Due to the prevalence of missing data and the success of PCA for handling complete data, several PCA algorithms that can act on incomplete data have been proposed. Here, algorithms for applying PCA to datasets with missing values are reviewed. A case study is presented to demonstrate the performance of the algorithms, and suggestions are made with respect to choosing which algorithm is most appropriate for particular settings. An alternating algorithm based on the singular value decomposition achieved the best results in the majority of test cases involving process datasets.
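One of the approaches reviewed, alternating imputation based on the singular value decomposition, can be sketched as follows. This is a bare-bones version on synthetic, noise-free, low-rank data; production algorithms add convergence tests, rank selection, and noise handling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Low-rank process data with ~10% of entries missing (synthetic)
n, p, r = 100, 8, 2
X_true = rng.normal(size=(n, r)) @ rng.normal(size=(r, p))
mask = rng.random((n, p)) < 0.10          # True where a value is missing
X = X_true.copy()
X[mask] = np.nan

# Alternating/iterative SVD imputation:
# fill missing entries, fit a rank-r model, refill from the model, repeat
filled = np.where(mask, np.nanmean(X, axis=0), X)   # start from column means
for _ in range(200):
    U, S, Vt = np.linalg.svd(filled, full_matrices=False)
    approx = (U[:, :r] * S[:r]) @ Vt[:r]            # rank-r reconstruction
    filled[mask] = approx[mask]                     # update only missing cells

err = np.abs(filled[mask] - X_true[mask]).max()
print(err)   # far below the initial column-mean imputation error
```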
Finite element elastic-plastic analysis of LMFBR components
Levy, A.; Pifko, A.; Armen, H. Jr.
1978-01-01
The present effort involves the development of computationally efficient finite element methods for accurately predicting the isothermal elastic-plastic three-dimensional response of thick and thin shell structures subjected to mechanical and thermal loads. This work will be used as the basis for further development of analytical tools to be used to verify the structural integrity of liquid metal fast breeder reactor (LMFBR) components. The methods presented here have been implemented into the three-dimensional solid element module (HEX) of the Grumman PLANS finite element program. These methods include the use of optimal stress points as well as a variable number of stress points within an element. This allows monitoring the stress history at many points within an element and hence provides an accurate representation of the elastic-plastic boundary using a minimum number of degrees of freedom. Also included is an improved thermal stress analysis capability in which the temperature variation and corresponding thermal strain variation are represented by the same functional form as the displacement variation. Various problems are used to demonstrate these improved capabilities. (Auth.)
Qi, Yuan
2000-01-01
In this thesis, we propose two new machine learning schemes, a subband-based Independent Component Analysis scheme and a hybrid Independent Component Analysis/Support Vector Machine scheme, and apply...
Alakent, Burak; Doruker, Pemra; Camurdan, Mehmet C
2004-09-08
Time series analysis is applied to the collective coordinates obtained from principal component analysis of independent molecular dynamics simulations of alpha-amylase inhibitor tendamistat and the immunity protein of colicin E7, based on the C-alpha coordinate history. Even though the principal component directions obtained for each run are considerably different, the dynamics information obtained from these runs is surprisingly similar in terms of time series models and parameters. There are two main differences in the dynamics of the two proteins: the higher density of low frequencies and the larger step sizes for the interminima motions of colicin E7 compared with those of alpha-amylase inhibitor, which may be attributed to the higher number of residues of colicin E7 and/or the structural differences of the two proteins. The cumulative density function of the low frequencies in each run conforms to the expectations from normal mode analysis. When different runs of alpha-amylase inhibitor are projected onto the same set of eigenvectors, it is found that principal components obtained from a certain conformational region of a protein have moderate explanatory power in other conformational regions, and the local minima are similar to a certain extent, while the heights of the energy barriers between the minima change significantly. As a final remark, time series analysis tools are further exploited in this study with the motive of explaining the equilibrium fluctuations of proteins. Copyright 2004 American Institute of Physics
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy
Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.jp; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro [Department of Radiation Oncology and Image-applied Therapy, Kyoto University, 54 Shogoin-Kawaharacho, Sakyo, Kyoto 606-8507 (Japan)
2016-09-15
Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
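The ANOVA-based estimator for the balanced one-factor random-effects model described above can be sketched as follows. The simulated setup errors and the true systematic and random standard deviations are assumptions of the example, not values from the note.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated setup errors under a one-factor random-effects model:
#   x_ij = mu + b_i + e_ij,  b_i ~ N(0, Sigma^2) systematic (per patient),
#                            e_ij ~ N(0, sigma^2) random (per fraction)
m, n = 200, 5                 # patients, fractions per patient
mu, Sigma, sigma = 1.0, 2.0, 1.5
b = rng.normal(0, Sigma, size=(m, 1))
x = mu + b + rng.normal(0, sigma, size=(m, n))

# ANOVA-based variance-component estimates
grand = x.mean()
msb = n * ((x.mean(axis=1) - grand) ** 2).sum() / (m - 1)               # between patients
msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (m * (n - 1))  # within patients
Sigma_hat = np.sqrt(max((msb - msw) / n, 0.0))   # systematic component
sigma_hat = np.sqrt(msw)                         # random component
print(Sigma_hat, sigma_hat)   # estimates close to the true (2.0, 1.5)
```

Subtracting MSW before dividing by n is what removes the random-error contamination that makes the conventional (per-patient-mean SD) estimator overestimate the systematic component in hypofractionated, small-n settings.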
Coordinated clinical and financial analysis as a powerful tool to influence vendor pricing.
Logan, Catherine A; Wu, Roger Y; Mulley, Debra; Smith, Paul C; Schwaitzberg, Steven D
2010-01-01
As costs continue to outpace reimbursements, hospital administrators and clinicians face increasing pressure to justify new capital purchases. Massachusetts Health Care Reform has added further economic challenges for Disproportionate Share Hospitals (DSH), as resources formerly available to treat the uninsured have been redirected. In this challenging climate, many hospitals still lack a standardized process for technology planning and/or vendor negotiation. The purpose of this study was to determine whether a simple, coordinated clinical and financial analysis of a technology, Endoscopic Carpal Tunnel Release (ECTR), is sufficient to impact vendor pricing at Cambridge Health Alliance (CHA), a DSH in Cambridge, Massachusetts. This case study addressed the topic of technology adoption, a complex decision-making process every hospital administration faces. Taking note of other hospitals' approaches to instilling a strategic management culture, CHA combined a literature review on clinical outcomes with a financial analysis of profitability. Clinical effectiveness was evaluated through a literature review. The financial analysis was based on a retrospective inquiry into the fixed and variable costs, reimbursement rates, actual payer mix, and profitability of adopting ECTR over open carpal tunnel release at CHA. This clinical and financial analysis was then shared with the vendor. The literature review revealed that although there are short-term benefits to ECTR, there is little to no difference in long-term outcomes to justify a calculated incremental loss of $91.49 in revenue per case. Sharing this analysis with the vendor resulted in a 30% price reduction. A revised cost analysis demonstrated a $53.51 incremental gain in revenue per case. CHA has since elected to offer ECTR to its patients. Smaller hospital systems often have modest leverage in vendor negotiations. Our results suggest that the development of adoption criteria and an evidence
Trimming of mammalian transcriptional networks using network component analysis
Liao James C
2010-10-01
Background: Network Component Analysis (NCA) has been used to deduce the activities of transcription factors (TFs) from gene expression data and the TF-gene binding relationship. However, the TF-gene interaction varies in different environmental conditions and tissues, but such information is rarely available and cannot be predicted simply by motif analysis. Thus, it is beneficial to identify key TF-gene interactions under the experimental condition based on transcriptome data. Such information would be useful in identifying key regulatory pathways and gene markers of TFs in further studies. Results: We developed an algorithm to trim network connectivity such that the important regulatory interactions between the TFs and the genes were retained and the regulatory signals were deduced. Theoretical studies demonstrated that the regulatory signals were accurately reconstructed even in the case where only three independent transcriptome datasets were available. At least 80% of the main target genes were correctly predicted in the extreme condition of high noise level and small number of datasets. Our algorithm was tested with transcriptome data taken from mice under rapamycin treatment. The initial network topology from the literature contains 70 TFs, 778 genes, and 1423 edges between the TFs and genes. Our method retained 1074 edges (i.e. 75% of the original edge number) and identified 17 TFs as being significantly perturbed under the experimental condition. Twelve of these TFs are involved in MAPK signaling or myeloid leukemia pathways defined in the KEGG database, or are known to physically interact with each other. Additionally, four of these TFs, which are Hif1a, Cebpb, Nfkb1, and Atf1, are known targets of rapamycin. Furthermore, the trimmed network was able to predict Eno1 as an important target of Hif1a; this key interaction could not be detected without trimming the regulatory network. Conclusions: The advantage of our new algorithm
Optimal Coordinated Strategy Analysis for the Procurement Logistics of a Steel Group
Lianbo Deng
2014-01-01
This paper focuses on the optimization of an internal coordinated procurement logistics system in a steel group and the decision on the coordinated procurement strategy by minimizing the logistics costs. Considering the coordinated procurement strategy and the procurement logistics costs, the aim of the optimization model was to maximize the degree of quality satisfaction and to minimize the procurement logistics costs. The model was transformed into a single-objective model and solved using a simulated annealing algorithm. In the algorithm, the supplier of each subsidiary was selected according to the evaluation result for independent procurement. Finally, the effect of different parameters on the coordinated procurement strategy was analysed. The results showed that the coordinated strategy can clearly save procurement costs; that the strategy becomes more cooperative when the quality requirement is less strict; and that the coordination costs have a strong effect on the coordinated procurement strategy.
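A bare-bones simulated annealing loop for a supplier-selection decision of this flavour might look as follows. The cost structure and the coordination discount are invented for illustration and are not the paper's model.

```python
import math
import random

random.seed(7)

# Toy coordinated-procurement decision (illustrative only): each of 6
# subsidiaries picks one of 4 suppliers; a discount mimics coordination
# savings when neighbouring subsidiaries share a supplier.
n_subs, n_sup = 6, 4
unit_cost = [[random.uniform(10, 20) for _ in range(n_sup)] for _ in range(n_subs)]

def total_cost(choice):
    cost = sum(unit_cost[i][choice[i]] for i in range(n_subs))
    # assumed coordination discount for adjacent subsidiaries sharing a supplier
    cost -= sum(2.0 for i in range(n_subs - 1) if choice[i] == choice[i + 1])
    return cost

def anneal(T0=10.0, cooling=0.9995, steps=20000):
    cur = [random.randrange(n_sup) for _ in range(n_subs)]
    cur_cost, T = total_cost(cur), T0
    best, best_cost = cur[:], cur_cost
    for _ in range(steps):
        cand = cur[:]
        cand[random.randrange(n_subs)] = random.randrange(n_sup)   # neighbour move
        delta = total_cost(cand) - cur_cost
        if delta < 0 or random.random() < math.exp(-delta / T):    # Metropolis rule
            cur, cur_cost = cand, cur_cost + delta
            if cur_cost < best_cost:
                best, best_cost = cur[:], cur_cost
        T *= cooling
    return best, best_cost

best, best_cost = anneal()
print(best, round(best_cost, 2))
```

The cooling schedule lets the search accept cost-increasing moves early (exploration) and become greedy late, which is what allows it to escape local optima created by the coordination term.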
Analysis of Minor Component Segregation in Ternary Powder Mixtures
Asachi Maryam
2017-01-01
In many powder handling operations, inhomogeneity in powder mixtures caused by segregation can have a significant adverse impact on the quality as well as the economics of production. Segregation of a minor component consisting of a highly active substance can have serious deleterious effects; an example is the segregation of enzyme granules in detergent powders. In this study, the effects of particle properties and bulk cohesion on the segregation tendency of the minor component are analysed. The minor component is made sticky while not adversely affecting the flowability of the samples. The extent of segregation is evaluated using image processing of photographic records taken from the front face of the heap after the pouring process. The optimum average sieve cut size of the components, for which segregation can be reduced, is reported. It is also shown that the extent of segregation is significantly reduced by applying a thin layer of liquid to the surfaces of the minor component, promoting an ordered mixture.
Structure of the alexithymic brain: A parametric coordinate-based meta-analysis.
Xu, Pengfei; Opmeer, Esther M; van Tol, Marie-José; Goerlich, Katharina S; Aleman, André
2018-04-01
Alexithymia refers to deficiencies in identifying and expressing emotions. This might be related to changes in structural brain volumes, but its neuroanatomical basis remains uncertain as studies have shown heterogeneous findings. Therefore, we conducted a parametric coordinate-based meta-analysis. We identified seventeen structural neuroimaging studies (including a total of 2586 individuals with different levels of alexithymia) investigating the association between gray matter volume and alexithymia. Volumes of the left insula, left amygdala, orbital frontal cortex and striatum were consistently smaller in people with high levels of alexithymia. These areas are important for emotion perception and emotional experience. Smaller volumes in these areas might lead to deficiencies in appropriately identifying and expressing emotions. These findings provide the first quantitative integration of results pertaining to the structural neuroanatomical basis of alexithymia. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
A. Ball
Overview: From a technical perspective, CMS has been in the “beam operation” state since 6th November. The detector is fully closed with all components operational and the magnetic field is normally at the nominal 3.8 T. The UXC cavern is normally closed with the radiation veto set. Access to UXC is now only possible during downtimes of the LHC. Such accesses must be carefully planned, documented and carried out in agreement with CMS Technical Coordination, Experimental Area Management, LHC programme coordination and the CCC. Material flow in and out of UXC is now strictly controlled. Access to USC remains possible at any time, although, for safety reasons, it is necessary to register with the shift crew in the control room before going down. It is obligatory for all material leaving UXC to pass through the underground buffer zone for RP scanning, database entry and appropriate labeling for traceability. Technical Coordination (notably Stephane Bally and Christoph Schaefer), the shift crew and run ...
Coordinated Analysis 101: A Joint Training Session Sponsored by LPI and ARES/JSC
Draper, D. S.; Treiman, A. H.
2017-01-01
The Lunar and Planetary Institute (LPI) and the Astromaterials Research and Exploration Science (ARES) Division, part of the Exploration Integration and Science Directorate at NASA Johnson Space Center (JSC), co-sponsored a training session in November 2016 for four early-career scientists in the techniques of coordinated analysis. Coordinated analysis refers to the approach of systematically performing high-resolution and high-precision analytical studies on astromaterials, particularly the very small particles typical of recent and near-future sample return missions such as Stardust, Hayabusa, Hayabusa2, and OSIRIS-REx. A series of successive analytical steps is chosen to be performed on the same particle, as opposed to separate subsections of a sample, in such a way that the initial steps do not compromise the results from later steps in the sequence. The data from the entire series can then be integrated for these individual specimens, revealing important insights obtainable no other way. ARES/JSC scientists have played a leading role in the development and application of this approach for many years. Because the coming years will bring new sample collections from these and other planned NASA and international exploration missions, it is timely to begin disseminating specialized techniques for the study of small and precious astromaterial samples. As part of the Cooperative Agreement between NASA and the LPI, this training workshop was intended as the first in a series of similar training exercises that the two organizations will jointly sponsor in the coming years. These workshops will span the range of analytical capabilities and sample types available at ARES/JSC in the Astromaterials Research and Astromaterials Acquisition and Curation Offices. Here we summarize the activities and participants in this initial training.
Manojlovic, D; Lenhardt, L; Milićević, B; Antonov, M; Miletic, V; Dramićanin, M D
2015-10-09
Colour changes in Gradia Direct™ composite after immersion in tea, coffee, red wine, Coca-Cola, Colgate mouthwash, and distilled water were evaluated using principal component analysis (PCA) and the CIELAB colour coordinates. The reflection spectra of the composites were used as input data for the PCA. The output data (scores and loadings) provided information about the magnitude and origin of the surface reflection changes after exposure to the staining solutions. The reflection spectra of the stained samples generally exhibited lower reflection in the blue spectral range, which manifested as a lower blue content in the samples' colour. Both analyses demonstrated the high staining abilities of tea, coffee, and red wine, which produced total colour changes of 4.31, 6.61, and 6.22, respectively, according to the CIELAB analysis. PCA revealed subtle changes in the reflection spectra of composites immersed in Coca-Cola, demonstrating Coca-Cola's ability to stain the composite to a small degree.
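The CIELAB total colour change reported above is, in its simplest (CIE76) form, the Euclidean distance between L*a*b* coordinates before and after staining. A minimal sketch with invented coordinates (not the study's measured data):

```python
import math

def delta_e(lab1, lab2):
    """CIE76 total colour difference between two CIELAB coordinates."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Illustrative L*, a*, b* values (assumptions, not the study's data)
baseline = (75.0, 1.0, 18.0)
after_coffee = (70.5, 2.8, 22.3)
after_water = (74.8, 1.1, 18.2)

print(round(delta_e(baseline, after_coffee), 2))  # large shift -> visible staining
print(round(delta_e(baseline, after_water), 2))   # small shift -> little staining
```

A ΔE*ab above roughly 3.3 is often cited as clinically perceptible, which is why the tea, coffee, and wine values of 4.31-6.61 indicate noticeable staining.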
2013-11-01
At the request of the Member States, the IAEA coordinates research into subjects of common interest in the context of the peaceful application of nuclear technology. The coordinated research projects (CRPs) are intended to promote knowledge and technology transfer between Member States and are largely focused on subjects of prime interest to the international nuclear community. This report presents the results of a CRP carried out between 2002 and 2007 on the subject of swelling clays proposed for use as a component in the engineered barrier system (EBS) of the multibarrier concept for disposal of radioactive waste. In 2002, under the auspices of the IAEA, a number of Member States came together to form a Network of Centres of Excellence on Training in and Demonstration of Waste Disposal Technologies in Underground Research Facilities (URF Network). This network identified the general subject of the application of high swelling clays to seal repositories for radioactive waste, with specific emphasis on the isolation of high level radioactive waste from the biosphere, as being suitable for a CRP. Existing concepts for geological repositories for high level radioactive waste and spent nuclear fuel require the use of EBSs to ensure effective isolation of the radioactive waste. There are two major materials proposed for use in the EBS, swelling clay based materials and cementitious/concrete materials. These materials will be placed between the perimeter of the excavation and the waste container to fill the existing gap and ensure isolation of the waste within the canister (also referred to as a container in some EBS concepts) by supporting safety through retardation and confinement. Cementitious materials are industrially manufactured to consistent standards and are readily available in most locations and therefore their evaluation is of less value to Member States than that of swelling clays. There exists a considerable range of programme development regarding
Analysis on the dynamic error for optoelectronic scanning coordinate measurement network
Shi, Shendong; Yang, Linghui; Lin, Jiarui; Guo, Siyang; Ren, Yongjie
2018-01-01
Large-scale dynamic three-dimensional coordinate measurement techniques are in great demand in equipment manufacturing. Noted for its advantages of high accuracy, scale expandability and multitask parallel measurement, the optoelectronic scanning measurement network has received close attention. It is widely used in the joining of large components, spacecraft rendezvous and docking simulation, digital shipbuilding and automated guided vehicle navigation. At present, most research on optoelectronic scanning measurement networks is focused on static measurement capability, and research on dynamic accuracy is insufficient. Limited by the measurement principle, the dynamic error is non-negligible and restricts applications. The workshop measurement and positioning system is a representative system that can, in theory, realize the dynamic measurement function. In this paper we conduct deep research on the sources of dynamic error and divide them into two parts: phase error and synchronization error. A dynamic error model is constructed. Based on this model, a simulation of the dynamic error is carried out. The dynamic error is quantified, and patterns of volatility and periodicity have been found. The dynamic error characteristics are shown in detail. The research results lay a foundation for further accuracy improvement.
Tiilikainen, J; Tilli, J-M; Bosund, V; Mattila, M; Hakkarainen, T; Airaksinen, V-M; Lipsanen, H
2007-01-01
Two novel genetic algorithms implementing principal component analysis and an adaptive nonlinear fitness-space-structure technique are presented and compared with conventional algorithms in x-ray reflectivity analysis. Principal component analysis based on Hessian or interparameter covariance matrices is used to rotate the coordinate frame. The nonlinear adaptation applies nonlinear estimates to reshape the probability distribution of the trial parameters. The simulated x-ray reflectivity of a realistic model of a periodic nanolaminate structure was used as a test case for the fitting algorithms. The novel methods converged significantly faster and stagnated less than conventional non-adaptive genetic algorithms. The covariance approach needs no additional curve calculations compared with conventional methods, and it had better convergence properties than the computationally expensive Hessian approach. These new algorithms can also be applied to other fitting problems where tight interparameter dependence is present.
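The covariance-based coordinate rotation the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy population of trial parameter vectors is invented for the example, and only the frame rotation (not the full genetic algorithm) is shown.

```python
import numpy as np

def pca_rotation(samples):
    """Return the eigenvector matrix of the interparameter covariance,
    used to rotate the fitting coordinate frame so the new axes align
    with the principal directions of parameter coupling."""
    cov = np.cov(samples, rowvar=False)     # interparameter covariance
    eigvals, eigvecs = np.linalg.eigh(cov)  # symmetric matrix -> eigh
    return eigvecs

# Toy population of trial parameter vectors with strong coupling
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 1))
samples = np.hstack([base, 2.0 * base + 0.1 * rng.normal(size=(200, 1))])

R = pca_rotation(samples)
rotated = samples @ R                       # decorrelated coordinates
cov_rot = np.cov(rotated, rowvar=False)
print(abs(cov_rot[0, 1]) < 1e-8)            # off-diagonal coupling removed
```

In the rotated frame the genetic operators act on nearly independent coordinates, which is what reduces stagnation when parameters are tightly coupled.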
Organizational Design Analysis of Fleet Readiness Center Southwest Components Department
Montes, Jose F
2007-01-01
.... The purpose of this MBA Project is to analyze the proposed organizational design elements of the FRCSW Components Department that resulted from the integration of the Naval Aviation Depot at North Island (NADEP N.I...
New approach to accuracy verification of 3D surface models: An analysis of point cloud coordinates.
Lee, Wan-Sun; Park, Jong-Kyoung; Kim, Ji-Hwan; Kim, Hae-Young; Kim, Woong-Chul; Yu, Chin-Ho
2016-04-01
The precision of two types of surface digitization devices, i.e., a contact probe scanner and an optical scanner, and the trueness of two types of stone replicas, i.e., one without an imaging powder (SR/NP) and one with an imaging powder (SR/P), were evaluated using a computer-aided analysis. A master die was fabricated from stainless steel. Ten impressions were taken, and ten stone replicas were prepared from Type IV stone (Fujirock EP, GC, Leuven, Belgium). The precision of two types of scanners was analyzed using the root mean square (RMS), measurement error (ME), and limits of agreement (LoA) at each coordinate. The trueness of the stone replicas was evaluated using the total deviation. A Student's t-test was applied to compare the discrepancies between the CAD-reference-models of the master die (m-CRM) and point clouds for the two types of stone replicas (α=.05). The RMS values for the precision were 1.58, 1.28, and 0.98μm along the x-, y-, and z-axes in the contact probe scanner and 1.97, 1.32, and 1.33μm along the x-, y-, and z-axes in the optical scanner, respectively. A comparison with m-CRM revealed a trueness of 7.10μm for SR/NP and 8.65μm for SR/P. The precision at each coordinate (x-, y-, and z-axes) was revealed to be higher than the one assessed in the previous method (overall offset differences). A comparison between the m-CRM and 3D surface models of the stone replicas revealed a greater dimensional change in SR/P than in SR/NP. Copyright © 2015 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.
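The per-axis RMS figures reported above can be computed from repeated scan deviations as in this sketch; the deviation values are hypothetical, not the study's data.

```python
import numpy as np

def per_axis_rms(deviations):
    """RMS of scan deviations evaluated separately per coordinate axis.

    deviations : (n_points, 3) array of x-, y-, z-differences (in um)
    between repeated scans and their reference coordinates.
    """
    return np.sqrt(np.mean(deviations ** 2, axis=0))

# Hypothetical deviations for a handful of measured points (um)
dev = np.array([
    [ 1.5, -1.2,  0.9],
    [-1.7,  1.3, -1.0],
    [ 1.6, -1.3,  1.0],
])
rms_x, rms_y, rms_z = per_axis_rms(dev)
print(round(rms_x, 2), round(rms_y, 2), round(rms_z, 2))
```

Evaluating the RMS per coordinate, rather than as a single overall offset, is what lets the study report separate x-, y- and z-axis precision for each scanner.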
Analysis of the frequency components of X-ray images
Matsuo, Satoru; Komizu, Mitsuru; Kida, Tetsuo; Noma, Kazuo; Hashimoto, Keiji; Onishi, Hideo; Masuda, Kazutaka
1997-01-01
We examined the relation between the frequency components of x-ray images of the chest and phalanges and the read sizes used for digitizing. Images of the chest and phalanges were radiographed using three types of screens and films, and the noise images in the background density were digitized with a drum scanner at varying read sizes. The frequency components of these images were evaluated by applying a two-dimensional Fourier transform to obtain the power spectrum and signal-to-noise ratio (SNR). After applying a low-pass filter with varying cut-off frequencies to the power spectrum, we also examined the frequency components of the images in terms of the normalized mean square error (NMSE) between the inverse-Fourier-transformed image and the original image. Results showed that the frequency components extended to 2.0 cycles/mm for the chest image and 6.0 cycles/mm for the phalanges. Therefore, read sizes of 200 μm and 50 μm must be used for the chest and phalangeal images, respectively, in order to digitize these images without loss of their frequency components. (author)
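The pipeline the abstract describes (Fourier power spectrum, low-pass cut-off, NMSE against the original image) can be sketched roughly as follows; the random image here is a stand-in for the digitized noise images, not the study's data.

```python
import numpy as np

def lowpass_nmse(image, cutoff_frac):
    """Low-pass filter an image in the Fourier domain and return the
    normalized mean square error (NMSE) against the original.

    cutoff_frac : cut-off radius as a fraction of the Nyquist frequency.
    """
    F = np.fft.fftshift(np.fft.fft2(image))
    ny, nx = image.shape
    y, x = np.ogrid[:ny, :nx]
    r = np.hypot(y - ny // 2, x - nx // 2)          # radial frequency
    F_cut = np.where(r <= cutoff_frac * min(ny, nx) / 2, F, 0)
    recon = np.fft.ifft2(np.fft.ifftshift(F_cut)).real
    return np.sum((image - recon) ** 2) / np.sum(image ** 2)

rng = np.random.default_rng(1)
img = rng.normal(size=(64, 64))                     # noise-image stand-in
# NMSE grows as the cut-off frequency is lowered
print(lowpass_nmse(img, 0.9) < lowpass_nmse(img, 0.3))
```

Sweeping the cut-off and watching where the NMSE becomes negligible is one way to locate the highest frequency component the read size must preserve.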
2018-03-01
This publication is a compilation of the main results and findings of an IAEA coordinated research project (CRP). In particular, it discusses an innovative variation of neutron activation analysis (NAA) known as large sample NAA (LSNAA). There is no other way to measure the bulk mass fractions of the elements present in a large sample (up to kilograms in mass) non-destructively. Examples amenable to LSNAA include irregularly shaped archaeological artefacts, excavated rock samples, large samples of assorted ore, and finished products, such as nuclear reactor components. The CRP focused primarily on the application of LSNAA in the areas of archaeology and geology; however, it was also open to further exploration in other areas, such as industry and the life sciences, as well as in basic research. The CRP contributed to establishing the validation of the methodology and, in particular, provided an opportunity for developing trained manpower. The specific objectives of this CRP were to: i) validate and optimize the experimental procedures for LSNAA applications in archaeology and geology; ii) identify the needs for development or upgrade of the neutron irradiation facility for irradiation of large samples; iii) develop and standardize data acquisition and data analysis systems; iv) harmonize and standardize data collection from facilities with similar instrumentation for further analysis and benchmarking. The advantages of LSNAA applications, their limitations, and the scientific and technological requirements are described in this publication, which serves as a reference of interest not only to NAA experts, research reactor personnel, and those considering this technique, but also to various stakeholders and users such as researchers, industrialists, environmental and legal experts, and administrators.
Abstract Interfaces for Data Analysis Component Architecture for Data Analysis Tools
Barrand, G; Dönszelmann, M; Johnson, A; Pfeiffer, A
2001-01-01
The fast turnover of software technologies, in particular in the domain of interactivity (covering user interface and visualisation), makes it difficult for a small group of people to produce complete and polished software-tools before the underlying technologies make them obsolete. At the HepVis '99 workshop, a working group has been formed to improve the production of software tools for data analysis in HENP. Beside promoting a distributed development organisation, one goal of the group is to systematically design a set of abstract interfaces based on using modern OO analysis and OO design techniques. An initial domain analysis has come up with several categories (components) found in typical data analysis tools: Histograms, Ntuples, Functions, Vectors, Fitter, Plotter, Analyzer and Controller. Special emphasis was put on reducing the couplings between the categories to a minimum, thus optimising re-use and maintainability of any component individually. The interfaces have been defined in Java and C++ and i...
Numerical analysis of residual stresses reconstruction for axisymmetric glass components
Tao, Bo; Xu, Shuang; Yao, Honghui
2018-01-01
A non-destructive method for measuring the 3D stress state in a glass cylinder using photoelasticity is analyzed by simulation in this research. Based on simulated stresses in a glass cylinder, the intensity of the cylinder in a circular polariscope can be calculated by Jones calculus. The isoclinic angle and optical retardation can then be obtained by a six-step phase-shifting technique. From the isoclinic angle and optical retardation, the magnitude and distribution of the residual stresses inside the glass cylinder can be reconstructed in a cylindrical coordinate system. Comparing the reconstructed stresses with the numerically simulated stresses, the results verify that this non-destructive method can be used to reconstruct the 3D stresses, although there are some mismatches in the axial, radial and circumferential stresses.
Stefania Salvatore
2016-07-01
Abstract Background: Wastewater-based epidemiology (WBE) is a novel approach in drug use epidemiology which aims to monitor the extent of use of various drugs in a community. In this study, we investigate functional principal component analysis (FPCA) as a tool for analysing WBE data and compare it to traditional principal component analysis (PCA) and to wavelet principal component analysis (WPCA), which is more flexible temporally. Methods: We analysed temporal wastewater data from 42 European cities collected daily over one week in March 2013. The main temporal features of ecstasy (MDMA) were extracted using FPCA with both Fourier and B-spline basis functions and three different smoothing parameters, along with PCA and WPCA with different mother wavelets and shrinkage rules. The stability of FPCA was explored through bootstrapping and analysis of sensitivity to missing data. Results: The first three principal components (PCs), functional principal components (FPCs) and wavelet principal components (WPCs) explained 87.5-99.6% of the temporal variation between cities, depending on the choice of basis and smoothing. The extracted temporal features from PCA, FPCA and WPCA were consistent. FPCA using a Fourier basis and common-optimal smoothing was the most stable and least sensitive to missing data. Conclusion: FPCA is a flexible and analytically tractable method for analysing temporal changes in wastewater data, and is robust to missing data. WPCA did not reveal any rapid temporal changes in the data not captured by FPCA. Overall the results suggest FPCA with Fourier basis functions and a common-optimal smoothing parameter as the most accurate approach when analysing WBE data.
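The core computation behind such a study, extracting principal components from a cities-by-days matrix and reporting the explained temporal variance, can be illustrated with plain PCA via SVD. The weekly load pattern below is invented, and no basis expansion or smoothing is applied as in true FPCA; this is only a sketch of the variance-decomposition step.

```python
import numpy as np

def explained_variance(X, k=3):
    """Fraction of temporal variance captured by the first k principal
    components of a (cities x days) data matrix."""
    Xc = X - X.mean(axis=0)                    # centre each day
    _, s, _ = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2
    return var[:k].sum() / var.sum()

# Hypothetical loads for 42 cities over 7 days, built from two dominant
# weekly patterns plus noise (mimicking a strong weekend peak)
rng = np.random.default_rng(2)
weekend = np.array([0, 0, 0, 0, 1, 3, 2], dtype=float)
trend = np.linspace(0, 1, 7)
X = (rng.gamma(2.0, 1.0, size=(42, 1)) * weekend
     + rng.normal(size=(42, 1)) * trend
     + 0.1 * rng.normal(size=(42, 7)))
print(explained_variance(X, k=3) > 0.9)
```

When a few temporal patterns dominate, as the abstract reports for the MDMA data, the first handful of components absorbs nearly all the between-city variation.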
Independent component analysis of edge information for face recognition
Karande, Kailash Jagannath
2013-01-01
The book presents research work on face recognition using edge information as features for face recognition with ICA algorithms. The independent components are extracted from edge information. These independent components are used with classifiers to match the facial images for recognition purpose. In their study, authors have explored Canny and LOG edge detectors as standard edge detection methods. Oriented Laplacian of Gaussian (OLOG) method is explored to extract the edge information with different orientations of Laplacian pyramid. Multiscale wavelet model for edge detection is also propos
Eliminating the Influence of Harmonic Components in Operational Modal Analysis
Jacobsen, Niels-Jørgen; Andersen, Palle; Brincker, Rune
2007-01-01
structures, in contrast, are inherently subject to deterministic forces due to the rotating parts in the machinery. These forces are seen as harmonic components in the responses, and their influence should be eliminated before extracting the modes in their vicinity. This paper describes a new method based on the well-known Enhanced Frequency Domain Decomposition (EFDD) technique for eliminating these harmonic components in the modal parameter extraction process. For assessing the quality of the method, various experiments were carried out where the results were compared with those obtained with pure stochastic...
Moyon, N. Shaemningwar; Gashnga, Pynsakhiat Miki; Phukan, Smritakshi; Mitra, Sivaprasad, E-mail: smitra@nehu.ac.in
2013-06-27
Highlights: • Correlation of lumazine photophysics with the multiparametric Kamlet–Taft equation. • Solvent basicity (β) contributes the most to the hydrogen bonding (HB) effect. • HB interaction occurs at the N1 and N3 protons in the S₀ and S₁ states, respectively. • IRC calculations for different tautomerization processes in both the S₀ and S₁ states. • The process related to riboflavin biosynthesis is thermodynamically feasible. - Abstract: The photophysical properties and tautomerization behavior of neutral lumazine were studied by fluorescence spectroscopy and density functional theory calculations. A quantitative estimation of the contributions from different solvatochromic parameters, such as solvent polarizability (π*), hydrogen bond donation (α) and hydrogen bond accepting (β) ability of the solvent, was made using linear free energy relationships based on the Kamlet–Taft equation. The analysis reveals that the hydrogen bond acceptance ability of the solvent is the most important parameter characterizing the excited-state behavior of lumazine. Theoretical calculations predict an extensive charge redistribution of lumazine upon excitation, corresponding to the N3 and N1 proton dissociation sites by solvents in the ground and excited states, respectively. Comparison of the S₀ and S₁ potential energy curves constructed for several water-mediated tautomerization processes by intrinsic reaction coordinate analysis of the lumazine–H₂O cluster shows that the (3,2) and (1,8) hydrogen migrations are the most favorable processes upon excitation.
Okano, Yasushi; Ohira, Hiroaki
1998-08-01
In the early stage of a sodium leak event in a liquid metal fast breeder reactor (LMFBR), liquid sodium flows out from a piping, and ignition and combustion of liquid sodium droplets might occur under certain environmental conditions. Compressible forced air flow, diffusion of chemical species, liquid sodium droplet behavior, chemical reactions and thermodynamic properties should be evaluated, considering the physical dependences and numerical connections among them, when analyzing the combustion of a liquid sodium droplet. A direct numerical simulation code was developed for numerical analysis of a liquid sodium droplet in forced-convection air flow. The code is named COMET ('Sodium Droplet COmbustion Analysis METhodology using Direct Numerical Simulation in 3-Dimensional Coordinate'). The extended MAC method was used to calculate the compressible forced air flow, and counter-diffusion among chemical species is also calculated. Transport models for mass and energy between the droplet and the surrounding atmospheric air were developed. Equation-solving methods were used for computing the multiphase equilibrium between sodium and air, and the thermodynamic properties of the chemical species were evaluated using the dynamic theory of gases. The combustion of a single spherical liquid sodium droplet in a forced-convection, constant-velocity, uniform air flow was numerically simulated using COMET. The change of droplet diameter with time agreed closely with the d²-law of droplet combustion theory. Spatial distributions of the combustion rate and heat generation, and the formation, decomposition and movement of chemical species, were analyzed. Simulating liquid sodium droplet combustion with COMET enables quantitative calculation of heat generation and chemical species formation in spray combustion for a wide range of environmental conditions. (author)
Goal scoring in soccer: A polar coordinate analysis of motor skills used by Lionel Messi
Marta eCastañer
2016-05-01
Soccer research has traditionally focused on technical and tactical aspects of team play, but few studies have analyzed motor skills in individual actions, such as goal scoring. The objective of this study was to investigate how Lionel Messi, one of the world's top soccer players, uses his motor skills and laterality in individual attacking actions resulting in a goal. We analyzed 103 goals scored by Messi over a decade in three competitions: La Liga (n = 74), Copa del Rey (n = 8), and the UEFA Champions League (n = 21). We used an ad hoc observation instrument (OSMOS-soccer player) comprising 10 criteria and 50 categories; polar coordinate analysis, a powerful data reduction technique, revealed significant associations for body part and orientation, foot contact zone, turn direction, and locomotion. No significant associations were observed for pitch area or interaction with opponents. Our analysis confirms significant associations between different aspects of motor skill use by Messi immediately before scoring, namely use of the lower limbs, foot contact zones, turn direction, use of the wings, and orientation of the body to move towards the goal. Studies of motor skills in soccer could shed light on the qualities that make certain players unique.
Vertical Coordination in Organic Food Chains: A Survey Based Analysis in France, Italy and Spain
Alessia Cavaliere
2016-06-01
The paper analyses the characteristics of vertical relationships in organic supply chains, with a specific focus on the processing and retailing sectors. The analysis covers different regions of the EU Mediterranean area. Data were collected through interviews using an ad hoc questionnaire. The survey was based on a sample of 306 firms, including processors and retailers. The analysis revealed that a key concern for firms processing organic products is guaranteeing the safety and quality levels of the products. The main tools for implementing quality management are the adoption of specific production regulations and quality controls. The premium price most frequently applied by processors ranges from 10% to 40%, and similar values were found for retailers. The diffusion of supply contracts enables vertical coordination between agriculture and processing firms in the organic supply chains. The main distribution channels for the processing firms are shops specialising in organic products, direct sales and supermarkets.
Abstract interfaces for data analysis - component architecture for data analysis tools
Barrand, G.; Binko, P.; Doenszelmann, M.; Pfeiffer, A.; Johnson, A.
2001-01-01
The fast turnover of software technologies, in particular in the domain of interactivity (covering user interface and visualisation), makes it difficult for a small group of people to produce complete and polished software-tools before the underlying technologies make them obsolete. At the HepVis'99 workshop, a working group has been formed to improve the production of software tools for data analysis in HENP. Beside promoting a distributed development organisation, one goal of the group is to systematically design a set of abstract interfaces based on using modern OO analysis and OO design techniques. An initial domain analysis has come up with several categories (components) found in typical data analysis tools: Histograms, Ntuples, Functions, Vectors, Fitter, Plotter, analyzer and Controller. Special emphasis was put on reducing the couplings between the categories to a minimum, thus optimising re-use and maintainability of any component individually. The interfaces have been defined in Java and C++ and implementations exist in the form of libraries and tools using C++ (Anaphe/Lizard, OpenScientist) and Java (Java Analysis Studio). A special implementation aims at accessing the Java libraries (through their Abstract Interfaces) from C++. The authors give an overview of the architecture and design of the various components for data analysis as discussed in AIDA
Design and analysis of automobile components using industrial procedures
Kedar, B.; Ashok, B.; Rastogi, Nisha; Shetty, Siddhanth
2017-11-01
Today's automobiles depend on mechanical systems that are crucial to the movement and safety of the vehicle. Various safety systems, such as the antilock braking system (ABS) and passenger restraint systems, have been developed to ensure the safety of the passenger in the event of a collision, head-on or otherwise. Manufacturers also want their customers to have a good driving experience, and thus aim to improve the handling and drivability of the vehicle; electronic systems such as cruise control and active suspension are designed to ensure passenger comfort. Finally, to ensure optimal and safe driving, the various components of a vehicle must be manufactured using state-of-the-art processes and must be tested and inspected with utmost care, so that any defective component is caught at the very beginning of the supply chain. Processes that can improve the lifetime of their respective components are therefore in high demand, and much research and development is devoted to them. With a solid research base, these processes can be applied more versatilely to different components, made of different materials and under different input conditions. This increases the profitability of the process and its value to the industry.
Analysis of soft rock mineral components and roadway failure mechanism
陈杰
2001-01-01
The mineral components and microstructure of soft rock sampled from the roadway floor in Xiagou pit are determined by X-ray diffraction and scanning electron microscopy. Combined with tests of the expansion and water-softening properties of the soft rock, the roadway failure mechanism is analyzed, and reasonable principles for the repair and support of the roadway are put forward.
Analysis Of The Executive Components Of The Farmer Field School ...
The purpose of this study was to investigate the executive components of the Farmer Field School (FFS) project in Uromieh county of West Azerbaijan Province, Iran. All the members and non-members (as control group) of FFS pilots in Uromieh county (N= 98) were included in the study. Data were collected by use of ...
Principal Components Analysis of Job Burnout and Coping ...
The key components of job burnout were feelings of disgust, insomnia, headaches, weight loss or gain, a feeling of omniscience, pain of unexplained origin, hopelessness, agitation and workaholism, while the factor structure of coping strategies comprised developing a realistic self-image, retaining hope, asking for help ...
Phenolic components, antioxidant activity, and mineral analysis of ...
In addition to being consumed as food, caper (Capparis spinosa L.) fruits are also used in folk medicine to treat inflammatory disorders, such as rheumatism. C. spinosa L. is rich in phenolic compounds, making it increasingly popular because of its components' potential benefits to human health. We analyzed a number of ...
Mohammadfam, Iraj; Bastani, Susan; Esaghi, Mahbobeh; Golmohamadi, Rostam; Saee, Ali
2015-03-01
The purpose of this study was to examine the cohesion status of coordination within response teams in the emergency response team (ERT) of a refinery. Cohesion indicators from social network analysis (SNA; density, degree centrality, reciprocity, and transitivity) were utilized to examine the coordination of the response teams as a whole network. The ERT in this case study included seven teams consisting of 152 members. The required data were collected through structured interviews and were analyzed using the UCINET 6.0 Social Network Analysis Program. The results showed a relatively low number of triple connections, poor coordination with key members, and a high level of mutual relations in a network with low density, all implying low cohesion of coordination in the ERT. The results showed that SNA provides a quantitative and logical approach for examining the coordination status among response teams, and it also gives managers and planners an opportunity to gain a clear understanding of that status. The research concluded that fundamental efforts were needed to improve the situation.
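The four whole-network cohesion indicators named above (density, reciprocity, transitivity, degree centrality) can be computed directly from an adjacency matrix. This sketch uses a tiny invented team network rather than the study's UCINET data, and implements common textbook definitions of the directed-network measures.

```python
import numpy as np

def cohesion_metrics(A):
    """Whole-network cohesion indicators from a directed adjacency
    matrix A (A[i, j] = 1 if member i nominates member j)."""
    n = A.shape[0]
    ties = A.sum()
    density = ties / (n * (n - 1))
    reciprocity = (A * A.T).sum() / ties       # returned ties / all ties
    paths2 = (A @ A).sum() - np.trace(A @ A)   # i->j->k with i != k
    closed = (A * (A @ A)).sum()               # ...that also have i->k
    transitivity = closed / paths2
    out_centrality = A.sum(axis=1) / (n - 1)   # normalized out-degree
    return density, reciprocity, transitivity, out_centrality

# Hypothetical 4-member team: mutual tie 0<->1, plus 2->0 and 3->1
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
d, r, t, c = cohesion_metrics(A)
print(round(d, 2), round(r, 2), round(t, 2))   # 0.33 0.5 0.0
```

A pattern like this one, with some mutual ties but no closed triads and low density, is exactly the "high reciprocity, low transitivity, low density" profile the study interprets as weak cohesion.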
A component analysis of the generation and release of isometric force in Parkinson's disease.
Jordan, N; Sagar, H J; Cooper, J A
1992-01-01
Paradigms of isometric force control allow study of the generation and release of movement in the absence of complications due to disordered visuomotor coordination. The onset and release of isometric force in Parkinson's disease (PD) were studied, using computerised determinants of the latency of response and the rate of force generation and release. Components of isometric force control were related to measures of cognitive, affective and clinical motor disability. The effects of treatment were dete...
Analysis and test of insulated components for rotary engine
Badgley, Patrick R.; Doup, Douglas; Kamo, Roy
1989-01-01
The direct-injection stratified-charge (DISC) rotary engine, while attractive for aviation applications due to its light weight, multifuel capability, and potentially low fuel consumption, has until now required a bulky and heavy liquid-cooling system. NASA-Lewis has undertaken the development of a cooling system-obviating, thermodynamically superior adiabatic rotary engine employing state-of-the-art thermal barrier coatings to thermally insulate engine components. The thermal barrier coating material for the cast aluminum, stainless steel, and ductile cast iron components was plasma-sprayed zirconia. DISC engine tests indicate effective thermal barrier-based heat loss reduction, but call for superior coefficient-of-thermal-expansion matching of materials and better tribological properties in the coatings used.
COMPONENTS OF THE UNEMPLOYMENT ANALYSIS IN CONTEMPORARY ECONOMIES
Ion Enea-SMARANDACHE
2010-03-01
Unemployment is a persistent phenomenon in most countries of the world, whether advanced or developing economies, and its implications and consequences are increasingly complex, so that, in practice, the fight against unemployment has become a fundamental objective of economic policy. In this context, the authors set out the essential components of unemployment analysis, with the aim of identifying measures and instruments to counteract it.
Analysis of Femtosecond Timing Noise and Stability in Microwave Components
2011-01-01
To probe chemical dynamics, X-ray pump-probe experiments trigger a change in a sample with an optical laser pulse, followed by an X-ray probe. At the Linac Coherent Light Source (LCLS), timing differences between the optical pulse and the X-ray probe have been observed with an accuracy as low as 50 femtoseconds. This sets a lower bound on the number of frames one can arrange over a time scale to recreate a 'movie' of the chemical reaction. The timing system is based on phase measurements of signals corresponding to the two laser pulses; these measurements are made using a double-balanced mixer for detection. To increase the accuracy of the system, this paper studies parameters affecting mixer-based phase detection systems, such as signal input power, noise levels and temperature drift, and the effect these parameters have on components such as the mixers, splitters, amplifiers, and phase shifters. Noise data taken with a spectrum analyzer show that splitters based on ferrite cores perform with less noise than strip-line splitters. The data also show that noise in specific mixers does not correspond with the changes in sensitivity per input power level. Temperature drift is seen to lie between 1 and 27 fs/°C for all of the components tested. Results show that components using more metallic conductor tend to exhibit more noise as well as more temperature drift. The scale of these effects is large enough that specific care should be given when choosing components and designing the housing of high-precision microwave mixing systems for use in detection systems such as the LCLS. With these improvements, the timing accuracy can be improved beyond what is currently possible.
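The mixer-based phase detection described above maps a baseband voltage to a timing offset. Here is a minimal sketch under an idealized mixer model V = k·sin(Δφ) near quadrature; the 2.856 GHz reference frequency and 0.5 V/rad phase sensitivity are illustrative assumptions, not values taken from the paper.

```python
import math

def timing_error(v_out, k_phi, freq_hz):
    """Convert a double-balanced mixer's baseband output voltage into a
    timing offset, assuming operation near quadrature where
    V = k_phi * sin(delta_phi)."""
    delta_phi = math.asin(v_out / k_phi)        # phase offset, radians
    return delta_phi / (2 * math.pi * freq_hz)  # timing offset, seconds

# Assumed 2.856 GHz reference and 0.5 V/rad sensitivity:
# a 1 mV output corresponds to roughly a hundred femtoseconds
dt = timing_error(1e-3, 0.5, 2.856e9)
print(round(dt * 1e15, 1))                      # offset in femtoseconds
```

The conversion makes the paper's concern concrete: at GHz reference frequencies, millivolt-scale noise or drift at the mixer output already translates into timing errors of tens to hundreds of femtoseconds.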
Analysis of the Components of Economic Potential of Agricultural Enterprises
Vyacheslav Skobara; Volodymyr Podkopaev
2014-01-01
Problems of the efficiency of enterprises are increasingly associated with the use of the economic potential of the company. This article addresses the structural components of the economic potential of an agricultural enterprise, and the development and substantiation of a model of economic potential that takes due account of the peculiarities of agricultural production. Based on the study of various approaches to the structure of potential, a definition is established of production, labour, financial and man...
Artemyeva, Z.; Žigová, Anna; Kirillova, N.; Šťastný, Martin; Holubík, O.; Podrázský, V.
2017-01-01
Roč. 63, č. 13 (2017), s. 1838-1851 ISSN 0365-0340 Institutional support: RVO:67985831 Keywords : land use * aggregate stability * organo-clay complexes * dynamic light scattering * phase analysis light scattering * color coordinates Subject RIV: DF - Soil Science OBOR OECD: Soil science Impact factor: 2.137, year: 2016
Cassandra L. Brown
2012-01-01
Social activity is typically viewed as part of an engaged lifestyle that may help mitigate the deleterious effects of advanced age on cognitive function. As such, social activity has been examined in relation to cognitive abilities later in life. However, longitudinal evidence for this hypothesis thus far remains inconclusive. The current study sought to clarify the relationship between social activity and cognitive function over time using a coordinated data analysis approach across four longitudinal studies. A series of multilevel growth models with social activity included as a covariate is presented. Four domains of cognitive function were assessed: reasoning, memory, fluency, and semantic knowledge. Results suggest that baseline social activity is related to some, but not all, cognitive functions. Baseline social activity levels failed to predict rate of decline in most cognitive abilities. Changes in social activity were not consistently associated with cognitive functioning. Our findings do not provide consistent evidence that changes in social activity correspond to immediate benefits in cognitive functioning, except perhaps for verbal fluency.
Global Analysis of miRNA Gene Clusters and Gene Families Reveals Dynamic and Coordinated Expression
Li Guo
2014-01-01
To further understand the potential expression relationships of miRNAs in miRNA gene clusters and gene families, a global analysis was performed on 4 paired tumor (breast cancer) and adjacent normal tissue samples using deep sequencing datasets. The compositions of miRNA gene clusters and families are not random, and clustered and homologous miRNAs may have close relationships with overlapping miRNA species. Members of a given miRNA group always had varying expression levels, and some even showed large expression divergence. Despite this dynamic expression and individual difference, these miRNAs always showed consistent or similar deregulation patterns. This consistent deregulation may contribute to dynamic and coordinated interaction between different miRNAs in the regulatory network. Further, we found that clustered or homologous miRNAs that were also identified as sense and antisense miRNAs showed larger expression divergence. miRNA gene clusters and families play important biological roles, and their specific distribution and expression further enrich and ensure a flexible and robust regulatory network.
Yousefi Nooraie, Reza; Khan, Sobia; Gutberg, Jennifer; Baker, G Ross
2018-01-01
Although implementation models broadly recognize the importance of social relationships, our knowledge about applying social network analysis (SNA) to formative, process, and outcome evaluations of health system interventions is limited. We explored applications of adopting an SNA lens to inform implementation planning, engagement and execution, and evaluation. We used Health Links, a province-wide program in Canada aiming to improve care coordination among multiple providers of high-needs patients, as an example of a health system intervention. At the planning phase, an SNA can depict the structure, network influencers, and composition of clusters at various levels. It can inform the engagement and execution by identifying potential targets (e.g., opinion leaders) and by revealing structural gaps and clusters. It can also be used to assess the outcomes of the intervention, such as its success in increasing network connectivity; changing the position of certain actors; and bridging across specialties, organizations, and sectors. We provided an overview of how an SNA lens can shed light on the complexity of implementation along the entire implementation pathway, by revealing the relational barriers and facilitators, the application of network-informed and network-altering interventions, and testing hypotheses on network consequences of the implementation.
Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao
2015-01-01
Due to advancements in sensor technology, growing volumes of large medical image data make it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. At the same time, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing irrelevant features are sparse clustering algorithms that use a lasso-type penalty to select features. However, the accuracy of clustering with a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which are difficult to determine in practice. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms current sparse clustering algorithms in image cluster analysis. PMID:26196383
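The dimension-reduction step this abstract builds on (PCA on high-dimensional features, with informative features identified through their loadings) can be sketched with plain NumPy. The synthetic data below is a stand-in, not the paper's FPCA or its TCGA images:

```python
import numpy as np

# Synthetic stand-in for image-derived features: 60 samples, 50 features,
# with cluster signal confined to the first 3 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 50))
X[:30, :3] += 4.0                   # first 30 samples form a shifted cluster

Xc = X - X.mean(axis=0)             # center before PCA
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T              # sample coordinates on the top-2 PCs

# Features with the largest loadings on PC1 carry the cluster separation.
top_feature = int(np.argmax(np.abs(Vt[0])))
```

A clustering algorithm run on `scores` rather than on `X` then works in the low-dimensional space, which is the role FPCA plays in the paper.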
Probabilistic structural analysis of aerospace components using NESSUS
Shiao, Michael C.; Nagpal, Vinod K.; Chamis, Christos C.
1988-01-01
Probabilistic structural analysis of a Space Shuttle main engine turbopump blade is conducted using the computer code NESSUS (numerical evaluation of stochastic structures under stress). The goal of the analysis is to derive probabilistic characteristics of the blade response given probabilistic descriptions of uncertainties in blade geometry, material properties, and temperature and pressure distributions. Probability densities are derived for critical blade responses. Risk assessment and failure life analysis are conducted assuming different failure models.
Compressive Online Robust Principal Component Analysis with Multiple Prior Information
Van Luong, Huynh; Deligiannis, Nikos; Seiler, Jürgen
...low-rank components. Unlike conventional batch RPCA, which processes all the data directly, our method considers a small set of measurements taken per data vector (frame). Moreover, our method incorporates multiple prior information signals, namely previous reconstructed frames, to improve the separation and thereafter update the prior information for the next frame. Using experiments on synthetic data, we evaluate the separation performance of the proposed algorithm. In addition, we apply the proposed algorithm to online video foreground and background separation from compressive measurements. The results show...
Seismic fragility analysis of structural components for HFBR facilities
Park, Y.J.; Hofmayer, C.H.
1992-01-01
The paper presents a summary of recently completed seismic fragility analyses of the HFBR facilities. Based on a detailed review of past PRA studies, various refinements were made to the strength and ductility evaluation of structural components. Available laboratory test data were analysed to evaluate the formulations used to predict the ultimate strength and deformation capacities of steel, reinforced concrete and masonry structures. Biases and uncertainties were evaluated within the framework of the fragility evaluation methods widely accepted in the nuclear industry. A few examples of fragility calculations are also included to illustrate the use of the presented formulations.
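The lognormal fragility model that typically underlies such evaluations can be sketched in a few lines; the median capacity and log-standard deviation below are illustrative values, not figures from the HFBR study:

```python
import math

def fragility(a, a_median, beta):
    """Lognormal fragility curve: P(component failure | seismic demand a),
    given median capacity a_median and composite log-standard deviation beta
    (both values below are made up for illustration)."""
    return 0.5 * (1.0 + math.erf(math.log(a / a_median) / (beta * math.sqrt(2.0))))

p_low = fragility(0.3, 0.6, 0.4)   # a 0.6 g-median component under 0.3 g demand
p_med = fragility(0.6, 0.6, 0.4)   # at the median capacity, P(failure) = 0.5
```

The "refinements to strength and ductility evaluation" in the abstract correspond to better estimates of `a_median` and `beta` for each component class.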
The ethical component of professional competence in nursing: an analysis.
Paganini, Maria Cristina; Yoshikawa Egry, Emiko
2011-07-01
The purpose of this article is to initiate a philosophical discussion about the ethical component of professional competence in nursing from the perspective of Brazilian nurses. Specifically, this article discusses professional competence in nursing practice in the Brazilian health context, based on two different conceptual frameworks. The first framework is derived from the idealistic and traditional approach while the second views professional competence through the lens of historical and dialectical materialism theory. The philosophical analyses show that the idealistic view of professional competence differs greatly from practice. Combining nursing professional competence with philosophical perspectives becomes a challenge when ideals are opposed by the reality and implications of everyday nursing practice.
Kang, Ho Yang [Korea Research Institute of Standards and Science, Daejeon (Korea, Republic of); Kim, Ki Bok [Chungnam National University, Daejeon (Korea, Republic of)
2003-06-15
In this study, acoustic emission (AE) signals due to surface cracking and moisture movement in flat-sawn boards of oak (Quercus variabilis) during drying under ambient conditions were analyzed and classified using principal component analysis. The AE signals corresponding to surface cracking showed higher peak amplitudes and peak frequencies, and shorter rise times, than those corresponding to moisture movement. To reduce the multicollinearity among AE features and to extract the significant AE parameters, correlation analysis was performed. Over 99% of the variance of the AE parameters could be accounted for by the first four principal components. The classification feasibility and success rate were investigated for two statistical classifiers, one having six independent variables (AE parameters) and the other six principal components. The statistical classifier using AE parameters showed a success rate of 70.0%, while the classifier using principal components showed a success rate of 87.5%, considerably higher than that of the classifier using AE parameters.
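The variance-accounting step reported here (checking how many principal components cover 99% of the variance of correlated features) can be sketched as follows; the synthetic feature table is a stand-in, not the paper's AE measurements:

```python
import numpy as np

# Synthetic stand-in for a table of correlated AE features (peak amplitude,
# peak frequency, rise time, ...): 6 features driven by 2 latent factors.
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 6))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 6))

Z = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize mixed-unit features
eigvals = np.linalg.eigvalsh(np.cov(Z.T))[::-1]  # eigenvalues, largest first
explained = np.cumsum(eigvals) / eigvals.sum()
k99 = int(np.searchsorted(explained, 0.99)) + 1  # PCs covering 99% of variance
```

Feeding the first `k99` component scores to a classifier, instead of the raw features, is the multicollinearity-reduction strategy the abstract describes.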
Strydom, Gerhard [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bostelmann, F. [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2015-09-01
The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of HTGR design and safety features with reliable high fidelity physics models and robust, efficient, and accurate codes. The predictive capability of coupled neutronics/thermal-hydraulics and depletion simulations for reactor design and safety analysis can be assessed with sensitivity analysis (SA) and uncertainty analysis (UA) methods. Uncertainty originates from errors in physical data, manufacturing uncertainties, modelling and computational algorithms. (The interested reader is referred to the large body of published SA and UA literature for a more complete overview of the various types of uncertainties, methodologies and results obtained). SA is helpful for ranking the various sources of uncertainty and error in the results of core analyses. SA and UA are required to address cost, safety, and licensing needs and should be applied to all aspects of reactor multi-physics simulation. SA and UA can guide experimental, modelling, and algorithm research and development. Current SA and UA rely either on derivative-based methods such as stochastic sampling methods or on generalized perturbation theory to obtain sensitivity coefficients. Neither approach addresses all needs. In order to benefit from recent advances in modelling and simulation and the availability of new covariance data (nuclear data uncertainties) extensive sensitivity and uncertainty studies are needed for quantification of the impact of different sources of uncertainties on the design and safety parameters of HTGRs. Only a parallel effort in advanced simulation and in nuclear data improvement will be able to provide designers with more robust and well validated calculation tools to meet design target accuracies. In February 2009, the Technical Working Group on Gas-Cooled Reactors (TWG-GCR) of the International Atomic Energy Agency (IAEA) recommended that the proposed Coordinated Research Program (CRP) on
The Blame Game: Performance Analysis of Speaker Diarization System Components
Huijbregts, M.A.H.; Wooters, Chuck
2007-01-01
In this paper we discuss the performance analysis of a speaker diarization system similar to the system that was submitted by ICSI to the NIST RT06s evaluation benchmark. The analysis, which is based on a series of oracle experiments, provides a good understanding of the performance of each system
Coordinated Pitch & Torque Control of Large-Scale Wind Turbine Based on Pareto Efficiency Analysis
Lin, Zhongwei; Chen, Zhenyu; Wu, Qiuwei
2018-01-01
For the existing pitch and torque control of the wind turbine generator system (WTGS), further development of coordinated control is necessary to improve effectiveness in practical applications. In this paper, the WTGS is modeled as a coupling combination of two subsystems: the generator torque control subsystem and the blade pitch control subsystem. Then, the pole positions in each control subsystem are adjusted coordinately to evaluate the controller participation and used as the objective of optimization. A two-level parameters-controllers coordinated optimization scheme is proposed and applied to optimize the controller coordination based on Pareto optimization theory. Three solutions are obtained through the optimization: the optimal torque solution, the optimal power solution, and a satisfactory solution. Detailed comparisons evaluate the performance of the three selected solutions...
Matyushenko, N.N.; Titov, Yu.G.
1982-01-01
Programs for generating atom coordinates and space symmetry groups in the form of equivalent point systems are presented. The programs for generation and coordinate output from on-line storage are written in FORTRAN for the ES computer. They may be used in laboratories specializing in the study of atomic structure and material properties, in colleges, and by specialists in other fields of physics and chemistry.
Moura, Felipe Arruda; van Emmerik, Richard E A; Santana, Juliana Exel; Martins, Luiz Eduardo Barreto; Barros, Ricardo Machado Leite de; Cunha, Sergio Augusto
2016-12-01
The purpose of this study was to investigate the coordination between teams' spread during football matches using cross-correlation and vector coding techniques. Using a video-based tracking system, we obtained the trajectories of 257 players during 10 matches. Each team's spread was calculated as a function of time. For a general description of coordination, we calculated the cross-correlation between the signals. Vector coding was used to identify the coordination patterns between teams during offensive sequences that ended in shots on goal or defensive tackles. Cross-correlation showed that opposing teams tend to present in-phase coordination with a short time lag. During offensive sequences, vector coding results showed that, although in-phase coordination dominated, other patterns were observed. We verified that during the early stages, offensive sequences ending in shots on goal present greater anti-phase and attacking-team-phase periods compared to sequences ending in tackles. The results suggest that the attacking team may seek to behave contrary to its opponent (or may lead the adversary's behaviour) at the beginning of the attacking play, with regard to the distribution strategy, to increase the chances of a shot on goal. The techniques allowed detection of the coordination patterns between teams, providing additional information about football dynamics and players' interaction.
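The two techniques named here can be sketched on toy signals; the spread series below are synthetic sinusoids with a built-in 1.5 s delay, not tracking data:

```python
import numpy as np

# Toy stand-in for the team-spread signals: team B follows team A by 1.5 s.
dt = 0.1
t = np.arange(0, 60, dt)
spread_a = np.sin(0.2 * t)
spread_b = np.sin(0.2 * (t - 1.5))

def corr_at_lag(a, b, lag):
    """Pearson correlation of a[t] with b[t + lag]; positive lag => b follows a."""
    if lag >= 0:
        x, y = a[:len(a) - lag], b[lag:]
    else:
        x, y = a[-lag:], b[:lag]
    return np.corrcoef(x, y)[0, 1]

lags = list(range(-30, 31))
r = [corr_at_lag(spread_a, spread_b, L) for L in lags]
best_lag = lags[int(np.argmax(r))] * dt        # expected near +1.5 s

# Vector coding: coupling angle of frame-to-frame changes of the two signals;
# angles near 45 degrees indicate in-phase coordination.
theta = np.degrees(np.arctan2(np.diff(spread_b), np.diff(spread_a)))
in_phase = np.mean((theta > 22.5) & (theta < 67.5))
```

The lag-scan gives the "short time lag" statistic, while binning `theta` into phase regions gives the in-phase/anti-phase proportions the vector coding analysis reports.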
Nesakumar, Noel; Baskar, Chanthini; Kesavan, Srinivasan; Rayappan, John Bosco Balaguru; Alwarappan, Subbiah
2018-05-22
The moisture content of beetroot varies during long-term cold storage. In this work, we propose a strategy to identify the moisture content and age of beetroot using principal component analysis coupled with Fourier transform infrared (FTIR) spectroscopy. Frequent FTIR measurements were recorded directly from the beetroot sample surface over a period of 34 days to analyse moisture content, employing attenuated total reflectance in the spectral ranges of 2614-4000 and 1465-1853 cm⁻¹ with a spectral resolution of 8 cm⁻¹. In order to estimate the transmittance peak height (Tp) and the area under the transmittance curve [Formula: see text] over the spectral ranges of 2614-4000 and 1465-1853 cm⁻¹, a Gaussian curve fitting algorithm was applied to the FTIR data. Principal component and nonlinear regression analyses were used for FTIR data analysis. Score plots over the ranges of 2614-4000 and 1465-1853 cm⁻¹ allowed beetroot quality discrimination. Beetroot quality predictive models were developed employing a biphasic dose-response function. Validation experiments confirmed that the accuracy of the beetroot quality predictive model reached 97.5%. This work shows that FTIR spectroscopy, in combination with principal component analysis and beetroot quality predictive models, could serve as an effective tool for discriminating moisture content in fresh, half-spoiled and completely spoiled beetroot samples and for providing status alerts.
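The Gaussian curve fitting step (extracting peak height and band area from a spectral window) can be sketched with SciPy; the synthetic band below only imitates the 1465-1853 cm⁻¹ window, and its position, height and width are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

# Synthetic transmittance band standing in for one FTIR spectral window.
x = np.linspace(1465.0, 1853.0, 200)
rng = np.random.default_rng(2)
y = gaussian(x, 0.8, 1650.0, 40.0) + 0.01 * rng.normal(size=x.size)

popt, _ = curve_fit(gaussian, x, y, p0=[1.0, 1600.0, 50.0])
amp, mu, sigma = popt
peak_height = amp                                # the transmittance peak height
area = amp * abs(sigma) * np.sqrt(2.0 * np.pi)   # closed-form area of a Gaussian
```

`peak_height` and `area` are then the two per-window descriptors that feed the principal component and regression analyses.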
Saneyoshi, Ayako; Michimata, Chikashi
2009-01-01
Participants performed two object-matching tasks for novel, non-nameable objects consisting of geons. For each original stimulus, two transformations were applied to create comparison stimuli. In the categorical transformation, a geon connected to geon A was moved to geon B. In the coordinate transformation, a geon connected to geon A was moved to…
Dynamic analysis and qualification test of nuclear components
Kim, B.K.; Lee, C.H.; Park, S.H.; Kim, Y.M.; Kim, B.S.; Kim, I.G.; Chung, C.W.; Kim, Y.M.
1981-01-01
This report contains studies on the dynamic characteristics of the Wolsung fuel rod and on the dynamic balancing of rotating machinery, carried out to evaluate the performance of nuclear reactor components. The study of the dynamic characteristics of the Wolsung fuel rod was carried out by both experimental and theoretical methods. Forced vibration testing of an actual Wolsung fuel rod using sine sweep and sine dwell excitation was conducted to find the dynamic and nonlinear characteristics of the fuel rod. The data obtained from the test were used to analyze the nonlinear impact characteristics of the fuel rod, which has a motion-constraint stop at the center of the rod. The parameters varied in the test were the input force level of the exciter, the clearance gap between the fuel rod and the motion constraints, and the frequencies. Test results were in good agreement with the analytical results.
Dissolution And Analysis Of Yellowcake Components For Fingerprinting UOC Sources
Hexel, Cole R.; Bostick, Debra A.; Kennedy, Angel K.; Begovich, John M.; Carter, Joel A.
2012-01-01
There are a number of chemical and physical parameters that might be used to help elucidate the ore body from which uranium ore concentrate (UOC) was derived. It is the variation in the concentration and isotopic composition of these components that can provide information as to the identity of the ore body from which the UOC was mined and the type of subsequent processing that has been undertaken. Oak Ridge National Laboratory (ORNL) in collaboration with Lawrence Livermore and Los Alamos National Laboratories is surveying ore characteristics of yellowcake samples from known geologic origin. The data sets are being incorporated into a national database to help in sourcing interdicted material, as well as aid in safeguards and nonproliferation activities. Geologic age and attributes from chemical processing are site-specific. Isotopic abundances of lead, neodymium, and strontium provide insight into the provenance of geologic location of ore material. Variations in lead isotopes are due to the radioactive decay of uranium in the ore. Likewise, neodymium isotopic abundances are skewed due to the radiogenic decay of samarium. Rubidium decay similarly alters the isotopic signature of strontium isotopic composition in ores. This paper will discuss the chemical processing of yellowcake performed at ORNL. Variations in lead, neodymium, and strontium isotopic abundances are being analyzed in UOC from two geologic sources. Chemical separation and instrumental protocols will be summarized. The data will be correlated with chemical signatures (such as elemental composition, uranium, carbon, and nitrogen isotopic content) to demonstrate the utility of principal component and cluster analyses to aid in the determination of UOC provenance.
Wang, Xin; Birch, Stephen; Zhu, Weiming; Ma, Huifen; Embrett, Mark; Meng, Qingyue
2016-10-12
Increases in health care utilization and costs, resulting from the rising prevalence of chronic conditions related to the aging population, are exacerbated by the high level of fragmentation that characterizes health care systems in China. There have been several pilot studies in China aimed at system-level care coordination and its impact on the full integration of the health care system, but little is known about their practical effects. Huangzhong County is one of the pilot study sites that introduced organizational integration (a dimension of integrated care) among health care institutions as a means to improve system-level care coordination. The purposes of this study are to examine the effect of organizational integration on system-level care coordination and to identify factors influencing care coordination, and hence full integration, of county health care systems in rural China. We chose Huangzhong and Hualong counties in Qinghai province as study sites, with only Huangzhong having implemented organizational integration. A mixed methods approach was used based on (1) document analysis and expert consultation to develop Best Practice intervention packages; (2) doctor questionnaires, identifying care coordination from the perspective of service provision. We measured service provision with a gap index, an overlap index and an over-provision index, by comparing observed performance with Best Practice; (3) semi-structured interviews with Chiefs of Medicine in each institution to identify barriers to system-level care coordination. Twenty-nine institutions (11 at county level, 6 at township level and 12 at village level) were selected, producing surveys with a total of 19 schizophrenia doctors, 23 diabetes doctors and 29 Chiefs of Medicine. There were more care discontinuities for both diabetes and schizophrenia in Huangzhong than in Hualong. Overall, all three index scores (measuring service gaps, overlaps and over-provision) showed similar tendencies for the two conditions
Liu, Xiao-Fang; Xue, Chang-Hu; Wang, Yu-Ming; Li, Zhao-Jie; Xue, Yong; Xu, Jie
2011-11-01
The present study investigates the feasibility of multi-element analysis for determining the geographical origin of the sea cucumber Apostichopus japonicus, and selects effective tracers for geographical origin assessment. The contents of the elements Al, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, As, Se, Mo, Cd, Hg and Pb in sea cucumber Apostichopus japonicus samples from seven places of geographical origin were determined by means of ICP-MS. The results were used to develop an element database. Cluster analysis (CA) and principal component analysis (PCA) were applied to differentiate the geographical origins. Three principal components, which accounted for over 89% of the total variance, were extracted from the standardized data. The results of Q-type cluster analysis showed that the 26 samples could be clustered reasonably into five groups, and the classification results were significantly associated with the marine distribution of the samples. CA and PCA were effective methods for element analysis of sea cucumber samples. The mineral element contents of sea cucumber Apostichopus japonicus samples were good chemical descriptors for differentiating their geographical origins.
Economic analysis of the cross-border coordination of operation in the European power system
Janssen, Tanguy
2014-01-01
The high-voltage electricity transmission networks are interconnected over most of the continents, but this is not the case for power system organizations. Indeed, as described with the concept of an integrated power system, the organization of these large networks is divided by several kinds of internal borders. In this context, the research object, the cross-border coordination of operation, is a set of coordination arrangements over internal borders between differing regulatory, technical and market designs. These arrangements can include, for instance, the well-known market couplings, cost-sharing agreements or common security assessments, among several other solutions. The existence and improvement of the cross-border coordination of operation can be beneficial to the whole integrated power system. This statement is verified in the European case as of 2012, when several regional and continental coordination arrangements were successfully implemented. In order to benefit from the European experience and contribute to supporting the European improvement process, this thesis investigates the cross-border coordination of operation in the European case from four angles of study. First, a modular framework is built to describe the existing solutions and the implementation choices from a regulatory point of view. Second, the thesis analyses the tools available to assess the impact of an evolution of the cross-border coordination. Third, the role of the European Union (EU) is described as critical both for the existing arrangements and for supporting the improvement process. The last angle of study focuses on two dimensions of the economic modes of coordination between transmission system operators. (author)
Clegg, Samuel M [Los Alamos National Laboratory; Barefield, James E [Los Alamos National Laboratory; Wiens, Roger C [Los Alamos National Laboratory; Sklute, Elizabeth [MT HOLYOKE COLLEGE; Dyare, Melinda D [MT HOLYOKE COLLEGE
2008-01-01
Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by chemical matrix effects. These matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, so that a series of calibration standards similar to the unknown can be employed. In this paper, three new Multivariate Analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.
Topochemical Analysis of Cell Wall Components by TOF-SIMS.
Aoki, Dan; Fukushima, Kazuhiko
2017-01-01
Time-of-flight secondary ion mass spectrometry (TOF-SIMS) is a recently developed analytical tool and a type of imaging mass spectrometry. TOF-SIMS provides mass spectral information with a lateral resolution on the order of submicrons and has widespread applicability. It is sometimes described as a surface analysis method requiring no sample pretreatment; however, several points need to be taken into account to make full use of its capabilities. In this chapter, we introduce methods for TOF-SIMS sample treatment, as well as basic knowledge of TOF-SIMS spectral and image data analysis for wood samples.
Aerothermoelastic analysis of panel flutter based on the absolute nodal coordinate formulation
Abbas, Laith K., E-mail: laithabbass@yahoo.com; Rui, Xiaoting, E-mail: ruixt@163.com [Nanjing University of Science and Technology, Institute of Launch Dynamics (China); Marzocca, Piergiovanni, E-mail: pmarzocc@clarkson.edu [Clarkson University, Mechanical and Aeronautical Engineering Department (United States)
2015-02-15
Panels of reentry vehicles are subjected to a wide range of flow conditions during ascent and reentry phases. The flow can vary from subsonic continuum flow to hypersonic rarefied flow with wide ranging dynamic pressure and associated aerodynamic heating. One of the main design considerations is the assurance of safety against panel flutter under flow conditions characterized by severe thermal environments. This paper deals with supersonic/hypersonic flutter analysis of panels exposed to a temperature field. A 3-D rectangular plate element of variable thickness based on the absolute nodal coordinate formulation (ANCF) has been developed for the structural model and subjected to an assumed thermal profile that can result from any residual heat seeping into the metallic panels through the thermal protection systems. A continuum mechanics approach for the definition of the elastic forces within the finite element is considered. Both shear strain and transverse normal strain are taken into account. The aerodynamic force is evaluated by considering the first-order piston theory to linearize the potential flow and is coupled with the structural model to account for pressure loading. A provision is made to take into account the effect of arbitrary flow directions with respect to the panel edges. Aerothermoelastic equations using ANCF are derived and solved numerically. Values of critical dynamic pressure are obtained by a modal approach, in which the mode shapes are obtained by ANCF. A detailed parametric study is carried out to observe the effects of different temperature loadings, flow angle directions, and aspect ratios on the flutter boundary.
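For reference, the first-order piston theory mentioned here is commonly written as follows; this is the standard textbook form, stated as a sketch rather than taken from the paper itself:

```latex
\Delta p(x,t)
= \rho_\infty a_\infty \left( \frac{\partial w}{\partial t}
   + V_\infty \frac{\partial w}{\partial x} \right)
= \frac{2 q_\infty}{M_\infty}
  \left( \frac{\partial w}{\partial x}
   + \frac{1}{V_\infty} \frac{\partial w}{\partial t} \right),
\qquad q_\infty = \tfrac{1}{2}\rho_\infty V_\infty^2
```

where \(w(x,t)\) is the transverse panel deflection and \(\rho_\infty\), \(a_\infty\), \(V_\infty\), \(M_\infty\) and \(q_\infty\) are the freestream density, speed of sound, velocity, Mach number and dynamic pressure. This linear aerodynamic load is what gets coupled to the ANCF structural model.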
Neural signatures of trust in reciprocity: A coordinate-based meta-analysis.
Bellucci, Gabriele; Chernyak, Sergey V; Goodyear, Kimberly; Eickhoff, Simon B; Krueger, Frank
2017-03-01
Trust in reciprocity (TR) is defined as the risky decision to invest valued resources in another party with the hope of mutual benefit. Several fMRI studies have investigated the neural correlates of TR in one-shot and multiround versions of the investment game (IG). However, an overall characterization of the underlying neural networks remains elusive. Here, a coordinate-based meta-analysis was employed (activation likelihood estimation method, 30 articles) to investigate consistent brain activations in each of the IG stages (i.e., the trust, reciprocity and feedback stage). Results showed consistent activations in the anterior insula (AI) during trust decisions in the one-shot IG and decisions to reciprocate in the multiround IG, likely related to representations of aversive feelings. Moreover, decisions to reciprocate also consistently engaged the intraparietal sulcus, probably involved in evaluations of the reciprocity options. On the contrary, trust decisions in the multiround IG consistently activated the ventral striatum, likely associated with reward prediction error signals. Finally, the dorsal striatum was found consistently recruited during the feedback stage of the multiround IG, likely related to reinforcement learning. In conclusion, our results indicate different neural networks underlying trust, reciprocity, and feedback learning. These findings suggest that although decisions to trust and reciprocate may elicit aversive feelings likely evoked by the uncertainty about the decision outcomes and the pressing requirements of social standards, multiple interactions allow people to build interpersonal trust for cooperation via a learning mechanism by which they arguably learn to distinguish trustworthy from untrustworthy partners. Hum Brain Mapp 38:1233-1248, 2017. © 2016 Wiley Periodicals, Inc.
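A minimal sketch of the activation likelihood estimation (ALE) idea behind such coordinate-based meta-analyses: each reported focus becomes a 3-D Gaussian "modeled activation" map, per-study maps take the maximum over foci, and studies combine by a union rule. The grid, foci and FWHM below are invented, and real ALE additionally includes null-distribution thresholding, which is omitted here:

```python
import numpy as np

# Coarse 3-D grid of voxel coordinates (mm), purely illustrative.
axis = np.arange(-20, 21, 2.0)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)

def ma_map(foci, fwhm=10.0):
    """Modeled-activation map for one study: max over Gaussian kernels."""
    sigma = fwhm / 2.3548  # convert FWHM to a standard deviation
    kernels = [np.exp(-np.sum((grid - np.asarray(f, float)) ** 2, axis=-1)
                      / (2.0 * sigma ** 2)) for f in foci]
    return np.max(kernels, axis=0)

# Two toy "studies" reporting nearby foci.
study_maps = [ma_map([(0, 0, 0), (10, 0, 0)]), ma_map([(2, 0, 0)])]
ale = 1.0 - np.prod([1.0 - m for m in study_maps], axis=0)  # union across studies
peak = grid[np.unravel_index(int(np.argmax(ale)), ale.shape)]
```

Voxels where foci from multiple studies overlap receive high ALE values, which is how "consistent activations" such as the anterior insula clusters are identified.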
Daniel, Michelle M; Ross, Paula; Stalmeijer, Renée E; de Grave, Willem
2018-01-01
Phenomenon: Interdisciplinary coteaching has become a popular pedagogic model in medical education, yet there is insufficient research to guide effective practices in this context. Coteaching relationships are not always effective, which has the potential to affect the student experience. The purpose of this study was to explore interdisciplinary coteaching relationships between a physician (MD) and social behavioral scientist (SBS) in an undergraduate clinical skills course. We aimed to gain an in-depth understanding of what teachers perceive as influencing the quality of relationships to begin to construct a framework for collaborative teaching in medical education. A qualitative study was conducted consisting of 12 semistructured interviews (6 MD and 6 SBS) and 2 monodisciplinary focus groups. Sampling was purposive and aimed at maximal variation from among 64 possible faculty. The data were analyzed using the constant comparative method to develop a grounded theory. Five major themes resulted from the analysis that outline a framework for interdisciplinary coteaching: respect, shared goals, shared knowledge and understanding, communication, and complementary pairings. Insights: The first 4 themes align with elements of relational coordination theory, an organizational theory of collaborative practice that describes how work roles interact. The complementary pairings extend this theory from work roles to individuals, with unique identities and personal beliefs and values about teaching. Prior studies on coteaching have not provided a clear linkage to theory. The conceptual framework helps suggest future directions for coteaching research and has practical implications for administrative practices and faculty development. These findings contribute to the sparse research in medical education on interdisciplinary coteaching relationships.
Mulyana, Cukup; Muhammad, Fajar; Saad, Aswad H.; Mariah; Riveli, Nowo
2017-03-01
The storage tank is the most critical component in an LNG regasification terminal. It carries a risk of failure and accident that impacts human health and the environment. Risk assessment is conducted to detect and reduce the risk of failure in the storage tank. The aim of this research is to determine and calculate the probability of failure in the LNG regasification unit, where failure is caused by Boiling Liquid Expanding Vapor Explosion (BLEVE) and jet fire in the LNG storage tank. The failure probability is determined using Fault Tree Analysis (FTA), and in addition the impact of the generated heat radiation is calculated. Fault trees for BLEVE and jet fire on the storage tank were constructed, yielding a failure probability of 5.63 × 10⁻¹⁹ for BLEVE and 9.57 × 10⁻³ for jet fire. The failure probability for jet fire is high enough that it needs to be reduced by customizing the PID scheme of the LNG regasification unit in pipeline number 1312 and unit 1. After customization, the failure probability is 4.22 × 10⁻⁶.
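The gate arithmetic behind an FTA probability like the ones above can be sketched in a few lines. This is a minimal illustration with hypothetical basic-event probabilities and a hypothetical tree structure, not the paper's actual fault tree; independence of basic events is assumed.

```python
def gate_and(*probs):
    """AND gate: all basic events must occur (independence assumed)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def gate_or(*probs):
    """OR gate: at least one basic event occurs (independence assumed)."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical basic-event probabilities for a jet-fire branch:
p_ignition = 1e-1     # ignition source present
p_leak = 1e-2         # flange leak
p_valve_fail = 5e-3   # relief-valve failure

# Jet fire requires a release (leak OR valve failure) AND an ignition source.
p_release = gate_or(p_leak, p_valve_fail)
p_jet_fire = gate_and(p_release, p_ignition)
print(f"{p_jet_fire:.3e}")  # 1.495e-03
```

Reducing any basic-event probability (e.g. via the PID customization the abstract describes) propagates directly through the gates to a lower top-event probability.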
Multilevel component analysis of time-resolved metabolic fingerprinting data
Jansen, J.J.; Hoefsloot, H.C.J.; Greef, J. van der; Timmerman, M.E.; Smilde, A.K.
2005-01-01
Genomics-based technologies in systems biology have gained a lot of popularity in recent years. These technologies generate large amounts of data. To obtain information from this data, multivariate data analysis methods are required. Many of the datasets generated in genomics are multilevel
A Principal Components Analysis of the Rathus Assertiveness Schedule.
Law, H. G.; And Others
1979-01-01
Investigated the adequacy of the Rathus Assertiveness Schedule (RAS) as a global measure of assertiveness. Analysis indicated that the RAS does not provide a unidimensional index of assertiveness, but rather measures a number of factors including situation-specific assertive behavior, aggressiveness, and a more general assertiveness. (Author)
Interoperability Assets for Patient Summary Components: A Gap Analysis.
Heitmann, Kai U; Cangioli, Giorgio; Melgara, Marcello; Chronaki, Catherine
2018-01-01
The International Patient Summary (IPS) standards aim to define the specifications for a minimal and non-exhaustive Patient Summary, which is specialty-agnostic and condition-independent, but still clinically relevant. Meanwhile, health systems are developing and implementing their own variations of a patient summary, while the eHealth Digital Services Infrastructure (eHDSI) initiative is deploying patient summary services across countries in Europe. In the spirit of co-creation, flexible governance, and continuous alignment advocated by eStandards, the Trillium-II initiative promotes adoption of the patient summary by engaging standards organizations and interoperability practitioners in a community of practice for digital health to share best practices, tools, data, specifications, and experiences. This paper compares operational aspects of patient summaries in 14 case studies in Europe, the United States, and across the world, focusing on how patient summary components are used in practice, to promote alignment and a joint understanding that will improve the quality of standards and lower the costs of interoperability.
Moon, Seong In; Cho, Il Je; Woo, Chang Su; Kim, Wan Doo
2011-01-01
Rubber components, which have been widely used in the automotive industry as anti-vibration components for many years, are subjected to fluctuating loads, often failing due to the nucleation and growth of defects or cracks. To prevent such failures, it is necessary to understand the fatigue failure mechanism of rubber materials and to evaluate the fatigue life of rubber components. The objective of this study is to develop a durability analysis process for vulcanized rubber components that can predict fatigue life at the initial product design step. A method for determining the nonlinear material constants for FE analysis was proposed. Also, to investigate the applicability of commonly used damage parameters, fatigue tests and corresponding finite element analyses were carried out, and normal and shear strain were proposed as the fatigue damage parameters for rubber components. Fatigue analysis of automotive rubber components was performed and the durability analysis process was reviewed.
Sabaghnia Naser
2013-01-01
Multi-environment trials have significant main effects and a significant multiplicative genotype × environment (GE) interaction effect. Principal coordinate analysis (PCOA) offers a more appropriate statistical analysis for dealing with such situations than traditional statistical methods. Eighteen bread wheat genotypes were grown in four semi-arid regions over three growing seasons to study the GE interaction and yield stability, and the grain yield data obtained were analyzed using PCOA. Combined analysis of variance indicated that all of the studied effects, including the main effects of genotype and environment as well as the GE interaction, were highly significant. According to grand means and total mean yield, test environments were grouped into two main groups: high mean yield (H) and low mean yield (L). There were five H test environments and six L test environments, which were analyzed in sequential cycles. For each cycle, both a scatter point diagram and a minimum spanning tree plot were drawn. The most stable genotypes identified under the dynamic stability concept, based on the minimum spanning tree plots and centroid distances, were G1 (3310.2 kg ha⁻¹) and G5 (3065.6 kg ha⁻¹), which could therefore be recommended for unfavorable or poor conditions. Also, genotypes G7 (3047.2 kg ha⁻¹) and G16 (3132.3 kg ha⁻¹) were located several times in the vertex positions of high cycles according to the principal coordinates analysis. The principal coordinates analysis provided useful and interesting ways of investigating the GE interaction of bread wheat genotypes. Finally, the results of the principal coordinates analysis in general confirmed the breeding value of the genotypes obtained on the basis of the yield stability evaluation.
PV System Component Fault and Failure Compilation and Analysis.
Klise, Geoffrey Taylor; Lavrova, Olga; Gooding, Renee Lynne
2018-02-01
This report describes data collection and analysis of solar photovoltaic (PV) equipment events, which consist of faults and failures that occur during the normal operation of a distributed PV system or PV power plant. We present summary statistics from locations where maintenance data is being collected at various intervals, as well as reliability statistics gathered from that data, consisting of fault/failure distributions and repair distributions for a wide range of PV equipment types.
INTEGRATION OF SYSTEM COMPONENTS AND UNCERTAINTY ANALYSIS - HANFORD EXAMPLES
Wood, M.I.
2009-01-01
- Deterministic 'one off' analyses as basis for evaluating sensitivity and uncertainty relative to the reference case
- Spatial coverage identical to the reference case
- Two types of analysis assumptions: minimax parameter values around reference case conditions, and 'what if' cases that change the reference case condition and associated parameter values
- No conclusions about the likelihood of the estimated result other than a qualitative expectation that the actual outcome should tend toward the reference case estimate
Nanni, Arthur Schmidt; Roisenberg, Ari; de Hollanda, Maria Helena Bezerra Maia; Marimon, Maria Paula Casagrande; Viero, Antonio Pedro; Scheibe, Luiz Fernando
2013-01-01
Groundwater with anomalous fluoride content and water mixture patterns were studied in the fractured Serra Geral Aquifer System, a basaltic to rhyolitic geological unit, using a principal component analysis interpretation of groundwater chemical data from 309 deep wells distributed in the Rio Grande do Sul State, Southern Brazil. A four-component model that explains 81% of the total variance in the Principal Component Analysis is suggested. Six hydrochemical groups were identified. δ18O and δ...
Principal Component Analysis of AIRS and CrIS Data
Aumann, H. H.; Manning, Evan
2015-01-01
Synthetic Eigen Vectors (EV) used for the statistical analysis of the PC reconstruction residual of large ensembles of data are a novel tool for the analysis of data from hyperspectral infrared sounders like the Atmospheric Infrared Sounder (AIRS) on the EOS Aqua and the Cross-track Infrared Sounder (CrIS) on the SUOMI polar orbiting satellites. Unlike empirical EV, which are derived from the observed spectra, the synthetic EV are derived from a large ensemble of spectra which are calculated assuming that, given a state of the atmosphere, the spectra created by the instrument can be accurately calculated. The synthetic EV are then used to reconstruct the observed spectra. The analysis of the differences between the observed spectra and the reconstructed spectra for Simultaneous Nadir Overpasses of tropical oceans reveals unexpected differences at the more than 200 mK level under relatively clear conditions, particularly in the mid-wave water vapor channels of CrIS. The repeatability of these differences using independently trained SEV and results from different years appears to rule out inconsistencies in the radiative transfer algorithm or the data simulation. The reasons for these discrepancies are under evaluation.
CONSTRUCTION THEORY AND NOISE ANALYSIS METHOD OF GLOBAL CGCS2000 COORDINATE FRAME
Z. Jiang
2018-04-01
The definition, renewal and maintenance of geodetic datums has been an international hot issue. In recent years, many countries have been studying and implementing the modernization and renewal of local geodetic reference coordinate frames. Based on precise results of continuous observations over the last 15 years from the state CORS (continuously operating reference system) network and the mainland GNSS (Global Navigation Satellite System) network between 1999 and 2007, this paper studies the construction of a mathematical model of the Global CGCS2000 frame, and mainly analyzes the theory and algorithm of the two-step method for Global CGCS2000 coordinate frame formulation. Finally, the noise characteristics of the coordinate time series are estimated quantitatively with the criterion of maximum likelihood estimation.
Principal Component Analysis of Long-Lag, Wide-Pulse Gamma-Ray Burst Data
Zhao-Yang Peng & Wen-Shuai Liu
Department of Physics, Yunnan Normal University, Kunming 650500, China; e-mail: pzy@ynao.ac.cn
We have carried out a Principal Component Analysis (PCA) of the temporal and spectral ...
Maler, Philippe; Erhardt, Jean-Bernard; Ourliac, Jean-Paul
2015-09-01
This report is the third of a series dealing with the coordination of ministerial actions in favor of the use of liquefied natural gas (LNG) as fuel in transports. LNG is an important potential substitute for diesel fuel in road transport and would allow significant abatement of nitrogen oxide emissions. Bio-LNG is ten times less polluting than fossil-fuel LNG, and thus important efforts are to be made in bio-LNG R&D. An important work has been carried out for adapting EU regulations and standards to LNG vehicles and LNG supply developments. This report presents, first, a summary of the report's recommendations and the aim of this coordination study, and then addresses the different coordination aspects in more detail: 1 - European framework of the energy transition in road freight transport (differences with maritime transport, CO2 emissions abatement, truck pollution and fuel quality standards, truck technical specifications and equipment, fuel taxes in EU countries); 2 - European policy and national actions in favour of LNG development for road transport (LNG as alternate fuel, the Paris agreement, the French national energy plan); 3 - Environmental benefits of LNG in road transport (public health impacts, nitrogen oxides abatement, divergent views and expertise, LNG and CO2 abatement measures, bio-LNG environmental evaluation); 4 - LNG development actors in road transport and the administrative coordination (professional organizations, public stakeholders, LNG topics information dissemination at the Ministry); 5 - LNG development in road transport at the worldwide, European and national scales; 6 - European regulations and standards allowing truck LNG fueling and circulation (standard needs, users information, regulation works); 7 - Common rules to define and implement for personnel training; 8 - Reflection on LNG taxation; 9 - Support policy for a road transport LNG supply chain (infrastructures, European financing, lessons learnt from maritime
Nuclear plant components: mechanical analysis and lifetime evaluation
Chator, T.
1993-09-01
This paper concerns the methodology adopted by the Research and Development Division to handle mechanical problems found in structures and machines. These often very complex studies (3-D structures, complex loadings, nonlinear behavior laws) usually call for advanced tools and calculation means. In order to carry out these complex studies, the R&D Division is developing software that handles very complex thermo-mechanical analysis using the Finite Element Method. It enables us to analyse static, dynamic and elasto-plastic problems as well as contact problems, and to evaluate damage and lifetime of structures. This paper is illustrated by actual industrial case examples. The major ones deal with: 1. Analysis of a new impeller/shaft assembly of a primary coolant pump. The 3D mesh is submitted simultaneously to thermal load, pressure, hydraulic, centrifugal and axial forces and clamping of studs; contacts between shaft/impeller, nut bearing side/shaft bearing side. For this study, we have developed a new method to handle the clamping of studs. The stud elongation value is given to the software, which automatically computes the distortions between both structures in contact and then the final position of the bearing areas (using an iterative nonlinear algorithm of modified Newton-Raphson type). 2. Analysis of the stress intensity factor of a crack. The 3D mesh (representing the crack) is submitted simultaneously to axial and radial forces. In this case, we use the Theta method to calculate the energy restitution rate in order to determine the stress intensity factors. (authors). 7 figs., 1 tab., 3 refs
Blind Component Separation in Wavelet Space: Application to CMB Analysis
J. Delabrouille
2005-09-01
Full Text Available It is a recurrent issue in astronomical data analysis that observations are incomplete maps with missing patches or intentionally masked parts. In addition, many astrophysical emissions are nonstationary processes over the sky. All these effects impair data processing techniques which work in the Fourier domain. Spectral matching ICA (SMICA is a source separation method based on spectral matching in Fourier space designed for the separation of diffuse astrophysical emissions in cosmic microwave background observations. This paper proposes an extension of SMICA to the wavelet domain and demonstrates the effectiveness of wavelet-based statistics for dealing with gaps in the data.
Importance Analysis of In-Service Testing Components for Ulchin Unit 3
Dae-Il Kan; Kil-Yoo Kim; Jae-Joo Ha
2002-01-01
We performed an importance analysis of In-Service Testing (IST) components for Ulchin Unit 3 using the integrated evaluation method for categorizing component safety significance developed in this study. The importance analysis using the developed method is initiated by ranking the component importance using quantitative PSA information. The importance analysis of the IST components not modeled in the PSA is performed through the engineering judgment, based on the expertise of PSA, and the quantitative and qualitative information for the IST components. The PSA scope for importance analysis includes not only Level 1 and 2 internal PSA but also Level 1 external and shutdown/low power operation PSA. The importance analysis results of valves show that 167 (26.55%) of the 629 IST valves are HSSCs and 462 (73.45%) are LSSCs. Those of pumps also show that 28 (70%) of the 40 IST pumps are HSSCs and 12 (30%) are LSSCs. (authors)
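One standard way to rank component safety significance from PSA cut sets, as in importance analyses like the one above, is the Fussell-Vesely measure. The abstract does not name the specific measure used, so the following is an illustrative sketch with a hypothetical system model, using the rare-event approximation.

```python
def fussell_vesely(cut_sets, probs, component):
    """Fussell-Vesely importance under the rare-event approximation:
    the share of top-event frequency carried by cut sets containing
    the given component."""
    def p(cs):
        out = 1.0
        for e in cs:
            out *= probs[e]
        return out
    total = sum(p(cs) for cs in cut_sets)
    with_c = sum(p(cs) for cs in cut_sets if component in cs)
    return with_c / total

# Hypothetical minimal cut sets and basic-event probabilities.
cut_sets = [{"valve_a"}, {"pump_1", "pump_2"}, {"valve_a", "pump_1"}]
probs = {"valve_a": 1e-3, "pump_1": 1e-2, "pump_2": 1e-2}
print(fussell_vesely(cut_sets, probs, "valve_a"))
```

Components with high importance values would be classed as high safety significant (HSSC); low values support an LSSC classification, subject to the kind of engineering judgment the abstract describes for unmodeled components.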
Modeling and Analysis of Component Faults and Reliability
Le Guilly, Thibaut; Olsen, Petur; Ravn, Anders Peter
2016-01-01
This chapter presents a process to design and validate models of reactive systems in the form of communicating timed automata. The models are extended with faults associated with probabilities of occurrence. This enables a fault tree analysis of the system using minimal cut sets that are automatically generated. The stochastic information on the faults is used to estimate the reliability of the fault-affected system. The reliability is given with respect to properties of the system state space. We illustrate the process on a concrete example using the Uppaal model checker for validating the ideal system model and the fault modeling. Then the statistical version of the tool, UppaalSMC, is used to find reliability estimates.
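Minimal cut set generation of the kind mentioned above can be sketched by a small MOCUS-style expansion over a gate tree. This is a toy illustration on a hypothetical two-pump system, not the chapter's automata-based tool chain.

```python
from itertools import product

# A fault tree as nested tuples: ("AND"|"OR", child, ...); leaves are
# basic-event names. A minimal cut set is a smallest set of basic events
# whose joint occurrence triggers the top event.
def cut_sets(node):
    if isinstance(node, str):
        return [frozenset([node])]
    op, *children = node
    child_sets = [cut_sets(c) for c in children]
    if op == "OR":
        # any child's cut set triggers the gate
        sets = {s for cs in child_sets for s in cs}
    else:  # "AND": take one cut set from each child and union them
        sets = {frozenset().union(*combo) for combo in product(*child_sets)}
    # keep only minimal sets (drop proper supersets)
    return [s for s in sets if not any(t < s for t in sets)]

# Hypothetical tree: top event if power fails, or if both pumps fail.
tree = ("OR", "power", ("AND", "pump_a", "pump_b"))
print(sorted(sorted(s) for s in cut_sets(tree)))  # [['power'], ['pump_a', 'pump_b']]
```

Attaching occurrence probabilities to the basic events then yields the reliability estimates that the chapter obtains with UppaalSMC.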
Thermal Analysis of Fermilab Mu2e Beamstop and Structural Analysis of Beamline Components
Narug, Colin S. [Northern Illinois U.
2018-01-01
The Mu2e project at Fermi National Accelerator Laboratory aims to observe the direct conversion of muons to electrons. The success or failure of the experiment to observe this conversion will further the understanding of the Standard Model of physics. Using the particle accelerator, protons will be accelerated and sent to the Mu2e experiment, which will separate the muons from the beam. The muons will then be observed to determine their momentum and the particle interactions that occur. At the end of the Detector Solenoid, the internal components will need to absorb the remaining particles of the experiment using polymer absorbers. Because the internal structure of the beamline is in a vacuum, the heat transfer mechanisms that can disperse the energy generated by particle absorption are limited to conduction and radiation. To determine the extent to which the absorbers will heat up over one year of operation, a transient thermal finite element analysis has been performed on the Muon Beam Stop. The levels of energy absorption were adjusted to determine the thermal limit of the current design. Structural finite element analysis has also been performed to determine the safety factors of the Axial Couplers, which connect and move segments of the beamline. The safety factor of the trunnion of the Instrument Feed Through Bulk Head has also been determined for when it is supporting the Muon Beam Stop. The results of the analysis further refine the design of the beamline components prior to testing, fabrication, and installation.
Data-Parallel Mesh Connected Components Labeling and Analysis
Harrison, Cyrus; Childs, Hank; Gaither, Kelly
2011-04-10
We present a data-parallel algorithm for identifying and labeling the connected sub-meshes within a domain-decomposed 3D mesh. The identification task is challenging in a distributed-memory parallel setting because connectivity is transitive and the cells composing each sub-mesh may span many or all processors. Our algorithm employs a multi-stage application of the Union-find algorithm and a spatial partitioning scheme to efficiently merge information across processors and produce a global labeling of connected sub-meshes. Marking each vertex with its corresponding sub-mesh label allows us to isolate mesh features based on topology, enabling new analysis capabilities. We briefly discuss two specific applications of the algorithm and present results from a weak scaling study. We demonstrate the algorithm at concurrency levels up to 2197 cores and analyze meshes containing up to 68 billion cells.
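The core of the merge stage described above is the union-find structure. The following is a serial, single-node sketch of that kernel on a toy adjacency list, not the authors' distributed multi-stage algorithm.

```python
class UnionFind:
    """Union-find with path halving, the core of the label-merge stage."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def label_components(n_cells, adjacency):
    """Label each cell with a representative id of its connected sub-mesh."""
    uf = UnionFind(n_cells)
    for a, b in adjacency:
        uf.union(a, b)
    return [uf.find(i) for i in range(n_cells)]

# Six cells, two shared faces in each sub-mesh: {0,1,2} and {3,4}; cell 5 isolated.
labels = label_components(6, [(0, 1), (1, 2), (3, 4)])
print(len(set(labels)))  # 3 connected sub-meshes
```

In the distributed setting each processor runs this locally and the spatial partitioning scheme merges the per-processor forests into a global labeling.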
A Bayesian Analysis of Unobserved Component Models Using Ox
Charles S. Bos
2011-05-01
This article details a Bayesian analysis of the Nile river flow data, using a similar state space model as other articles in this volume. For this data set, Metropolis-Hastings and Gibbs sampling algorithms are implemented in the programming language Ox. These Markov chain Monte Carlo methods only provide output conditioned upon the full data set. For filtered output, conditioning only on past observations, the particle filter is introduced. The sampling methods are flexible, and this advantage is used to extend the model to incorporate a stochastic volatility process. The volatility changes both in the Nile data and also in daily S&P 500 return data are investigated. The posterior density of parameters and states is found to provide information on which elements of the model are easily identifiable, and which elements are estimated with less precision.
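The Metropolis-Hastings step used in such an analysis can be sketched compactly. This is a minimal random-walk illustration on synthetic "flow" data with a single unknown level and known observation variance under a flat prior, not the article's Ox implementation of the full state space model.

```python
import math
import random

random.seed(42)

# Synthetic observations with one unknown level mu and known variance;
# under a flat prior the log-posterior equals the log-likelihood.
data = [random.gauss(900.0, 120.0) for _ in range(200)]
sigma2 = 120.0 ** 2
n, s1, s2 = len(data), sum(data), sum(y * y for y in data)

def log_post(mu):
    # -(sum of squared residuals) / (2 sigma^2), via sufficient statistics
    return -(s2 - 2.0 * mu * s1 + n * mu * mu) / (2.0 * sigma2)

# Random-walk Metropolis-Hastings on mu.
mu, chain = 500.0, []
for _ in range(5000):
    prop = mu + random.gauss(0.0, 25.0)
    if math.log(random.random()) < log_post(prop) - log_post(mu):
        mu = prop  # accept the proposal
    chain.append(mu)

burned = chain[1000:]  # discard burn-in
print(sum(burned) / len(burned))  # posterior mean, close to the sample mean
```

Gibbs sampling replaces the accept/reject step with draws from full conditionals, and the particle filter provides the filtered (past-observations-only) output the abstract mentions.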
Determination of inorganic component in plastics by neutron activation analysis
Mateus, Sandra Fonseca; Saiki, Mitiko
1995-01-01
In order to identify possible sources of heavy metals in municipal solid waste incinerator ashes, plastic materials originating mainly from household waste were analyzed using the instrumental neutron activation analysis method. Plastic samples and synthetic standards of the elements were irradiated at the IEA-R1 nuclear reactor for 8 h under a thermal neutron flux of about 10¹³ n cm⁻² s⁻¹. After an adequate decay time, counting was carried out using a hyperpure Ge detector, and the concentrations of the elements As, Ba, Br, Cd, Co, Cr, Fe, Sb, Sc, Se, Sn, Ti and Zn were determined. For some samples, not all of these elements were detected. Moreover, the range of concentrations determined in samples of similar type and color varied from a few ppb to percent levels. In general, colored and opaque plastic samples presented higher concentrations of the elements than transparent and milky plastics. The precision of the results was also evaluated. (author). 3 refs., 2 tabs
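When synthetic standards are co-irradiated as above, concentrations are typically obtained by the relative (comparator) method. The sketch below shows that arithmetic with hypothetical peak areas and masses; decay, geometry and flux-gradient corrections are omitted for brevity, and the numbers are not from the paper.

```python
def concentration(a_sample, m_sample, a_std, m_std, c_std):
    """Relative (comparator) NAA: concentration in the sample follows from
    the ratio of specific count rates, assuming identical irradiation,
    decay and counting conditions for sample and standard."""
    return c_std * (a_sample / m_sample) / (a_std / m_std)

# Hypothetical gamma peak areas (counts), masses (g) and a standard
# containing 5.0 ug/g of the element:
c_zn = concentration(a_sample=15400, m_sample=0.250,
                     a_std=30800, m_std=0.100, c_std=5.0)
print(c_zn)  # 1.0 ug/g in the plastic sample
```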
Component Analysis of Bee Venom from June to September
Ki Rok Kwon
2007-06-01
Objectives: The aim of this study was to observe the variation of bee venom content with collection period. Methods: Content analysis of bee venom was performed using an HPLC method against a melittin standard. Results: Melittin content was 478.97 mg/g in June, 493.89 mg/g in July, 468.18 mg/g in August and 482.15 mg/g in September, so the melittin content showed no significant change from June to September. Conclusion: From these results, we cautiously conclude that collection time is not an important factor for the quality control of bee venom within the period from June to September.
Christensen, Steen; Serbus, Laura Renee
2015-01-01
Two-component regulatory systems are commonly used by bacteria to coordinate intracellular responses with environmental cues. These systems are composed of functional protein pairs consisting of a sensor histidine kinase and cognate response regulator. In contrast to the well-studied Caulobacter crescentus system, which carries dozens of these pairs, the streamlined bacterial endosymbiont Wolbachia pipientis encodes only two pairs: CckA/CtrA and PleC/PleD. Here, we used bioinformatic tools to compare characterized two-component system relays from C. crescentus, the related Anaplasmataceae species Anaplasma phagocytophilum and Ehrlichia chaffeensis, and 12 sequenced Wolbachia strains. We found the core protein pairs and a subset of interacting partners to be highly conserved within Wolbachia and these other Anaplasmataceae. Genes involved in two-component signaling were positioned differently within the various Wolbachia genomes, whereas the local context of each gene was conserved. Unlike Anaplasma and Ehrlichia, Wolbachia two-component genes were more consistently found clustered with metabolic genes. The domain architecture and key functional residues standard for two-component system proteins were well-conserved in Wolbachia, although residues that specify cognate pairing diverged substantially from other Anaplasmataceae. These findings indicate that Wolbachia two-component signaling pairs share considerable functional overlap with other α-proteobacterial systems, whereas their divergence suggests the potential for regulatory differences and cross-talk. PMID:25809075
DeWitt, Natalie; Lohrmann, David K.; O'Neill, James; Clark, Jeffrey K.
2011-01-01
Background: The purpose of this study was to detect and document common themes among success stories, along with challenges, as related by participants in the Michiana Coordinated School Health Leadership Institute. Four-member teams from 18 Michigan and Indiana school districts participated in semiannual Institute workshops over a 3-year period…
Final report of coordination and cooperation with the European Union on embankment failure analysis
There has been an emphasis in the European Union (EU) community on the investigation of extreme flood processes and the uncertainties related to these processes. Over a 3-year period, the EU and the U.S. dam safety community (1) coordinated their efforts and collected information needed to integrate...
Performance analysis of coordination strategies in two-tier Heterogeneous Networks
Boukhedimi, Ikram; Kammoun, Abla; Alouini, Mohamed-Slim
2016-08-11
Large scale multi-tier Heterogeneous Networks (HetNets) are expected to ensure a consistent quality of service (QoS) in 5G systems. Such networks consist of a macro base station (BS) equipped with a large number of antennas and a dense overlay of small cells. The small cells could be deployed within the same coverage of the macro-cell BS, thereby causing high levels of inter-cell interference. In this regard, coordinated beamforming techniques are considered as a viable solution to counteract the arising interference. The goal of this work is to analyze the efficiency of coordinated beamforming techniques in mitigating both intra-cell and inter-cell interference. In particular, we consider the downlink of a Time-division duplexing (TDD) massive multiple-input-multiple-output (MIMO) tier-HetNet and analyze different beamforming schemes together with different degrees of coordination between the BSs. We exploit random matrix theory tools in order to provide, in explicit form, deterministic equivalents for the average achievable rates in the macro-cell and the micro-cells. We prove that our theoretical derivations allow us to draw some conclusions regarding the role played by coordination strategies in reducing the inter-cell interference. These findings are finally validated by a selection of some numerical results. © 2016 IEEE.
Clarke, John-Paul B.; Brooks, James; McClain, Evan; Paladhi, Anwesha Roy; Li, Leihong; Schleicher, David; Saraf, Aditya; Timar, Sebastian; Crisp, Don; Bertino, Jason;
2012-01-01
This work involves the development of a concept that enhances integrated metroplex arrival and departure coordination, determines the temporal (the use of time separation for aircraft sharing the same airspace resources) and spatial (the use of different routes or vertical profiles for aircraft streams at any given time) impact of metroplex traffic coordination within the National Airspace System (NAS), and quantifies the benefits of the most desirable metroplex traffic coordination concept. This work addresses researching and developing metroplex concepts that apply broadly across the range of airspace and airport demand characteristics envisioned for NextGen metroplex operations. The objective of this work is to investigate, formulate, develop models for, and analyze an operational concept that mitigates issues specific to the metroplex or that takes advantage of unique characteristics of metroplex airports to improve efficiencies. The concept is an innovative approach allowing the NAS to mitigate metroplex interdependencies between airports, optimize metroplex arrival and departure coordination among airports, maximize metroplex airport throughput, minimize delay due to airport runway configuration changes, increase resiliency to disruptions, and increase the tolerance of the system to degrade gracefully under adverse conditions such as weather, traffic management initiatives, and delays in general.
Analysis of Large Flexible Body Deformation in Multibody Systems Using Absolute Coordinates
Dombrowski, Stefan von [Institute of Robotics and Mechatronics, German Aerospace Center (DLR) (Germany)], E-mail: stefan.von.dombrowski@dlr.de
2002-11-15
To consider large deformation problems in multibody system simulations, a finite element approach, called the absolute nodal coordinate formulation, has been proposed. In this formulation, absolute nodal coordinates and their material derivatives are applied to represent both deformation and rigid body motion. The choice of nodal variables allows a fully nonlinear representation of rigid body motion and can provide the exact rigid body inertia in the case of large rotations. The methodology is especially suited for, but not limited to, modeling of beams, cables and shells in multibody dynamics. This paper summarizes the absolute nodal coordinate formulation for a 3D Euler-Bernoulli beam model, in particular the definition of nodal variables, corresponding generalized elastic and inertia forces and equations of motion. The element stiffness matrix is a nonlinear function of the nodal variables even in the case of linearized strain/displacement relations. Nonlinear strain/displacement relations can be calculated from the global displacements using quadrature formulae. Computational examples are given which demonstrate the capabilities of the applied methodology. Consequences of the choice of shape functions on the representation of internal forces are discussed. Linearized strain/displacement modeling is compared to the nonlinear approach and significant advantages of the latter, when using the absolute nodal coordinate formulation, are outlined.
Competition analysis on the operating system market using principal component analysis
Brătucu, G.
2011-01-01
The operating system market has evolved greatly. The largest software producer in the world, Microsoft, dominates the operating system segment. With three operating systems (Windows XP, Windows Vista and Windows 7) the company held a market share of 87.54% in January 2011. Over time, open source operating systems have begun to penetrate the market, strongly affecting other manufacturers. Companies such as Apple Inc. and Google Inc. have also entered the operating system market. This paper aims to compare the best-selling operating systems on the market in terms of their defining characteristics. For this purpose the principal component analysis method was used.
Rasmussen, Peter Mondrup; Abrahamsen, Trine Julie; Madsen, Kristoffer Hougaard
2012-01-01
We investigate the use of kernel principal component analysis (PCA) and the inverse problem known as pre-image estimation in neuroimaging: i) we explore kernel PCA and pre-image estimation as a means of image denoising as part of the image preprocessing pipeline, evaluating the denoising procedure within a data-driven split-half evaluation framework; ii) we introduce manifold navigation for exploration of a nonlinear data manifold, and illustrate how pre-image estimation can be used to generate brain maps in the continuum between experimentally defined brain states/classes.
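The denoise-by-reconstruction step described above has a simple linear analogue: project the data onto the leading principal components and map back. In kernel PCA that back-projection (the pre-image) must be estimated, which is what makes it an inverse problem; the sketch below (synthetic data, hypothetical variable names) shows only the linear projection-and-reconstruction idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: points on a 1-D line in 5-D space, plus noise.
t = rng.standard_normal((200, 1))
direction = np.array([[1.0, 2.0, 0.5, -1.0, 0.3]])
clean = t @ direction
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

# PCA via SVD of the centred data matrix.
mean = noisy.mean(axis=0)
X = noisy - mean
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Keep only the leading component and reconstruct ("denoise"):
# noise in the four discarded directions is removed.
k = 1
denoised = (X @ Vt[:k].T) @ Vt[:k] + mean

err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
```

The kernel version replaces `X @ Vt[:k].T` with projections in a kernel-induced feature space, after which the pre-image in input space must be estimated iteratively.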
Bach, F.W.; Steiner, H.; Schreck, G.
1993-01-01
The present joint study performed by the Commissariat a l'energie atomique and the Universitaet Hannover and coordinated by the Commission of the European Communities was intended to analyse the results generated in a number of research contracts concerned with cutting tests in air and underwater, with consideration of the prevailing working conditions. The analysis has led to a large database, giving broadly-assessed information for the dismantling of radioactive components. The range of study was enlarged, where possible, to include recently obtained results outside the present research programme, consideration also being given to supplementary cutting tools and filtration systems not covered by the present programme. Data was concentrated in structured information packages on practical experience available for a series of cutting tools and filters. These were introduced into a computerized user-friendly databank, to be considered as a first-stage development, which should be continuously updated and possibly oriented in the future to an expert system
Lindstrom, Richard M.; Firestone, Richard B.; Paviotti-Corcuera, R.
2003-01-01
The main discussions and conclusions from the Third Co-ordination Meeting on the Development of a Database for Prompt Gamma-ray Neutron Activation Analysis are summarized in this report. All results were reviewed in detail, and the final version of the TECDOC and the corresponding software were agreed upon and approved for preparation. Actions were formulated with the aim of completing the final version of the TECDOC and associated software by May 2003
Lone, M.A.; Mughabghab, S.F.; Paviotti-Corcuera, R.
2001-06-01
This report summarizes the presentations, recommendations and conclusions of the Second Research Co-ordination Meeting on Development of a Database for Prompt γ-ray Neutron Activation Analysis. The purpose of this meeting was to review the results achieved on the development of the database, and to discuss further developments and the planning of the products of this CRP. Actions to be taken were agreed upon with the aim of completing the project by the end of 2002. (author)
Independent component analysis of dynamic contrast-enhanced computed tomography images
Koh, T S [School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Ave, Singapore 639798 (Singapore); Yang, X [School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Ave, Singapore 639798 (Singapore); Bisdas, S [Department of Diagnostic and Interventional Radiology, Johann Wolfgang Goethe University Hospital, Theodor-Stern-Kai 7, D-60590 Frankfurt (Germany); Lim, C C T [Department of Neuroradiology, National Neuroscience Institute, 11 Jalan Tan Tock Seng, Singapore 308433 (Singapore)
2006-10-07
Independent component analysis (ICA) was applied on dynamic contrast-enhanced computed tomography images of cerebral tumours to extract spatial component maps of the underlying vascular structures, which correspond to different haemodynamic phases as depicted by the passage of the contrast medium. The locations of arteries, veins and tumours can be separately identified on these spatial component maps. As the contrast enhancement behaviour of the cerebral tumour differs from the normal tissues, ICA yields a tumour component map that reveals the location and extent of the tumour. Tumour outlines can be generated using the tumour component maps, with relatively simple segmentation methods. (note)
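The spatial-unmixing idea behind ICA can be sketched for the two-source case: whiten the mixed signals, then search for the rotation that maximizes non-Gaussianity (measured here by excess kurtosis). This is a toy stand-in for a full ICA algorithm such as FastICA, run on synthetic signals rather than CT data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two independent non-Gaussian "tissue" time courses, linearly mixed.
s1 = np.sign(rng.standard_normal(2000))   # binary source (sub-Gaussian)
s2 = rng.uniform(-1, 1, 2000)             # uniform source
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])    # unknown mixing matrix
X = A @ S

# Whiten the mixtures (zero mean, identity covariance).
Xc = X - X.mean(axis=1, keepdims=True)
cov = Xc @ Xc.T / Xc.shape[1]
d, E = np.linalg.eigh(cov)
Z = E @ np.diag(d ** -0.5) @ E.T @ Xc

def kurt(y):
    """Excess kurtosis of a (roughly) unit-variance signal."""
    return np.mean(y ** 4) - 3.0

# In 2-D, unmixing reduces to finding the rotation angle whose
# projection is maximally non-Gaussian.
angles = np.linspace(0.0, np.pi, 720)
best = max(angles, key=lambda a: abs(kurt(np.cos(a) * Z[0] + np.sin(a) * Z[1])))
y = np.cos(best) * Z[0] + np.sin(best) * Z[1]   # one recovered component
```

In the imaging application the same separation is applied spatially, so each recovered component is a map rather than a time course.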
Progress Towards Improved Analysis of TES X-ray Data Using Principal Component Analysis
Busch, S. E.; Adams, J. S.; Bandler, S. R.; Chervenak, J. A.; Eckart, M. E.; Finkbeiner, F. M.; Fixsen, D. J.; Kelley, R. L.; Kilbourne, C. A.; Lee, S.-J.;
2015-01-01
The traditional method of applying a digital optimal filter to measure X-ray pulses from transition-edge sensor (TES) devices does not achieve the best energy resolution when the signals have a highly non-linear response to energy, or the noise is non-stationary during the pulse. We present an implementation of a method to analyze X-ray data from TESs, which is based upon principal component analysis (PCA). Our method separates the X-ray signal pulse into orthogonal components that have the largest variance. We typically recover pulse height, arrival time, differences in pulse shape, and the variation of pulse height with detector temperature. These components can then be combined to form a representation of pulse energy. An added value of this method is that by reporting information on more descriptive parameters (as opposed to a single number representing energy), we generate a much more complete picture of the pulse received. Here we report on progress in developing this technique for future implementation on X-ray telescopes. We used an 55Fe source to characterize Mo/Au TESs. On the same dataset, the PCA method recovers a spectral resolution that is better by a factor of two than achievable with digital optimal filters.
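A toy version of the pulse-analysis step: simulate pulse records with varying amplitude, run PCA via the SVD, and check that the leading component's score tracks pulse height. The pulse shape and noise level are illustrative assumptions, not the instrument's actual response:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(256)

# Simulated detector pulses: fast rise, slow decay, varying amplitude.
amps = 1.0 + 0.2 * rng.standard_normal(500)
template = np.exp(-t / 80.0) - np.exp(-t / 8.0)
pulses = amps[:, None] * template[None, :] + 0.01 * rng.standard_normal((500, 256))

# PCA via SVD of the centred record matrix; the leading right
# singular vector captures the dominant mode of variation.
X = pulses - pulses.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt[0]

# The first-component score is (up to sign) a pulse-height estimate.
corr = np.corrcoef(scores, amps)[0, 1]
```

In the paper's setting additional components capture arrival time, shape changes, and temperature drift, and several scores are combined into an energy estimate.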
Crude oil price analysis and forecasting based on variational mode decomposition and independent component analysis
E, Jianwei; Bao, Yanling; Ye, Jimin
2017-10-01
As one of the most vital energy resources in the world, crude oil plays a significant role in the international economic market, and the fluctuation of its price has attracted both academic and commercial attention. Many methods exist for forecasting the trend of the crude oil price, but traditional models have failed to predict it accurately. This paper therefore proposes a hybrid method that combines variational mode decomposition (VMD), independent component analysis (ICA) and the autoregressive integrated moving average (ARIMA) model, called VMD-ICA-ARIMA. The purpose of this study is to analyze the factors influencing the crude oil price and to predict its future values. The major steps are as follows: first, applying the VMD model to the original signal (the crude oil price), the mode functions are decomposed adaptively; second, independent components are separated by ICA, and their effect on the crude oil price is analyzed; finally, the crude oil price is forecast with the ARIMA model, and the forecast trend shows that the crude oil price declines periodically. Compared with the benchmark ARIMA and EEMD-ICA-ARIMA models, VMD-ICA-ARIMA forecasts the crude oil price more accurately.
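The final forecasting stage can be illustrated in miniature with an autoregressive fit by least squares, leaving out the VMD and ICA stages and the differencing/moving-average parts of ARIMA (synthetic series with assumed coefficients, not oil-price data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "price" series generated by a known AR(2) process.
true_phi = np.array([0.6, 0.3])
x = np.zeros(1000)
for i in range(2, 1000):
    x[i] = true_phi @ x[i - 2:i][::-1] + 0.1 * rng.standard_normal()

# Fit the AR(2) coefficients by least squares on lagged values.
Y = x[2:]
A = np.c_[x[1:-1], x[:-2]]          # lag-1 and lag-2 regressor columns
phi, *_ = np.linalg.lstsq(A, Y, rcond=None)

# One-step-ahead forecast from the last two observations.
forecast = phi @ x[-1:-3:-1]
```

A full ARIMA(p, d, q) fit additionally differences the series d times and models a moving-average error term; in the hybrid method this fit is applied to the VMD/ICA-processed components rather than to the raw series.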
Decoding the auditory brain with canonical component analysis.
de Cheveigné, Alain; Wong, Daniel D E; Di Liberto, Giovanni M; Hjortkjær, Jens; Slaney, Malcolm; Lalor, Edmund
2018-05-15
The relation between a stimulus and the evoked brain response can shed light on perceptual processes within the brain. Signals derived from this relation can also be harnessed to control external devices for Brain Computer Interface (BCI) applications. While the classic event-related potential (ERP) is appropriate for isolated stimuli, more sophisticated "decoding" strategies are needed to address continuous stimuli such as speech, music or environmental sounds. Here we describe an approach based on Canonical Correlation Analysis (CCA) that finds the optimal transform to apply to both the stimulus and the response to reveal correlations between the two. Compared to prior methods based on forward or backward models for stimulus-response mapping, CCA finds significantly higher correlation scores, thus providing increased sensitivity to relatively small effects, and supports classifier schemes that yield higher classification scores. CCA strips the brain response of variance unrelated to the stimulus, and the stimulus representation of variance that does not affect the response, and thus improves observations of the relation between stimulus and response.
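CCA itself can be written compactly as whitening both data blocks and taking the SVD of their cross-product. The sketch below (synthetic "stimulus" and "response" sharing one latent signal; not the authors' pipeline) recovers the first canonical pair:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stimulus and response: both mix a shared latent signal.
n = 500
shared = rng.standard_normal(n)
X = np.c_[shared + 0.5 * rng.standard_normal(n), rng.standard_normal(n)]
Y = np.c_[rng.standard_normal(n), shared + 0.5 * rng.standard_normal(n)]

def cca_first_pair(X, Y):
    """First pair of canonical variates and their correlation."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    # Whiten each block via its thin SVD.
    Ux, sx, Vxt = np.linalg.svd(Xc, full_matrices=False)
    Uy, sy, Vyt = np.linalg.svd(Yc, full_matrices=False)
    # SVD of the cross-product of the whitened blocks gives the
    # canonical directions; singular values are the correlations.
    U, s, Vt = np.linalg.svd(Ux.T @ Uy)
    a, b = Ux @ U[:, 0], Uy @ Vt[0]
    return a, b, s[0]

a, b, rho = cca_first_pair(X, Y)   # rho: first canonical correlation
```

In the decoding application, X would hold (possibly lagged) stimulus features and Y the multichannel brain response, so that a and b are the optimally correlated stimulus and response projections.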
Using principal component analysis for selecting network behavioral anomaly metrics
Gregorio-de Souza, Ian; Berk, Vincent; Barsamian, Alex
2010-04-01
This work addresses new approaches to behavioral analysis of networks and hosts for the purposes of security monitoring and anomaly detection. Most commonly used approaches simply implement anomaly detectors for one, or a few, simple metrics, and those metrics can exhibit unacceptable false alarm rates. For instance, the anomaly score of network communication is defined as the reciprocal of the likelihood that a given host uses a particular protocol (or destination); this definition may result in an unrealistically high threshold for alerting to avoid being flooded by false positives. We demonstrate that selecting and adapting the metrics and thresholds on a host-by-host or protocol-by-protocol basis can be done by established multivariate analyses such as PCA. We show how to determine one or more metrics, for each network host, that record the highest available amount of information regarding the baseline behavior and show relevant deviations reliably. We describe the methodology used to pick from a large selection of available metrics, and illustrate a method for comparing the resulting classifiers. Using our approach we are able to reduce the resources required to properly identify misbehaving hosts, protocols, or networks, by dedicating system resources to only those metrics that actually matter in detecting network deviations.
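The metric-selection idea reduces to inspecting the loadings of the first principal component: the metric with the largest absolute loading carries the most baseline variance for that host. A minimal sketch with three hypothetical metrics (the construction of the metric matrix is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(4)

# Rows = time windows, columns = candidate behavioral metrics for
# one host. Metric 2 is constructed to carry most of the variance.
n = 300
base = rng.standard_normal(n)
metrics = np.c_[0.1 * rng.standard_normal(n),
                0.2 * rng.standard_normal(n),
                base + 0.1 * rng.standard_normal(n)]

# PCA via SVD of the centred metric matrix.
X = metrics - metrics.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Loadings of the first principal component point at the most
# informative metric for this host's baseline behavior.
best_metric = int(np.argmax(np.abs(Vt[0])))
```

In practice the raw metrics would first be scaled to comparable units, since PCA loadings are otherwise dominated by whichever metric has the largest numeric range.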
Wavelet decomposition based principal component analysis for face recognition using MATLAB
Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish
2016-03-01
For the realization of face recognition systems, in the static as well as the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet-decomposition-based principal component analysis approach to face recognition. Principal component analysis is chosen over other algorithms for its relative simplicity, efficiency, and robustness. Face recognition means identifying a person from facial features and, in some sense, resembles factor analysis, i.e. the extraction of the principal components of an image. Principal component analysis is subject to some drawbacks, mainly poor discriminatory power and the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet-transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial data in both the spatial and frequency domains. The experimental results show that this face recognition method achieves a significant improvement in recognition rate as well as better computational efficiency.
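The feature-extraction front end can be sketched with a single level of the Haar wavelet transform, whose low-frequency approximation coefficients would then be fed to PCA. The 8-sample signal is illustrative; a real pipeline would transform 2-D face images row- and column-wise:

```python
import numpy as np

def haar_level1(x):
    """One level of the Haar wavelet transform of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass half
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass half
    return approx, detail

def haar_level1_inverse(approx, detail):
    """Exact inverse of haar_level1."""
    x = np.empty(2 * approx.size)
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_level1(signal)
restored = haar_level1_inverse(a, d)
```

Keeping only the approximation coefficients halves the input dimension per level, which directly shrinks the eigenvector computation that the abstract identifies as PCA's main cost.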
Dong, Jianghu J; Wang, Liangliang; Gill, Jagbir; Cao, Jiguo
2017-01-01
This article is motivated by some longitudinal clinical data of kidney transplant recipients, where kidney function progression is recorded as the estimated glomerular filtration rates at multiple time points post kidney transplantation. We propose to use the functional principal component analysis method to explore the major source of variations of glomerular filtration rate curves. We find that the estimated functional principal component scores can be used to cluster glomerular filtration rate curves. Ordering functional principal component scores can detect abnormal glomerular filtration rate curves. Finally, functional principal component analysis can effectively estimate missing glomerular filtration rate values and predict future glomerular filtration rate values.
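On a common time grid, functional PCA reduces to ordinary PCA of the curve matrix. The sketch below (synthetic stable versus declining trajectories with hypothetical eGFR-like values, not the study's data) shows how the first FPC score separates trajectory clusters:

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 50)        # scaled time since transplantation

# Synthetic eGFR trajectories: 20 stable and 20 declining patients,
# each observed at 50 common time points.
stable = 60.0 + 2.0 * rng.standard_normal((20, 1)) + 0.0 * t
declining = 60.0 - 30.0 * t + 2.0 * rng.standard_normal((20, 1))
curves = np.vstack([stable, declining])

# Discretised functional PCA: ordinary PCA of the centred curve matrix.
X = curves - curves.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt[0]                   # first FPC score for each patient

# The first score separates stable from declining trajectories.
gap = abs(scores[:20].mean() - scores[20:].mean())
```

With irregularly observed curves, as in real clinical data, the same idea is applied through a smoothed covariance estimate rather than a raw data matrix, which is also what enables imputing missing values and predicting future ones.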
Choi, S. Y.; Han, S. H.
2004-01-01
Reliability data for Korean NPPs that reflect plant-specific characteristics are necessary for the PSA of Korean nuclear power plants. We performed a study to develop a component reliability database and software for component reliability analysis. Based on this system, we collected component operation data and failure/repair data covering plant operation up to 1998 for YGN 3,4 and up to 2000 for UCN 3,4. Recently, we upgraded the database by collecting additional data up to 2002 for Korean standard nuclear power plants and performed the component reliability analysis and Bayesian analysis again. In this paper, we provide a summary of the component reliability data for the probabilistic safety analysis of Korean standard nuclear power plants and describe the plant-specific characteristics compared to generic data.
Analysis Components of the Digital Consumer Behavior in Romania
Cristian Bogdan Onete
2016-08-01
This article investigates Romanian consumer behavior in the context of the evolution of online shopping. Given that online stores are a profitable business model in the area of electronic commerce, and because the relationship between the Romanian digital consumer and the decision to purchase products or services on the Internet has not been sufficiently explored, this study aims to identify specific features of the new type of consumer and to examine the level of online shopping in Romania. A documentary study was therefore carried out with statistical data regarding the volume and the number of online shopping transactions in Romania during 2010-2014, the types of products and services that Romanians search the Internet for, and the demographics of these people. In addition, to study online consumer behavior more closely and to interpret the detailed secondary data provided, an exploratory research was performed as a structured questionnaire with five closed questions on: the distribution of individuals according to gender (male or female); the decision to purchase products/services in the virtual environment in the past year; the source of the goods/services purchased (Romanian or foreign sites); the factors that determined consumers to buy products from foreign sites; and the categories of products purchased through online transactions from foreign merchants. The questionnaire was distributed electronically to Facebook users, and the collected data were processed directly in Facebook's official survey application. The results of this research, correlated with the official data, reveal the following characteristics of the digital consumer in Romania: an atypical European consumer, more interested in online purchases from abroad, influenced by the quality and price of the purchase. This paper assumed a careful analysis of the online acquisitions phenomenon and also
Analysis of Potential Energy Corridors Proposed by the Western Electricity Coordinating Council
Kuiper, James A.; Cantwell, Brian J.; Hlava, Kevin J.; Moore, H Robert; Orr, Andrew B.; Zvolanek, Emily A.
2014-02-24
This report, Analysis of Potential Energy Corridors Proposed by the Western Electricity Coordinating Council (WECC), was prepared by the Environmental Science Division of Argonne National Laboratory (Argonne). The intent of WECC's work was to identify planning-level energy corridors that the Department of Energy (DOE) and its affiliates could study in greater detail. Argonne was tasked by DOE to analyze the WECC Proposed Energy Corridors in five topic areas for use in reviewing and revising existing corridors, as well as designating additional energy corridors in the 11 western states. In compliance with Section 368 of the Energy Policy Act of 2005 (EPAct), the Secretaries of Energy, Agriculture, and the Interior (Secretaries) published a Programmatic Environmental Impact Statement in 2008 to address the proposed designation of energy transport corridors on federal lands in the 11 western states. Subsequently, Records of Decision designating the corridors were issued in 2009 by the Bureau of Land Management (BLM) and the U.S. Forest Service (USFS). The 2012 settlement of a lawsuit brought by The Wilderness Society and others against the United States, which identified environmental concerns for many of the corridors, requires, among other things, periodic reviews of the corridors to assess the need for revisions, deletions, or additions. A 2013 Presidential Memorandum requires the Secretaries to undertake a continuing effort to identify and designate energy corridors. The WECC Proposed Energy Corridors and their analyses in this report provide key information for reviewing and revising existing corridors, as well as for designating additional energy corridors in the 11 western states. Load centers and generation hubs identified in the WECC analysis, particularly as they reflect renewable energy development, would be useful in reviewing and potentially updating the designated Section 368 corridor network. Argonne used Geographic Information System (GIS) technology to
Extracting intrinsic functional networks with feature-based group independent component analysis.
Calhoun, Vince D; Allen, Elena
2013-04-01
There is increasing use of functional imaging data to understand the macro-connectome of the human brain. Of particular interest is the structure and function of intrinsic networks (regions exhibiting temporally coherent activity both at rest and while a task is being performed), which account for a significant portion of the variance in functional MRI data. While networks are typically estimated based on the temporal similarity between regions (based on temporal correlation, clustering methods, or independent component analysis [ICA]), some recent work has suggested that these intrinsic networks can be extracted from the inter-subject covariation among highly distilled features, such as amplitude maps reflecting regions modulated by a task or even coordinates extracted from large meta-analytic studies. In this paper our goal was to explicitly compare the networks obtained from a first-level ICA (ICA on the spatio-temporal functional magnetic resonance imaging (fMRI) data) to those from a second-level ICA (i.e., ICA on computed features rather than on the first-level fMRI data). Convergent results from simulations, task-fMRI data, and rest-fMRI data show that the second-level analysis is slightly noisier than the first-level analysis but yields strikingly similar patterns of intrinsic networks (spatial correlations as high as 0.85 for task data and 0.65 for rest data, well above the empirical null) and also preserves the relationship of these networks with other variables such as age (for example, default mode network regions tended to show decreased low frequency power for first-level analyses and decreased loading parameters for second-level analyses). In addition, the best-estimated second-level results are those which are the most strongly reflected in the input feature. In summary, the use of feature-based ICA appears to be a valid tool for extracting intrinsic networks. We believe it will become a useful and important approach in the study of the macro-connectome.
Vaidya spacetime in the diagonal coordinates
Berezin, V. A., E-mail: berezin@inr.ac.ru; Dokuchaev, V. I., E-mail: dokuchaev@inr.ac.ru; Eroshenko, Yu. N., E-mail: eroshenko@inr.ac.ru [Russian Academy of Sciences, Institute for Nuclear Research (Russian Federation)
2017-03-15
We have analyzed the transformation from the initial coordinates (v, r) of the Vaidya metric with light coordinate v to the most physical diagonal coordinates (t, r). An exact solution has been obtained for the corresponding metric tensor in the case of a linear dependence of the mass function of the Vaidya metric on light coordinate v. In the diagonal coordinates, a narrow region (with a width proportional to the mass growth rate of a black hole) has been detected near the visibility horizon of the Vaidya accreting black hole, in which the metric differs qualitatively from the Schwarzschild metric and cannot be represented as a small perturbation. It has been shown that, in this case, a single set of diagonal coordinates (t, r) is insufficient to cover the entire range of initial coordinates (v, r) outside the visibility horizon; at least three sets of diagonal coordinates are required, whose domains are separated by singular surfaces on which the metric components have singularities (either g_00 = 0 or g_00 = ∞). The energy-momentum tensor diverges on these surfaces; however, the tidal forces turn out to be finite, which follows from an analysis of the deviation equations for geodesics. Therefore, these singular surfaces are exclusively coordinate singularities that can be referred to as false firewalls because there are no physical singularities on them. We have also considered the transformation from the initial coordinates to other diagonal coordinates (η, y), in which the solution is obtained in explicit form and there is no energy-momentum tensor divergence.
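For reference, the ingoing Vaidya metric in the light-cone coordinates (v, r) discussed above takes the standard form (with G = c = 1; m(v) is the mass function whose linear dependence on v is assumed in the analysis):

```latex
ds^2 = -\left(1 - \frac{2m(v)}{r}\right) dv^2 + 2\, dv\, dr
       + r^2 \left( d\theta^2 + \sin^2\theta\, d\varphi^2 \right)
```

The transformation to diagonal coordinates (t, r) is the one that eliminates the off-diagonal dv dr term; for constant m this reduces to the familiar Schwarzschild form.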
Mehai, L.; Paultre, P.; Leger, P.
1992-01-01
In the design of dams to withstand seismic events, recent studies have shown that the dam-foundation and dam-reservoir interactions have a significant influence on the dynamic response of the dam. The hypothesis of proportional damping is not realistic for such structures, in which the mechanisms of energy dissipation present notable differences between their various components. A comparative study is presented of different methods of resolution of linear systems with non-proportional damping, using recent techniques of coordinate reduction. Parametric studies were conducted on a 2-dimensional finite element model of a concrete gravity dam-foundation system. The comparison focuses essentially on the numerical efficiency and precision in the calculation of dynamic parameters (displacements, accelerations, and internal stresses) and in the distribution of damping energy among the components of the system. The evaluation of the energy dissipated in the absorbing boundaries has indicated that the algorithms retained for reducing the coordinates in real and complex space conveniently model the conditions at the limits of the structure. The high degree of numerical stability and the efficiency of the iterative procedure of Ibrahimbegovic and Wilson (1989), applied to systems with a large number of degrees of freedom, have been confirmed. 10 refs., 8 figs
Research on criticality analysis method of CNC machine tools components under fault rate correlation
Gui-xiang, Shen; Xian-zhuo, Zhao; Zhang, Ying-zhi; Chen-yu, Han
2018-02-01
In order to determine the key components of CNC machine tools under fault rate correlation, a system component criticality analysis method is proposed. Based on fault mechanism analysis, the component fault relations are determined, and an adjacency matrix is introduced to describe them. The fault structure relations are then organized hierarchically using the interpretive structural model (ISM). Assuming that the propagation of faults obeys a Markov process, the fault association matrix is described and transformed, and the PageRank algorithm is used to determine the relative influence values; combined with the component fault rate under time correlation, a comprehensive fault rate can be obtained. Based on the fault mode frequency and fault influence, the criticality of the components under fault rate correlation is determined, and the key components are identified, providing a sound basis for formulating reliability assurance measures. Finally, taking machining centers as an example, the effectiveness of the method is verified.
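The influence-ranking step can be sketched with a plain PageRank power iteration over a toy fault-propagation graph. The component names and edges below are hypothetical, not taken from the paper:

```python
# Toy fault-propagation digraph: edge u -> v means "a fault in u can
# induce a fault in v". Components that many fault chains feed into
# accumulate PageRank mass and therefore rank as more critical.
graph = {
    "spindle": ["tool_holder"],
    "tool_holder": ["spindle_motor"],
    "coolant": ["spindle", "tool_holder"],
    "spindle_motor": [],
}

def pagerank(graph, damping=0.85, iters=100):
    """Power-iteration PageRank over an adjacency-list digraph."""
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for u, outs in graph.items():
            if outs:
                share = damping * rank[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:  # dangling node: spread its mass uniformly
                for v in nodes:
                    new[v] += damping * rank[u] / len(nodes)
        rank = new
    return rank

rank = pagerank(graph)
```

In the proposed method these influence values are further weighted by the time-correlated fault rates and fault mode frequencies to produce the final criticality ordering.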
Bulk hydrogen analysis, using neutrons. Final report of the second research co-ordination meeting
1998-11-01
The aims of the Second Co-ordination Meeting (RCM) of the Coordinated Research Programme (CRP) were to report on and review progress against the work programme set at the beginning of the CRP, and to discuss the work plans for the second half of the programme. In many cases hydrogen must be measured in a bulk medium rather than merely at a surface. For this reason neutrons are used, owing to their high penetrating power in dense material. In addition, the mass attenuation coefficient for neutrons in hydrogen is significantly larger than for all other elements, meaning that neutrons have a higher probability of interacting with hydrogen than with other elements in the sample matrix. Neutrons have been used in the following areas: Fast Neutron Transmission, Scattering and Activation Technique; Digital Neutron Imaging; Hydrogen Detection by Epithermal Neutrons; Microscopic Behaviour of Hydrogen in Bulk Materials
Puntus, L.; Zolin, V.; Kudryashova, V.
2004-01-01
An investigation of the IR spectra of salts of six isomers of pyridinedicarboxylic acid (PDA): 2,3-, 2,4-, 2,5-, 2,6-, 3,4- and 3,5-pyridinedicarboxylic acid, has demonstrated that the properties of these salts depend on the bonding manner of the carboxylate groups and on the coordination of the heterocyclic nitrogen atom. The most prominent differences in the properties and spectra of the 2,6- and 3,4-PDA salts are conditioned, respectively, by the monodentate and bidentate coordination functions of the carboxylate groups in these compounds. A correlation of the breathing vibration frequency, reflecting the rigidity of the heterocyclic ring, with the position of the carboxylate substituents, which conditions intramolecular charge transfer (CT), was postulated and supported by shifts of the breathing vibration frequency depending on the structure of the isomeric ligand
Bona, Carles; Lehner, Luis; Palenzuela-Luque, Carlos
2005-01-01
We study the implications of adopting hyperbolic-driver coordinate conditions motivated by geometrical considerations. In particular, conditions that minimize the rate of change of the metric variables. We analyze the properties of the resulting system of equations and their effect when implementing excision techniques. We find that commonly used coordinate conditions lead to a characteristic structure at the excision surface where some modes are not of outflow type with respect to any excision boundary chosen inside the horizon. Thus, boundary conditions are required for these modes. Unfortunately, the specification of these conditions is a delicate issue as the outflow modes involve both gauge and main variables. As an alternative to these driver equations, we examine conditions derived from extremizing a scalar constructed from Killing's equation and present specific numerical examples
2008-07-07
analyzing multivariate data sets. The system was developed using the Java Development Kit (JDK) version 1.5, and it yields interactive performance on a ... It runs a script and captures output from MATLAB's "regress" and "stepwisefit" utilities, which perform simple and stepwise regression, respectively.
2010-01-01
This paper addresses the process of event handling and rescheduling in manufacturing practice. Firms are confronted with many diverse events, like new or changed orders, machine breakdowns, and material shortages. These events influence the feasibility and optimality of schedules, and thus induce rescheduling. In many manufacturing firms, schedules are created by several human planners. Coordination between them is needed to respond to events adequately. In this paper,...
2007-03-01
The initiation of the Coordinated Research Project (CRP) on Development and Validation of Speciation Analysis using Nuclear Techniques resulted from the recognition that knowledge of total element concentration does not provide adequate information to understand the effects of trace and heavy metals observed in the environment and in living systems. Their toxicity, bioavailability, physiological and metabolic processes, mobility and distribution are greatly dependent on the specific chemical form of the element. Speciation analysis has yet to be developed to its full potential for biochemical, clinical and environmental investigations and still more work is needed in the near future. Seven participants from seven countries participated in this CRP covering a range of analytical techniques including GC, HPLC, AAS, and ICP-MS. The first Research Coordination Meeting (RCM) of the Coordinated Research Project on the Development and Validation of Speciation Analysis using Nuclear Techniques was held at the Reactor Centre of the Jozef Stefan Institute, Podgorica (near Ljubljana), Slovenia, 20-23 June 2001. The second RCM was held at the Technical University of Vienna, 18-22 November 2002, where, in addition to the participants, two external researchers could present their views and experience in the field. The last RCM was held in Vienna, 26-29 April 2004, and this publication is a summary of the results achieved and presented at this last RCM. The participants have developed several new procedures for the reliable analysis of As, Cr, and Se species, mainly in aquatic media. A new instrument was designed and several recommendations for speciation analysis were issued. It is hoped that this publication will make a contribution to enhancing the awareness of the importance of speciation analysis and add to the reliability of speciation results in Member State laboratories
[Health projects managed by Nursing Coordinators: an analysis of contents and degree of success].
Palese, Alvisa; Bresciani, Federica; Brutti, Caterina; Chiari, Ileana; Fontana, Luciana; Fronza, Ornella; Gasperi, Giuseppina; Gheno, Oscar; Guarese, Olga; Leali, Anna; Mansueti, Nadia; Masieri, Enrico; Messina, Laura; Munaretto, Gabriella; Paoli, Claudia; Perusi, Chiara; Randon, Giulia; Rossi, Gloria; Solazzo, Pasquale; Telli, Debora; Trenti, Giuliano; Veronese, Elisabetta; Saiani, Luisa
2012-01-01
To describe the evolution and results of health projects run in hospitals and managed by Nursing Coordinators. A convenience sample of 13 northern Italian hospitals, and a sample of 56 Nursing Coordinators holding a permanent position for at least 1 year, was contacted. The following information was collected with a structured interview: projects run in 2009, topic, whether bottom-up or top-down, number of staff involved, and state (ended, still running, stopped). In 2009 Nursing Coordinators started 114 projects (mean 1.8±1.2 each): 94 (82.5%) were improvement projects, 17 (14.9%) accreditation, and 3 (2.6%) research. The projects involved 2,732 staff members (73.7%; average commitment 84 hours); 55 (48.2%) projects were still running, 52 (45.6%) completed, 5 (4.4%) had no assessment, and 2 (1.8%) had been stopped. Nurses are regularly involved in several projects, but systematic monitoring of the results obtained and stabilization strategies are scarce. Given the large number of resources invested, correct management and the choice of areas relevant to patients' problems and needs are pivotal.
Coordinated Control Design for the HTR-PM Plant: From Theoretic Analysis to Simulation Verification
Dong Zhe; Huang Xiaojin
2014-01-01
The HTR-PM plant is a two-modular nuclear power plant based on the pebble-bed modular high temperature gas-cooled reactor (MHTGR), and adopts an operation scheme in which two nuclear steam supply systems (NSSSs) drive one turbine. Here, an NSSS is composed of an MHTGR, a once-through steam generator (OTSG) and connecting pipes. Due to the coupling effect induced by the two NSSSs driving one common turbine, and that between the MHTGR and the OTSG through the common helium flow, a coordinated control must be designed for the safe, stable and efficient operation of the HTR-PM plant. In this paper, the design of the feedback loops and control algorithms of the coordinated plant control law is first given. Then, the hardware-in-loop (HIL) system for verifying the feasibility and performance of this control strategy is introduced. Finally, some HIL simulation results are given, which preliminarily show that this coordinated control law can be implemented practically. (author)
Y. Cao
2017-09-01
Most atmospheric models, including the Weather Research and Forecasting (WRF) model, use a spherical geographic coordinate system to internally represent input data and perform computations. However, most geographic information system (GIS) input data used by the models are based on a spheroid datum because it better represents the actual geometry of the earth. WRF and other atmospheric models use these GIS input layers as if they were in a spherical coordinate system, without accounting for the difference in datum. When GIS layers are not properly reprojected, latitudinal errors of up to 21 km in the midlatitudes are introduced. Recent studies have suggested that for very high-resolution applications, the difference in datum in the GIS input data (e.g., terrain, land use, orography) should be taken into account. However, the magnitude of errors introduced by the difference in coordinate systems remains unclear. This research quantifies the effect of using a spherical vs. a spheroid datum for the input GIS layers used by WRF to study greenhouse gas transport and dispersion in northeast Pennsylvania.
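The roughly 21 km midlatitude error quoted above is the arc length corresponding to the difference between geodetic and geocentric latitude. A minimal sketch of that calculation, assuming WGS84 ellipsoid constants and a spherical mean Earth radius (the function name is ours, not from the paper):

```python
import math

# WGS84 constants (assumed for illustration)
A = 6378137.0            # semi-major axis, m
F = 1 / 298.257223563    # flattening
E2 = F * (2 - F)         # first eccentricity squared
R_MEAN = 6371008.8       # mean Earth radius, m

def datum_shift_m(lat_deg):
    """North-south shift (m) when a geodetic latitude on the WGS84
    spheroid is re-read as a latitude on a sphere."""
    phi = math.radians(lat_deg)
    phi_c = math.atan((1 - E2) * math.tan(phi))  # geocentric latitude
    return (phi - phi_c) * R_MEAN                # arc-length difference

# the mismatch vanishes at the equator and peaks near 45 degrees latitude
print(round(datum_shift_m(45.0) / 1000, 1))      # roughly 21 km
```

The peak value near 45° is consistent with the 21 km figure cited in the abstract.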
Chen, Shuming; Wang, Dengfeng; Liu, Bo
This paper investigates optimization design of the thickness of the sound package performed on a passenger automobile. The major characteristics indexes for performance selected to evaluate the processes are the SPL of the exterior noise and the weight of the sound package, and the corresponding parameters of the sound package are the thickness of the glass wool with aluminum foil for the first layer, the thickness of the glass fiber for the second layer, and the thickness of the PE foam for the third layer. In this paper, the process is fundamentally with multiple performances, thus, the grey relational analysis that utilizes grey relational grade as performance index is especially employed to determine the optimal combination of the thickness of the different layers for the designed sound package. Additionally, in order to evaluate the weighting values corresponding to various performance characteristics, the principal component analysis is used to show their relative importance properly and objectively. The results of the confirmation experiments uncover that grey relational analysis coupled with principal analysis methods can successfully be applied to find the optimal combination of the thickness for each layer of the sound package material. Therefore, the presented method can be an effective tool to improve the vehicle exterior noise and lower the weight of the sound package. In addition, it will also be helpful for other applications in the automotive industry, such as the First Automobile Works in China, Changan Automobile in China, etc.
S. Prabhu
2014-06-01
Carbon nanotube (CNT) mixed grinding wheels have been used in the electrolytic in-process dressing (ELID) grinding process to analyze the surface characteristics of AISI D2 tool steel. The CNT grinding wheel has excellent thermal conductivity and good mechanical properties, which improve the surface finish of the workpiece. Multiobjective optimization by grey relational analysis coupled with principal component analysis has been used to optimize the process parameters of the ELID grinding process. Based on the Taguchi design of experiments, an L9 orthogonal array was chosen for the experiments. The confirmation experiment verifies that the proposed grey-based Taguchi method can find the optimal process parameters for the multiple quality characteristics of surface roughness and metal removal rate. Analysis of variance (ANOVA) has been used to verify and validate the model. An empirical model for predicting the output parameters has been developed using regression analysis, and the results were compared with and without the CNT grinding wheel in the ELID grinding process.
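As a rough illustration of the grey relational step described above, the sketch below computes grey relational coefficients and an equally weighted grade for two invented responses (surface roughness, smaller-the-better; metal removal rate, larger-the-better). The paper itself derives the weights via principal component analysis, which is omitted here:

```python
import numpy as np

# Hypothetical L9-style responses (all values invented): surface roughness
# Ra (smaller-the-better) and metal removal rate MRR (larger-the-better).
Ra  = np.array([0.42, 0.35, 0.50, 0.28, 0.33, 0.45, 0.30, 0.38, 0.41])
MRR = np.array([12.0, 15.5, 10.2, 18.3, 16.1, 11.4, 17.0, 14.2, 13.1])

def grey_coeff(x, larger_better, zeta=0.5):
    """Grey relational coefficients against the ideal (normalised) sequence."""
    if larger_better:
        z = (x - x.min()) / (x.max() - x.min())
    else:
        z = (x.max() - x) / (x.max() - x.min())
    delta = 1.0 - z                   # deviation from the ideal value 1
    return (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Equal weights here for simplicity; the paper instead weights by PCA.
grade = 0.5 * grey_coeff(Ra, False) + 0.5 * grey_coeff(MRR, True)
best = int(np.argmax(grade))
print(best)  # run 3 (0-indexed) is best on both responses here
```

The run with the highest grey relational grade is taken as the optimal parameter combination; with these invented numbers one run happens to dominate both responses.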
Bridge Diagnosis by Using Nonlinear Independent Component Analysis and Displacement Analysis
Zheng, Juanqing; Yeh, Yichun; Ogai, Harutoshi
A daily diagnosis system for bridge monitoring and maintenance is developed based on wireless sensors, signal processing, structural analysis, and displacement analysis. The vibration acceleration data of a bridge are first collected through the wireless sensor network. Nonlinear independent component analysis (ICA) and spectral analysis are used to extract the vibration frequencies of the bridge. After that, the vibration displacement is calculated through a band-pass filter and Simpson's rule, and the vibration model is obtained to diagnose the bridge. Since linear ICA algorithms work efficiently only in linear mixing environments, a nonlinear ICA model, although more complicated, is more practical for bridge diagnosis systems. In this paper, we first use the post-nonlinear method to transform the signal data, then perform linear separation by FastICA, and calculate the vibration displacement of the bridge. The processed data can be used to understand phenomena like corrosion and cracking, and to evaluate the health condition of the bridge. We apply this system to Nakajima Bridge in Yahata, Kitakyushu, Japan.
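The pipeline sketched in this abstract (undo a post-nonlinear distortion, separate sources with FastICA, then read off modal frequencies from the spectra) can be mimicked on synthetic data. The 2 Hz and 5 Hz modes, the tanh sensor distortion, and the mixing matrix below are all invented for illustration:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two synthetic bridge vibration modes (2 Hz and 5 Hz, made up), mixed
# linearly and then passed through a known post-nonlinear distortion.
t = np.linspace(0, 10, 2000)
s = np.c_[np.sin(2 * np.pi * 2 * t), np.sin(2 * np.pi * 5 * t)]
x = s @ np.array([[1.0, 0.6], [0.4, 1.0]])   # linear sensor mixing
x_pnl = np.tanh(0.5 * x)                     # post-nonlinear distortion
x_lin = np.arctanh(x_pnl) / 0.5              # invert the nonlinearity first

# Linear separation of the linearised signals, as in the post-nonlinear method
ica = FastICA(n_components=2, whiten="unit-variance", random_state=0)
s_hat = ica.fit_transform(x_lin)

# Recover each component's dominant frequency from its amplitude spectrum
freqs = np.fft.rfftfreq(len(t), t[1] - t[0])
peaks = sorted(freqs[np.argmax(np.abs(np.fft.rfft(s_hat, axis=0)), axis=0)])
print([round(p) for p in peaks])
```

Displacement would then follow by band-pass filtering and numerical double integration (e.g. Simpson's rule) of the separated accelerations, which is omitted here for brevity.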
Abriola, D.; Vickridge, I.
2009-12-01
Highlights of the third and last Research Coordination Meeting are given with respect to the progress achieved in the Co-ordinated Research Project on Development of a Reference Database for Ion Beam Analysis. The meeting took place at the IAEA headquarters in Vienna from 27 March to 3 April 2009. Participants presented the results of their work and identified and assigned key tasks in pursuance of the final output of the CRP, in particular the update of the IBANDL library and the drafting of the final Technical Report of the CRP. In addition, a number of productive discussions took place concerning issues such as measurements, assessments, evaluations, benchmarks and recommendations. The main conclusions as well as lists of responsibilities and tasks towards the production of the final report are presented. (author)
Vickridge, Ian; Schwerer, Otto
2007-07-01
Highlights of the 2nd Research Coordination Meeting (RCM) are given with respect to the progress achieved in the first 1 1/2 years of the Co-ordinated Research Project (CRP) on Development of a Reference Database for Ion Beam Analysis. Participants presented the results of their work to date, and identified and assigned key tasks required to ensure that the final output of the CRP is achieved. In addition, a number of lively and productive discussions took place concerning technical issues such as accelerator energy calibration, error reporting, accuracy of the existing IBANDL and EXFOR datasets for IBA, and procedures for producing recommended cross-section data. The main conclusions as well as lists of actions and tasks are presented in this report. (author)
Cancer care coordinators in stage III colon cancer: a cost-utility analysis.
Blakely, Tony; Collinson, Lucie; Kvizhinadze, Giorgi; Nair, Nisha; Foster, Rachel; Dennett, Elizabeth; Sarfati, Diana
2015-08-05
There is momentum internationally to improve coordination of complex care pathways. Robust evaluations of such interventions are scarce. This paper evaluates the cost-utility of cancer care coordinators for stage III colon cancer patients, who generally require surgery followed by chemotherapy. We compared a hospital-based nurse cancer care coordinator (CCC) with 'business-as-usual' (no dedicated coordination service) in stage III colon cancer patients in New Zealand. A discrete event microsimulation model was constructed to estimate quality-adjusted life-years (QALYs) and costs from a health system perspective. We used New Zealand data on colon cancer incidence, survival, and mortality as baseline input parameters for the model. We specified intervention input parameters using available literature and expert estimates. For example, that a CCC would improve the coverage of chemotherapy by 33% (ranging from 9 to 65%), reduce the time to surgery by 20% (3 to 48%), reduce the time to chemotherapy by 20% (3 to 48%), and reduce patient anxiety (reduction in disability weight of 33%, ranging from 0 to 55%). Much of the direct cost of a nurse CCC was balanced by savings in business-as-usual care coordination. Much of the health gain was through increased coverage of chemotherapy with a CCC (especially among older patients), and reduced time to chemotherapy. Compared to 'business-as-usual', the cost per QALY of the CCC programme was $NZ 18,900 (≈ $US 15,600; 95% UI: $NZ 13,400 to 24,600). By age, the CCC intervention was more cost-effective for older colon cancer patients, and cost-effectiveness was roughly comparable between ethnic groups. Such a nurse-led CCC intervention in New Zealand has acceptable cost-effectiveness for stage III colon cancer, meaning it probably merits funding. Each CCC programme will differ in its likely health gains and costs, making generalisation from this evaluation to other CCC interventions difficult. However, this evaluation suggests
Mahmoudishadi, S.; Malian, A.; Hosseinali, F.
2017-09-01
The image processing techniques in transform domain are employed as analysis tools for enhancing the detection of mineral deposits. The process of decomposing the image into important components increases the probability of mineral extraction. In this study, the performance of Principal Component Analysis (PCA) and Independent Component Analysis (ICA) has been evaluated for the visible and near-infrared (VNIR) and shortwave infrared (SWIR) subsystems of ASTER data. Ardestan is located in a part of the Central Iranian Volcanic Belt that hosts many well-known porphyry copper deposits. This research investigated the propylitic and argillic alteration zones and the outer mineralogy zone in part of the Ardestan region. The two approaches were applied to discriminate alteration zones from igneous bedrock using the major absorptions of indicator minerals from the alteration and mineralogy zones in the spectral range of the ASTER bands. Specialized PC components (PC2, PC3 and PC6) were used to identify pyrite and the argillic and propylitic zones, distinguished from igneous bedrock in an RGB color composite image. According to the eigenvalues, components 2, 3 and 6 account for 4.26%, 0.9% and 0.09% of the total variance of the data for the Ardestan scene, respectively. For the purpose of discriminating the alteration and mineralogy zones of the porphyry copper deposit from bedrock, those portions of the data are more accurately separated in the ICA independent components IC2, IC3 and IC6 than in the noisy bands of PCA. The results of the ICA method conform to the locations of the lithological units of the Ardestan region as well.
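The PCA-versus-ICA comparison in this abstract can be mimicked on synthetic band data. The latent sources, mixing matrix and noise level below are invented stand-ins; real ASTER imagery is not used:

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

# Toy stand-in for multispectral bands: 9 bands over 1000 pixels built
# from two latent "alteration" signatures plus small sensor noise.
rng = np.random.default_rng(1)
latent = rng.laplace(size=(1000, 2))        # non-Gaussian sources suit ICA
mixing = rng.normal(size=(2, 9))            # spectral signatures (invented)
bands = latent @ mixing + 0.05 * rng.normal(size=(1000, 9))

# PCA ranks components by explained variance: the two structured
# components dominate and the remaining components are mostly noise.
pca = PCA().fit(bands)
explained = pca.explained_variance_ratio_
print(explained[:2].sum() > 0.95)

# ICA instead seeks statistically independent components, which is what
# the abstract credits with the cleaner separation of alteration zones.
ica = FastICA(n_components=2, whiten="unit-variance", random_state=0)
components = ica.fit_transform(bands)       # candidate "alteration maps"
print(components.shape)
```

With real imagery, the selected PC or IC images would be composed into RGB for visual interpretation, as described above.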
Selective principal component regression analysis (SPCR) uses a subset of the original image bands for principal component transformation and regression. For optimal band selection before the transformation, this paper used genetic algorithms (GA). In this case, the GA process used the regression co...
Polder, G.; Heijden, van der G.W.A.M.
2003-01-01
Independent Component Analysis (ICA) is one of the most widely used methods for blind source separation. In this paper we use this technique to estimate the important compounds which play a role in the ripening of tomatoes. Spectral images of tomatoes were analyzed. Two main independent components
A Note on McDonald's Generalization of Principal Components Analysis
Shine, Lester C., II
1972-01-01
It is shown that McDonald's generalization of Classical Principal Components Analysis to groups of variables maximally channels the total variance of the original variables through the groups of variables acting as groups. An equation is obtained for determining the vectors of correlations of the L2 components with the original variables.…
Mellard, Daryl F.; Anthony, Jason L.; Woods, Kari L.
2012-01-01
This study extends the literature on the component skills involved in oral reading fluency. Dominance analysis was applied to assess the relative importance of seven reading-related component skills in the prediction of the oral reading fluency of 272 adult literacy learners. The best predictors of oral reading fluency when text difficulty was…
Zlatkina, O. Yu
2018-04-01
There is a relationship between the service properties of component parts and their geometry; therefore, to predict and control the operational characteristics of parts and machines, it is necessary to measure their geometrical specifications. In modern production, the coordinate measuring machine is the advanced measuring instrument for the geometrical specifications of products. The analysis of publications has shown that, for coordinate measurements, the problems of choosing the locating chart of parts and of coordination have not been sufficiently studied. A special role in the coordination of the part is played by the informational content of the coordinate axes. Informational content is the sum of the degrees of freedom constrained by an elementary item of a part. The coordinate planes of a rectangular coordinate system have different informational content (three, two, and one); the coordinate axes have informational content of four, two, and zero. The higher the informational content of a coordinate plane or axis, the higher its priority for reading angular and linear coordinates. Producing a geometrical model of the object of coordinate measurement that takes the informational content of the coordinate planes and axes into account makes clear the interrelationship between the coordinates of deviations in location, the sizes, and the deviations of surface shape. The geometrical model helps to select the optimal locating chart of parts for bringing the machine coordinate system to the part coordinate system. The article presents an algorithm for producing the model of geometrical specifications, using the piston rod of a compressor as an example.
Anna Maria Stellacci
2012-07-01
Hyperspectral (HS) data represent an extremely powerful means for rapidly detecting crop stress and thus aiding the rational management of natural resources in agriculture. However, the large volume of data poses a challenge for data processing and for extracting crucial information. Multivariate statistical techniques can play a key role in the analysis of HS data, as they may allow one both to eliminate redundant information and to identify synthetic indices that maximize differences among levels of stress. In this paper we propose an integrated approach, based on the combined use of Principal Component Analysis (PCA) and Canonical Discriminant Analysis (CDA), to investigate HS plant response and discriminate plant status. The approach was preliminarily evaluated on a data set collected on durum wheat plants grown under different nitrogen (N) stress levels. Hyperspectral measurements were performed at anthesis with a high-resolution field spectroradiometer, ASD FieldSpec HandHeld, covering the 325-1075 nm region. Reflectance data were first restricted to the interval 510-1000 nm and then divided into five bands of the electromagnetic spectrum [green: 510-580 nm; yellow: 581-630 nm; red: 631-690 nm; red-edge: 705-770 nm; near-infrared (NIR): 771-1000 nm]. PCA was applied to each spectral interval. CDA was performed on the extracted components to identify the factors maximizing the differences among plants fertilised with increasing N rates. Within the intervals of green, yellow and red, only the first principal component (PC) had an eigenvalue greater than 1 and explained more than 95% of total variance; within the ranges of red-edge and NIR, the first two PCs had an eigenvalue higher than 1. Two canonical variables explained cumulatively more than 81% of total variance, and the first was able to discriminate wheat plants fertilised differently, as confirmed also by the significant correlation with aboveground biomass and grain yield parameters. The combined
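A minimal sketch of the two-stage PCA-then-CDA pipeline, using scikit-learn's LinearDiscriminantAnalysis as a stand-in for CDA and synthetic spectra in place of the field measurements (the treatment shifts and noise level are invented):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Mock reflectance spectra for three N-fertilisation levels: a smooth
# baseline shifted per treatment plus noise (values invented).
rng = np.random.default_rng(2)
wl = np.linspace(510, 1000, 200)           # wavelengths, nm
spectra, groups = [], []
for level, shift in enumerate([0.00, 0.02, 0.04]):
    for _ in range(20):
        spectra.append(0.3 + shift + 0.01 * np.sin(wl / 50)
                       + 0.005 * rng.normal(size=wl.size))
        groups.append(level)
X, y = np.array(spectra), np.array(groups)

# Step 1: PCA on the spectral data (one interval here; the paper runs
# a separate PCA per spectral band).
scores = PCA(n_components=3).fit_transform(X)

# Step 2: canonical discriminant analysis on the extracted components.
lda = LinearDiscriminantAnalysis(n_components=2).fit(scores, y)
canon = lda.transform(scores)              # two canonical variables
print(canon.shape, round(lda.score(scores, y), 2))
```

With three classes, at most two canonical variables exist, matching the two canonical variables reported in the abstract.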
Rajagopal, K. R.
1992-01-01
The technical effort and computer code development are summarized. Several formulations for Probabilistic Finite Element Analysis (PFEA) are described, with emphasis on the selected formulation. The strategies being implemented in the first-version computer code to perform linear, elastic PFEA are described. The results of a series of select Space Shuttle Main Engine (SSME) component surveys are presented. These results identify the critical components and provide the information necessary for probabilistic structural analysis. Volume 2 is a summary of critical SSME components.
Comparative Analysis of Indicators of Coordination Abilities Development in 5th-7th Graders
В. В. Приходько
2017-09-01
The objective of the research is to determine the regularities of coordination abilities development in 5th-7th-grade boys. Materials and methods. The participants in the research were boys of the 5th grade (n = 21), 6th grade (n = 20), and 7th grade (n = 19). To achieve the tasks outlined, the research used the following methods: analysis of scientific and methodological literature; pedagogical testing; pedagogical observation; methods of mathematical statistics. Research results. The 5th-6th-grade boys show a statistically significant difference between their results in the following tests: “Standing long jump (cm)” (p < 0.002); “Six standing accuracy ball handlings to a partner from a 7 m distance using one of the techniques learned” (p < 0.049); “Rhythmic hand tapping” (p < 0.044); “Rhythmic movements of upper and lower limbs” (p < 0.042); “Height (cm)”; “Body weight (kg)”. The 6th-7th-grade boys — “30 m running (s)”; “Standing long jump (cm)”; “Sit-ups in 30 seconds”; “Evaluation of static equilibrium by E. Ya. Bondarevsky’s method”; “Evaluation of dynamic equilibrium by the BESS method”; “Rhythmic hand tapping”; “Rhythmic movements of upper and lower limbs”; “Shuttle run (4 × 9 m)”; “Tossing rings over a peg”. The 5th-7th-grade boys — “Standing long jump (cm)”; “Pull-ups (number of times)”; “Evaluation of the ability to differentiate movement speed (accuracy in reproduction of running speed, 90% of maximum)”; “Evaluation of static equilibrium by E. Ya. Bondarevsky’s method”; “Evaluation of dynamic equilibrium by the BESS method”; “Rhythmic hand tapping”; “Shuttle run (4 × 9 m)”; “Height (cm)”; “Body weight (kg)”. Conclusions. The research has observed a positive dynamics of the results in the following group of tests: “Standing long jump” by 8.4%, “Rhythmic hand tapping and rhythmic movements of upper and lower
Basta, C.; Olive, W.J.; Antunes, J.S.
1990-01-01
An analysis of the cost of each component of small hydroelectric power plants, taking into account the real costs of these projects, is presented. A global equation is also given which allows a preliminary cost estimate for each construction. (author)
Malmquist, Linus M.V.; Olsen, Rasmus R.; Hansen, Asger B.
2007-01-01
weathering state and to distinguish between various weathering processes is investigated and discussed. The method is based on comprehensive and objective chromatographic data processing followed by principal component analysis (PCA) of concatenated sections of gas chromatography–mass spectrometry...
Northeast Puerto Rico and Culebra Island Principle Component Analysis - NOAA TIFF Image
National Oceanic and Atmospheric Administration, Department of Commerce — This GeoTiff is a representation of seafloor topography in Northeast Puerto Rico derived from a bathymetry model with a principle component analysis (PCA). The area...
Jesse, Stephen; Kalinin, Sergei V
2009-01-01
An approach for the analysis of multi-dimensional, spectroscopic-imaging data based on principal component analysis (PCA) is explored. PCA selects and ranks relevant response components based on variance within the data. It is shown that for examples with small relative variations between spectra, the first few PCA components closely coincide with results obtained using model fitting, and this is achieved at rates approximately four orders of magnitude faster. For cases with strong response variations, PCA allows an effective approach to rapidly process, de-noise, and compress data. The prospects for PCA combined with correlation function analysis of component maps as a universal tool for data analysis and representation in microscopy are discussed.
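The de-noising and compression use of PCA described above can be sketched with plain NumPy. The rank-1 synthetic spectral-image data and the 0.2 noise level below are invented for illustration:

```python
import numpy as np

# Synthetic spectral-imaging datacube flattened to (pixels, energy bins):
# a single spatially varying spectral response plus noise.
rng = np.random.default_rng(3)
n_pix, n_e = 500, 64
clean = np.outer(rng.normal(size=n_pix), np.hanning(n_e))  # rank-1 signal
noisy = clean + 0.2 * rng.normal(size=(n_pix, n_e))

# PCA via SVD of the mean-centred data; keep only the top-variance
# component, which here carries essentially all of the structured signal.
mu = noisy.mean(axis=0)
U, S, Vt = np.linalg.svd(noisy - mu, full_matrices=False)
k = 1
denoised = (U[:, :k] * S[:k]) @ Vt[:k] + mu

err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
print(err_denoised < err_noisy)   # truncation reduces reconstruction error
```

Keeping k components also compresses the data from n_pix × n_e values to roughly k × (n_pix + n_e), which is the compression benefit the abstract points to.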
Nigran, K.S.; Barber, D.C.
1985-01-01
A method is proposed for the automatic analysis of dynamic radionuclide studies using the mathematical technique of principal-components factor analysis. This method is considered as a possible alternative to the widely used conventional manual regions-of-interest method. The method emphasises the importance of introducing a priori information into the analysis about the physiology of at least one of the functional structures in a study. Information is added by using suitable mathematical models to describe the underlying physiological processes. A single physiological factor is extracted representing the particular dynamic structure of interest. Two spaces, 'study space' S and 'theory space' T, are defined in forming the concept of the intersection of spaces. A one-dimensional intersection space is computed. An example from a dynamic 99mTc-DTPA kidney study is used to demonstrate the principle inherent in the proposed method. The method requires no correction for blood background activity, which is necessary when processing by the manual method. Careful isolation of the kidney by means of a region of interest is not required. The method is therefore less prone to operator influence and can be automated. (author)
Ida Vajčnerová
2016-01-01
The objective of the paper is to explore possibilities of evaluating the quality of a tourist destination by means of principal components analysis (PCA) and cluster analysis. Both types of analysis are compared on the basis of the results they provide. The aim is to identify the advantages and limits of both methods and to provide methodological suggestions for their further use in tourism research. The analyses are based on primary data from a customer-satisfaction survey covering the key quality factors of a destination. The output of the two statistical methods is the creation of groups or clusters of quality factors that are similar in terms of respondents' evaluations, in order to facilitate the evaluation of the quality of tourist destinations. The results show that both tested methods can be used. The paper was elaborated in the frame of a wider research project aimed at developing a methodology for the quality evaluation of tourist destinations, especially in the context of customer satisfaction and loyalty.
Elsawy, Amr S; Eldawlatly, Seif; Taher, Mohamed; Aly, Gamal M
2014-01-01
The current trend to use Brain-Computer Interfaces (BCIs) with mobile devices mandates the development of efficient EEG data processing methods. In this paper, we demonstrate the performance of a Principal Component Analysis (PCA) ensemble classifier for P300-based spellers. We recorded EEG data from multiple subjects using the Emotiv neuroheadset in the context of a classical oddball P300 speller paradigm. We compare the performance of the proposed ensemble classifier to the performance of traditional feature extraction and classifier methods. Our results demonstrate the capability of the PCA ensemble classifier to classify P300 data recorded using the Emotiv neuroheadset with an average accuracy of 86.29% on cross-validation data. In addition, offline testing of the recorded data reveals an average classification accuracy of 73.3% that is significantly higher than that achieved using traditional methods. Finally, we demonstrate the effect of the parameters of the P300 speller paradigm on the performance of the method.
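A hedged sketch of a PCA ensemble classifier in the spirit of this abstract: bootstrap members, PCA features, and LDA base classifiers with majority voting. The synthetic "P300" bump and all parameters below are our assumptions, not the paper's recorded Emotiv data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic EEG-like epochs: half contain a P300-like bump (invented shape).
rng = np.random.default_rng(4)
n, d = 400, 128
y = rng.integers(0, 2, n)
p300 = np.exp(-0.5 * ((np.arange(d) - 40) / 8.0) ** 2)
X = rng.normal(size=(n, d)) + np.outer(y, p300) * 2

# Ensemble: each member fits PCA features and an LDA classifier on a
# bootstrap resample of the training epochs.
members = []
for _ in range(5):
    idx = rng.integers(0, n, n)
    pca = PCA(n_components=10).fit(X[idx])
    clf = LinearDiscriminantAnalysis().fit(pca.transform(X[idx]), y[idx])
    members.append((pca, clf))

# Majority vote across ensemble members
votes = np.mean([clf.predict(pca.transform(X)) for pca, clf in members], axis=0)
acc = np.mean((votes > 0.5) == y)
print(acc > 0.8)
```

In a real P300 speller, the binary decisions would be aggregated per row/column flash to spell characters; that aggregation step is omitted here.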
Tripathy, Manoj
2012-01-01
This paper describes a new approach to power transformer differential protection based on the wave-shape recognition technique. An algorithm based on neural network principal component analysis (NNPCA) with back-propagation learning is proposed for digital differential protection of power transformers. The principal component analysis is used to preprocess the data from the power system in order to eliminate redundant information and enhance the hidden pattern of the differential current to disc...
Sensitivity analysis on the component cooling system of the Angra 1 NPP
Castro Silva, Luiz Euripedes Massiere de
1995-01-01
The component cooling system has been studied within the scope of the Probabilistic Safety Analysis of the Angra 1 NPP in order to ensure that the proposed model matches the functioning system and its availability aspects as closely as possible. To this end, a sensitivity analysis was performed on the equivalence between the operating modes of the component cooling system, and its results show the fitness of the model. (author). 4 refs, 3 figs, 3 tabs
Geroukis, Asterios; Brorson, Erik
2014-01-01
In this study, we compare the two statistical techniques logistic regression and discriminant analysis to see how well they classify companies into clusters, made from the solvency ratio, using principal components as independent variables. The principal components are computed from different financial ratios. We use cluster analysis to find groups with low, medium and high solvency ratios among 1200 different companies listed on the NASDAQ stock market and use this as an a priori definition of ...
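The comparison described above can be sketched as follows, with synthetic stand-ins for the financial ratios and solvency classes (scikit-learn implementations assumed; none of the study's actual data or ratios are used):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Three solvency classes with shifted "ratio" profiles (values invented).
rng = np.random.default_rng(5)
ratios = np.vstack([rng.normal(loc=m, size=(100, 6))
                    for m in (-1.5, 0.0, 1.5)])
solvency = np.repeat([0, 1, 2], 100)      # low / medium / high

# Principal components of the ratios serve as the independent variables.
pcs = PCA(n_components=3).fit_transform(ratios)

# Fit both classifiers on the same PC inputs and compare accuracy.
logit = LogisticRegression(max_iter=1000).fit(pcs, solvency)
lda = LinearDiscriminantAnalysis().fit(pcs, solvency)
print(round(logit.score(pcs, solvency), 2),
      round(lda.score(pcs, solvency), 2))
```

On well-separated Gaussian classes like these, the two methods perform almost identically; differences in the study would come from the real data's departures from normality.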
Root cause analysis in support of reliability enhancement of engineering components
Kumar, Sachin; Mishra, Vivek; Joshi, N.S.; Varde, P.V.
2014-01-01
Reliability-based methods have been widely used for the safety assessment of plant systems, structures and components. These methods provide a quantitative estimate of system reliability but do not give insight into the failure mechanism. Understanding the failure mechanism is a must to avoid the recurrence of events and to enhance system reliability. Root cause analysis provides a tool for gaining detailed insight into the causes of component failure, with particular attention to identifying faults in component design, operation, surveillance, maintenance, training, procedures and policies that must be improved to prevent the repetition of incidents. Root cause analysis also helps in developing Probabilistic Safety Analysis models. A probabilistic precursor study complements the root cause analysis approach in event analysis by focusing on how an event might have developed adversely. This paper discusses root cause analysis methodologies and their application in specific case studies for the enhancement of system reliability. (author)
Time-domain ultra-wideband radar, sensor and components theory, analysis and design
Nguyen, Cam
2014-01-01
This book presents the theory, analysis, and design of ultra-wideband (UWB) radar and sensor systems (in short, UWB systems) and their components. UWB systems find numerous applications in the military, security, civilian, commercial and medical fields. This book addresses five main topics of UWB systems: System Analysis, Transmitter Design, Receiver Design, Antenna Design, and System Integration and Test. The development of a practical UWB system and its components using microwave integrated circuits, as well as various measurements, is included in detail to demonstrate the theory, analysis and design techniques. Essentially, this book will enable readers to design their own UWB systems and components. In the System Analysis chapter, the UWB principle of operation as well as the power budget analysis and range resolution analysis are presented. In the UWB Transmitter Design chapter, the design, fabrication and measurement of impulse and monocycle pulse generators are covered. The UWB Receiver Design cha...
Guruswamy, G. P.; Goorjian, P. M.
1984-01-01
An efficient coordinate transformation technique is presented for constructing grids for unsteady, transonic aerodynamic computations for delta-type wings. The original shearing transformation yielded computations that were numerically unstable, and this paper discusses the sources of those instabilities. The new shearing transformation yields computations that are stable, fast, and accurate. Comparisons of the two methods for the flow over the F5 wing demonstrate the new method's stability, and comparisons with experimental data demonstrate its accuracy. The computations were made using a time-accurate, finite-difference, alternating-direction-implicit (ADI) algorithm for the transonic small-disturbance potential equation.
Analysis of the morphology of oral structures from 3-D co-ordinate data.
Jovanovski, V; Lynch, E
2000-01-01
A non-intrusive method is described which can be used to determine the forms of oral structures. It is based on the digitising of standard replicas with a co-ordinate-measuring machine. Supporting software permits a mathematical model of the surface to be reconstructed and visualised from captured three-dimensional co-ordinates. A series of surface data sets can be superposed into a common reference frame without the use of extrinsic markers, allowing changes in the shapes of oral structures to be quantified accurately over an extended period of time. The system has found numerous applications.
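Superposing a series of surface data sets into a common reference frame without extrinsic markers is typically posed as a least-squares rigid registration. The sketch below implements the standard Kabsch algorithm for point sets with known correspondences; it is our illustration of the general technique, not the authors' software.

```python
import numpy as np

def rigid_align(P, Q):
    """Find rotation R and translation t minimising ||R @ p_i + t - q_i||
    over corresponding 3-D point sets P, Q (rows are points): Kabsch algorithm."""
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)               # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Qc - R @ Pc
    return R, t
```

In practice, correspondences between two digitised replicas are themselves unknown and must be estimated iteratively (e.g. ICP-style), with this closed-form alignment as the inner step.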
Development of computational methods of design by analysis for pressure vessel components
Bao Shiyi; Zhou Yu; He Shuyan; Wu Honglin
2005-01-01
Stress classification is not only one of the key steps when a pressure vessel component is designed by analysis, but also a persistent difficulty for engineers and designers. At present, several computational methods of design by analysis have been developed and applied for calculating and categorizing the stress field of pressure vessel components, such as Stress Equivalent Linearization, the Two-Step Approach, the Primary Structure method, the Elastic Compensation method, and the GLOSS R-Node method. Moreover, the ASME code also gives an inelastic method of design by analysis for limiting gross plastic deformation only. When pressure vessel components are designed by analysis, the results obtained with the different calculation and analysis methods mentioned above can differ widely; this is the main obstacle to wide application of the design-by-analysis approach. Recently, a new approach, presented in the proposal for a European Standard, CEN's unfired pressure vessel standard EN 13445-3, tries to avoid the problems of stress classification by directly analyzing the various failure mechanisms of the pressure vessel structure based on elastic-plastic theory. In this paper, some of the stress classification methods mentioned above are described briefly, and the computational methods cited in the European pressure vessel standard, such as the Deviatoric Map and nonlinear analysis methods (plastic analysis and limit analysis), are outlined. Furthermore, the characteristics of the computational methods of design by analysis are summarized to aid selection of the proper computational method when designing a pressure vessel component by analysis. (authors)
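The Stress Equivalent Linearization named above decomposes a through-thickness stress profile into membrane and bending parts via the usual integral definitions. The sketch below is a simplified one-dimensional illustration for a single stress component, not the code of any of the cited methods.

```python
import numpy as np

def _trapz(f, x):
    """Trapezoidal integral of sampled values f over abscissae x."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

def linearize_stress(x, sigma, t):
    """Membrane and surface (x = 0) bending components of a through-thickness
    stress profile sigma(x), with x running from 0 to the thickness t."""
    membrane = _trapz(sigma, x) / t
    bending = 6.0 / t**2 * _trapz(sigma * (t / 2.0 - x), x)
    return membrane, bending
```

For a linear profile the decomposition is exact: the membrane part is the mid-thickness mean and the bending part is the surface deviation from it.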
Glogovac Svetlana
2012-01-01
This study investigates the variability of tomato genotypes based on morphological and biochemical fruit traits. The experimental material is part of the tomato genetic collection of the Institute of Field and Vegetable Crops in Novi Sad, Serbia. Genotypes were analyzed for fruit mass, locule number, index of fruit shape, fruit colour, dry matter content, total sugars, total acidity, lycopene and vitamin C. Minimum, maximum and average values and the main indicators of variability (CV and σ) were calculated. Principal component analysis was performed to determine the structure of the sources of variability. Four principal components, which together account for 93.75% of the total variability, were selected for analysis. The first principal component is defined by vitamin C, locule number and index of fruit shape. The second component is determined by dry matter content and total acidity, the third by lycopene, fruit mass and fruit colour. Total sugars had the greatest part in the fourth component.
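Principal component analysis of a genotype-by-trait matrix of this kind reduces to an SVD of the centred data; the retained components are those accounting for most of the total variance. A minimal generic sketch (illustrative code, not the study's software):

```python
import numpy as np

def pca(X, n_components):
    """PCA via SVD of the centred data matrix X (rows = genotypes,
    columns = traits).  Returns component scores, loadings, and the
    fraction of total variance explained by each retained component."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = S**2 / np.sum(S**2)
    scores = Xc @ Vt[:n_components].T
    return scores, Vt[:n_components], explained[:n_components]
```

Summing the `explained` fractions of the retained components reproduces figures like the 93.75% reported above.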
The role of damage analysis in the assessment of service-exposed components
Bendick, W.; Muesch, H.; Weber, H.
1987-01-01
Components in power stations are subjected to service conditions under which creep processes take place, limiting the component's lifetime through material exhaustion. To ensure safe and economic plant operation it is necessary to obtain information about the degree of exhaustion of single components as well as of the whole plant. A comprehensive lifetime assessment requires complete knowledge of the service parameters, the component's deformation behavior, and the change in material properties caused by long-term exposure to high service temperatures. A basis for evaluation is given by: 1) determination of material exhaustion by calculation, 2) investigation of the material properties, and 3) damage analysis. The purpose of this report is to show the role which damage analysis can play in the assessment of service-exposed components. As an example, the test results of a damaged pipe bend are discussed. (orig./MM)
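The "determination of material exhaustion by calculation" in item 1) is commonly estimated with a life-fraction (Robinson) rule. The sketch below assumes the creep-rupture life at each operating condition is known; the numbers are hypothetical, for illustration only.

```python
def exhaustion_grade(intervals):
    """Robinson life-fraction rule: sum over operating conditions of
    (hours spent at the condition) / (creep-rupture life at the condition).
    A value approaching 1 indicates exhausted creep life."""
    return sum(t / t_rupture for t, t_rupture in intervals)

# e.g. 10,000 h at a condition with 100,000 h rupture life,
# then 5,000 h at a harsher condition with 20,000 h rupture life:
grade = exhaustion_grade([(10_000, 100_000), (5_000, 20_000)])  # 0.35
```

Damage analysis then serves as the independent check on such calculated exhaustion grades, since the rule ignores interaction effects between conditions.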
Principal Component Analysis of Working Memory Variables during Child and Adolescent Development.
Barriga-Paulino, Catarina I; Rodríguez-Martínez, Elena I; Rojas-Benjumea, María Ángeles; Gómez, Carlos M
2016-10-03
Correlation and Principal Component Analysis (PCA) of behavioral measures from two experimental tasks (Delayed Match-to-Sample and Oddball), and standard scores from a neuropsychological test battery (Working Memory Test Battery for Children), was performed on data from participants between 6 and 18 years old. In the PCA (components with eigenvalues > 1 were retained), the scores of the first extracted component were significantly correlated (p < .05) with most behavioral measures, suggesting some commonality in the processes of age-related change in the measured variables. The results suggest that this first component is related not only to age but also to individual differences during the cognitive maturation process across the childhood and adolescence stages. The fourth component appears to represent the speed-accuracy trade-off phenomenon, as it presents loadings with opposite signs for reaction times and errors.
Weigel, C; Calas, G; Cormier, L; Galoisy, L; Henderson, G S
2008-01-01
High-resolution Al L2,3-edge x-ray absorption near edge structure (XANES) spectra have been measured in selected materials containing aluminium in 4-, 5- and 6-coordination. A shift of 1.5 eV is observed between the onset of [4]Al and [6]Al L2,3-edge XANES, in agreement with the magnitude of the shift observed at the Al K-edge. The differences in the position and shape of the low-energy components of Al L2,3-edge XANES spectra provide a unique fingerprint of the geometry of the Al site and of the nature of the Al-O chemical bond. The high resolution allows the calculation of electronic parameters such as the spin-orbit coupling and exchange energy using intermediate coupling theory. The electron-hole exchange energy decreases in tetrahedral as compared to octahedral symmetry, in relation with the increased screening of the core hole in the former. Al L2,3-edge XANES spectra confirm a major structural difference between glassy and crystalline NaAlSi2O6, with Al in 4- and 6-coordination, respectively, Al coordination remaining unchanged in NaAl1-xFexSi2O6 glasses as Fe is substituted for Al.
Reliability analysis and component functional allocations for the ESF multi-loop controller design
Hur, Seop; Kim, D.H.; Choi, J.K.; Park, J.C.; Seong, S.H.; Lee, D.Y.
2006-01-01
This paper deals with reliability analysis and component functional allocation to ensure enhanced system reliability and availability. In the Engineered Safety Features, functionally dependent components are controlled by a multi-loop controller. The system reliability of the Engineered Safety Features-Component Control System, and especially of the multi-loop controller, which differs from conventional controllers, is an important factor for the Probabilistic Safety Assessment in the nuclear field. To evaluate the multi-loop controller's failure rate in the k-out-of-m redundant system, a binomial process is used. In addition, the component functional allocation is performed to tolerate a single multi-loop controller failure without loss of vital operation, within the constraints of the piping and component configuration, and to ensure that mechanically redundant components remain functional. (author)
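The binomial evaluation of a k-out-of-m redundant controller group can be sketched in a few lines. This is a generic illustration of the model named in the abstract; the per-controller failure probability below is invented, not the paper's data.

```python
from math import comb

def k_out_of_m_availability(m, k, p_fail):
    """Probability that at least k of m identical, independent controllers
    are operational (binomial model): sum of P(exactly j working), j >= k."""
    p_ok = 1.0 - p_fail
    return sum(comb(m, j) * p_ok**j * p_fail**(m - j) for j in range(k, m + 1))

# 2-out-of-3 redundancy with a 10% per-controller failure probability:
print(k_out_of_m_availability(3, 2, 0.1))  # 0.972
```

The same sum, evaluated over a mission time with time-dependent p_fail, yields the failure-rate figures fed into a PSA model.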
Reliability Analysis of 6-Component Star Markov Repairable System with Spatial Dependence
Liying Wang
2017-01-01
Star repairable systems with spatial dependence consist of a center component and several peripheral components. The peripheral components are arranged around the center component, and the performance of each component depends on its spatial "neighbors." A vector Markov process is used to describe the performance of the system. The state space and transition rate matrix corresponding to the 6-component star Markov repairable system with spatial dependence are presented via a probability analysis method. Several reliability indices, such as the availability and the probabilities of visiting the safety, degradation, alert, and failed state sets, are obtained by the Laplace transform method, and a numerical example is provided to illustrate the results.
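The Laplace-transform treatment above targets transient state probabilities; the long-run availability follows directly from the same transition rate matrix. The sketch below solves the stationary equations for a generic continuous-time Markov model (a 2-state single repairable component here, not the 6-component star system itself).

```python
import numpy as np

def steady_state(Q):
    """Stationary distribution pi of a CTMC with transition rate matrix Q
    (rows sum to zero): solve pi @ Q = 0 subject to sum(pi) = 1."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])  # append the normalisation constraint
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# One repairable component: failure rate 1/h (state 0 -> 1), repair rate 4/h.
Q = np.array([[-1.0, 1.0], [4.0, -4.0]])
print(steady_state(Q))  # long-run availability mu/(lambda+mu) = 0.8 in state 0
```

For the star system, Q is simply larger (one row per vector state) and the same solve yields the limiting availability.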
Morillo, Juan P; Reigal, Rafael E; Hernández-Mendo, Antonio; Montaña, Alejandro; Morales-Sánchez, Verónica
2017-01-01
Referees are essential for sports such as handball. However, there are few tools available to analyze the activity of handball referees. The aim of this study was to design an instrument for observing the behavior of referees in handball competitions and to analyze the resulting data by polar coordinate analysis. The instrument contained 6 criteria and 18 categories and can be used to monitor and describe the actions of handball referees according to their role/position on the playing court. For the data quality control analysis, we calculated Pearson's (0.99), Spearman's (0.99), and Kendall's tau (1.00) correlation coefficients and Cohen's kappa (between 0.72 and 0.75) and Phi (between 0.83 and 0.87) coefficients. In the generalizability analysis, the absolute and relative generalizability coefficients were both 0.99. Polar coordinate analysis of referee decisions showed that correct calls were more common for central court and 7-meter throw calls. Likewise, calls were more likely to be incorrect (in terms of both errors of omission and commission) when taken from the goal-line position.
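Cohen's kappa, one of the agreement coefficients reported for the data quality control, can be computed directly from two observers' code sequences. A minimal sketch with toy data (the ratings below are invented, not the study's):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' categorical codes of the same events:
    chance-corrected agreement (p_obs - p_exp) / (1 - p_exp)."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    cats = set(a) | set(b)
    p_exp = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    return (p_obs - p_exp) / (1.0 - p_exp)
```

Unlike raw percent agreement, kappa discounts the agreement two raters would reach by guessing from their marginal category frequencies.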
Failure trend analysis for safety related components of Korean standard NPPs
Choi, Sun Yeong; Han, Sang Hoon
2005-01-01
Component reliability data for Korean NPPs that reflect plant-specific characteristics are a necessity for the PSA of Korean nuclear power plants. We have performed a project to develop a component reliability database (KIND, Korea Integrated Nuclear Reliability Database) and software for database management and component reliability analysis. Based on this system, we have collected component operation data and failure/repair data from the start of plant operation to 2002 for the YGN 3, 4 and UCN 3, 4 plants. Recently, we provided the component failure rate data from KIND for the UCN 3, 4 standard PSA model. We evaluated the components with the highest failure rates using the component reliability data from the start of plant operation to 1998 and 2000 for YGN 3, 4 and UCN 3, 4, respectively, and identified their most frequent failure modes. In this study, we analyze the component failure trend and perform a site comparison against generic data, using the component reliability data extended to 2002 for UCN 3, 4 and YGN 3, 4. We focus on the major safety-related rotating components, such as pumps and emergency diesel generators (EDGs).
Rafina Destiarti Ainul
2016-08-01
Deploying low-power femtocells with small coverage areas in LTE is an alternative solution for mobile operators to improve indoor network coverage as well as system capacity. However, deploying femtocells (HeNBs) on a co-channel frequency can cause interference to the macro BTS (eNB). The Closed Subscriber Group (CSG) mode of a HeNB allows only subscribed user equipment (UE) to access the HeNB, so the HeNB is a source of interference for UEs that cannot access it. Interference coordination methods between the HeNBs and the eNB are therefore necessary: ICIC (Inter-cell Interference Coordination) and eICIC (enhanced Inter-cell Interference Coordination). This paper presents a performance analysis of scheduling schemes for femto-to-macro interference coordination that allocate resources in the frequency and time domains, using suburban and urban LTE-femtocell deployment scenarios. Simulation results show that the ICIC method improves SINR by 15.77% in urban and 28.66% in suburban scenarios, and throughput by 10.11% in urban and 21.05% in suburban scenarios. The eICIC method improves SINR by 17.44% in urban and 31.14% in suburban scenarios, and throughput by 19.83% in urban and 44.39% in suburban scenarios. The results show that the eICIC method, operating on time-domain resources, performs better than the ICIC method operating on frequency-domain resources, and that eICIC increases SINR and throughput more effectively in the suburban deployment scenario than in the urban one.
Mathematical analysis and coordinated current allocation control in battery power module systems
Han, Weiji; Zhang, Liang
2017-12-01
As the major energy storage device and power supply source in numerous energy applications, such as solar panels, wind plants, and electric vehicles, battery systems often face the issue of charge imbalance among battery cells/modules, which can accelerate battery degradation, cause more energy loss, and even create a fire hazard. To tackle this issue, various circuit designs have been developed to enable charge equalization among battery cells/modules. Recently, the battery power module (BPM) design has emerged as one of the promising solutions for its capability of independent control of individual battery cells/modules. In this paper, we propose a new current allocation method based on charging/discharging space (CDS) for performance control in BPM systems. Based on the proposed method, the properties of CDS-based current allocation with constant parameters are analyzed. Then, the real-time external total power requirement is taken into account and an algorithm is developed for coordinated system performance control. By choosing appropriate control parameters, the desired system performance can be achieved by coordinating module charge balance and total power efficiency. Moreover, the proposed algorithm admits complete analytical solutions and is thus computationally efficient. Finally, the efficacy of the proposed algorithm is demonstrated using simulations.
Matasebia T. Munie
2011-10-01
Supramolecular coordination polymers with wavelike structures have been synthesized by self-assembly and their structures analyzed using the sine trigonometric function. Slow evaporation of a methylene chloride-methanol solution of a 1:1 molar mixture of [M(tmhd)2], where M = Co or Ni, and quinoxaline; a 1:2:1 molar mixture of [M(acac)2], where M = Co or Ni, 2,2,6,6-tetramethyl-3,5-heptadione and quinoxaline; or a 1:2:1 molar mixture of [Co(acac)2], dibenzoylmethane, and quinoxaline, yielded the crystalline coordination polymers. In the presence of the nitrogenous base, ligand scrambling occurs, yielding the most insoluble product. The synthesis and structures of the following wavelike polymers are reported: trans-[Co(DBM)2(qox)]n·nH2O (2), trans-[Co(tmhd)2(qox)]n (3), and trans-[Ni(tmhd)2(qox)]n (4), where DBM− = dibenzoylmethanate, tmhd− = 2,2,6,6-tetramethyl-3,5-heptadionate, and qox = quinoxaline. The wavelike structures are generated by intramolecular steric interactions and crystal packing forces between the chains. Some of the tert-butyl groups show a two-fold disorder. The sine function, φ = A sin(2πx/λ), where φ = distance (Å) along the polymer backbone, λ = wavelength (Å), A = amplitude (Å), and x = distance (Å) along the polymer axis, provides a method to approximate and visualize the polymer structures.
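The sine parameterization of the wavelike backbone can be evaluated directly; the amplitude and wavelength values below are arbitrary illustrative numbers, not the crystallographic ones.

```python
import math

def backbone_displacement(x, amplitude, wavelength):
    """phi = A * sin(2*pi*x / lambda): backbone displacement (angstroms)
    at distance x (angstroms) along the polymer axis."""
    return amplitude * math.sin(2.0 * math.pi * x / wavelength)

# At a quarter wavelength the chain reaches its full amplitude:
print(backbone_displacement(2.5, 3.0, 10.0))  # 3.0
```

Fitting A and λ to the crystallographic atom positions is what lets the authors approximate and visualize each wavelike chain.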
Supply Chain Contracts in Fashion Department Stores: Coordination and Risk Analysis
Bin Shen
2014-01-01
In the fashion industry, department stores normally trade with suppliers of national brands under markdown contracts whilst developing private labels with cooperating designers under profit-sharing contracts. Motivated by this industrial practice, we study a single-supplier single-retailer two-echelon fashion supply chain selling a short-life fashion product of either a national brand or a private label. The supplier refers to the national/designer brand owner and the retailer refers to the department store. We investigate the supply chain coordination issue and examine the supply chain agents' performance under the two contracts mentioned. We find analytical evidence of similar relative risk performance but different absolute risk performance between the national brand and the private label. This finding provides an important implication in strategic interaction for risk-averse department stores in product assortment and brand management. Furthermore, we explore the impact of sales effort on the supply chain system and find that the supply chain is able to achieve coordination if and only if the supplier (i.e., the national brand or the private label) is willing to share the cost of the sales effort.
Singh, S; Gupta, R
2012-06-01
To evaluate the utility of image analysis using textural parameters obtained from a co-occurrence matrix in differentiating the three components of fibroadenoma of the breast, in fine needle aspirate smears. Sixty cases of histologically proven fibroadenoma were included in this study. Of these, 40 cases were used as a training set and 20 cases were taken as a test set for the discriminant analysis. Digital images were acquired from cytological preparations of all the cases and three components of fibroadenoma (namely, monolayered cell clusters, stromal fragments and background with bare nuclei) were selected for image analysis. A co-occurrence matrix was generated and a texture parameter vector (sum mean, energy, entropy, contrast, cluster tendency and homogeneity) was calculated for each pixel. The percentage of pixels correctly classified to a component of fibroadenoma on discriminant analysis was noted. The textural parameters, when considered in isolation, showed considerable overlap in their values of the three cytological components of fibroadenoma. However, the stepwise discriminant analysis revealed that all six textural parameters contributed significantly to the discriminant functions. Discriminant analysis using all the six parameters showed that the numbers of pixels correctly classified in training and tests sets were 96.7% and 93.0%, respectively. Textural analysis using a co-occurrence matrix appears to be useful in differentiating the three cytological components of fibroadenoma. These results could further be utilized in developing algorithms for image segmentation and automated diagnosis, but need to be confirmed in further studies. © 2011 Blackwell Publishing Ltd.
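The co-occurrence texture parameters named above are derived from a grey-level co-occurrence matrix (GLCM). The sketch below computes the matrix for horizontally adjacent pixels and four of the listed parameters; it is a generic implementation, not the study's software.

```python
import numpy as np

def glcm_features(img, levels):
    """Grey-level co-occurrence matrix for horizontally adjacent pixel pairs,
    plus four of the texture parameters used in the study."""
    P = np.zeros((levels, levels))
    for y in range(img.shape[0]):
        for x in range(img.shape[1] - 1):
            P[img[y, x], img[y, x + 1]] += 1
    P /= P.sum()                       # normalise to joint probabilities
    i, j = np.indices(P.shape)
    nz = P[P > 0]
    return {
        "energy": float(np.sum(P**2)),
        "entropy": float(-np.sum(nz * np.log2(nz))),
        "contrast": float(np.sum(P * (i - j) ** 2)),
        "homogeneity": float(np.sum(P / (1.0 + np.abs(i - j)))),
    }
```

Each pixel's feature vector (computed in a local window) is what the discriminant analysis then classifies into one of the three cytological components.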
Han-Jui Lee
Current time-density curve analysis of digital subtraction angiography (DSA) provides intravascular flow information but requires manual vasculature selection. We developed an angiographic marker that represents cerebral perfusion by using automatic independent component analysis. We retrospectively analyzed the data of 44 patients with unilateral carotid stenosis higher than 70% according to North American Symptomatic Carotid Endarterectomy Trial criteria. For all patients, magnetic resonance perfusion (MRP) was performed one day before DSA. Fixed contrast injection protocols and DSA acquisition parameters were used before stenting. The cerebral circulation time (CCT) was defined as the difference in the time to peak between the parietal vein and cavernous internal carotid artery in a lateral angiogram. Both anterior-posterior and lateral DSA views were processed using independent component analysis, and the capillary angiogram was extracted automatically. The full width at half maximum of the time-density curve in the capillary phase in the anterior-posterior and lateral DSA views was defined as the angiographic mean transit time (aMTT), i.e., aMTTAP and aMTTLat. The correlations between the degree of stenosis, CCT, aMTTAP and aMTTLat, and MRP parameters were evaluated. The degree of stenosis showed no correlation with CCT, aMTTAP, aMTTLat, or any MRP parameter. CCT showed a strong correlation with aMTTAP (r = 0.67) and aMTTLat (r = 0.72). Among the MRP parameters, CCT showed only a moderate correlation with MTT (r = 0.67) and Tmax (r = 0.40). aMTTAP showed a moderate correlation with Tmax (r = 0.42) and a strong correlation with MTT (r = 0.77). aMTTLat also showed similar correlations with Tmax (r = 0.59) and MTT (r = 0.73). Apart from vascular anatomy, aMTT estimates brain parenchyma hemodynamics from DSA and is concordant with MRP. This process is completely automatic and provides immediate measurement of quantitative peritherapeutic brain parenchyma
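The full width at half maximum of a sampled time-density curve, the quantity behind the aMTT marker above, can be estimated with linear interpolation at the two half-maximum crossings. A generic single-peak sketch:

```python
import numpy as np

def fwhm(t, y):
    """Full width at half maximum of a single-peaked sampled curve y(t),
    linearly interpolating the two half-maximum crossings."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]

    def crossing(ia, ib):  # time where the segment ia -> ib reaches `half`
        return t[ia] + (half - y[ia]) * (t[ib] - t[ia]) / (y[ib] - y[ia])

    left = t[i0] if i0 == 0 else crossing(i0 - 1, i0)
    right = t[i1] if i1 == len(y) - 1 else crossing(i1, i1 + 1)
    return right - left
```

For a Gaussian time-density curve this recovers the analytic FWHM of 2*sqrt(2*ln 2) times the standard deviation.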
Lu, Wei-Zhen; He, Hong-Di; Dong, Li-yun
2011-01-01
This study aims to evaluate the performance of two statistical methods, principal component analysis and cluster analysis, for the management of the air quality monitoring network of Hong Kong and the reduction of associated expenses. The specific objectives are: (i) to identify city areas with similar air pollution behavior; and (ii) to locate emission sources. The statistical methods were applied to the mass concentrations of sulphur dioxide (SO2), respirable suspended particulates (RSP) and nitrogen dioxide (NO2), collected in the monitoring network of Hong Kong from January 2001 to December 2007. The results demonstrate that, for each pollutant, the monitoring stations are grouped into different classes based on their air pollution behavior. Monitoring stations located in nearby areas are characterized by the same specific air pollution characteristics, which suggests the air quality monitoring system can be managed more effectively. Redundant equipment could be transferred to other monitoring stations to allow further enlargement of the monitored area. Additionally, the existence of different air pollution behaviors in the monitoring network is explained by the variability of wind directions across the region. The results imply that the air quality problem in Hong Kong is not only a local problem, mainly from street-level pollution, but also a regional problem originating from the Pearl River Delta region. (author)
1998-01-01
The Coordinated Research Program on Intercomparison of Analysis Methods for Seismically Isolated Nuclear Structures involved participants from Italy, Japan, the Republic of Korea, Russia, the United Kingdom, the USA and the EC. The purpose of the meeting was to review progress on the finite element prediction of the force-deformation behaviour of seismic isolators and to discuss the first set of analytical results for the prediction of the response of base-isolated structures to earthquake inputs. The intercomparison of predictions of bearing behaviour has identified important unexpected issues requiring deeper investigation.
Reliability analysis of nuclear component cooling water system using semi-Markov process model
Veeramany, Arun; Pandey, Mahesh D.
2011-01-01
Research highlights: → A semi-Markov process (SMP) model is used to evaluate the system failure probability of the nuclear component cooling water (NCCW) system. → SMP is used because it can solve a reliability block diagram with a mixture of redundant repairable and non-repairable components. → The primary objective is to demonstrate that SMP can consider a Weibull failure time distribution for components while a Markov model cannot. → Result: the variability in component failure time is directly proportional to the NCCW system failure probability. → The result can be utilized as an initiating event probability in probabilistic safety assessment projects. - Abstract: A reliability analysis of the nuclear component cooling water (NCCW) system is carried out. A semi-Markov process model is used in the analysis because it has the potential to solve a reliability block diagram with a mixture of repairable and non-repairable components. With Markov models it is only possible to assume an exponential profile for component failure times. An advantage of the proposed model is the ability to assume a Weibull distribution for the failure times of components. In an attempt to reduce the number of states in the model, it is shown that usage of the poly-Weibull distribution arises. The objective of the paper is to determine the system failure probability under these assumptions. Monte Carlo simulation is used to validate the model result. This result can be utilized as an initiating event probability in probabilistic safety assessment projects.
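A Monte Carlo cross-check of the kind mentioned above can be sketched for a small redundant group of Weibull components. This is a generic illustration (an n-component parallel block of i.i.d. non-repairable components), not the NCCW model itself; shape and scale values are invented.

```python
import math
import random

def parallel_failure_prob(shape, scale, t, n_comp=2, trials=100_000, seed=1):
    """Monte Carlo estimate of P(all n_comp parallel components fail before t),
    each failure time ~ Weibull(shape, scale), via inverse-transform sampling."""
    rng = random.Random(seed)

    def weibull_sample():
        return scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)

    hits = sum(
        all(weibull_sample() < t for _ in range(n_comp)) for _ in range(trials)
    )
    return hits / trials

# Analytic check for this simple block: (1 - exp(-(t/scale)**shape)) ** n_comp
```

The exponential case (shape = 1) reduces to the Markov assumption, which makes it a convenient sanity check before varying the shape parameter.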
Keylock, Christopher J [Sheffield Fluid Mechanics Group and Department of Civil and Structural Engineering, University of Sheffield, Mappin Street, Sheffield, S1 3JD (United Kingdom); Nishimura, Kouichi, E-mail: c.keylock@sheffield.ac.uk [Graduate School of Environmental Studies, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601 (Japan)
2016-04-15
Scale-dependent phase analysis of velocity time series measured in a zero pressure gradient boundary layer shows that phase coupling between longitudinal and vertical velocity components is strong at both large and small scales, but minimal in the middle of the inertial regime. The same general pattern is observed at all vertical positions studied, but there is stronger phase coherence as the vertical coordinate, y, increases. The phase difference histograms evolve from a unimodal shape at small scales to the development of significant bimodality at the integral scale and above. The asymmetry in the off-diagonal couplings changes sign at the midpoint of the inertial regime, with the small scale relation consistent with intense ejections followed by a more prolonged sweep motion. These results may be interpreted in a manner that is consistent with the action of low speed streaks and hairpin vortices near the wall, with large scale motions further from the wall, the effect of which penetrates to smaller scales. Hence, a measure of phase coupling, when combined with a scale-by-scale decomposition of perpendicular velocity components, is a useful tool for investigating boundary-layer structure and inferring process from single-point measurements. (paper)
Young, Cole; Reinkensmeyer, David J
2014-08-01
Athletes rely on subjective assessment of complex movements from coaches and judges to improve their motor skills. In some sports, such as diving, snowboard half pipe, gymnastics, and figure skating, subjective scoring forms the basis for competition. It is currently unclear whether this scoring process can be mathematically modeled; doing so could provide insight into what motor skill is. Principal components analysis has been proposed as a motion analysis method for identifying fundamental units of coordination. We used PCA to analyze movement quality of dives taken from USA Diving's 2009 World Team Selection Camp, first identifying eigenpostures associated with dives, and then using the eigenpostures and their temporal weighting coefficients, as well as elements commonly assumed to affect scoring - gross body path, splash area, and board tip motion - to identify eigendives. Within this eigendive space we predicted actual judges' scores using linear regression. This technique rated dives with accuracy comparable to the human judges. The temporal weighting of the eigenpostures, body center path, splash area, and board tip motion affected the score, but not the eigenpostures themselves. These results illustrate that (1) subjective scoring in a competitive diving event can be mathematically modeled; (2) the elements commonly assumed to affect dive scoring actually do affect scoring; and (3) skill in elite diving is more associated with the gross body path and the effect of the movement on the board and water than with the units of coordination that PCA extracts, which might reflect the high level of technique these divers had achieved. We also illustrate how eigendives can be used to produce dive animations that an observer can distort continuously from poor to excellent, which is a novel approach to performance visualization. Copyright © 2014 Elsevier B.V. All rights reserved.
Kairov, Ulykbek; Cantini, Laura; Greco, Alessandro; Molkenov, Askhat; Czerwinska, Urszula; Barillot, Emmanuel; Zinovyev, Andrei
2017-09-11
Independent Component Analysis (ICA) is a method that models gene expression data as an action of a set of statistically independent hidden factors. The output of ICA depends on a fundamental parameter: the number of components (factors) to compute. The optimal choice of this parameter, related to determining the effective data dimension, remains an open question in the application of blind source separation techniques to transcriptomic data. Here we address the question of optimizing the number of statistically independent components in the analysis of transcriptomic data for reproducibility of the components in multiple runs of ICA (within the same or within varying effective dimensions) and in multiple independent datasets. To this end, we introduce ranking of independent components based on their stability in multiple ICA computation runs and define a distinguished number of components (Most Stable Transcriptome Dimension, MSTD) corresponding to the point of the qualitative change of the stability profile. Based on a large body of data, we demonstrate that a sufficient number of dimensions is required for biological interpretability of the ICA decomposition and that the most stable components with ranks below MSTD have more chances to be reproduced in independent studies compared to the less stable ones. At the same time, we show that a transcriptomics dataset can be reduced to a relatively high number of dimensions without losing the interpretability of ICA, even though higher dimensions give rise to components driven by small gene sets. We suggest a protocol of ICA application to transcriptomics data with a possibility of prioritizing components with respect to their reproducibility that strengthens the biological interpretation. Computing too few components (much less than MSTD) is not optimal for interpretability of the results. The components ranked within MSTD range have more chances to be reproduced in independent studies.
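The stability ranking behind MSTD comes down to matching components across repeated ICA runs by absolute correlation and averaging the best matches. The sketch below shows only that matching step; the authors' procedure additionally varies the effective dimension and uses independent datasets.

```python
import numpy as np

def component_stability(runs):
    """Average best-match |Pearson r| of each component of the first run
    against the components of every other run.  `runs` is a list of
    (n_components, n_genes) arrays from repeated ICA computations."""
    ref = runs[0]
    scores = []
    for comp in ref:
        best = [
            max(abs(np.corrcoef(comp, c)[0, 1]) for c in other)
            for other in runs[1:]
        ]
        scores.append(float(np.mean(best)))
    return scores
```

Sorting components by this index and looking for the qualitative drop in the stability profile is what identifies the distinguished dimension (MSTD).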
Li, Xian-Ying; Hu, Shi-Min
2013-02-01
Harmonic functions are the critical points of a Dirichlet energy functional, the linear projections of conformal maps. They play an important role in computer graphics, particularly for gradient-domain image processing and shape-preserving geometric computation. We propose Poisson coordinates, a novel transfinite interpolation scheme based on the Poisson integral formula, as a rapid way to estimate a harmonic function on a certain domain with desired boundary values. Poisson coordinates are an extension of the Mean Value coordinates (MVCs) which inherit their linear precision, smoothness, and kernel positivity. We give explicit formulas for Poisson coordinates in both continuous and 2D discrete forms. Superior to MVCs, Poisson coordinates are proved to be pseudoharmonic (i.e., they reproduce harmonic functions on n-dimensional balls). Our experimental results show that Poisson coordinates have lower Dirichlet energies than MVCs on a number of typical 2D domains (particularly convex domains). As well as presenting a formula, our approach provides useful insights for further studies on coordinates-based interpolation and fast estimation of harmonic functions.
MULTI-COMPONENT ANALYSIS OF POSITION-VELOCITY CUBES OF THE HH 34 JET
Rodríguez-González, A.; Esquivel, A.; Raga, A. C.; Cantó, J.; Curiel, S.; Riera, A.; Beck, T. L.
2012-01-01
We present an analysis of Hα spectra of the HH 34 jet with two-dimensional spectral resolution. We carry out multi-Gaussian fits to the spatially resolved line profiles and derive maps of the intensity, radial velocity, and velocity width of each of the components. We find that close to the outflow source we have three components: a high (negative) radial velocity component with a well-collimated, jet-like morphology; an intermediate velocity component with a broader morphology; and a positive radial velocity component with a non-collimated morphology and large linewidth. We suggest that this positive velocity component is associated with jet emission scattered in stationary dust present in the circumstellar environment. Farther away from the outflow source, we find only two components (a high, negative radial velocity component, which has a narrower spatial distribution than an intermediate velocity component). The fitting procedure was carried out with the new AGA-V1 code, which is available online and is described in detail in this paper.
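A minimal version of such a multi-Gaussian line-profile fit can be written with scipy on a synthetic Hα-like profile; the component count, initial guesses and velocity units below are illustrative assumptions, not the AGA-V1 code:

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_gauss(v, *p):
    """Sum of Gaussians; p = [A1, v1, w1, A2, v2, w2, ...]."""
    y = np.zeros_like(v)
    for A, v0, w in zip(p[0::3], p[1::3], p[2::3]):
        y = y + A * np.exp(-0.5 * ((v - v0) / w) ** 2)
    return y

def fit_profile(v, flux, guesses):
    """Fit a spatially resolved line profile; returns rows of
    (intensity, radial velocity, velocity width) per component."""
    popt, _ = curve_fit(multi_gauss, v, flux, p0=np.ravel(guesses))
    return popt.reshape(-1, 3)

# synthetic profile: a high-velocity jet component plus a broader slower one
v = np.linspace(-300.0, 100.0, 400)
flux = multi_gauss(v, 1.0, -90.0, 15.0, 0.5, -30.0, 40.0)
flux = flux + np.random.default_rng(0).normal(0, 0.01, v.size)
comps = fit_profile(v, flux, [(0.8, -80.0, 20.0), (0.4, -20.0, 50.0)])
```

Applying `fit_profile` pixel by pixel over a position-velocity cube yields the intensity, radial-velocity and linewidth maps described in the abstract.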
Reliability Analysis of Load-Sharing K-out-of-N System Considering Component Degradation
Chunbo Yang
2015-01-01
The K-out-of-N configuration is a typical form of redundancy technique to improve system reliability, where at least K out of N components must work for successful operation of the system. When the components are degraded, more components are needed to meet the system requirement, which means that the value of K has to increase. Current reliability analysis methods overestimate the reliability, because using a constant K ignores the degradation effect. In a load-sharing system with degrading components, the workload shared by each surviving component will increase after a random component failure, resulting in a higher failure rate and an increased performance degradation rate. This paper proposes a method combining a tampered failure rate model with a performance degradation model to analyze the reliability of load-sharing K-out-of-N systems with degrading components. The proposed method treats the value of K as a variable derived from the performance degradation model, and the load-sharing effect is evaluated by the tampered failure rate model. A Monte-Carlo simulation procedure is used to estimate the discrete probability distribution of K. The case of a solar panel is studied in this paper, and the result shows that the reliability considering component degradation is less than that ignoring component degradation.
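The Monte-Carlo idea (simulate load-sharing failures while letting the required k grow as the per-component capacity degrades) can be sketched as follows; all parameters are invented for illustration, and the paper's tampered-failure-rate and degradation models are more detailed:

```python
import numpy as np

def simulate_reliability(n=6, demand=100.0, cap0=25.0, t_mission=10.0,
                         base_rate=0.05, load_exp=1.5, degrade=0.3,
                         n_sim=20000, seed=1):
    """Load-sharing k-out-of-n with degrading capacity (illustrative model).

    Each survivor shares the demand equally; the pooled failure rate grows
    with the per-component share (a tampered failure rate), and capacity
    degrades linearly in time, so the required k is itself time dependent.
    """
    rng = np.random.default_rng(seed)
    up = 0
    for _ in range(n_sim):
        alive, t, ok = n, 0.0, True
        while t < t_mission:
            share = demand / alive
            rate = alive * base_rate * (share / (demand / n)) ** load_exp
            t += rng.exponential(1.0 / rate)       # next failure time
            if t >= t_mission:
                break
            alive -= 1
            cap = cap0 - degrade * t               # degraded per-component capacity
            if alive * cap < demand:               # i.e. alive < k(t) = ceil(demand/cap)
                ok = False
                break
        if ok:                                     # final capacity check at mission end
            ok = alive * (cap0 - degrade * t_mission) >= demand
        up += ok
    return up / n_sim
```

With these numbers the run that sets `degrade=0.0` reports the higher reliability, reproducing the paper's qualitative conclusion that ignoring degradation overestimates reliability.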
Wang, Yuanye; Pedersen, Klaus I.
2012-01-01
The performance of enhanced Inter-Cell Interference Coordination (eICIC) for Long Term Evolution (LTE)-Advanced with co-channel deployment of both macro and pico cells is analyzed. The use of pico-cell Range Extension (RE) and time-domain eICIC (TDM muting) is combined. The performance is evaluated in the downlink by means of extensive system-level simulations that follow the 3GPP guidelines. The overall network performance is analyzed for different numbers of pico-eNBs, transmit power levels, User Equipment (UE) distributions, and packet schedulers. Recommended settings of the RE offset and TDM muting ratio in different scenarios are identified. The presented performance results and findings can serve as input to guidelines for co-channel deployment of macro and pico-eNBs with eICIC.
Dynamic analysis of the tether transportation system using absolute nodal coordinate formulation
Sun, Xin; Xu, Ming; Zhong, Rui
2017-10-01
Long space tethers are attracting rising interest as an alternative means of transportation in space, since they save propellant. This paper focuses on the dynamics of the tether transportation system, which consists of two end satellites connected by a flexible tether and a movable vehicle driven by its own actuator. The Absolute Nodal Coordinate Formulation is applied to establish the equations of motion, so that the influence of the distributed mass and elasticity of the tether is captured. Moreover, an approximate method for accelerating the calculation of the generalized gravitational forces on the tether is proposed by replacing the volume integral at every step with a summation over a finite number of terms. Dynamic evolutions of such a system in different configurations are then illustrated using numerical simulations. The deflection of the tether and the trajectory of the crawler during transportation are investigated. Finally, the effect of the crawler on the orbit of the system is revealed.
Xiulong Chen
2012-10-01
In order to study the elastodynamic behaviour of 4-UPS-UPU (universal joints-prismatic pairs-spherical joints / universal joints-prismatic pairs-universal joints) high-speed spatial PCMMs (parallel coordinate measuring machines), a nonlinear time-varying dynamics model, which comprehensively considers geometric nonlinearity and the rigid-flexible coupling effect, is derived by using Lagrange equations and finite element methods. Based on the Newmark method, the kinematic output response of 4-UPS-UPU PCMMs is illustrated through numerical simulation. The simulation results show that the flexibility of the links has a significant impact on the system dynamics response. This research provides an important theoretical basis for the optimization design and vibration control of 4-UPS-UPU PCMMs.
Frederik Reitsma; Gerhard Strydom; Bismark Tyobeka; Kostadin Ivanov
2012-10-01
The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of design and safety features with reliable high-fidelity physics models and robust, efficient, and accurate codes. The uncertainties in HTR analysis tools are today typically assessed with sensitivity analysis, and a few important input uncertainties (typically based on a PIRT process) are then varied in the analysis to find a spread in the parameter of importance. However, one wishes to apply a more fundamental approach to determine the predictive capability and accuracy of the coupled neutronics/thermal-hydraulics and depletion simulations used for reactor design and safety assessment. Today there is a broader acceptance of the use of uncertainty analysis even in safety studies, and in some cases regulators have accepted it as a replacement for the traditional conservative analysis. Finally, there is also a renewed focus on supplying reliable covariance data (nuclear data uncertainties) that can then be used in uncertainty methods. Uncertainty and sensitivity studies are therefore becoming an essential component of any significant effort in data and simulation improvement. In order to address uncertainty in analysis and methods in the HTGR community, the IAEA launched a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modelling early in 2012. The project builds on the experience of the OECD/NEA Light Water Reactor (LWR) Uncertainty Analysis in Best-Estimate Modelling (UAM) benchmark activity, but focuses specifically on the peculiarities of HTGR designs and their simulation requirements. Two benchmark problems were defined: the prismatic type is represented by the MHTGR-350 design from General Atomics (GA), while a 250 MW modular pebble bed design, similar to the INET (China) and indirect-cycle PBMR (South Africa) designs, is also included. In the paper, more detail is given on the benchmark cases, the different specific phases and tasks, and the latest
Yung-Kun Chuang
2014-09-01
Independent component (IC) analysis was applied to near-infrared spectroscopy for analysis of gentiopicroside and swertiamarin, the two bioactive components of Gentiana scabra Bunge. ICs that are highly correlated with the two bioactive components were selected for the analysis of tissue cultures, shoots and roots, which were found to distribute in three different positions within the domain [two-dimensional (2D) and 3D] constructed by the ICs. This setup could be used for quantitative determination of the respective contents of gentiopicroside and swertiamarin within the plants. For gentiopicroside, the spectral calibration model based on the second derivative spectra produced the best results in the wavelength ranges of 600–700 nm, 1600–1700 nm, and 2000–2300 nm (correlation coefficient of calibration = 0.847, standard error of calibration = 0.865%, and standard error of validation = 0.909%). For swertiamarin, the spectral calibration model based on the first derivative spectra produced the best results in the wavelength ranges of 600–800 nm and 2200–2300 nm (correlation coefficient of calibration = 0.948, standard error of calibration = 0.168%, and standard error of validation = 0.216%). Both models showed satisfactory predictability. This study successfully established qualitative and quantitative correlations for gentiopicroside and swertiamarin with near-infrared spectra, enabling rapid and accurate inspection of the bioactive components of G. scabra Bunge at different growth stages.
The derivative assay--an analysis of two fast components of DNA rejoining kinetics
Sandstroem, B.E.
1989-01-01
The DNA rejoining kinetics of human U-118 MG cells were studied after gamma-irradiation with 4 Gy. The analysis of the sealing rate of the induced DNA strand breaks was made with a modification of the DNA unwinding technique. The modification meant that rather than just monitoring the number of existing breaks at each time of analysis, the velocity at which the rejoining process proceeded was determined. Two apparent first-order components of single-strand break repair could be identified during the 25 min of analysis. The half-times for the two components were 1.9 and 16 min, respectively.
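The two-component first-order fit behind such an analysis can be reproduced on synthetic data; the fast fraction, noise level and sampling grid below are assumptions, and only the two half-times (1.9 and 16 min) come from the abstract:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_component(t, a, th1, th2):
    """Fraction of unrejoined strand breaks: two first-order repair
    components with half-times th1 (fast) and th2 (slow); a = fast fraction."""
    k1, k2 = np.log(2) / th1, np.log(2) / th2
    return a * np.exp(-k1 * t) + (1 - a) * np.exp(-k2 * t)

t = np.linspace(0.0, 25.0, 26)                   # minutes of analysis
y = two_component(t, 0.6, 1.9, 16.0)             # synthetic post-irradiation data
y_noisy = y + np.random.default_rng(3).normal(0, 0.005, t.size)
(a, th1, th2), _ = curve_fit(two_component, t, y_noisy, p0=(0.5, 2.0, 20.0))
```

Fitting the derivative (rejoining velocity) instead of the remaining-break counts, as the assay modification does, uses the same two-exponential model with the amplitudes scaled by the rate constants.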
Tian, Fang; Rades, Thomas; Sandler, Niklas
2008-01-01
The purpose of this research is to gain a greater insight into the hydrate formation processes of different carbamazepine (CBZ) anhydrate forms in aqueous suspension, where principal component analysis (PCA) was applied for data analysis. The capability of PCA to visualize and to reveal simplified...
Khodasevich, M. A.; Sinitsyn, G. V.; Gres'ko, M. A.; Dolya, V. M.; Rogovaya, M. V.; Kazberuk, A. V.
2017-07-01
A study of 153 brands of commercial vodka products showed that counterfeit samples could be identified by introducing a unified additive at the minimum concentration acceptable for instrumental detection and multivariate analysis of UV-Vis transmission spectra. Counterfeit products were detected with 100% probability by using hierarchical cluster analysis or the C-means method in two-dimensional principal-component space.
A review of the reliability analysis of LPRS including the components repairs
Oliveira, L.F.S. de; Fleming, P.V.; Frutuoso e Melo, P.F.F.; Tayt-Sohn, L.C.
1983-01-01
The reliability analysis of the low pressure recirculation system in its long-term recirculation phase before 24 h is presented. The possibility of repairing components outside the containment is included. A general revision of the analysis of the short-term recirculation phase is also given. (author)
Priority of VHS Development Based in Potential Area using Principal Component Analysis
Meirawan, D.; Ana, A.; Saripudin, S.
2018-02-01
The current condition of VHS is still inadequate in quality, quantity and relevance. The purpose of this research is to analyse the development of VHS based on regional potential by using principal component analysis (PCA) in Bandung, Indonesia. This study used descriptive qualitative analysis based on principal-component reduction of secondary data, performed with the Minitab statistical software. The results indicate that the areas with the lowest scores are the priority for constructing new VHS, with programmes of majors matched to the development of regional potential. Based on the PCA scores, the main priority for VHS development in Bandung is Saguling, which has the lowest PCA value (416.92) in area 1, followed by Cihampelas with the lowest PCA value in region 2 and Padalarang with the lowest PCA value.
Ghosh, Debasree; Chattopadhyay, Parimal
2012-06-01
The objective of this work was to use quantitative descriptive analysis (QDA) to describe the sensory attributes of fermented food products prepared with the incorporation of lactic cultures. Panellists were selected and trained to evaluate various attributes, especially color and appearance, body texture, flavor, overall acceptability and acidity, of fermented food products such as cow milk curd, soymilk curd, idli, sauerkraut and probiotic ice cream. Principal component analysis (PCA) identified six significant principal components that accounted for more than 90% of the variance in the sensory attribute data. Overall product quality was modelled as a function of the principal components using multiple least squares regression (R² = 0.8). The results from PCA were statistically analyzed by analysis of variance (ANOVA). These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring the fermented food product attributes that are important for consumer acceptability.
Sanborn, Graham G.; Shabana, Ahmed A.
2009-01-01
For almost a decade, the finite element absolute nodal coordinate formulation (ANCF) has been used for both geometry and finite element representations. Because of the ANCF isoparametric property in the cases of beams, plates and shells, ANCF finite elements lend themselves easily to the geometric description of curves and surfaces, as demonstrated in the literature. The ANCF finite elements, therefore, are ideal for what is called isogeometric analysis that aims at the integration of computer aided design and analysis (ICADA), which involves the integration of what is now split into the separate fields of computer aided design (CAD) and computer aided analysis (CAA). The purpose of this investigation is to establish the relationship between the B-spline and NURBS, which are widely used in the geometric modeling, and the ANCF finite elements. It is shown in this study that by using the ANCF finite elements, one can in a straightforward manner obtain the control point representation required for the Bezier, B-spline and NURBS geometry. To this end, a coordinate transformation is used to write the ANCF gradient vectors in terms of control points. Unifying the CAD and CAA will require the use of such coordinate transformations and their inverses in order to transform control points to position vector gradients which are required for the formulation of the element transformations in the case of discontinuities as well as the formulation of the strain measures and the stress forces based on general continuum mechanics theory. In particular, fully parameterized ANCF finite elements can be very powerful in describing curve, surface, and volume geometry, and they can be effectively used to describe discontinuities while maintaining the many ANCF desirable features that include a constant mass matrix, zero Coriolis and centrifugal forces, no restriction on the amount of rotation or deformation within the finite element, ability for straightforward implementation of general
Fault Diagnosis Method Based on Information Entropy and Relative Principal Component Analysis
Xiaoming Xu
2017-01-01
In traditional principal component analysis (PCA), because the influence of the differing dimensions of the system variables is neglected, the selected principal components (PCs) often fail to be representative. While relative transformation PCA is able to solve this problem, it is not easy to calculate the weight for each characteristic variable. To address this, this paper proposes a fault diagnosis method based on information entropy and Relative Principal Component Analysis. Firstly, the algorithm calculates the information entropy of each characteristic variable in the original dataset based on the information gain algorithm. Secondly, it standardizes every variable's dimension in the dataset. Then, according to the information entropy, it allocates a weight to each standardized characteristic variable. Finally, it utilizes the established relative-principal-components model for fault diagnosis. Furthermore, simulation experiments based on the Tennessee Eastman process and Wine datasets demonstrate the feasibility and effectiveness of the new method.
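A compact sketch of the entropy-weighted relative PCA idea, in numpy only; the histogram-based entropy estimate and the "1 - e" weighting convention are assumptions about the method's details:

```python
import numpy as np

def entropy_weights(X, bins=10):
    """Normalized Shannon entropy per column; the common entropy-weight
    convention gives larger weight to lower-entropy (more informative) variables."""
    _, m = X.shape
    w = np.zeros(m)
    for j in range(m):
        p, _ = np.histogram(X[:, j], bins=bins)
        p = p / p.sum()
        p = p[p > 0]
        e = -(p * np.log(p)).sum() / np.log(bins)   # normalized entropy in [0, 1]
        w[j] = 1.0 - e
    return w / w.sum()

def relative_pca(X, bins=10):
    """Standardize (removing dimension effects), entropy-weight, then PCA via SVD."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)        # dimensionless variables
    Zw = Z * entropy_weights(X, bins)               # relative transformation
    _, s, Vt = np.linalg.svd(Zw, full_matrices=False)
    scores = Zw @ Vt.T
    explained = s**2 / (s**2).sum()
    return scores, explained
```

For fault diagnosis, the retained PCs of the weighted data would drive the usual monitoring statistics (e.g. T² and residual-based indices) on new samples.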
Path and correlation analysis of perennial ryegrass (Lolium perenne L.) seed yield components
Abel, Simon; Gislum, René; Boelt, Birte
2017-01-01
Maximum perennial ryegrass seed production potential is substantially greater than harvested yields, with harvested yields representing only 20% of calculated potential. Similar to wheat, maize and other agriculturally important crops, seed yield is highly dependent on a number of interacting seed yield components. This research was performed to apply and describe path analysis of perennial ryegrass seed yield components in relation to harvested seed yields. Utilising extensive yield components, which included subdividing reproductive inflorescences into five size categories, path analysis was undertaken assuming a unidirectional causal-admissible relationship between seed yield components and harvested seed yield in six commercial seed production fields. Both spikelets per inflorescence and florets per spikelet had a significant effect on seed yield; however, total...
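Under a unidirectional causal model, path analysis reduces to the normal equations of the standardized regression: the direct path coefficients p satisfy R p = r, where R is the correlation matrix among yield components and r the component-yield correlations, and r - p collects the indirect effects routed through correlated components. A minimal numpy sketch with invented variable names:

```python
import numpy as np

def path_analysis(X, y):
    """Direct and indirect path coefficients of yield components on yield,
    assuming a unidirectional causal-admissible model."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    zy = (y - y.mean()) / y.std()
    R = np.corrcoef(Z, rowvar=False)          # correlations among components
    r_y = Z.T @ zy / len(y)                   # component-yield correlations
    direct = np.linalg.solve(R, r_y)          # standardized partial coefficients
    indirect = r_y - direct                   # effects via correlated components
    return direct, indirect
```

With nearly uncorrelated components the indirect effects vanish and the direct coefficients coincide with the simple correlations, which is a quick sanity check on an implementation.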
Hongjuan Yu; Jinyun Guo; Jiulong Li; Dapeng Mu; Qiaoli Kong
2015-01-01
Zero drift and solid Earth tide corrections to static relative gravimetric data cannot be ignored. In this paper, a new principal component analysis (PCA) algorithm is presented to extract the zero drift and the solid Earth tide, as signals, from static relative gravimetric data assuming that the components contained in the relative gravimetric data are uncorrelated. Static relative gravity observations from Aug. 15 to Aug. 23, 2014 are used as statistical variables to separate the signal and...
Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis
Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang
2017-07-01
To rectify the problems that, in traditional reliability evaluation of machine center components, the component reliability model exhibits deviation and the evaluation result is low because failure propagation is overlooked, a new reliability evaluation method based on cascading failure analysis and failure influenced degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure influenced degrees of the system components are assessed by the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, and it shows the following: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component is positively correlated with its failure influenced degree, which provides a theoretical basis for reliability allocation of machine center systems.
Jeffery Nick D
2007-09-01
Background: Clinical spinal cord injury in domestic dogs provides a model population in which to test the efficacy of putative therapeutic interventions for human spinal cord injury. To achieve this potential, a robust method of functional analysis is required so that statistical comparison of numerical data derived from treated and control animals can be achieved. Results: In this study we describe the use of digital motion capture equipment combined with mathematical analysis to derive a simple quantitative parameter, the 'mean diagonal coupling interval', to describe coordination between forelimb and hindlimb movement. In normal dogs this parameter is independent of size, conformation, speed of walking or gait pattern. We show here that the mean diagonal coupling interval is highly sensitive to alterations in forelimb-hindlimb coordination in dogs that have suffered spinal cord injury, and can be accurately quantified, but is unaffected by orthopaedic perturbations of gait. Conclusion: The mean diagonal coupling interval is an easily derived, highly robust measurement that provides an ideal method to compare the functional effect of therapeutic interventions after spinal cord injury in quadrupeds.
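An illustrative computation of a diagonal coupling interval from footfall times; the exact normalization used in the paper is not reproduced here, so the stride-period scaling (which makes the value size- and speed-independent) is an assumption:

```python
import numpy as np

def mean_diagonal_coupling_interval(fore_contacts, hind_contacts):
    """For each forelimb footfall, take the latency to the nearest footfall of
    the diagonally opposite hindlimb, then average and normalise by the mean
    stride period (illustrative; the paper derives the parameter from
    digital motion capture data)."""
    fore = np.asarray(fore_contacts, float)
    hind = np.asarray(hind_contacts, float)
    stride = np.diff(fore).mean()                       # mean stride period
    lags = [np.min(np.abs(hind - t)) for t in fore]     # nearest diagonal footfall
    return np.mean(lags) / stride
```

In a perfectly regular trot the diagonal pair lands almost together, so the value is near zero; loss of forelimb-hindlimb coordination after spinal cord injury scatters the latencies and raises both the mean and its variability.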
Hao, Liang; Wu, Dapeng; Guan, Yafeng
2014-09-01
The determination of organic composition in atmospheric particulate matter (PM) is of great importance in understanding how PM affects human health, environment, climate, and ecosystems. Organic components are also the scientific basis for emission source tracking, PM regulation and risk management. Therefore, the molecular characterization of the organic fraction of PM has become one of the priority research issues in the field of environmental analysis. Due to the extreme complexity of PM samples, chromatographic methods have been the method of choice. The common procedure for the analysis of organic components in PM includes several steps: sample collection on fiber filters, sample preparation (transforming the sample into a form suitable for chromatographic analysis), and analysis by chromatographic methods. Among these steps, the sample preparation methods largely determine the throughput and the data quality. Solvent extraction methods followed by sample pretreatment (e.g. pre-separation, derivatization, pre-concentration) have long been used for PM sample analysis, while thermal desorption methods have mainly focused on the non-polar organic components in PM. In this paper, the sample preparation methods prior to chromatographic analysis of organic components in PM are reviewed comprehensively, and the corresponding merits and limitations of each method are also briefly discussed.
Wilson, R. B.; Banerjee, P. K.
1987-01-01
This Annual Status Report presents the results of work performed during the third year of the 3-D Inelastic Analysis Methods for Hot Sections Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of computer codes that permit more accurate and efficient three-dimensional analyses of selected hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The computer codes embody a progression of mathematical models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components.
Component analysis and initial validity of the exercise fear avoidance scale.
Wingo, Brooks C; Baskin, Monica; Ard, Jamy D; Evans, Retta; Roy, Jane; Vogtle, Laura; Grimley, Diane; Snyder, Scott
2013-01-01
To develop the Exercise Fear Avoidance Scale (EFAS) to measure fear of exercise-induced discomfort. We conducted principal component analysis to determine component structure and Cronbach's alpha to assess internal consistency of the EFAS. Relationships between EFAS scores, BMI, physical activity, and pain were analyzed using multivariate regression. The best fit was a 3-component structure: weight-specific fears, cardiorespiratory fears, and musculoskeletal fears. Cronbach's alpha for the EFAS was α=.86. EFAS scores significantly predicted BMI, physical activity, and PDI scores. Psychometric properties of this scale suggest it may be useful for tailoring exercise prescriptions to address fear of exercise-related discomfort.
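The internal-consistency statistic reported above, Cronbach's alpha, is straightforward to compute from an item-score matrix: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A numpy sketch with simulated respondents (all data invented):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) score matrix."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()     # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the scale total
    return k / (k - 1) * (1 - item_var / total_var)
```

In scale development this is typically computed per component (here: weight-specific, cardiorespiratory and musculoskeletal fears) as well as for the full scale, where the EFAS reached alpha = .86.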
Ya Wang
2016-08-01
Sandy desertification is one of the most severe ecological problems in the world. Essentially, it is land degradation caused by discordance in Social-Ecological Systems (SES). The ability to coordinate SES is a principal characteristic of regional sustainable development and a key factor in desertification control. This paper directly and comprehensively evaluates the ability to coordinate SES in the desertification reversal process. Assessment indicators and standards for SES have been established using statistical data and materials from government agencies. We applied a coordinated development model based on Identical-Discrepancy-Contrary (IDC) situational ranking of a Set Pair Analysis (SPA) to analyze the change in Yanchi County's coordination ability since it implemented the grazing prohibition policy. The results indicated that Yanchi County was basically in the secondary grade of the national sustainable development level, and the subsystems' development trend was relatively stable. Coordination ability increased from 0.686 in 2003 to 0.957 in 2014 and went through a "weak coordination to basic coordination to high coordination" development process. We concluded that drought, the grazing prohibition dilemma and the ecological footprint were key factors impeding the coordination of SES development in this area. These findings should provide information about desertification control and ecological policy implementation to guarantee sustainable rehabilitation.
Radionuclide X-ray fluorescence analysis of components of the environment
Toelgyessy, J.; Havranek, E.; Dejmkova, E.
1983-12-01
The physical foundations and methodology are described of radionuclide X-ray fluorescence analysis. The sources are listed of air, water and soil pollution, and the transfer of impurities into biological materials is described. A detailed description is presented of the sampling of air, soil and biological materials and their preparation for analysis. Greatest attention is devoted to radionuclide X-ray fluorescence analysis of the components of the environment. (ES)
Study on coordination of ventricular contraction by a phase analysis method in tetralogy of Fallot
Chen Xianying; Zhu Hongyu; Li Xinmin; Wang Zhiguo; Zhang Guoxu; Zhang Zhaozhong; Wang Kaigeng
2001-01-01
Objective: To quantitatively study the characteristics of left ventricular (LV) wall motion and assess the degree of success of surgical repair of tetralogy of Fallot using the phase standard deviation (PSD). Methods: PSD was calculated by equilibrium radionuclide ventriculography in 24 normal controls and 59 patients with tetralogy of Fallot before and after operation. Results: LV PSD was (9.7 ± 2.8) degrees in the 24 normal controls, and (20.5 ± 15.5) degrees and (10.0 ± 7.2) degrees in 51 (86.4%) of the 59 patients before and after surgical repair, respectively; the difference was statistically significant (P < 0.01). In the remaining 8 (13.4%) patients, LV PSD was (11.2 ± 7.8) degrees and (21.3 ± 9.3) degrees before and after surgical repair, respectively, and the LV PSD increased significantly after operation (P < 0.05). Conclusions: LV PSD correlates with the degree of improvement in ventricular wall motion and heart function after surgical repair of tetralogy of Fallot. PSD is one of the heart function parameters reflecting the degree of success of the surgical repair.
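Phase analysis of gated blood-pool data assigns each pixel the phase of the first Fourier harmonic of its time-activity curve; PSD is then the standard deviation of phase over the ventricular region of interest. A numpy sketch on synthetic frames (counts, frame number and ROI are invented):

```python
import numpy as np

def phase_map(frames):
    """frames: (n_frames, H, W) gated counts over one cardiac cycle.
    Returns the first-harmonic Fourier phase per pixel, in degrees."""
    n = frames.shape[0]
    t = 2 * np.pi * np.arange(n) / n
    c = np.tensordot(np.cos(t), frames, axes=1)   # cosine projection per pixel
    s = np.tensordot(np.sin(t), frames, axes=1)   # sine projection per pixel
    return np.degrees(np.arctan2(s, c))

def phase_sd(frames, roi):
    """Phase standard deviation over a boolean ventricular ROI."""
    return phase_map(frames)[roi].std()
```

A coordinated ventricle contracts nearly in phase (small PSD, about 10 degrees in the controls above); dyssynchronous wall motion scatters the pixel phases and inflates PSD.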
1983-03-01
In March 1981 the systematic measurement of 15 elements in airborne dust was started in the Coordinated Airborne Dust Program (LVPr) by the Association for the Promotion of Radionuclide Technology (AFR). Sampling was done under comparable conditions at five selected places within the Federal Republic of Germany using especially developed large-filter high-volume samplers. The aim of this research is to establish the foundation for further investigations of the effects of the currently observed element concentrations on human life. When the results of the first half-year (summer period) were at hand, the element concentrations, which had been analysed using different methods, were presented to a group of experts, together with the experience gained with the analytical methods, in order to critically assess the procedure and philosophy of the study. This evaluation took place at a colloquium on June 29, 1982 at the Karlsruhe Nuclear Research Centre. The present AFR report contains the papers and discussions of this meeting as well as the average element data for the sampling period between the 15th and 40th week of 1981. The discussion contributions presented here correspond to the essential statements that were given and recorded. A complete classification of all data relating to the whole sampling period of the LVPr will be given in AFR Report No. 007. (orig.)
Sakurada, Takeshi; Ito, Koji; Gomi, Hiroaki
2016-01-01
Although strong motor coordination in intrinsic muscle coordinates has frequently been reported for bimanual movements, coordination in extrinsic visual coordinates is also crucial in various bimanual tasks. To explore the bimanual coordination mechanisms in terms of the frame of reference, here we characterized implicit bilateral interactions in visuomotor tasks. Visual perturbations (finger-cursor gain change) were applied while participants performed a rhythmic tracking task with both index fingers under an in-phase or anti-phase relationship in extrinsic coordinates. When they corrected the right finger's amplitude, the left finger's amplitude unintentionally also changed [motor interference (MI)], despite the instruction to keep its amplitude constant. Notably, we observed two specificities: one was large MI and low relative-phase variability (PV) under the intrinsic in-phase condition, and the other was large MI and high PV under the extrinsic in-phase condition. Additionally, using a multiple-interaction model, we successfully decomposed MI into intrinsic components caused by motor correction and extrinsic components caused by visual-cursor mismatch of the right finger's movements. This analysis revealed that the central nervous system facilitates MI by combining intrinsic and extrinsic components in the condition with in-phases in both intrinsic and extrinsic coordinates, and that under-additivity of the effects is explained by the brain's preference for the intrinsic interaction over extrinsic interaction. In contrast, the PV was significantly correlated with the intrinsic component, suggesting that the intrinsic interaction dominantly contributed to bimanual movement stabilization. The inconsistent features of MI and PV suggest that the central nervous system regulates multiple levels of bilateral interactions for various bimanual tasks. © 2015 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and
Sahara; Jean L Ndeugueu; Masaru Aniya
2010-01-01
The temperature dependence of the viscosity of the trehalose-water-lithium iodide system has been investigated by means of the Bond Strength Coordination Number Fluctuation (BSCNF) model. The results indicate that increasing the trehalose content while keeping the LiI content constant decreases the fragility, due to the increased connectivity between the structural units. Our analysis also suggests that the fragility of the system is controlled by the amount of water in the composition. Increasing the water content decreases the total bond strength and increases its fluctuation, resulting in increased fragility. Based on the analysis of the obtained BSCNF model parameters, a physical interpretation of the VFT parameters reported in a previous study is given. (author)
Serrien, Ben; Hohenauer, Erich; Clijsen, Ron; Taube, Wolfgang; Baeyens, Jean-Pierre; Küng, Ursula
2017-11-01
How humans maintain balance, and how postural control changes with age, injury, immobility or training, is one of the basic questions in motor control. One of the problems in understanding postural control is the large set of degrees of freedom in the human motor system. Therefore, a self-organizing map (SOM), a type of artificial neural network, was used in the present study to extract and visualize information about high-dimensional balance strategies before and after a 6-week slackline training intervention. Thirteen subjects performed a flamingo and a slackline balance task before and after the training while full-body kinematics were measured. Range of motion, velocity and frequency of the center of mass and of the joint angles from the pelvis, trunk and lower leg (45 variables) were calculated and subsequently analyzed with an SOM. Subjects increased their standing time significantly on the flamingo (average +2.93 s, Cohen's d = 1.04) and slackline (+9.55 s, d = 3.28) tasks, but the effect size was more than three times larger for the slackline. The SOM analysis, followed by k-means clustering and a marginal homogeneity test, showed that the balance coordination pattern was significantly different between pre- and post-test for the slackline task only (χ² = 82.247). Post-training balance coordination on the slackline could be characterized by an increase in range of motion and a decrease in velocity and frequency in nearly all degrees of freedom simultaneously. The observation of low transfer of coordination strategies to the flamingo task adds further evidence for the task-specificity principle of balance training, meaning that slackline training alone will be insufficient to increase postural control in other challenging situations.
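The SOM stage of such an analysis can be sketched with a minimal numpy implementation. The grid size, training schedule, and the synthetic 45-variable "kinematic" feature vectors below are assumptions for illustration, not the study's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "balance" feature vectors: 45 kinematic variables per trial,
# pre- and post-training trials drawn from slightly shifted distributions.
pre = rng.normal(0.0, 1.0, size=(40, 45))
post = rng.normal(0.6, 1.0, size=(40, 45))
X = np.vstack([pre, post])

# Minimal self-organizing map: a 4x4 grid of 45-dim prototype vectors.
grid = np.array([(i, j) for i in range(4) for j in range(4)], dtype=float)
W = rng.normal(size=(16, 45))

for epoch in range(50):
    lr = 0.5 * (1 - epoch / 50)           # decaying learning rate
    sigma = 2.0 * (1 - epoch / 50) + 0.5  # decaying neighbourhood radius
    for x in X[rng.permutation(len(X))]:
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
        d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)    # grid distance to BMU
        h = np.exp(-d2 / (2 * sigma ** 2))            # neighbourhood kernel
        W += lr * h[:, None] * (x - W)

# Map every trial to its best-matching unit; a k-means step and marginal
# homogeneity test on these node labels would follow in the full pipeline.
bmus = np.array([np.argmin(((W - x) ** 2).sum(axis=1)) for x in X])
print(np.bincount(bmus, minlength=16))
```

The node-occupancy histogram plays the role of the coordination-pattern distribution compared between pre- and post-test.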
Tanaka, Hirokazu; Katura, Takusige; Sato, Hiroki
2013-01-01
Reproducibility of experimental results lies at the heart of scientific disciplines. Here we propose a signal processing method that extracts task-related components by maximizing the reproducibility during task periods from neuroimaging data. Unlike hypothesis-driven methods such as general linear models, no specific time courses are presumed, and unlike data-driven approaches such as independent component analysis, no arbitrary interpretation of components is needed. Task-related components are constructed by a linear, weighted sum of multiple time courses, and its weights are optimized so as to maximize inter-block correlations (CorrMax) or covariances (CovMax). Our analysis method is referred to as task-related component analysis (TRCA). The covariance maximization is formulated as a Rayleigh-Ritz eigenvalue problem, and corresponding eigenvectors give candidates of task-related components. In addition, a systematic statistical test based on eigenvalues is proposed, so task-related and -unrelated components are classified objectively and automatically. The proposed test of statistical significance is found to be independent of the degree of autocorrelation in data if the task duration is sufficiently longer than the temporal scale of autocorrelation, so TRCA can be applied to data with autocorrelation without any modification. We demonstrate that simple extensions of TRCA can provide most distinctive signals for two tasks and can integrate multiple modalities of information to remove task-unrelated artifacts. TRCA was successfully applied to synthetic data as well as near-infrared spectroscopy (NIRS) data of finger tapping. There were two statistically significant task-related components; one was a hemodynamic response, and another was a piece-wise linear time course. In summary, we conclude that TRCA has a wide range of applications in multi-channel biophysical and behavioral measurements. Copyright © 2012 Elsevier Inc. All rights reserved.
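The covariance-maximizing (CovMax) variant of TRCA leads to a generalized eigenvalue problem, which can be sketched on synthetic multi-block data. The channel count, block structure, noise level, and embedded component below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic multi-channel data: 8 channels, 4 task blocks of 200 samples.
# One latent task-related time course s is embedded reproducibly in each block.
n_ch, n_blk, n_t = 8, 4, 200
s = np.sin(2 * np.pi * np.arange(n_t) / 50)           # shared task component
mix = rng.normal(size=n_ch)                           # its spatial pattern
blocks = [mix[:, None] * s + 0.5 * rng.normal(size=(n_ch, n_t))
          for _ in range(n_blk)]

# CovMax TRCA: maximize the summed inter-block covariance w' S w subject to
# the total covariance w' Q w.
X = np.hstack(blocks)
Q = np.cov(X)
S = np.zeros((n_ch, n_ch))
for a in range(n_blk):
    for b in range(n_blk):
        if a != b:
            S += np.cov(blocks[a], blocks[b])[:n_ch, n_ch:]

# Generalized eigenvalue problem S w = lambda Q w, solved via Q^{-1} S.
evals, evecs = np.linalg.eig(np.linalg.solve(Q, S))
order = np.argsort(evals.real)[::-1]
w = evecs[:, order[0]].real                           # top task-related filter

y = w @ X                                             # extracted component
r = abs(np.corrcoef(y, np.tile(s, n_blk))[0, 1])
print(round(r, 2))
```

The eigenvalues sorted in `order` are what the abstract's statistical test would classify into task-related and task-unrelated components.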
Wada, Yuji; Yuge, Kohei; Tanaka, Hiroki; Nakamura, Kentaro
2017-07-01
Numerical analysis of the rotation of an ultrasonically levitated droplet in a centrifugal coordinate system is discussed. A droplet levitated in an acoustic chamber is simulated using the distributed point source method and the moving particle semi-implicit method. The centrifugal coordinate system is adopted to avoid the Laplacian differential error, which causes numerical divergence or inaccuracy in the global-coordinate calculation. Consequently, the duration of stable calculation has increased to 30 times that of the previous paper. Moreover, the droplet radius versus rotational acceleration characteristics show a trend similar to the theoretical and experimental values in the literature.
Hiley, Craig I; Playford, Helen Y; Fisher, Janet M; Felix, Noelia Cortes; Thompsett, David; Kashtiban, Reza J; Walton, Richard I
2018-02-07
Partial substitution of Ce 4+ by Nb 5+ is possible in CeO 2 by coinclusion of Na + to balance the charge, via hydrothermal synthesis in sodium hydroxide solution. Pair distribution function analysis using reverse Monte Carlo refinement reveals that the small pentavalent substituent resides in irregular coordination positions in an average fluorite lattice, displaced away from the ideal cubic coordination toward four oxygens. This results in under-coordinated oxygen, which explains significantly enhanced oxygen storage capacity of the materials of relevance to redox catalysis used in energy and environmental applications.
Khuat Thanh Tung
2016-11-01
Optical character recognition plays an important role in data storage and data mining as the number of documents stored as images increases. Effective ways to convert images of typewritten or printed text into machine-encoded text are needed to support information handling. In this paper, therefore, techniques used to convert images into editable text, such as principal component analysis, multilayer perceptron networks, self-organizing maps, and an improved multilayer neural network using principal component analysis, are evaluated experimentally. The obtained results indicate the effectiveness and feasibility of the proposed methods.
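As an illustration of PCA as a preprocessing step for character recognition, the sketch below projects toy 8x8 "glyphs" onto a few principal components and classifies them with a nearest-centroid rule standing in for the multilayer perceptron. All data and settings are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for character images: two 8x8 glyph templates plus pixel noise.
glyph_a = np.zeros((8, 8)); glyph_a[:, 3:5] = 1.0      # vertical bar ("I")
glyph_b = np.zeros((8, 8)); glyph_b[3:5, :] = 1.0      # horizontal bar ("-")
X = np.array([(g + 0.3 * rng.normal(size=(8, 8))).ravel()
              for g in [glyph_a, glyph_b] for _ in range(50)])
y = np.repeat([0, 1], 50)

# Principal component analysis via SVD: project 64-dim images to 5 components.
Xc = X - X.mean(axis=0)
U, sing, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T

# Nearest-centroid classifier in the reduced space (stand-in for the MLP).
centroids = np.array([Z[y == c].mean(axis=0) for c in [0, 1]])
pred = np.argmin(((Z[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)
print((pred == y).mean())
```

The dimensionality reduction (64 pixels to 5 components) is what makes the downstream network smaller and faster to train.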
Suryakant B. Chandgude
2015-09-01
The optimum selection of process parameters plays an important role in improving surface finish, minimizing tool wear, increasing the material removal rate and reducing the machining time of any machining process. In this paper, optimum parameters for machining AISI D2 hardened steel using a solid carbide TiAlN-coated end mill have been investigated. For the optimization of process parameters over multiple quality characteristics, the principal components analysis method has been adopted in this work. The confirmation experiments revealed that the principal components analysis method is a useful tool for improving cutting performance.
Hsieh, B.J.
1977-01-01
A rectilinear shell element formulated in convected (co-rotational) coordinates is used to investigate the effects of edge conditions on the behavior of thin shells of revolution under suddenly applied uniform loading. The equivalent generalized nodal forces under uniform loading are computed to the third order of the length of each element. A dynamic buckling load is defined as the load at which a great change in the response is observed for a small change in the loading. The problem studied is a shallow spherical cap. The cap is discretized into a finite number of elements. This discretization introduces some initial imperfections into the shell model. Nonetheless, the effect of this artificial imperfection is isolated from the effect of the edge conditions provided the same number of elements is used in all cases. Four different edge conditions for the cap are used: fixed edge, hinged edge, roller edge and free edge. The apex displacement of the cap is taken as the measure of the response, and the dynamic buckling load is obtained by examining the response of the cap under different levels of loading. Dynamic buckling loads can be found for all cases except the free edge. They are 0.28q for both the fixed and hinged cases and 0.13q for the roller case, where q is the classical static buckling load of a complete spherical shell with the same geometric dimensions and material properties. In the case of the free edge, the motion of the cap is composed mostly of rigid-body motion and small vibrations. The vibration of the cap is stable up to a loading of 1q. The cap does snap through at higher loading; however, no loading can be clearly identified as a buckling load.
Firestone, R.B.; Trkov, A.
2005-10-01
Potential problems associated with nuclear data for neutron activation analysis were identified, the scope of the work to be undertaken was defined together with its priorities, and tasks were assigned to participants. Data testing and measurements refer to gamma spectrum peak evaluations, detector efficiency calibration, neutron spectrum characteristics and reference materials analysis. (author)
Foch, Eric; Milner, Clare E
2014-01-03
Iliotibial band syndrome (ITBS) is a common knee overuse injury among female runners. Atypical discrete trunk and lower extremity biomechanics during running may be associated with the etiology of ITBS. Examining discrete data points limits the interpretation of a waveform to a single value. Characterizing entire kinematic and kinetic waveforms may provide additional insight into biomechanical factors associated with ITBS. Therefore, the purpose of this cross-sectional investigation was to determine whether female runners with previous ITBS exhibited differences in kinematics and kinetics compared to controls using a principal components analysis (PCA) approach. Forty participants comprised two groups: previous ITBS and controls. Principal component scores were retained for the first three principal components and were analyzed using independent t-tests. The retained principal components accounted for 93-99% of the total variance within each waveform. Runners with previous ITBS exhibited low principal component one scores for frontal plane hip angle. Principal component one accounted for the overall magnitude in hip adduction which indicated that runners with previous ITBS assumed less hip adduction throughout stance. No differences in the remaining retained principal component scores for the waveforms were detected among groups. A smaller hip adduction angle throughout the stance phase of running may be a compensatory strategy to limit iliotibial band strain. This running strategy may have persisted after ITBS symptoms subsided. © 2013 Published by Elsevier Ltd.
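The waveform-PCA approach can be sketched as follows. Synthetic stance-phase curves stand in for the hip-adduction data, and a hand-computed Welch t-statistic stands in for the reported independent t-tests; group sizes and effect magnitude are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stance-phase hip-adduction waveforms (101 time points, 20 runners
# per group); the "previous ITBS" group has a smaller overall magnitude.
t = np.linspace(0, 1, 101)
shape = np.sin(np.pi * t)                      # generic stance-phase shape
controls = 10 * shape + rng.normal(0, 1, (20, 101))
itbs = 7 * shape + rng.normal(0, 1, (20, 101))
X = np.vstack([controls, itbs])

# PCA of the whole waveforms; PC1 typically captures overall magnitude.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T                         # retain first three PCs
explained = (s[:3] ** 2).sum() / (s ** 2).sum()

# Welch t-test on PC1 scores between groups (computed by hand).
a, b = scores[:20, 0], scores[20:, 0]
tstat = (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / 20 + b.var(ddof=1) / 20)
print(round(explained, 2), round(abs(tstat), 1))
```

Comparing PC scores rather than discrete data points is what lets the analysis use the entire waveform instead of a single value.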
A novel approach to analyzing fMRI and SNP data via parallel independent component analysis
Liu, Jingyu; Pearlson, Godfrey; Calhoun, Vince; Windemuth, Andreas
2007-03-01
There is current interest in understanding genetic influences on brain function in both the healthy and the disordered brain. Parallel independent component analysis, a new method for analyzing multimodal data, is proposed in this paper and applied to functional magnetic resonance imaging (fMRI) and a single nucleotide polymorphism (SNP) array. The method aims to identify the independent components of each modality and the relationship between the two modalities. We analyzed 92 participants, including 29 schizophrenia (SZ) patients, 13 unaffected SZ relatives, and 50 healthy controls. We found a correlation of 0.79 between one fMRI component and one SNP component. The fMRI component consists of activations in cingulate gyrus, multiple frontal gyri, and superior temporal gyrus. The related SNP component is contributed to significantly by 9 SNPs located in sets of genes, including those coding for apolipoprotein A-I, and C-III, malate dehydrogenase 1 and the gamma-aminobutyric acid alpha-2 receptor. A significant difference in the presences of this SNP component is found between the SZ group (SZ patients and their relatives) and the control group. In summary, we constructed a framework to identify the interactions between brain functional and genetic information; our findings provide new insight into understanding genetic influences on brain function in a common mental disorder.
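Parallel ICA couples the two decompositions during optimization; as a simplified two-stage stand-in, the sketch below runs a small deflationary FastICA separately on each simulated modality and then correlates the subject-wise component loadings across modalities. The data sizes, noise levels, and the two-stage simplification are all assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)

def fastica(X, n_comp, n_iter=200):
    """Deflationary FastICA (tanh nonlinearity) on rows-as-samples data X."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)   # whitening via SVD
    K = (Vt[:n_comp] / s[:n_comp, None]) * np.sqrt(len(X))
    Z = Xc @ K.T                                  # whitened (samples x comps)
    W = np.zeros((n_comp, n_comp))
    for i in range(n_comp):
        w = rng.normal(size=n_comp)
        for _ in range(n_iter):
            g = np.tanh(Z @ w)
            w_new = (Z * g[:, None]).mean(0) - (1 - g ** 2).mean() * w
            w_new -= W[:i].T @ (W[:i] @ w_new)    # deflation (orthogonalize)
            w = w_new / np.linalg.norm(w_new)
        W[i] = w
    return Z @ W.T                                # sources (samples x comps)

# Simulated modalities sharing one non-Gaussian subject profile.
n_sub = 60
u_shared = rng.laplace(size=n_sub)
fmri = (np.outer(u_shared, rng.normal(size=50))
        + np.outer(rng.laplace(size=n_sub), rng.normal(size=50))
        + 0.3 * rng.normal(size=(n_sub, 50)))
snps = np.outer(u_shared, rng.normal(size=40)) + 0.3 * rng.normal(size=(n_sub, 40))

S_f = fastica(fmri, 2)
S_g = fastica(snps, 2)
# Link step: correlate subject-wise loadings across modalities and keep
# the best-matched component pair (cf. the reported r = 0.79 pairing).
corr = np.corrcoef(S_f.T, S_g.T)[:2, 2:]
print(round(np.abs(corr).max(), 2))
```

A high cross-modal correlation for one pair, as here, is the kind of linked fMRI/SNP component pair the method is designed to surface.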
Fan Wang
2016-12-01
Supply chain sustainability has become significantly important in the fashion industry, and more and more fashion brands have invested in developing sustainable supply chains. We note that a dual-channel system comprising a brand-owned direct channel and a retail outsourcing channel is quite common in the fashion industry, and in the latter, buy-back contracts are popular between brands and retailers. Therefore, we build a stylized dual-channel model with price competition and demand uncertainty to characterize the main properties of a fashion supply chain. Our foci are the sustainability analysis and the channel coordination mechanism. We first design a buy-back contract with return cost to coordinate the channel. We then study supply chain sustainability and examine the effect of two key influencing factors, i.e., price competition and demand uncertainty. Interestingly, we find that fiercer price competition leads to a more sustainable supply chain. From the perspective of supply chain managers, we conclude that (1) if managers care about environmental sustainability, fierce price competition is not a suggested strategy; (2) if managers care about economic sustainability, fierce price competition is an advantageous strategy. We also find that high demand uncertainty results in a less sustainable supply chain, in both the environmental and the economic sense.
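The paper's dual-channel contract with return cost is not reproduced here; as background, the textbook single-retailer newsvendor result shows how a buy-back price aligns the retailer's critical ratio with the integrated channel's, which is the essence of buy-back coordination. All numbers are hypothetical:

```python
import numpy as np

# Hypothetical parameters: retail price p, unit production cost c, and
# wholesale price w; demand is Normal(mu, sigma), truncated at zero.
p, c, w = 10.0, 4.0, 7.0
mu, sigma = 100.0, 20.0

# Integrated (centralized) channel stocks to the critical ratio (p - c) / p.
ratio_central = (p - c) / p

# Coordinating buy-back price: choose b so the retailer's critical ratio
# (p - w) / (p - b) equals the integrated one.
b = p - p * (p - w) / (p - c)
ratio_retail = (p - w) / (p - b)

# Both parties then order the same newsvendor quantity (empirical quantile).
demand = np.maximum(np.random.default_rng(5).normal(mu, sigma, 100_000), 0)
q_central = np.quantile(demand, ratio_central)
q_retail = np.quantile(demand, ratio_retail)
print(b, round(q_central, 1), round(q_retail, 1))
```

With matched critical ratios the retailer's privately optimal order equals the channel-optimal one, which is what "coordination" means here.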
Principle of maximum entropy for reliability analysis in the design of machine components
Zhang, Yimin
2018-03-01
We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.
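A minimal Monte Carlo sketch of the moment computations follows; the limit-state function and its distributions are hypothetical, and the paper's PME density-estimation step is not reproduced, only the moment extraction and a two-moment failure-probability baseline:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(6)

# Hypothetical limit-state function for a component: g = strength - stress.
# Strength is lognormal (a typical non-normal case); stress is Gumbel-like.
n = 200_000
strength = rng.lognormal(mean=5.0, sigma=0.1, size=n)
stress = 100 + 10 * rng.gumbel(size=n)
g = strength - stress

# First four moments of the state function, as used for reliability design.
m1 = g.mean()
m2 = g.std(ddof=1)
m3 = ((g - m1) ** 3).mean() / m2 ** 3      # skewness
m4 = ((g - m1) ** 4).mean() / m2 ** 4      # kurtosis

# Failure probability: empirical, versus the normal (two-moment) estimate.
# A PME density fitted to m1..m4 would refine the latter for non-normal g.
pf_mc = (g < 0).mean()
beta = m1 / m2
pf_norm = 0.5 * (1 - erf(beta / sqrt(2)))
print(round(beta, 2), round(pf_mc, 4), round(pf_norm, 4))
```

The gap between the empirical and two-moment estimates is the motivation for carrying the third and fourth moments into a maximum-entropy density.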
Yamaoki, Rumi; Kimura, Shojiro; Ohta, Masatoshi
2010-01-01
Electron spin resonance (ESR) spectral characterizations of gingers irradiated with electron beam were studied. Complex asymmetrical spectra (near g=2.005) with major spectral components (line width=2.4 mT) and minor signals (at 6 mT apart) were observed in irradiated gingers. The spectral intensity decreased considerably 30 days after irradiation, and continued to decrease steadily thereafter. The spectra simulated on the basis of characteristics of free radical components derived from carbohydrates in gingers are in good agreement with the observed spectra. Analysis showed that shortly after irradiation the major radical components of gingers were composed of radical species derived from amylose and cellulose, and the amylose radicals subsequently decreased considerably. At 30 days after irradiation, the major radical components of gingers were composed of radical species derived from cellulose, glucose, fructose or sucrose.
Principal component analysis for neural electron/jet discrimination in highly segmented calorimeters
Vassali, M.R.; Seixas, J.M.
2001-01-01
A neural electron/jet discriminator based on calorimetry is developed for the second-level trigger system of the ATLAS detector. As preprocessing of the calorimeter information, a principal component analysis is performed on each segment of the two sections (electromagnetic and hadronic) of the calorimeter system, in order to reduce significantly the dimension of the input data space and fully explore the detailed energy deposition profile, which is provided by the highly-segmented calorimeter system. It is shown that projecting calorimeter data onto 33 segmented principal components, the discrimination efficiency of the neural classifier reaches 98.9% for electrons (with only 1% of false alarm probability). Furthermore, restricting data projection onto only 9 components, an electron efficiency of 99.1% is achieved (with 3% of false alarm), which confirms that a fast triggering system may be designed using few components
Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S
2017-06-01
Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.
On the structure of dynamic principal component analysis used in statistical process monitoring
Vanhatalo, Erik; Kulahci, Murat; Bergquist, Bjarne
2017-01-01
When principal component analysis (PCA) is used for statistical process monitoring it relies on the assumption that data are time independent. However, industrial data will often exhibit serial correlation. Dynamic PCA (DPCA) has been suggested as a remedy for high-dimensional and time-dependent data. […] a data-driven method to determine the maximum number of lags in DPCA with a foundation in multivariate time series analysis. The method is based on the behavior of the eigenvalues of the lagged autocorrelation and partial autocorrelation matrices. Given a specific lag structure we also propose a method for determining the number of principal components to retain. The number of retained principal components is determined by visual inspection of the serial correlation in the squared prediction error statistic, Q (SPE), together with the cumulative explained variance of the model. The methods are illustrated using […]
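The core DPCA construction, augmenting each observation with lagged copies before ordinary PCA, can be sketched as follows; the AR(1) example data and lag count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Serially correlated process data: 3 variables driven by one AR(1) factor.
n = 1000
f = np.zeros(n)
for t in range(1, n):
    f[t] = 0.9 * f[t - 1] + rng.normal()
X = np.column_stack([f,
                     0.5 * f + rng.normal(size=n),
                     -f + rng.normal(size=n)])

def dpca_matrix(X, lags):
    """Augment each observation with `lags` time-shifted copies (DPCA)."""
    n, p = X.shape
    cols = [X[lags - l : n - l] for l in range(lags + 1)]
    return np.hstack(cols)

Xa = dpca_matrix(X, lags=2)        # rows: [x_t, x_{t-1}, x_{t-2}]
Xc = Xa - Xa.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / (s ** 2).sum()
print(Xa.shape, np.round(explained[:3], 2))
```

Ordinary PCA on the augmented matrix captures the cross-lag correlation structure that static PCA misses; choosing `lags` is exactly the lag-selection problem the abstract addresses.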
State of the art seismic analysis for CANDU reactor structure components using condensation method
Soliman, S A; Ibraham, A M; Hodgson, S [Atomic Energy of Canada Ltd., Saskatoon, SK (Canada)
1996-12-31
The reactor structure assembly seismic analysis is a relatively complex process because of the intricate geometry with many different discontinuities, and due to the hydraulic attached mass which follows the structure during its vibration. In order to simulate reasonably accurate behaviour of the reactor structure assembly, detailed finite element models are generated and used for both modal and stress analysis. Guyan reduction condensation method was used in the analysis. The attached mass, which includes the fluid mass contained in the components plus the added mass which accounts for the inertia of the surrounding fluid entrained by the accelerating structure immersed in the fluid, was calculated and attached to the vibrating structures. The masses of the attached components, supported partly or totally by the assembly which includes piping, reactivity control units, end fittings, etc. are also considered in the analysis. (author). 4 refs., 6 tabs., 4 figs.
Implementation of the structural integrity analysis for PWR primary components and piping
Pellissier-Tanon, A.
1982-01-01
The trends on the definition, the assessment and the application of fracture strength evaluation methodology, which have arisen through experience in the design, construction and operation of French 900-MW plants are reviewed. The main features of the methodology proposed in a draft of Appendix ZG of the RCC-M code of practice for the design verification of fracture strength of primary components are presented. The research programs are surveyed and discussed from four viewpoints, first implementation of the LEFM analysis, secondly implementation of the fatigue crack propagation analysis, thirdly analysis of vessel integrity during emergency core cooling, and fourthly methodology for tear fracture analysis. (author)
2002-06-01
This report is a summary of the work performed under a co-ordinated research project (CRP) entitled Verification of Analysis Methods for Predicting the Behaviour of Seismically isolated Nuclear Structures. The project was organized by the IAEA on the recommendation of the IAEA's Technical Working Group on Fast Reactors (TWGFR) and carried out from 1996 to 1999. One of the primary requirements for nuclear power plants and facilities is to ensure safety and the absence of damage under strong external dynamic loading from, for example, earthquakes. The designs of liquid metal cooled fast reactors (LMFRs) include systems which operate at low pressure and include components which are thin-walled and flexible. These systems and components could be considerably affected by earthquakes in seismic zones. Therefore, the IAEA through its advanced reactor technology development programme supports the activities of Member States to apply seismic isolation technology to LMFRs. The application of this technology to LMFRs and other nuclear plants and related facilities would offer the advantage that standard designs may be safely used in areas with a seismic risk. The technology may also provide a means of seismically upgrading nuclear facilities. Design analyses applied to such critical structures need to be firmly established, and the CRP provided a valuable tool in assessing their reliability. Ten organizations from India, Italy, Japan, the Republic of Korea, the Russian Federation, the United Kingdom, the United States of America and the European Commission co-operated in this CRP. This report documents the CRP activities, provides the main results and recommendations and includes the work carried out by the research groups at the participating institutes within the CRP on verification of their analysis methods for predicting the behaviour of seismically isolated nuclear structures
Van Mechelen, Iven; Kiers, Henk A.L.
1999-01-01
The three-mode component analysis model is discussed as a tool for a contextualized study of personality. When applied to person x situation x response data, the model includes sets of latent dimensions for persons, situations, and responses as well as a so-called core array, which may be considered
2013-06-01
"Robust independent component analysis by iterative maximization of the kurtosis contrast with algebraic optimal step size", IEEE Transactions on Neural Networks, vol. 21, no. 2 (http://www.i3s.unice.fr/~zarzoso/biblio/tnn10.pdf).
Harmonic Stability Analysis of Offshore Wind Farm with Component Connection Method
Hou, Peng; Ebrahimzadeh, Esmaeil; Wang, Xiongfei
2017-01-01
In this paper, an eigenvalue-based harmonic stability analysis method for offshore wind farms is proposed. Considering the internal cable connection layout, a component connection method (CCM) is adopted to divide the system into individual blocks, such as the current controllers of the converters and the LCL filters […]
Hendrix, Dean
2010-01-01
This study analyzed 2005-2006 Web of Science bibliometric data from institutions belonging to the Association of Research Libraries (ARL) and corresponding ARL statistics to find any associations between indicators from the two data sets. Principal components analysis on 36 variables from 103 universities revealed obvious associations between…
Analysis of diffusivity of the oscillating reaction components in a microreactor system
Martina Šafranko
2017-01-01
When performing oscillating reactions, periodic changes in the concentrations of reactants, intermediates, and products take place. Because of these periodic concentration changes, information about the diffusivity of the components involved in oscillating reactions is very important for controlling the reactions. Non-linear dynamics makes oscillating reactions very interesting for analysis in different reactor systems. In this paper, the diffusivity of the oscillating reaction components was analyzed in a microreactor, with the aim of identifying the limiting component. The geometry of the microreactor microchannel and a well-defined flow profile ensure optimal conditions for analyzing diffusion phenomena, because diffusion profiles in a microreactor depend only on the residence time. The analysis was performed in a microreactor equipped with two Y-shaped inlets and two Y-shaped outlets, with an active volume of V = 4 μL, at different residence times.
Salvatore, Stefania; Røislien, Jo; Baz-Lomba, Jose A; Bramness, Jørgen G
2017-03-01
Wastewater-based epidemiology is an alternative method for estimating the collective drug use in a community. We applied functional data analysis, a statistical framework developed for analysing curve data, to investigate weekly temporal patterns in wastewater measurements of three prescription drugs with known abuse potential: methadone, oxazepam and methylphenidate, comparing them to positive and negative control drugs. Sewage samples were collected in February 2014 from a wastewater treatment plant in Oslo, Norway. The weekly pattern of each drug was extracted by fitting generalized additive models, using trigonometric functions to model the cyclic behaviour. From the weekly component, the main temporal features were then extracted using functional principal component analysis. Results are presented through the functional principal components (FPCs) and corresponding FPC scores. Clinically, the most important weekly feature of the wastewater-based epidemiology data was the second FPC, representing the difference between the average midweek level and a peak during the weekend, indicating possible recreational use of a drug in the weekend. Estimated scores on this FPC indicated recreational use of methylphenidate, with a high weekend peak, but not of methadone or oxazepam. The functional principal component analysis uncovered clinically important temporal features of the weekly patterns of prescription drug use detected from wastewater analysis. This may be used as a post-marketing surveillance method to monitor prescription drugs with abuse potential. Copyright © 2016 John Wiley & Sons, Ltd.
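Discretizing the weekly curves to daily values, the FPCA step can be sketched as follows; the simulated loads, the weekend-bump shape, and the two-drug contrast are assumptions for illustration, and the GAM fitting stage is replaced by the raw daily values:

```python
import numpy as np

rng = np.random.default_rng(8)

# Simulated daily wastewater loads for several weeks: a smooth weekly cycle
# plus noise. Drug A has a weekend bump ("recreational"); drug B is flat.
days = np.arange(7)
weekend_peak = np.exp(-0.5 * ((days - 5.5) / 1.0) ** 2)   # Sat/Sun bump
weeks_drugA = np.array([10 + 4 * weekend_peak + 0.3 * rng.normal(size=7)
                        for _ in range(20)])
weeks_drugB = np.array([10 + 0.3 * rng.normal(size=7)
                        for _ in range(20)])
curves = np.vstack([weeks_drugA, weeks_drugB])

# Functional PCA on the weekly curves (discretized to 7 points): the PCs of
# the centred curves are the functional principal components.
centred = curves - curves.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ Vt[0]                   # scores on the first FPC

# Here the first FPC contrasts midweek level against the weekend peak,
# separating the drug with weekend use from the flat one via its FPC scores.
sep = abs(scores[:20].mean() - scores[20:].mean()) / scores.std()
print(round(sep, 1))
```

Large FPC scores for the weekend-contrast component are the signal the abstract interprets as possible recreational use.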
Analysis of Thermo-Mechanical Distortions in Sliding Components : An ALE Approach
Owczarek, P.; Geijselaers, H.J.M.
2008-01-01
A numerical technique for analysis of heat transfer and thermal distortion in reciprocating sliding components is proposed. In this paper we utilize the Arbitrary Lagrangian Eulerian (ALE) description where the mesh displacement can be controlled independently from the material displacement. A