WorldWideScience

Sample records for component analysis technique

  1. Constrained principal component analysis and related techniques

    CERN Document Server

    Takane, Yoshio

    2013-01-01

    In multivariate data analysis, regression techniques predict one set of variables from another, while principal component analysis (PCA) finds a subspace of minimal dimensionality that captures the largest variability in the data. How can regression analysis and PCA be combined in a beneficial way? Why and when is it a good idea to combine them? What kinds of benefits do they bring? Addressing these questions, Constrained Principal Component Analysis and Related Techniques shows how constrained PCA (CPCA) offers a unified framework for these approaches. The book begins with four concre…
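
    As a rough illustration of the CPCA idea sketched in this abstract (an external analysis that regresses the data on constraints, followed by PCA of the explained and residual parts), here is a minimal numpy sketch; the data and constraint matrices are synthetic stand-ins and the details of Takane's formulation are not reproduced.

      import numpy as np

      rng = np.random.default_rng(0)
      Z = rng.standard_normal((100, 6))    # data matrix: 100 observations, 6 variables
      G = rng.standard_normal((100, 2))    # external constraints on the observations

      # External analysis: split Z into the part explained by G and the residual.
      P = G @ np.linalg.pinv(G)            # orthogonal projector onto col(G)
      Z_fit = P @ Z                        # constrained (explained) part
      Z_res = Z - Z_fit                    # residual part

      # Internal analysis: PCA (via SVD) of each part separately.
      for name, part in (("constrained", Z_fit), ("residual", Z_res)):
          U, s, Vt = np.linalg.svd(part - part.mean(axis=0), full_matrices=False)
          evr = s**2 / (s**2).sum()        # explained-variance ratios
          print(name, evr[:2])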

  2. Nuclear analysis techniques as a component of thermoluminescence dating

    Energy Technology Data Exchange (ETDEWEB)

    Prescott, J R; Hutton, J T; Habermehl, M A [Adelaide Univ., SA (Australia)]; Van Moort, J [Tasmania Univ., Sandy Bay, TAS (Australia)]

    1997-12-31

    In luminescence dating, an age is found by first measuring the dose accumulated since the event being dated, then dividing by the annual dose rate. Analyses of minor and trace elements performed by nuclear techniques have long formed an essential component of dating. Results from some Australian sites are reported to illustrate the application of nuclear techniques of analysis in this context. In particular, a variety of methods for finding dose rates are compared, an example is given of a site where radioactive disequilibrium is significant, and a problem which was not resolved by nuclear techniques is briefly summarized. 5 refs., 2 tabs.
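
    The age equation quoted in this abstract is simple enough to state as a two-line computation; the dose values below are hypothetical, purely for illustration.

      # Luminescence age = accumulated (equivalent) dose / annual dose rate.
      equivalent_dose_gy = 25.0          # Gy, hypothetical accumulated dose
      annual_dose_rate_gy = 2.5e-3       # Gy per year, hypothetical
      age_years = equivalent_dose_gy / annual_dose_rate_gy
      print(age_years)                   # 10000.0, i.e. an age of ~10 ka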

  3. Multivariate analysis of remote LIBS spectra using partial least squares, principal component analysis, and related techniques

    Energy Technology Data Exchange (ETDEWEB)

    Clegg, Samuel M [Los Alamos National Laboratory]; Barefield, James E [Los Alamos National Laboratory]; Wiens, Roger C [Los Alamos National Laboratory]; Sklute, Elizabeth [MT HOLYOKE COLLEGE]; Dyare, Melinda D [MT HOLYOKE COLLEGE]

    2008-01-01

    Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by chemical matrix effects. These matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, so that a series of calibration standards similar to the unknown can be employed. In this paper, three new multivariate analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly-metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.
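
    A minimal scikit-learn sketch of the PLS calibration step the abstract describes; the spectra and concentrations are random stand-ins for real LIBS data and reference compositions, and real work would add calibration standards and cross-validation.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(1)
      spectra = rng.standard_normal((18, 500))   # 18 LIBS spectra, 500 channels
      comps = rng.standard_normal((18, 3))       # 3 element concentrations per sample

      pls = PLSRegression(n_components=5)        # calibration model
      pls.fit(spectra, comps)
      predicted = pls.predict(spectra)           # composition predicted from spectra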

  4. Synchrotron-Based Microspectroscopic Analysis of Molecular and Biopolymer Structures Using Multivariate Techniques and Advanced Multi-Components Modeling

    International Nuclear Information System (INIS)

    Yu, P.

    2008-01-01

    Recently, an advanced synchrotron radiation-based bioanalytical technique (SRFTIRM) has been applied as a novel non-invasive analysis tool to study molecular, functional group and biopolymer chemistry, nutrient make-up and structural conformation in biomaterials. This novel synchrotron technique, taking advantage of bright synchrotron light (which is millions of times brighter than sunlight), is capable of exploring biomaterials at the molecular and cellular levels. However, with the SRFTIRM technique, a large number of molecular spectral data are usually collected. The objective of this article was to illustrate how to use two multivariate statistical techniques, (1) agglomerative hierarchical cluster analysis (AHCA) and (2) principal component analysis (PCA), and two advanced multi-component modeling methods, (1) Gaussian and (2) Lorentzian multi-component peak modeling, for molecular spectrum analysis of bio-tissues. The studies indicated that the two multivariate analyses (AHCA, PCA) are able to create molecular spectral classifications by including not just one intensity or frequency point of a molecular spectrum, but the entire spectral information. Gaussian and Lorentzian modeling techniques are able to quantify the spectral component peaks of molecular structure, functional groups and biopolymers. By application of these four statistical methods of multivariate analysis and Gaussian and Lorentzian modeling, inherent molecular structures, functional group and biopolymer conformation between and among biological samples can be quantified, discriminated and classified with great efficiency.
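
    A minimal scipy sketch of the Gaussian/Lorentzian peak modeling mentioned above, fitting one Gaussian plus one Lorentzian to a synthetic two-peak spectrum; peak positions, widths and the wavenumber axis are invented for illustration.

      import numpy as np
      from scipy.optimize import curve_fit

      def gaussian(x, a, mu, sigma):
          return a * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

      def lorentzian(x, a, mu, gamma):
          return a * gamma ** 2 / ((x - mu) ** 2 + gamma ** 2)

      def two_peaks(x, a1, mu1, s1, a2, mu2, g2):
          return gaussian(x, a1, mu1, s1) + lorentzian(x, a2, mu2, g2)

      x = np.linspace(1600, 1700, 400)             # wavenumber axis, cm^-1
      y = two_peaks(x, 1.0, 1630.0, 8.0, 0.6, 1655.0, 6.0)
      y += 0.02 * np.random.default_rng(2).standard_normal(x.size)

      p0 = [1.0, 1630.0, 10.0, 0.5, 1655.0, 5.0]   # initial parameter guesses
      popt, _ = curve_fit(two_peaks, x, y, p0=p0)  # fitted component-peak parameters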

  5. Efficacy of the Principal Components Analysis Techniques Using ...

    African Journals Online (AJOL)

    Second, the paper reports results of principal components analysis after the artificial data were submitted to three commonly used procedures: the scree plot, the Kaiser rule, and modified Horn's parallel analysis, and demonstrates the pedagogical utility of using artificial data in teaching advanced quantitative concepts. The results ...

  6. Developing a complex independent component analysis technique to extract non-stationary patterns from geophysical time-series

    Science.gov (United States)

    Forootan, Ehsan; Kusche, Jürgen

    2016-04-01

    Geodetic/geophysical observations, such as the time series of global terrestrial water storage change or sea level and temperature change, represent samples of physical processes and therefore contain information about complex physical interactions with many inherent time scales. Extracting relevant information from these samples, for example quantifying the seasonality of a physical process or its variability due to large-scale ocean-atmosphere interactions, is not possible with simple time series approaches. In recent decades, decomposition techniques have attracted increasing interest for extracting patterns from geophysical observations. Traditionally, principal component analysis (PCA) and, more recently, independent component analysis (ICA) are common techniques to extract statistically orthogonal (uncorrelated) and independent modes that represent the maximum variance of observations, respectively. PCA and ICA can be classified as stationary signal decomposition techniques since they are based on decomposing the auto-covariance matrix or diagonalizing higher (than two)-order statistical tensors from centered time series. However, the stationarity assumption is obviously not justifiable for many geophysical and climate variables, even after removing cyclic components, e.g., the seasonal cycles. In this paper, we present a new decomposition method, complex independent component analysis (CICA, Forootan, PhD-2014), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA (Forootan and Kusche, JoG-2012), where we (i) define a new complex data set using a Hilbert transformation. The complex time series contain the observed values in their real part, and the temporal rate of variability in their imaginary part. (ii) An ICA algorithm based on diagonalization of fourth-order cumulants is then applied to decompose the new complex data set in (i).
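
    Step (i) of the abstract, building the complex data set via a Hilbert transformation, can be sketched with scipy; the data array is a synthetic stand-in, and the cumulant-based complex ICA of step (ii) is only indicated in a comment since it is not part of the standard scientific Python stack.

      import numpy as np
      from scipy.signal import hilbert

      rng = np.random.default_rng(3)
      X = rng.standard_normal((240, 50))   # 240 monthly epochs x 50 grid points

      # Step (i): complex data set whose real part holds the observed values and
      # whose imaginary part carries the temporal rate of variability.
      X_complex = hilbert(X - X.mean(axis=0), axis=0)

      # Step (ii) would diagonalize fourth-order cumulants of X_complex with a
      # complex ICA algorithm (e.g. a complex JADE); not sketched here.
      print(X_complex.dtype, X_complex.shape)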

  7. The application of principal component analysis to quantify technique in sports.

    Science.gov (United States)

    Federolf, P; Reid, R; Gilgien, M; Haugen, P; Smith, G

    2014-06-01

    In analyzing an athlete's "technique," sport scientists often focus on preselected variables that quantify important aspects of movement. In contrast, coaches and practitioners typically describe movements in terms of basic postures and movement components, using subjective and qualitative features. A challenge for sport scientists is finding an appropriate quantitative methodology that incorporates the holistic perspective of human observers. Using alpine ski racing as an example, this study explores principal component analysis (PCA) as a mathematical method to decompose a complex movement pattern into its main movement components. Ski racing movements were recorded by determining the three-dimensional coordinates of 26 points on each skier, which were subsequently interpreted as a 78-dimensional posture vector at each time point. PCA was then used to determine the mean posture and the principal movements (PMk) carried out by the athletes. The first four PMk contained 95.5 ± 0.5% of the variance in the posture vectors and quantified changes in body inclination, vertical or fore-aft movement of the trunk, and distance between skis. In summary, calculating PMk offered a data-driven, quantitative, and objective method of analyzing human movement that is similar to how human observers such as coaches or ski instructors would describe the movement. © 2012 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
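
    A minimal numpy sketch of the posture-vector PCA described above: the mean posture is removed and the SVD yields the principal movements. The posture data here are random stand-ins for real 78-dimensional marker coordinates.

      import numpy as np

      rng = np.random.default_rng(4)
      postures = rng.standard_normal((1000, 78))   # 1000 frames x 78-D posture vectors

      mean_posture = postures.mean(axis=0)         # the mean posture
      centered = postures - mean_posture
      U, s, Vt = np.linalg.svd(centered, full_matrices=False)

      explained = s ** 2 / (s ** 2).sum()
      print("variance in first four PMs:", explained[:4].sum())

      pm_scores = centered @ Vt[:4].T              # time courses of the first four PMs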

  8. A Biometric Face Recognition System Using an Algorithm Based on the Principal Component Analysis Technique

    Directory of Open Access Journals (Sweden)

    Gheorghe Gîlcă

    2015-06-01

    Full Text Available This article deals with a face recognition system using an algorithm based on the Principal Component Analysis (PCA) technique. The recognition system consists only of a PC and an integrated video camera. The algorithm is developed in the MATLAB language and calculates the eigenfaces, which are considered features of the face. The PCA technique matches the facial test image against the training prototype vectors: the matching score is computed between their coefficient vectors, and the highest match gives the best recognition. The results of the algorithm based on the PCA technique are very good, even if the person looks at the video camera from one side.
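
    A hedged sketch of the eigenface matching scheme the abstract outlines, using scikit-learn's PCA in place of the authors' MATLAB code; the image sizes and the distance-based score are illustrative assumptions.

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(5)
      train = rng.random((40, 64 * 64))      # 40 flattened training face images
      test = rng.random((1, 64 * 64))        # one flattened test image

      pca = PCA(n_components=20).fit(train)  # rows of pca.components_ ~ eigenfaces
      train_coef = pca.transform(train)      # coefficient vectors of the prototypes
      test_coef = pca.transform(test)

      # Matching score: distance between coefficient vectors; smallest wins.
      dists = np.linalg.norm(train_coef - test_coef, axis=1)
      best_match = int(np.argmin(dists))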

  9. Principal Components as a Data Reduction and Noise Reduction Technique

    Science.gov (United States)

    Imhoff, M. L.; Campbell, W. J.

    1982-01-01

    The potential of principal components as a pipeline data reduction technique for thematic mapper data was assessed, and the principal components transformation was examined as a noise reduction technique. Two primary factors were considered: (1) how data reduction and noise reduction using the principal components transformation might affect the extraction of accurate spectral classifications; and (2) what the real savings are, in terms of computer processing and storage costs, of using reduced data over the full 7-band TM complement. An area in central Pennsylvania was chosen as the study area. The image data for the project were collected using the Earth Resources Laboratory's thematic mapper simulator (TMS) instrument.

  10. Determination of arterial input function in dynamic susceptibility contrast MRI using group independent component analysis technique

    International Nuclear Information System (INIS)

    Chen, S.; Liu, H.-L.; Yang Yihong; Hsu, Y.-Y.; Chuang, K.-S.

    2006-01-01

    Quantification of cerebral blood flow (CBF) with dynamic susceptibility contrast (DSC) magnetic resonance imaging (MRI) requires the determination of the arterial input function (AIF). Segmentation of the surrounding tissue by manual selection is error-prone due to partial volume artifacts. Independent component analysis (ICA) has the advantage of automatically decomposing the signals into interpretable components. Recently, the group ICA technique has been applied to fMRI studies and shown to reduce the variance caused by motion artifacts and noise. In this work, we investigated the feasibility and efficacy of using the group ICA technique to extract the AIF. Both simulated and in vivo data were analyzed in this study. The simulated data of eight phantoms were generated using randomized lesion locations and time activity curves. The clinical data were obtained from spin-echo EPI MR scans performed on seven normal subjects. The group ICA technique was applied to analyze the data through concatenation across the seven subjects. The AIFs were calculated from the weighted average of the signals in the region selected by ICA. Preliminary results of this study showed that the group ICA technique could not extract accurate AIF information from regions around the vessel. The mismatched locations of vessels within the group reduced the benefits of the group study.

  11. Speckle noise reduction technique for Lidar echo signal based on self-adaptive pulse-matching independent component analysis

    Science.gov (United States)

    Xu, Fan; Wang, Jiaxing; Zhu, Daiyin; Tu, Qi

    2018-04-01

    Speckle noise has always been a particularly tricky problem in improving the ranging capability and accuracy of Lidar systems, especially in harsh environments. Effective speckle de-noising techniques are currently scarce and need further development. In this study, a speckle noise reduction technique is proposed based on independent component analysis (ICA). Since the shape of the laser pulse itself normally changes little, the authors employed the laser source as a reference pulse and executed the ICA decomposition to find the optimal matching position. To make the algorithm self-adaptive, the local Mean Square Error (MSE) is defined as the criterion for evaluating the iteration results. The experimental results demonstrate that the self-adaptive pulse-matching ICA (PM-ICA) method can effectively decrease speckle noise and recover the useful Lidar echo signal component with high quality. In particular, the proposed method achieves a 4 dB greater improvement in signal-to-noise ratio (SNR) than a traditional homomorphic wavelet method.

  12. Handbook of microwave component measurements with advanced VNA techniques

    CERN Document Server

    Dunsmore, Joel P

    2012-01-01

    This book provides state-of-the-art coverage for making measurements on RF and microwave components, both active and passive. A perfect reference for R&D and test engineers, with topics ranging from best practices for basic measurements to an in-depth analysis of errors, correction methods, and uncertainty analysis, this book provides everything you need to understand microwave measurements. With a primary focus on active and passive measurements using a Vector Network Analyzer, these techniques and analyses are equally applicable to measurements made with Spectrum Analyzers or Noise Figure…

  13. Principal Component Analysis: Most Favourite Tool in Chemometrics

    Indian Academy of Sciences (India)

    Abstract. Principal component analysis (PCA) is the most commonly used chemometric technique. It is an unsupervised pattern recognition technique. PCA has found applications in chemistry, biology, medicine and economics. The present work attempts to understand how PCA works and how we can interpret its results.
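
    Since this abstract is about how PCA works, a compact from-scratch version may help: center the data, take the SVD, and read off scores, loadings and explained variance. This is a generic sketch on synthetic data, not the paper's own code.

      import numpy as np

      def pca(X, k):
          """PCA via SVD: scores, loadings and explained-variance ratios."""
          Xc = X - X.mean(axis=0)                           # center each variable
          U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
          scores = Xc @ Vt[:k].T                            # projections onto k PCs
          explained = (s ** 2 / (s ** 2).sum())[:k]
          return scores, Vt[:k], explained

      X = np.random.default_rng(6).standard_normal((50, 10))
      scores, loadings, explained = pca(X, 2)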

  14. Reliability analysis techniques in power plant design

    International Nuclear Information System (INIS)

    Chang, N.E.

    1981-01-01

    An overview of reliability analysis techniques is presented as applied to power plant design. The key terms, power plant performance, reliability, availability and maintainability are defined. Reliability modeling, methods of analysis and component reliability data are briefly reviewed. Application of reliability analysis techniques from a design engineering approach to improving power plant productivity is discussed. (author)

  15. Incremental Tensor Principal Component Analysis for Handwritten Digit Recognition

    Directory of Open Access Journals (Sweden)

    Chang Liu

    2014-01-01

    Full Text Available To overcome the shortcomings of traditional dimensionality reduction algorithms, incremental tensor principal component analysis (ITPCA), based on an updated-SVD technique, is proposed in this paper. The paper proves the relationship between PCA, 2DPCA, MPCA, and the graph embedding framework theoretically, and derives in detail the incremental learning procedures for adding a single sample and for adding multiple samples. Experiments on handwritten digit recognition have demonstrated that ITPCA achieves better recognition performance than vector-based principal component analysis (PCA), incremental principal component analysis (IPCA), and multilinear principal component analysis (MPCA) algorithms. At the same time, ITPCA also has lower time and space complexity.
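
    The vector-based incremental PCA that ITPCA is compared against can be sketched with scikit-learn's IncrementalPCA; note this is the baseline IPCA of the abstract, not the tensor-based ITPCA itself, and the digit data below are synthetic stand-ins.

      import numpy as np
      from sklearn.decomposition import IncrementalPCA

      rng = np.random.default_rng(7)
      digits = rng.random((500, 64))            # stand-in for 8x8 digit images

      ipca = IncrementalPCA(n_components=16, batch_size=100)
      for batch in np.array_split(digits, 5):   # samples arrive batch by batch
          ipca.partial_fit(batch)               # update the subspace incrementally

      features = ipca.transform(digits)         # reduced features for a classifier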

  16. Functional Principal Components Analysis of Shanghai Stock Exchange 50 Index

    Directory of Open Access Journals (Sweden)

    Zhiliang Wang

    2014-01-01

    Full Text Available The main purpose of this paper is to explore the principal components of the Shanghai Stock Exchange 50 index by means of functional principal component analysis (FPCA). Functional data analysis (FDA) deals with random variables (or processes) with realizations in a smooth functional space. One of the most popular FDA techniques is functional principal component analysis, which was introduced for the statistical analysis of a set of financial time series from an explorative point of view. FPCA is the functional analogue of the well-known dimension reduction technique in multivariate statistical analysis, searching for linear transformations of the random vector with maximal variance. In this paper, we studied the monthly return volatility of the Shanghai Stock Exchange 50 index (SSE50). Using FPCA to reduce the dimension to a finite level, we extracted the most significant components of the data and some relevant statistical features of the related datasets. The calculated results show that regarding the samples as random functions is rational. Compared with ordinary principal component analysis, FPCA can solve the problem of different dimensions in the samples, and it is a convenient approach for extracting the main variance factors.

  17. Probabilistic techniques using Monte Carlo sampling for multi-component system diagnostics

    International Nuclear Information System (INIS)

    Aumeier, S.E.; Lee, J.C.; Akcasu, A.Z.

    1995-01-01

    We outline the structure of a new approach to multi-component system fault diagnostics which utilizes detailed system simulation models, uncertain system observation data, statistical knowledge of system parameters, expert opinion, and component reliability data in an effort to identify incipient component performance degradations of arbitrary number and magnitude. The technique involves the use of multiple adaptive Kalman filters for fault estimation, the results of which are screened using standard hypothesis testing procedures to define a set of component events that could have transpired. Latin Hypercube sampling is then used to evaluate each of these feasible component events in terms of uncertain component reliability data and filter estimates. The capabilities of the procedure are demonstrated through the analysis of a simulated small-magnitude binary component fault in a boiling water reactor balance of plant. The results show that the procedure has the potential to be a very effective tool for incipient component fault diagnosis.

  18. Passive RF component technology: materials, techniques, and applications

    CERN Document Server

    Wang, Guoan

    2012-01-01

    Focusing on novel materials and techniques, this pioneering volume provides you with a solid understanding of the design and fabrication of smart RF passive components. You find comprehensive details on LCP, metal materials, ferrite materials, nano materials, high-aspect-ratio-enabled materials, green materials for RFID, and silicon micromachining techniques. Moreover, this practical book offers expert guidance on how to apply these materials and techniques to design a wide range of cutting-edge RF passive components, from MEMS-switch-based tunable passives and 3D passives to metamaterial-based…

  19. Dynamic Modal Analysis of Vertical Machining Centre Components

    OpenAIRE

    Anayet U. Patwari; Waleed F. Faris; A. K. M. Nurul Amin; S. K. Loh

    2009-01-01

    The paper presents a systematic procedure and details of the use of experimental and analytical modal analysis techniques for the structural dynamic evaluation of a vertical machining centre. The main results deal with assessment of the mode shapes of the different components of the vertical machining centre. A simplified experimental modal analysis of different components of the milling machine was carried out. This model of the different machine tool structures is made by design software...

  1. Use of Sparse Principal Component Analysis (SPCA) for Fault Detection

    DEFF Research Database (Denmark)

    Gajjar, Shriram; Kulahci, Murat; Palazoglu, Ahmet

    2016-01-01

    Principal component analysis (PCA) has been widely used for data dimension reduction and process fault detection. However, interpreting the principal components and the outcomes of PCA-based monitoring techniques is a challenging task since each principal component is a linear combination of the ...
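
    A minimal scikit-learn sketch of the interpretability point made above: each sparse loading vector involves only a few process variables. The process data are random stand-ins.

      import numpy as np
      from sklearn.decomposition import SparsePCA

      rng = np.random.default_rng(8)
      X = rng.standard_normal((200, 30))     # 200 samples of 30 process variables

      spca = SparsePCA(n_components=5, alpha=1.0, random_state=0)
      scores = spca.fit_transform(X)

      # Each loading vector is sparse: its few nonzero entries name the process
      # variables driving that component, which eases interpretation.
      print((np.abs(spca.components_) > 1e-8).sum(axis=1))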

  2. Evaluation of flow-induced vibration prediction techniques for in-reactor components

    International Nuclear Information System (INIS)

    Mulcahy, T.M.; Turula, P.

    1975-05-01

    Selected in-reactor components of a hydraulic and structural dynamic scale model of the U.S. Energy Research and Development Administration experimental Fast Test Reactor have been studied in an effort to develop and evaluate techniques for predicting the vibration behavior of elastic structures exposed to a moving fluid. Existing analysis methods are used to compute the natural frequencies and modal shapes of submerged beam- and shell-type components. Component response is calculated, assuming as fluid forcing mechanisms both vortex shedding and random excitations characterized by the available hydraulic data. The free and forced vibration response predictions are compared with extensive model flow and shaker test data. (U.S.)

  3. Classification Technique for Ultrasonic Weld Inspection Signals using a Neural Network based on 2-Dimensional Fourier Transform and Principal Component Analysis

    International Nuclear Information System (INIS)

    Kim, Jae Joon

    2004-01-01

    Neural network-based signal classification systems are increasingly used in the analysis of large volumes of data obtained in NDE applications. Ultrasonic inspection methods, on the other hand, are commonly used in the nondestructive evaluation of welds to detect flaws. An important characteristic of ultrasonic inspection is the ability to identify the type of discontinuity that gives rise to a peculiar signal. Standard techniques rely on differences in individual A-scans to classify the signals. This paper proposes an ultrasonic signal classification technique based on the information lying in the neighboring signals. The approach is based on a 2-dimensional Fourier transform and principal component analysis to generate a reduced-dimensional feature vector for classification. Results of applying the technique to data obtained from the inspection of actual steel welds are presented.
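
    A rough sketch of the feature pipeline the abstract describes: a 2-D Fourier transform over a neighborhood of A-scans, then PCA for a reduced feature vector. The array sizes and the low-frequency crop are illustrative assumptions, and the neural-network classifier itself is omitted.

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(9)
      # 100 labeled neighborhoods, each 16 adjacent A-scans x 128 time samples
      neighborhoods = rng.standard_normal((100, 16, 128))

      # 2-D Fourier transform each neighborhood; keep low-frequency magnitudes
      # and flatten to one raw feature row per neighborhood.
      feats = np.abs(np.fft.fft2(neighborhoods, axes=(1, 2)))[:, :8, :16]
      feats = feats.reshape(len(feats), -1)

      # PCA reduces the feature dimension before a neural-network classifier.
      reduced = PCA(n_components=10).fit_transform(feats)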

  4. Application of principal component analysis and information fusion technique to detect hotspots in NOAA/AVHRR images of Jharia coalfield, India - article no. 013523

    Energy Technology Data Exchange (ETDEWEB)

    Gautam, R.S.; Singh, D.; Mittal, A. [Indian Institute of Technology Roorkee, Roorkee (India)

    2007-07-01

    The present paper proposes an algorithm for hotspot (sub-surface fire) detection in NOAA/AVHRR images of the Jharia region of India by employing Principal Component Analysis (PCA) and a fusion technique. The proposed technique is very simple to implement and is more adaptive in comparison to thresholding, multi-thresholding and contextual algorithms. The algorithm takes into account the information of AVHRR channels 1, 2, 3 and 4 and the vegetation indices NDVI and MSAVI. The proposed technique consists of three steps: (1) detection and removal of cloud and water pixels from the preprocessed AVHRR image and screening out the noise of channel 3, (2) application of PCA to the multi-channel information along with the vegetation index information of the NOAA/AVHRR image to obtain the principal components, and (3) fusion of the information obtained from principal components 1 and 2 to classify image pixels as hotspots. Image processing techniques are applied to fuse the information in the first two principal component images, and no absolute threshold is incorporated to specify whether a particular pixel belongs to the hotspot class; hence, the proposed method is adaptive in nature and works successfully for most AVHRR images, with an average detection accuracy of 87.27% and a false alarm rate of 0.201% when compared with ground truth points in the Jharia region of India.

  5. Tomato sorting using independent component analysis on spectral images

    NARCIS (Netherlands)

    Polder, G.; Heijden, van der G.W.A.M.; Young, I.T.

    2003-01-01

    Independent Component Analysis is one of the most widely used methods for blind source separation. In this paper we use this technique to estimate the most important compounds which play a role in the ripening of tomatoes. Spectral images of tomatoes were analyzed. Two main independent components…

  6. Probabilistic Principal Component Analysis for Metabolomic Data.

    LENUS (Irish Health Repository)

    Nyamundanda, Gift

    2010-11-23

    Abstract. Background: Data from metabolomic studies are typically complex and high-dimensional. Principal component analysis (PCA) is currently the most widely used statistical technique for analyzing metabolomic data. However, PCA is limited by the fact that it is not based on a statistical model. Results: Here, probabilistic principal component analysis (PPCA), which addresses some of the limitations of PCA, is reviewed and extended. A novel extension of PPCA, called probabilistic principal component and covariates analysis (PPCCA), is introduced, which provides a flexible approach to jointly model metabolomic data and additional covariate information. The use of a mixture of PPCA models for discovering the number of inherent groups in metabolomic data is demonstrated. The jackknife technique is employed to construct confidence intervals for estimated model parameters throughout. The optimal number of principal components is determined through the use of the Bayesian Information Criterion model selection tool, which is modified to address the high dimensionality of the data. Conclusions: The methods presented are illustrated through an application to metabolomic data sets. Jointly modeling metabolomic data and covariates was successfully achieved and has the potential to provide deeper insight into the underlying data structure. Examination of confidence intervals for the model parameters, such as loadings, allows for principled and clear interpretation of the underlying data structure. A software package called MetabolAnalyze, freely available through the R statistical software, has been developed to facilitate implementation of the presented methods in the metabolomics field.
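
    PPCA has a closed-form maximum likelihood solution (Tipping and Bishop), which can be sketched in a few lines of numpy; this generic sketch is not the MetabolAnalyze implementation, and the data are synthetic.

      import numpy as np

      def ppca_ml(X, q):
          """Closed-form maximum likelihood PPCA fit (Tipping & Bishop)."""
          Xc = X - X.mean(axis=0)
          S = np.cov(Xc, rowvar=False)               # sample covariance
          vals, vecs = np.linalg.eigh(S)
          vals, vecs = vals[::-1], vecs[:, ::-1]     # eigenvalues, descending
          sigma2 = vals[q:].mean()                   # noise = mean discarded eigenvalue
          W = vecs[:, :q] * np.sqrt(np.maximum(vals[:q] - sigma2, 0.0))
          return W, sigma2

      X = np.random.default_rng(10).standard_normal((100, 12))
      W, sigma2 = ppca_ml(X, q=3)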

  7. Analysis and Comparison of the Antioxidant Component of Portulaca Oleracea Leaves Obtained by Different Solid-Liquid Extraction Techniques

    Science.gov (United States)

    Conte, Esterina

    2017-01-01

    Portulaca oleracea is a wild plant, a pest of orchards and gardens, but it is also an edible vegetable rich in beneficial nutrients. It possesses many antioxidant properties due to its high content of vitamins, minerals, omega-3 essential fatty acids and other healthful compounds; therefore, the intake of purslane and/or its bioactive compounds could help to improve the health and function of the whole human organism. Accordingly, this work analyzed and compared the capacity of several solid-liquid extraction techniques to extract the antioxidant component of purslane leaves: hot maceration, maceration with ultrasound, rapid solid-liquid dynamic extraction using the Naviglio extractor, and a combination of two techniques (mix extraction). Chromatographic analysis by High Performance Liquid Chromatography (HPLC) of the methanolic extract of dried purslane leaves allowed the identification of various polyphenolic compounds by comparison with standards. In addition, the properties of the different extracts were calculated on a dry matter basis, and the antioxidant properties of the total polyphenol components were analyzed by the DPPH (2,2-diphenyl-1-picrylhydrazyl) assay. The results showed that mix extraction was the most efficient of the techniques examined: it yielded 237.8 mg Gallic Acid Equivalents (GAE)/100 g of fresh weight, while the other techniques ranged from 60–160 mg GAE/100 g fresh weight. In addition, a qualitative analysis by Liquid Chromatography-Tandem Mass Spectrometry (LC/MS/MS) of the phenolic compounds present in the purslane leaves was carried out. The compounds were identified by comparison of their molecular weight, fragmentation pattern and retention time with those of standards, using the "Multiple Reaction Monitoring" (MRM) mode. This study therefore allowed the re-evaluation of a little-known plant that possesses, among its beneficial properties, a…

  8. Setting Component Priorities in Protecting NPPs against Cyber-Attacks Using Reliability Analysis Techniques

    International Nuclear Information System (INIS)

    Choi, Moon Kyoung; Seong, Poong Hyun; Son, Han Seong

    2017-01-01

    The digitalization of infrastructure makes systems vulnerable to cyber threats and hybrid attacks. According to ICS-CERT reports, the number of vulnerabilities in ICS industries is increasing rapidly over time. Digital I and C systems have been developed and installed in nuclear power plants, and due to the installation of these digital I and C systems, cyber security concerns are increasing in the nuclear industry. However, there are too many critical digital assets to be inspected in digitalized NPPs. In order to reduce the inefficiency of regulation in nuclear facilities, the critical components that are directly related to an accident are elicited by using reliability analysis techniques. Target initiating events are selected, and their headings are analyzed through event tree analysis as to whether they can be affected by cyber-attacks. Among the headings, those whose failure under cyber-attack can proceed directly to core damage are finally selected as the targets for deriving the minimal cut-sets. We analyze the fault trees and derive the minimal cut-sets. In terms of the original PSA, the probability value of the cut-sets is important, but that probability is not important in terms of the cyber security of NPPs. The important factor is the number of basic events constituting the minimal cut-sets, which is proportional to vulnerability.

  9. Oil classification using X-ray scattering and principal component analysis

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Danielle S.; Souza, Amanda S.; Lopes, Ricardo T., E-mail: dani.almeida84@gmail.com, E-mail: ricardo@lin.ufrj.br, E-mail: amandass@bioqmed.ufrj.br [Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, RJ (Brazil)]; Oliveira, Davi F.; Anjos, Marcelino J., E-mail: davi.oliveira@uerj.br, E-mail: marcelin@uerj.br [Universidade do Estado do Rio de Janeiro (UERJ), Rio de Janeiro, RJ (Brazil). Inst. de Fisica Armando Dias Tavares]

    2015-07-01

    X-ray scattering techniques have been considered promising for the classification and characterization of many types of samples. This study employed this technique, combined with chemical analysis and multivariate analysis, to characterize 54 vegetable oil samples (25 of them olive oils) with different properties, obtained in commercial establishments in Rio de Janeiro city. The samples were chemically analyzed using the following indexes: iodine, acidity, saponification and peroxide. In order to obtain the X-ray scattering spectrum, an X-ray tube with a silver anode operating at 40 kV and 50 μA was used. The results showed that the oils can be divided into two large groups: olive oils and non-olive oils. Additionally, in a multivariate analysis (Principal Component Analysis - PCA), two components were obtained which accounted for more than 80% of the variance. One component was associated with the chemical parameters and the other with the scattering profiles of the samples. The results showed that the use of X-ray scattering spectra combined with chemical analysis and PCA can be a fast, cheap and efficient method for vegetable oil characterization. (author)

  10. INTERNAL ENVIRONMENT ANALYSIS TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Caescu Stefan Claudiu

    2011-12-01

    Full Text Available Theme: The situation analysis, as a separate component of strategic planning, involves collecting and analysing relevant types of information on the components of the marketing environment and their evolution on the one hand, and on the organization's resources and capabilities on the other. Objectives of the Research: The main purpose of this study of the analysis techniques of the internal environment is to provide insight on those aspects that are of strategic importance to the organization. Literature Review: The marketing environment consists of two distinct components: the internal environment, made up of specific variables within the organization, and the external environment, made up of variables external to the organization. Although analysing the external environment is essential for corporate success, it is not enough unless it is backed by a detailed analysis of the internal environment of the organization. The internal environment includes all elements that are endogenous to the organization, which are influenced to a great extent and totally controlled by it. The study of the internal environment must answer all resource-related questions, solve all resource management issues and represents the first step in drawing up the marketing strategy. Research Methodology: The present paper accomplished a documentary study of the main techniques used for the analysis of the internal environment. Results: The literature emphasizes that differences in performance from one organization to another depend primarily not on differences between fields of activity, but especially on differences between resources and capabilities and the ways these are capitalized on. The main methods of analysing the internal environment addressed in this paper are: the analysis of organizational resources, the performance analysis, the value chain analysis and the functional analysis. Implications: Basically such…

  11. APPLICATION OF PRINCIPAL COMPONENT ANALYSIS TO RELAXOGRAPHIC IMAGES

    International Nuclear Information System (INIS)

    STOYANOVA, R.S.; OCHS, M.F.; BROWN, T.R.; ROONEY, W.D.; LI, X.; LEE, J.H.; SPRINGER, C.S.

    1999-01-01

    Standard analysis methods for processing inversion recovery MR images have traditionally used single-pixel techniques. In these techniques each pixel is independently fit to an exponential recovery, and spatial correlations in the data set are ignored. By analyzing the image as a complete dataset, improved error analysis and automatic segmentation can be achieved. Here, the authors apply principal component analysis (PCA) to a series of relaxographic images. This procedure decomposes the 3-dimensional data set into three separate images and corresponding recovery times. They interpret the 3 images as spatial representations of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) content.

  12. Estimation of Th, Cs, Sr, I, Co, Fe, Zn, Ca and K in major food components using neutron activation analysis technique

    International Nuclear Information System (INIS)

    Nair, Suma; Bhati, Sharda

    2010-01-01

    The concentrations of some radiologically and nutritionally important trace elements (Th, Cs, Sr, I, Co, Fe, Zn, Ca and K) were determined in major food components such as cereals, pulses, vegetables, fruits and milk. The trace elements in the food samples were determined using the neutron activation analysis technique, involving both instrumental and radiochemical neutron activation analysis. Whereas the trace elements Th, Cs, K and Sr are important in radiation protection, Fe and Zn are of importance in nutrition studies, and Ca and I have dual importance, in both nutrition and radiation protection. The results of the analysis show that, among the food materials, the higher concentrations of Th, Cs, Sr, K, Fe, Zn and Co were found in cereals and pulses. In the case of Ca, milk appears to be the main contributor to dietary intake. The estimated concentrations of the trace elements in food components can be employed in determining the daily dietary intake of these elements, which in turn can be used for their biokinetic studies. (author)

  13. Techniques for preventing damage to high power laser components

    International Nuclear Information System (INIS)

    Stowers, I.F.; Patton, H.G.; Jones, W.A.; Wentworth, D.E.

    1977-09-01

    Techniques for preventing damage to components of the LASL Shiva high power laser system were briefly presented. Optical element damage in the disk amplifier from the combined fluence of the primary laser beam and the Xenon flash lamps that pump the cavity was discussed. Assembly and cleaning techniques were described which have improved optical element life by minimizing particulate and optically absorbing film contamination on assembled amplifier structures. A Class-100 vertical flow clean room used for assembly and inspection of laser components was also described. The life of a disk amplifier was extended from less than 50 shots to 500 shots through application of these assembly and cleaning techniques.

  14. Independent component analysis for automatic note extraction from musical trills

    Science.gov (United States)

    Brown, Judith C.; Smaragdis, Paris

    2004-05-01

    The method of principal component analysis, which is based on second-order statistics (or linear independence), has long been used for redundancy reduction of audio data. The more recent technique of independent component analysis, enforcing much stricter statistical criteria based on higher-order statistical independence, is introduced and shown to be far superior in separating independent musical sources. This theory has been applied to piano trills, and a database of trill rates was assembled from experiments with a computer-driven piano, recordings of a professional pianist, and commercially available compact disks. The method of independent component analysis has thus been shown to be an outstanding, effective means of automatically extracting interesting musical information from a sea of redundant data.
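
    A minimal scikit-learn sketch of ICA-based source separation in the spirit of this abstract: two synthetic "notes" are mixed into two channels and FastICA recovers them. The trill frequencies and mixing matrix are invented for illustration.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(11)
      t = np.linspace(0.0, 1.0, 8000)
      s1 = np.sin(2 * np.pi * 440 * t)               # "note" 1 of the trill
      s2 = np.sign(np.sin(2 * np.pi * 466 * t))      # "note" 2 (non-Gaussian source)
      S = np.c_[s1, s2]

      A = np.array([[1.0, 0.5], [0.4, 1.0]])         # invented mixing matrix
      X = S @ A.T + 1e-3 * rng.standard_normal(S.shape)   # two-channel recording

      ica = FastICA(n_components=2, random_state=0)
      recovered = ica.fit_transform(X)               # separated note signals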

  15. Application of lock-in thermography non destructive technique to CFC armoured plasma facing components

    International Nuclear Information System (INIS)

    Escourbiac, F.; Constans, S.; Courtois, X.; Durocher, A.

    2007-01-01

    A non-destructive testing technique, so-called modulated photothermal thermography or lock-in thermography, has been set up for plasma facing component examination. Reliable measurements of phase contrast were obtained on an 8 mm carbon fiber composite (CFC) armoured W7-X divertor component with calibrated flaws. A 3D finite element analysis allowed correlation of the measured phase contrast and showed that a 4 mm strip flaw can be detected at the CFC/copper interface.

  16. Advanced inspection and repair techniques for primary side components

    International Nuclear Information System (INIS)

    Elm, Ralph

    1998-01-01

    The availability of a nuclear power plant mainly depends on the components of the Nuclear Steam Supply System (NSSS), such as the reactor pressure vessel, core internals and steam generators. The last decade has been characterized by intensive inspection and repair work on PWR steam generators. In the future, it can be expected that the inspection of the reactor pressure vessel and the inspection and repair of its internals, in both PWRs and BWRs, will be one of the challenges for the nuclear community. In response to this challenge, new, advanced inspection and repair techniques for the vital primary side components have been developed and applied, taking into account such issues as: use of reliable and fast inspection methods, repair of affected components instead of costly replacement, reduction of outage time compared to conventional methods, minimized radiation exposure, and acceptable costs. This paper reflects on advanced inspection and repair techniques such as: Baffle Former Bolt inspection and replacement, Barrel Former Bolt inspection and replacement, mechanized UT and visual inspection of reactor pressure vessels, and Steam Generator repair by advanced sleeving technology. The techniques described have been successfully applied in nuclear power plants and have improved the operating performance of the components and the NPPs. (author). 6 figs

  17. Integrative sparse principal component analysis of gene expression data.

    Science.gov (United States)

    Liu, Mengque; Fan, Xinyan; Fang, Kuangnan; Zhang, Qingzhao; Ma, Shuangge

    2017-12-01

    In the analysis of gene expression data, dimension reduction techniques have been extensively adopted. The most popular one is perhaps PCA (principal component analysis). To generate more reliable and more interpretable results, the SPCA (sparse PCA) technique has been developed. With the "small sample size, high dimensionality" characteristic of gene expression data, the analysis results generated from a single dataset are often unsatisfactory. In contexts other than dimension reduction, integrative analysis techniques, which jointly analyze the raw data of multiple independent datasets, have been developed and shown to outperform "classic" meta-analysis, other multi-dataset techniques, and single-dataset analysis. In this study, we conduct integrative analysis by developing the iSPCA (integrative SPCA) method. iSPCA achieves the selection and estimation of sparse loadings using a group penalty. To take advantage of the similarity across datasets and generate more accurate results, we further impose contrasted penalties. Different penalties are proposed to accommodate different data conditions. Extensive simulations show that iSPCA outperforms the alternatives under a wide spectrum of settings. The analysis of breast cancer and pancreatic cancer data further shows iSPCA's satisfactory performance. © 2017 WILEY PERIODICALS, INC.

  18. PCA: Principal Component Analysis for spectra modeling

    Science.gov (United States)

    Hurley, Peter D.; Oliver, Seb; Farrah, Duncan; Wang, Lingyu; Efstathiou, Andreas

    2012-07-01

    The mid-infrared spectra of ultraluminous infrared galaxies (ULIRGs) contain a variety of spectral features that can be used as diagnostics to characterize the spectra. However, such diagnostics are biased by our prior prejudices about the origin of the features. Moreover, by using only part of the spectrum they do not utilize the full information content of the spectra. Blind statistical techniques such as principal component analysis (PCA) consider the whole spectrum, find correlated features and separate them out into distinct components. This code, written in IDL, classifies principal components of IRS spectra to define a new classification scheme using 5D Gaussian mixture modelling. The five PCs and the average spectra of the four classifications, used to classify objects, are made available with the code.
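
    The pipeline described (PCA down to five components, then Gaussian mixture modelling over four classes) can be sketched in Python with scikit-learn, although the original code is IDL; the spectra below are random stand-ins.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(12)
      spectra = rng.standard_normal((120, 300))          # 120 mid-IR spectra (synthetic)

      pcs = PCA(n_components=5).fit_transform(spectra)   # the five PCs
      gmm = GaussianMixture(n_components=4, random_state=0).fit(pcs)
      classes = gmm.predict(pcs)                         # one of four classes per object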

  19. Abstract interfaces for data analysis - component architecture for data analysis tools

    International Nuclear Information System (INIS)

    Barrand, G.; Binko, P.; Doenszelmann, M.; Pfeiffer, A.; Johnson, A.

    2001-01-01

    The fast turnover of software technologies, in particular in the domain of interactivity (covering user interface and visualisation), makes it difficult for a small group of people to produce complete and polished software tools before the underlying technologies make them obsolete. At the HepVis'99 workshop, a working group was formed to improve the production of software tools for data analysis in HENP. Besides promoting a distributed development organisation, one goal of the group is to systematically design a set of abstract interfaces based on modern OO analysis and OO design techniques. An initial domain analysis has come up with several categories (components) found in typical data analysis tools: Histograms, Ntuples, Functions, Vectors, Fitter, Plotter, Analyzer and Controller. Special emphasis was put on reducing the couplings between the categories to a minimum, thus optimising re-use and maintainability of any component individually. The interfaces have been defined in Java and C++, and implementations exist in the form of libraries and tools using C++ (Anaphe/Lizard, OpenScientist) and Java (Java Analysis Studio). A special implementation aims at accessing the Java libraries (through their Abstract Interfaces) from C++. The authors give an overview of the architecture and design of the various components for data analysis as discussed in AIDA.

  1. Radar fall detection using principal component analysis

    Science.gov (United States)

    Jokanovic, Branka; Amin, Moeness; Ahmad, Fauzia; Boashash, Boualem

    2016-05-01

    Falls are a major cause of fatal and nonfatal injuries in people aged 65 years and older. Radar has the potential to become one of the leading technologies for fall detection, thereby enabling the elderly to live independently. Existing techniques for fall detection using radar are based on manual feature extraction and require significant parameter tuning in order to provide successful detections. In this paper, we employ principal component analysis for fall detection, wherein eigenimages of observed motions are employed for classification. Using real data, we demonstrate that the PCA-based technique provides a performance improvement over conventional feature extraction methods.

  2. Applications of Electromigration Techniques: Applications of Electromigration Techniques in Food Analysis

    Science.gov (United States)

    Wieczorek, Piotr; Ligor, Magdalena; Buszewski, Bogusław

    Electromigration techniques, including capillary electrophoresis (CE), are widely used for the separation and identification of compounds present in food products. These techniques may also be considered alternate and complementary to commonly used analytical techniques, such as high-performance liquid chromatography (HPLC) or gas chromatography (GC). Applications of CE concerning the determination of high-molecular compounds like polyphenols (including flavonoids), pigments, vitamins, and food additives (preservatives, antioxidants, sweeteners, artificial pigments) are presented. Also studied are the methods developed for the determination of proteins and peptides composed of amino acids, which are basic components of food products. Other substances such as carbohydrates, nucleic acids, biogenic amines, natural toxins, and contaminants including pesticides and antibiotics are discussed as well. The possibility of applying CE in food control laboratories, where analyses of the composition of food and food products are conducted, is of great importance. The CE technique may be used for the control of technological processes in the food industry and for the identification of numerous compounds present in food. Due to its numerous advantages, the CE technique is successfully used in routine food analysis.

  3. An elementary components of variance analysis for multi-center quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1977-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality control (QC) studies. Statistical analysis methods for such studies using an 'analysis of variance with components of variance estimation' are discussed. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Components of variance analysis also provides an intelligent way to combine the results of several QC samples run at different levels, from which we may decide whether any component varies systematically with dose level; if not, pooling of estimates becomes possible. We consider several possible relationships of the standard deviation to the laboratory mean. Each relationship corresponds to an underlying statistical model and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine whether an appropriate model has been chosen, although the exact functional relationship of standard deviation to lab mean may be difficult to establish. Appropriate graphical display of the data aids visual understanding of the data. A plot of the ranked standard deviation vs. the ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean. (orig.)
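
    A minimal numpy sketch of a components-of-variance estimate for the between-laboratory and residual (within-laboratory) terms via one-way random-effects ANOVA; the abstract's between-assay level is omitted for brevity, and the QC data are simulated.

      import numpy as np

      rng = np.random.default_rng(13)
      # QC results: 8 laboratories x 10 replicate assays each (simulated)
      labs = rng.normal(100, 5, size=8)[:, None] + rng.normal(0, 3, size=(8, 10))

      k, n = labs.shape
      grand = labs.mean()
      ms_between = n * ((labs.mean(axis=1) - grand) ** 2).sum() / (k - 1)
      ms_within = ((labs - labs.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))

      var_within = ms_within                                # residual variance
      var_between = max((ms_between - ms_within) / n, 0.0)  # between-laboratory variance
      print(var_between, var_within)                        # roughly 25 and 9 here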

  4. Techniques to extract physical modes in model-independent analysis of rings

    International Nuclear Information System (INIS)

    Wang, C.-X.

    2004-01-01

    A basic goal of Model-Independent Analysis is to extract the physical modes underlying the beam histories collected at a large number of beam position monitors, so that beam dynamics and machine properties can be deduced independent of specific machine models. Here we discuss techniques to achieve this goal, especially principal component analysis and independent component analysis.

  5. A method for independent component graph analysis of resting-state fMRI

    DEFF Research Database (Denmark)

    de Paula, Demetrius Ribeiro; Ziegler, Erik; Abeyasinghe, Pubuditha M.

    2017-01-01

    Introduction: Independent component analysis (ICA) has been extensively used for reducing task-free BOLD fMRI recordings into spatial maps and their associated time-courses. The spatially identified independent components can be considered as intrinsic connectivity networks (ICNs) of non-contiguous regions. To date, the spatial patterns of the networks have been analyzed with techniques developed for volumetric data. Objective: Here, we detail a graph building technique that allows these ICNs to be analyzed with graph theory. Methods: First, ICA was performed at the single-subject level in 15 healthy… parcellated regions. Third, between-node functional connectivity was established by building edge weights for each network. Group-level graph analysis was finally performed for each network and compared to the classical network. Results: Network graph comparison between the classically constructed network…
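
    A hedged sketch of the graph building step: between-node connectivity is computed from region time courses, thresholded, and handed to networkx for group-level graph metrics. The region count, threshold and data are illustrative assumptions.

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(14)
      ts = rng.standard_normal((200, 90))    # time courses of 90 parcellated regions

      corr = np.corrcoef(ts, rowvar=False)   # between-node functional connectivity
      np.fill_diagonal(corr, 0.0)

      G = nx.from_numpy_array(corr * (corr > 0.2))   # keep edges above a threshold
      print(nx.density(G), nx.number_connected_components(G))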

  6. Enantiomer-specific analysis of multi-component mixtures by correlated electron imaging-ion mass spectrometry

    NARCIS (Netherlands)

    Rafiee Fanood, M.M.; Ram, N.B.; Lehmann, C.S.; Powis, I.; Janssen, M.H.M.

    2015-01-01

    Simultaneous, enantiomer-specific identification of chiral molecules in multi-component mixtures is extremely challenging. Many established techniques for single-component analysis fail to provide selectivity in multi-component mixtures and lack sensitivity for dilute samples. Here we show how…

  7. A further component analysis for illicit drugs mixtures with THz-TDS

    Science.gov (United States)

    Xiong, Wei; Shen, Jingling; He, Ting; Pan, Rui

    2009-07-01

    A new method for quantitative analysis of mixtures of illicit drugs with THz time domain spectroscopy was proposed and verified experimentally. The traditional method requires the fingerprints of all the pure chemical components. In practice, only the objective components in a mixture and their absorption features are known, so it is necessary and important to present a more practical technique for detection and identification. Our new method for quantitative inspection of mixtures of illicit drugs is based on the derivative spectrum. With this method, the ratio of objective components in a mixture can be obtained on the assumption that all objective components in the mixture and their absorption features are known, while the unknown components are not needed. Methamphetamine and flour, an illicit drug and a common adulterant, were selected for the experiment. The experimental results verified the effectiveness of the method, suggesting that it could be an effective approach to quantitative identification of illicit drugs. This THz spectroscopy technique is of great significance for real-world applications of quantitative illicit drug analysis and could be an effective method in the field of security and pharmaceutical inspection.

  9. Characterization of decommissioned reactor internals: Monte Carlo analysis technique

    International Nuclear Information System (INIS)

    Reid, B.D.; Love, E.F.; Luksic, A.T.

    1993-03-01

    This study discusses computer analysis techniques for determining activation levels of irradiated reactor component hardware to yield data for the Department of Energy's Greater-Than-Class C Low-Level Radioactive Waste Program. The study recommends the Monte Carlo Neutron/Photon (MCNP) computer code as the best analysis tool for this application and compares the technique to direct sampling methodology. To implement the MCNP analysis, a computer model would be developed to reflect the geometry, material composition, and power history of an existing shutdown reactor. MCNP analysis would then be performed using the computer model, and the results would be validated by comparison to laboratory analysis results from samples taken from the shutdown reactor. The report estimates uncertainties for each step of the computational and laboratory analyses; the overall uncertainty of the MCNP results is projected to be ±35%. The primary source of uncertainty is identified as the material composition of the components, and research is suggested to address that uncertainty.

  10. Multi-component separation and analysis of bat echolocation calls.

    Science.gov (United States)

    DiCecco, John; Gaudette, Jason E; Simmons, James A

    2013-01-01

    The vast majority of animal vocalizations contain multiple frequency modulated (FM) components with varying amounts of non-linear modulation and harmonic instability. This is especially true of biosonar sounds where precise time-frequency templates are essential for neural information processing of echoes. Understanding the dynamic waveform design by bats and other echolocating animals may help to improve the efficacy of man-made sonar through biomimetic design. Bats are known to adapt their call structure based on the echolocation task, proximity to nearby objects, and density of acoustic clutter. To interpret the significance of these changes, a method was developed for component separation and analysis of biosonar waveforms. Techniques for imaging in the time-frequency plane are typically limited due to the uncertainty principle and interference cross terms. This problem is addressed by extending the use of the fractional Fourier transform to isolate each non-linear component for separate analysis. Once separated, empirical mode decomposition can be used to further examine each component. The Hilbert transform may then successfully extract detailed time-frequency information from each isolated component. This multi-component analysis method is applied to the sonar signals of four species of bats recorded in-flight by radiotelemetry along with a comparison of other common time-frequency representations.
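
    As a hedged illustration of the final step only (the fractional Fourier transform and empirical mode decomposition stages are too involved for a short sketch), the snippet below uses scipy's Hilbert transform to recover the instantaneous frequency of a synthetic FM sweep standing in for one isolated call component; all signal parameters are invented.

        import numpy as np
        from scipy.signal import hilbert

        fs = 250_000                         # sampling rate (Hz)
        t = np.arange(0, 0.003, 1 / fs)      # a 3 ms call
        f0, f1 = 80_000, 40_000              # downward FM sweep

        # Synthetic single-component FM signal.
        sweep = np.cos(2 * np.pi * (f0 * t + (f1 - f0) / (2 * t[-1]) * t ** 2))

        # Analytic signal -> unwrapped phase -> instantaneous frequency.
        analytic = hilbert(sweep)
        phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(phase) / (2 * np.pi) * fs

        # ~80 kHz sweeping down to ~40 kHz (the outermost samples show edge effects).
        print(inst_freq[10].round(), inst_freq[-10].round())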

  11. Boat sampling technique for assessment of ageing of components

    International Nuclear Information System (INIS)

    Kumar, Kundan; Shyam, T.V.; Kayal, J.N.; Rupani, B.B.

    2006-01-01

    Boat sampling technique (BST) is a surface sampling technique that has been developed for obtaining, in situ, metal samples from the surface of an operating component without affecting its operating service life. The BST is non-destructive in nature, and the sample is obtained without plastic deformation or thermal degradation of the parent material. The shape and size of the sample depend upon the shape of the cutter and the surface geometry of the parent material. Miniature test specimens are generated from the sample and subjected to various tests, viz. metallurgical evaluation, metallographic evaluation, micro-hardness evaluation, sensitisation tests, small punch tests, etc., to confirm the integrity and assess the safe operating life of the component. This paper highlights the design objectives of the boat sampling technique; the description of the sampling module; the sampling cutter and its performance evaluation; the cutting process; the boat samples; the operational sequence of the sampling module; the qualification of the sampling module, the sampling technique and the scooped region of the parent material; the sample retrieval system; and the inspection, testing and examination to be carried out on the boat samples and the scooped region. (author)

  12. ASSESSMENT OF DYNAMIC PRA TECHNIQUES WITH INDUSTRY AVERAGE COMPONENT PERFORMANCE DATA

    Energy Technology Data Exchange (ETDEWEB)

    Yadav, Vaibhav; Agarwal, Vivek; Gribok, Andrei V.; Smith, Curtis L.

    2017-06-01

    In the nuclear industry, risk monitors are intended to provide a point-in-time estimate of the system risk given the current plant configuration. Current risk monitors are limited in that they do not properly take into account the deteriorating states of plant equipment, which are unit-specific. Current approaches to computing risk monitors use probabilistic risk assessment (PRA) techniques, but the assessment is typically a snapshot in time. Living PRA models attempt to address the limitations of traditional PRA models in a limited sense by including temporary changes in plant and system configurations; however, information on plant component health is not considered. This often leaves risk monitors using living PRA models incapable of evaluating dynamic degradation scenarios that evolve over time. There is a need for enabling approaches that solidify risk monitors to provide time- and condition-dependent risk by integrating traditional PRA models with condition monitoring and prognostic techniques. This paper presents the estimation of system risk evolution over time by integrating plant risk monitoring data with dynamic PRA methods that incorporate aging and degradation. Several online, non-destructive approaches have been developed for diagnosing plant component conditions in the nuclear industry, e.g., condition indication indices based on vibration analysis, current signatures, and operational history [1]. In this work, the component performance measures at U.S. commercial nuclear power plants (NPPs) [2] are incorporated within various dynamic PRA methodologies [3] to provide better estimates of failure probabilities. Aging and degradation are modeled within the Level-1 PRA framework and applied to several failure modes of pumps; the approach can be extended to a range of components, viz. valves, generators, batteries, and pipes.
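
    The paper's integration of condition monitoring data with Level-1 PRA is not reproduced here; the sketch below only conveys the underlying idea under simplified, assumed parameters: a Weibull hazard with shape greater than one models pump wear-out, and the resulting time-dependent failure probability is propagated through a hypothetical redundant two-pump cut set and compared with a non-aging exponential baseline.

        import numpy as np

        def weibull_failure_prob(t, eta, beta):
            """P(failure by time t); eta is the scale, beta > 1 models aging."""
            return 1.0 - np.exp(-(t / eta) ** beta)

        eta, beta = 12.0, 2.5                      # hypothetical pump parameters (years)
        years = np.arange(0, 11)

        p_aging = weibull_failure_prob(years, eta, beta)
        p_const = 1.0 - np.exp(-years / eta)       # constant-rate baseline

        # Top-event probability of a two-pump redundant cut set (independent failures).
        print("aging   :", (p_aging ** 2).round(4))
        print("constant:", (p_const ** 2).round(4))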

  13. A numerical technique for reactor subchannel analysis

    International Nuclear Information System (INIS)

    Fath, Hassan E.S.

    1983-01-01

    A numerical technique is developed for the solution of the transient boundary layer equations with a moving liquid-vapour interface boundary. The technique uses the finite difference method, with the velocity components defined over an Eulerian mesh. A system of massless interface markers is defined, where the markers move with the flow field according to a simple kinematic relation between the interface geometry and the fluid velocity. Different applications of nuclear engineering interest are reported, with some available results. The present technique is capable of predicting the interface profile near the wall, which is important in reactor subchannel analysis.

  14. Towards Cognitive Component Analysis

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Ahrendt, Peter; Larsen, Jan

    2005-01-01

    Cognitive component analysis (COCA) is here defined as the process of unsupervised grouping of data such that the ensuing group structure is well-aligned with that resulting from human cognitive activity. We have earlier demonstrated that independent components analysis is relevant for representing...

  15. Response spectrum analysis of coupled structural response to a three component seismic disturbance

    International Nuclear Information System (INIS)

    Boulet, J.A.M.; Carley, T.G.

    1977-01-01

    The work discussed herein is a comparison and evaluation of several response spectrum analysis (RSA) techniques as applied to the same structural model with seismic excitation having three spatial components. Lagrange's equations of motion for the system were written in matrix form and uncoupled with the modal matrix. Numerical integration (fourth-order Runge-Kutta) of the resulting modal equations produced time histories of system displacements in response to simultaneous application of three orthogonal components of ground motion, and displacement response spectra for each modal coordinate in response to each of the three ground motion components. Five different RSA techniques were used to combine the spectral displacements and the modal matrix to give approximations of the maximum system displacements. These approximations were then compared with the maximum system displacements taken from the time histories. The RSA techniques used are the method of absolute sums, the square root of the sum of the squares, the double sum approach, the method of closely spaced modes, and Lin's method. The vectors of maximum system displacements as computed by the time history analysis and the five response spectrum analysis methods are presented. (Auth.)
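
    For the two simplest rules named above, a minimal numpy sketch (with invented peak modal displacements) shows how the method of absolute sums and the square root of the sum of the squares combine the spectral modal maxima:

        import numpy as np

        # Hypothetical peak modal displacements, one row per mode, one column per DOF,
        # already scaled by the modal matrix and the spectral ordinates.
        peak_modal = np.array([
            [1.20, 0.40, 0.10],     # mode 1
            [0.50, 0.90, 0.20],     # mode 2
            [0.10, 0.30, 0.60],     # mode 3
        ])

        abs_sum = np.abs(peak_modal).sum(axis=0)       # absolute sums: an upper bound
        srss = np.sqrt((peak_modal ** 2).sum(axis=0))  # SRSS: usually nearer the time-history maxima

        print("ABS :", abs_sum)
        print("SRSS:", srss)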

  16. COMPARING INDEPENDENT COMPONENT ANALYSIS WITH PRINCIPLE COMPONENT ANALYSIS IN DETECTING ALTERATIONS OF PORPHYRY COPPER DEPOSIT (CASE STUDY: ARDESTAN AREA, CENTRAL IRAN)

    Directory of Open Access Journals (Sweden)

    S. Mahmoudishadi

    2017-09-01

    Full Text Available The image processing techniques in the transform domain are employed as analysis tools for enhancing the detection of mineral deposits. The process of decomposing the image into important components increases the probability of mineral extraction. In this study, the performance of Principal Component Analysis (PCA) and Independent Component Analysis (ICA) has been evaluated for the visible and near-infrared (VNIR) and shortwave infrared (SWIR) subsystems of ASTER data. Ardestan is located in part of the Central Iranian Volcanic Belt, which hosts many well-known porphyry copper deposits. This research investigated the propylitic and argillic alteration zones and the outer mineralogy zone in part of the Ardestan region. The two approaches were applied to discriminate alteration zones from igneous bedrock using the major absorptions of indicator minerals from the alteration and mineralogy zones in the spectral range of the ASTER bands. Specialized PC components (PC2, PC3 and PC6) were used to identify pyrite and the argillic and propylitic zones, distinguishing them from igneous bedrock in an RGB color composite image. Judging by the eigenvalues, components 2, 3 and 6 account for 4.26%, 0.9% and 0.09% of the total variance of the data for the Ardestan scene, respectively. For the purpose of discriminating the alteration and mineralogy zones of the porphyry copper deposit from bedrock, those portions of the data are more accurately separated in the ICA independent components IC2, IC3 and IC6 than in the noisy bands of PCA. The results of the ICA method also conform to the locations of the lithological units of the Ardestan region.

  17. Comparing Independent Component Analysis with Principle Component Analysis in Detecting Alterations of Porphyry Copper Deposit (case Study: Ardestan Area, Central Iran)

    Science.gov (United States)

    Mahmoudishadi, S.; Malian, A.; Hosseinali, F.

    2017-09-01

    The image processing techniques in the transform domain are employed as analysis tools for enhancing the detection of mineral deposits. The process of decomposing the image into important components increases the probability of mineral extraction. In this study, the performance of Principal Component Analysis (PCA) and Independent Component Analysis (ICA) has been evaluated for the visible and near-infrared (VNIR) and shortwave infrared (SWIR) subsystems of ASTER data. Ardestan is located in part of the Central Iranian Volcanic Belt, which hosts many well-known porphyry copper deposits. This research investigated the propylitic and argillic alteration zones and the outer mineralogy zone in part of the Ardestan region. The two approaches were applied to discriminate alteration zones from igneous bedrock using the major absorptions of indicator minerals from the alteration and mineralogy zones in the spectral range of the ASTER bands. Specialized PC components (PC2, PC3 and PC6) were used to identify pyrite and the argillic and propylitic zones, distinguishing them from igneous bedrock in an RGB color composite image. Judging by the eigenvalues, components 2, 3 and 6 account for 4.26%, 0.9% and 0.09% of the total variance of the data for the Ardestan scene, respectively. For the purpose of discriminating the alteration and mineralogy zones of the porphyry copper deposit from bedrock, those portions of the data are more accurately separated in the ICA independent components IC2, IC3 and IC6 than in the noisy bands of PCA. The results of the ICA method also conform to the locations of the lithological units of the Ardestan region.
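
    As a rough illustration of the workflow in these two records (not the authors' processing chain), the sketch below applies scikit-learn's PCA and FastICA to a synthetic multiband cube and reshapes a chosen component back into image form; the band count and data are placeholders for ASTER VNIR/SWIR reflectance.

        import numpy as np
        from sklearn.decomposition import PCA, FastICA

        rng = np.random.default_rng(0)
        cube = rng.random((100, 100, 9))          # rows x cols x bands (synthetic)
        pixels = cube.reshape(-1, 9)              # one spectrum per pixel

        pcs = PCA(n_components=6).fit_transform(pixels)
        ics = FastICA(n_components=6, max_iter=1000, random_state=0).fit_transform(pixels)

        # Reshape selected components back to images, e.g. for an RGB composite.
        pc2_image = pcs[:, 1].reshape(100, 100)
        ic2_image = ics[:, 1].reshape(100, 100)
        print(pc2_image.shape, ic2_image.shape)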

  18. Judging complex movement performances for excellence: a principal components analysis-based technique applied to competitive diving.

    Science.gov (United States)

    Young, Cole; Reinkensmeyer, David J

    2014-08-01

    Athletes rely on subjective assessment of complex movements from coaches and judges to improve their motor skills. In some sports, such as diving, snowboard half-pipe, gymnastics, and figure skating, subjective scoring forms the basis for competition. It is currently unclear whether this scoring process can be mathematically modeled; doing so could provide insight into what motor skill is. Principal components analysis has been proposed as a motion analysis method for identifying fundamental units of coordination. We used PCA to analyze the movement quality of dives taken from USA Diving's 2009 World Team Selection Camp, first identifying eigenpostures associated with dives, and then using the eigenpostures and their temporal weighting coefficients, as well as elements commonly assumed to affect scoring (gross body path, splash area, and board tip motion), to identify eigendives. Within this eigendive space we predicted actual judges' scores using linear regression. This technique rated dives with accuracy comparable to the human judges. The temporal weighting of the eigenpostures, body center path, splash area, and board tip motion affected the score, but not the eigenpostures themselves. These results illustrate that (1) subjective scoring in a competitive diving event can be mathematically modeled; (2) the elements commonly assumed to affect dive scoring actually do affect scoring; and (3) skill in elite diving is more associated with the gross body path and the effect of the movement on the board and water than with the units of coordination that PCA extracts, which might reflect the high level of technique these divers had achieved. We also illustrate how eigendives can be used to produce dive animations that an observer can distort continuously from poor to excellent, which is a novel approach to performance visualization. Copyright © 2014 Elsevier B.V. All rights reserved.
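
    A minimal sketch of the pipeline with random stand-ins for the joint-angle data and the judges' scores: pooled posture vectors are decomposed by PCA into eigenpostures, per-dive features are built from the temporal weighting coefficients, and a linear regression maps the features to scores.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        n_dives, n_frames, n_joints = 60, 100, 10

        # Hypothetical posture samples: joint-angle vectors pooled over all dives.
        postures = rng.standard_normal((n_dives * n_frames, n_joints))

        # Eigenpostures: principal components of the pooled posture samples.
        pca = PCA(n_components=4).fit(postures)
        weights = pca.transform(postures).reshape(n_dives, n_frames, 4)

        # Per-dive features: RMS of each eigenposture's temporal weighting
        # (the real study also adds body path, splash area and board tip motion).
        features = np.sqrt((weights ** 2).mean(axis=1))
        scores = rng.uniform(4, 9, n_dives)               # placeholder judges' scores

        model = LinearRegression().fit(features, scores)
        print("training R^2:", round(model.score(features, scores), 3))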

  19. CATEGORICAL IMAGE COMPONENTS IN THE FORMING SYSTEM OF A MARKETING TECHNIQUES MANAGER'S IMAGE CULTURE

    Directory of Open Access Journals (Sweden)

    Anna Borisovna Cherednyakova

    2015-08-01

    Full Text Available Based on an understanding of how the image culture of marketing techniques managers, as representatives of the social and communicative interaction of public structures, is formed, the categorical apparatus of image culture is analyzed, with an emphasis on the etymology of image as an integral component of image culture. The categorical components of image are presented from the standpoint of image culture as a personal formation and an integral part of the professional activity of the marketing techniques manager: the object-communicative, subject-activity, personality-oriented, and value-acmeological categorical components. The aim is to identify and justify the image categorical components as a component of the image culture of the marketing techniques manager. Method and methodology: a general scientific research approach reflecting the scientific apparatus of the research. Results: the categorical components of image, as a component of the image culture of the marketing techniques manager, were defined. Practical implications of the results: the theoretical part of the «Imageology» course, the special course «Image culture of the marketing techniques manager», and the theoretical and methodological study and formation of image culture.

  20. Applications Of Binary Image Analysis Techniques

    Science.gov (United States)

    Tropf, H.; Enderle, E.; Kammerer, H. P.

    1983-10-01

    After discussing the conditions under which binary image analysis techniques can be used, three new applications of the fast binary image analysis system S.A.M. (Sensorsystem for Automation and Measurement) are reported: (1) The human view direction is measured at TV frame rate while the subject's head is freely movable. (2) Industrial parts hanging on a moving conveyor are classified prior to spray painting by robot. (3) In automotive wheel assembly, the eccentricity of the wheel is minimized by turning the tyre relative to the rim in order to balance the eccentricity of the components.

  1. Estimation of compound distribution in spectral images of tomatoes using independent component analysis

    NARCIS (Netherlands)

    Polder, G.; Heijden, van der G.W.A.M.

    2003-01-01

    Independent Component Analysis (ICA) is one of the most widely used methods for blind source separation. In this paper we use this technique to estimate the important compounds which play a role in the ripening of tomatoes. Spectral images of tomatoes were analyzed. Two main independent components

  2. ANOVA-principal component analysis and ANOVA-simultaneous component analysis: a comparison.

    NARCIS (Netherlands)

    Zwanenburg, G.; Hoefsloot, H.C.J.; Westerhuis, J.A.; Jansen, J.J.; Smilde, A.K.

    2011-01-01

    ANOVA-simultaneous component analysis (ASCA) is a recently developed tool to analyze multivariate data. In this paper, we enhance the explorative capability of ASCA by introducing a projection of the observations on the principal component subspace to visualize the variation among the measurements.

  3. Analysis of Thermo-Mechanical Distortions in Sliding Components : An ALE Approach

    NARCIS (Netherlands)

    Owczarek, P.; Geijselaers, H.J.M.

    2008-01-01

    A numerical technique for analysis of heat transfer and thermal distortion in reciprocating sliding components is proposed. In this paper we utilize the Arbitrary Lagrangian Eulerian (ALE) description where the mesh displacement can be controlled independently from the material displacement. A

  4. Cognitive Component Analysis

    DEFF Research Database (Denmark)

    Feng, Ling

    2008-01-01

    This dissertation concerns the investigation of the consistency of statistical regularities in a signaling ecology and human cognition, while inferring appropriate actions for a speech-based perceptual task. It is based on unsupervised Independent Component Analysis providing a rich spectrum of audio contexts along with pattern recognition methods to map components to known contexts. It also involves looking for the right representations for auditory inputs, i.e. the data analytic processing pipelines invoked by human brains. The main ideas refer to Cognitive Component Analysis, defined as the process of unsupervised grouping of generic data such that the ensuing group structure is well-aligned with that resulting from human cognitive activity. Its hypothesis runs ecologically: features which are essentially independent in a context defined ensemble, can be efficiently coded as sparse...

  5. A Unified Approach to Functional Principal Component Analysis and Functional Multiple-Set Canonical Correlation.

    Science.gov (United States)

    Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S

    2017-06-01

    Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.
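
    The authors' alternating regularized least squares algorithm is not reproduced here; the sketch below only shows the ingredient common to FPCA-type methods, namely a basis function approximation of each curve followed by an eigendecomposition of the coefficient covariance. The Fourier basis and the synthetic curves are assumptions of the sketch.

        import numpy as np

        rng = np.random.default_rng(7)
        t = np.linspace(0, 1, 101)

        # Thirty synthetic smooth curves built from two latent shapes plus noise.
        curves = (rng.standard_normal((30, 1)) * np.sin(2 * np.pi * t)
                  + rng.standard_normal((30, 1)) * t ** 2
                  + 0.05 * rng.standard_normal((30, t.size)))

        # Fourier basis approximation of each curve.
        K = 7
        basis = np.column_stack([np.ones_like(t)]
                                + [f(2 * np.pi * k * t) for k in range(1, K)
                                   for f in (np.sin, np.cos)])
        coefs, *_ = np.linalg.lstsq(basis, curves.T, rcond=None)  # (n_basis, n_curves)

        # FPCA: principal axes of the coefficient covariance, mapped back to curves.
        vals, vecs = np.linalg.eigh(np.cov(coefs))
        fpcs = basis @ vecs[:, ::-1][:, :2]        # two leading functional PCs on the grid
        print(fpcs.shape)                           # (101, 2)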

  6. Independent component analysis: recent advances

    OpenAIRE

    Hyvärinen, Aapo

    2013-01-01

    Independent component analysis is a probabilistic method for learning a linear transform of a random vector. The goal is to find components that are maximally independent and non-Gaussian (non-normal). Its fundamental difference to classical multi-variate statistical methods is in the assumption of non-Gaussianity, which enables the identification of original, underlying components, in contrast to classical methods. The basic theory of independent component analysis was mainly developed in th...

  7. Principal component regression analysis with SPSS.

    Science.gov (United States)

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces the indices used in multicollinearity diagnosis, the basic principle of principal component regression, and the method for determining the 'best' equation. The paper uses an example to describe how to perform principal component regression analysis with SPSS 10.0, including all calculation steps of the principal component regression and all operations of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity. A simplified, faster and accurate statistical analysis is achieved through principal component regression with SPSS.
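
    The paper works through the procedure in SPSS 10.0; as a cross-check of the same idea in code (not part of the original paper), here is a principal component regression on deliberately collinear synthetic predictors using scikit-learn:

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(2)
        x1 = rng.standard_normal(200)
        x2 = x1 + 0.01 * rng.standard_normal(200)   # nearly a copy of x1: multicollinearity
        x3 = rng.standard_normal(200)
        X = np.column_stack([x1, x2, x3])
        y = 3 * x1 + 2 * x3 + rng.standard_normal(200)

        # Regress on a reduced set of principal components instead of raw predictors.
        pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
        pcr.fit(X, y)
        print("R^2:", round(pcr.score(X, y), 3))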

  8. Portable XRF and principal component analysis for bill characterization in forensic science

    International Nuclear Information System (INIS)

    Appoloni, C.R.; Melquiades, F.L.

    2014-01-01

    Several modern techniques have been applied to prevent the counterfeiting of money bills. The objective of this study was to demonstrate the potential of the Portable X-ray Fluorescence (PXRF) technique and the multivariate analysis method of Principal Component Analysis (PCA) for the classification of bills in order to use it in forensic science. Bills of Dollar, Euro and Real (Brazilian currency) were measured directly at different colored regions, without any previous preparation. Spectra interpretation allowed the identification of Ca, Ti, Fe, Cu, Sr, Y, Zr and Pb. PCA analysis separated the bills into three groups and subgroups among the Brazilian currency. In conclusion, the samples were classified according to their origin, identifying the elements responsible for the differentiation and basic pigment composition. PXRF allied to multivariate discriminant methods is a promising technique for the rapid and non-destructive identification of false bills in forensic science. - Highlights: • The paper presents a direct method for bill discrimination by EDXRF and principal component analysis. • The bills are analyzed directly, without sample preparation, and non-destructively. • The results demonstrate that the methodology is feasible and could be applied in forensic science for the identification of origin and of false banknotes. • The novelty is that portable EDXRF is very fast and efficient for bill characterization.

  9. Predicting Insolvency : A comparison between discriminant analysis and logistic regression using principal components

    OpenAIRE

    Geroukis, Asterios; Brorson, Erik

    2014-01-01

    In this study, we compare the two statistical techniques logistic regression and discriminant analysis to see how well they classify companies based on clusters - made from the solvency ratio - using principal components as independent variables. The principal components are made with different financial ratios. We use cluster analysis to find groups with low, medium and high solvency ratios among 1200 different companies found on the NASDAQ stock market and use this as an a priori definition of ...

  10. Time-domain ultra-wideband radar, sensor and components theory, analysis and design

    CERN Document Server

    Nguyen, Cam

    2014-01-01

    This book presents the theory, analysis, and design of ultra-wideband (UWB) radar and sensor systems (in short, UWB systems) and their components. UWB systems find numerous applications in the military, security, civilian, commercial and medicine fields. This book addresses five main topics of UWB systems: System Analysis, Transmitter Design, Receiver Design, Antenna Design and System Integration and Test. The developments of a practical UWB system and its components using microwave integrated circuits, as well as various measurements, are included in detail to demonstrate the theory, analysis and design technique. Essentially, this book will enable the reader to design their own UWB systems and components. In the System Analysis chapter, the UWB principle of operation as well as the power budget analysis and range resolution analysis are presented. In the UWB Transmitter Design chapter, the design, fabrication and measurement of impulse and monocycle pulse generators are covered. The UWB Receiver Design cha...

  11. Interpretable functional principal component analysis.

    Science.gov (United States)

    Lin, Zhenhua; Wang, Liangliang; Cao, Jiguo

    2016-09-01

    Functional principal component analysis (FPCA) is a popular approach to explore major sources of variation in a sample of random curves. These major sources of variation are represented by functional principal components (FPCs). The intervals where the values of FPCs are significant are interpreted as where sample curves have major variations. However, these intervals are often hard for naïve users to identify, because of the vague definition of "significant values". In this article, we develop a novel penalty-based method to derive FPCs that are only nonzero precisely in the intervals where the values of FPCs are significant, whence the derived FPCs possess better interpretability than the FPCs derived from existing methods. To compute the proposed FPCs, we devise an efficient algorithm based on projection deflation techniques. We show that the proposed interpretable FPCs are strongly consistent and asymptotically normal under mild conditions. Simulation studies confirm that with a competitive performance in explaining variations of sample curves, the proposed FPCs are more interpretable than the traditional counterparts. This advantage is demonstrated by analyzing two real datasets, namely, electroencephalography data and Canadian weather data. © 2015, The International Biometric Society.

  12. Extracting functional components of neural dynamics with Independent Component Analysis and inverse Current Source Density.

    Science.gov (United States)

    Lęski, Szymon; Kublik, Ewa; Swiejkowski, Daniel A; Wróbel, Andrzej; Wójcik, Daniel K

    2010-12-01

    Local field potentials have good temporal resolution but are blurred due to the slow spatial decay of the electric field. For simultaneous recordings on regular grids one can reconstruct efficiently the current sources (CSD) using the inverse Current Source Density method (iCSD). It is possible to decompose the resultant spatiotemporal information about the current dynamics into functional components using Independent Component Analysis (ICA). We show on test data modeling recordings of evoked potentials on a grid of 4 × 5 × 7 points that meaningful results are obtained with spatial ICA decomposition of reconstructed CSD. The components obtained through decomposition of CSD are better defined and allow easier physiological interpretation than the results of similar analysis of corresponding evoked potentials in the thalamus. We show that spatiotemporal ICA decompositions can perform better for certain types of sources but it does not seem to be the case for the experimental data studied. Having found the appropriate approach to decomposing neural dynamics into functional components we use the technique to study the somatosensory evoked potentials recorded on a grid spanning a large part of the forebrain. We discuss two example components associated with the first waves of activation of the somatosensory thalamus. We show that the proposed method brings up new, more detailed information on the time and spatial location of specific activity conveyed through various parts of the somatosensory thalamus in the rat.

  13. Multiscale principal component analysis

    International Nuclear Information System (INIS)

    Akinduko, A A; Gorban, A N

    2014-01-01

    Principal component analysis (PCA) is an important tool in exploring data. The conventional approach to PCA leads to a solution which favours the structures with large variances. This is sensitive to outliers and could obfuscate interesting underlying structures. One of the equivalent definitions of PCA is that it seeks the subspaces that maximize the sum of squared pairwise distances between data projections. This definition opens up more flexibility in the analysis of principal components which is useful in enhancing PCA. In this paper we introduce scales into PCA by maximizing only the sum of pairwise distances between projections for pairs of datapoints with distances within a chosen interval of values [l,u]. The resulting principal component decompositions in Multiscale PCA depend on point (l,u) on the plane and for each point we define projectors onto principal components. Cluster analysis of these projectors reveals the structures in the data at various scales. Each structure is described by the eigenvectors at the medoid point of the cluster which represent the structure. We also use the distortion of projections as a criterion for choosing an appropriate scale especially for data with outliers. This method was tested on both artificial distribution of data and real data. For data with multiscale structures, the method was able to reveal the different structures of the data and also to reduce the effect of outliers in the principal component analysis
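
    A minimal numpy sketch of the scale restriction as we read it from the definition above (not the authors' code): build the scatter of pairwise differences only from point pairs whose distance falls within [l, u], and take its leading eigenvectors as the principal components at that scale.

        import numpy as np

        def multiscale_pca(X, l, u, n_components=2):
            """Directions maximizing squared pairwise distances between projections,
            restricted to point pairs whose distance lies in [l, u]."""
            diffs = X[:, None, :] - X[None, :, :]        # all pairwise differences
            dist = np.linalg.norm(diffs, axis=-1)
            sel = diffs[(dist >= l) & (dist <= u)]       # selected pairs only
            scatter = sel.T @ sel                        # scale-restricted scatter
            vals, vecs = np.linalg.eigh(scatter)
            return vecs[:, ::-1][:, :n_components]       # leading eigenvectors

        rng = np.random.default_rng(3)
        X = rng.standard_normal((200, 5))
        W = multiscale_pca(X, l=0.5, u=2.0)
        print(W.shape)        # (5, 2): projectors for this point (l, u) on the plane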

  14. Statistical techniques applied to aerial radiometric surveys (STAARS): principal components analysis user's manual

    International Nuclear Information System (INIS)

    Koch, C.D.; Pirkle, F.L.; Schmidt, J.S.

    1981-01-01

    A Principal Components Analysis (PCA) program has been written to aid in the interpretation of multivariate aerial radiometric data collected by the US Department of Energy (DOE) under the National Uranium Resource Evaluation (NURE) program. The variations exhibited by these data have been reduced and classified into a number of linear combinations by using the PCA program. The PCA program then generates histograms and outlier maps of the individual variates. Black and white plots can be made on a Calcomp plotter by the application of follow-up programs. All programs referred to in this guide were written for a DEC-10. From this analysis a geologist may begin to interpret the data structure, and insight into the geological processes underlying the data may be obtained.

  15. Independent component analysis: A new possibility for analysing series of electron energy loss spectra

    International Nuclear Information System (INIS)

    Bonnet, Noël; Nuzillard, Danielle

    2005-01-01

    A complementary approach is proposed for analysing series of electron energy-loss spectra that can be recorded with the spectrum-line technique, across an interface for instance. This approach, called blind source separation (BSS) or independent component analysis (ICA), complements two existing methods: the spatial difference approach and multivariate statistical analysis. The principle of the technique is presented and illustrations are given through one simulated example and one real example

  16. Electromagnetic and ultrasonic techniques to evaluate stress states of components; Elektromagnetische und Ultraschallverfahren zur Spannungsanalyse an Bauteilen

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, E.; Kern, R.; Theiner, W.A. [Fraunhofer Inst. fuer Zerstoerungsfreie Pruefverfahren, IZFP, Saarbruecken (Germany)

    1999-08-01

    The electromagnetic and ultrasonic techniques are comparatively recent NDT methods for the determination of stress states of components. They are simple in application but require pre-measurement preparation: electromagnetic techniques need calibration, and quantitative stress analysis by ultrasonic techniques needs reference values, i.e. verified materials-specific quantities obtained with representative specimens. Electromagnetic and ultrasonic techniques have been developed for specific tests on defined components, and the corresponding instruments and sensors have been used in practice for several years now. The paper summarizes the fundamental aspects and explains the state of the art by means of several examples. (orig.)

  17. A comparative study of different techniques in the stress analysis of a nuclear component

    International Nuclear Information System (INIS)

    Dickenson, P.W.; Floyd, C.G.

    1985-01-01

    The inner surface stresses around the corner between the cylindrical wall and end plate of a flat ended pressure vessel have been determined using finite element, boundary element and photoelastic techniques. The results demonstrate severe deficiencies under certain conditions in the performance of the quadrilateral axisymmetric finite element which is commonly used in this type of analysis. The boundary element method is shown to provide an alternative analysis route giving more accurate results. The hybrid formulation finite element is also found to give reasonable results for the analysis of stresses in regions of rapidly varying stress. (orig.)

  18. The derivative assay--an analysis of two fast components of DNA rejoining kinetics

    International Nuclear Information System (INIS)

    Sandstroem, B.E.

    1989-01-01

    The DNA rejoining kinetics of human U-118 MG cells were studied after gamma-irradiation with 4 Gy. The analysis of the sealing rate of the induced DNA strand breaks was made with a modification of the DNA unwinding technique. The modification meant that rather than just monitoring the number of existing breaks at each time of analysis, the velocity at which the rejoining process proceeded was determined. Two apparent first-order components of single-strand break repair could be identified during the 25 min of analysis. The half-times for the two components were 1.9 and 16 min, respectively.
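
    As an illustrative aside (not from the paper), two parallel first-order components can be fitted with scipy's curve_fit; the synthetic data below are generated with the half-times reported in the abstract.

        import numpy as np
        from scipy.optimize import curve_fit

        def two_component(t, a, t_half1, t_half2):
            """Fraction of unrejoined breaks for two parallel first-order processes."""
            k1, k2 = np.log(2) / t_half1, np.log(2) / t_half2
            return a * np.exp(-k1 * t) + (1 - a) * np.exp(-k2 * t)

        t = np.linspace(0, 25, 26)                          # minutes
        rng = np.random.default_rng(4)
        y = two_component(t, 0.6, 1.9, 16.0) + 0.01 * rng.standard_normal(t.size)

        popt, _ = curve_fit(two_component, t, y, p0=[0.5, 2.0, 10.0])
        print("half-times (min):", popt[1].round(2), popt[2].round(2))   # ~1.9 and ~16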

  19. Using Machine Learning Techniques in the Analysis of Oceanographic Data

    Science.gov (United States)

    Falcinelli, K. E.; Abuomar, S.

    2017-12-01

    Acoustic Doppler Current Profilers (ADCPs) are oceanographic tools capable of collecting large amounts of current profile data. Using unsupervised machine learning techniques such as principal component analysis, fuzzy c-means clustering, and self-organizing maps, patterns and trends in an ADCP dataset are found. Cluster validity algorithms such as visual assessment of cluster tendency and clustering index are used to determine the optimal number of clusters in the ADCP dataset. These techniques prove to be useful in analysis of ADCP data and demonstrate potential for future use in other oceanographic applications.

  20. Probabilistic structural analysis methods for select space propulsion system components

    Science.gov (United States)

    Millwater, H. R.; Cruse, T. A.

    1989-01-01

    The Probabilistic Structural Analysis Methods (PSAM) project developed at the Southwest Research Institute integrates state-of-the-art structural analysis techniques with probability theory for the design and analysis of complex large-scale engineering structures. An advanced efficient software system (NESSUS) capable of performing complex probabilistic analysis has been developed. NESSUS contains a number of software components to perform probabilistic analysis of structures. These components include: an expert system, a probabilistic finite element code, a probabilistic boundary element code and a fast probability integrator. The NESSUS software system is shown. An expert system is included to capture and utilize PSAM knowledge and experience. NESSUS/EXPERT is an interactive menu-driven expert system that provides information to assist in the use of the probabilistic finite element code NESSUS/FEM and the fast probability integrator (FPI). The expert system menu structure is summarized. The NESSUS system contains a state-of-the-art nonlinear probabilistic finite element code, NESSUS/FEM, to determine the structural response and sensitivities. A broad range of analysis capabilities and an extensive element library is present.

  1. A comparison of autonomous techniques for multispectral image analysis and classification

    Science.gov (United States)

    Valdiviezo-N., Juan C.; Urcid, Gonzalo; Toxqui-Quitl, Carina; Padilla-Vivanco, Alfonso

    2012-10-01

    Multispectral imaging has given rise to important applications related to the classification and identification of objects in a scene. Because multispectral instruments can be used to estimate the reflectance of materials in the scene, these techniques constitute fundamental tools for materials analysis and quality control. In recent years, a variety of algorithms has been developed to work with multispectral data, whose main purpose has been the correct classification of the objects in the scene. The present study gives a brief review of some classical techniques as well as a novel technique that have been used for such purposes. The use of principal component analysis and K-means clustering as important classification algorithms is discussed here. Moreover, a recent method based on the min-W and max-M lattice auto-associative memories, which was proposed for endmember determination in hyperspectral imagery, is introduced as a classification method. Besides a discussion of their mathematical foundation, we emphasize their main characteristics and the results achieved for two exemplar images composed of objects similar in appearance but spectrally different. The classification results show that the first components computed by principal component analysis can be used to highlight areas with different spectral characteristics. In addition, the use of lattice auto-associative memories provides good results for materials classification even in cases where spectral similarities appear in the spectral responses.

  2. Reliability modeling of digital component in plant protection system with various fault-tolerant techniques

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Kang, Hyun Gook; Kim, Hee Eun; Lee, Seung Jun; Seong, Poong Hyun

    2013-01-01

    Highlights: • Integrated fault coverage is introduced to reflect the characteristics of fault-tolerant techniques in the reliability model of the digital protection system in NPPs. • The integrated fault coverage considers the process of fault-tolerant techniques from detection to the fail-safe generation process. • With integrated fault coverage, the unavailability of a repairable component of the DPS can be estimated. • The newly developed reliability model can reveal the effects of fault-tolerant techniques explicitly for risk analysis. • The reliability model makes it possible to confirm changes in unavailability according to the variation of diverse factors. - Abstract: With the improvement of digital technologies, the digital protection system (DPS) incorporates multiple sophisticated fault-tolerant techniques (FTTs), in order to increase fault detection and to help the system safely perform the required functions in spite of the possible presence of faults. Fault detection coverage is a vital factor of an FTT in reliability; however, fault detection coverage alone is insufficient to reflect the effects of various FTTs in a reliability model. To reflect the characteristics of FTTs in the reliability model, integrated fault coverage is introduced. The integrated fault coverage considers the process of an FTT from detection to the fail-safe generation process. A model has been developed to estimate the unavailability of a repairable component of the DPS using the integrated fault coverage. The newly developed model can quantify unavailability under a diversity of conditions. Sensitivity studies are performed to ascertain the important variables which affect the integrated fault coverage and unavailability.
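
    The integrated fault coverage model is more detailed than can be shown here; the sketch below uses only the textbook two-term approximation to convey how coverage splits a component's unavailability into a repaired part and a latent part. All rates and intervals are hypothetical.

        def unavailability(failure_rate, coverage, mttr, test_interval):
            """Mean unavailability of a repairable digital component: faults caught
            by the fault-tolerant techniques (fraction `coverage`) are repaired
            after MTTR; undetected faults stay latent until the next periodic
            test, i.e. for test_interval / 2 on average."""
            detected = coverage * failure_rate * mttr
            latent = (1.0 - coverage) * failure_rate * test_interval / 2.0
            return detected + latent

        # Hypothetical module: 1e-5 failures/h, 8 h repair, quarterly (2190 h) test.
        for c in (0.90, 0.99, 0.999):
            print(c, f"{unavailability(1e-5, c, 8.0, 2190.0):.2e}")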

  3. Use of nuclear techniques for coal analysis in exploration, mining and processing

    International Nuclear Information System (INIS)

    Clayton, C.G.; Wormald, M.R.

    1982-01-01

    Nuclear techniques have a long history of application in the coal industry, during exploration and especially during coal preparation, for the measurement of ash content. The preferred techniques are based on X- and gamma-ray scattering and borehole logging, and on-line equipment incorporating these techniques is now in world-wide routine use. However, gamma-ray techniques are mainly restricted to density measurement and X-ray techniques are principally used for ash determinations. They have a limited range, and when used on-line some size reduction of the coal is usually required; a full elemental analysis is not possible. In particular, X- and gamma-ray techniques are insensitive to the principal elements in the combustible component and to many of the important elements in the mineral fraction. Neutron techniques, on the other hand, have a range which is compatible with on-line requirements, and all elements in the combustible component and virtually all elements in the mineral component can be observed. A complete elemental analysis of coal then allows the ash content and the calorific value to be determined on-line. This paper surveys the various nuclear techniques now in use and gives particular attention to the present state of development of neutron methods and to their advantages and limitations. Although it is shown that considerable further development and operational experience are still required, equipment now being introduced has a performance which matches many of the identified requirements, and an early improvement in specification can be anticipated.

  4. Generalized structured component analysis a component-based approach to structural equation modeling

    CERN Document Server

    Hwang, Heungsun

    2014-01-01

    Winner of the 2015 Sugiyama Meiko Award (Publication Award) of the Behaviormetric Society of Japan Developed by the authors, generalized structured component analysis is an alternative to two longstanding approaches to structural equation modeling: covariance structure analysis and partial least squares path modeling. Generalized structured component analysis allows researchers to evaluate the adequacy of a model as a whole, compare a model to alternative specifications, and conduct complex analyses in a straightforward manner. Generalized Structured Component Analysis: A Component-Based Approach to Structural Equation Modeling provides a detailed account of this novel statistical methodology and its various extensions. The authors present the theoretical underpinnings of generalized structured component analysis and demonstrate how it can be applied to various empirical examples. The book enables quantitative methodologists, applied researchers, and practitioners to grasp the basic concepts behind this new a...

  5. An RFI Detection Algorithm for Microwave Radiometers Using Sparse Component Analysis

    Science.gov (United States)

    Mohammed-Tano, Priscilla N.; Korde-Patel, Asmita; Gholian, Armen; Piepmeier, Jeffrey R.; Schoenwald, Adam; Bradley, Damon

    2017-01-01

    Radio Frequency Interference (RFI) is a threat to passive microwave measurements and if undetected, can corrupt science retrievals. The sparse component analysis (SCA) for blind source separation has been investigated to detect RFI in microwave radiometer data. Various techniques using SCA have been simulated to determine detection performance with continuous wave (CW) RFI.

  6. Efficient training of multilayer perceptrons using principal component analysis

    International Nuclear Information System (INIS)

    Bunzmann, Christoph; Urbanczik, Robert; Biehl, Michael

    2005-01-01

    A training algorithm for multilayer perceptrons is discussed and studied in detail, which relates to the technique of principal component analysis. The latter is performed with respect to a correlation matrix computed from the example inputs and their target outputs. Typical properties of the training procedure are investigated by means of a statistical physics analysis in models of learning regression and classification tasks. We demonstrate that the procedure requires far fewer examples for good generalization than traditional online training. For networks with a large number of hidden units we derive the training prescription which achieves, within our model, the optimal generalization behavior.

  7. Ultrasonic techniques for quality assessment of ITER Divertor plasma facing component

    International Nuclear Information System (INIS)

    Martinez-Ona, Rafael; Garcia, Monica; Medrano, Mercedes

    2009-01-01

    The divertor is one of the most challenging components of the ITER machine. Its plasma facing components contain thousands of joints that should be assessed to demonstrate their integrity during the required lifetime. Ultrasonic (US) techniques have been developed to study the capability of defect detection and to control the quality and degradation of these interfaces after the manufacturing process. Three types of joints, made of carbon fibre composite to copper alloy, tungsten to copper alloy, and copper to copper alloy, with two types of configurations, have been studied. More than 100 samples representing these configurations and containing implanted flaws of different sizes have been examined. The US techniques developed are detailed, and results of the examination of validation samples before and after high heat flux (HHF) tests are presented. The results show that for W monoblocks the US technique is able to detect, locate and size the degradations in the two sample joints; for CFC monoblocks, the US technique is also able to detect, locate and size the calibrated defects in the two joints before the HHF test, but after the HHF test the technique is not able to reliably detect defects in the CFC/Cu joint; finally, for the W flat tiles the US technique is able to detect, locate and size the calibrated defects in the two joints before the HHF test, although defect location and sizing are more difficult after the HHF test.

  8. Application of Multivariable Statistical Techniques in Plant-wide WWTP Control Strategies Analysis

    DEFF Research Database (Denmark)

    Flores Alsina, Xavier; Comas, J.; Rodríguez-Roda, I.

    2007-01-01

    The main objective of this paper is to present the application of selected multivariable statistical techniques in plant-wide wastewater treatment plant (WWTP) control strategies analysis. In this study, cluster analysis (CA), principal component analysis/factor analysis (PCA/FA) and discriminant analysis (DA) are applied to the evaluation matrix data set obtained by simulation of several control strategies applied to the plant-wide IWA Benchmark Simulation Model No 2 (BSM2). These techniques allow one i) to determine natural groups or clusters of control strategies with a similar behaviour, ii) to find and interpret hidden, complex and causal relation features in the data set and iii) to identify important discriminant variables within the groups found by the cluster analysis. This study illustrates the usefulness of multivariable statistical techniques for both analysis and interpretation...

  9. Principal component analysis of NEXAFS spectra for molybdenum speciation in hydrotreating catalysts

    International Nuclear Information System (INIS)

    Faro Junior, Arnaldo da C.; Rodrigues, Victor de O.; Eon, Jean-G.; Rocha, Angela S.

    2010-01-01

    Bulk and supported molybdenum based catalysts, modified by nickel, phosphorus or tungsten, were studied by NEXAFS spectroscopy at the Mo L-III and L-II edges. The technique of principal component analysis (PCA), together with a linear combination analysis (LCA), allowed the detection and quantification of molybdenum atoms in two different coordination states in the oxide form of the catalysts, namely tetrahedral and octahedral coordination. (author)
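
    A hedged sketch of the linear combination analysis step, with synthetic Gaussian peaks standing in for the tetrahedral and octahedral Mo reference spectra: non-negative least squares recovers the fractions of the two coordination states.

        import numpy as np
        from scipy.optimize import nnls

        energy = np.linspace(2515, 2535, 200)           # Mo L-III edge region (eV), invented

        def peak(center, width=1.5):
            return np.exp(-((energy - center) / width) ** 2)

        ref_tet = peak(2522) + 0.6 * peak(2526)         # stand-in: tetrahedral Mo
        ref_oct = peak(2524) + 0.8 * peak(2528)         # stand-in: octahedral Mo
        refs = np.column_stack([ref_tet, ref_oct])

        # "Measured" catalyst spectrum: 30% tetrahedral, 70% octahedral, plus noise.
        sample = 0.3 * ref_tet + 0.7 * ref_oct
        sample += 0.01 * np.random.default_rng(5).standard_normal(energy.size)

        weights, _ = nnls(refs, sample)                 # linear combination analysis
        print("fractions:", (weights / weights.sum()).round(3))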

  10. Particle Markov Chain Monte Carlo Techniques of Unobserved Component Time Series Models Using Ox

    DEFF Research Database (Denmark)

    Nonejad, Nima

    This paper details Particle Markov chain Monte Carlo (PMCMC) techniques for the analysis of unobserved component time series models using several economic data sets. PMCMC combines the particle filter with the Metropolis-Hastings algorithm. Overall, PMCMC provides a very compelling, computationally fast and efficient framework for estimation. These advantages are used, for instance, to estimate stochastic volatility models with a leverage effect or with Student-t distributed errors. We also model the changing time series characteristics of the US inflation rate by considering a heteroskedastic ARFIMA model where...

  11. Use of Atomic and Nuclear Techniques in Elemental and Isotopic Analysis

    International Nuclear Information System (INIS)

    2008-01-01

    This book is divided into four chapters, presented by six authors who are among the best Arab specialists, with long experience in atomic and nuclear techniques and an appreciation of their importance and capabilities in scientific research. Atomic and nuclear techniques are very successful in the field of analysis because they are often the only way to carry out an analysis with the required accuracy, while at the same time being the cheapest. A number of these techniques were collected in this book on the basis of their accuracy and how widely they are used in the analysis of material components, especially when the elements of interest are present in trace amounts, as in toxicology, archaeology, nutrition, medicine and other applications.

  12. Mapping brain activity in gradient-echo functional MRI using principal component analysis

    Science.gov (United States)

    Khosla, Deepak; Singh, Manbir; Don, Manuel

    1997-05-01

    The detection of sites of brain activation in functional MRI has been a topic of immense research interest, and many techniques have been proposed to this end. Recently, principal component analysis (PCA) has been applied to extract the activated regions and their time course of activation. This method is based on the assumption that the activation is orthogonal to other signal variations such as brain motion, physiological oscillations and other uncorrelated noises. A distinct advantage of this method is that it does not require any knowledge of the time course of the true stimulus paradigm. This technique is well suited to EPI image sequences, where the sampling rate is high enough to capture the effects of physiological oscillations. In this work, we propose and apply two methods that are based on PCA to conventional gradient-echo images and investigate their usefulness as tools to extract reliable information on brain activation. The first method is a conventional technique where a single image sequence with alternating on and off stages is subjected to a principal component analysis. The second method is a PCA-based approach called the common spatial factor analysis technique (CSF). As the name suggests, this method relies on common spatial factors between the above fMRI image sequence and a background fMRI. We have applied these methods to identify active brain areas during visual stimulation and motor tasks. The results from these methods are compared to those obtained by using the standard cross-correlation technique. We found good agreement in the areas identified as active across all three techniques. The results suggest that PCA and CSF methods have good potential in detecting the true stimulus correlated changes in the presence of other interfering signals.

  13. PEMBUATAN PERANGKAT LUNAK PENGENALAN WAJAH MENGGUNAKAN PRINCIPAL COMPONENTS ANALYSIS

    Directory of Open Access Journals (Sweden)

    Kartika Gunadi

    2001-01-01

    Full Text Available Face recognition is one of many important research areas, and today many applications have implemented it. Through the development of techniques like Principal Components Analysis (PCA), computers can now outperform humans in many face recognition tasks, particularly those in which large databases of faces must be searched. Principal Components Analysis was used to reduce the facial image dimension into fewer variables, which are easier to observe and handle. Those variables are then fed into an artificial neural network using the backpropagation method to recognise the given facial image. The test results show that PCA can provide high face recognition accuracy. For the training faces, a correct identification of 100% could be obtained. From some of the network combinations that have been tested, a best average correct identification of 91.11% could be obtained for the test faces, while the worst average result is 46.67% correct identification.

  14. Portable XRF and principal component analysis for bill characterization in forensic science.

    Science.gov (United States)

    Appoloni, C R; Melquiades, F L

    2014-02-01

    Several modern techniques have been applied to prevent the counterfeiting of money bills. The objective of this study was to demonstrate the potential of the portable X-ray fluorescence (PXRF) technique and the multivariate analysis method of Principal Component Analysis (PCA) for the classification of bills for use in forensic science. Dollar, Euro and Real (Brazilian currency) bills were measured directly at different colored regions, without any previous preparation. Spectra interpretation allowed the identification of Ca, Ti, Fe, Cu, Sr, Y, Zr and Pb. PCA separated the bills into three groups, with subgroups among the Brazilian currency. In conclusion, the samples were classified according to their origin, identifying the elements responsible for the differentiation and the basic pigment composition. PXRF allied to multivariate discriminant methods is a promising technique for the rapid and non-destructive identification of false bills in forensic science. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Multiview Bayesian Correlated Component Analysis

    DEFF Research Database (Denmark)

    Kamronn, Simon Due; Poulsen, Andreas Trier; Hansen, Lars Kai

    2015-01-01

    are identical. Here we propose a hierarchical probabilistic model that can infer the level of universality in such multiview data, from completely unrelated representations, corresponding to canonical correlation analysis, to identical representations as in correlated component analysis. This new model, which...... we denote Bayesian correlated component analysis, evaluates favorably against three relevant algorithms in simulated data. A well-established benchmark EEG data set is used to further validate the new model and infer the variability of spatial representations across multiple subjects....

  16. Independent component analysis based digital signal processing in coherent optical fiber communication systems

    Science.gov (United States)

    Li, Xiang; Luo, Ming; Qiu, Ying; Alphones, Arokiaswami; Zhong, Wen-De; Yu, Changyuan; Yang, Qi

    2018-02-01

    In this paper, channel equalization techniques for coherent optical fiber transmission systems based on independent component analysis (ICA) are reviewed. The principle of ICA for blind source separation is introduced. ICA-based channel equalization after both single-mode fiber and few-mode fiber transmission is investigated for single-carrier and orthogonal frequency division multiplexing (OFDM) modulation formats, respectively. Performance comparisons with conventional channel equalization techniques are discussed.
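    The blind source separation step at the heart of such equalizers can be sketched with a toy model: two independent symbol streams are mixed by an unknown 2x2 channel matrix (a crude stand-in for mode coupling) and recovered with FastICA; as is inherent to ICA, recovery is only up to permutation, sign and scale.

    # Toy blind source separation: BPSK-like sources, unknown mixing,
    # FastICA recovery. Channel matrix and noise level are illustrative.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(2)
    s = rng.choice([-1.0, 1.0], size=(4000, 2))   # independent symbol streams
    s += 0.05 * rng.normal(size=s.shape)          # small additive noise
    A = np.array([[0.9, 0.4], [0.3, 0.8]])        # unknown channel mixing
    x = s @ A.T                                   # observed mixtures

    ica = FastICA(n_components=2, whiten="unit-variance", fun="cube",
                  random_state=0)
    s_hat = ica.fit_transform(x)                  # recovered sources

    corr = np.abs(np.corrcoef(s.T, s_hat.T)[:2, 2:])
    print(np.round(corr, 2))                      # ~1 in one entry per row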

  17. The Analysis of Dimensionality Reduction Techniques in Cryptographic Object Code Classification

    Energy Technology Data Exchange (ETDEWEB)

    Jason L. Wright; Milos Manic

    2010-05-01

    This paper compares the application of three different dimension reduction techniques to the problem of locating cryptography in compiled object code. A simple classifier is used to compare dimension reduction via sorted covariance, principal component analysis, and correlation-based feature subset selection. The analysis concentrates on the classification accuracy as the number of dimensions is increased.

  18. Power Transformer Differential Protection Based on Neural Network Principal Component Analysis, Harmonic Restraint and Park's Plots

    OpenAIRE

    Tripathy, Manoj

    2012-01-01

    This paper describes a new approach for power transformer differential protection which is based on the wave-shape recognition technique. An algorithm based on neural network principal component analysis (NNPCA) with back-propagation learning is proposed for digital differential protection of power transformer. The principal component analysis is used to preprocess the data from power system in order to eliminate redundant information and enhance hidden pattern of differential current to disc...

  19. Scaling Analysis Techniques to Establish Experimental Infrastructure for Component, Subsystem, and Integrated System Testing

    Energy Technology Data Exchange (ETDEWEB)

    Sabharwall, Piyush [Idaho National Laboratory (INL), Idaho Falls, ID (United States); O'Brien, James E. [Idaho National Laboratory (INL), Idaho Falls, ID (United States); McKellar, Michael G. [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Housley, Gregory K. [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Bragg-Sitton, Shannon M. [Idaho National Laboratory (INL), Idaho Falls, ID (United States)

    2015-03-01

    Hybrid energy system research has the potential to expand the application for nuclear reactor technology beyond electricity. The purpose of this research is to reduce both technical and economic risks associated with energy systems of the future. Nuclear hybrid energy systems (NHES) mitigate the variability of renewable energy sources, provide opportunities to produce revenue from different product streams, and avoid capital inefficiencies by matching electrical output to demand by using excess generation capacity for other purposes when it is available. An essential step in the commercialization and deployment of this advanced technology is scaled testing to demonstrate integrated dynamic performance of advanced systems and components when risks cannot be mitigated adequately by analysis or simulation. Further testing in a prototypical environment is needed for validation and higher confidence. This research supports the development of advanced nuclear reactor technology and NHES, and their adaptation to commercial industrial applications that will potentially advance U.S. energy security, economy, and reliability and further reduce carbon emissions. Experimental infrastructure development for testing and feasibility studies of coupled systems can similarly support other projects having similar developmental needs and can generate data required for validation of models in thermal energy storage and transport, energy, and conversion process development. Experiments performed in the Systems Integration Laboratory will acquire performance data, identify scalability issues, and quantify technology gaps and needs for various hybrid or other energy systems. This report discusses detailed scaling (component and integrated system) and heat transfer figures of merit that will establish the experimental infrastructure for component, subsystem, and integrated system testing to advance the technology readiness of components and systems to the level required for commercial

  20. Classification of calcium supplements through application of principal component analysis: a study by inaa and aas

    International Nuclear Information System (INIS)

    Waheed, S.; Rahman, S.; Siddique, N.

    2013-01-01

    Different types of Ca supplements are available in the local markets of Pakistan. It is sometimes difficult to classify these with respect to their composition. In the present work the principal component analysis (PCA) technique was applied to classify different Ca supplements on the basis of their elemental data obtained using instrumental neutron activation analysis (INAA) and atomic absorption spectrometry (AAS) techniques. The graphical representation of PCA scores from the intricate analytical data successfully separated the supplements into four different groups, with compatible samples grouped together. These included supplements with CaCO3 as the Ca source along with vitamin C, supplements with CaCO3 as the Ca source along with vitamin D, supplements with Ca from bone meal, and supplements with chelated calcium. (author)

  1. Running Technique is an Important Component of Running Economy and Performance

    Science.gov (United States)

    FOLLAND, JONATHAN P.; ALLEN, SAM J.; BLACK, MATTHEW I.; HANDSAKER, JOSEPH C.; FORRESTER, STEPHANIE E.

    2017-01-01

    ABSTRACT Despite an intuitive relationship between technique and both running economy (RE) and performance, and the diverse techniques used by runners to achieve forward locomotion, the objective importance of overall technique and the key components therein remain to be elucidated. Purpose This study aimed to determine the relationship between individual and combined kinematic measures of technique with both RE and performance. Methods Ninety-seven endurance runners (47 females) of diverse competitive standards performed a discontinuous protocol of incremental treadmill running (4-min stages, 1-km·h−1 increments). Measurements included three-dimensional full-body kinematics, respiratory gases to determine energy cost, and velocity of lactate turn point. Five categories of kinematic measures (vertical oscillation, braking, posture, stride parameters, and lower limb angles) and locomotory energy cost (LEc) were averaged across 10–12 km·h−1 (the highest common velocity < velocity of lactate turn point). Performance was measured as season's best (SB) time converted to a sex-specific z-score. Results Numerous kinematic variables were correlated with RE and performance (LEc, 19 variables; SB time, 11 variables). Regression analysis found three variables (pelvis vertical oscillation during ground contact normalized to height, minimum knee joint angle during ground contact, and minimum horizontal pelvis velocity) explained 39% of LEc variability. In addition, four variables (minimum horizontal pelvis velocity, shank touchdown angle, duty factor, and trunk forward lean) combined to explain 31% of the variability in performance (SB time). Conclusions This study provides novel and robust evidence that technique explains a substantial proportion of the variance in RE and performance. We recommend that runners and coaches are attentive to specific aspects of stride parameters and lower limb angles in part to optimize pelvis movement, and ultimately enhance performance

  2. A Three-Component Model for Magnetization Transfer. Solution by Projection-Operator Technique, and Application to Cartilage

    Science.gov (United States)

    Adler, Ronald S.; Swanson, Scott D.; Yeung, Hong N.

    1996-01-01

    A projection-operator technique is applied to a general three-component model for magnetization transfer, extending our previous two-component model [R. S. Adler and H. N. Yeung, J. Magn. Reson. A 104, 321 (1993), and H. N. Yeung, R. S. Adler, and S. D. Swanson, J. Magn. Reson. A 106, 37 (1994)]. The PO technique provides an elegant means of deriving a simple, effective rate equation in which there is a natural separation of relaxation and source terms, and it allows incorporation of Redfield-Provotorov theory without any additional assumptions or restrictive conditions. The PO technique is extended to incorporate more general, multicomponent models. The three-component model is used to fit experimental data from samples of human hyaline cartilage and fibrocartilage. The fits of the three-component model are compared to the fits of the two-component model.

  3. Comparison of common components analysis with principal components analysis and independent components analysis: Application to SPME-GC-MS volatolomic signatures.

    Science.gov (United States)

    Bouhlel, Jihéne; Jouan-Rimbaud Bouveresse, Delphine; Abouelkaram, Said; Baéza, Elisabeth; Jondreville, Catherine; Travel, Angélique; Ratel, Jérémy; Engel, Erwan; Rutledge, Douglas N

    2018-02-01

    The aim of this work is to compare a novel exploratory chemometrics method, Common Components Analysis (CCA), with Principal Components Analysis (PCA) and Independent Components Analysis (ICA). CCA consists in adapting the multi-block statistical method known as Common Components and Specific Weights Analysis (CCSWA or ComDim) by applying it to a single data matrix, with one variable per block. As an application, the three methods were applied to SPME-GC-MS volatolomic signatures of livers in an attempt to reveal volatile organic compound (VOC) markers of chicken exposure to different types of micropollutants. An application of CCA to the initial SPME-GC-MS data revealed a drift in the sample Scores along CC2, as a function of injection order, probably resulting from time-related evolution in the instrument. This drift was eliminated by orthogonalization of the data set with respect to CC2, and the resulting data are used as the orthogonalized data input into each of the three methods. Since the first step in CCA is to norm-scale all the variables, preliminary data scaling has no effect on the results, so CCA was applied only to orthogonalized SPME-GC-MS data, while PCA and ICA were applied to the "orthogonalized", "orthogonalized and Pareto-scaled", and "orthogonalized and autoscaled" data. The comparison showed that PCA results were highly dependent on the scaling of variables, contrary to ICA, where the data scaling did not have a strong influence. Nevertheless, for both PCA and ICA the clearest separations of exposed groups were obtained after autoscaling of variables. The main part of this work was to compare the CCA results using the orthogonalized data with those obtained with PCA and ICA applied to orthogonalized and autoscaled variables. The clearest separations of exposed chicken groups were obtained by CCA. CCA Loadings also clearly identified the variables contributing most to the Common Components giving separations. The PCA Loadings did not

  4. Structure analysis of active components of traditional Chinese medicines

    DEFF Research Database (Denmark)

    Zhang, Wei; Sun, Qinglei; Liu, Jianhua

    2013-01-01

    Traditional Chinese Medicines (TCMs) have been widely used for healing of different health problems for thousands of years. They have been used as therapeutic, complementary and alternative medicines. TCMs usually consist of dozens to hundreds of various compounds, which are extracted from raw...... herbal sources by aqueous or alcoholic solvents. Therefore, it is difficult to correlate the pharmaceutical effect to a specific lead compound in the TCMs. A detailed analysis of various components in TCMs has been a great challenge for modern analytical techniques in recent decades. In this chapter...

  5. An elementary components of variance analysis for multi-centre quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1978-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality-control (QC) studies. Simple graphical display of data in the form of histograms is useful but insufficient. The paper discusses statistical analysis methods for such studies using an "analysis of variance with components of variance estimation". This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Problems with RIA data, e.g. severe non-uniformity of variance and/or departure from a normal distribution, violate some of the usual assumptions underlying the analysis of variance. In order to correct these problems, it is often necessary to transform the data before analysis by using a logarithmic, square-root, percentile, ranking, RIDIT, "Studentizing" or other transformation. Non-metric transformations such as ranks or percentiles protect against the undue influence of outlying observations, but discard much intrinsic information. Several possible relationships of the standard deviation to the laboratory mean are considered. Each relationship corresponds to an underlying statistical model and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine whether an appropriate model has been chosen, although the exact functional relationship of standard deviation to laboratory mean may be difficult to establish. Appropriate graphical display aids visual understanding of the data. A plot of the ranked standard deviation versus the ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean
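    For a balanced design, the allocation described above reduces to simple mean-square arithmetic. The sketch below simulates log-normal assay results for a handful of hypothetical laboratories, applies the logarithmic transformation, and estimates the between-laboratory and within-laboratory variance components.

    # One-way random-effects components of variance for a multi-lab QC study.
    import numpy as np

    rng = np.random.default_rng(3)
    n_labs, n_rep = 8, 10
    lab_effect = rng.normal(loc=np.log(100), scale=0.15, size=n_labs)
    data = np.exp(lab_effect[:, None] + rng.normal(scale=0.10, size=(n_labs, n_rep)))

    y = np.log(data)                     # variance-stabilizing transformation
    lab_avg = y.mean(axis=1)

    ms_between = n_rep * ((lab_avg - y.mean()) ** 2).sum() / (n_labs - 1)
    ms_within = ((y - lab_avg[:, None]) ** 2).sum() / (n_labs * (n_rep - 1))

    var_within = ms_within
    var_between = max((ms_between - ms_within) / n_rep, 0.0)   # truncate at zero
    print(f"between-lab variance: {var_between:.4f}")
    print(f"within-lab variance:  {var_within:.4f}")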

  6. Improved application of independent component analysis to functional magnetic resonance imaging study via linear projection techniques.

    Science.gov (United States)

    Long, Zhiying; Chen, Kewei; Wu, Xia; Reiman, Eric; Peng, Danling; Yao, Li

    2009-02-01

    Spatial independent component analysis (sICA) has been widely used to analyze functional magnetic resonance imaging (fMRI) data. The well-accepted implicit assumption is the spatial statistical independence of the intrinsic sources identified by sICA, which makes sICA difficult to apply to data containing interdependent sources and confounding factors. This interdependency can arise, for instance, from fMRI studies investigating two tasks in a single session. In this study, we introduced a linear projection approach and considered its utilization as a tool to separate task-related components from two-task fMRI data. The robustness and feasibility of the method are substantiated through simulations on computer data and on real resting-state fMRI data. Both simulated and real two-task fMRI experiments demonstrated that sICA in combination with the projection method succeeded in separating spatially dependent components and had better detection power than a purely model-based method when estimating activation induced by each task as well as by both tasks.

  7. Model reduction by weighted Component Cost Analysis

    Science.gov (United States)

    Kim, Jae H.; Skelton, Robert E.

    1990-01-01

    Component Cost Analysis considers any given system driven by a white noise process as an interconnection of different components, and assigns a metric called 'component cost' to each component. These component costs measure the contribution of each component to a predefined quadratic cost function. A reduced-order model of the given system may be obtained by deleting those components that have the smallest component costs. The theory of Component Cost Analysis is extended to include finite-bandwidth colored noises. The results also apply when actuators have dynamics of their own. Closed-form analytical expressions of component costs are also derived for a mechanical system described by its modal data. This is very useful for computing the modal costs of very high order systems. A numerical example for the MINIMAST system is presented.

  8. Computational techniques for inelastic analysis and numerical experiments

    International Nuclear Information System (INIS)

    Yamada, Y.

    1977-01-01

    A number of formulations have been proposed for inelastic analysis, particularly for the thermal elastic-plastic creep analysis of nuclear reactor components. In the elastic-plastic regime, which principally concerns time-independent behavior, numerical techniques based on the finite element method have been well exploited and computations have become routine work. With respect to problems in which the time-dependent behavior is significant, it is desirable to incorporate a procedure that is workable with the mechanical model formulation as well as with the equation-of-state methods proposed so far. A computer program should also take into account the strain-dependent and/or time-dependent micro-structural changes which often occur during the operation of structural components at increasingly high temperatures for long periods of time. Special considerations are crucial if the analysis is to be extended to the large strain regime, where geometric nonlinearities predominate. The present paper introduces a rational updated formulation and a computer program under development that take into account the various requisites stated above. (Auth.)

  9. Demixed principal component analysis of neural population data.

    Science.gov (United States)

    Kobak, Dmitry; Brendel, Wieland; Constantinidis, Christos; Feierstein, Claudia E; Kepecs, Adam; Mainen, Zachary F; Qi, Xue-Lian; Romo, Ranulfo; Uchida, Naoshige; Machens, Christian K

    2016-04-12

    Neurons in higher cortical areas, such as the prefrontal cortex, are often tuned to a variety of sensory and motor variables, and are therefore said to display mixed selectivity. This complexity of single neuron responses can obscure what information these areas represent and how it is represented. Here we demonstrate the advantages of a new dimensionality reduction technique, demixed principal component analysis (dPCA), that decomposes population activity into a few components. In addition to systematically capturing the majority of the variance of the data, dPCA also exposes the dependence of the neural representation on task parameters such as stimuli, decisions, or rewards. To illustrate our method we reanalyze population data from four datasets comprising different species, different cortical areas and different experimental tasks. In each case, dPCA provides a concise way of visualizing the data that summarizes the task-dependent features of the population response in a single figure.

  10. Fetal source extraction from magnetocardiographic recordings by dependent component analysis

    Energy Technology Data Exchange (ETDEWEB)

    Araujo, Draulio B de [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Barros, Allan Kardec [Department of Electrical Engineering, Federal University of Maranhao, Sao Luis, Maranhao (Brazil); Estombelo-Montesco, Carlos [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Zhao, Hui [Department of Medical Physics, University of Wisconsin, Madison, WI (United States); Filho, A C Roque da Silva [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Baffa, Oswaldo [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Wakai, Ronald [Department of Medical Physics, University of Wisconsin, Madison, WI (United States); Ohnishi, Noboru [Department of Information Engineering, Nagoya University (Japan)

    2005-10-07

    Fetal magnetocardiography (fMCG) has been extensively reported in the literature as a non-invasive, prenatal technique that can be used to monitor various functions of the fetal heart. However, fMCG signals often have low signal-to-noise ratio (SNR) and are contaminated by strong interference from the mother's magnetocardiogram signal. A promising, efficient tool for extracting signals, even under low SNR conditions, is blind source separation (BSS), or independent component analysis (ICA). Herein we propose an algorithm based on a variation of ICA, where the signal of interest is extracted using a time delay obtained from an autocorrelation analysis. We model the system using autoregression, and identify the signal component of interest from the poles of the autocorrelation function. We show that the method is effective in removing the maternal signal, and is computationally efficient. We also compare our results to more established ICA methods, such as FastICA.

  11. Nonlinear principal component analysis and its applications

    CERN Document Server

    Mori, Yuichi; Makino, Naomichi

    2016-01-01

    This book expounds the principle and related applications of nonlinear principal component analysis (PCA), which is a useful method for analyzing data with mixed measurement levels. In the part dealing with the principle, after a brief introduction to ordinary PCA, a PCA for categorical data (nominal and ordinal) is introduced as nonlinear PCA, in which an optimal scaling technique is used to quantify the categorical variables. Alternating least squares (ALS) is the main algorithm in the method. Multiple correspondence analysis (MCA), a special case of nonlinear PCA, is also introduced. All formulations in these methods are integrated in the same manner as matrix operations. Because data of any measurement level can be treated consistently as numerical data and ALS is a very powerful tool for estimation, the methods can be utilized in a variety of fields such as biometrics, econometrics, psychometrics, and sociology. In the applications part of the book, four applications are introduced: variable selection for mixed...

  12. Independent component analysis classification of laser induced breakdown spectroscopy spectra

    International Nuclear Information System (INIS)

    Forni, Olivier; Maurice, Sylvestre; Gasnault, Olivier; Wiens, Roger C.; Cousin, Agnès; Clegg, Samuel M.; Sirven, Jean-Baptiste; Lasue, Jérémie

    2013-01-01

    The ChemCam instrument on board the Mars Science Laboratory (MSL) rover uses the laser-induced breakdown spectroscopy (LIBS) technique to remotely analyze Martian rocks. It retrieves spectra up to a distance of seven meters in order to qualitatively and quantitatively analyze the sampled rocks. Like any field application, on-site measurements by LIBS are altered by diverse matrix effects which induce signal variations that are specific to the nature of the sample. Qualitative aspects remain to be studied, particularly LIBS sample identification to determine which samples are of interest for further analysis by ChemCam and other rover instruments. This can be performed with the help of different chemometric methods that model the spectral variance in order to identify the rock from its spectrum. In this paper we test independent component analysis (ICA) rock classification by remote LIBS. We show that using measures of distance in ICA space, namely the Manhattan and the Mahalanobis distance, we can efficiently classify spectra of an unknown rock. The Mahalanobis distance gives overall better performance and is easier to manage than the Manhattan distance, for which the determination of the cut-off distance is not easy. However, these two techniques are complementary and their analytical performance will improve with time during MSL operations as the quantity of available Martian spectra grows. The analysis accuracy and performance will benefit from a combination of the two approaches. - Highlights: • We use a novel independent component analysis method to classify LIBS spectra. • We demonstrate the usefulness of ICA. • We report the performance of the ICA classification. • We compare it to other classical classification schemes
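    The Mahalanobis classification step can be sketched independently of the LIBS processing itself: given component-space scores and class labels from a prior decomposition (here, random clusters with hypothetical class names), an unknown spectrum's scores are assigned to the class whose cloud lies at the smallest Mahalanobis distance.

    # Nearest-class assignment by Mahalanobis distance in component space.
    import numpy as np

    def mahalanobis_classify(scores, labels, query):
        """Assign `query` to the class with the smallest Mahalanobis distance."""
        best, best_d = None, np.inf
        for c in np.unique(labels):
            pts = scores[labels == c]
            mu = pts.mean(axis=0)
            cov_inv = np.linalg.pinv(np.cov(pts, rowvar=False))
            d = np.sqrt((query - mu) @ cov_inv @ (query - mu))
            if d < best_d:
                best, best_d = c, d
        return best, best_d

    rng = np.random.default_rng(4)
    scores = np.vstack([rng.normal(m, 0.5, (30, 3)) for m in (0.0, 3.0)])
    labels = np.repeat(["class_A", "class_B"], 30)    # hypothetical rock classes
    print(mahalanobis_classify(scores, labels, np.array([2.8, 3.1, 2.9])))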

  13. PRINCIPAL COMPONENT ANALYSIS (PCA) AND ITS APPLICATION WITH SPSS

    Directory of Open Access Journals (Sweden)

    Hermita Bus Umar

    2009-03-01

    PCA (Principal Component Analysis) comprises statistical techniques applied to a single set of variables when the researcher is interested in discovering which variables in the set form coherent subsets that are relatively independent of one another. Variables that are correlated with one another but largely independent of other subsets of variables are combined into factors. The goal of PCA is to determine the extent to which each variable is explained by each dimension. Steps in PCA include selecting and measuring a set of variables, preparing the correlation matrix, extracting a set of factors from the correlation matrix, rotating the factors to increase interpretability, and interpreting the results.
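    The listed steps can be reproduced in a few lines outside SPSS. The sketch below standardizes simulated correlated variables, forms the correlation matrix, and extracts the components as its eigenvectors; rotation is omitted for brevity.

    # PCA from the correlation matrix: standardize, decompose, score.
    import numpy as np

    rng = np.random.default_rng(9)
    latent = rng.normal(size=(100, 2))                        # two latent factors
    X = latent @ rng.normal(size=(2, 6)) + 0.3 * rng.normal(size=(100, 6))

    Z = (X - X.mean(axis=0)) / X.std(axis=0)                  # standardize
    R = np.corrcoef(Z, rowvar=False)                          # correlation matrix
    eigvals, eigvecs = np.linalg.eigh(R)                      # ascending order
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    print("proportion of variance:", np.round(eigvals / eigvals.sum(), 2))
    scores = Z @ eigvecs[:, :2]                               # component scores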

  14. A new methodology based on functional principal component analysis to study postural stability post-stroke.

    Science.gov (United States)

    Sánchez-Sánchez, M Luz; Belda-Lois, Juan-Manuel; Mena-Del Horno, Silvia; Viosca-Herrero, Enrique; Igual-Camacho, Celedonia; Gisbert-Morant, Beatriz

    2018-05-05

    A major goal in stroke rehabilitation is the establishment of more effective physical therapy techniques to recover postural stability. Functional Principal Component Analysis provides greater insight into recovery trends. However, when missing values exist, obtaining functional data presents some difficulties. The purpose of this study was to reveal an alternative technique for obtaining the Functional Principal Components without requiring the conversion to functional data beforehand, and to investigate this methodology to determine the effect of specific physical therapy techniques on balance recovery trends in elderly subjects with hemiplegia post-stroke. A randomized controlled pilot trial was developed. Thirty inpatients post-stroke were included. Control and target groups were treated with the same conventional physical therapy protocol based on functional criteria, but specific techniques were added to the target group depending on the subjects' functional level. Postural stability during standing was quantified by posturography. The assessments were performed once a month from the moment the participants were able to stand up to six months post-stroke. The target group showed a significant improvement in postural control recovery trend six months after stroke that was not present in the control group. Some of the assessed parameters revealed significant differences between treatment groups. The proposed approach allowed Functional Principal Component Analysis to be performed when data are scarce. Moreover, it allowed the dynamics of recovery of two different treatment groups to be determined, showing that the techniques added in the target group increased postural stability compared to the base protocol. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. A Principal Component Analysis of Skills and Competencies Required of Quantity Surveyors: Nigerian Perspective

    OpenAIRE

    Oluwasuji Dada, Joshua

    2014-01-01

    The purpose of this paper is to examine the intrinsic relationships among sets of quantity surveyors' skill and competence variables with a view to reducing them into principal components. The research adopts a data reduction approach using the factor analysis statistical technique. A structured questionnaire was administered among major stakeholders in the Nigerian construction industry. The respondents were asked to give ratings, on a 5-point Likert scale, of the skills and competencies re...

  16. COPD phenotype description using principal components analysis

    DEFF Research Database (Denmark)

    Roy, Kay; Smith, Jacky; Kolsum, Umme

    2009-01-01

    BACKGROUND: Airway inflammation in COPD can be measured using biomarkers such as induced sputum and Fe(NO). This study set out to explore the heterogeneity of COPD using biomarkers of airway and systemic inflammation and pulmonary function by principal components analysis (PCA). SUBJECTS...... AND METHODS: In 127 COPD patients (mean FEV1 61%), pulmonary function, Fe(NO), plasma CRP and TNF-alpha, sputum differential cell counts and sputum IL8 (pg/ml) were measured. Principal components analysis as well as multivariate analysis was performed. RESULTS: PCA identified four main components (% variance...... associations between the variables within components 1 and 2. CONCLUSION: COPD is a multi dimensional disease. Unrelated components of disease were identified, including neutrophilic airway inflammation which was associated with systemic inflammation, and sputum eosinophils which were related to increased Fe...

  17. Fault tree technique: advances in probabilistic and logical analysis

    International Nuclear Information System (INIS)

    Clarotti, C.A.; Amendola, A.; Contini, S.; Squellati, G.

    1982-01-01

    Fault tree reliability analysis is used for assessing the risk associated with systems of increasing complexity (phased mission systems, systems with multistate components, systems with non-monotonic structure functions). Much care must be taken to make sure that the fault tree technique is not used beyond its correct range of validity. To this end, a critical review of the mathematical foundations of reliability fault tree analysis is carried out. Limitations are highlighted and potential solutions to open problems are suggested. Moreover, an overview is given of the most recent developments in the implementation of integrated software (the SALP-MP, SALP-NOT and SALP-CAFT codes) for the analysis of a wide class of systems

  18. Component evaluation testing and analysis algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    Hart, Darren M.; Merchant, Bion John

    2011-10-01

    The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.

  19. APPLYING PRINCIPAL COMPONENT ANALYSIS, MULTILAYER PERCEPTRON AND SELF-ORGANIZING MAPS FOR OPTICAL CHARACTER RECOGNITION

    Directory of Open Access Journals (Sweden)

    Khuat Thanh Tung

    2016-11-01

    Optical Character Recognition plays an important role in data storage and data mining as the number of documents stored as images increases. Effective ways to convert images of typewritten or printed text into machine-encoded text are needed to support information handling. In this paper, therefore, techniques for converting images into editable text, such as principal component analysis, multilayer perceptron networks, self-organizing maps, and an improved multilayer neural network using principal component analysis, are evaluated experimentally. The obtained results indicate the effectiveness and feasibility of the proposed methods.

  20. Recent development in mass spectrometry and its hyphenated techniques for the analysis of medicinal plants.

    Science.gov (United States)

    Zhu, Ming-Zhi; Chen, Gui-Lin; Wu, Jian-Lin; Li, Na; Liu, Zhong-Hua; Guo, Ming-Quan

    2018-04-23

    Medicinal plants are gaining increasing attention worldwide due to their empirical therapeutic efficacy and their role as a huge natural compound pool for new drug discovery and development. The efficacy, safety and quality of medicinal plants are the main concerns, and these are highly dependent on the comprehensive analysis of the chemical components in the medicinal plants. With the advances in mass spectrometry (MS) techniques, comprehensive analysis and fast identification of complex phytochemical components have become feasible and may meet the needs of medicinal plant analysis. Our aim is to provide an overview of the latest developments in MS and its hyphenated techniques and their applications in the comprehensive analysis of medicinal plants. Applications of various MS and hyphenated MS techniques to the analysis of medicinal plants, including but not limited to one-dimensional and multi-dimensional chromatography coupled to MS, ambient ionisation MS, and mass spectral databases, are reviewed and compared in this work. Recent advances in MS and its hyphenated techniques have made MS one of the most powerful tools for the analysis of complex extracts from medicinal plants due to its excellent separation and identification ability, high sensitivity and resolution, and wide detection dynamic range. To achieve high-throughput or multi-dimensional analysis of medicinal plants, state-of-the-art MS and its hyphenated techniques have played, and will continue to play, a great role as the major platform for further research, providing insight into both their empirical therapeutic efficacy and quality control. Copyright © 2018 John Wiley & Sons, Ltd.

  1. Sodium removal, storage, and requalification of components

    International Nuclear Information System (INIS)

    Gallegos, A.; Shimazaki, T.; Oliva, R.M.

    1974-01-01

    The objectives of this program are to devise, develop, test, and evaluate techniques for sodium removal and storage of test specimens and components, and to expand and refine, by test and analysis, the sodium removal and storage techniques and procedures for use in processing typical LMFBR components

  2. Motor current and leakage flux signature analysis technique for condition monitoring

    International Nuclear Information System (INIS)

    Pillai, M.V.; Moorthy, R.I.K.; Mahajan, S.C.

    1994-01-01

    Until recently, analysis of vibration signals was the only means available to predict the state of health of plant equipment. Motor current and leakage magnetic flux signature analysis is acquiring importance as a technique for the detection of incipient damage in electrical machines and as a supplementary technique for diagnostics of driven equipment such as centrifugal and reciprocating pumps. The state of health of the driven equipment is assessed by analysing the time signal and frequency spectrum, and by trend analysis. For example, the pump vane frequency, piston stroke frequency, gear frequency and bearing frequencies are indicated in the current and flux spectra. By maintaining a periodic record of the amplitudes of the various frequency lines in the frequency spectra, it is possible to track the trend of deterioration of parts and components of the pump. All problems arising out of inappropriate mechanical alignment of vertical pumps are easily identified by a combined analysis of current, flux and vibration signals. It is found that the current signature analysis technique is in itself a sufficient method for analysing the state of health of reciprocating pumps and compressors. (author). 10 refs., 4 figs
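    The trending idea can be sketched directly: compute the amplitude spectrum of each motor current record and follow the line at a known fault frequency across successive surveys. The sampling rate, vane-passing frequency and amplitudes below are illustrative assumptions, not values from the paper.

    # Current-signature trending: track one spectral line across surveys.
    import numpy as np

    fs = 5_000.0                                   # sample rate, Hz (assumed)
    t = np.arange(0, 2.0, 1 / fs)
    vane_freq = 145.0                              # hypothetical vane frequency, Hz

    def survey(fault_amp, seed=0):
        """One monitoring record: 50 Hz supply plus a fault line."""
        i_t = (np.sin(2 * np.pi * 50 * t)
               + fault_amp * np.sin(2 * np.pi * vane_freq * t))
        return i_t + 0.02 * np.random.default_rng(seed).normal(size=t.size)

    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    bin_ix = np.argmin(np.abs(freqs - vane_freq))

    for month, amp in enumerate([0.001, 0.005, 0.02], start=1):
        spec = 2 * np.abs(np.fft.rfft(survey(amp))) / t.size   # amplitude spectrum
        print(f"survey {month}: vane-line amplitude = {spec[bin_ix]:.4f}")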

  3. Predicting timing of foot strike during running, independent of striking technique, using principal component analysis of joint angles.

    Science.gov (United States)

    Osis, Sean T; Hettinga, Blayne A; Leitch, Jessica; Ferber, Reed

    2014-08-22

    As 3-dimensional (3D) motion-capture for clinical gait analysis continues to evolve, new methods must be developed to improve the detection of gait cycle events based on kinematic data. Recently, the application of principal component analysis (PCA) to gait data has shown promise in detecting important biomechanical features. Therefore, the purpose of this study was to define a new foot strike detection method for a continuum of striking techniques, by applying PCA to joint angle waveforms. In accordance with Newtonian mechanics, it was hypothesized that transient features in the sagittal-plane accelerations of the lower extremity would be linked with the impulsive application of force to the foot at foot strike. Kinematic and kinetic data from treadmill running were selected for 154 subjects, from a database of gait biomechanics. Ankle, knee and hip sagittal plane angular acceleration kinematic curves were chained together to form a row input to a PCA matrix. A linear polynomial was calculated based on PCA scores, and a 10-fold cross-validation was performed to evaluate prediction accuracy against gold-standard foot strike as determined by a 10 N rise in the vertical ground reaction force. Results show 89-94% of all predicted foot strikes were within 4 frames (20 ms) of the gold standard with the largest error being 28 ms. It is concluded that this new foot strike detection is an improvement on existing methods and can be applied regardless of whether the runner exhibits a rearfoot, midfoot, or forefoot strike pattern. Copyright © 2014 Elsevier Ltd. All rights reserved.
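    A simplified version of this pipeline on synthetic data: angular-acceleration waveforms are chained into one row per subject, reduced by PCA, and a linear model of the scores is cross-validated against the gold-standard foot-strike frame. The embedded impact transient, the 200 Hz frame rate and all array sizes are assumptions for illustration.

    # Chained waveforms -> PCA scores -> linear model -> 10-fold CV.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import KFold, cross_val_predict

    rng = np.random.default_rng(5)
    n_subj, n_frames = 154, 100
    true_fs = rng.uniform(30, 45, n_subj)        # gold-standard frame (from vGRF)

    # One row per subject: ankle, knee, hip acceleration curves chained.
    X = np.hstack([rng.normal(size=(n_subj, n_frames)) for _ in range(3)])
    for i, frame in enumerate(true_fs):
        X[i, int(frame)] += 8.0                  # impact transient (ankle block)

    scores = PCA(n_components=10).fit_transform(X)
    pred = cross_val_predict(LinearRegression(), scores, true_fs,
                             cv=KFold(10, shuffle=True, random_state=0))
    err_ms = np.abs(pred - true_fs) * 5          # at 200 Hz, 1 frame = 5 ms
    print(f"median CV timing error: {np.median(err_ms):.1f} ms")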

  4. Damage detection methodology under variable load conditions based on strain field pattern recognition using FBGs, nonlinear principal component analysis, and clustering techniques

    Science.gov (United States)

    Sierra-Pérez, Julián; Torres-Arredondo, M.-A.; Alvarez-Montoya, Joham

    2018-01-01

    Structural health monitoring consists of using sensors integrated within structures together with algorithms to perform load monitoring, damage detection, damage location, damage size and severity, and prognosis. One possibility is to use strain sensors to infer structural integrity by comparing patterns in the strain field between the pristine and damaged conditions. In previous works, the authors have demonstrated that it is possible to detect small defects based on strain field pattern recognition by using robust machine learning techniques. They have focused on methodologies based on principal component analysis (PCA) and on the development of several unfolding and standardization techniques, which allow dealing with multiple load conditions. However, before a real implementation of this approach in engineering structures, changes in the strain field due to conditions different from damage occurrence need to be isolated. Since load conditions may vary in most engineering structures and promote significant changes in the strain field, it is necessary to implement novel techniques for uncoupling such changes from those produced by damage occurrence. A damage detection methodology based on optimal baseline selection (OBS) by means of clustering techniques is presented. The methodology includes the use of hierarchical nonlinear PCA as a nonlinear modeling technique in conjunction with Q and nonlinear-T 2 damage indices. The methodology is experimentally validated using strain measurements obtained by 32 fiber Bragg grating sensors bonded to an aluminum beam under dynamic bending loads and simultaneously submitted to variations in its pitch angle. The results demonstrated the capability of the methodology for clustering data according to 13 different load conditions (pitch angles), performing the OBS and detecting six different damages induced in a cumulative way. The proposed methodology showed a true positive rate of 100% and a false positive rate of 1.28% for a
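    The two indices named above are standard in PCA-based monitoring and can be sketched directly: Q is the energy of a new strain snapshot that the baseline model cannot reconstruct, and T2 is its Mahalanobis-type distance inside the model subspace. The baseline data, sensor count and induced "damage" below are synthetic assumptions.

    # Q (residual) and T^2 (in-model) damage indices from a baseline PCA.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(6)
    baseline = rng.normal(size=(200, 32))            # pristine strain field, 32 FBGs
    pca = PCA(n_components=5).fit(baseline)

    def damage_indices(x):
        t = pca.transform(x[None, :])[0]             # scores
        x_hat = pca.inverse_transform(t[None, :])[0] # model reconstruction
        q = np.sum((x - x_hat) ** 2)                 # off-model residual energy
        t2 = np.sum(t ** 2 / pca.explained_variance_)
        return q, t2

    healthy = rng.normal(size=32)
    damaged = healthy.copy()
    damaged[10:14] += 1.5                            # local strain redistribution
    print("healthy (Q, T2):", np.round(damage_indices(healthy), 2))
    print("damaged (Q, T2):", np.round(damage_indices(damaged), 2))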

  5. Comparison of Spares Logistics Analysis Techniques for Long Duration Human Spaceflight

    Science.gov (United States)

    Owens, Andrew; de Weck, Olivier; Mattfeld, Bryan; Stromgren, Chel; Cirillo, William

    2015-01-01

    As the durations and distances involved in human exploration missions increase, the logistics associated with the repair and maintenance becomes more challenging. Whereas the operation of the International Space Station (ISS) depends upon regular resupply from the Earth, this paradigm may not be feasible for future missions. Longer mission durations result in higher probabilities of component failures as well as higher uncertainty regarding which components may fail, and longer distances from Earth increase the cost of resupply as well as the speed at which the crew can abort to Earth in the event of an emergency. As such, mission development efforts must take into account the logistics requirements associated with maintenance and spares. Accurate prediction of the spare parts demand for a given mission plan and how that demand changes as a result of changes to the system architecture enables full consideration of the lifecycle cost associated with different options. In this paper, we utilize a range of analysis techniques - Monte Carlo, semi-Markov, binomial, and heuristic - to examine the relationship between the mass of spares and probability of loss of function related to the Carbon Dioxide Removal System (CRS) for a notional, simplified mission profile. The Exploration Maintainability Analysis Tool (EMAT), developed at NASA Langley Research Center, is utilized for the Monte Carlo analysis. We discuss the implications of these results and the features and drawbacks of each method. In particular, we identify the limitations of heuristic methods for logistics analysis, and the additional insights provided by more in-depth techniques. We discuss the potential impact of system complexity on each technique, as well as their respective abilities to examine dynamic events. This work is the first step in an effort that will quantitatively examine how well these techniques handle increasingly more complex systems by gradually expanding the system boundary.
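    Of the techniques compared, the binomial/Poisson approach is the simplest to sketch: if failures of a component arrive at a constant rate, the probability that a stock of k spares covers the mission is a cumulative Poisson sum. The failure rate and mission duration below are illustrative, not values from the study.

    # P(enough spares) for a Poisson failure process.
    from math import exp, factorial

    def prob_covered(failures_per_year, mission_years, spares):
        """Probability that the number of failures does not exceed `spares`."""
        lam = failures_per_year * mission_years
        return sum(exp(-lam) * lam**n / factorial(n) for n in range(spares + 1))

    # Hypothetical CO2-removal assembly: 0.8 failures/year, 3-year mission.
    for k in range(6):
        print(f"{k} spares -> P(no loss of function) = {prob_covered(0.8, 3.0, k):.3f}")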

  6. A first application of independent component analysis to extracting structure from stock returns.

    Science.gov (United States)

    Back, A D; Weigend, A S

    1997-08-01

    This paper explores the application of a signal processing technique known as independent component analysis (ICA) or blind source separation to multivariate financial time series such as a portfolio of stocks. The key idea of ICA is to linearly map the observed multivariate time series into a new space of statistically independent components (ICs). We apply ICA to three years of daily returns of the 28 largest Japanese stocks and compare the results with those obtained using principal component analysis. The results indicate that the estimated ICs fall into two categories, (i) infrequent large shocks (responsible for the major changes in the stock prices), and (ii) frequent smaller fluctuations (contributing little to the overall level of the stocks). We show that the overall stock price can be reconstructed surprisingly well by using a small number of thresholded weighted ICs. In contrast, when using shocks derived from principal components instead of independent components, the reconstructed price is less similar to the original one. ICA is shown to be a potentially powerful method of analyzing and understanding driving mechanisms in financial time series. The application to portfolio optimization is described in Chin and Weigend (1998).
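    A sketch of the reconstruction experiment on simulated fat-tailed returns: FastICA extracts the components, everything but the largest shocks is thresholded away, and an approximate price path is rebuilt from the remaining weighted ICs.

    # Thresholded-IC price reconstruction on simulated returns.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(7)
    n_days, n_stocks = 750, 28
    returns = 0.01 * rng.standard_t(df=4, size=(n_days, n_stocks))  # fat tails

    ica = FastICA(n_components=n_stocks, whiten="unit-variance",
                  max_iter=1000, random_state=0)
    S = ica.fit_transform(returns)                   # independent components

    # Keep only the large shocks in each component; zero the rest.
    S_big = np.where(np.abs(S) > 3 * S.std(axis=0), S, 0.0)
    approx = ica.inverse_transform(S_big)            # rebuilt returns

    stock = 0
    price_true = np.cumsum(returns[:, stock])
    price_hat = np.cumsum(approx[:, stock])
    print("corr(true, reconstructed price):",
          round(np.corrcoef(price_true, price_hat)[0, 1], 2))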

  7. A multi-dimensional functional principal components analysis of EEG data.

    Science.gov (United States)

    Hasenstab, Kyle; Scheffler, Aaron; Telesca, Donatello; Sugar, Catherine A; Jeste, Shafali; DiStefano, Charlotte; Şentürk, Damla

    2017-09-01

    The electroencephalography (EEG) data created in event-related potential (ERP) experiments have a complex high-dimensional structure. Each stimulus presentation, or trial, generates an ERP waveform which is an instance of functional data. The experiments are made up of sequences of multiple trials, resulting in longitudinal functional data and moreover, responses are recorded at multiple electrodes on the scalp, adding an electrode dimension. Traditional EEG analyses involve multiple simplifications of this structure to increase the signal-to-noise ratio, effectively collapsing the functional and longitudinal components by identifying key features of the ERPs and averaging them across trials. Motivated by an implicit learning paradigm used in autism research in which the functional, longitudinal, and electrode components all have critical interpretations, we propose a multidimensional functional principal components analysis (MD-FPCA) technique which does not collapse any of the dimensions of the ERP data. The proposed decomposition is based on separation of the total variation into subject and subunit level variation which are further decomposed in a two-stage functional principal components analysis. The proposed methodology is shown to be useful for modeling longitudinal trends in the ERP functions, leading to novel insights into the learning patterns of children with Autism Spectrum Disorder (ASD) and their typically developing peers as well as comparisons between the two groups. Finite sample properties of MD-FPCA are further studied via extensive simulations. © 2017, The International Biometric Society.

  8. Improvement of Binary Analysis Components in Automated Malware Analysis Framework

    Science.gov (United States)

    2017-02-21

    AFRL-AFOSR-JP-TR-2017-0018. Improvement of Binary Analysis Components in Automated Malware Analysis Framework; Keiji Takeda, Keio University; final report covering 26 May 2015 to 25 Nov 2016. The project improves binary analysis components of a framework that analyzes malicious software (malware) with minimum human interaction; the system autonomously analyzes malware samples by analyzing the malware binary program.

  9. Response spectrum analysis of coupled structural response to a three component seismic disturbance

    International Nuclear Information System (INIS)

    Boulet, J.A.M.; Carley, T.G.

    1977-01-01

    The work discussed herein is a comparison and evaluation of several response spectrum analysis (RSA) techniques as applied to the same structural model with seismic excitation having three spatial components. The structural model includes five lumped masses (floors) connected by four elastic members. The base is supported by three translational springs and two horizontal torsional springs. In general, the mass center and shear center of a building floor are distinct locations. Hence, inertia forces, which act at the mass center, induce twisting in the structure. Through this induced torsion, the lateral (x and y) displacements of the mass elements are coupled. The ground motion components used for this study are artificial earthquake records generated from recorded accelerograms by a spectrum modification technique. The accelerograms have response spectra which are compatible with U.S. NRC Regulatory Guide 1.60. Lagrange's equations of motion for the system were written in matrix form and uncoupled with the modal matrix. Numerical integration (fourth-order Runge-Kutta) of the resulting modal equations produced time histories of system displacements in response to simultaneous application of three orthogonal components of ground motion, and displacement response spectra for each modal coordinate in response to each of the three ground motion components. Five different RSA techniques were used to combine the spectral displacements and the modal matrix to give approximations of the maximum system displacements. These approximations were then compared with the maximum system displacements taken from the time histories. The RSA techniques used are the method of absolute sums, the square root of the sum of the squares, the double sum approach, the method of closely spaced modes, and Lin's method
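    Two of the combination rules compared above take one line each, given the per-mode peak responses from the spectra; the modal peak values below are illustrative.

    # Absolute-sum vs. SRSS combination of peak modal displacements.
    import numpy as np

    peaks = np.array([0.82, 0.35, 0.21, 0.09, 0.04])   # per-mode peaks (illustrative)

    abs_sum = np.sum(np.abs(peaks))        # absolute sum: conservative upper bound
    srss = np.sqrt(np.sum(peaks ** 2))     # square root of the sum of the squares

    print(f"absolute-sum estimate: {abs_sum:.3f}")
    print(f"SRSS estimate:         {srss:.3f}")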

  10. Machine learning of frustrated classical spin models. I. Principal component analysis

    Science.gov (United States)

    Wang, Ce; Zhai, Hui

    2017-10-01

    This work aims at determining whether artificial intelligence can recognize a phase transition without prior human knowledge. If this were successful, it could be applied to, for instance, analyzing data from the quantum simulation of unsolved physical models. Toward this goal, we first need to apply the machine learning algorithm to well-understood models and see whether the outputs are consistent with our prior knowledge, which serves as the benchmark for this approach. In this work, we feed the computer data generated by the classical Monte Carlo simulation of the XY model on frustrated triangular and union-jack lattices, which has two order parameters and exhibits two phase transitions. We show that the outputs of the principal component analysis agree very well with our understanding of the different orders in the different phases, and the temperature dependences of the major components detect the nature and the locations of the phase transitions. Our work offers promise for using machine learning techniques to study sophisticated statistical models, and our results can be further improved by using principal component analysis with kernel tricks and the neural network method.

  11. In-situ measurement of mechanical properties of structural components using cyclic ball indentation technique

    International Nuclear Information System (INIS)

    Chatterjee, S.; Madhusoodanan, K.; Panwar, Sanjay; Rupani, B.B.

    2007-01-01

    Material properties of components change during service due to environmental conditions. Measurement of the mechanical properties of components is important for assessing their fitness for service. In many instances, it is not possible to remove sizable samples from the component for measurement in the laboratory, so an in-situ technique for measuring mechanical properties has great significance in such cases. One non-destructive method that can be adopted for in-situ application is based on the cyclic ball indentation technique. It involves multiple indentation cycles (at the same penetration location) on a metallic surface by a spherical indenter. Each cycle consists of indentation, partial unload and reload sequences. Commercial systems are presently available for performing indentation tests on structural components in limited applications, but there is a genuine need for a remotely operable, compact in-situ property measurement system. Considering the importance of such applications, the Reactor Engineering Division of BARC has developed an In-situ Property Measurement System (IProMS), which can be used for in-situ measurement of the mechanical properties of a flat or tubular component. This paper highlights the basic theory of measurement, qualification tests on IProMS and results from tests done on flat specimens and a tubular component. (author)

  12. Non-destructive test of lock actuator component using neutron radiography technique

    International Nuclear Information System (INIS)

    Juliyanti; Setiawan; Sutiarso

    2012-01-01

    Non-destructive testing of a lock actuator using the neutron radiography technique has been carried out. The lock actuator is a mechanical system controlled by a central lock module, consisting of an electronic circuit which drives the lock actuator to open and lock the vehicle door accordingly. The non-destructive test using neutron radiography was performed to identify the type of defect present by comparing a broken actuator with a brand new one. The method used to test the lock actuator component is the film (direct) method. The results show that the radiography procedure complied with the ASTM standard for neutron radiography, with a background density of 2.2; 7 lines and 3 holes were seen in the sensitivity indicator (SI), and quite good image quality was obtained. In the broken actuator it can be seen that the isolator part which separates the coils has melted. This non-destructive test using the neutron radiography technique is thus able to detect, at an early stage, the type of defect inside the lock actuator without dismantling it. (author)

  13. Principal components

    NARCIS (Netherlands)

    Hallin, M.; Hörmann, S.; Piegorsch, W.; El Shaarawi, A.

    2012-01-01

    Principal Components are probably the best known and most widely used of all multivariate analysis techniques. The essential idea consists in performing a linear transformation of the observed k-dimensional variables in such a way that the new variables are vectors of k mutually orthogonal components.

  14. Airborne electromagnetic data levelling using principal component analysis based on flight line difference

    Science.gov (United States)

    Zhang, Qiong; Peng, Cong; Lu, Yiming; Wang, Hao; Zhu, Kaiguang

    2018-04-01

    A novel technique is developed to level airborne geophysical data using principal component analysis based on flight-line differences. In this paper, flight-line differencing is introduced to enhance the features of the levelling error in airborne electromagnetic (AEM) data and to improve the correlation between pseudo tie lines. Thus we apply levelling to the flight-line difference data instead of directly to the original AEM data. Pseudo tie lines are selected distributed across the profile direction, avoiding anomalous regions. Since the levelling errors of the selected pseudo tie lines show high correlations, principal component analysis is applied to extract the local levelling errors by low-order principal component reconstruction. Furthermore, we can obtain the levelling errors of the original AEM data through inverse differencing after spatial interpolation. This levelling method requires neither flying tie lines nor designing a levelling fitting function. The effectiveness of this method is demonstrated by the levelling results of survey data, compared with the results from tie-line levelling and flight-line correlation levelling.
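    A single-channel sketch of the idea: difference adjacent flight lines (cancelling geology common to both), reconstruct the correlated levelling error from the leading principal component of the difference data, integrate back to per-line corrections, and subtract. The grid size and error magnitudes are synthetic assumptions, and a plain SVD stands in for the full pseudo-tie-line procedure.

    # Levelling by low-order PC reconstruction of flight-line differences.
    import numpy as np

    rng = np.random.default_rng(8)
    n_lines, n_fid = 40, 300
    signal = np.tile(np.sin(np.linspace(0, 6, n_fid)), (n_lines, 1))   # geology
    level_err = rng.normal(scale=0.5, size=(n_lines, 1)) * np.ones((1, n_fid))
    data = signal + level_err + 0.05 * rng.normal(size=(n_lines, n_fid))

    diff = np.diff(data, axis=0)                     # flight-line differences
    U, S, Vt = np.linalg.svd(diff, full_matrices=False)
    err_diff = (U[:, :1] * S[:1]) @ Vt[:1]           # leading-component part

    # Integrate the differences back to per-line corrections (line 0 fixed).
    corr = np.vstack([np.zeros((1, n_fid)), np.cumsum(err_diff, axis=0)])
    levelled = data - corr
    print("line-to-line std before:", np.diff(data, axis=0).std().round(3))
    print("line-to-line std after: ", np.diff(levelled, axis=0).std().round(3))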

  15. Component-Level Electronic-Assembly Repair (CLEAR) Spacecraft Circuit Diagnostics by Analog and Complex Signature Analysis

    Science.gov (United States)

    Oeftering, Richard C.; Wade, Raymond P.; Izadnegahdar, Alain

    2011-01-01

    The Component-Level Electronic-Assembly Repair (CLEAR) project at the NASA Glenn Research Center is aimed at developing technologies that will enable space-flight crews to perform in situ component-level repair of electronics on Moon and Mars outposts, where there is no existing infrastructure for logistics spares. These technologies must provide effective repair capabilities yet meet the payload and operational constraints of space facilities. Effective repair depends on a diagnostic capability that is versatile but easy to use by crew members that have limited training in electronics. CLEAR studied two techniques that involve extensive precharacterization of "known good" circuits to produce graphical signatures that provide an easy-to-use comparison method to quickly identify faulty components. Analog Signature Analysis (ASA) allows relatively rapid diagnostics of complex electronics by technicians with limited experience. Because of frequency limits and the growing dependence on broadband technologies, ASA must be augmented with other capabilities. To meet this challenge while preserving ease of use, CLEAR proposed an alternative called Complex Signature Analysis (CSA). Tests of ASA and CSA were used to compare capabilities and to determine if the techniques provided an overlapping or complementary capability. The results showed that the methods are complementary.

  16. Optimized inspection techniques and structural analysis in lifetime management

    International Nuclear Information System (INIS)

    Aguado, M.T.; Marcelles, I.

    1993-01-01

    Preserving the option of extending the service lifetime of a nuclear power plant beyond its normal design lifetime requires correct remaining-lifetime management from the very beginning of plant operation. The methodology used in plant remaining-lifetime management is essentially based on the use of standard inspections, surveillance and monitoring programs, and calculations such as thermal-stress and fracture mechanics analyses. The inspection techniques should be continuously optimized in order to detect and size existing defects with the highest possible degree of accuracy. The information obtained during inspection is combined with the historical data of the components (design, quality, operation, maintenance, and transients) and with the results of destructive testing, fracture mechanics and thermal fatigue analyses. These data are used to estimate the remaining lifetime of nuclear power plant components, systems and structures with the highest possible degree of accuracy. The use of this methodology allows component repairs and replacements to be reduced or avoided and increases the safety levels and availability of the nuclear power plant. Use of this strategy avoids the need for heavy investments at the end of the licensing period

  17. Techniques of production and analysis of polarized synchrotron radiation

    International Nuclear Information System (INIS)

    Mills, D.M.

    1992-01-01

    The use of the unique polarization properties of synchrotron radiation in the hard x-ray spectral region (E > 3 keV) is becoming increasingly important to many synchrotron radiation researchers. The radiation emitted from bending magnets and conventional (planar) insertion devices (IDs) is highly linearly polarized in the plane of the particle's orbit. Elliptically polarized x-rays can also be obtained by going off axis on a bending magnet source, albeit with considerable loss of flux. The polarization properties of synchrotron radiation can be further tailored to the researcher's specific needs through the use of specialized insertion devices such as helical and crossed undulators and asymmetrical wigglers. Even with the possibility of producing a specific polarization, there is still the need to develop x-ray optical components which can manipulate the polarization for both analysis and further modification of the polarization state. A survey of techniques for producing and analyzing both linearly and circularly polarized x-rays will be presented, with emphasis on those techniques which rely on single-crystal optical components

  18. Infrared and millimeter waves v.14 millimeter components and techniques, pt.V

    CERN Document Server

    Button, Kenneth J

    1985-01-01

    Infrared and Millimeter Waves, Volume 14: Millimeter Components and Techniques, Part V is concerned with millimeter-wave guided propagation and integrated circuits. In addition to millimeter-wave planar integrated circuits and subsystems, this book covers transducer configurations and integrated-circuit techniques, antenna arrays, optoelectronic devices, and tunable gyrotrons. Millimeter-wave gallium arsenide (GaAs) IMPATT diodes are also discussed. This monograph is comprised of six chapters and begins with a description of millimeter-wave integrated-circuit transducers, focusing on vario

  19. Determining the optimal number of independent components for reproducible transcriptomic data analysis.

    Science.gov (United States)

    Kairov, Ulykbek; Cantini, Laura; Greco, Alessandro; Molkenov, Askhat; Czerwinska, Urszula; Barillot, Emmanuel; Zinovyev, Andrei

    2017-09-11

    Independent Component Analysis (ICA) is a method that models gene expression data as an action of a set of statistically independent hidden factors. The output of ICA depends on a fundamental parameter: the number of components (factors) to compute. The optimal choice of this parameter, related to determining the effective data dimension, remains an open question in the application of blind source separation techniques to transcriptomic data. Here we address the question of optimizing the number of statistically independent components in the analysis of transcriptomic data for reproducibility of the components in multiple runs of ICA (within the same or within varying effective dimensions) and in multiple independent datasets. To this end, we introduce ranking of independent components based on their stability in multiple ICA computation runs and define a distinguished number of components (Most Stable Transcriptome Dimension, MSTD) corresponding to the point of the qualitative change of the stability profile. Based on a large body of data, we demonstrate that a sufficient number of dimensions is required for biological interpretability of the ICA decomposition and that the most stable components with ranks below MSTD have more chances to be reproduced in independent studies compared to the less stable ones. At the same time, we show that a transcriptomics dataset can be reduced to a relatively high number of dimensions without losing the interpretability of ICA, even though higher dimensions give rise to components driven by small gene sets. We suggest a protocol of ICA application to transcriptomics data with a possibility of prioritizing components with respect to their reproducibility that strengthens the biological interpretation. Computing too few components (much less than MSTD) is not optimal for interpretability of the results. The components ranked within MSTD range have more chances to be reproduced in independent studies.
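
    A hedged sketch of the stability-ranking idea (not the authors' released code, and without the varying-dimension analysis): ICA is run several times with different seeds, and each component of a reference run is scored by its best absolute correlation with the components of the other runs; an elbow in the sorted scores suggests the MSTD.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def stability_scores(X, n_components, n_runs=10):
        """Score components of a reference ICA run by their best absolute
        correlation with components recovered in repeated runs."""
        runs = []
        for seed in range(n_runs):
            ica = FastICA(n_components=n_components, random_state=seed,
                          max_iter=1000)
            runs.append(ica.fit_transform(X).T)   # components x samples
        ref, scores = runs[0], []
        for comp in ref:
            matches = [np.abs(np.corrcoef(comp, run)[0, 1:]).max()
                       for run in runs[1:]]
            scores.append(np.mean(matches))       # high = reproducible
        return sorted(scores, reverse=True)       # look for the elbow
    ```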

  20. Registration of dynamic dopamine D2 receptor images using principal component analysis

    International Nuclear Information System (INIS)

    Acton, P.D.; Ell, P.J.; Pilowsky, L.S.; Brammer, M.J.; Suckling, J.

    1997-01-01

    This paper describes a novel technique for registering a dynamic sequence of single-photon emission tomography (SPET) dopamine D2 receptor images, using principal component analysis (PCA). Conventional methods for registering images, such as count difference and correlation coefficient algorithms, fail to take into account the dynamic nature of the data, resulting in large systematic errors when registering time-varying images. However, by using principal component analysis to extract the temporal structure of the image sequence, misregistration can be quantified by examining the distribution of eigenvalues. The registration procedures were tested using a computer-generated dynamic phantom derived from a high-resolution magnetic resonance image of a realistic brain phantom. Each method was also applied to clinical SPET images of dopamine D2 receptors, using the ligands iodine-123 iodobenzamide and iodine-123 epidepride, to investigate the influence of misregistration on kinetic modelling parameters and the binding potential. The PCA technique gave highly significant improvements in registration of the 123I-epidepride scans. The PCA method produced data of much greater quality for subsequent kinetic modelling, with an improvement of nearly 50% in the χ² of the fit to the compartmental model, and provided superior quality registration of particularly difficult dynamic sequences. (orig.)

  1. Nonlinear fitness-space-structure adaptation and principal component analysis in genetic algorithms: an application to x-ray reflectivity analysis

    International Nuclear Information System (INIS)

    Tiilikainen, J; Tilli, J-M; Bosund, V; Mattila, M; Hakkarainen, T; Airaksinen, V-M; Lipsanen, H

    2007-01-01

    Two novel genetic algorithms implementing principal component analysis and an adaptive nonlinear fitness-space-structure technique are presented and compared with conventional algorithms in x-ray reflectivity analysis. Principal component analysis based on Hessian or interparameter covariance matrices is used to rotate a coordinate frame. The nonlinear adaptation applies nonlinear estimates to reshape the probability distribution of the trial parameters. The simulated x-ray reflectivity of a realistic model of a periodic nanolaminate structure was used as a test case for the fitting algorithms. The novel methods had significantly faster convergence and less stagnation than conventional non-adaptive genetic algorithms. The covariance approach needs no additional curve calculations compared with conventional methods, and it had better convergence properties than the computationally expensive Hessian approach. These new algorithms can also be applied to other fitting problems where tight interparameter dependence is present

  2. Advanced analysis technique for the evaluation of linear alternators and linear motors

    Science.gov (United States)

    Holliday, Jeffrey C.

    1995-01-01

    A method for the mathematical analysis of linear alternator and linear motor devices and designs is described, and an example of its use is included. The technique seeks to surpass other methods of analysis by including more rigorous treatment of phenomena normally omitted or coarsely approximated such as eddy braking, non-linear material properties, and power losses generated within structures surrounding the device. The technique is broadly applicable to linear alternators and linear motors involving iron yoke structures and moving permanent magnets. The technique involves the application of Amperian current equivalents to the modeling of the moving permanent magnet components within a finite element formulation. The resulting steady state and transient mode field solutions can simultaneously account for the moving and static field sources within and around the device.

  3. Detecting 3-D rotational motion and extracting target information from the principal component analysis of scatterer range histories

    CSIR Research Space (South Africa)

    Nel, W

    2009-10-01

    The method also makes it possible to estimate the 3-D position of scatterers as a by-product of the analysis. The technique is based on principal component analysis of accurate scatterer range histories and is demonstrated only in simulation. Future research should focus on practical application.

  4. Processing of spectral X-ray data with principal components analysis

    CERN Document Server

    Butler, A P H; Cook, N J; Butzer, J; Schleich, N; Tlustos, L; Scott, N; Grasset, R; de Ruiter, N; Anderson, N G

    2011-01-01

    The goal of the work was to develop a general method for processing spectral x-ray image data. Principal component analysis (PCA) is a well understood technique for multivariate data analysis and so was investigated. To assess this method, spectral (multi-energy) computed tomography (CT) data were obtained using a Medipix2 detector in a MARS-CT (Medipix All Resolution System). PCA was able to separate bone (calcium) from two elements with k-edges in the X-ray spectrum used (iodine and barium) within a mouse. This has potential clinical application in dual-energy CT systems and in future Medipix3-based spectral imaging, where up to eight energies can be recorded simultaneously with excellent energy resolution. (c) 2010 Elsevier B.V. All rights reserved.
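
    The core step, treating each energy bin as a variable and each voxel as an observation, can be sketched as follows. This is an illustrative reconstruction of the idea, not the MARS-CT processing chain; array shapes and names are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def spectral_pca(volume_bins, n_out=3):
        """volume_bins: (n_energy_bins, nx, ny) stack of registered images.
        Returns n_out principal-component score images in which materials
        with distinct spectral responses (e.g. Ca vs. I/Ba) separate."""
        n_bins = volume_bins.shape[0]
        X = volume_bins.reshape(n_bins, -1).T     # voxels x energy bins
        scores = PCA(n_components=n_out).fit_transform(X)
        return scores.T.reshape(n_out, *volume_bins.shape[1:])
    ```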

  5. Bayesian Independent Component Analysis

    DEFF Research Database (Denmark)

    Winther, Ole; Petersen, Kaare Brandt

    2007-01-01

    In this paper we present an empirical Bayesian framework for independent component analysis. The framework provides estimates of the sources, the mixing matrix and the noise parameters, and is flexible with respect to choice of source prior and the number of sources and sensors. Inside the engine...

  6. Spectroscopic techniques in the study of human tissues and their components. Part I: IR spectroscopy.

    Science.gov (United States)

    Olsztyńska-Janus, Sylwia; Szymborska-Małek, Katarzyna; Gąsior-Głogowska, Marlena; Walski, Tomasz; Komorowska, Małgorzata; Witkiewicz, Wojciech; Pezowicz, Celina; Kobielarz, Magdalena; Szotek, Sylwia

    2012-01-01

    Among the methods currently used to monitor human tissues and their components, many types of research can be distinguished; these include spectroscopic techniques. The advantages of these techniques are the small amount of sample required, the rapid recording of spectra and, most importantly in the case of biological samples, the fact that no preparation of the tissues is required. In this work, vibrational spectroscopy (ATR-FTIR and Raman spectroscopy) is used. Studies are carried out on tissues (tendons, blood vessels, skin, red blood cells) and biological components (amino acids, proteins, DNA, plasma, and deposits).

  7. Key components of financial-analysis education for clinical nurses.

    Science.gov (United States)

    Lim, Ji Young; Noh, Wonjung

    2015-09-01

    In this study, we identified key components of financial-analysis education for clinical nurses. We used a literature review, focus group discussions, and a content validity index survey to develop key components of financial-analysis education. First, a wide range of references were reviewed, and 55 financial-analysis education components were gathered. Second, two focus group discussions were held; the participants were 11 nurses who had worked for more than 3 years in a hospital, and nine components were agreed upon. Third, 12 professionals, including professors, a nurse executive, nurse managers, and an accountant, participated in the content validity index survey. Finally, six key components of financial-analysis education were selected. These key components were as follows: understanding the need for financial analysis, introduction to financial analysis, reading and implementing balance sheets, reading and implementing income statements, understanding the concepts of financial ratios, and the interpretation and practice of financial ratio analysis. The results of this study will be used to develop an education program to increase financial-management competency among clinical nurses. © 2015 Wiley Publishing Asia Pty Ltd.

  8. Feature extraction through parallel Probabilistic Principal Component Analysis for heart disease diagnosis

    Science.gov (United States)

    Shah, Syed Muhammad Saqlain; Batool, Safeera; Khan, Imran; Ashraf, Muhammad Usman; Abbas, Syed Hussnain; Hussain, Syed Adnan

    2017-09-01

    Automatic diagnosis of human diseases is mostly achieved through decision support systems. The performance of these systems is mainly dependent on the selection of the most relevant features. This becomes harder when the dataset contains missing values for some features. Probabilistic Principal Component Analysis (PPCA) has a reputation for dealing with the problem of missing attribute values. This research presents a methodology which uses the results of medical tests as input, extracts a reduced-dimensional feature subset and provides a diagnosis of heart disease. The proposed methodology extracts high-impact features in a new projection by using Probabilistic Principal Component Analysis (PPCA). PPCA extracts the projection vectors which contribute the highest covariance, and these projection vectors are used to reduce the feature dimension. The selection of projection vectors is done through Parallel Analysis (PA). The feature subset with the reduced dimension is provided to radial basis function (RBF) kernel based Support Vector Machines (SVM). The RBF-based SVM serves the purpose of classification into two categories, i.e., Heart Patient (HP) and Normal Subject (NS). The proposed methodology is evaluated through accuracy, specificity and sensitivity over three UCI datasets, i.e., Cleveland, Switzerland and Hungarian. The statistical results achieved through the proposed technique are presented in comparison to the existing research, showing its impact. The proposed technique achieved an accuracy of 82.18%, 85.82% and 91.30% for the Cleveland, Hungarian and Switzerland datasets, respectively.
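
    The Parallel Analysis step used to select the projection vectors can be sketched as follows (a generic Horn's parallel analysis, assumed comparable to the PA step in the paper): components are kept only if their eigenvalues exceed those obtained from random data of the same shape.

    ```python
    import numpy as np

    def parallel_analysis(X, n_iter=100, quantile=95):
        """Return the number of components whose eigenvalues exceed the
        chosen percentile of eigenvalues from random data (Horn's PA)."""
        n, k = X.shape
        real = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
        rng = np.random.default_rng(0)
        rand = np.empty((n_iter, k))
        for i in range(n_iter):
            R = rng.normal(size=(n, k))
            rand[i] = np.linalg.eigvalsh(np.corrcoef(R, rowvar=False))[::-1]
        return int(np.sum(real > np.percentile(rand, quantile, axis=0)))
    ```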

  9. Euler principal component analysis

    NARCIS (Netherlands)

    Liwicki, Stephan; Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    Principal Component Analysis (PCA) is perhaps the most prominent learning tool for dimensionality reduction in pattern recognition and computer vision. However, the ℓ2-norm employed by standard PCA is not robust to outliers. In this paper, we propose a kernel PCA method for fast and robust PCA,

  10. Identifying the Component Structure of Satisfaction Scales by Nonlinear Principal Components Analysis

    NARCIS (Netherlands)

    Manisera, M.; Kooij, A.J. van der; Dusseldorp, E.

    2010-01-01

    The component structure of 14 Likert-type items measuring different aspects of job satisfaction was investigated using nonlinear Principal Components Analysis (NLPCA). NLPCA allows for analyzing these items at an ordinal or interval level. The participants were 2066 workers from five types of social

  11. Identifying constituents in commercial gasoline using Fourier transform-infrared spectroscopy and independent component analysis.

    Science.gov (United States)

    Pasadakis, Nikos; Kardamakis, Andreas A

    2006-09-25

    A new method is proposed that enables the identification of five refinery fractions present in commercial gasoline mixtures using infrared spectroscopic analysis. The data analysis and interpretation were carried out based on independent component analysis (ICA) and spectral similarity techniques. The FT-IR spectra of the gasoline constituents were determined using the ICA method exclusively from the spectra of their mixtures, as a blind separation procedure, i.e., assuming the spectra of the constituents to be unknown. The identity of the constituents was subsequently determined using similarity measures commonly employed in spectral library searches against the spectra of the constituent components. The high correlation scores obtained in the identification of the constituents indicate that the developed method can be employed as a rapid and effective tool in quality control, fingerprinting or forensic applications, where gasoline constituents are suspected.
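
    The two stages, blind unmixing followed by library matching, can be sketched roughly as below. This is an assumed reconstruction: `mixtures` (one FT-IR spectrum per blend) and `library` (a dict of reference spectra) are hypothetical inputs, and at least as many mixture spectra as sources are assumed.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def identify_constituents(mixtures, library, n_sources=5):
        """Unmix blend spectra with ICA, then name each recovered source
        by its best cosine match against a reference spectral library."""
        ica = FastICA(n_components=n_sources, random_state=0, max_iter=2000)
        sources = ica.fit_transform(mixtures.T).T   # sources x wavenumbers

        def cosine(a, b):
            return abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

        return [max(library, key=lambda name: cosine(src, library[name]))
                for src in sources]
    ```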

  12. Analysis Method for Integrating Components of Product

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jun Ho [Inzest Co. Ltd, Seoul (Korea, Republic of); Lee, Kun Sang [Kookmin Univ., Seoul (Korea, Republic of)

    2017-04-15

    This paper presents some of the methods used to integrate the parts constituting a product. A new relation function concept and its structure are introduced to analyze the relationships among component parts. This relation function carries three types of information, which can be used to establish a relation function structure. The relation function structure of the analysis criteria was established to analyze and present the data. The priority components determined by the analysis criteria can be integrated. The analysis criteria were divided based on their number and orientation, as well as their direct or indirect characteristic features. This paper presents a design algorithm for component integration. This algorithm was applied to actual products, and the components inside the products were integrated. The proposed algorithm was then used in a study to improve bicycle brake discs. As a result, an improved product consistent with the relation function structure was actually created.

  13. Analysis Method for Integrating Components of Product

    International Nuclear Information System (INIS)

    Choi, Jun Ho; Lee, Kun Sang

    2017-01-01

    This paper presents some of the methods used to integrate the parts constituting a product. A new relation function concept and its structure are introduced to analyze the relationships among component parts. This relation function carries three types of information, which can be used to establish a relation function structure. The relation function structure of the analysis criteria was established to analyze and present the data. The priority components determined by the analysis criteria can be integrated. The analysis criteria were divided based on their number and orientation, as well as their direct or indirect characteristic features. This paper presents a design algorithm for component integration. This algorithm was applied to actual products, and the components inside the products were integrated. The proposed algorithm was then used in a study to improve bicycle brake discs. As a result, an improved product consistent with the relation function structure was actually created.

  14. Physicochemical properties of different corn varieties by principal components analysis and cluster analysis

    International Nuclear Information System (INIS)

    Zeng, J.; Li, G.; Sun, J.

    2013-01-01

    Principal components analysis and cluster analysis were used to investigate the properties of different corn varieties. The chemical compositions and some properties of corn flour processed by dry milling were determined. The results showed that the chemical compositions and physicochemical properties differed significantly among the twenty-six corn varieties. The quality of corn flour was described by five principal components from the principal component analysis, among which the contribution of starch pasting properties was the most important, accounting for 48.90%. The twenty-six corn varieties could be classified into four groups by cluster analysis. The consistency between principal components analysis and cluster analysis indicated that multivariate analyses are feasible in the study of corn variety properties. (author)

  15. Principal components analysis in clinical studies.

    Science.gov (United States)

    Zhang, Zhongheng; Castelló, Adela

    2017-09-01

    In multivariate analysis, independent variables are usually correlated with each other, which can introduce multicollinearity into regression models. One approach to solving this problem is to apply principal components analysis (PCA) to these variables. This method uses an orthogonal transformation to represent sets of potentially correlated variables by principal components (PC) that are linearly uncorrelated. PCs are ordered so that the first PC has the largest possible variance, and only some components are selected to represent the correlated variables. As a result, the dimension of the variable space is reduced. This tutorial illustrates how to perform PCA in the R environment; the example is a simulated dataset in which two PCs are responsible for the majority of the variance in the data. Furthermore, the visualization of PCA is highlighted.
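
    The tutorial itself works in R; an equivalent minimal sketch in Python shows the same idea of replacing correlated predictors with uncorrelated PC scores before regression (data here are simulated for illustration).

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    x1 = rng.normal(size=300)
    x2 = x1 + 0.1 * rng.normal(size=300)              # nearly collinear with x1
    y = 2 * x1 - x2 + rng.normal(size=300)
    X = np.column_stack([x1, x2])

    scores = PCA(n_components=2).fit_transform(X)     # linearly uncorrelated PCs
    model = LinearRegression().fit(scores[:, :1], y)  # keep the dominant PC
    print(model.score(scores[:, :1], y))              # R^2, multicollinearity removed
    ```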

  16. Development of non-destructive examination techniques for CFC-metal joints in annular geometry and their application to the manufacturing of plasma-facing components

    International Nuclear Information System (INIS)

    Di Pietro, E.; Visca, E.; Orsini, A.; Sacchetti, M.; Borruto, T.M.R.; Varone, P.; Vesprini, R.

    1995-01-01

    The design of plasma-facing components for ITER, as for any of the envisaged next-step machines, relies heavily on the use of brazed junctions to couple armour materials to the heat sink and cooling tubes. Moreover, the typical number of brazed components and the envisaged effects of local overheating due to failure in a single brazed junction stress the importance of having a set of NDE techniques developed that can ensure the flawless quality of the joint. The qualification and application of two NDE techniques (ultrasonic and thermographic analysis) for inspection of CFC-to-metal joints is described with particular regard to the annular geometry typical of macroblock/monoblock solutions for divertor high-heat-flux components. The results of the eddy current inspection are not reported. The development has been focused specifically on the joint between carbon-fiber composite and TZM molybdenum alloy; techniques for the production of reference defect samples have been devised and a set of reference defect samples produced. The comparative results of the NDE inspections are reported and discussed, also on the basis of the destructive examination of the samples. The nature and size of relevant and detectable defects are discussed together with hints for a possible NDE strategy for divertor high-heat-flux components

  17. Optimization of armour geometry and bonding techniques for tungsten-armoured high heat flux components

    International Nuclear Information System (INIS)

    Giniyatulin, R.N.; Komarov, V.L.; Kuzmin, E.G.; Makhankov, A.N.; Mazul, I.V.; Yablokov, N.A.; Zhuk, A.N.

    2002-01-01

    Joining tungsten to a copper-based cooling structure and optimizing the armour geometry are the major aspects in the development of tungsten-armoured plasma facing components (PFC). Fabrication techniques and high heat flux (HHF) tests of tungsten-armoured components have to reflect different PFC designs and an acceptable manufacturing cost. The authors present recent results of tungsten-armoured mock-up development based on manufacturing and HHF tests. Two aspects were investigated: selection of the armour geometry and examination of tungsten-copper bonding techniques. Brazing and casting tungsten-copper bonding techniques were used in small mock-ups. The mock-ups with armour tiles (20x5x10, 10x10x10, 20x20x10, 27x27x10) mm³ in dimensions were tested with cyclic heat fluxes in the range of 5-20 MW/m², with the number of thermal cycles varying from hundreds to several thousands for each mock-up. The results of the tests show the applicability of different geometries and different bonding techniques to the corresponding heat loadings. A medium-scale mock-up 0.6 m in length was manufactured and tested. HHF tests of the medium-scale mock-up have demonstrated the applicability of the applied bonding techniques and armour geometry for full-scale PFC manufacturing

  18. Progress Towards Improved Analysis of TES X-ray Data Using Principal Component Analysis

    Science.gov (United States)

    Busch, S. E.; Adams, J. S.; Bandler, S. R.; Chervenak, J. A.; Eckart, M. E.; Finkbeiner, F. M.; Fixsen, D. J.; Kelley, R. L.; Kilbourne, C. A.; Lee, S.-J.

    2015-01-01

    The traditional method of applying a digital optimal filter to measure X-ray pulses from transition-edge sensor (TES) devices does not achieve the best energy resolution when the signals have a highly non-linear response to energy, or the noise is non-stationary during the pulse. We present an implementation of a method to analyze X-ray data from TESs, which is based upon principal component analysis (PCA). Our method separates the X-ray signal pulse into orthogonal components that have the largest variance. We typically recover pulse height, arrival time, differences in pulse shape, and the variation of pulse height with detector temperature. These components can then be combined to form a representation of pulse energy. An added value of this method is that by reporting information on more descriptive parameters (as opposed to a single number representing energy), we generate a much more complete picture of the pulse received. Here we report on progress in developing this technique for future implementation on X-ray telescopes. We used an 55Fe source to characterize Mo/Au TESs. On the same dataset, the PCA method recovers a spectral resolution that is better by a factor of two than that achievable with digital optimal filters.

  19. F4E studies for the electromagnetic analysis of ITER components

    Energy Technology Data Exchange (ETDEWEB)

    Testoni, P., E-mail: pietro.testoni@f4e.europa.eu [Fusion for Energy, Torres Diagonal Litoral B3, c/ Josep Plá n.2, Barcelona (Spain); Cau, F.; Portone, A. [Fusion for Energy, Torres Diagonal Litoral B3, c/ Josep Plá n.2, Barcelona (Spain); Albanese, R. [Associazione EURATOM/ENEA/CREATE, DIETI, Università Federico II di Napoli, Napoli (Italy); Juirao, J. [Numerical Analysis TEChnologies S.L. (NATEC), c/ Marqués de San Esteban, 52 Entlo D Gijón (Spain)

    2014-10-15

    Highlights: • Several ITER components have been analyzed from the electromagnetic point of view. • Categorization of DINA load cases is described. • VDEs, MDs and MFD have been studied. • Integral values of force and moment components versus time have been computed for all the ITER components under study. - Abstract: Fusion for Energy (F4E) is involved in a significant number of activities in the area of electromagnetic analysis in support of ITER general design and EU in-kind procurement. In particular, several ITER components (vacuum vessel, blanket shield modules and first wall panels, test blanket modules, ICRH antenna) are being analyzed from the electromagnetic point of view. In this paper we give an updated description of our main activities, highlighting the main assumptions, objectives, results and conclusions. The plasma instabilities we consider, typically disruptions and VDEs, can be both toroidally symmetric and asymmetric. This implies that, depending on the specific component and loading conditions, the FE models we use span from a 10° sector up to 360° of the ITER machine. The techniques for simulating the electromagnetic phenomena involved in a disruption, and the postprocessing of the results to obtain the loads acting on the structures, are described. Finally, we summarize the typical loads applied to different components and give a critical view of the results.

  20. Reliability Analysis of Load-Sharing K-out-of-N System Considering Component Degradation

    Directory of Open Access Journals (Sweden)

    Chunbo Yang

    2015-01-01

    The K-out-of-N configuration is a typical form of redundancy technique used to improve system reliability, where at least K out of N components must work for successful operation of the system. When the components degrade, more components are needed to meet the system requirement, which means that the value of K has to increase. Current reliability analysis methods overestimate the reliability, because using a constant K ignores the degradation effect. In a load-sharing system with degrading components, the workload shared by each surviving component increases after a random component failure, resulting in a higher failure rate and an increased performance degradation rate. This paper proposes a method combining a tampered failure rate model with a performance degradation model to analyze the reliability of a load-sharing K-out-of-N system with degrading components. The proposed method considers the value of K as a variable which is derived from the performance degradation model, and the load-sharing effect is evaluated by the tampered failure rate model. A Monte Carlo simulation procedure is used to estimate the discrete probability distribution of K. The case of a solar panel is studied in this paper, and the result shows that the reliability considering component degradation is less than that ignoring component degradation.
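
    The load-sharing mechanism lends itself to a compact Monte Carlo sketch. The snippet below is illustrative only: it uses a fixed k and toy parameters, whereas the paper derives a time-varying K from its performance degradation model; the load-dependent rate stands in for the tampered failure rate.

    ```python
    import numpy as np

    def system_lifetime(n=6, k=3, base_rate=0.01, total_load=1.0, rng=None):
        """Simulate one lifetime of a load-sharing k-out-of-n system: each
        failure raises the load (and failure rate) on the survivors."""
        rng = rng or np.random.default_rng()
        t, alive = 0.0, n
        while alive >= k:
            load = total_load / alive                  # load per survivor
            rate = alive * base_rate * (1 + 5 * load)  # tampered overall rate
            t += rng.exponential(1 / rate)             # time to next failure
            alive -= 1
        return t                                       # fewer than k survive

    times = [system_lifetime(rng=np.random.default_rng(i)) for i in range(10_000)]
    print(np.mean(times))                              # estimated mean time to failure
    ```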

  1. Technique for ultrasonic testing of austenitic steel weldments of NPP components

    International Nuclear Information System (INIS)

    Lantukh, V.M.; Grebennik, V.S.; Kordinov, E.V.; Kesler, N.A.; Shchedrin, I.F.

    1987-01-01

    The special literature on ultrasonic testing of weldments of austenitic steel is analysed. A technique for ultrasonic testing of the ring and longitudinal butt welded joints of NPP components without removal of the reinforcing bead is described. The special transducer design and fabrication practice are described. Results of an experimental check of the developed testing technology and of its application during NPP erection and operation are presented. Results of ultrasonic and X-ray testing are compared

  2. X-ray fluorescence beamline at LNLS: components and some associated techniques

    International Nuclear Information System (INIS)

    Perez, Carlos A.; Radtke, Martin; Perez, Carlos; Tolentino, Helio; Vicentin, Flavio; Sanchez, Hector Jorge; Perez, Roberto D.

    1997-01-01

    In this work a general description of the Total Reflection X-Ray Fluorescence (TXRF) and X-Ray Fluorescence Microprobe (XRFM) techniques is presented. Components, equipment and experimental stations of the x-ray fluorescence beamline are described with regard to the techniques mentioned above. Results from simulations of a pair of bent mirrors in a Kirkpatrick-Baez configuration are shown. The simulations were performed with the SHADOW program. (author)

  3. How Many Separable Sources? Model Selection In Independent Components Analysis

    Science.gov (United States)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  4. Principal Component Analysis-Based Pattern Analysis of Dose-Volume Histograms and Influence on Rectal Toxicity

    International Nuclear Information System (INIS)

    Soehn, Matthias; Alber, Markus; Yan Di

    2007-01-01

    Purpose: The variability of dose-volume histogram (DVH) shapes in a patient population can be quantified using principal component analysis (PCA). We applied this to rectal DVHs of prostate cancer patients and investigated the correlation of the PCA parameters with late bleeding. Methods and Materials: PCA was applied to the rectal wall DVHs of 262 patients, who had been treated with a four-field box, conformal adaptive radiotherapy technique. The correlated changes in the DVH pattern were revealed as 'eigenmodes,' which were ordered by their importance to represent data set variability. Each DVH is uniquely characterized by its principal components (PCs). The correlation of the first three PCs and chronic rectal bleeding of Grade 2 or greater was investigated with uni- and multivariate logistic regression analyses. Results: Rectal wall DVHs in four-field conformal RT can primarily be represented by the first two or three PCs, which describe ∼94% or 96% of the DVH shape variability, respectively. The first eigenmode models the total irradiated rectal volume; thus, PC1 correlates to the mean dose. Mode 2 describes the interpatient differences of the relative rectal volume in the two- or four-field overlap region. Mode 3 reveals correlations of volumes with intermediate doses (∼40-45 Gy) and volumes with doses >70 Gy; thus, PC3 is associated with the maximal dose. According to univariate logistic regression analysis, only PC2 correlated significantly with toxicity. However, multivariate logistic regression analysis with the first two or three PCs revealed an increased probability of bleeding for DVHs with more than one large PC. Conclusions: PCA can reveal the correlation structure of DVHs for a patient population as imposed by the treatment technique and provide information about its relationship to toxicity. It proves useful for augmenting normal tissue complication probability modeling approaches

  5. Structured Performance Analysis for Component Based Systems

    OpenAIRE

    Salmi , N.; Moreaux , Patrice; Ioualalen , M.

    2012-01-01

    The Component Based System (CBS) paradigm is now largely used to design software systems. In addition, performance and behavioural analysis remains a required step for the design and construction of efficient systems. This is especially the case for CBS, which involve interconnected components running concurrent processes. This paper proposes a compositional method for modeling and structured performance analysis of CBS. Modeling is based on Stochastic Well-formed...

  6. Principal Component Analysis Based Two-Dimensional (PCA-2D) Correlation Spectroscopy: PCA Denoising for 2D Correlation Spectroscopy

    International Nuclear Information System (INIS)

    Jung, Young Mee

    2003-01-01

    Principal component analysis based two-dimensional (PCA-2D) correlation analysis is applied to FTIR spectra of a polystyrene/methyl ethyl ketone/toluene solution mixture during solvent evaporation. A substantial amount of artificial noise was added to the experimental data to demonstrate the practical noise-suppressing benefit of the PCA-2D technique. 2D correlation analysis of the data matrix reconstructed from PCA loading vectors and scores successfully extracted only the most important features of synchronicity and asynchronicity, without interference from noise or insignificant minor components. 2D correlation spectra constructed with only one principal component yield a strictly synchronous response with no discernible asynchronous features, while those involving at least two principal components generated meaningful asynchronous 2D correlation spectra. Deliberate manipulation of the rank of the reconstructed data matrix, by choosing the appropriate number and type of PCs, yields potentially more refined 2D correlation spectra
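
    The denoising step, rebuilding the spectra matrix from its leading PCs before forming the synchronous and asynchronous maps, can be sketched as follows (a generic illustration of PCA-reconstructed 2D correlation, not the author's code).

    ```python
    import numpy as np

    def pca_2d_correlation(spectra, n_pc=2):
        """spectra: (n_perturbations, n_wavenumbers). Reconstruct from the
        first n_pc principal components, then compute synchronous and
        asynchronous 2D correlation spectra (Hilbert-Noda transform)."""
        dyn = spectra - spectra.mean(axis=0)          # dynamic spectra
        U, s, Vt = np.linalg.svd(dyn, full_matrices=False)
        dyn = (U[:, :n_pc] * s[:n_pc]) @ Vt[:n_pc]    # noise-suppressed
        m = dyn.shape[0]
        sync = dyn.T @ dyn / (m - 1)                  # synchronous map
        j, k = np.indices((m, m))
        with np.errstate(divide='ignore'):
            noda = np.where(j == k, 0.0, 1.0 / (np.pi * (k - j)))
        asyn = dyn.T @ noda @ dyn / (m - 1)           # asynchronous map
        return sync, asyn
    ```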

  7. Determination of the optimal number of components in independent components analysis.

    Science.gov (United States)

    Kassouf, Amine; Jouan-Rimbaud Bouveresse, Delphine; Rutledge, Douglas N

    2018-03-01

    Independent components analysis (ICA) may be considered as one of the most established blind source separation techniques for the treatment of complex data sets in analytical chemistry. Like other similar methods, the determination of the optimal number of latent variables, in this case, independent components (ICs), is a crucial step before any modeling. Therefore, validation methods are required in order to decide about the optimal number of ICs to be used in the computation of the final model. In this paper, three new validation methods are formally presented. The first one, called Random_ICA, is a generalization of the ICA_by_blocks method. Its specificity resides in the random way of splitting the initial data matrix into two blocks, and then repeating this procedure several times, giving a broader perspective for the selection of the optimal number of ICs. The second method, called KMO_ICA_Residuals is based on the computation of the Kaiser-Meyer-Olkin (KMO) index of the transposed residual matrices obtained after progressive extraction of ICs. The third method, called ICA_corr_y, helps to select the optimal number of ICs by computing the correlations between calculated proportions and known physico-chemical information about samples, generally concentrations, or between a source signal known to be present in the mixture and the signals extracted by ICA. These three methods were tested using varied simulated and experimental data sets and compared, when necessary, to ICA_by_blocks. Results were relevant and in line with expected ones, proving the reliability of the three proposed methods. Copyright © 2017 Elsevier B.V. All rights reserved.
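
    In the spirit of ICA_by_blocks and its Random_ICA generalization, a minimal sketch (assumed, not the authors' implementation) splits the samples randomly into two blocks, extracts ICs from each, and measures how well the two sets of unmixing directions agree for a candidate number of ICs; repeating over random splits and scanning the model size locates where agreement collapses.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def block_agreement(X, n_components, rng=None):
        """Mean best absolute correlation between ICs extracted from two
        random halves of the rows of X (near 1 = reproducible ICs)."""
        rng = rng or np.random.default_rng(0)
        idx = rng.permutation(X.shape[0])
        half = X.shape[0] // 2
        S = []
        for block in (X[idx[:half]], X[idx[half:]]):
            ica = FastICA(n_components=n_components, random_state=0,
                          max_iter=1000)
            ica.fit(block)
            S.append(ica.components_)   # unmixing rows in variable space
        corr = np.abs(np.corrcoef(S[0], S[1])[:n_components, n_components:])
        return corr.max(axis=1).mean()
    ```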

  8. [Ideas and methods of two-dimensional zebrafish model combined with chromatographic techniques in high-throughput screening of active anti-osteoporosis components of traditional Chinese medicines].

    Science.gov (United States)

    Wei, Ying-Jie; Jing, Li-Jun; Zhan, Yang; Sun, E; Jia, Xiao-Bin

    2014-05-01

    To break through the restrictions imposed by the evaluation model and by the number of compounds that can be screened, the two-dimensional zebrafish model is combined with chromatographic techniques to establish a new method for the high-throughput screening of active anti-osteoporosis components. Based on the research group's related studies and the relevant foreign literature, and on the facts that the zebrafish osteoporosis model can efficiently evaluate activity, that the zebrafish metabolism model can efficiently enrich metabolites, and that chromatographic techniques can efficiently separate and analyze components of traditional Chinese medicines, we propose that the inherent combination of the three methods can be expected to efficiently decode the in vivo and in vitro efficacious anti-osteoporosis materials of traditional Chinese medicines. The method makes the enrichment, separation and analysis of components of traditional Chinese medicines, particularly trace components and metabolites, and the screening of anti-osteoporosis activity simple and efficient. It fully reflects the fact that the efficacious materials of traditional Chinese medicines comprise both original components and metabolites, with the characteristics of "multi-components, multi-targets and integral effect", and it provides new ideas and methods for the early and rapid discovery of active anti-osteoporosis components of traditional Chinese medicines.

  9. Towards automatic analysis of dynamic radionuclide studies using principal-components factor analysis

    International Nuclear Information System (INIS)

    Nigran, K.S.; Barber, D.C.

    1985-01-01

    A method is proposed for the automatic analysis of dynamic radionuclide studies using the mathematical technique of principal-components factor analysis. This method is considered as a possible alternative to the conventional manual regions-of-interest method widely used. The method emphasises the importance of introducing a priori information into the analysis about the physiology of at least one of the functional structures in a study. Information is added by using suitable mathematical models to describe the underlying physiological processes. A single physiological factor is extracted representing the particular dynamic structure of interest. Two spaces, 'study space, S' and 'theory space, T', are defined in forming the concept of the intersection of spaces. A one-dimensional intersection space is computed. An example from a dynamic 99mTc DTPA kidney study is used to demonstrate the principle inherent in the proposed method. The method requires no correction for the blood background activity, which is necessary when processing by the manual method. Careful isolation of the kidney by means of a region of interest is not required. The method is therefore less prone to operator influence and can be automated. (author)

  10. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.

  11. Fault feature extraction method based on local mean decomposition Shannon entropy and improved kernel principal component analysis model

    Directory of Open Access Journals (Sweden)

    Jinlu Sheng

    2016-07-01

    To effectively extract the typical features of a bearing, a new method relating local mean decomposition Shannon entropy and an improved kernel principal component analysis model is proposed. First, features are extracted by a time-frequency domain method, local mean decomposition, and the Shannon entropy is used to process the separated product functions, so as to obtain the original features. Because the extracted features still contain superfluous information, a nonlinear multi-feature processing technique, kernel principal component analysis, is introduced to fuse the characters. The kernel principal component analysis is improved by a weight factor. The extracted characteristic features were input into a Morlet wavelet kernel support vector machine to obtain a classification model of the bearing running state, by which the running state was identified. Both test and actual cases were analyzed.
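
    The feature chain can be sketched as below, with the LMD step assumed to be performed elsewhere (implementing LMD itself is beyond a short example): Shannon entropy is computed for each product function, and the entropy features of many records are fused with kernel PCA. The paper's weight-factor improvement is omitted.

    ```python
    import numpy as np
    from sklearn.decomposition import KernelPCA

    def shannon_entropy(pf, bins=64):
        """Shannon entropy of one product function's amplitude distribution."""
        hist, _ = np.histogram(pf, bins=bins, density=True)
        p = hist[hist > 0]
        p = p / p.sum()
        return float(-(p * np.log(p)).sum())

    def fuse_features(records_pfs, n_components=3):
        """records_pfs: one list of product functions (1-D arrays) per record,
        each record assumed to yield the same number of product functions."""
        F = np.array([[shannon_entropy(pf) for pf in pfs]
                      for pfs in records_pfs])
        return KernelPCA(n_components=n_components, kernel='rbf').fit_transform(F)
    ```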

  12. Principle Component Analysis with Incomplete Data: A simulation of R pcaMethods package in Constructing an Environmental Quality Index with Missing Data

    Science.gov (United States)

    Missing data is a common problem in the application of statistical techniques. In principal component analysis (PCA), a technique for dimensionality reduction, incomplete data points are either discarded or imputed using interpolation methods. Such approaches are less valid when ...

  13. Development of a simple method for classifying the degree of importance of components in nuclear power plants using probabilistic analysis technique

    International Nuclear Information System (INIS)

    Shimada, Yoshio; Miyazaki, Takamasa

    2006-01-01

    In order to analyze large amounts of trouble information from overseas nuclear power plants, it is necessary to select the information that is significant in terms of both safety and reliability. In this research, a method was developed for efficiently and simply classifying the degrees of importance of components in terms of safety and reliability, paying attention to the root-cause components appearing in the information. For safety, the reactor core damage frequency (CDF), used in the probabilistic analysis of a reactor, was adopted. For reliability, the automatic plant trip probability (APTP), used in the probabilistic analysis of automatic reactor trips, was adopted. These two aspects were reflected in the development of criteria for classifying the degrees of importance of components. By applying these criteria, a method of quantitatively and simply judging the significance of trouble information from overseas nuclear power plants was developed. (author)

  14. Independent component analysis using prior information for signal detection in a functional imaging system of the retina

    NARCIS (Netherlands)

    Barriga, E. Simon; Pattichis, Marios; Ts’o, Dan; Abramoff, Michael; Kardon, Randy; Kwon, Young; Soliz, Peter

    2011-01-01

    Independent component analysis (ICA) is a statistical technique that estimates a set of sources mixed by an unknown mixing matrix using only a set of observations. For this purpose, the only assumption is that the sources are statistically independent. In many applications, some information about

  15. Using Quantitative Data Analysis Techniques for Bankruptcy Risk Estimation for Corporations

    Directory of Open Access Journals (Sweden)

    Ştefan Daniel ARMEANU

    2012-01-01

    Diversification of the methods and techniques for the quantification and management of risk has led to the development of many mathematical models, a large part of which focus on measuring bankruptcy risk for businesses. In financial analysis there are many indicators that can be used to assess the risk of bankruptcy of enterprises, but to make an assessment the number of indicators must be reduced, which can be achieved through principal component, cluster and discriminant analysis techniques. In this context, the article aims to build a scoring function used to identify bankrupt companies, using a sample of companies listed on the Bucharest Stock Exchange.

  16. Principal component analysis of solar flares in the soft X-ray flux

    International Nuclear Information System (INIS)

    Teuber, D.L.; Reichmann, E.J.; Wilson, R.M.; National Aeronautics and Space Administration, Huntsville, AL

    1979-01-01

    Principal component analysis is a technique for extracting the salient features from a mass of data. It applies, in particular, to the analysis of nonstationary ensembles. Computational schemes for this task require the evaluation of eigenvalues of matrices. We have used EISPACK Matrix Eigen System Routines on an IBM 360-75 to analyze full-disk proportional-counter data from the X-ray event analyzer (X-REA) which was part of the Skylab ATM/S-056 experiment. Empirical orthogonal functions have been derived for events in the soft X-ray spectrum between 2.5 and 20 Å during different time frames between June 1973 and January 1974. Results indicate that approximately 90% of the cumulative power of each analyzed flare is contained in the largest eigenvector. The first two largest eigenvectors are sufficient for an empirical curve-fit through the raw data and a characterization of solar flares in the soft X-ray flux. Power spectra of the two largest eigenvectors reveal a previously reported periodicity of approximately 5 min. Similar signatures were also obtained from flares that are synchronized on maximum pulse-height when subjected to a principal component analysis. (orig.)

  17. Multilevel sparse functional principal component analysis.

    Science.gov (United States)

    Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S

    2014-01-29

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations the proposed method is able to discover dominating modes of variations and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.

  18. Signal-dependent independent component analysis by tunable mother wavelets

    International Nuclear Information System (INIS)

    Seo, Kyung Ho

    2006-02-01

    The objective of this study is to improve standard independent component analysis when applied to real-world signals. Independent component analysis starts from the assumption that signals from different physical sources are statistically independent. But real-world signals such as EEG, ECG, MEG, and fMRI signals are not perfectly statistically independent. By definition, standard independent component analysis algorithms are not able to estimate statistically dependent sources, that is, when the assumption of independence does not hold. Therefore some preprocessing stage is needed before independent component analysis. This paper starts from the simple intuition that source signals wavelet-transformed by a 'well-tuned' mother wavelet will be simplified sufficiently, so that source separation will show better results. The tuning between the source signal and a tunable mother wavelet was performed by the correlation coefficient method. The gamma component of a raw EEG signal was set as the target signal, and the wavelet transform was performed with the tuned mother wavelet and with standard mother wavelets. Simulation results for these wavelets are shown

  19. Surface analysis the principal techniques

    CERN Document Server

    Vickerman, John C

    2009-01-01

    This completely updated and revised second edition of Surface Analysis: The Principal Techniques deals with the characterisation and understanding of the outer layers of substrates: how they react, look and function, all of which are of interest to surface scientists. Within this comprehensive text, experts in each analysis area introduce the theory and practice of the principal techniques that have shown themselves to be effective in both basic research and in applied surface analysis. Examples of analysis are provided to facilitate the understanding of this topic and to show readers how they c

  20. EXAFS and principal component analysis : a new shell game

    International Nuclear Information System (INIS)

    Wasserman, S.

    1998-01-01

    The use of principal component (factor) analysis in the analysis of EXAFS spectra is described. The components derived from EXAFS spectra share mathematical properties with the original spectra. As a result, the abstract components can be analyzed using standard EXAFS methodology to yield the bond distances and other coordination parameters. The number of components that must be analyzed is usually less than the number of original spectra. The method is demonstrated using a series of spectra from aqueous solutions of uranyl ions

  1. Measuring processes with opto-electronic semiconductor components

    International Nuclear Information System (INIS)

    1985-01-01

    This is a report on the state of commercially available semiconductor emitters and detectors for the visible, near-, mid- and far-infrared ranges. A survey is given of distance, speed, flow and length measuring techniques using opto-electronic components. Automatic focussing, the use of light barriers, non-contact temperature measurement, spectroscopic gas, liquid and environmental measurement techniques, and gas analysis in medical technology illustrate further applications of the new components. The modern concept of guided radiation in optical fibres and its use in systems technology is briefly explained. (DG)

  2. Introduction. [usefulness of modern remote sensing techniques for studying components of California water resources

    Science.gov (United States)

    Colwell, R. N.

    1973-01-01

    Since May 1970, personnel on several campuses of the University of California have been conducting investigations which seek to determine the usefulness of modern remote sensing techniques for studying various components of California's earth resources complex. Emphasis has been given to California's water resources as exemplified by the Feather River project and other aspects of the California Water Plan. This study is designed to consider in detail the supply, demand, and impact relationships. The specific geographic areas studied are the Feather River drainage in northern California, the Chino-Riverside Basin and Imperial Valley areas in southern California, and selected portions of the west side of San Joaquin Valley in central California. An analysis is also given on how an effective benefit-cost study of remote sensing in relation to California's water resources might best be made.

  3. Demonstration of statistical approaches to identify component's ageing by operational data analysis-A case study for the ageing PSA network

    International Nuclear Information System (INIS)

    Rodionov, Andrei; Atwood, Corwin L.; Kirchsteiger, Christian; Patrik, Milan

    2008-01-01

    The paper presents some results of a case study on the 'Demonstration of statistical approaches to identify component ageing by operational data analysis', which was performed in the frame of the EC JRC Ageing PSA Network. Several techniques (visual evaluation, nonparametric and parametric hypothesis tests) were proposed and applied in order to demonstrate the capabilities, advantages and limitations of statistical approaches for identifying component ageing from operational data. Engineering considerations are outside the scope of the present study

  4. Human reliability in non-destructive inspections of nuclear power plant components: modeling and analysis

    Energy Technology Data Exchange (ETDEWEB)

    Vasconcelos, Vanderley de; Soares, Wellington Antonio; Marques, Raíssa Oliveira; Silva Júnior, Silvério Ferreira da; Raso, Amanda Laureano, E-mail: vasconv@cdtn.br, E-mail: soaresw@cdtn.br, E-mail: raissaomarques@gmail.com, E-mail: silvasf@cdtn.br, E-mail: amandaraso@hotmail.com [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2017-07-01

    Non-destructive inspection (NDI) is one of the key elements in ensuring the quality of engineering systems and their safe use. NDI is a very complex task, during which inspectors have to rely on their sensory, perceptual, cognitive, and motor skills. It requires high vigilance, since it is often carried out on large components, over long periods of time, in hostile environments and restricted workspaces. A successful NDI requires careful planning, choice of appropriate NDI methods and inspection procedures, as well as qualified and trained inspection personnel. A failure of NDI to detect critical defects in safety-related components of nuclear power plants, for instance, may lead to catastrophic consequences for workers, the public and the environment. Therefore, ensuring that NDI methods are reliable and capable of detecting all critical defects is of utmost importance. Despite the increased use of automation in NDI, human inspectors, and thus human factors, still play an important role in NDI reliability. Human reliability is the probability of humans conducting specific tasks with satisfactory performance. Many techniques are suitable for modeling and analyzing human reliability in NDI of nuclear power plant components; among them, Failure Modes and Effects Analysis (FMEA) and THERP (Technique for Human Error Rate Prediction) can be highlighted. The application of these techniques is illustrated by an example of qualitative and quantitative studies to improve typical NDI of pipe segments of a core cooling system of a nuclear power plant by acting on human factors issues. (author)
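
    The quantitative core of the FMEA step can be sketched as a Risk Priority Number ranking; the failure modes and scores below are invented for illustration and do not come from the study:

    ```python
    # FMEA sketch: rank failure modes by RPN = severity x occurrence x detection
    # (each scored 1-10). Higher RPN -> higher priority for corrective action.
    failure_modes = [
        ("probe misplacement by inspector", 8, 5, 6),
        ("defect signal misinterpreted",    9, 4, 7),
        ("procedure step skipped",          7, 3, 4),
    ]
    for name, sev, occ, det in sorted(failure_modes,
                                      key=lambda m: m[1] * m[2] * m[3],
                                      reverse=True):
        print(f"RPN {sev * occ * det:3d}  {name}")
    ```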

  5. Human reliability in non-destructive inspections of nuclear power plant components: modeling and analysis

    International Nuclear Information System (INIS)

    Vasconcelos, Vanderley de; Soares, Wellington Antonio; Marques, Raíssa Oliveira; Silva Júnior, Silvério Ferreira da; Raso, Amanda Laureano

    2017-01-01

    Non-destructive inspection (NDI) is one of the key elements in ensuring the quality of engineering systems and their safe use. NDI is a very complex task, during which inspectors have to rely on their sensory, perceptual, cognitive, and motor skills. It requires high vigilance, since it is often carried out on large components, over long periods of time, in hostile environments and restricted workspaces. A successful NDI requires careful planning, choice of appropriate NDI methods and inspection procedures, as well as qualified and trained inspection personnel. A failure of NDI to detect critical defects in safety-related components of nuclear power plants, for instance, may lead to catastrophic consequences for workers, the public and the environment. Therefore, ensuring that NDI methods are reliable and capable of detecting all critical defects is of utmost importance. Despite the increased use of automation in NDI, human inspectors, and thus human factors, still play an important role in NDI reliability. Human reliability is the probability of humans conducting specific tasks with satisfactory performance. Many techniques are suitable for modeling and analyzing human reliability in NDI of nuclear power plant components; among them, Failure Modes and Effects Analysis (FMEA) and THERP (Technique for Human Error Rate Prediction) can be highlighted. The application of these techniques is illustrated by an example of qualitative and quantitative studies to improve typical NDI of pipe segments of a core cooling system of a nuclear power plant by acting on human factors issues. (author)

  6. Problems of stress analysis of fuelling machine head components

    International Nuclear Information System (INIS)

    Mathur, D.D.

    1975-01-01

    Problems in the stress analysis of fuelling machine head components are discussed. To fulfil their functional requirements, the components must take certain shapes whose stress problems cannot be matched to a catalogue of pre-determined solutions. Complex loading due to hydrostatic pressure, weight, moments and temperature gradients, coupled with the intricate shapes of the components, makes it difficult to arrive at satisfactory solutions in several areas. In particular, the analysis requirements of the magazine housing, end cover, gravloc clamps and centre support are highlighted. An experimental stress analysis programme together with a theoretical finite element analysis is perhaps the answer. (author)

  7. Detection and Characterization of Ground Displacement Sources from Variational Bayesian Independent Component Analysis of GPS Time Series

    Science.gov (United States)

    Gualandi, A.; Serpelloni, E.; Belardinelli, M. E.

    2014-12-01

    A critical point in the analysis of ground displacement time series is the development of data-driven methods that make it possible to discern and characterize the different sources that generate the observed displacements. A widely used multivariate statistical technique is Principal Component Analysis (PCA), which reduces the dimensionality of the data space while keeping most of the variance of the dataset explained. It reproduces the original data using a limited number of Principal Components, but it also shows some deficiencies. Indeed, PCA does not perform well in finding the solution to the so-called Blind Source Separation (BSS) problem, i.e. in recovering and separating the original sources that generated the observed data. This is mainly due to the assumption on which PCA relies: it looks for a new Euclidean space where the projected data are uncorrelated. Uncorrelatedness is usually not a strong enough condition, and it has been shown that the BSS problem can be tackled by requiring the components to be independent. Independent Component Analysis (ICA) is, in fact, another popular technique adopted to approach this problem, and it can be used in all those fields where PCA is also applied. An ICA approach enables us to explain the time series while imposing fewer constraints on the model, and to reveal anomalies in the data such as transient signals. However, the independence condition is not easy to impose, and some approximations are often necessary. To work around this problem, we use a variational Bayesian ICA (vbICA) method, which models the probability density function (pdf) of each source signal using a mixture of Gaussian distributions. This technique allows more flexibility in the description of the pdf of the sources, giving a more reliable estimate of them. Here we present the application of the vbICA technique to GPS position time series. First, we use vbICA on synthetic data that simulate a seismic cycle
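
    vbICA itself is not available in standard libraries, but the BSS step it addresses can be sketched with an off-the-shelf ICA; here three synthetic 'station' records mix a secular trend with a transient-like source, and FastICA recovers the two sources where PCA merely decorrelates:

    ```python
    # Illustrative only: FastICA stands in for the vbICA method of the abstract.
    import numpy as np
    from sklearn.decomposition import FastICA, PCA

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 10.0, 1000)
    sources = np.c_[0.5 * t, np.tanh(t - 5.0)]   # secular trend + transient
    A = rng.uniform(0.5, 1.5, size=(3, 2))       # mixing at 3 "stations"
    X = sources @ A.T + 0.02 * rng.standard_normal((1000, 3))

    S_ica = FastICA(n_components=2, random_state=0).fit_transform(X)
    S_pca = PCA(n_components=2).fit_transform(X)  # uncorrelated, not independent
    ```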

  8. Support vector machine and principal component analysis for microarray data classification

    Science.gov (United States)

    Astuti, Widi; Adiwijaya

    2018-03-01

    Cancer is a leading cause of death worldwide, although a significant proportion of cases can be cured if detected early. In recent decades, microarray technology has taken an important role in the diagnosis of cancer. Using data mining techniques, microarray data classification can improve the accuracy of cancer diagnosis compared to traditional techniques. Microarray data are characterized by small sample sizes and huge dimensionality, so the challenge for researchers is to provide classification solutions with high performance in both accuracy and running time. This research proposed the use of Principal Component Analysis (PCA) as a dimension reduction method along with a Support Vector Machine (SVM), optimized through its kernel function, as the classifier for microarray data. The proposed scheme was applied to seven data sets using 5-fold cross-validation, and then evaluation and analysis were conducted in terms of both accuracy and running time. The results showed that the scheme obtained 100% accuracy for the Ovarian and Lung Cancer data when the Linear and Cubic kernel functions were used. In terms of running time, PCA greatly reduced the running time for every data set.
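
    A minimal sketch of the proposed scheme, with random stand-in data in place of the seven microarray sets:

    ```python
    # PCA for dimension reduction feeding a kernel SVM, scored by 5-fold CV.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    X = rng.standard_normal((100, 2000))         # 100 samples, 2000 "genes"
    y = (X[:, :10].sum(axis=1) > 0).astype(int)  # labels driven by a few genes

    clf = make_pipeline(PCA(n_components=30), SVC(kernel="linear"))
    print(cross_val_score(clf, X, y, cv=5).mean())
    ```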

  9. Detailed Structural Analysis of Critical Wendelstein 7-X Magnet System Components

    International Nuclear Information System (INIS)

    Egorov, K.

    2006-01-01

    The Wendelstein 7-X (W7-X) stellarator experiment is presently under construction and assembly in Greifswald, Germany. The goal of the experiment is to verify that the stellarator magnetic confinement concept is a viable option for a fusion reactor. The complex W7-X magnet system requires a multi-level approach to structural analysis, for which two types of finite element models are used: firstly, global models having reasonably coarse meshes with a number of simplifications and assumptions, and secondly, local models with detailed meshes of critical regions and elements. The widely known sub-modelling technique, with boundary conditions extracted from the global models, is one approach to local analysis with high assessment efficiency. In particular, the winding pack (WP) of the magnet coils is simulated in the global model as a homogeneous orthotropic material with effective mechanical characteristics representing its real composite structure. This assumption allows the whole magnet system to be assessed in terms of general structural factors such as forces and moments on the support elements, displacements of the main components, and deformation and stress in the coil casings. In a second step, local models with a detailed description of the more critical WP zones are considered in order to analyze their internal components, such as conductor jackets and turn insulation. This paper provides an overview of local analyses of several critical W7-X magnet system components, with particular attention to the coil winding packs. (author)

  10. Application of fuzzy-MOORA method: Ranking of components for reliability estimation of component-based software systems

    Directory of Open Access Journals (Sweden)

    Zeeshan Ali Siddiqui

    2016-01-01

    Component-based software system (CBSS) development is an emerging discipline that promises to take software development into a new era. Just as hardware systems are presently constructed from kits of parts, software systems may also be assembled from components; it is more reliable to reuse software than to create it anew. It is the reliability of the glue code and of the individual components that contributes to the reliability of the overall system. Every component contributes to overall system reliability according to the number of times it is used (some components being of critical usage), known as the usage frequency of the component. The usage frequency decides the weight of each component, and according to these weights each component contributes to the overall reliability of the system. Therefore, a ranking of components may be obtained by analyzing their reliability impacts on the overall application. In this paper, we propose the application of fuzzy multi-objective optimization on the basis of ratio analysis (Fuzzy-MOORA). The method helps us find the most suitable alternative, a software component, from a set of available feasible alternatives. It is an accurate and easy-to-understand tool for solving multi-criteria decision-making problems that have imprecise and vague evaluation data. Through ratio analysis, the proposed method determines the most suitable alternative among all possible alternatives, and the dimensionless measurement accomplishes the ranking of components for estimating CBSS reliability in a non-subjective way. Finally, three case studies are shown to illustrate the use of the proposed technique.
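
    The ratio-analysis core of MOORA is easy to sketch on crisp scores (the paper's fuzzy variant additionally represents each score as a triangular fuzzy number before defuzzifying); the components and criteria below are invented:

    ```python
    # MOORA ratio system: vector-normalise each criterion, then score each
    # alternative as (sum of benefit criteria) - (sum of cost criteria).
    import numpy as np

    # rows: candidate components; columns: criteria (e.g. usage frequency, cost)
    X = np.array([[0.9, 0.3],
                  [0.6, 0.5],
                  [0.8, 0.7]])
    benefit = np.array([True, False])  # second criterion is a cost

    norm = X / np.sqrt((X ** 2).sum(axis=0))
    score = norm[:, benefit].sum(axis=1) - norm[:, ~benefit].sum(axis=1)
    print(np.argsort(-score))          # ranking, best component first
    ```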

  11. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    Science.gov (United States)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems, in the static as well as the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses an approach to face recognition based on wavelet decomposition combined with principal component analysis. Principal component analysis is chosen over the other algorithms for its relative simplicity, efficiency, and robustness. Face recognition means identifying a person from his facial features, and it resembles factor analysis in some sense, i.e. the extraction of the principal components of an image. Principal component analysis suffers from some drawbacks, mainly poor discriminatory power and, in particular, the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images in both the space and the frequency domain. The experimental results show that this face recognition method achieves a significant percentage improvement in recognition rate as well as better computational efficiency.
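
    A minimal sketch of the combination, assuming the optional PyWavelets package and random stand-in images: one level of 2-D wavelet decomposition shrinks each face to its low-frequency band before the PCA ('eigenface') step:

    ```python
    import numpy as np
    import pywt                      # PyWavelets, assumed installed
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(3)
    faces = rng.random((20, 64, 64))                 # 20 dummy 64x64 images

    # Keep only the LL (approximation) band: 64x64 -> 32x32 per image.
    approx = np.stack([pywt.dwt2(f, "haar")[0] for f in faces])
    X = approx.reshape(len(faces), -1)

    pca = PCA(n_components=10).fit(X)
    features = pca.transform(X)                      # compact face descriptors
    ```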

  12. Soil analysis. Modern instrumental technique

    International Nuclear Information System (INIS)

    Smith, K.A.

    1993-01-01

    This book complements traditional methods-of-analysis texts and the specialist monographs on individual instrumental techniques, which are usually not written with soil or plant analysis specifically in mind. The principles of the techniques are combined with discussions of sample preparation and matrix problems, and with critical reviews of applications in soil science and related disciplines. Individual chapters are processed separately for inclusion in the appropriate data bases.

  13. VLBI FOR GRAVITY PROBE B. IV. A NEW ASTROMETRIC ANALYSIS TECHNIQUE AND A COMPARISON WITH RESULTS FROM OTHER TECHNIQUES

    International Nuclear Information System (INIS)

    Lebach, D. E.; Ratner, M. I.; Shapiro, I. I.; Bartel, N.; Bietenholz, M. F.; Lederman, J. I.; Ransom, R. R.; Campbell, R. M.; Gordon, D.; Lestrade, J.-F.

    2012-01-01

    When very long baseline interferometry (VLBI) observations are used to determine the position or motion of a radio source relative to reference sources nearby on the sky, the astrometric information is usually obtained via (1) phase-referenced maps or (2) parametric model fits to measured fringe phases or multiband delays. In this paper, we describe a 'merged' analysis technique which combines some of the most important advantages of these other two approaches. In particular, our merged technique combines the superior model-correction capabilities of parametric model fits with the ability of phase-referenced maps to yield astrometric measurements of sources that are too weak to be used in parametric model fits. We compare the results from this merged technique with the results from phase-referenced maps and from parametric model fits in the analysis of astrometric VLBI observations of the radio-bright star IM Pegasi (HR 8703) and the radio source B2252+172 nearby on the sky. In these studies we use central-core components of radio sources 3C 454.3 and B2250+194 as our positional references. We obtain astrometric results for IM Peg with our merged technique even when the source is too weak to be used in parametric model fits, and we find that our merged technique yields astrometric results superior to the phase-referenced mapping technique. We used our merged technique to estimate the proper motion and other astrometric parameters of IM Peg in support of the NASA/Stanford Gravity Probe B mission.

  14. Probabilistic risk assessment course documentation. Volume 3. System reliability and analysis techniques, Session A - reliability

    International Nuclear Information System (INIS)

    Lofgren, E.V.

    1985-08-01

    This course in System Reliability and Analysis Techniques focuses on the quantitative estimation of reliability at the systems level. Various methods are reviewed, but the structure provided by the fault tree method is used as the basis for system reliability estimates. The principles of fault tree analysis are briefly reviewed. Contributors to system unreliability and unavailability are reviewed, models are given for quantitative evaluation, and the requirements for both generic and plant-specific data are discussed. Also covered are issues of quantifying component faults that relate to the systems context in which the components are embedded. All reliability terms are carefully defined. 44 figs., 22 tabs

  15. Condition monitoring with Mean field independent components analysis

    DEFF Research Database (Denmark)

    Pontoppidan, Niels Henrik; Sigurdsson, Sigurdur; Larsen, Jan

    2005-01-01

    We discuss condition monitoring based on mean field independent components analysis of acoustic emission energy signals. Within this framework it is possible to formulate a generative model that explains the sources, their mixing and also the noise statistics of the observed signals. By using a novelty approach we may detect unseen faulty signals as indeed faulty with high precision, even though the model learns only from normal signals. This is done by evaluating the likelihood that the model generated the signals and adapting a simple threshold for the decision. Acoustic emission energy signals from a large diesel engine are used to demonstrate this approach. The results show that mean field independent components analysis gives better fault detection than principal components analysis, while at the same time selecting a more compact model.

  16. Component reliability analysis for development of component reliability DB of Korean standard NPPs

    International Nuclear Information System (INIS)

    Choi, S. Y.; Han, S. H.; Kim, S. H.

    2002-01-01

    Reliability data for Korean NPPs that reflect plant-specific characteristics are necessary for PSA and risk-informed applications. We have performed a project to develop a component reliability DB and to calculate component reliability measures such as failure rate and unavailability. We have collected component operation data and failure/repair data for Korean standard NPPs. We have analyzed the failure data by developing a data analysis method that takes the domestic data situation into account, and we have then compared the reliability results with generic data for foreign NPPs.

  17. Sparse Principal Component Analysis in Medical Shape Modeling

    DEFF Research Database (Denmark)

    Sjöstrand, Karl; Stegmann, Mikkel Bille; Larsen, Rasmus

    2006-01-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach in which each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at components that are linear combinations of only a few of the original variables ... analysis in medicine. Results for three different data sets are given in relation to standard PCA and to sparse PCA obtained by simple thresholding of sufficiently small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA...
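
    The contrast with ordinary PCA can be sketched with scikit-learn's SparsePCA on stand-in data; sparse loadings set most coefficients exactly to zero, which is what eases interpretation of shape modes:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA, SparsePCA

    rng = np.random.default_rng(4)
    X = rng.standard_normal((50, 30))    # 50 shapes, 30 landmark coordinates

    dense = PCA(n_components=3).fit(X)
    sparse = SparsePCA(n_components=3, alpha=2.0, random_state=0).fit(X)

    print(np.count_nonzero(dense.components_))   # all 90 loadings nonzero
    print(np.count_nonzero(sparse.components_))  # far fewer nonzero loadings
    ```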

  18. Use of spectroscopic techniques for the chemical analysis of biomorphic silicon carbide ceramics

    International Nuclear Information System (INIS)

    Pavon, J.M. Cano; Alonso, E. Vereda; Cordero, M.T. Siles; Torres, A. Garcia de; Lopez-Cepero, J.M.

    2005-01-01

    Biomorphic silicon carbide ceramics are a new class of materials prepared by several complex processing steps, including pre-processing (shaping, drying, high-temperature pyrolysis in an inert atmosphere) and reaction with liquid silicon to obtain silicon carbide. The result of the industrial synthesis process (measured by the SiC content) must be evaluated by means of fast analytical methods. In the present work, several samples of biomorphic ceramics derived from wood are studied to evaluate the capability of different analytical techniques (XPS, LIBS, FT-IR, and also atomic spectroscopy applied to previously dissolved samples) for the analysis of these materials. XPS and LIBS give information about the major components, whereas XPS and FT-IR can be used to evaluate the SiC content. On the other hand, the use of atomic techniques (such as ICP-MS and ETA-AAS) is more adequate for the analysis of metal ions, especially at trace level. The properties of the ceramics depend decisively on the content of chemical elements. The major components found were C, Si, Al, S, B and Na in all cases. Prior dissolution of the samples was optimised by acid attack in an oven under microwave irradiation.

  19. Effects of interactive instructional techniques in a web-based peripheral nervous system component for human anatomy.

    Science.gov (United States)

    Allen, Edwin B; Walls, Richard T; Reilly, Frank D

    2008-02-01

    This study investigated the effects of interactive instructional techniques in a web-based peripheral nervous system (PNS) component of a first year medical school human anatomy course. Existing data from 9 years of instruction involving 856 students were used to determine (1) the effect of web-based interactive instructional techniques on written exam item performance and (2) differences between student opinions of the benefit level of five different types of interactive learning objects used. The interactive learning objects included Patient Case studies, review Games, Simulated Interactive Patients (SIP), Flashcards, and unit Quizzes. Exam item analysis scores were found to be significantly higher (p < 0.05) for students receiving the instructional treatment incorporating the web-based interactive learning objects than for students not receiving this treatment. Questionnaires using a five-point Likert scale were analysed to determine student opinion ratings of the interactive learning objects. Students reported favorably on the benefit level of all learning objects. Students rated the benefit level of the Simulated Interactive Patients (SIP) highest, and this rating was significantly higher (p < 0.05) than all other learning objects. This study suggests that web-based interactive instructional techniques improve student exam performance. Students indicated a strong acceptance of Simulated Interactive Patient learning objects.

  20. Laser/fluorescent dye flow visualization technique developed for system component thermal hydraulic studies

    International Nuclear Information System (INIS)

    Oras, J.J.; Kasza, K.E.

    1988-01-01

    A novel laser flow visualization technique is presented together with examples of its use in visualizing complex flow patterns and plans for its further development. This technique has been successfully used to study (1) the flow in a horizontal pipe subject to temperature transients, to view the formation and breakup of thermally stratified flow and to determine instantaneous velocity distributions in the same flow at various axial locations; (2) the discharge of a stratified pipe flow into a plenum exhibiting a periodic vortex pattern; and (3) the thermal-buoyancy-induced flow channeling on the shell side of a heat exchanger with glass tubes and shell. This application of the technique to heat exchangers is unique. The flow patterns deep within a large tube bundle can be studied under steady or transient conditions. This laser flow visualization technique constitutes a very powerful tool for studying single or multiphase flows in complex thermal system components

  1. Quantitative analysis of occluded gases in uranium dioxide pellets by the mass spectrometry technique

    International Nuclear Information System (INIS)

    Vega Bustillos, J.O.W.; Rodrigues, C.; Iyer, S.S.

    1981-05-01

    A quantitative analysis of the different components of the occluded gases (except water) in uranium dioxide pellets is attempted here. A high-temperature vacuum extraction system is employed for the liberation of the occluded gases and the determination of their total volume, and a mass spectrometric technique is employed for their qualitative and quantitative analysis. The UO2 pellets are placed in a graphite crucible and subjected to varying temperatures (1000 °C-1700 °C). The liberated gases are dehydrated and transferred to a measuring unit consisting essentially of a Toepler pump and a McLeod gauge. In this system the total volume of the gases liberated at NTP is determined with a sensitivity of 0.002 cm3/g of UO2. An aliquot of the liberated gas is introduced into a quadrupole mass spectrometer (VGA-100, Varian Corp.) for the determination of the different components of the gas. On the basis of the analysis, suggestions are made for the possible sources of these gas components. (Author)

  2. Comparison of cluster and principal component analysis techniques to derive dietary patterns in Irish adults.

    Science.gov (United States)

    Hearty, Aine P; Gibney, Michael J

    2009-02-01

    The aims of the present study were to examine and compare dietary patterns in adults using cluster and factor analyses, and to examine the effect of the format of the dietary variables on the pattern solutions (i.e. expressed as grams/day (g/d) of each food group or as the percentage contribution to total energy intake). Food intake data were derived from the North/South Ireland Food Consumption Survey 1997-9, a randomised cross-sectional study of 7 d recorded food and nutrient intakes of a representative sample of 1379 Irish adults aged 18-64 years. Cluster analysis was performed using the k-means algorithm, and principal component analysis (PCA) was used to extract dietary factors. Food data were reduced to thirty-three food groups. For cluster analysis, the most suitable format of the food-group variable was found to be the percentage contribution to energy intake, which produced six clusters: 'Traditional Irish'; 'Continental'; 'Unhealthy foods'; 'Light-meal foods & low-fat milk'; 'Healthy foods'; 'Wholemeal bread & desserts'. For PCA, food groups in the format of g/d were found to be the most suitable, and this revealed four dietary patterns: 'Unhealthy foods & high alcohol'; 'Traditional Irish'; 'Healthy foods'; 'Sweet convenience foods & low alcohol'. In summary, cluster analysis and PCA identified similar dietary patterns when presented with the same dataset. However, the two dietary pattern methods required different formats of the food-group variable, and the most appropriate format of the input variable should be considered in future studies.
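
    A sketch of the two pattern methods side by side, with random stand-in data shaped like the survey (1379 subjects, 33 food groups); k-means runs on percentage-of-energy variables and PCA on g/d variables, the formats the study found most suitable:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(5)
    grams = rng.gamma(2.0, 50.0, size=(1379, 33))     # g/d per food group
    energy = grams * rng.uniform(1.0, 9.0, size=33)   # crude energy weighting
    pct_energy = 100.0 * energy / energy.sum(axis=1, keepdims=True)

    clusters = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(pct_energy)
    loadings = PCA(n_components=4).fit(grams).components_   # 4 dietary factors
    ```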

  3. Application of functional analysis techniques to supervisory systems

    International Nuclear Information System (INIS)

    Lambert, Manuel; Riera, Bernard; Martel, Gregory

    1999-01-01

    The aim of this paper is, firstly, to apply two useful functional analysis techniques to the design of supervisory systems for complex processes and, secondly, to discuss the strengths and weaknesses of each of them. Two functional analysis techniques have been applied, SADT (Structured Analysis and Design Technique) and FAST (Functional Analysis System Technique), to an example process, a Water Supply Process Control (WSPC) system. These techniques allow a functional description of industrial processes. The paper briefly discusses the functions of a supervisory system and some advantages of applying functional analysis to the design of a 'human'-centered supervisory system. The basic principles of the two techniques applied to the WSPC system are then presented. Finally, the different results obtained from the two techniques are discussed.

  4. CATEGORICAL IMAGE COMPONENTS IN THE FORMING SYSTEM OF A MARKETING TECHNIQUES MANAGER’S IMAGE CULTURE

    OpenAIRE

    Anna Borisovna Cherednyakova

    2015-01-01

    Based on an understanding of the formation of image culture in managers of marketing techniques, as representatives of the social and communication interaction of public structures, the categorical apparatus of image culture is analyzed, with an emphasis on the etymology of the image as an integral component of image culture. The categorical components of the image are presented from the standpoint of image culture as a personal new formation, an integral part of the professional activity of the marke...

  5. Use of principal components analysis and three-dimensional atmospheric-transport models for reactor-consequence evaluation

    International Nuclear Information System (INIS)

    Gudiksen, P.H.; Walton, J.J.; Alpert, D.J.; Johnson, J.D.

    1982-01-01

    This work explores the use of principal components analysis coupled to three-dimensional atmospheric transport and dispersion models for evaluating the environmental consequences of reactor accidents. This permits the inclusion of meteorological data from multiple sites and of the effects of topography in the consequence evaluation, features not normally included in such analyses. The technique identifies prevailing regional wind patterns and their frequencies for use in the transport and dispersion calculations. Analysis of a hypothetical accident scenario involving a release of radioactivity from a reactor situated in a river valley indicated that the technique is quite useful whenever recurring wind patterns exist, as is often the case in complex terrain. Considerable differences were revealed in a comparison with results obtained from a more conventional Gaussian plume model using only the reactor site meteorology and no topographic effects.

  6. Aging techniques and qualified life for safety system components

    International Nuclear Information System (INIS)

    Weaver, W.W.

    1980-01-01

    Presently, the qualified life objective for Class IE safety system components in nuclear power plants is somewhat of a subjective engineering judgment. When the desired qualified life is ascertained, there are other choices that must be made (which may be influenced by the desired qualified life) such as selecting the aging procedure to use in the qualification process. Adding complexity to the situation is the fact that there are some limitations in aging techniques at the present time. This article presents (1) a discussion of the limitations in aging procedures, (2) the general philosophy of qualification, and (3) a proposed method for specifying a desired qualified life, which uses a probabilistic approach. The probabilistic approach proposed in item 3 can be applied to natural aging programs and eventually to accelerated aging once the present technical difficulties are overcome

  7. Data analysis of x-ray fluorescence holography by subtracting normal component from inverse hologram

    International Nuclear Information System (INIS)

    Happo, Naohisa; Hayashi, Kouichi; Hosokawa, Shinya

    2010-01-01

    X-ray fluorescence holography (XFH) is a powerful technique for determining three-dimensional local atomic arrangements around a specific fluorescing element. However, the raw experimental hologram is predominantly a mixed hologram, i.e., a mixture of holograms generated in both the normal and the inverse modes, which produces unreliable atomic images. In this paper, we propose a practical method for subtracting the normal component from inverse XFH data by means of a Fourier transform, demonstrated on the calculated hologram of a model ZnTe cluster. Many spots originating from the normal components could be properly removed using a mask function, and clear atomic images were reconstructed at the correct positions of the model cluster. This method was successfully applied to the analysis of experimental ZnTe single-crystal XFH data. (author)
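
    The mechanics of the subtraction can be sketched with a generic FFT mask (the mask geometry and data below are illustrative, not the ZnTe analysis itself):

    ```python
    # Fourier-transform the mixed hologram, zero the spots attributed to the
    # normal component with a mask, and transform back.
    import numpy as np

    rng = np.random.default_rng(6)
    hologram = rng.standard_normal((128, 128))   # stand-in mixed hologram

    F = np.fft.fftshift(np.fft.fft2(hologram))
    yy, xx = np.mgrid[-64:64, -64:64]
    mask = ((xx - 20) ** 2 + (yy - 20) ** 2) > 5 ** 2   # suppress one "spot"
    inverse_only = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
    ```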

  8. Variational Bayesian Learning for Wavelet Independent Component Analysis

    Science.gov (United States)

    Roussos, E.; Roberts, S.; Daubechies, I.

    2005-11-01

    In an exploratory approach to data analysis, it is often useful to consider the observations as generated from a set of latent generators or "sources" via a generally unknown mapping. For the noisy overcomplete case, where we have more sources than observations, the problem becomes extremely ill-posed. Solutions to such inverse problems can, in many cases, be achieved by incorporating prior knowledge about the problem, captured in the form of constraints. This setting is a natural candidate for the application of the Bayesian methodology, allowing us to incorporate "soft" constraints in a natural manner. The work described in this paper is mainly driven by problems in functional magnetic resonance imaging of the brain, for the neuro-scientific goal of extracting relevant "maps" from the data. This can be stated as a 'blind' source separation problem. Recent experiments in the field of neuroscience show that these maps are sparse, in some appropriate sense. The separation problem can be solved by independent component analysis (ICA), viewed as a technique for seeking sparse components, assuming appropriate distributions for the sources. We derive a hybrid wavelet-ICA model, transforming the signals into a domain where the modeling assumption of sparsity of the coefficients with respect to a dictionary is natural. We follow a graphical modeling formalism, viewing ICA as a probabilistic generative model. We use hierarchical source and mixing models and apply Bayesian inference to the problem. This allows us to perform model selection in order to infer the complexity of the representation, as well as automatic denoising. Since exact inference and learning in such a model is intractable, we follow a variational Bayesian mean-field approach in the conjugate-exponential family of distributions, for efficient unsupervised learning in multi-dimensional settings. The performance of the proposed algorithm is demonstrated on some representative experiments.

  9. Analysis of neutral volatile aroma components in Tilsit cheese using a combination of dynamic headspace technique, capillary gas chromatography and mass spectrometry

    International Nuclear Information System (INIS)

    Dillinger, K.H.

    2000-03-01

    Tilsit cheese is made through the action of rennet and starter cultures on milk. Ripening proceeds by repeated inoculation of the surface of the Tilsit cheese with yeasts and red smear cultures. This surface flora forms the typical aroma of Tilsit cheese during the ripening process. The aim of the work was to obtain general knowledge about the kinds and amounts of the neutral volatile aroma components of Tilsit cheese. In addition, the ability of red smear cultures to form aroma components, and the distribution of these components in the cheese, were to be examined. The results were intended to allow an evaluation of the formation of aroma components in Tilsit cheese. The semi-quantitative analyses of the aroma components of all samples were done by combining dynamic headspace extraction, gas chromatography and mass spectrometry. In this process the neutral volatile aroma components were extracted by the dynamic headspace technique, adsorbed on a trap, thermally desorbed, separated by gas chromatography, and detected and identified by mass spectrometry. 63 components belonging to the chemical classes of esters, ketones, aldehydes, alcohols and sulfur-containing substances, as well as aromatic hydrocarbons, chlorinated hydrocarbons and hydrocarbons, were found in the analysed cheese samples from different Austrian Tilsit manufacturing plants. All cheese samples showed a qualitatively equal but quantitatively varied spectrum of aroma components. The cultivation of pure cultures on a cheese agar medium showed that all analysed aroma components are involved in the biochemical metabolism of these cultures. The ability to produce aroma components differed greatly between the strains, and it was not possible to correlate this ability with the taxonomic classification of the strains. The majority of the components had a non-homogeneous concentration profile in the cheese body. This was explained by effects of diffusion and by temporally and spatially different formation of components by the metabolism of the

  10. Hospitals Productivity Measurement Using Data Envelopment Analysis Technique.

    Science.gov (United States)

    Torabipour, Amin; Najarzadeh, Maryam; Arab, Mohammad; Farzianpour, Freshteh; Ghasemzadeh, Roya

    2014-11-01

    This study aimed to measure hospital productivity using the data envelopment analysis (DEA) technique and Malmquist indices. This is a cross-sectional study in which panel data were used over a 4-year period from 2007 to 2010. The research was implemented in 12 teaching and non-teaching hospitals of Ahvaz County. The data envelopment analysis technique and the Malmquist indices with an input-orientation approach were used to analyze the data and estimate productivity. Data were analyzed using the SPSS.18 and DEAP.2 software. Six hospitals (50%) had a value lower than 1, which represents an increase in total productivity; the other hospitals were non-productive. The average total productivity factor (TPF) was 1.024 for all hospitals, which represents a decrease in efficiency of 2.4% from 2007 to 2010. The average technical, technological, scale and managerial efficiency changes were 0.989, 1.008, 1.028, and 0.996, respectively. There was no significant difference in mean productivity changes between teaching and non-teaching hospitals (P>0.05), except in 2009. The productivity of the hospitals generally had an increasing trend; however, the overall average productivity decreased. Moreover, among the several components of total productivity, the variation in technological efficiency had the greatest impact on the reduction of the average total productivity.
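
    The efficiency scores underlying such a study come from solving one small linear programme per hospital; a minimal input-oriented CCR sketch with invented data (2 inputs, 1 output, 3 hospitals):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[100, 200], [120, 260], [90, 150]], float).T  # inputs x DMUs
    Y = np.array([[5000], [5200], [4800]], float).T             # outputs x DMUs

    def ccr_efficiency(j0):
        """min theta s.t. sum(lam*x) <= theta*x_j0, sum(lam*y) >= y_j0, lam >= 0."""
        n = X.shape[1]
        c = np.r_[1.0, np.zeros(n)]                   # minimise theta
        A_in = np.c_[-X[:, [j0]], X]
        A_out = np.c_[np.zeros((Y.shape[0], 1)), -Y]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(X.shape[0]), -Y[:, j0]],
                      bounds=[(None, None)] + [(0, None)] * n)
        return res.fun

    print([round(ccr_efficiency(j), 3) for j in range(3)])  # 1.0 = efficient
    ```

    Chaining such scores across years, each against that year's frontier, is what yields the Malmquist productivity indices reported above.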

  11. A content analysis of preconception health education materials: characteristics, strategies, and clinical-behavioral components.

    Science.gov (United States)

    Levis, Denise M; Westbrook, Kyresa

    2013-01-01

    Many health organizations and practitioners in the United States promote preconception health (PCH) to consumers. However, summaries and evaluations of PCH promotional activities are limited. We conducted a content analysis of PCH health education materials collected from local-, state-, national-, and federal-level partners by using an existing database of partners, outreach to maternal and child health organizations, and a snowball sampling technique. Thirty-two materials were included for analysis, based on inclusion/exclusion criteria. A codebook guided coding of the materials' characteristics (type, authorship, language, cost), use of marketing and behavioral strategies to reach the target population (target audience, message framing, call to action), and inclusion of PCH subject matter (clinical-behavioral components). The self-assessment of PCH behaviors was the most common material type (28%) in the sample. Most materials broadly targeted women, and there was a near-equal distribution in targeting by pregnancy planning status segments (planners and nonplanners). "Practicing PCH benefits the baby's health" was the most common message frame used. Materials contained a wide range of clinical-behavioral components. Strategic targeting of subgroups of consumers is an important but overlooked strategy. More research is needed on PCH components, in terms of packaging and increasing motivation, which could guide the use and placement of clinical-behavioral components within promotional materials.

  12. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.jp; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro [Department of Radiation Oncology and Image-applied Therapy, Kyoto University, 54 Shogoin-Kawaharacho, Sakyo, Kyoto 606-8507 (Japan)

    2016-09-15

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
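
    A minimal sketch of the ANOVA computation under the stated balanced one-factor model, with simulated data whose true systematic and random SDs are known:

    ```python
    # m patients, k fractions each. Sigma: systematic (between-patient) SD;
    # sigma: random (intra-patient) SD. Both estimated from mean squares.
    import numpy as np

    rng = np.random.default_rng(7)
    m, k = 20, 5
    true_Sigma, true_sigma = 2.0, 1.0
    data = rng.normal(0.0, true_Sigma, (m, 1)) + rng.normal(0.0, true_sigma, (m, k))

    msb = k * data.mean(axis=1).var(ddof=1)   # between-patient mean square
    msw = data.var(axis=1, ddof=1).mean()     # within-patient mean square
    sigma_hat = np.sqrt(msw)
    Sigma_hat = np.sqrt(max((msb - msw) / k, 0.0))
    print(round(Sigma_hat, 2), round(sigma_hat, 2))   # close to 2.0 and 1.0
    ```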

  13. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    International Nuclear Information System (INIS)

    Matsuo, Yukinori; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-01-01

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.

  14. Effect of different drying techniques on bioactive components, fatty acid composition, and volatile profile of robusta coffee beans.

    Science.gov (United States)

    Dong, Wenjiang; Hu, Rongsuo; Chu, Zhong; Zhao, Jianping; Tan, Lehe

    2017-11-01

    This study investigated the effect of different drying techniques, namely room-temperature drying (RTD), solar drying (SD), heat-pump drying (HPD), hot-air drying (HAD), and freeze drying (FD), on the bioactive components, fatty acid composition, and volatile compound profile of robusta coffee beans. The data showed that FD was an effective method to preserve fat, organic acids, and monounsaturated fatty acids. In contrast, HAD was ideal for retaining polyunsaturated fatty acids and amino acids. Sixty-two volatile compounds were identified in the differently dried coffee beans, representing 90% of the volatile compounds. HPD of the coffee beans produced the largest number of volatiles, whereas FD resulted in the highest volatile content. A principal component analysis demonstrated a close relationship between the HPD, SD, and RTD methods, whereas the FD and HAD methods were significantly different. Overall, the results provide a basis for potential application to other similar thermally sensitive materials.

  15. Raman spectroscopy combined with principal component analysis and k nearest neighbour analysis for non-invasive detection of colon cancer

    Science.gov (United States)

    Li, Xiaozhou; Yang, Tianyue; Li, Siqi; Wang, Deli; Song, Youtao; Zhang, Su

    2016-03-01

    This paper attempts to investigate the feasibility of using Raman spectroscopy for the diagnosis of colon cancer. Serum taken from 75 healthy volunteers, 65 colon cancer patients and 60 post-operation colon cancer patients was measured in this experiment. In the Raman spectra of all three groups, the Raman peaks at 750, 1083, 1165, 1321, 1629 and 1779 cm-1 assigned to nucleic acids, amino acids and chromophores were consistently observed. All of these six Raman peaks were observed to have statistically significant differences between groups. For quantitative analysis, the multivariate statistical techniques of principal component analysis (PCA) and k nearest neighbour analysis (KNN) were utilized to develop diagnostic algorithms for classification. In PCA, several peaks in the principal component (PC) loadings spectra were identified as the major contributors to the PC scores. Some of the peaks in the PC loadings spectra were also reported as characteristic peaks for colon tissues, which implies correlation between peaks in PC loadings spectra and those in the original Raman spectra. KNN was also performed on the obtained PCs, and a diagnostic accuracy of 91.0% and a specificity of 92.6% were achieved.
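
    The diagnostic pipeline reduces to PCA scores feeding a KNN classifier; a minimal sketch with random stand-ins for the serum spectra (real data would separate far above the chance level this noise yields):

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(8)
    X = rng.standard_normal((200, 1024))   # 200 spectra, 1024 wavenumber bins
    y = rng.integers(0, 3, size=200)       # healthy / cancer / post-operation

    clf = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=5))
    print(cross_val_score(clf, X, y, cv=5).mean())
    ```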

  16. Raman spectroscopy combined with principal component analysis and k nearest neighbour analysis for non-invasive detection of colon cancer

    International Nuclear Information System (INIS)

    Li, Xiaozhou; Yang, Tianyue; Wang, Deli; Li, Siqi; Song, Youtao; Zhang, Su

    2016-01-01

    This paper attempts to investigate the feasibility of using Raman spectroscopy for the diagnosis of colon cancer. Serum taken from 75 healthy volunteers, 65 colon cancer patients and 60 post-operation colon cancer patients was measured in this experiment. In the Raman spectra of all three groups, the Raman peaks at 750, 1083, 1165, 1321, 1629 and 1779 cm −1 assigned to nucleic acids, amino acids and chromophores were consistently observed. All of these six Raman peaks were observed to have statistically significant differences between groups. For quantitative analysis, the multivariate statistical techniques of principal component analysis (PCA) and k nearest neighbour analysis (KNN) were utilized to develop diagnostic algorithms for classification. In PCA, several peaks in the principal component (PC) loadings spectra were identified as the major contributors to the PC scores. Some of the peaks in the PC loadings spectra were also reported as characteristic peaks for colon tissues, which implies correlation between peaks in PC loadings spectra and those in the original Raman spectra. KNN was also performed on the obtained PCs, and a diagnostic accuracy of 91.0% and a specificity of 92.6% were achieved. (paper)

  17. The GC/MS Analysis of Volatile Components Extracted by Different Methods from Exocarpium Citri Grandis

    Directory of Open Access Journals (Sweden)

    Zhisheng Xie

    2013-01-01

    Volatile components from Exocarpium Citri Grandis (ECG) were extracted by three methods: steam distillation (SD), headspace solid-phase microextraction (HS-SPME), and solvent extraction (SE). A total of 81 compounds were identified by gas chromatography-mass spectrometry, including 77 (SD), 56 (HS-SPME), and 48 (SE) compounds, respectively. Regardless of the extraction method, terpenes (39.98-57.81%) were the main volatile components of ECG, mainly germacrene-D, limonene, (E,E,E)-2,6,11,15-tetramethyl-2,6,8,10,14-hexadecapentaene, and trans-caryophyllene. The three methods were compared in terms of extraction profile and properties. SD gave a relatively complete profile of the volatiles in ECG at the cost of a long extraction time; SE enabled the analysis of compounds of low volatility and high molecular weight but lost some volatile components; HS-SPME achieved satisfactory extraction efficiency and gave results similar to those of SD at the analytical level, while requiring a smaller sample amount, a shorter extraction time, and a simpler procedure. Although SD and SE are treated as traditional preparative extraction techniques for volatiles in both small batches and at large scale, HS-SPME coupled with GC/MS could be useful and appropriate for the rapid extraction and qualitative analysis of volatile components from medicinal plants at the analytical level.

  18. Clinical usefulness of physiological components obtained by factor analysis

    International Nuclear Information System (INIS)

    Ohtake, Eiji; Murata, Hajime; Matsuda, Hirofumi; Yokoyama, Masao; Toyama, Hinako; Satoh, Tomohiko.

    1989-01-01

    The clinical usefulness of physiological components obtained by factor analysis was assessed in 99mTc-DTPA renography. Using definite physiological components, other dynamic data could be analyzed. In this paper, dynamic renal function after ESWL (extracorporeal shock wave lithotripsy) treatment was examined using physiological components in the kidney before ESWL and/or a normal kidney. We could easily evaluate the change in renal function with this method. The usefulness of the new analysis using physiological components is summarized as follows: 1) the change of a dynamic function can be assessed quantitatively as the change of the contribution ratio; 2) the change of a diseased condition can be evaluated morphologically as the change of the functional image. (author)

  19. Characterization of Ground Displacement Sources from Variational Bayesian Independent Component Analysis of Space Geodetic Time Series

    Science.gov (United States)

    Gualandi, Adriano; Serpelloni, Enrico; Elina Belardinelli, Maria; Bonafede, Maurizio; Pezzo, Giuseppe; Tolomei, Cristiano

    2015-04-01

    A critical point in the analysis of ground displacement time series, as measured by modern space geodetic techniques (primarily continuous GPS/GNSS and InSAR), is the development of data-driven methods that make it possible to discern and characterize the different sources that generate the observed displacements. A widely used multivariate statistical technique is Principal Component Analysis (PCA), which reduces the dimensionality of the data space while keeping most of the variance of the dataset explained. It reproduces the original data using a limited number of Principal Components, but it also shows some deficiencies, since PCA does not perform well in finding the solution to the so-called Blind Source Separation (BSS) problem. Recovering and separating the different sources that generate the observed ground deformation is a fundamental task in order to attach a physical meaning to the possible sources. PCA fails in the BSS problem because it looks for a new Euclidean space where the projected data are uncorrelated. Uncorrelatedness is usually not a strong enough condition, and it has been shown that the BSS problem can be tackled by requiring the components to be independent. Independent Component Analysis (ICA) is, in fact, another popular technique adopted to approach this problem, and it can be used in all those fields where PCA is also applied. An ICA approach enables us to explain the displacement time series while imposing fewer constraints on the model, and to reveal anomalies in the data such as transient deformation signals. However, the independence condition is not easy to impose, and some approximations are often necessary. To work around this problem, we use a variational Bayesian ICA (vbICA) method, which models the probability density function (pdf) of each source signal using a mixture of Gaussian distributions. This technique allows more flexibility in the description of the pdf of the sources

  20. Analysis methods for structure reliability of piping components

    International Nuclear Information System (INIS)

    Schimpfke, T.; Grebner, H.; Sievers, J.

    2004-01-01

    In the frame of the German reactor safety research program of the Federal Ministry of Economics and Labour (BMWA), GRS has started to develop an analysis code named PROST (PRObabilistic STructure analysis) for estimating the leak and break probabilities of piping systems in nuclear power plants. The long-term objective of this development is to provide failure probabilities of passive components for the probabilistic safety analysis of nuclear power plants. Up to now the code can be used for calculating fatigue problems. The paper outlines the main capabilities and theoretical background of the present PROST development and presents some results of a benchmark analysis in the frame of the European project NURBIM (Nuclear Risk Based Inspection Methodologies for Passive Components). (orig.)

  1. The Heliospheric Cataloguing, Analysis and Techniques Service (HELCATS) project

    Science.gov (United States)

    Barnes, D.; Harrison, R. A.; Davies, J. A.; Perry, C. H.; Moestl, C.; Rouillard, A.; Bothmer, V.; Rodriguez, L.; Eastwood, J. P.; Kilpua, E.; Gallagher, P.; Odstrcil, D.

    2017-12-01

    Understanding solar wind evolution is fundamental to advancing our knowledge of energy and mass transport in the solar system, whilst also being crucial to space weather and its prediction. The advent of truly wide-angle heliospheric imaging has revolutionised the study of solar wind evolution, by enabling direct and continuous observation of both transient and background components of the solar wind as they propagate from the Sun to 1 AU and beyond. The recently completed, EU-funded FP7 Heliospheric Cataloguing, Analysis and Techniques Service (HELCATS) project (1st May 2014 - 30th April 2017) combined European expertise in heliospheric imaging, built up over the last decade in particular through leadership of the Heliospheric Imager (HI) instruments aboard NASA's STEREO mission, with expertise in solar and coronal imaging as well as the interpretation of in-situ and radio diagnostic measurements of solar wind phenomena. HELCATS involved: (1) the cataloguing of transient (coronal mass ejections) and background (stream/corotating interaction regions) solar wind structures observed by the STEREO/HI instruments, including estimates of their kinematic properties based on a variety of modelling techniques; (2) the verification of these kinematic properties through comparison with solar source observations and in-situ measurements at multiple points throughout the heliosphere; (3) the assessment of the potential for initialising numerical models based on the derived kinematic properties of transient and background solar wind components; and (4) the assessment of the complementarity of radio observations (Type II radio bursts and interplanetary scintillation) in the detection and analysis of heliospheric structure in combination with heliospheric imaging observations. In this presentation, we provide an overview of the HELCATS project emphasising, in particular, the principal achievements and legacy of this unprecedented project.

  2. System diagnostics using qualitative analysis and component functional classification

    International Nuclear Information System (INIS)

    Reifman, J.; Wei, T.Y.C.

    1993-01-01

    A method for detecting and identifying faulty component candidates during off-normal operations of nuclear power plants involves the qualitative analysis of macroscopic imbalances in the conservation equations of mass, energy and momentum in thermal-hydraulic control volumes associated with one or more plant components, together with a functional classification of components. The qualitative analysis of mass and energy is performed through the associated equations of state, while imbalances in momentum are obtained by tracking mass flow rates; these are incorporated into a first knowledge base. The plant components are functionally classified, according to their type, as sources or sinks of mass, energy and momentum, depending upon which of the three balance equations is most strongly affected by a faulty component; this classification is incorporated into a second knowledge base. Information describing the connections among the components of the system forms a third knowledge base. The method is particularly adapted for use in a diagnostic expert system to detect and identify faulty component candidates in the presence of component failures, and is not limited to use in a nuclear power plant but may be used with virtually any type of thermal-hydraulic operating system. 5 figures

  3. IMPROVED SEARCH OF PRINCIPAL COMPONENT ANALYSIS DATABASES FOR SPECTRO-POLARIMETRIC INVERSION

    International Nuclear Information System (INIS)

    Casini, R.; Lites, B. W.; Ramos, A. Asensio; Ariste, A. López

    2013-01-01

    We describe a simple technique for the acceleration of spectro-polarimetric inversions based on principal component analysis (PCA) of Stokes profiles. This technique involves the indexing of the database models based on the sign of the projections (PCA coefficients) of the first few relevant orders of principal components of the four Stokes parameters. In this way, each model in the database can be attributed a distinctive binary number of 4n bits, where n is the number of PCA orders used for the indexing. Each of these binary numbers (indices) identifies a group of "compatible" models for the inversion of a given set of observed Stokes profiles sharing the same index. The complete set of the binary numbers so constructed evidently determines a partition of the database. The search of the database for the PCA inversion of spectro-polarimetric data can profit greatly from this indexing. In practical cases it becomes possible to approach the ideal acceleration factor of 2^(4n) as compared to the systematic search of a non-indexed database for a traditional PCA inversion. This indexing method relies on the existence of a physical meaning in the sign of the PCA coefficients of a model. For this reason, the presence of model ambiguities and of spectro-polarimetric noise in the observations limits in practice the number n of relevant PCA orders that can be used for the indexing.
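
    A minimal sketch of the indexing idea, assuming NumPy (function and variable names hypothetical): the signs of the n retained PCA coefficients of each of the four Stokes parameters are packed into a 4n-bit integer key, and only models sharing the observation's key are searched.

        import numpy as np

        def pca_sign_index(coeffs):
            """coeffs: (4, n) projections of the four Stokes parameters onto the
            first n principal components; returns a 4n-bit integer index.
            Assumes n is small enough that 4n <= 64 bits."""
            bits = (coeffs.ravel() >= 0).astype(np.uint64)
            return int(bits @ (np.uint64(1) << np.arange(bits.size, dtype=np.uint64)))

        def build_partition(database_coeffs):
            """Partition the model database by sign index (2^(4n) possible keys)."""
            partition = {}
            for model_id, c in enumerate(database_coeffs):
                partition.setdefault(pca_sign_index(c), []).append(model_id)
            return partition

        def candidate_models(partition, observed_coeffs):
            """Only the 'compatible' group sharing the observation's index is searched."""
            return partition.get(pca_sign_index(observed_coeffs), [])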

  4. Analysis and analytical techniques

    Energy Technology Data Exchange (ETDEWEB)

    Batuecas Rodriguez, T [Department of Chemistry and Isotopes, Junta de Energia Nuclear, Madrid (Spain)

    1967-01-01

    The technology associated with the use of organic coolants in nuclear reactors depends to a large extent on the determination and control of their physical and chemical properties, and particularly on the viability, speed, sensitivity, precision and accuracy (depending on the intended usage) of the methods employed in detection and analytical determination. This has led to the study and development of numerous techniques, some specially designed for the extreme conditions involved in working with the types of product in question and others adapted from existing techniques. In the specific case of polyphenyl and hydropolyphenyl mixtures, which have been the principal subjects of study to date and offer greatest promise, the analytical problems are broadly as follows: composition of the initial product or virgin coolant (composition of macro components and amounts of organic and inorganic impurities); the coolant during and after operation (determination of gases and organic compounds produced by pyrolysis and radiolysis, i.e. degradation and polymerization products); control of systems for purifying and regenerating the coolant after use (dissolved pressurization gases); detection of intermediate products of decomposition, which are generally very unstable (free radicals); degree of fouling and film formation (tests to determine the potential formation of films); corrosion of structural elements and canning materials; and health and safety (toxicity, inflammability and impurities that can be activated). Although some of the above problems are closely interrelated and entail similar techniques, they vary as to degree of difficulty. Another question is the difficulty of distinguishing clearly between techniques for determining physical and physico-chemical properties, on one hand, and analytical techniques on the other. Any classification is therefore somewhat arbitrary (for example, in the case of dosimetry and techniques for determining mean molecular weights or electrical conductivity

  5. The new generations of power components will depend on neutron and/or electron bombardment techniques

    International Nuclear Information System (INIS)

    Lilen, H.

    1976-01-01

    Neutron and electron bombardment techniques for material doping, newly introduced in the fabrication of power semiconductor components (diodes, transistors, thyristors and triacs), are briefly outlined. Neutron bombardment of high-purity silicon produces the short-lived isotope 31Si (from 30Si), which decays into 31P. The phosphorus, with its five peripheral electrons, induces a negative (N) doping, and the neutron technique gives a homogeneous doping. Furthermore, silicon bombardment with 1 to 2 MeV electrons induces micro-ruptures in the lattice, which act as recombination traps reducing carrier lifetimes. Consequently, gold diffusion techniques can be replaced by electron bombardment, with a gain in controlling carrier lifetimes. [fr]

  6. Automatic denoising of functional MRI data: combining independent component analysis and hierarchical fusion of classifiers.

    Science.gov (United States)

    Salimi-Khorshidi, Gholamreza; Douaud, Gwenaëlle; Beckmann, Christian F; Glasser, Matthew F; Griffanti, Ludovica; Smith, Stephen M

    2014-04-15

    Many sources of fluctuation contribute to the fMRI signal, and this makes identifying the effects that are truly related to the underlying neuronal activity difficult. Independent component analysis (ICA) - one of the most widely used techniques for the exploratory analysis of fMRI data - has been shown to be a powerful technique for identifying various sources of neuronally-related and artefactual fluctuation in fMRI data (both with the application of external stimuli and with the subject "at rest"). ICA decomposes fMRI data into patterns of activity (a set of spatial maps and their corresponding time series) that are statistically independent and add linearly to explain voxel-wise time series. Given the set of ICA components, if the components representing "signal" (brain activity) can be distinguished from the "noise" components (effects of motion, non-neuronal physiology, scanner artefacts and other nuisance sources), the latter can then be removed from the data, providing an effective cleanup of structured noise. Manual classification of components is labour intensive and requires expertise; hence, a fully automatic noise detection algorithm that can reliably detect various types of noise sources (in both task and resting fMRI) is desirable. In this paper, we introduce FIX ("FMRIB's ICA-based X-noiseifier"), which provides an automatic solution for denoising fMRI data via accurate classification of ICA components. For each ICA component FIX generates a large number of distinct spatial and temporal features, each describing a different aspect of the data (e.g., what proportion of temporal fluctuations are at high frequencies). The set of features is then fed into a multi-level classifier (built around several different classifiers). Once trained through the hand-classification of a sufficient number of training datasets, the classifier can then automatically classify new datasets. The noise components can then be subtracted from (or regressed out of) the original
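
    This is not FIX itself, but the feature-plus-classifier recipe can be sketched, assuming NumPy and scikit-learn, with smoothed versus white noise series standing in for hand-labelled signal and noise components:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        fs = 1.0 / 0.72          # sampling rate for a hypothetical TR of 0.72 s

        def temporal_features(ts):
            """Two toy per-component features: fraction of spectral power above
            0.1 Hz, and lag-1 autocorrelation of the component time series."""
            f = np.fft.rfftfreq(ts.size, d=1.0 / fs)
            p = np.abs(np.fft.rfft(ts - ts.mean())) ** 2
            return [p[f > 0.1].sum() / p.sum(), np.corrcoef(ts[:-1], ts[1:])[0, 1]]

        # stand-ins for hand-labelled components: smooth "signal" vs. white "noise"
        signal = [np.convolve(rng.normal(size=500), np.ones(12) / 12, "same")
                  for _ in range(50)]
        noise = [rng.normal(size=500) for _ in range(50)]
        X = [temporal_features(ts) for ts in signal + noise]
        y = [1] * 50 + [0] * 50

        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
        # a new white-noise component should be labelled 0 and regressed out
        print(clf.predict([temporal_features(rng.normal(size=500))]))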

  7. On an efficient modification of singular value decomposition using independent component analysis for improved MRS denoising and quantification

    International Nuclear Information System (INIS)

    Stamatopoulos, V G; Karras, D A; Mertzios, B G

    2009-01-01

    An efficient modification of singular value decomposition (SVD) is proposed in this paper, aiming at denoising and, more importantly, at quantifying more accurately the statistically independent spectra of metabolite sources in magnetic resonance spectroscopy (MRS). Although SVD is known in MRS applications, and several efficient algorithms exist for estimating the SVD summation terms in which the raw MRS data are analyzed, such an analysis would benefit from techniques able to estimate statistically independent spectra. SVD is known to separate signal and noise subspaces, but it assumes orthogonal properties for the components comprising the signal subspace, which is not always the case and might impose heavy constraints in the MRS case. A much less restrictive assumption is that of statistically independent components. Therefore, a modification of the main methodology incorporating techniques for calculating the assumed statistically independent spectra is proposed, by applying SVD to the MRS spectrogram obtained through the short-time Fourier transform (STFT). This approach is based on combining SVD of the STFT spectrogram with an iterative application of independent component analysis (ICA). Moreover, it is shown that the proposed methodology combined with a regression analysis leads to improved quantification of the MRS signals. An experimental study based on synthetic MRS signals has been conducted to evaluate the proposed methodologies. The results obtained are discussed and shown to be quite promising
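
    A compact sketch of such a pipeline, assuming SciPy and scikit-learn and a synthetic two-component FID (all parameter values illustrative, not the authors' exact algorithm):

        import numpy as np
        from scipy.signal import stft
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(1)
        fs = 1000.0
        t = np.arange(2048) / fs
        # toy "MRS" FID: two damped tones plus noise
        fid = (np.exp(-t / 0.3) * np.sin(2 * np.pi * 130 * t)
               + np.exp(-t / 0.2) * np.sin(2 * np.pi * 240 * t)
               + 0.2 * rng.normal(size=t.size))

        # 1) STFT spectrogram of the raw signal
        f, tau, Z = stft(fid, fs=fs, nperseg=256)
        S = np.abs(Z)

        # 2) SVD separates signal and noise subspaces; keep the leading terms
        U, s, Vt = np.linalg.svd(S, full_matrices=False)
        k = 4
        S_denoised = (U[:, :k] * s[:k]) @ Vt[:k]

        # 3) ICA on the retained subspace estimates statistically independent
        #    spectral sources (relaxing SVD's orthogonality assumption)
        ica = FastICA(n_components=2, random_state=0, max_iter=1000)
        sources = ica.fit_transform(S_denoised.T)   # (time frames) x (components)
        spectra = ica.mixing_                        # frequency profiles of sources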

  8. MUSIC-CONTENT-ADAPTIVE ROBUST PRINCIPAL COMPONENT ANALYSIS FOR A SEMANTICALLY CONSISTENT SEPARATION OF FOREGROUND AND BACKGROUND IN MUSIC AUDIO SIGNALS

    OpenAIRE

    Papadopoulos , Hélène; Ellis , Daniel P.W.

    2014-01-01

    Robust Principal Component Analysis (RPCA) is a technique to decompose signals into sparse and low rank components, and has recently drawn the attention of the MIR field for the problem of separating leading vocals from accompaniment, with appealing results obtained on small excerpts of music. However, the performance of the method drops when processing entire music tracks. We present an adaptive formulation of RPCA that incorporates music content information to guid...
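
    The underlying RPCA decomposition (without the adaptive, content-informed weighting the paper adds) can be sketched as a bare-bones principal component pursuit, assuming NumPy; on a magnitude spectrogram M, the low-rank part L tends to capture the repetitive accompaniment and the sparse part S the vocals:

        import numpy as np

        def rpca_pcp(M, lam=None, mu=None, n_iter=100):
            """Bare-bones Robust PCA via an inexact augmented Lagrangian:
            decompose M into a low-rank L plus a sparse S."""
            m, n = M.shape
            lam = lam or 1.0 / np.sqrt(max(m, n))
            mu = mu or 0.25 * m * n / np.abs(M).sum()
            L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
            for _ in range(n_iter):
                # singular-value thresholding -> low-rank update
                U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
                L = (U * np.maximum(sig - 1.0 / mu, 0)) @ Vt
                # entry-wise soft thresholding -> sparse update
                R = M - L + Y / mu
                S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)
                Y += mu * (M - L - S)
            return L, S

        # e.g. on a magnitude spectrogram: L ~ accompaniment, S ~ vocals
        M = np.random.rand(128, 400)
        L, S = rpca_pcp(M)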

  9. Principal Component Analysis of Body Measurements In Three ...

    African Journals Online (AJOL)

    This study was conducted to explore the relationship among body measurements in three strains of broiler chicken (Arbor Acre, Marshal and Ross) using principal component analysis, with the view of identifying those components that define body conformation in broilers. A total of 180 birds were used, 60 per strain.

  10. Fault tree analysis with multistate components

    International Nuclear Information System (INIS)

    Caldarola, L.

    1979-02-01

    A general analytical theory has been developed which allows one to calculate the occurrence probability of the top event of a fault tree with multistate (more than two states) components. It is shown that, in order to correctly describe a system with multistate components, a special type of Boolean algebra is required. This is called 'Boolean algebra with restrictions on variables' and its basic rules are the same as those of the traditional Boolean algebra, with some additional restrictions on the variables. These restrictions are extensively discussed in the paper. Important features of the method are the identification of the complete base and of the smallest irredundant base of a Boolean function, which does not necessarily need to be coherent. It is shown that the identification of the complete base of a Boolean function requires the application of some algorithms which are not used in today's computer programmes for fault tree analysis. The problem of statistical dependence among primary components is discussed. The paper includes a small demonstrative example to illustrate the method. The example also includes statistically dependent components. (orig.) [de]

  11. Detection of cow's milk proteins and minor components in human milk using proteomics techniques.

    Science.gov (United States)

    Coscia, A; Orrù, S; Di Nicola, P; Giuliani, F; Varalda, A; Peila, C; Fabris, C; Conti, A; Bertino, E

    2012-10-01

    Cow's milk proteins (CMPs) are the best characterized food allergens. The aim of this study was to investigate cow's milk allergens, and other minor protein components, in the colostrum of mothers of term and preterm newborns by proteomics techniques, which are more sensitive than techniques used in the past. Sixty-two term and 11 preterm colostrum samples were collected, subjected to a treatment able to increase the concentration of the most diluted proteins while simultaneously reducing the concentration of the proteins present at high concentration (Proteominer treatment), and subsequently subjected to the steps of the proteomic analysis. The most relevant finding in this study was the detection of intact bovine alpha-S1-casein in human colostrum; bovine alpha-S1-casein could therefore be considered the cow's milk allergen that is readily secreted in human milk and could be a cause of sensitization to cow's milk in exclusively breastfed, predisposed infants. Another interesting result was the detection, at very low concentrations, of proteins previously not described in human milk (galectin-7, the different isoforms of the 14-3-3 protein and the serum amyloid P-component), probably involved in the regulation of normal cell growth, in pro-apoptotic function and in the regulation of tissue homeostasis. Further investigations are needed to understand whether these families of proteins have specific biological activity in human milk.

  12. Multistage principal component analysis based method for abdominal ECG decomposition

    International Nuclear Information System (INIS)

    Petrolis, Robertas; Krisciukaitis, Algimantas; Gintautas, Vladas

    2015-01-01

    A reflection of fetal cardiac electrical activity is present in recorded abdominal ECG signals. However, this signal component has noticeably less energy than concurrent signals, especially the maternal ECG; therefore the traditionally recommended independent component analysis fails to separate these two ECG signals. Multistage principal component analysis (PCA) is proposed for step-by-step extraction of abdominal ECG signal components. Truncated representation and subsequent subtraction of the cardiocycles of the maternal ECG are the first steps. The energy of the fetal ECG component then becomes comparable to, or even exceeds, the energy of the other components in the remaining signal. Second-stage PCA concentrates the energy of the sought signal in one principal component, assuring its maximal amplitude regardless of the orientation of the fetus in multilead recordings. Third-stage PCA is performed on signal excerpts representing detected fetal heart beats, in order to compute a truncated representation that reconstructs their shape for further analysis. The algorithm was tested on PhysioNet Challenge 2013 signals and on signals recorded in the Department of Obstetrics and Gynecology, Lithuanian University of Health Sciences. The results of our method on the PhysioNet Challenge 2013 open data set were average scores of 341.503 bpm^2 and 32.81 ms. (paper)
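
    The first stage can be illustrated with a minimal sketch, assuming NumPy and beat excerpts already aligned on maternal R peaks (the alignment step is omitted here):

        import numpy as np

        def pca_truncated_subtract(beats, k=3):
            """Stage-1 idea: represent aligned maternal cardiocycles (rows of
            `beats`) by their first k principal components and subtract the
            reconstruction, leaving a residual that contains the fetal ECG."""
            mean = beats.mean(axis=0)
            X = beats - mean
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            maternal = (U[:, :k] * s[:k]) @ Vt[:k] + mean   # truncated representation
            return beats - maternal                          # residual per cycle

        # beats: (n_cycles, samples_per_cycle) excerpts centred on maternal R peaks
        beats = np.random.randn(200, 300)
        residual = pca_truncated_subtract(beats)
        # stage 2 would run PCA across leads on the residual signal to
        # concentrate the fetal ECG energy into one principal component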

  13. Determining the number of components in principal components analysis: A comparison of statistical, crossvalidation and approximated methods

    NARCIS (Netherlands)

    Saccenti, E.; Camacho, J.

    2015-01-01

    Principal component analysis is one of the most commonly used multivariate tools to describe and summarize data. Determining the optimal number of components in a principal component model is a fundamental problem in many fields of application. In this paper we compare the performance of several
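
    Two of the simpler rules involved in such comparisons can be sketched as follows, assuming NumPy (the naive row-wise cross-validation shown here is known to be optimistic, which is exactly the kind of behaviour such studies scrutinise):

        import numpy as np

        def n_components_kaiser(X):
            """Kaiser-type rule: keep components whose eigenvalue exceeds the
            mean eigenvalue of the (mean-centred) covariance spectrum."""
            Xc = X - X.mean(axis=0)
            eig = np.linalg.svd(Xc, compute_uv=False) ** 2 / (X.shape[0] - 1)
            return int((eig > eig.mean()).sum())

        def n_components_cv(X, kmax=10, n_folds=5, seed=0):
            """Naive row-wise cross-validation: pick the k that minimises the
            error of reconstructing held-out rows from training loadings."""
            rng = np.random.default_rng(seed)
            folds = np.array_split(rng.permutation(X.shape[0]), n_folds)
            kmax = min(kmax, X.shape[1])
            err = np.zeros(kmax)
            for test in folds:
                train = np.setdiff1d(np.arange(X.shape[0]), test)
                mu = X[train].mean(axis=0)
                _, _, Vt = np.linalg.svd(X[train] - mu, full_matrices=False)
                for k in range(1, kmax + 1):
                    P = Vt[:k].T @ Vt[:k]              # projector onto k loadings
                    recon = (X[test] - mu) @ P + mu
                    err[k - 1] += ((X[test] - recon) ** 2).sum()
            return int(np.argmin(err)) + 1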

  14. Technique for radionuclide composition analysis of snow cover in the Chernobyl' NPP 30-km zone using fiber sorbents

    International Nuclear Information System (INIS)

    Kham'yanov, L.P.; Rau, D.F.; Amosov, M.M.; Strel'nikova, A.E.; Tereshchenko, V.I.; Veber, T.S.

    1989-01-01

    A high-sensitivity, simple and fast technique for the analysis of the coarse-dispersed and ionic components of snow cover radioactivity is suggested. It is based on separating a sample into fractions, concentrating the dispersed fraction on mechanical filters and the dissolved fraction on ion-exchange sorbents, and performing spectrometry on the separated fractions. The minimum measurable contamination level is 3.7 Bq/dm^3 for each radionuclide analyzed. It is concluded that the suggested technique is a reliable method for radionuclide content analysis in snow cover samples from the Chernobyl' NPP zone. 1 tab

  15. BAYESIAN SEMI-BLIND COMPONENT SEPARATION FOR FOREGROUND REMOVAL IN INTERFEROMETRIC 21 cm OBSERVATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Le; Timbie, Peter T. [Department of Physics, University of Wisconsin, Madison, WI 53706 (United States); Bunn, Emory F. [Physics Department, University of Richmond, Richmond, VA 23173 (United States); Karakci, Ata; Korotkov, Andrei; Tucker, Gregory S. [Department of Physics, Brown University, 182 Hope Street, Providence, RI 02912 (United States); Sutter, P. M. [Center for Cosmology and Astro-Particle Physics, Ohio State University, Columbus, OH 43210 (United States); Wandelt, Benjamin D., E-mail: lzhang263@wisc.edu [Department of Physics, University of Illinois at Urbana-Champaign, 1110 W Green Street, Urbana, IL 61801 (United States)

    2016-01-15

    In this paper, we present a new Bayesian semi-blind approach for foreground removal in observations of the 21 cm signal measured by interferometers. The technique, which we call H i Expectation–Maximization Independent Component Analysis (HIEMICA), is an extension of the Independent Component Analysis technique developed for two-dimensional (2D) cosmic microwave background maps to three-dimensional (3D) 21 cm cosmological signals measured by interferometers. This technique provides a fully Bayesian inference of power spectra and maps and separates the foregrounds from the signal based on the diversity of their power spectra. Relying only on the statistical independence of the components, this approach can jointly estimate the 3D power spectrum of the 21 cm signal, as well as the 2D angular power spectrum and the frequency dependence of each foreground component, without any prior assumptions about the foregrounds. This approach has been tested extensively by applying it to mock data from interferometric 21 cm intensity mapping observations under idealized assumptions of instrumental effects. We also discuss the impact when the noise properties are not known completely. As a first step toward solving the 21 cm power spectrum analysis problem, we compare the semi-blind HIEMICA technique to the commonly used Principal Component Analysis. Under the same idealized circumstances, the proposed technique provides significantly improved recovery of the power spectrum. This technique can be applied in a straightforward manner to all 21 cm interferometric observations, including epoch of reionization measurements, and can be extended to single-dish observations as well.
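
    HIEMICA itself is a full Bayesian inference, but the baseline it is compared against, PCA foreground cleaning, is compact enough to sketch (assuming NumPy; all numbers illustrative). The leading eigenmodes of the frequency-frequency covariance absorb the spectrally smooth foregrounds and are projected out:

        import numpy as np

        def pca_foreground_clean(maps, n_fg=3):
            """maps: (n_freq, n_pix) data cube flattened over the sky.
            Remove the n_fg leading eigenmodes of the frequency-frequency
            covariance, which absorb the spectrally smooth foregrounds."""
            X = maps - maps.mean(axis=1, keepdims=True)
            eigval, eigvec = np.linalg.eigh(X @ X.T / X.shape[1])
            fg_modes = eigvec[:, -n_fg:]              # largest-eigenvalue modes
            return X - fg_modes @ (fg_modes.T @ X)    # foregrounds projected out

        # toy cube: bright smooth power-law foregrounds over a faint random signal
        rng = np.random.default_rng(0)
        freqs = np.linspace(100.0, 200.0, 64)[:, None]
        maps = 1e3 * (freqs / 150.0) ** -2.7 * rng.random((1, 4096)) \
               + rng.normal(size=(64, 4096))
        cleaned = pca_foreground_clean(maps)
        print(maps.std(), cleaned.std())   # residual variance drops sharply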

  16. Registration of dynamic dopamine D{sub 2} receptor images using principal component analysis

    Energy Technology Data Exchange (ETDEWEB)

    Acton, P.D.; Ell, P.J. [Institute of Nuclear Medicine, University College London Medical School, London (United Kingdom); Pilowsky, L.S.; Brammer, M.J. [Institute of Psychiatry, De Crespigny Park, London (United Kingdom); Suckling, J. [Clinical Age Research Unit, Kings College School of Medicine and Dentistry, London (United Kingdom)

    1997-11-01

    This paper describes a novel technique for registering a dynamic sequence of single-photon emission tomography (SPET) dopamine D{sub 2} receptor images, using principal component analysis (PCA). Conventional methods for registering images, such as count difference and correlation coefficient algorithms, fail to take into account the dynamic nature of the data, resulting in large systematic errors when registering time-varying images. However, by using principal component analysis to extract the temporal structure of the image sequence, misregistration can be quantified by examining the distribution of eigenvalues. The registration procedures were tested using a computer-generated dynamic phantom derived from a high-resolution magnetic resonance image of a realistic brain phantom. Each method was also applied to clinical SPET images of dopamine D{sub 2} receptors, using the ligands iodine-123 iodobenzamide and iodine-123 epidepride, to investigate the influence of misregistration on kinetic modelling parameters and the binding potential. The PCA technique gave highly significant (P<0.001) improvements in image registration, leading to alignment errors in x and y of about 25% of those of the alternative methods, with reductions in autocorrelations over time. It could also be applied to align image sequences which the other methods failed completely to register, particularly {sup 123}I-epidepride scans. The PCA method produced data of much greater quality for subsequent kinetic modelling, with an improvement of nearly 50% in the {chi}{sup 2} of the fit to the compartmental model, and provided superior quality registration of particularly difficult dynamic sequences. (orig.) With 4 figs., 2 tabs., 26 refs.

  17. Importance Analysis of In-Service Testing Components for Ulchin Unit 3

    International Nuclear Information System (INIS)

    Dae-Il Kan; Kil-Yoo Kim; Jae-Joo Ha

    2002-01-01

    We performed an importance analysis of In-Service Testing (IST) components for Ulchin Unit 3 using the integrated evaluation method for categorizing component safety significance developed in this study. The importance analysis using the developed method begins by ranking component importance using quantitative PSA information. The importance analysis of the IST components not modeled in the PSA is performed through engineering judgment, based on PSA expertise and on the quantitative and qualitative information for the IST components. The PSA scope for the importance analysis includes not only the Level 1 and 2 internal PSA but also the Level 1 external and shutdown/low-power-operation PSA. The importance analysis results for valves show that 167 (26.55%) of the 629 IST valves are HSSCs (high safety-significant components) and 462 (73.45%) are LSSCs (low safety-significant components). Those for pumps show that 28 (70%) of the 40 IST pumps are HSSCs and 12 (30%) are LSSCs. (authors)

  18. Independent component analysis in non-hypothesis driven metabolomics

    DEFF Research Database (Denmark)

    Li, Xiang; Hansen, Jakob; Zhao, Xinjie

    2012-01-01

    In a non-hypothesis driven metabolomics approach, plasma samples collected at six different time points (before, during and after an exercise bout) were analyzed by gas chromatography-time of flight mass spectrometry (GC-TOF MS). Since independent component analysis (ICA) does not need a priori information on the investigated process and moreover can separate statistically independent source signals with non-Gaussian distribution, we aimed to elucidate the analytical power of ICA for the metabolic pattern analysis and the identification of key metabolites in this exercise study. A novel approach based on descriptive statistics was established to optimize the ICA model. In the GC-TOF MS data set the number of principal components after whitening and the number of independent components of ICA were optimized and systematically selected by descriptive statistics. The elucidated dominating independent...

  19. Infrared and millimeter waves v.15 millimeter components and techniques, pt.VI

    CERN Document Server

    Button, Kenneth J

    1986-01-01

    Infrared and Millimeter Waves, Volume 15: Millimeter Components and Techniques, Part VI is concerned with millimeter-wave guided propagation and integrated circuits. This book covers low-noise receiver technology for near-millimeter wavelengths; dielectric image-line antennas; EHF satellite communications (SATCOM) terminal antennas; and semiconductor antennas for millimeter-wave integrated circuits. A scanning airborne radiometer for 30 and 90 GHz and a self-oscillating mixer are also described. This monograph is comprised of six chapters and begins with a discussion on the design of low-n

  20. Chemical composition of Galla chinensis extract and the effect of its main component(s) on the prevention of enamel demineralization in vitro

    NARCIS (Netherlands)

    Huang, X.L.; Liu, M.D.; Li, J.Y.; Zhou, X.D.; ten Cate, J.M.

    2012-01-01

    To determine the chemical composition of Galla chinensis extract (GCE) by several analysis techniques and to compare the efficacy of GCE and its main component(s) in inhibition of enamel demineralization, for the development of future anticaries agents, main organic composition of GCE was

  1. Determination of the elemental distribution in cigarette components and smoke by instrumental neutron activation analysis

    International Nuclear Information System (INIS)

    Wu, D.; Landsberger, S.; Larson, S.M.

    1997-01-01

    Cigarette smoking is a major source of particles released in indoor environments. A comprehensive study of the elemental distribution in cigarettes and cigarette smoke has been completed. Specifically, concentrations of thirty elements have been determined for the components of 15 types of cigarettes. Components include tobacco, ash, butts, filters, and cigarette paper. In addition, particulate matter from mainstream smoke (MS) and sidestream smoke (SS) was analyzed. The technique of elemental determination used in the study is instrumental neutron activation analysis. The results show that certain heavy metals, such as As, Cd, K, Sb and Zn, are released into the MS and SS. These metals may then be part of the health risk of exposure to smoke. Other elements are retained, for the most part, in cigarette ash and butts. The elemental distribution among the cigarette components and smoke changes for different smoking conditions. (author)

  2. Analysis of contaminants on electronic components by reflectance FTIR spectroscopy

    International Nuclear Information System (INIS)

    Griffith, G.W.

    1982-09-01

    The analysis of electronic component contaminants by infrared spectroscopy is often a difficult process. Most of the contaminants are very small, which necessitates the use of microsampling techniques. Beam condensers provide the required sensitivity, but most require that the sample be removed from the substrate before analysis. Since removal can be difficult and time consuming, it is usually an undesirable approach. Micro-ATR work can also be exasperating, due to the difficulty of positioning the sample at the correct place under the ATR plate in order to record a spectrum. This paper describes a modified reflection beam condenser which has been adapted to a Nicolet 7199 FTIR. The sample beam is directed onto the sample surface and reflected from the substrate back to the detector. A micropositioning XYZ stage and a close-focusing telescope are used to position the contaminant directly under the infrared beam. It is possible to analyze contaminants on 1 mm wide leads surrounded by an epoxy matrix using this device. Typical spectra of contaminants found on small circuit boards are included

  3. Independent component analysis-based algorithm for automatic identification of Raman spectra applied to artistic pigments and pigment mixtures.

    Science.gov (United States)

    González-Vidal, Juan José; Pérez-Pueyo, Rosanna; Soneira, María José; Ruiz-Moreno, Sergio

    2015-03-01

    A new method has been developed to automatically identify Raman spectra, whether they correspond to single- or multicomponent spectra. The method requires no user input or judgment. There are thus no parameters to be tweaked. Furthermore, it provides a reliability factor on the resulting identification, with the aim of becoming a useful support tool for the analyst in the decision-making process. The method relies on the multivariate techniques of principal component analysis (PCA) and independent component analysis (ICA), and on some metrics. It has been developed for the application of automated spectral analysis, where the analyzed spectrum is provided by a spectrometer that has no previous knowledge of the analyzed sample, meaning that the number of components in the sample is unknown. We describe the details of this method and demonstrate its efficiency by identifying both simulated spectra and real spectra. The method has been applied to artistic pigment identification. The reliable and consistent results that were obtained make the methodology a helpful tool suitable for the identification of pigments in artwork or in paint in general.
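
    A loose sketch of this family of approaches follows (not the authors' exact algorithm; assuming NumPy and scikit-learn, with `library` a matrix of reference pigment spectra and `names` their labels, both hypothetical inputs):

        import numpy as np
        from sklearn.decomposition import PCA, FastICA

        def identify(mixture, library, names, n_max=5, var_goal=0.99):
            """Rough sketch: estimate the source count by PCA on the mixture
            stacked with the reference library, unmix with ICA, then match
            each recovered source to the library by correlation (which also
            serves as a crude reliability factor)."""
            data = np.vstack([mixture, library])
            evr = PCA().fit(data).explained_variance_ratio_
            n = int(np.searchsorted(np.cumsum(evr), var_goal)) + 1
            n = min(n, n_max, len(library))
            sources = FastICA(n_components=n, random_state=0,
                              max_iter=2000).fit_transform(data.T).T
            matches = []
            for s in sources:
                corr = [abs(np.corrcoef(s, ref)[0, 1]) for ref in library]
                best = int(np.argmax(corr))
                matches.append((names[best], corr[best]))
            return matches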

  4. Boundary layer noise subtraction in hydrodynamic tunnel using robust principal component analysis.

    Science.gov (United States)

    Amailland, Sylvain; Thomas, Jean-Hugh; Pézerat, Charles; Boucheron, Romuald

    2018-04-01

    The acoustic study of propellers in a hydrodynamic tunnel is of paramount importance during the design process, but can involve significant difficulties due to the boundary layer noise (BLN). Indeed, advanced denoising methods are needed to recover the acoustic signal in case of poor signal-to-noise ratio. The technique proposed in this paper is based on the decomposition of the wall-pressure cross-spectral matrix (CSM) by taking advantage of both the low-rank property of the acoustic CSM and the sparse property of the BLN CSM. Thus, the algorithm belongs to the class of robust principal component analysis (RPCA), which derives from the widely used principal component analysis. If the BLN is spatially decorrelated, the proposed RPCA algorithm can blindly recover the acoustical signals even for negative signal-to-noise ratio. Unfortunately, in a realistic case, acoustic signals recorded in a hydrodynamic tunnel show that the noise may be partially correlated. A prewhitening strategy is then considered in order to take into account the spatially coherent background noise. Numerical simulations and experimental results show an improvement in terms of BLN reduction in the large hydrodynamic tunnel. The effectiveness of the denoising method is also investigated in the context of acoustic source localization.

  5. Group-wise Principal Component Analysis for Exploratory Data Analysis

    NARCIS (Netherlands)

    Camacho, J.; Rodriquez-Gomez, Rafael A.; Saccenti, E.

    2017-01-01

    In this paper, we propose a new framework for matrix factorization based on Principal Component Analysis (PCA) where sparsity is imposed. The structure to impose sparsity is defined in terms of groups of correlated variables found in correlation matrices or maps. The framework is based on three new

  6. Bulk analysis using nuclear techniques

    International Nuclear Information System (INIS)

    Borsaru, M.; Holmes, R.J.; Mathew, P.J.

    1983-01-01

    Bulk analysis techniques developed for the mining industry are reviewed. Using penetrating neutron and gamma radiations, measurements are obtained directly from a large volume of sample (3-30 kg). Gamma-ray techniques were used to determine the grade of iron ore and to detect shale on conveyor belts. Thermal neutron irradiation was developed for the simultaneous determination of iron and aluminium in iron ore on a conveyor belt. Thermal-neutron activation analysis includes the determination of alumina in bauxite, and manganese and alumina in manganese ore. Fast neutron activation analysis is used to determine silicon in iron ores, and alumina and silica in bauxite. Fast and thermal neutron activation has been used to determine the soil in shredded sugar cane. (U.K.)

  7. Real Time Engineering Analysis Based on a Generative Component Implementation

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Klitgaard, Jens

    2007-01-01

    The present paper outlines the idea of a conceptual design tool with real time engineering analysis which can be used in the early conceptual design phase. The tool is based on a parametric approach using Generative Components with embedded structural analysis. Each of these components uses the g...

  8. Nuclear fuel cycle cost analysis using a probabilistic simulation technique

    International Nuclear Information System (INIS)

    Won, Il Ko; Jong, Won Choi; Chul, Hyung Kang; Jae, Sol Lee; Kun, Jai Lee

    1998-01-01

    A simple approach was described to incorporate the Monte Carlo simulation technique into a fuel cycle cost estimate. As a case study, the once-through and recycle fuel cycle options were tested with some alternatives (i.e. changes of the distribution type for input parameters), and the simulation results were compared with the values calculated by a deterministic method. A three-estimate approach was used for converting cost inputs into the statistical parameters of assumed probabilistic distributions. It was indicated that Monte Carlo simulation by a Latin hypercube sampling technique and subsequent sensitivity analyses were useful for examining the uncertainty propagation of fuel cycle costs, and could provide information to decision makers more efficiently than a deterministic method. The change of distribution types of input parameters showed that the values calculated by the deterministic method lie around the 40th-50th percentile of the output distribution function calculated by probabilistic simulation. Assuming lognormal distributions of the inputs, however, the values calculated by the deterministic method lie around the 85th percentile of the output distribution function. The results of the sensitivity analysis also indicated that the front-end components were generally more sensitive than the back-end components, of which the uranium purchase cost was the most important factor of all. They also showed that the discount rate contributes substantially to the fuel cycle cost, ranking third or fifth among all components. The results of this study could be useful in applications to other options, such as the DUPIC (Direct Use of PWR spent fuel In Candu reactors) cycle, which has high cost uncertainty
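
    The three-estimate-plus-Latin-hypercube machinery can be sketched as follows (a minimal illustration assuming NumPy; component names and low/best/high values are made up, not the study's inputs):

        import numpy as np

        rng = np.random.default_rng(0)
        N = 10_000

        def lhs_uniform(n, rng):
            """Latin hypercube sample of size n on (0, 1): one point per stratum."""
            return (rng.permutation(n) + rng.random(n)) / n

        def triangular(u, low, mode, high):
            """Invert the triangular CDF at quantiles u (three-estimate input)."""
            c = (mode - low) / (high - low)
            return np.where(u < c,
                            low + np.sqrt(u * (high - low) * (mode - low)),
                            high - np.sqrt((1 - u) * (high - low) * (high - mode)))

        # illustrative low/best/high unit-cost estimates (mills/kWh)
        components = {"U purchase": (1.0, 1.6, 3.0), "enrichment": (0.8, 1.1, 1.6),
                      "fabrication": (0.3, 0.45, 0.7), "back end": (0.6, 1.0, 1.8)}

        total = sum(triangular(lhs_uniform(N, rng), *est)
                    for est in components.values())
        print("deterministic (best estimates):", sum(e[1] for e in components.values()))
        print("50th / 85th percentile:", np.percentile(total, [50, 85]))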

  9. Quantitative mineralogical analysis of sandstones using x-ray diffraction techniques

    International Nuclear Information System (INIS)

    Ward, C.R.; Taylor, J.C.

    1999-01-01

    Full text: X-ray diffraction has long been used as a definitive technique for mineral identification, based on measuring the internal atomic or crystal structures present in powdered rocks, soils and other mineral mixtures. Recent developments in data gathering and processing, however, have provided an improved basis for its use as a quantitative tool, determining not only the nature of the minerals but also the relative proportions of the different minerals present. The mineralogy of a series of sandstone samples from the Sydney and Bowen Basins of eastern Australia has been evaluated by X-ray diffraction (XRD) on a quantitative basis using the Australian-developed SIROQUANT data processing technique. Based on Rietveld principles, this technique generates a synthetic X-ray diffractogram by adjusting and combining full-profile patterns of minerals nominated as being present in the sample, and interactively matches the synthetic diffractogram under operator instructions to the observed diffractogram of the sample being analysed. The individual mineral patterns may be refined in the process, to allow for variations in the crystal structure of individual components or for factors such as preferred orientation in the sample mount. The resulting output provides mass percentages of the different minerals in the mixture, and an estimate of the error associated with each individual percentage determination. The chemical composition of the mineral mixtures indicated by SIROQUANT for each individual sandstone studied was estimated using a spreadsheet routine, and the indicated proportion of each oxide in each sample was compared to the actual chemical analysis of the same sandstone as determined independently by X-ray fluorescence spectrometry. The results show a high level of agreement for all major chemical constituents, indicating consistency between the SIROQUANT XRD data and the whole-rock chemical composition. Supplementary testing with a synthetic corundum spike further

  10. SNIa detection in the SNLS photometric analysis using Morphological Component Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Möller, A.; Ruhlmann-Kleider, V.; Neveu, J.; Palanque-Delabrouille, N. [Irfu, SPP, CEA Saclay, F-91191 Gif sur Yvette cedex (France); Lanusse, F.; Starck, J.-L., E-mail: anais.moller@cea.fr, E-mail: vanina.ruhlmann-kleider@cea.fr, E-mail: francois.lanusse@cea.fr, E-mail: jeremy.neveu@cea.fr, E-mail: nathalie.palanque-delabrouille@cea.fr, E-mail: jstarck@cea.fr [Laboratoire AIM, UMR CEA-CNRS-Paris 7, Irfu, SAp, CEA Saclay, F-91191 Gif sur Yvette cedex (France)

    2015-04-01

    Detection of supernovae (SNe) and, more generally, of transient events in large surveys can provide numerous false detections. In the case of a deferred processing of survey images, this implies reconstructing complete light curves for all detections, requiring sizable processing time and resources. Optimizing the detection of transient events is thus an important issue for both present and future surveys. We present here the optimization done in the SuperNova Legacy Survey (SNLS) for the 5-year data deferred photometric analysis. In this analysis, detections are derived from stacks of subtracted images with one stack per lunation. The 3-year analysis provided 300,000 detections dominated by signals of bright objects that were not perfectly subtracted. Allowing these artifacts to be detected leads not only to a waste of resources but also to possible signal coordinate contamination. We developed a subtracted image stack treatment to reduce the number of non SN-like events using morphological component analysis. This technique exploits the morphological diversity of objects to be detected to extract the signal of interest. At the level of our subtraction stacks, SN-like events are rather circular objects while most spurious detections exhibit different shapes. A two-step procedure was necessary to have a proper evaluation of the noise in the subtracted image stacks and thus a reliable signal extraction. We also set up a new detection strategy to obtain coordinates with good resolution for the extracted signal. SNIa Monte-Carlo (MC) generated images were used to study detection efficiency and coordinate resolution. When tested on SNLS 3-year data this procedure decreases the number of detections by a factor of two, while losing only 10% of SN-like events, almost all faint ones. MC results show that SNIa detection efficiency is equivalent to that of the original method for bright events, while the coordinate resolution is improved.

  11. Component Analysis of Long-Lag, Wide-Pulse Gamma-Ray Burst ...

    Indian Academy of Sciences (India)

    Principal Component Analysis of Long-Lag, Wide-Pulse Gamma-Ray Burst Data. Zhao-Yang Peng & Wen-Shuai Liu, Department of Physics, Yunnan Normal University, Kunming 650500, China. E-mail: pzy@ynao.ac.cn. Abstract: We have carried out a Principal Component Analysis (PCA) of the temporal and spectral ...

  12. Nuclear analysis techniques and environmental sciences

    International Nuclear Information System (INIS)

    1997-10-01

    31 theses are collected in this book. It introduces molecular activation analysis, micro-PIXE and micro-probe analysis, X-ray fluorescence analysis and accelerator mass spectrometry. The applications of these nuclear analysis techniques in the environmental sciences are presented and reviewed

  13. Statistical evaluation of vibration analysis techniques

    Science.gov (United States)

    Milner, G. Martin; Miller, Patrice S.

    1987-01-01

    An evaluation methodology is presented for a selection of candidate vibration analysis techniques applicable to machinery representative of the environmental control and life support system of advanced spacecraft; illustrative results are given. Attention is given to the statistical analysis of small sample experiments, the quantification of detection performance for diverse techniques through the computation of probability of detection versus probability of false alarm, and the quantification of diagnostic performance.
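
    The detection-performance quantification mentioned above reduces, in its simplest form, to an empirical ROC curve; a minimal sketch assuming NumPy, with Gaussian stand-ins for the healthy and faulty feature distributions:

        import numpy as np

        def roc_curve(healthy_stats, faulty_stats):
            """Sweep a threshold over the detection statistic and record the
            (probability of false alarm, probability of detection) pairs."""
            thresholds = np.sort(np.concatenate([healthy_stats, faulty_stats]))
            pfa = np.array([(healthy_stats > t).mean() for t in thresholds])
            pod = np.array([(faulty_stats > t).mean() for t in thresholds])
            return pfa, pod

        # toy example: a vibration feature whose mean shifts when a fault is present
        rng = np.random.default_rng(0)
        pfa, pod = roc_curve(rng.normal(0, 1, 200), rng.normal(2, 1, 200))

        # trapezoidal area under the ROC curve as a single performance number
        order = np.argsort(pfa)
        x, ypts = pfa[order], pod[order]
        auc = np.sum(np.diff(x) * (ypts[1:] + ypts[:-1]) / 2)
        print(f"AUC ~ {auc:.2f}")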

  14. Single shot fringe pattern phase demodulation using Hilbert-Huang transform aided by the principal component analysis.

    Science.gov (United States)

    Trusiak, Maciej; Służewski, Łukasz; Patorski, Krzysztof

    2016-02-22

    A hybrid single-shot algorithm for accurate phase demodulation of complex fringe patterns is proposed. It employs empirical mode decomposition based adaptive fringe pattern enhancement (i.e., denoising, background removal and amplitude normalization) and subsequent boosted phase demodulation using the 2D Hilbert spiral transform, aided by the principal component analysis method for a novel, correct and accurate calculation of the local fringe direction map. Robustness to significant fringe pattern noise, uneven background and amplitude modulation, as well as to local fringe period and shape variations, is corroborated by numerical simulations and experiments. The proposed automatic, adaptive, fast and comprehensive fringe analysis solution compares favorably with other previously reported techniques.
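
    One way to see the PCA role in direction estimation (a sketch of the general idea, assuming NumPy, not the paper's exact procedure; the Hilbert spiral transform step is omitted): the local fringe orientation can be read off the principal axis of the windowed intensity gradients.

        import numpy as np

        def fringe_direction_map(img, win=16):
            """Estimate local fringe orientation as the dominant eigenvector of
            the windowed gradient covariance (PCA of the [gx, gy] samples).
            The principal gradient axis is normal to the fringes."""
            gy, gx = np.gradient(img.astype(float))
            H, W = img.shape
            angles = np.zeros((H // win, W // win))
            for i in range(0, H - win + 1, win):
                for j in range(0, W - win + 1, win):
                    g = np.stack([gx[i:i+win, j:j+win].ravel(),
                                  gy[i:i+win, j:j+win].ravel()])
                    _, vecs = np.linalg.eigh(g @ g.T)   # 2x2 covariance PCA
                    vx, vy = vecs[:, -1]                 # principal axis
                    angles[i // win, j // win] = np.arctan2(vy, vx) % np.pi
            return angles   # direction modulo pi, one value per window

        # demo on synthetic tilted fringes (normal direction 0.5 rad)
        y, x = np.mgrid[0:256, 0:256]
        fringes = np.cos(0.3 * (x * np.cos(0.5) + y * np.sin(0.5)))
        print(fringe_direction_map(fringes).mean())  # ~0.5 rad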

  15. Biochemical component identification by plasmonic improved whispering gallery mode optical resonance based sensor

    Science.gov (United States)

    Saetchnikov, Vladimir A.; Tcherniavskaia, Elina A.; Saetchnikov, Anton V.; Schweiger, Gustav; Ostendorf, Andreas

    2014-05-01

    Experimental data are presented on the detection and identification of a variety of biochemical agents, such as proteins, microelements and antibiotics of different generations, in both single- and multi-component solutions over a wide range of concentrations, analyzed via the light-scattering parameters of a sensor based on whispering gallery mode optical resonance. Multiplexing over parameters and components has been realized using a developed fluidic sensor cell, with dielectric microspheres fixed in an adhesive layer, and data processing. Biochemical component identification has been performed by the network analysis techniques developed. The approach is demonstrated to be applicable both to single-agent and to multi-component biochemical analysis. A novel technique based on optical resonance in microring structures, plasmon resonance and identification tools has been developed. To improve the sensitivity of the microring structures, the microspheres fixed by the adhesive were pretreated with a gold nanoparticle solution. Another technique used thin gold films deposited on the substrate below the adhesive. Both biomolecule and nanoparticle injections caused considerable changes in the optical resonance spectra. Plasmonic gold layers of optimized thickness also improve the parameters of the optical resonance spectra. Biochemical component identification has also been performed by the developed network analysis techniques, both for single- and for multi-component solutions. The advantages of plasmon-enhanced optical microcavity resonance, combined with multiparameter identification tools, are thus used for the development of a new platform for an ultra-sensitive, label-free biomedical sensor.

  16. Techniques for sensitivity analysis of SYVAC results

    International Nuclear Information System (INIS)

    Prust, J.O.

    1985-05-01

    Sensitivity analysis techniques may be required to examine the sensitivity of SYVAC model predictions to the input parameter values, to the subjective probability distributions assigned to the input parameters, and to the relationship between dose and the probability of fatal cancers plus serious hereditary disease in the first two generations of offspring of a member of the critical group. This report mainly considers techniques for determining the sensitivity of dose and risk to the variable input parameters. The performance of a sensitivity analysis technique may be improved by decomposing the model and data into subsets for analysis, making use of existing information on sensitivity, and concentrating sampling in regions of the parameter space that generate high doses or risks. A number of sensitivity analysis techniques are reviewed for their application to the SYVAC model, including four techniques tested in an earlier study by CAP Scientific for the SYVAC project. This report recommends the development now of a method for evaluating the derivative of dose with respect to parameter value, and extending the Kruskal-Wallis technique to test for interactions between parameters. It is also recommended that the sensitivity of the output of each sub-model of SYVAC to input parameter values should be examined. (author)

  17. Techniques involving extreme environment, nondestructive techniques, computer methods in metals research, and data analysis

    International Nuclear Information System (INIS)

    Bunshah, R.F.

    1976-01-01

    A number of different techniques which range over several different aspects of materials research are covered in this volume. They are concerned with property evaluation at 4 K and below, surface characterization, coating techniques, techniques for the fabrication of composite materials, computer methods, data evaluation and analysis, statistical design of experiments and non-destructive test techniques. Topics covered in this part include internal friction measurements; nondestructive testing techniques; statistical design of experiments and regression analysis in metallurgical research; and measurement of surfaces of engineering materials

  18. Principal Component Analysis Coupled with Artificial Neural Networks—A Combined Technique Classifying Small Molecular Structures Using a Concatenated Spectral Database

    Directory of Open Access Journals (Sweden)

    Mihail Lucian Birsa

    2011-10-01

    Full Text Available In this paper we present several expert systems that predict the class identity of the modeled compounds, based on a preprocessed spectral database. The expert systems were built using Artificial Neural Networks (ANN) and are designed to predict whether an unknown compound has the toxicological activity of amphetamines (stimulant and hallucinogen), or whether it is a nonamphetamine. In attempts to circumvent the laws controlling drugs of abuse, new chemical structures are very frequently introduced on the black market. They are obtained by slightly modifying the controlled molecular structures by adding or changing substituents at various positions on the banned molecules. As a result, no substance similar to those forming a prohibited class may be used nowadays, even if it has not been specifically listed. Therefore, reliable, fast and accessible systems capable of modeling and then identifying similarities at the molecular level are highly needed for epidemiological, clinical, and forensic purposes. In order to obtain the expert systems, we preprocessed a concatenated spectral database representing the GC-FTIR (gas chromatography-Fourier transform infrared spectrometry) and GC-MS (gas chromatography-mass spectrometry) spectra of 103 forensic compounds. The database was used as input for a Principal Component Analysis (PCA). The scores of the forensic compounds on the main principal components (PCs) were then used as inputs for the ANN systems. We built eight PC-ANN systems (principal component analysis coupled with artificial neural network) with a different number of input variables: 15 PCs, 16 PCs, 17 PCs, 18 PCs, 19 PCs, 20 PCs, 21 PCs and 22 PCs. The best expert system was found to be the ANN network built with 18 PCs, which accounts for an explained variance of 77%. This expert system has the best sensitivity (a rate of classification C = 100% and a rate of true positives TP = 100%), as well as a good selectivity (a rate of true negatives TN
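
    The generic PC-ANN pipeline can be sketched with scikit-learn (the spectra themselves are not reproduced here, so random stand-in data are used and the printed score is only chance level; the point is the pipeline shape, with 18 retained PCs echoing the best system reported):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        # stand-in for 103 concatenated GC-FTIR + GC-MS spectra (rows) with
        # binary labels (amphetamine-type vs. nonamphetamine)
        X = rng.normal(size=(103, 600))
        y = rng.integers(0, 2, size=103)

        # PC-ANN: project onto the first 18 PCs, then classify with a small
        # feed-forward network
        pc_ann = make_pipeline(PCA(n_components=18),
                               MLPClassifier(hidden_layer_sizes=(10,),
                                             max_iter=2000, random_state=0))
        print(cross_val_score(pc_ann, X, y, cv=5).mean())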

  19. Integrating Data Transformation in Principal Components Analysis

    KAUST Repository

    Maadooliat, Mehdi; Huang, Jianhua Z.; Hu, Jianhua

    2015-01-01

    Principal component analysis (PCA) is a popular dimension reduction method to reduce the complexity and obtain the informative aspects of high-dimensional datasets. When the data distribution is skewed, data transformation is commonly used prior

  20. NEPR Principle Component Analysis - NOAA TIFF Image

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This GeoTiff is a representation of seafloor topography in Northeast Puerto Rico derived from a bathymetry model with a principal component analysis (PCA). The area...

  1. RFI Detection and Mitigation using Independent Component Analysis as a Pre-Processor

    Science.gov (United States)

    Schoenwald, Adam J.; Gholian, Armen; Bradley, Damon C.; Wong, Mark; Mohammed, Priscilla N.; Piepmeier, Jeffrey R.

    2016-01-01

    Radio-frequency interference (RFI) has negatively impacted scientific measurements of passive remote sensing satellites. This has been observed in the L-band radiometers Soil Moisture and Ocean Salinity (SMOS), Aquarius and more recently, Soil Moisture Active Passive (SMAP). RFI has also been observed at higher frequencies such as K band. Improvements in technology have allowed wider bandwidth digital back ends for passive microwave radiometry. A complex signal kurtosis radio frequency interference detector was developed to help identify corrupted measurements. This work explores the use of Independent Component Analysis (ICA) as a blind source separation (BSS) technique to pre-process radiometric signals for use with the previously developed real and complex signal kurtosis detectors.
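
    The complex signal kurtosis idea referenced here can be sketched directly (a minimal illustration assuming NumPy; block size, threshold and the injected tone are made-up values): for circular complex Gaussian thermal noise the normalized fourth moment is 2, so blocks that depart from that value are flagged.

        import numpy as np

        def complex_kurtosis(z):
            """Normalized fourth moment of complex samples; ~2 for circular
            Gaussian noise, lower for constant-modulus (e.g. CW) interference."""
            zc = z - z.mean()
            p = np.abs(zc) ** 2
            return (p ** 2).mean() / p.mean() ** 2

        def rfi_flag(z, block=1024, tol=0.15):
            """Flag blocks whose complex kurtosis departs from the Gaussian value."""
            nb = z.size // block
            k = np.array([complex_kurtosis(z[i*block:(i+1)*block]) for i in range(nb)])
            return np.abs(k - 2.0) > tol

        rng = np.random.default_rng(0)
        noise = (rng.normal(size=8192) + 1j * rng.normal(size=8192)) / np.sqrt(2)
        # inject a CW tone into one block (hypothetical interference)
        noise[2048:3072] += 1.0 * np.exp(2j * np.pi * 0.1 * np.arange(1024))
        print(rfi_flag(noise))  # the contaminated block is flagged True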

  2. Application of Principal Component Analysis in Prompt Gamma Spectra for Material Sorting

    Energy Technology Data Exchange (ETDEWEB)

    Im, Hee Jung; Lee, Yun Hee; Song, Byoung Chul; Park, Yong Joon; Kim, Won Ho

    2006-11-15

    For the detection of illicit materials in a very short time by comparing unknown samples' gamma spectra to pre-programmed material signatures, we first selected a method to reduce the noise of the obtained gamma spectra. After noise reduction, a pattern recognition technique was applied to discriminate the illicit materials from the innocuous materials in the noise-reduced data. Principal component analysis was applied for noise reduction and pattern recognition in prompt gamma spectra. A computer program for the detection of illicit materials based on the PCA method was developed in our lab and can be applied to the PGNAA system for baggage checking at all ports of entry in a very short time.

  3. Optical design and analysis of carbon dioxide laser fusion systems using interferometry and fast Fourier transform techniques

    International Nuclear Information System (INIS)

    Viswanathan, V.K.

    1979-01-01

    The optical design and analysis of the LASL carbon dioxide laser fusion systems required the use of techniques that are quite different from the methods currently used in conventional optical design problems. The necessity for this is explored, and the method that has been successfully used at Los Alamos to understand these systems is discussed with examples. This method involves characterizing the various optical components in their mounts by a Zernike polynomial set and using fast Fourier transform techniques to propagate the beam, taking into account diffraction and other nonlinear effects that occur in these types of systems. The various programs used for analysis are briefly discussed
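
    The characterize-then-FFT-propagate idea can be sketched in miniature, assuming NumPy (a toy angular-spectrum propagator with a two-term Zernike aberration; grid size, aperture and propagation distance are illustrative, not the LASL system values, and the nonlinear effects are omitted):

        import numpy as np

        # grid and beam (illustrative numbers)
        N, dx, wavelength = 512, 1e-4, 10.6e-6           # 0.1 mm grid, CO2 laser
        x = (np.arange(N) - N / 2) * dx
        X, Y = np.meshgrid(x, x)
        r = np.hypot(X, Y) / 0.015                        # radius / aperture radius
        theta = np.arctan2(Y, X)

        # component phase error from two Zernike terms (defocus + coma, in waves)
        zernike_phase = 0.2 * (2 * r**2 - 1) + 0.05 * (3 * r**3 - 2 * r) * np.cos(theta)
        field = np.exp(-r**2) * np.exp(2j * np.pi * zernike_phase) * (r <= 1)

        def angular_spectrum(field, dist):
            """Propagate `field` by `dist` metres with the FFT transfer function."""
            fx = np.fft.fftfreq(N, dx)
            FX, FY = np.meshgrid(fx, fx)
            kz = 2 * np.pi * np.sqrt(np.maximum(0, wavelength**-2 - FX**2 - FY**2))
            return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dist))

        out = angular_spectrum(field, 5.0)               # field 5 m downstream
        print(np.abs(out).max())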

  4. Flow analysis techniques for phosphorus: an overview.

    Science.gov (United States)

    Estela, José Manuel; Cerdà, Víctor

    2005-04-15

    A bibliographical review on the implementation of different flow analytical techniques for the determination of phosphorus, and the results obtained with them, is carried out. The sources, occurrence and importance of phosphorus, together with several aspects regarding the analysis and terminology used in the determination of this element, are briefly described. A classification as well as a brief description of the basis, advantages and disadvantages of the different existing flow techniques, namely segmented flow analysis (SFA), flow injection analysis (FIA), sequential injection analysis (SIA), all injection analysis (AIA), batch injection analysis (BIA), multicommutated FIA (MCFIA), multisyringe FIA (MSFIA) and multipumped FIA (MPFIA), is also carried out. The most relevant manuscripts regarding the analysis of phosphorus by means of flow techniques are herein classified according to the instrumental detection technique used, with the aim of facilitating their study and obtaining an overall scope. Finally, the analytical characteristics of numerous flow methods reported in the literature are provided in the form of a table, and their applicability to samples with different matrixes, namely water samples (marine, river, estuarine, waste, industrial, drinking, etc.), soil leachates, plant leaves, toothpaste, detergents, foodstuffs (wine, orange juice, milk), biological samples, sugars, fertilizer, hydroponic solutions, soil extracts and cyanobacterial biofilms, is tabulated.

  5. A Component Approach to Collaborative Scientific Software Development: Tools and Techniques Utilized by the Quantum Chemistry Science Application Partnership

    Directory of Open Access Journals (Sweden)

    Joseph P. Kenny

    2008-01-01

    Full Text Available Cutting-edge scientific computing software is complex, increasingly involving the coupling of multiple packages to combine advanced algorithms or simulations at multiple physical scales. Component-based software engineering (CBSE has been advanced as a technique for managing this complexity, and complex component applications have been created in the quantum chemistry domain, as well as several other simulation areas, using the component model advocated by the Common Component Architecture (CCA Forum. While programming models do indeed enable sound software engineering practices, the selection of programming model is just one building block in a comprehensive approach to large-scale collaborative development which must also address interface and data standardization, and language and package interoperability. We provide an overview of the development approach utilized within the Quantum Chemistry Science Application Partnership, identifying design challenges, describing the techniques which we have adopted to address these challenges and highlighting the advantages which the CCA approach offers for collaborative development.

  6. BUSINESS PROCESS MANAGEMENT SYSTEMS TECHNOLOGY COMPONENTS ANALYSIS

    Directory of Open Access Journals (Sweden)

    Andrea Giovanni Spelta

    2007-05-01

    Full Text Available The information technology that supports the implementation of the business process management approach is called a Business Process Management System (BPMS). The main components of the BPMS solution framework are the process definition repository, process instances repository, transaction manager, connectors framework, process engine and middleware. In this paper we define and characterize the role and importance of the components of the BPMS framework. The research method adopted was the case study, through the analysis of the implementation of the BPMS solution in an insurance company called Chubb do Brasil. In the case study, the process "Manage Coinsured Events" is described and characterized, as well as the components of the BPMS solution adopted and implemented by Chubb do Brasil for managing this process.

  7. Independent Component Analysis and Time-Frequency Masking for Speech Recognition in Multitalker Conditions

    Directory of Open Access Journals (Sweden)

    Reinhold Orglmeister

    2010-01-01

    Full Text Available When a number of speakers are simultaneously active, for example in meetings or noisy public places, the sources of interest need to be separated from interfering speakers and from each other in order to be robustly recognized. Independent component analysis (ICA has proven a valuable tool for this purpose. However, ICA outputs can still contain strong residual components of the interfering speakers whenever noise or reverberation is high. In such cases, nonlinear postprocessing can be applied to the ICA outputs, for the purpose of reducing remaining interferences. In order to improve robustness to the artefacts and loss of information caused by this process, recognition can be greatly enhanced by considering the processed speech feature vector as a random variable with time-varying uncertainty, rather than as deterministic. The aim of this paper is to show the potential to improve recognition of multiple overlapping speech signals through nonlinear postprocessing together with uncertainty-based decoding techniques.

  8. Driven Factors Analysis of China’s Irrigation Water Use Efficiency by Stepwise Regression and Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Renfu Jia

    2016-01-01

    Full Text Available This paper introduces an integrated approach to identifying the major factors influencing the efficiency of irrigation water use in China. It combines multiple stepwise regression (MSR) and principal component analysis (PCA) to obtain more realistic results. In real-world case studies, the classical linear regression model often involves too many explanatory variables, and the linear correlation among variables cannot be eliminated. Linearly correlated variables render the factor analysis results invalid. To overcome this issue and reduce the number of variables, the PCA technique is combined with MSR. On this basis, the irrigation water use status in China was analyzed to identify the five major factors that have significant impacts on irrigation water use efficiency. To illustrate the performance of the proposed approach, calculations based on real data were conducted and the results are shown in this paper.
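
    For readers who want to experiment, the following is a minimal Python sketch of one plausible reading of the MSR+PCA combination described above: decorrelate the candidate factors with PCA, then select components by forward stepwise regression. The data, component counts and selection criterion are all illustrative, not those of the paper.

```python
# Sketch: PCA to decorrelate explanatory variables, then forward stepwise
# selection on the components. All data and settings are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))                                  # candidate driving factors
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=60)   # efficiency proxy

# Decorrelate first, so stepwise selection is not confounded by collinearity.
Z = PCA(n_components=5).fit_transform(X)

# Forward stepwise selection on the principal components.
selected, remaining = [], list(range(Z.shape[1]))
best_score = -np.inf
while remaining:
    scores = [(cross_val_score(LinearRegression(), Z[:, selected + [j]], y,
                               cv=5).mean(), j) for j in remaining]
    score, j = max(scores)
    if score <= best_score:
        break                                # no remaining component improves the fit
    best_score = score
    selected.append(j)
    remaining.remove(j)

print("selected components:", selected, "CV R^2:", round(best_score, 3))
```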

  9. TV content analysis techniques and applications

    CERN Document Server

    Kompatsiaris, Yiannis

    2012-01-01

    The rapid advancement of digital multimedia technologies has not only revolutionized the production and distribution of audiovisual content, but also created the need to efficiently analyze TV programs to enable applications for content managers and consumers. Leaving no stone unturned, TV Content Analysis: Techniques and Applications provides a detailed exploration of TV program analysis techniques. Leading researchers and academics from around the world supply scientifically sound treatment of recent developments across the related subject areas--including systems, architectures, algorithms,

  10. Functional Generalized Structured Component Analysis.

    Science.gov (United States)

    Suk, Hye Won; Hwang, Heungsun

    2016-12-01

    An extension of Generalized Structured Component Analysis (GSCA), called Functional GSCA, is proposed to analyze functional data that are considered to arise from an underlying smooth curve varying over time or other continua. GSCA has been geared for the analysis of multivariate data. Accordingly, it cannot deal with functional data that often involve different measurement occasions across participants and a large number of measurement occasions that exceed the number of participants. Functional GSCA addresses these issues by integrating GSCA with spline basis function expansions that project infinite-dimensional curves onto a finite-dimensional space. For parameter estimation, functional GSCA minimizes a penalized least squares criterion by using an alternating penalized least squares estimation algorithm. The usefulness of functional GSCA is illustrated with gait data.

  11. Complex Signal Kurtosis and Independent Component Analysis for Wideband Radio Frequency Interference Detection

    Science.gov (United States)

    Schoenwald, Adam; Mohammed, Priscilla; Bradley, Damon; Piepmeier, Jeffrey; Wong, Englin; Gholian, Armen

    2016-01-01

    Radio-frequency interference (RFI) has negatively impacted scientific measurements from a wide variety of passive remote sensing satellites. This has been observed in the L-band radiometers SMOS, Aquarius and, more recently, SMAP [1, 2]. RFI has also been observed at higher frequencies such as K band [3]. Improvements in technology have allowed wider-bandwidth digital back ends for passive microwave radiometry. A complex signal kurtosis radio frequency interference detector was developed to help identify corrupted measurements [4]. This work explores the use of ICA (Independent Component Analysis) as a blind source separation technique to pre-process radiometric signals for use with the previously developed real and complex signal kurtosis detectors.
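
    A minimal sketch of the underlying idea, under the simplifying assumption of real-valued channels: ICA separates a sparse interference source from Gaussian noise, and excess kurtosis flags the non-Gaussian (RFI-like) component. The mixing model and the detection threshold are invented for illustration.

```python
# Sketch: ICA as a pre-processing step before a kurtosis-based RFI check.
# Real radiometric signals are complex-valued; this real-valued stand-in
# only illustrates the separation-then-kurtosis logic.
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n = 4096
noise = rng.normal(size=(n, 2))                  # geophysical (Gaussian) signal
rfi = np.where(rng.random(n) < 0.01, 5.0, 0.0)   # sparse pulsed interference
mixed = noise + np.outer(rfi, [1.0, 0.6])        # both channels see the RFI

sources = FastICA(n_components=2, random_state=0).fit_transform(mixed)
for k, s in enumerate(sources.T):
    excess = kurtosis(s)             # ~0 for a Gaussian signal
    flag = abs(excess) > 0.5         # illustrative detection threshold
    print(f"component {k}: excess kurtosis {excess:+.2f}, RFI={flag}")
```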

  12. Exploring functional data analysis and wavelet principal component analysis on ecstasy (MDMA wastewater data

    Directory of Open Access Journals (Sweden)

    Stefania Salvatore

    2016-07-01

    Full Text Available Abstract Background Wastewater-based epidemiology (WBE) is a novel approach in drug use epidemiology which aims to monitor the extent of use of various drugs in a community. In this study, we investigate functional principal component analysis (FPCA) as a tool for analysing WBE data and compare it to traditional principal component analysis (PCA) and to wavelet principal component analysis (WPCA), which is more flexible temporally. Methods We analysed temporal wastewater data from 42 European cities collected daily over one week in March 2013. The main temporal features of ecstasy (MDMA) were extracted using FPCA with both Fourier and B-spline basis functions and three different smoothing parameters, along with PCA and WPCA with different mother wavelets and shrinkage rules. The stability of FPCA was explored through bootstrapping and analysis of sensitivity to missing data. Results The first three principal components (PCs), functional principal components (FPCs) and wavelet principal components (WPCs) explained 87.5-99.6% of the temporal variation between cities, depending on the choice of basis and smoothing. The extracted temporal features from PCA, FPCA and WPCA were consistent. FPCA using a Fourier basis and common-optimal smoothing was the most stable and least sensitive to missing data. Conclusion FPCA is a flexible and analytically tractable method for analysing temporal changes in wastewater data, and is robust to missing data. WPCA did not reveal any rapid temporal changes in the data not captured by FPCA. Overall the results suggest FPCA with Fourier basis functions and a common-optimal smoothing parameter as the most accurate approach when analysing WBE data.
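
    The following is a hedged numerical sketch of FPCA in the spirit described above: each city's daily series is smoothed with a small Fourier basis, and ordinary PCA on the basis coefficients yields functional principal components. Shapes, basis size and the stand-in data are illustrative only.

```python
# Sketch: functional PCA via a Fourier basis fit to each city's daily series.
import numpy as np

rng = np.random.default_rng(2)
n_cities, n_days = 42, 7
t = np.linspace(0.0, 1.0, n_days)
loads = rng.normal(size=(n_cities, n_days))          # stand-in wastewater loads

# Fourier design matrix: constant term plus first harmonic.
B = np.column_stack([np.ones_like(t),
                     np.sin(2 * np.pi * t),
                     np.cos(2 * np.pi * t)])

# Least-squares basis coefficients for every city (the smoothing step).
coef, *_ = np.linalg.lstsq(B, loads.T, rcond=None)   # shape (3, n_cities)

# Ordinary PCA on the basis coefficients yields the functional PCs.
C = coef.T - coef.T.mean(axis=0)
U, s, Vt = np.linalg.svd(C, full_matrices=False)
explained = s**2 / np.sum(s**2)
fpc_curves = B @ Vt.T                                # FPCs as curves over the week
print("variance explained by first 3 FPCs:", np.round(explained[:3], 3))
```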

  13. Femoral component revision with use of impaction bone-grafting and a cemented polished stem. Surgical technique.

    NARCIS (Netherlands)

    Schreurs, B.W.; Arts, J.J.C.; Verdonschot, N.J.J.; Buma, P.; Slooff, T.J.J.H.; Gardeniers, J.W.M.

    2006-01-01

    BACKGROUND: The purpose of this study was to evaluate the clinical and radiographic outcomes of revision of the femoral component of a hip arthroplasty with use of an impaction bone-grafting technique and a cemented polished stem. METHODS: Thirty-three consecutive femoral reconstructions that were

  14. Integration of independent component analysis with near-infrared spectroscopy for analysis of bioactive components in the medicinal plant Gentiana scabra Bunge

    Directory of Open Access Journals (Sweden)

    Yung-Kun Chuang

    2014-09-01

    Full Text Available Independent component (IC) analysis was applied to near-infrared spectroscopy for analysis of gentiopicroside and swertiamarin, the two bioactive components of Gentiana scabra Bunge. ICs that are highly correlated with the two bioactive components were selected for the analysis of tissue cultures, shoots and roots, which were found to distribute in three different positions within the domain [two-dimensional (2D) and 3D] constructed by the ICs. This setup could be used for quantitative determination of the respective contents of gentiopicroside and swertiamarin within the plants. For gentiopicroside, the spectral calibration model based on the second derivative spectra produced the best results in the wavelength ranges of 600–700 nm, 1600–1700 nm, and 2000–2300 nm (correlation coefficient of calibration = 0.847, standard error of calibration = 0.865%, and standard error of validation = 0.909%). For swertiamarin, the spectral calibration model based on the first derivative spectra produced the best results in the wavelength ranges of 600–800 nm and 2200–2300 nm (correlation coefficient of calibration = 0.948, standard error of calibration = 0.168%, and standard error of validation = 0.216%). Both models showed satisfactory predictability. This study successfully established qualitative and quantitative correlations for gentiopicroside and swertiamarin with near-infrared spectra, enabling rapid and accurate inspection of the bioactive components of G. scabra Bunge at different growth stages.

  15. Passive Microwave Components and Antennas

    DEFF Research Database (Denmark)

    State-of-the-art microwave systems always require higher performance and lower cost microwave components. Constantly growing demands and performance requirements of industrial and scientific applications often make employing traditionally designed components impractical. For that reason, the design...... and development process remains a great challenge today. This problem motivated intensive research efforts in microwave design and technology, which is responsible for a great number of recently appeared alternative approaches to analysis and design of microwave components and antennas. This book highlights...... techniques. Modelling and computations in electromagnetics is a quite fast-growing research area. The recent interest in this field is caused by the increased demand for designing complex microwave components, modeling electromagnetic materials, and rapid increase in computational power for calculation...

  16. PHOTOGRAMMETRIC TECHNIQUES FOR ROAD SURFACE ANALYSIS

    Directory of Open Access Journals (Sweden)

    V. A. Knyaz

    2016-06-01

    Full Text Available The quality and condition of a road surface are of great importance for the convenience and safety of driving. Investigations of the behaviour of road materials in laboratory conditions and monitoring of existing roads are therefore widely performed to control geometric parameters and detect defects in the road surface. Photogrammetry, as an accurate non-contact measuring method, provides powerful means for solving different tasks in road surface reconstruction and analysis. The range of dimensions of concern in road surface analysis varies greatly, from tenths of a millimetre to hundreds of metres and more, so a set of techniques is needed to meet all requirements of road parameter estimation. Two photogrammetric techniques for road surface analysis are presented: one for accurate measurement of road pavement and one for road surface reconstruction based on imagery obtained from an unmanned aerial vehicle. The first technique uses a photogrammetric system based on structured light for fast and accurate 3D surface reconstruction, and it allows analysing the characteristics of road texture and monitoring pavement behaviour. The second technique provides a dense 3D road model suitable for estimating road macro parameters.

  17. Eliminating the Influence of Harmonic Components in Operational Modal Analysis

    DEFF Research Database (Denmark)

    Jacobsen, Niels-Jørgen; Andersen, Palle; Brincker, Rune

    2007-01-01

    structures, in contrast, are subject inherently to deterministic forces due to the rotating parts in the machinery. These forces are seen as harmonic components in the responses, and their influence should be eliminated before extracting the modes in their vicinity. This paper describes a new method based...... on the well-known Enhanced Frequency Domain Decomposition (EFDD) technique for eliminating these harmonic components in the modal parameter extraction process. For assessing the quality of the method, various experiments were carried out where the results were compared with those obtained with pure stochastic...

  18. Multi-spectrometer calibration transfer based on independent component analysis.

    Science.gov (United States)

    Liu, Yan; Xu, Hao; Xia, Zhenzhen; Gong, Zhiyong

    2018-02-26

    Calibration transfer is indispensable for practical applications of near infrared (NIR) spectroscopy due to the need for precise and consistent measurements across different spectrometers. In this work, a method for multi-spectrometer calibration transfer is described based on independent component analysis (ICA). A spectral matrix is first obtained by aligning the spectra measured on different spectrometers. Then, using independent component analysis, the aligned spectral matrix is decomposed into the mixing matrix and the independent components of the different spectrometers. The differing measurements between spectrometers can then be standardized by correcting the coefficients within the independent components. Two NIR datasets of corn and edible oil samples measured with three and four spectrometers, respectively, were used to test the reliability of this method. The results on both datasets show that spectra measured on different spectrometers can be transferred simultaneously, and that partial least squares (PLS) models built with the measurements on one spectrometer can correctly predict spectra transferred from another.
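
    A simplified sketch of one possible reading of this transfer scheme, using scikit-learn's FastICA: spectra of the same samples from a master and a slave instrument are decomposed, and the slave's component coefficients are linearly corrected toward the master's. The synthetic low-rank spectra and the least-squares correction are assumptions of this sketch, not details from the paper.

```python
# Sketch: ICA-based calibration transfer between two instruments.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
n_samples, n_wl, n_comp = 40, 200, 5
S0 = rng.laplace(size=(n_samples, n_comp))      # latent constituent loadings
A0 = rng.normal(size=(n_comp, n_wl))            # pure-component spectra
master = S0 @ A0                                # master-instrument spectra
slave = 0.9 * master + 0.05 + rng.normal(scale=0.01, size=master.shape)

ica = FastICA(n_components=n_comp, random_state=0)
mix_master = ica.fit_transform(master)          # component coefficients, master
mix_slave = ica.transform(slave)                # same ICs applied to the slave

# Linear correction of the slave coefficients toward the master's.
T, *_ = np.linalg.lstsq(mix_slave, mix_master, rcond=None)
slave_std = ica.inverse_transform(mix_slave @ T)

print("RMS error before:", np.sqrt(np.mean((slave - master) ** 2)).round(4))
print("RMS error after :", np.sqrt(np.mean((slave_std - master) ** 2)).round(4))
```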

  19. Fault Localization for Synchrophasor Data using Kernel Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    CHEN, R.

    2017-11-01

    Full Text Available In this paper, a nonlinear method for fault location in complex power systems is proposed, based on Kernel Principal Component Analysis (KPCA) of Phasor Measurement Unit (PMU) data. Resorting to the scaling factor, the derivative for a polynomial kernel is obtained. Then, the contribution of each variable to the T2 statistic is derived to determine whether a bus is the fault component. Compared to previous Principal Component Analysis (PCA)-based methods, the new version can cope with strong nonlinearity and provide precise identification of the fault location. Computer simulations are conducted to demonstrate the improved performance of the proposed method in recognizing the fault component and evaluating its propagation across the system.
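
    As a rough illustration of fault detection in a kernel PCA score space (a Hotelling-style T2 monitor rather than the paper's exact contribution analysis), consider the sketch below; the PMU features, kernel settings and control limit are invented.

```python
# Sketch: a Hotelling-style T^2 statistic in kernel PCA score space.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(4)
normal = rng.normal(size=(200, 8))       # PMU features, normal operation
# A few samples with a disturbance on the third measured variable.
faulty = normal[:5] + np.array([0, 0, 3, 0, 0, 0, 0, 0.0])

kpca = KernelPCA(n_components=4, kernel="poly", degree=2).fit(normal)
scores_ref = kpca.transform(normal)
lam = scores_ref.var(axis=0)             # per-component score variance

def t2(x):
    z = kpca.transform(x)
    return np.sum(z**2 / lam, axis=1)    # T^2 in the retained score space

limit = np.percentile(t2(normal), 99)    # empirical control limit
print("faulty T^2:", np.round(t2(faulty), 1), "limit:", round(limit, 1))
```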

  20. Effectiveness of thermoluminescence analysis to detect low quantity of gamma-irradiated component in non-irradiated mushroom powders

    International Nuclear Information System (INIS)

    Akram, Kashif; Ahn, Jae-Jun; Shahbaz, Hafiz Muhammad; Jo, Deokjo; Kwon, Joong-Ho

    2013-01-01

    Gamma-irradiated (0–10 kGy) dried mushroom (Lentinus edodes) powders were mixed at different ratios (1–10%) into non-irradiated samples and investigated using photostimulated-luminescence (PSL), electron spin resonance (ESR) and thermoluminescence (TL) techniques. The PSL results were negative for all samples at the 1% mixing ratio, whereas intermediate results were observed for the samples containing 5% or 10% irradiated component, with the exception (positive) of the 10% mixture of the 10 kGy-irradiated sample. The ESR analysis showed the presence of crystalline sugar radicals in the irradiated samples, but the radiation-specific spectral features were absent in the mixed samples. TL analysis showed the radiation-specific TL glow curves; however, complicated results were observed at 1% mixing of the 2 and 5 kGy-irradiated samples, requiring careful evaluation to draw a final conclusion about the irradiation status of the samples. TL ratios could only confirm the results of samples with 5% and 10% mixing of 10 kGy, and 10% mixing of 5 kGy-irradiated components. SEM-EDX analysis showed that feldspar and quartz were the major contaminating minerals responsible for the radiation-specific luminescence characteristics. -- Highlights: ► Detection of irradiated food is important to enforce the applied regulations. ► The effectiveness of TL analysis was investigated to detect irradiated component. ► The TL results were compared with those from PSL and ESR analysis. ► TL analysis was most effective to characterize the irradiation status of samples. ► SEM-EDX analysis showed feldspar and quartz as the main source of TL properties.

  2. Simulation of electromagnetic aggression on components

    International Nuclear Information System (INIS)

    Adoun, G.; Coumar, O.

    1997-01-01

    Numerical simulation techniques can be used for studying the behaviour of electronic components exposed to electromagnetic aggression. This article discusses CW analysis of a CMOS-technology logic NAND gate under electromagnetic aggression of different amplitudes (2.5 V and 5 V), frequencies (100 MHz and 1 GHz) and phases. Numerical simulations were conducted using three codes: the Spice code was used for solving electronic circuits, and the Atlas and Dessis codes were used for examining internal component behaviour. (authors)

  3. Independent principal component analysis for simulation of soil water content and bulk density in a Canadian Watershed

    Directory of Open Access Journals (Sweden)

    Alaba Boluwade

    2016-09-01

    Full Text Available Accurate characterization of soil properties such as soil water content (SWC) and bulk density (BD) is vital for hydrologic processes, and it is therefore important to estimate θ (water content) and ρ (soil bulk density), among other soil surface parameters involved in water retention and infiltration, runoff generation, water erosion, etc. The spatial estimation of these soil properties is important in guiding agricultural management decisions. These soil properties vary both in space and time and are correlated. Therefore, it is important to find an efficient and robust technique to simulate spatially correlated variables. Methods such as principal component analysis (PCA) and independent component analysis (ICA) can be used for the joint simulation of spatially correlated variables, but they are not without their flaws. This study applied a variant of PCA called independent principal component analysis (IPCA), which combines the strengths of both PCA and ICA, for spatial simulation of SWC and BD using the soil data set from an 11 km2 Castor watershed in southern Quebec, Canada. Diagnostic checks using the histograms and cumulative distribution functions (cdf) of both raw and back-transformed simulations show good agreement. The results from this study therefore have potential in the characterization of water content variability and bulk density variation for precision agriculture.
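
    One common reading of "independent principal component analysis" is PCA for decorrelation and dimension reduction followed by an ICA rotation of the retained scores; the sketch below follows that reading with synthetic stand-ins for SWC and BD.

```python
# Sketch: PCA whitening followed by an ICA rotation, one plausible reading
# of IPCA. Field data are replaced by synthetic, correlated stand-ins.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(5)
n_sites = 300
swc = rng.gamma(shape=4.0, scale=0.08, size=n_sites)          # soil water content
bd = 1.6 - 0.8 * swc + rng.normal(scale=0.03, size=n_sites)   # bulk density
X = np.column_stack([swc, bd])

pcs = PCA(n_components=2, whiten=True).fit_transform(X)       # decorrelate
ics = FastICA(n_components=2, random_state=0).fit_transform(pcs)  # rotate to independence

corr = np.corrcoef(ics.T)[0, 1]
print("correlation between simulated components:", round(corr, 4))
```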

  4. Time-series-analysis techniques applied to nuclear-material accounting

    International Nuclear Information System (INIS)

    Pike, D.H.; Morrison, G.W.; Downing, D.J.

    1982-05-01

    This document is designed to introduce the reader to the application of time series analysis techniques to nuclear material accountability data. Time series analysis techniques are designed to extract information from a collection of random variables ordered in time by seeking to identify any trends, patterns, or other structure in the series. Since nuclear material accountability data form a time series, one can extract more information using time series analysis techniques than by using other statistical techniques. Specifically, the objective of this document is to examine the applicability of time series analysis techniques to enhance loss detection of special nuclear materials. An introductory section examines the current industry approach, which utilizes inventory differences; the error structure of inventory differences is presented. Time series analysis techniques discussed include the Shewhart Control Chart, the Cumulative Summation of Inventory Differences Statistics (CUSUM), and the Kalman Filter and Linear Smoother.
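
    As a concrete illustration of one of the techniques named above, here is a minimal one-sided CUSUM on a synthetic sequence of inventory differences; the slack and decision-threshold values are illustrative, not those recommended in the report.

```python
# Sketch: one-sided CUSUM on inventory differences for loss detection.
import numpy as np

rng = np.random.default_rng(6)
ids = rng.normal(0.0, 1.0, 24)      # monthly inventory differences (no loss)
ids[12:] += 0.8                     # a protracted loss starts at month 12

k, h = 0.4, 4.0                     # slack and decision threshold (in sigmas)
s = 0.0
for month, x in enumerate(ids):
    s = max(0.0, s + x - k)         # accumulate evidence of a positive shift
    if s > h:
        print(f"alarm at month {month}, CUSUM={s:.2f}")
        break
```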

  5. Characterizing functional connectivity during rest in multiple sclerosis patients versus healthy volunteers using independent component analysis

    Energy Technology Data Exchange (ETDEWEB)

    Palacio Garcia, L.; Andrzejak, R.; Prchkovska, V.; Rodrigues, P.

    2016-07-01

    It is commonly thought that our brain is not active when it does not receive any external input. However, during rest there are still certain distant regions of the brain that are functionally correlated with one another: the so-called resting-state networks. This functional connectivity of the brain is disrupted in many neurological diseases. In particular, it has been shown that one of the most studied resting-state networks (the default-mode network) is affected in multiple sclerosis, the most common disabling neurological condition affecting the central nervous system of young adults. In this work, I focus on the study of the differences in the resting-state networks between multiple sclerosis patients and healthy volunteers. In order to study the effects of multiple sclerosis on the functional connectivity of the brain, a numerical method known as independent component analysis (ICA) is applied. This technique divides the resting-state fMRI data into independent components. Nonetheless, noise, which could be due to head motion or physiological artifacts, may corrupt the data by indicating false activations. Therefore, I created a web user interface that allows the user to manually classify all the independent components for a given subject. Eventually, the components classified as noise should be removed from the functional data in order to prevent them from taking part in any further analysis. (Author)

  6. A Principal Component Analysis of Project Management Construction Industry Competencies for the Ghanaian

    Directory of Open Access Journals (Sweden)

    Rockson Dobgegah

    2011-03-01

    Full Text Available The study adopts a data reduction technique to examine the presence of any complex structure among a set of project management competency variables. A structured survey questionnaire was administered to 100 project managers to elicit relevant data, achieving a relatively high response rate of 54%. After satisfying all the necessary tests of reliability of the survey instrument, sample size adequacy and population matrix, the data were subjected to principal component analysis, resulting in the identification of six new thematic project management competency areas, explained in terms of: human resource management and project control; construction innovation and communication; project financial resources management; project risk and quality management; business ethics; and physical resources and procurement management. These knowledge areas now form the basis for lateral project management training requirements in the context of the Ghanaian construction industry. The key contribution of the paper is its use of principal component analysis, which has rigorously provided insight into the complex structure of, and the relationships between, the various knowledge areas. The originality and value of the paper lie in its use of contextual-task conceptual knowledge to expound the empirical utility of the six uncorrelated project management competencies.

  7. Experimental modal analysis of components of the LHC experiments

    CERN Document Server

    Guinchard, M; Catinaccio, A; Kershaw, K; Onnela, A

    2007-01-01

    Experimental modal analysis of components of the LHC experiments is performed with the purpose of determining their fundamental frequencies, their damping and the mode shapes of light and fragile detector components. This process permits confirming or replacing Finite Element analysis in the case of complex structures (with cables and substructure coupling). It helps solve structural mechanical problems to improve operational stability and determine the acceleration specifications for transport operations. This paper describes the hardware and software equipment used to perform a modal analysis on particular structures such as a particle detector, and the curve-fitting method used to extract the results from the measurements. The paper also presents the main results obtained for the LHC experiments.

  8. Interpretation of risk significance of passive component aging using probabilistic structural analysis

    International Nuclear Information System (INIS)

    Phillips, J.H.; Atwood, C.L.

    1993-01-01

    The probabilistic risk assessments (PRAs) being developed at most nuclear power plants to calculate the risk of core damage generally focus on the possible failure of active components. Except as initiating events, the possible failure of passive components is given little consideration. The NRC is sponsoring a project at INEL to investigate the risk significance of passive components as they age. For this project, we developed a technique to calculate the failure probability of passive components over time, and demonstrated the technique by applying it to a weld in the auxiliary feedwater (AFW) system. A decreasing yearly rupture rate for this weld was calculated instead of the increasing rupture rate trend one might expect. We attribute this result to infant mortality; that is, most of those initial flaws that will eventually lead to rupture will do so early in life. This means that although each weld in a population may be wearing out, the population as a whole can exhibit a decreasing rupture rate. This observation has implications for passive components in commercial nuclear plants and other facilities where aging is a concern. For the population of passive components that exhibit a decreasing failure rate, risk increase is not a concern. The next step of the work is to identify the attributes that contribute to this decreasing rate, and to determine any attributes that would contribute to an increasing failure rate and thus to an increased risk

  9. Principal Components Analysis on the spectral Bidirectional Reflectance Distribution Function of ceramic colour standards.

    Science.gov (United States)

    Ferrero, A; Campos, J; Rabal, A M; Pons, A; Hernanz, M L; Corróns, A

    2011-09-26

    The Bidirectional Reflectance Distribution Function (BRDF) is essential to characterize an object's reflectance properties. This function depends on both the illumination-observation geometry and the wavelength. As a result, comprehensive interpretation of the data becomes rather complex. In this work we assess the use of the multivariate analysis technique of Principal Components Analysis (PCA) applied to experimental BRDF data of a ceramic colour standard. It will be shown that the result may be linked to the various reflection processes occurring on the surface, assuming that the incoming spectral distribution is affected by each of these processes in a specific manner. Moreover, this procedure facilitates the task of interpolating a series of BRDF measurements obtained for a particular sample. © 2011 Optical Society of America

  10. Prestudy - Development of trend analysis of component failure

    International Nuclear Information System (INIS)

    Poern, K.

    1995-04-01

    The Bayesian trend analysis model that has been used for the computation of initiating event intensities (I-book) is based on the number of events that have occurred during consecutive time intervals. The model itself is a Poisson process with time-dependent intensity. For the analysis of aging it is often more relevant to use times between failures for a given component as input, where by 'time' is meant a quantity that best characterizes the age of the component (calendar time, operating time, number of activations, etc). Therefore, it has been considered necessary to extend the model and the computer code to allow trend analysis of times between events, and also of several sequences of times between events. This report describes this model extension as well as an application to an introductory ageing analysis of centrifugal pumps defined in Table 5 of the T-book. The application in turn directs attention to the need for further development of both the trend model and the data base.

  11. Independent component analysis for understanding multimedia content

    DEFF Research Database (Denmark)

    Kolenda, Thomas; Hansen, Lars Kai; Larsen, Jan

    2002-01-01

    Independent component analysis of combined text and image data from Web pages has potential for search and retrieval applications by providing more meaningful and context dependent content. It is demonstrated that ICA of combined text and image features has a synergistic effect, i.e., the retrieval...

  12. Experimental and principal component analysis of waste ...

    African Journals Online (AJOL)

    The present study is aimed at determining through principal component analysis the most important variables affecting bacterial degradation in ponds. Data were collected from literature. In addition, samples were also collected from the waste stabilization ponds at the University of Nigeria, Nsukka and analyzed to ...

  13. Crack growth determination on laboratory components

    International Nuclear Information System (INIS)

    Hurst, R.C.

    1993-01-01

    In order to aid design and support remanent life assessment of plant components operating at elevated temperatures, the reliability of the analytical methods which translate materials data procured in the laboratory to the behaviour of actual components requires validation. Such a validation can of course be obtained from operating plant; however, the potential risks involved encourage the development of out-of-plant techniques for the validation of representative components. For meaningful validation, these techniques need careful control and high accuracy, which can best be achieved in a laboratory environment. As the laboratory component test should be designed to simulate actual plant conditions as closely as possible, the direct extension of the results to the plant component case requires scaling up. Consequently, the successful development of such a test may even lead to the advantageous situation where it could form an alternative to the conventional route, for example where it may not be possible to obtain the plant component's metallurgical structure in a conventional specimen or, alternatively, where too many assumptions are required in the analysis when translating to different geometries and stress systems. Under these conditions, in spite of the more sophisticated test requirements, it may prove more reasonable to opt for the more representative laboratory component data for use in design or lifetime prediction. The present work describes the application of the component validation test philosophy to the problem of crack growth under two rather different loading conditions. In both cases, crack growth is measured using the direct current potential drop (PD) technique on tubular metallic components containing artificial defects; however, the plant conditions to be simulated lead to either creep or thermal fatigue. The creep studies on Alloy 800H support heat exchanger design for nuclear plant, solar towers and chemical plant, whereas the work on the

  14. Eddy current technique for detecting and sizing surface cracks in steel components

    International Nuclear Information System (INIS)

    Cecco, V.S.; Carter, J.R.; Sullivan, S.P.

    1995-01-01

    Cracking has occurred in pressure vessel nozzles and girth welds due to thermal fatigue. Pipe welds, welds in support structures, and welds in reactor vault liner panels in nuclear facilities have failed because of cracks. Cracking can also occur in turbine rotor bore surfaces due to high cycle fatigue. Dye penetrant, magnetic particle and other surface NDT methods are used to detect cracks but cannot be used for depth sizing. Crack depth can be measured with various NDT methods such as ultrasonic time-of-flight diffraction (TOFD), potential drop, and eddy current. The TOFD technique can be difficult to implement on nozzle welds and is best suited for sizing deep cracks (>5 mm). The conventional eddy current method is easy to implement, but crack sizing is normally limited to shallow (<2 mm) cracks. Eddy current testing (ET) techniques are readily amenable to remote/automatic inspections. These new probes could augment present magnetic particle (MT) and dye penetrant (PT) testing through provision of reliable defect depth information. Reliable crack sizing permits identification of critical cracks for plant life extension and licensing purposes. In addition, performing PT and MT generates low-level radioactive waste in some inspection applications in nuclear facilities. Replacing these techniques with ET for some components will eliminate some of this radioactive waste. (author)

  15. Fatigue Reliability Analysis of Wind Turbine Cast Components

    DEFF Research Database (Denmark)

    Rafsanjani, Hesam Mirzaei; Sørensen, John Dalsgaard; Fæster, Søren

    2017-01-01

    .) and to quantify the relevant uncertainties using available fatigue tests. Illustrative results are presented as obtained by statistical analysis of a large set of fatigue data for casted test components typically used for wind turbines. Furthermore, the SN curves (fatigue life curves based on applied stress......The fatigue life of wind turbine cast components, such as the main shaft in a drivetrain, is generally determined by defects from the casting process. These defects may reduce the fatigue life and they are generally distributed randomly in components. The foundries, cutting facilities and test...... facilities can affect the verification of properties by testing. Hence, it is important to have a tool to identify which foundry, cutting and/or test facility produces components which, based on the relevant uncertainties, have the largest expected fatigue life or, alternatively, have the largest reliability...

  16. Development of Electronic Nose and Near Infrared Spectroscopy Analysis Techniques to Monitor the Critical Time in SSF Process of Feed Protein

    Directory of Open Access Journals (Sweden)

    Hui Jiang

    2014-10-01

    Full Text Available In order to assure the consistency of final product quality, fast and effective process monitoring is a growing need in the solid state fermentation (SSF) industry. This work investigated the potential of non-invasive techniques combined with chemometric methods to monitor time-related changes that occur during the SSF process of feed protein. Four fermentation trials were monitored by an electronic nose device and a near infrared spectroscopy (NIRS) spectrometer. First, principal component analysis (PCA) and independent component analysis (ICA) were applied to feature extraction and information fusion, respectively. Then, the BP_AdaBoost algorithm was used to develop the fused model for monitoring the critical time in the SSF process of feed protein. Experimental results showed that the identification results of the fusion model are much better than those of the single-technique models in both the training and validation sets, and that the complexity of the fusion model is also lower. The overall results demonstrate a high potential for online monitoring of the critical moment in the SSF process by integrating electronic nose and NIRS techniques, and show that data fusion from multiple techniques can significantly improve the monitoring performance.
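
    A hedged sketch of the fusion idea: extract a few components per sensing technique, concatenate them, and train a boosted classifier. A stock AdaBoost of shallow trees stands in for the paper's BP_AdaBoost neural-network ensemble, and all data are synthetic.

```python
# Sketch: feature-level fusion of two sensing techniques + boosted classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 120
phase = rng.integers(0, 2, n)                               # before/after critical time
e_nose = rng.normal(size=(n, 16)) + phase[:, None] * 0.8    # e-nose features
nirs = rng.normal(size=(n, 100)) + phase[:, None] * 0.5     # NIR spectra

# Extract a few components per technique, then concatenate (fusion step).
f1 = PCA(n_components=3).fit_transform(e_nose)
f2 = PCA(n_components=3).fit_transform(nirs)
fused = np.hstack([f1, f2])

Xtr, Xte, ytr, yte = train_test_split(fused, phase, random_state=0)
clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(Xtr, ytr)
print("fused-model accuracy:", round(clf.score(Xte, yte), 3))
```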

  17. Comparative Study of Time-Frequency Decomposition Techniques for Fault Detection in Induction Motors Using Vibration Analysis during Startup Transient

    Directory of Open Access Journals (Sweden)

    Paulo Antonio Delgado-Arredondo

    2015-01-01

    Full Text Available Induction motors are critical components in most industries, and condition monitoring has become necessary to detect faults. There are several techniques for fault diagnosis of induction motors, and analyzing the startup transient vibration signals is not as widely used as other techniques such as motor current signature analysis. Vibration analysis gives a fault diagnosis focused on the location of spectral components associated with faults. Therefore, this paper presents a comparative study of different time-frequency analysis methodologies that can be used for detecting faults in induction motors by analyzing vibration signals during the startup transient. The studied methodologies are the time-frequency distribution of Gabor (TFDG), the time-frequency Morlet scalogram (TFMS), multiple signal classification (MUSIC), and the fast Fourier transform (FFT). The analyzed vibration signals correspond to one broken rotor bar, two broken bars, unbalance, and bearing defects. The obtained results show the feasibility of detecting faults in induction motors using time-frequency spectral analysis applied to vibration signals, and the proposed methodology is applicable when current signals are unavailable and only vibration signals are at hand. The methodology also has applications in motors that are not fed directly from the supply line; in such cases the analysis of current signals is not recommended due to poor current signal quality.
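
    For readers wanting to reproduce the flavour of these comparisons, the sketch below computes two time-frequency views (an STFT spectrogram and a hand-rolled Morlet scalogram) on a synthetic chirp standing in for a startup vibration signal; signal parameters and scales are illustrative.

```python
# Sketch: STFT spectrogram and Morlet scalogram of a startup-like transient.
import numpy as np
from scipy import signal

fs = 2000.0
t = np.arange(0, 2.0, 1 / fs)
# Startup-like transient: frequency sweeping up toward line frequency.
x = signal.chirp(t, f0=5, f1=60, t1=2.0) \
    + 0.1 * np.random.default_rng(8).normal(size=t.size)

# STFT view.
f, tt, Sxx = signal.spectrogram(x, fs=fs, nperseg=256)

# Morlet scalogram: convolve with scaled complex Morlet wavelets.
def morlet(scale, w=5.0):
    u = np.arange(-4 * scale, 4 * scale) / scale
    return np.exp(1j * w * u - u**2 / 2) / np.sqrt(scale)

scales = fs * 5.0 / (2 * np.pi * np.linspace(5, 60, 32))   # map frequency -> scale
scalogram = np.array([np.abs(np.convolve(x, morlet(s), mode="same"))
                      for s in scales])
print("spectrogram:", Sxx.shape, "scalogram:", scalogram.shape)
```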

  18. Root cause analysis in support of reliability enhancement of engineering components

    International Nuclear Information System (INIS)

    Kumar, Sachin; Mishra, Vivek; Joshi, N.S.; Varde, P.V.

    2014-01-01

    Reliability based methods have been widely used for the safety assessment of plant systems, structures and components. These methods provide a quantitative estimate of system reliability but do not give insight into the failure mechanism. Understanding the failure mechanism is a must for avoiding the recurrence of events and enhancing system reliability. Root cause analysis provides a tool for gaining detailed insight into the causes of component failure, with particular attention to the identification of faults in component design, operation, surveillance, maintenance, training, procedures and policies which must be improved to prevent the repetition of incidents. Root cause analysis also helps in developing Probabilistic Safety Analysis models. A probabilistic precursor study complements the root cause analysis approach in event analysis by focusing on how an event might have developed adversely. This paper discusses root cause analysis methodologies and their application in specific case studies for the enhancement of system reliability. (author)

  19. Recent developments in building diagnosis techniques

    CERN Document Server

    2016-01-01

    This book presents a collection of recent research on building diagnosis techniques related to construction pathology, hygrothermal behavior and durability, and diagnostic techniques. It highlights recent advances and new developments in the field of building physics, building anomalies in materials and components, new techniques for improved energy efficiency analysis, and diagnosis techniques such as infrared thermography. This book will be of interest to a wide readership of professionals, scientists, students, practitioners, and lecturers.

  20. Separation techniques: Chromatography

    Science.gov (United States)

    Coskun, Ozlem

    2016-01-01

    Chromatography is an important biophysical technique that enables the separation, identification, and purification of the components of a mixture for qualitative and quantitative analysis. Proteins can be purified based on characteristics such as size and shape, total charge, hydrophobic groups present on the surface, and binding capacity with the stationary phase. Four separation techniques based on molecular characteristics and interaction type use mechanisms of ion exchange, surface adsorption, partition, and size exclusion. Other chromatography techniques are based on the stationary bed, including column, thin layer, and paper chromatography. Column chromatography is one of the most common methods of protein purification. PMID:28058406

  1. A technique of including the effect of aging of passive components in probabilistic risk assessments

    International Nuclear Information System (INIS)

    Phillips, J.H.; Weidenhamer, G.H.

    1992-01-01

    The probabilistic risk assessments (PRAs) being developed at most nuclear power plants to calculate the risk of core damage generally focus on the possible failure of active components. The possible failure of passive components is given little consideration. We are developing methods for selecting risk-significant passive components and including them in PRAs. These methods provide effective ways to prioritize passive components for inspection, and where inspection reveals aging damage, mitigation or repair can be employed to reduce the likelihood of component failure. We demonstrated a method by selecting a weld in the auxiliary feedwater (AFW) system, basing our selection on expert judgement of the likelihood of failure and on an estimate of the consequence of component failure to plant safety. We then modified and used the Piping Reliability Analysis Including Seismic Events (PRAISE) computer code to perform a probabilistic structural analysis to calculate the probability that crack growth due to aging would cause the weld to fail. The PRAISE code was modified to include the effects of design material properties changing with age and of changing stress cycles. The calculation included the effects of mechanical loads and thermal transients typical of the service loads for this piping design, and the effects of thermal cycling caused by a leaking check valve. This particular calculation showed little change in the low component failure probability and plant risk over 48 years of service. However, sensitivity studies showed that if the probability of component failure is high, the effect on plant risk is significant. The success of this demonstration shows that the method could be applied to nuclear power plants. The demonstration also showed that the method is too involved (PRAISE takes a long time to perform the calculation and the input information is extensive) for handling a large number of passive components, and that simpler methods are therefore needed.

  2. Independent component analysis of dynamic contrast-enhanced computed tomography images

    Energy Technology Data Exchange (ETDEWEB)

    Koh, T S [School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Ave, Singapore 639798 (Singapore); Yang, X [School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Ave, Singapore 639798 (Singapore); Bisdas, S [Department of Diagnostic and Interventional Radiology, Johann Wolfgang Goethe University Hospital, Theodor-Stern-Kai 7, D-60590 Frankfurt (Germany); Lim, C C T [Department of Neuroradiology, National Neuroscience Institute, 11 Jalan Tan Tock Seng, Singapore 308433 (Singapore)

    2006-10-07

    Independent component analysis (ICA) was applied to dynamic contrast-enhanced computed tomography images of cerebral tumours to extract spatial component maps of the underlying vascular structures, which correspond to different haemodynamic phases as depicted by the passage of the contrast medium. The locations of arteries, veins and tumours can be separately identified on these spatial component maps. As the contrast enhancement behaviour of a cerebral tumour differs from that of normal tissues, ICA yields a tumour component map that reveals the location and extent of the tumour. Tumour outlines can be generated from the tumour component maps with relatively simple segmentation methods. (note)

  3. Independent component analysis based filtering for penumbral imaging

    International Nuclear Information System (INIS)

    Chen Yenwei; Han Xianhua; Nozaki, Shinya

    2004-01-01

    We propose a filter based on independent component analysis (ICA) for Poisson noise reduction. In the proposed filtering, the image is first transformed to the ICA domain and then the noise components are removed by soft thresholding (shrinkage). The proposed filter, which is used as a preprocessing step for the reconstruction, has been successfully applied to penumbral imaging. Both simulation results and experimental results show that the reconstructed image is dramatically improved in comparison to that obtained without the noise-removal filter.
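
    A minimal sketch of the ICA-domain soft-thresholding idea, with the ICA basis learned from patches of the noisy image itself; the patch geometry and threshold are illustrative, not the paper's settings.

```python
# Sketch: denoising by soft thresholding in an ICA domain.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(9)
clean = np.kron(rng.random((8, 8)) * 20, np.ones((8, 8)))   # blocky 64x64 "image"
noisy = rng.poisson(clean).astype(float)                    # Poisson noise

# Collect overlapping 8x8 patches as rows.
patches = np.array([noisy[i:i+8, j:j+8].ravel()
                    for i in range(0, 56, 4) for j in range(0, 56, 4)])

ica = FastICA(n_components=16, random_state=0)
codes = ica.fit_transform(patches)                          # ICA-domain coefficients

thr = 0.1                                                   # illustrative threshold
codes = np.sign(codes) * np.maximum(np.abs(codes) - thr, 0) # soft threshold (shrinkage)
denoised_patches = ica.inverse_transform(codes)
print("patch matrix:", patches.shape, "->", denoised_patches.shape)
```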

  4. Principal Component Analysis In Radar Polarimetry

    Directory of Open Access Journals (Sweden)

    A. Danklmayer

    2005-01-01

    Full Text Available Second order moments of multivariate (often Gaussian) joint probability density functions can be described by the covariance or normalised correlation matrices, or by the Kennaugh matrix (Kronecker matrix). In Radar Polarimetry the application of the covariance matrix is known as target decomposition theory, which is a special application of the extremely versatile Principal Component Analysis (PCA). The basic idea of PCA is to convert a data set consisting of correlated random variables into a new set of uncorrelated variables and to order the new variables according to the value of their variances. It is important to stress that uncorrelatedness does not necessarily mean independence, which is used in the much stronger concept of Independent Component Analysis (ICA). Both concepts agree for multivariate Gaussian distribution functions, which represent the most random and least structured distributions. In this contribution, we propose a new approach to applying the concept of PCA to Radar Polarimetry. New uncorrelated random variables are introduced by means of linear transformations with well-determined loading coefficients. This, in turn, allows the decomposition of the original random backscattering target variables into three point targets with new random uncorrelated variables whose variances agree with the eigenvalues of the covariance matrix. This allows a new interpretation of existing decomposition theorems.

  5. Change detection of medical images using dictionary learning techniques and principal component analysis.

    Science.gov (United States)

    Nika, Varvara; Babyn, Paul; Zhu, Hongmei

    2014-07-01

    Automatic change detection methods for identifying the changes of serial MR images taken at different times are of great interest to radiologists. The majority of existing change detection methods in medical imaging, and those of brain images in particular, include many preprocessing steps and rely mostly on statistical analysis of magnetic resonance imaging (MRI) scans. Although most methods utilize registration software, tissue classification remains a difficult and overwhelming task. Recently, dictionary learning techniques are being used in many areas of image processing, such as image surveillance, face recognition, remote sensing, and medical imaging. We present an improved version of the EigenBlockCD algorithm, named the EigenBlockCD-2. The EigenBlockCD-2 algorithm performs an initial global registration and identifies the changes between serial MR images of the brain. Blocks of pixels from a baseline scan are used to train local dictionaries to detect changes in the follow-up scan. We use PCA to reduce the dimensionality of the local dictionaries and the redundancy of data. Choosing the appropriate distance measure significantly affects the performance of our algorithm. We examine the differences between [Formula: see text] and [Formula: see text] norms as two possible similarity measures in the improved EigenBlockCD-2 algorithm. We show the advantages of the [Formula: see text] norm over the [Formula: see text] norm both theoretically and numerically. We also demonstrate the performance of the new EigenBlockCD-2 algorithm for detecting changes of MR images and compare our results with those provided in the recent literature. Experimental results with both simulated and real MRI scans show that our improved EigenBlockCD-2 algorithm outperforms the previous methods. It detects clinical changes while ignoring the changes due to the patient's position and other acquisition artifacts.

  6. Statistically based uncertainty analysis for ranking of component importance in the thermal-hydraulic safety analysis of the Advanced Neutron Source Reactor

    International Nuclear Information System (INIS)

    Wilson, G.E.

    1992-01-01

    The Analytic Hierarchy Process (AHP) has been used to help determine the importance of components and phenomena in thermal-hydraulic safety analyses of nuclear reactors. The AHP results are based, in part, on expert opinion. Therefore, it is prudent to evaluate the uncertainty of the AHP ranks of importance. Prior applications have addressed uncertainty with experimental data comparisons and bounding sensitivity calculations. These methods work well when a sufficient experimental data base exists to justify the comparisons. However, in the case of limited or no experimental data, the size of the uncertainty is normally made conservatively large. Accordingly, the author has taken another approach, that of performing a statistically based uncertainty analysis. The new work is based on prior evaluations of the importance of components and phenomena in the thermal-hydraulic safety analysis of the Advanced Neutron Source Reactor (ANSR), a new facility now in the design phase. The uncertainty during large-break loss of coolant and decay heat removal scenarios is estimated by assigning a probability distribution function (pdf) to the potential error in the initial expert estimates of pair-wise importance between the components. Using a Monte Carlo sampling technique, the error pdfs are propagated through the AHP software solutions to determine a pdf of uncertainty in the system-wide importance of each component. To enhance the generality of the results, a study of one other problem having a different number of elements is reported, as are the effects of a larger assumed pdf error in the expert ranks. Validation of the Monte Carlo sample size and repeatability is also documented.
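
    The propagation step can be illustrated compactly: perturb the expert pairwise-comparison matrix with a multiplicative error pdf while preserving reciprocity, recompute the principal-eigenvector priorities, and collect the resulting distribution. The 3x3 matrix and error magnitude below are invented for illustration.

```python
# Sketch: Monte Carlo propagation of expert-judgment error through AHP.
import numpy as np

rng = np.random.default_rng(10)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])          # pairwise importance of 3 components

def ahp_weights(M):
    # Principal-eigenvector priorities, normalized to sum to 1.
    vals, vecs = np.linalg.eig(M)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

samples = []
for _ in range(2000):
    P = A.copy()
    for i in range(3):
        for j in range(i + 1, 3):
            # Multiplicative lognormal error on each upper-triangle judgment;
            # the lower triangle is set to preserve reciprocity.
            P[i, j] = A[i, j] * np.exp(rng.normal(0.0, 0.15))
            P[j, i] = 1.0 / P[i, j]
    samples.append(ahp_weights(P))

samples = np.array(samples)
print("mean weights:", samples.mean(axis=0).round(3))
print("std  weights:", samples.std(axis=0).round(3))
```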

  7. Using Enhanced Frequency Domain Decomposition as a Robust Technique to Harmonic Excitation in Operational Modal Analysis

    DEFF Research Database (Denmark)

    Jacobsen, Niels-Jørgen; Andersen, Palle; Brincker, Rune

    2006-01-01

    The presence of harmonic components in the measured responses is unavoidable in many applications of Operational Modal Analysis. This is especially true when measuring on mechanical structures containing rotating or reciprocating parts. This paper describes a new method based on the popular Enhanced Frequency Domain Decomposition technique for eliminating the influence of these harmonic components in the modal parameter extraction process. For various experiments, the quality of the method is assessed and compared to the results obtained using broadband stochastic excitation forces. Good agreement is found, and the method is proven to be an easy-to-use and robust tool for handling responses with deterministic and stochastic content.

  8. REGULARIZED FUNCTIONAL PRINCIPAL COMPONENT ANALYSIS AND AN APPLICATION ON THE SHARE PRICES OF THE COMPANIES BELONGING TO THE ISE-30 INDEX

    Directory of Open Access Journals (Sweden)

    İSTEM KÖYMEN KESER

    2013-06-01

    Full Text Available The objective of Functional Data Analysis techniques is to study data which consist of observed functions or curves evaluated at a finite subset of some real interval. Techniques in Functional Data Analysis can be used to study the variation in a random sample of real functions, xi(t), i=1, 2, …, N, and their derivatives. In practice, these functions are often the result of a preliminary smoothing process applied to discrete data; in this work, spline smoothing methods are used. As the number of functions and the number of observation points increase, it becomes difficult to handle the functions all together. In order to overcome this complexity, we utilize Functional and Regularized Functional Principal Component Analyses, where a high percentage of the total variation can be accounted for with only a few component functions. Finally, an application to the daily closing data for the share prices of the companies belonging to the ISE-30 index is also given.

  9. Gold analysis by the gamma absorption technique

    International Nuclear Information System (INIS)

    Kurtoglu, Arzu; Tugrul, A.B.

    2003-01-01

    Gold (Au) analyses are generally performed using destructive techniques. In this study, the Gamma Absorption Technique has been employed for gold analysis. A series of different gold alloys of known gold content were analysed and a calibration curve was obtained. This curve was then used for the analysis of unknown samples. Gold analyses can be made non-destructively, easily and quickly by the gamma absorption technique. The mass attenuation coefficients of the alloys were measured around the K-shell absorption edge of Au. Theoretical mass attenuation coefficient values were obtained using the WinXCom program and comparison of the experimental results with the theoretical values showed generally good and acceptable agreement
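
    The physics behind the technique is the Beer-Lambert attenuation law, I = I0 exp(-mu_m * rho * x), with the mixture's mass attenuation coefficient interpolating between those of gold and the alloying metal. The coefficient values in the sketch below are round illustrative numbers, not measured data.

```python
# Sketch: transmission through a gold alloy versus gold mass fraction,
# using the Beer-Lambert law with a simple mixture rule. All coefficient
# values are illustrative stand-ins, not measured data.
import numpy as np

mu_gold = 4.5       # mass attenuation coefficient near the Au K edge, cm^2/g (illustrative)
mu_other = 0.5      # for the alloying metal, cm^2/g (illustrative)
rho, x = 15.0, 0.1  # alloy density (g/cm^3) and sample thickness (cm)

def transmission(w_au):
    """Transmitted fraction I/I0 for a given gold mass fraction w_au."""
    mu_mix = w_au * mu_gold + (1 - w_au) * mu_other   # mixture rule
    return np.exp(-mu_mix * rho * x)

# A calibration-curve-style table: transmission versus gold content.
for w in (0.25, 0.50, 0.75, 1.00):
    print(f"gold fraction {w:.2f}: I/I0 = {transmission(w):.4f}")
```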

  10. Mixed Models and Reduction Techniques for Large-Rotation, Nonlinear Analysis of Shells of Revolution with Application to Tires

    Science.gov (United States)

    Noor, A. K.; Andersen, C. M.; Tanner, J. A.

    1984-01-01

    An effective computational strategy is presented for the large-rotation, nonlinear axisymmetric analysis of shells of revolution. The three key elements of the computational strategy are: (1) use of mixed finite-element models with discontinuous stress resultants at the element interfaces; (2) substantial reduction in the total number of degrees of freedom through the use of a multiple-parameter reduction technique; and (3) reduction in the size of the analysis model through the decomposition of asymmetric loads into symmetric and antisymmetric components coupled with the use of the multiple-parameter reduction technique. The potential of the proposed computational strategy is discussed. Numerical results are presented to demonstrate the high accuracy of the mixed models developed and to show the potential of using the proposed computational strategy for the analysis of tires.

  11. Software component quality evaluation

    Science.gov (United States)

    Clough, A. J.

    1991-01-01

    The paper describes a software inspection process that can be used to evaluate the quality of software components. Quality criteria, process application, independent testing of the process and proposed associated tool support are covered. Early results indicate that this technique is well suited for assessing software component quality in a standardized fashion. With automated machine assistance to facilitate both the evaluation and selection of software components, such a technique should promote effective reuse of software components.

  12. Principal Component Analysis as an Efficient Performance ...

    African Journals Online (AJOL)

    This paper uses principal component analysis (PCA) to examine the possibility of using a few explanatory variables (X's) to explain the variation in Y. It applies PCA to assess the performance of students in Abia State Polytechnic, Aba, Nigeria. This was done by estimating the coefficients of eight explanatory variables in a ...

  13. A survival analysis on critical components of nuclear power plants

    International Nuclear Information System (INIS)

    Durbec, V.; Pitner, P.; Riffard, T.

    1995-06-01

    Some tubes of heat exchangers of nuclear power plants may be affected by Primary Water Stress Corrosion Cracking (PWSCC) in highly stressed areas. These defects can shorten the lifetime of the component and lead to its replacement. In order to reduce the risk of cracking, a preventive remedial operation called shot peening was applied on the French reactors between 1985 and 1988. To assess and investigate the effects of shot peening, a statistical analysis was carried out on the tube degradation results obtained from the in-service inspections that are regularly conducted using non-destructive tests. The statistical method used is based on the Cox proportional hazards model, a powerful tool in the analysis of survival data, implemented in PROC PHREG, recently made available in SAS/STAT. This technique has a number of major advantages, including the ability to deal with censored failure time data and with the complication of time-dependent covariables. The paper focuses on the modelling and a presentation of the results given by SAS. They provide estimates of how the relative risk of degradation changes after peening and indicate for which values of the prognostic factors analyzed the treatment is likely to be most beneficial. (authors). 2 refs., 3 figs., 6 tabs
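
The same kind of Cox proportional-hazards fit can be sketched in Python with the lifelines package (the paper used SAS PROC PHREG); the data frame below is hypothetical toy data, with `shot_peened` playing the role of the remedial-treatment covariable:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical tube histories: follow-up time, crack indicator, covariables.
df = pd.DataFrame({
    "years_in_service": [3.2, 7.5, 5.1, 9.0, 4.4, 8.2],  # follow-up time
    "cracked":          [1,   0,   1,   1,   0,   0],     # 1 = PWSCC detected
    "shot_peened":      [0,   1,   0,   1,   0,   1],     # remedial treatment
    "stress_level":     [2.1, 1.4, 2.0, 1.1, 1.9, 1.2],   # prognostic factor
})

# Cox model handles the censored rows (cracked == 0) naturally.
cph = CoxPHFitter()
cph.fit(df, duration_col="years_in_service", event_col="cracked")
cph.print_summary()   # exp(coef) < 1 for shot_peened would indicate a
                      # reduced relative risk of degradation after peening
```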

  14. Development of underwater laser cladding and underwater laser seal welding techniques for reactor components

    International Nuclear Information System (INIS)

    Hino, Takehisa; Tamura, Masataka; Tanaka, Yoshimi; Kouno, Wataru; Makino, Yoshinobu; Kawano, Shohei; Matsunaga, Keiji

    2009-01-01

    Stress corrosion cracking (SCC) has been reported in aged components of many nuclear power plants. Toshiba has been developing underwater laser welding. This welding technique can be carried out without draining the water in the reactor vessel, which helps keep workers from being exposed to radiation. The welding speed can reach twice that of Gas Tungsten Arc Welding (GTAW). The susceptibility to SCC can also be lower than that of the Alloy 600 base metal. (author)

  15. Combined approach based on principal component analysis and canonical discriminant analysis for investigating hyperspectral plant response

    Directory of Open Access Journals (Sweden)

    Anna Maria Stellacci

    2012-07-01

    Full Text Available Hyperspectral (HS) data represent an extremely powerful means for rapidly detecting crop stress and then aiding in the rational management of natural resources in agriculture. However, the large volume of data poses a challenge for data processing and for extracting crucial information. Multivariate statistical techniques can play a key role in the analysis of HS data, as they allow both the elimination of redundant information and the identification of synthetic indices that maximize differences among levels of stress. In this paper we propose an integrated approach, based on the combined use of Principal Component Analysis (PCA) and Canonical Discriminant Analysis (CDA), to investigate HS plant response and discriminate plant status. The approach was preliminarily evaluated on a data set collected on durum wheat plants grown under different nitrogen (N) stress levels. Hyperspectral measurements were performed at anthesis with a high-resolution field spectroradiometer, ASD FieldSpec HandHeld, covering the 325-1075 nm region. Reflectance data were first restricted to the interval 510-1000 nm and then divided into five bands of the electromagnetic spectrum [green: 510-580 nm; yellow: 581-630 nm; red: 631-690 nm; red-edge: 705-770 nm; near-infrared (NIR): 771-1000 nm]. PCA was applied to each spectral interval. CDA was performed on the extracted components to identify the factors maximizing the differences among plants fertilised with increasing N rates. Within the green, yellow and red intervals, only the first principal component (PC) had an eigenvalue greater than 1 and explained more than 95% of the total variance; within the red-edge and NIR ranges, the first two PCs had eigenvalues higher than 1. Two canonical variables explained cumulatively more than 81% of the total variance, and the first was able to discriminate wheat plants fertilised differently, as confirmed also by the significant correlation with aboveground biomass and grain yield parameters. The combined
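
A hedged sketch of this PCA-then-CDA workflow using scikit-learn, with LinearDiscriminantAnalysis standing in for CDA; the reflectance data, band widths, and the fixed choice of two retained PCs per band are synthetic placeholders for the paper's eigenvalue-greater-than-1 rule:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n_plants, n_levels = 60, 4
labels = np.repeat(np.arange(n_levels), n_plants // n_levels)   # N rates

# Fake per-band reflectance with a small level-dependent shift.
bands = {"green": 70, "yellow": 50, "red": 60, "red-edge": 66, "NIR": 230}
scores = []
for name, n_wavelengths in bands.items():
    refl = rng.normal(0.3 + 0.02 * labels[:, None], 0.05,
                      size=(n_plants, n_wavelengths))
    pca = PCA(n_components=2).fit(refl)      # stand-in for eigenvalue > 1 cut
    scores.append(pca.transform(refl))
X = np.hstack(scores)                        # pooled PC scores, all bands

# Canonical discriminant analysis step (LDA plays this role here).
cda = LinearDiscriminantAnalysis(n_components=2).fit(X, labels)
print("training separation on canonical variables:", cda.score(X, labels))
```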

  16. Sparse logistic principal components analysis for binary data

    KAUST Repository

    Lee, Seokho

    2010-09-01

    We develop a new principal components analysis (PCA) type dimension reduction method for binary data. Different from the standard PCA which is defined on the observed data, the proposed PCA is defined on the logit transform of the success probabilities of the binary observations. Sparsity is introduced to the principal component (PC) loading vectors for enhanced interpretability and more stable extraction of the principal components. Our sparse PCA is formulated as solving an optimization problem with a criterion function motivated from a penalized Bernoulli likelihood. A Majorization-Minimization algorithm is developed to efficiently solve the optimization problem. The effectiveness of the proposed sparse logistic PCA method is illustrated by application to a single nucleotide polymorphism data set and a simulation study. © Institute of Mathematical Statistics, 2010.

  17. Quantitative determination of the intensities of known components in spectra obtained from surface analytical techniques

    International Nuclear Information System (INIS)

    Nelson, G.C.

    1984-01-01

    Linear least-squares methods have been used to quantitatively decompose experimental data obtained from surface analytical techniques into separate components. The mathematical procedure for accomplishing this is described, and examples are given of the use of this method with data obtained from Auger electron spectroscopy [both N(E) and derivative], X-ray photoelectron spectroscopy, and low-energy ion scattering spectroscopy. The requirements on the quality of the data are discussed.
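
The decomposition itself reduces to an ordinary least-squares solve; a minimal sketch with numpy, using synthetic Gaussian peaks as stand-ins for measured reference spectra of the known components:

```python
import numpy as np

energy = np.linspace(0, 100, 500)

def peak(center, width):
    """Synthetic reference line shape (stand-in for a measured spectrum)."""
    return np.exp(-0.5 * ((energy - center) / width) ** 2)

# Columns are the known component spectra; the observation is their mixture.
components = np.column_stack([peak(30, 4), peak(45, 6), peak(70, 5)])
true_weights = np.array([1.0, 0.4, 2.5])
observed = (components @ true_weights
            + 0.02 * np.random.default_rng(2).standard_normal(energy.size))

# Solve min ||observed - components @ w||^2 for the component intensities.
weights, residual, rank, sv = np.linalg.lstsq(components, observed, rcond=None)
print("recovered intensities:", weights.round(3))   # close to [1.0, 0.4, 2.5]
```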

  18. Analysis and interpretation of dynamic FDG PET oncological studies using data reduction techniques

    Directory of Open Access Journals (Sweden)

    Santos Andres

    2007-10-01

    Full Text Available Abstract Background Dynamic positron emission tomography studies produce a large amount of image data, from which clinically useful parametric information can be extracted using tracer kinetic methods. Data reduction methods can facilitate the initial interpretation and visual analysis of these large image sequences and at the same time can preserve important information and allow for basic feature characterization. Methods We have applied principal component analysis to provide high-contrast parametric image sets of lower dimensions than the original data set separating structures based on their kinetic characteristics. Our method has the potential to constitute an alternative quantification method, independent of any kinetic model, and is particularly useful when the retrieval of the arterial input function is complicated. In independent component analysis images, structures that have different kinetic characteristics are assigned opposite values, and are readily discriminated. Furthermore, novel similarity mapping techniques are proposed, which can summarize in a single image the temporal properties of the entire image sequence according to a reference region. Results Using our new cubed sum coefficient similarity measure, we have shown that structures with similar time activity curves can be identified, thus facilitating the detection of lesions that are not easily discriminated using the conventional method employing standardized uptake values.

  19. All-phase MR angiography using independent component analysis of dynamic contrast enhanced MRI time series. φ-MRA

    International Nuclear Information System (INIS)

    Suzuki, Kiyotaka; Matsuzawa, Hitoshi; Watanabe, Masaki; Nakada, Tsutomu; Nakayama, Naoki; Kwee, I.L.

    2003-01-01

    Dynamic contrast enhanced magnetic resonance imaging (dynamic MRI) represents an MRI version of non-diffusible tracer methods, the main clinical use of which is the physiological construction of what is conventionally referred to as perfusion images. The raw data utilized for constructing MRI perfusion images are time series of pixel signal alterations associated with the passage of a gadolinium-containing contrast agent. Such time series are highly compatible with independent component analysis (ICA), a novel statistical signal processing technique capable of effectively separating a single mixture of multiple signals into their original independent source signals (blind separation). Accordingly, we applied ICA to dynamic MRI time series. The technique was found to be powerful, allowing for hitherto unobtainable assessment of regional cerebral hemodynamics in vivo. (author)
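
A minimal blind-separation sketch in the same spirit, using scikit-learn's FastICA on synthetic bolus-passage time courses; the two source shapes and the mixing weights are invented for illustration:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
t = np.linspace(0, 60, 120)                        # seconds, 120 frames
arterial = np.exp(-((t - 15) / 4) ** 2)            # fast first-pass bolus
tissue = np.exp(-((t - 25) / 10) ** 2)             # slower parenchymal phase

# Each of 500 pixels records a linear mixture of the two source kinetics.
mixing = rng.uniform(0, 1, size=(500, 2))
X = mixing @ np.vstack([arterial, tissue]) \
    + 0.01 * rng.standard_normal((500, t.size))

# ICA on the time dimension recovers the independent source time courses.
ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(X.T)                   # shape (time, components)
print(sources.shape)
```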

  20. The Removal of EOG Artifacts From EEG Signals Using Independent Component Analysis and Multivariate Empirical Mode Decomposition.

    Science.gov (United States)

    Wang, Gang; Teng, Chaolin; Li, Kuo; Zhang, Zhonglin; Yan, Xiangguo

    2016-09-01

    The recorded electroencephalography (EEG) signals are usually contaminated by electrooculography (EOG) artifacts. In this paper, by using independent component analysis (ICA) and multivariate empirical mode decomposition (MEMD), an ICA-based MEMD method was proposed to remove EOG artifacts (EOAs) from multichannel EEG signals. First, the EEG signals were decomposed by the MEMD into multiple multivariate intrinsic mode functions (MIMFs). The EOG-related components were then extracted by reconstructing the MIMFs corresponding to EOAs. After performing the ICA of EOG-related signals, the EOG-linked independent components were distinguished and rejected. Finally, the clean EEG signals were reconstructed by implementing the inverse transform of ICA and MEMD. The results on simulated and real data suggested that the proposed method could successfully eliminate EOAs from EEG signals and preserve useful EEG information with little loss. By comparing with other existing techniques, the proposed method achieved much improvement in terms of the increase in signal-to-noise ratio and the decrease in mean square error after removing EOAs.

  1. A hybrid finite element analysis and evolutionary computation method for the design of lightweight lattice components with optimized strut diameter

    DEFF Research Database (Denmark)

    Salonitis, Konstantinos; Chantzis, Dimitrios; Kappatos, Vasileios

    2017-01-01

    approaches or with the use of topology optimization methodologies. An optimization approach utilizing multipurpose optimization algorithms has not been proposed yet. This paper presents a novel user-friendly method for the design optimization of lattice components towards weight minimization, which combines...... finite element analysis and evolutionary computation. The proposed method utilizes the cell homogenization technique in order to reduce the computational cost of the finite element analysis and a genetic algorithm in order to search for the most lightweight lattice configuration. A bracket consisting...

  2. Applications of neutron activation analysis technique

    International Nuclear Information System (INIS)

    Jonah, S. A.

    2000-07-01

    The technique was developed as far back as 1936 by G. Hevesy and H. Levy for the analysis of Dy using an isotopic source. Approximately 40 elements can be analyzed by the instrumental neutron activation analysis (INAA) technique with neutrons from a nuclear reactor. By applying radiochemical separation, the number of elements that can be analysed may be increased to almost 70. Compared with other analytical methods used in environmental and industrial research, NAA has some unique features. These are multi-element capability, rapidity, reproducibility of results, complementarity to other methods, freedom from analytical blank and independence of the chemical state of elements. There are several types of neutron sources, namely nuclear reactors, accelerator-based and radioisotope-based sources, but nuclear reactors with high fluxes of neutrons from the fission of 235U give the most intense irradiation, and hence the highest available sensitivities for NAA. In this paper, the applications of NAA of socio-economic importance are discussed. The benefits of using NAA and related nuclear techniques for on-line applications in industrial process control are highlighted. A brief description of the NAA set-ups at CERT is given. Finally, NAA is compared with other leading analytical techniques.

  3. Principal Component Clustering Approach to Teaching Quality Discriminant Analysis

    Science.gov (United States)

    Xian, Sidong; Xia, Haibo; Yin, Yubo; Zhai, Zhansheng; Shang, Yan

    2016-01-01

    Teaching quality is the lifeline of higher education, and many universities have made effective achievements in evaluating it. In this paper, we establish a students' evaluation of teaching (SET) discriminant analysis model and algorithm based on principal component clustering analysis. Additionally, we classify the SET…

  4. Elemental analysis techniques using proton microbeam

    International Nuclear Information System (INIS)

    Sakai, Takuro; Oikawa, Masakazu; Sato, Takahiro

    2005-01-01

    Proton microbeam is a powerful tool for two-dimensional elemental analysis. The analysis is based on the Particle Induced X-ray Emission (PIXE) and Particle Induced Gamma-ray Emission (PIGE) techniques. The paper outlines the principles and instruments, and describes the dental applications that have been performed at JAERI Takasaki. (author)

  5. Development of phased mission analysis program with Monte Carlo method. Improvement of the variance reduction technique with biasing towards top event

    International Nuclear Information System (INIS)

    Yang Jinan; Mihara, Takatsugu

    1998-12-01

    This report presents a variance reduction technique for estimating the reliability and availability of highly complex systems over a phased mission time using Monte Carlo simulation. In this study, we introduced a variance reduction technique built on the concept of distance between the present system state and the cut set configurations. Using this technique, it becomes possible to bias the transition of components from operating states to failed states towards the closest cut set, so that a component failure drives the system towards a cut set configuration more effectively. JNC developed the PHAMMON (Phased Mission Analysis Program with Monte Carlo Method) code, which involved two kinds of variance reduction techniques: (1) forced transition, and (2) failure biasing. However, these techniques did not guarantee an effective reduction in variance. For further improvement, a variance reduction technique incorporating the distance concept was introduced into the PHAMMON code, and numerical calculations were carried out for different design cases of the decay heat removal system in a large fast breeder reactor. Our results indicate that adding this distance-based technique is an effective means of further reducing the variance. (author)
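
The flavor of failure biasing with likelihood-ratio re-weighting can be sketched on a toy two-component cut set; the fixed bias probability below stands in for the paper's distance-based biasing, and all rates and sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
p_true, p_bias = 1e-4, 5e-2        # per-step failure prob: true vs biased law
n_steps, n_hist = 100, 20000       # mission length, Monte Carlo histories

failures = 0.0
for _ in range(n_hist):
    weight, failed = 1.0, [False, False]     # cut set = {component A, B}
    for _ in range(n_steps):
        for c in range(2):
            if not failed[c]:
                if rng.random() < p_bias:    # sample from the biased law
                    failed[c] = True
                    weight *= p_true / p_bias            # likelihood ratio
                else:
                    weight *= (1 - p_true) / (1 - p_bias)
        if all(failed):                      # cut set reached: system failure
            failures += weight               # unbiased, thanks to the weight
            break

print("estimated mission failure probability:", failures / n_hist)
```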

  6. Signal extraction and wave field separation in tunnel seismic prediction by independent component analysis

    Science.gov (United States)

    Yue, Y.; Jiang, T.; Zhou, Q.

    2017-12-01

    In order to ensure the rationality and safety of tunnel excavation, advanced geological prediction has become an indispensable step in tunneling. However, the extraction of the signal and the separation of P and S waves directly influence the accuracy of geological prediction. Generally, the raw data collected by the TSP system are of low quality because of the numerous disturbing factors in tunnel projects, such as power interference and machine vibration interference. It is difficult for the traditional method (band-pass filtering) to remove the interference effectively while causing little loss to the signal. Taking the power interference, the machine vibration interference and the signal as the original variables, and the x, y, z components as observation signals, each observed component is a linear combination of the original variables, which satisfies the applicable conditions of independent component analysis (ICA). We perform finite-difference simulations of elastic wave propagation to synthesize a tunnel seismic reflection record. The ICA method was adopted to process the three-component data, and the results show that the extracted signal estimates and the true signals are highly correlated (the correlation coefficient is above 0.93). In addition, the interference estimates separated by ICA and the interference signals are also highly correlated, with a correlation coefficient above 0.99. Thus, the simulation results showed that ICA is an ideal method for extracting high quality data from mixed signals. For the separation of P and S waves, conventional separation techniques are based on the physical characteristics of wave propagation, which require knowledge of the near-surface P- and S-wave velocities and density. The ICA approach, by contrast, is based entirely on statistical differences between P and S waves, and this statistical technique does not require a priori information. The concrete results of the wave field separation will be presented in

  7. Development of computational methods of design by analysis for pressure vessel components

    International Nuclear Information System (INIS)

    Bao Shiyi; Zhou Yu; He Shuyan; Wu Honglin

    2005-01-01

    Stress classification is not only a key step when a pressure vessel component is designed by analysis, but also a difficulty that has long puzzled engineers and designers. At present, several computational methods of design by analysis have been developed and applied for calculating and categorizing the stress field of pressure vessel components, such as Stress Equivalent Linearization, the Two-Step Approach, the Primary Structure method, the Elastic Compensation method, the GLOSS R-Node method and so on. Moreover, the ASME code also gives an inelastic method of design by analysis for limiting gross plastic deformation only. When pressure vessel components are designed by analysis, the results obtained with the different calculation and analysis methods mentioned above can sometimes differ greatly. This is the main reason limiting wider application of the design-by-analysis approach. Recently, a new approach, presented in the proposal for a European standard, CEN's unfired pressure vessel standard EN 13445-3, tries to avoid the problems of stress classification by analyzing the various failure mechanisms of the pressure vessel structure directly, based on elastic-plastic theory. In this paper, some of the stress classification methods mentioned above are described briefly, and the computational methods cited in the European pressure vessel standard, such as the Deviatoric Map and nonlinear analysis methods (plastic analysis and limit analysis), are outlined. Furthermore, the characteristics of the computational methods of design by analysis are summarized to aid in selecting the proper computational method when designing pressure vessel components by analysis. (authors)

  8. Comparative analysis of the aroma chemicals of Melissa officinalis using hydrodistillation and HS-SPME techniques

    Directory of Open Access Journals (Sweden)

    Shakeel-u- Rehman

    2017-05-01

    Full Text Available Headspace solid-phase micro extraction (HS-SPME) coupled with gas chromatography–mass spectrometry (GC–MS) has been used for the chemical analysis of Melissa officinalis leaves cultivated in the Institute germplasm. The HS-SPME analysis led to the identification of 22 components constituting 99.1% of the total volatile constituents present in the leaves, whereas its hydrodistillate led to the identification of 24 volatile constituents constituting 98.1% of the volatile material. The chemical composition of the SPME and hydrodistilled extracts of M. officinalis leaves comprised mainly oxygenated monoterpenes (78.5% and 57.8%, respectively) and sesquiterpene hydrocarbons (14.9% and 29.7%, respectively). The major components identified in the HS-SPME extract were citronellal (31.1%), citronellol (18.3%), β-caryophyllene (12.0%), (E)-citral (11.9%), (Z)-citral (9.6%), geraniol (3.6%), (Z)-β-ocimene (3.1%) and 1-octen-3-ol (2.0%), whereas the hydrodistilled essential oil was rich in (Z)-citral (19.6%), β-caryophyllene (13.2%), (E)-citral (11.2%), citronellal (10.2%), germacrene-D (8.3%), δ-3-carene (5.0%), 6-methyl-5-hepten-2-one (3.7%) and citronellyl acetate (3.7%). The comparative analysis of the volatile constituents of the M. officinalis leaf extract using the HS-SPME and hydrodistillation techniques shows both qualitative and quantitative differences. The current study is the first report involving rapid analysis of the volatile components of M. officinalis by HS-SPME.

  9. Analysis of diffusivity of the oscillating reaction components in a microreactor system

    Directory of Open Access Journals (Sweden)

    Martina Šafranko

    2017-01-01

    Full Text Available When performing oscillating reactions, periodic changes in the concentrations of reactants, intermediates, and products take place. Because of these periodic concentration changes, information about the diffusivity of the components involved in oscillating reactions is very important for their control. Non-linear dynamics makes oscillating reactions very interesting for analysis in different reactor systems. In this paper, the diffusivity of the oscillating reaction components was analysed in a microreactor, with the aim of identifying the limiting component. The geometry of the microreactor microchannel and a well-defined flow profile ensure optimal conditions for analysing diffusion phenomena, because diffusion profiles in a microreactor depend only on the residence time. The analysis was performed in a microreactor equipped with 2 Y-shaped inlets and 2 Y-shaped outlets, with an active volume of V = 4 μL, at different residence times.

  10. The composition-explicit distillation curve technique: Relating chemical analysis and physical properties of complex fluids.

    Science.gov (United States)

    Bruno, Thomas J; Ott, Lisa S; Lovestead, Tara M; Huber, Marcia L

    2010-04-16

    The analysis of complex fluids such as crude oils, fuels, vegetable oils and mixed waste streams poses significant challenges arising primarily from the multiplicity of components, the different properties of the components (polarity, polarizability, etc.) and matrix properties. We have recently introduced an analytical strategy that simplifies many of these analyses, and provides the added potential of linking compositional information with physical property information. This aspect can be used to facilitate equation of state development for the complex fluids. In addition to chemical characterization, the approach provides the ability to calculate thermodynamic properties for such complex heterogeneous streams. The technique is based on the advanced distillation curve (ADC) metrology, which separates a complex fluid by distillation into fractions that are sampled, and for which thermodynamically consistent temperatures are measured at atmospheric pressure. The collected sample fractions can be analyzed by any method that is appropriate. The analytical methods we have applied include gas chromatography (with flame ionization, mass spectrometric and sulfur chemiluminescence detection), thin layer chromatography, FTIR, corrosivity analysis, neutron activation analysis and cold neutron prompt gamma activation analysis. By far, the most widely used analytical technique we have used with the ADC is gas chromatography. This has enabled us to study finished fuels (gasoline, diesel fuels, aviation fuels, rocket propellants), crude oils (including a crude oil made from swine manure) and waste oil streams (used automotive and transformer oils). In this special issue of the Journal of Chromatography, specifically dedicated to extraction technologies, we describe the essential features of the advanced distillation curve metrology as an analytical strategy for complex fluids. Published by Elsevier B.V.

  11. Development of evaluation method for software safety analysis techniques

    International Nuclear Information System (INIS)

    Huang, H.; Tu, W.; Shih, C.; Chen, C.; Yang, W.; Yih, S.; Kuo, C.; Chen, M.

    2006-01-01

    Full text: Following the massive adoption of digital Instrumentation and Control (I and C) systems in nuclear power plants (NPPs), various Software Safety Analysis (SSA) techniques are used to evaluate NPP safety when adopting an appropriate digital I and C system, and thereby to reduce risk to an acceptable level. However, each technique has its specific advantages and disadvantages. If two or more techniques can be incorporated in a complementary way, the SSA combination becomes more acceptable. Consequently, if proper evaluation criteria are available, the analyst can choose an appropriate technique combination for the analysis on the basis of available resources. This research evaluated the currently applicable software safety analysis techniques, such as Preliminary Hazard Analysis (PHA), Failure Modes and Effects Analysis (FMEA), Fault Tree Analysis (FTA), Markov chain modeling, Dynamic Flowgraph Methodology (DFM), and simulation-based model analysis, and then determined indexes reflecting their characteristics, which include dynamic capability, completeness, achievability, detail, signal/noise ratio, complexity, and implementation cost. These indexes may help decision makers and software safety analysts to choose the best SSA combination and arrange their own software safety plans. With the proposed method, analysts can evaluate various SSA combinations for a specific purpose. According to the case study results, the traditional PHA + FMEA + FTA (with failure rate) + Markov chain modeling (without transfer rate) combination is not competitive, due to the dilemma of obtaining acceptable software failure rates. However, the systematic architecture of FTA and Markov chain modeling is still valuable for revealing the software fault structure. The system-centric techniques, such as DFM and simulation-based model analysis, show advantages in dynamic capability, achievability, detail, and signal/noise ratio. However, their disadvantages are completeness and complexity

  12. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    Science.gov (United States)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

    In traditional reliability evaluation of machine center components, the component reliability model exhibits deviation and the evaluation result is low because failure propagation is overlooked. To rectify these problems, a new reliability evaluation method based on cascading failure analysis and failure-influence-degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure influence degrees of the system components are assessed by the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, and shows the following: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component is positively correlated with its failure influence degree, which provides a theoretical basis for reliability allocation of the machine center system.
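
The failure-influence scoring step can be sketched with networkx; the components and edges are hypothetical, and running PageRank on the reversed graph mirrors the adjacency-matrix-transpose step, so components whose failure propagates widely score high:

```python
import networkx as nx

# Edge u -> v means "failure of u can cascade to v" (hypothetical links).
G = nx.DiGraph()
G.add_edges_from([
    ("coolant_pump", "spindle"),
    ("coolant_pump", "guideway"),
    ("spindle", "tool_holder"),
    ("servo_drive", "ball_screw"),
    ("ball_screw", "guideway"),
])

# On the reversed graph, a node "receives" rank from everything it can break,
# so the PageRank score acts as a failure-influence degree.
influence = nx.pagerank(G.reverse(), alpha=0.85)
for comp, score in sorted(influence.items(), key=lambda kv: -kv[1]):
    print(f"{comp:>13s}: {score:.3f}")
```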

  13. Discriminatory components retracing strategy for monitoring the preparation procedure of Chinese patent medicines by fingerprint and chemometric analysis.

    Directory of Open Access Journals (Sweden)

    Shuai Yao

    Full Text Available Chinese patent medicines (CPMs), generally prepared from several traditional Chinese medicines (TCMs) in accordance with a specific process, are the typical delivery form of TCMs in Asia. To date, quality control of CPMs has typically focused on the evaluation of the final products using fingerprint techniques and multi-component quantification, but rarely on monitoring the whole preparation process, which is considered more important for ensuring the quality of CPMs. In this study, a novel and effective "retracing" strategy based on HPLC fingerprints and chemometric analysis was proposed, with Shenkang injection (SKI) serving as an example, to achieve quality control of the whole preparation process. The chemical fingerprints were established first and then analyzed by similarity analysis, principal component analysis (PCA) and partial least squares-discriminant analysis (PLS-DA) to evaluate the quality and to explore discriminatory components. As a result, the holistic inconsistencies of ninety-three batches of SKIs were identified, and five discriminatory components, including emodic acid, gallic acid, caffeic acid, chrysophanol-O-glucoside, and p-coumaroyl-O-galloyl-glucose, were labeled as representative targets to explain the retracing strategy. Through analysis of the variation of these targets in the corresponding semi-products (ninety-three batches), intermediates (thirty-three batches), and raw materials, successively, the origins of the discriminatory components were determined and some crucial influencing factors were identified, including the raw materials, the coextraction temperature, the sterilizing conditions, and so on. Meanwhile, a reference fingerprint was established and subsequently applied to guide manufacturing. It was suggested that the production process should be standardized by taking the concentration of the discriminatory components as the diagnostic marker to ensure stable and consistent quality for multi

  14. Analysis of archaeological pieces with nuclear techniques

    International Nuclear Information System (INIS)

    Tenorio, D.

    2002-01-01

    In this work, nuclear techniques such as Neutron Activation Analysis, PIXE, X-ray fluorescence analysis, Metallography, Uranium series and Rutherford Backscattering, for use in the analysis of archaeological specimens and materials, are described. Some published works and theses on the analysis of different Mexican and Mesoamerican archaeological sites are also cited. (Author)

  15. Advanced Surveillance, Diagnostic and Prognostic Techniques in Monitoring Structures, Systems and Components in Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-09-15

    direction of research, development and demonstration in this area. The technologies discussed in this project are intended to establish the state of the art in surveillance, diagnostics and prognostics (SDP) technologies for equipment and process health monitoring in nuclear facilities. It is also intended to identify technology gaps and research needs of the nuclear industry in the area of SDP. The report draws on the conventional SDP technologies, as well as the latest tools, algorithms and techniques that have emerged over the last few years, especially in enabling technologies including fast data acquisition, data storage, data qualification and data analysis algorithms, such as empirical and physical modelling techniques. These new tools have made it possible to identify problems earlier and with better resolution. The significance of the material presented in this report is that it contributes not only to the current needs of the nuclear industry but also to the design improvements of the next generation of reactors. For example, the nuclear industry is currently striving to operate the plants for up to 80 years or more, as the value of nuclear assets has risen in recent years, resulting partly from environmental concerns with fossil energy production, as well as increased future demand for base load electricity. This long term operation (LTO) or life extension goal of the nuclear industry has stimulated renewed interest in more frequent monitoring of equipment to guard against ageing effects, not to mention the economic benefits that SDP implementation can produce, and contributions to radiation exposure that is as low as reasonably achievable, reduction of human errors, and optimized maintenance. Together with capabilities that enhance situational awareness, the technologies described in this report will enable more holistic management of plant structures, systems and components (SSCs), maintain high capacity factor in LTO and enable higher levels of safe operation

  17. Component mode synthesis in structural dynamics

    International Nuclear Information System (INIS)

    Reddy, G.R.; Vaze, K.K.; Kushwaha, H.S.

    1993-01-01

    In the seismic analysis of nuclear reactor structures and equipment, the eigensolution requires large computer time. Component mode synthesis is an efficient technique with which one can evaluate the dynamic characteristics of a large structure with minimum computer time. It therefore makes possible a coupled analysis of structure and equipment which takes the interaction effects into account. In this method, a large structure is divided into small substructures, and the dynamic characteristics of each individual substructure are determined. The dynamic characteristics of the entire structure are then evaluated by synthesising the individual substructure characteristics. In this paper, component mode synthesis has been applied to the analysis of a tall heavy water upgrading tower. The use of fixed-interface normal modes, constraint modes and attachment modes in component mode synthesis, using the energy principle and using Ritz vectors, is discussed. The validity of the method is established by solving a fixed-fixed beam and comparing the results with those obtained by the classical method. The eigenvalue problem has been solved using the simultaneous iteration method. (author)
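
A minimal fixed-interface (Craig-Bampton style) reduction sketch for one substructure, assuming numpy/scipy; the lumped spring-mass chain, the interior/boundary partition, and the number of kept modes are illustrative choices:

```python
import numpy as np
from scipy.linalg import eigh

# Small spring-mass chain; the last DOF is the interface to the rest of
# the structure (purely illustrative matrices).
n, kc = 6, 1e5
K = 2 * kc * np.eye(n) - kc * (np.eye(n, k=1) + np.eye(n, k=-1))
M = np.eye(n)
ii, bb = slice(0, n - 1), slice(n - 1, n)   # interior / boundary partition

# Fixed-interface normal modes: eigenproblem of the interior block.
vals, Phi = eigh(K[ii, ii], M[ii, ii])
Phi = Phi[:, :2]                            # keep 2 component modes

# Constraint modes: static interior deflection for unit boundary motion.
Psi = -np.linalg.solve(K[ii, ii], K[ii, bb])

# Craig-Bampton transformation and reduced system matrices.
T = np.block([[Phi, Psi],
              [np.zeros((1, 2)), np.eye(1)]])
K_red, M_red = T.T @ K @ T, T.T @ M @ T
print("reduced-model natural frequencies (rad/s):",
      np.sqrt(eigh(K_red, M_red)[0]))
```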

  18. Principal Component Analysis - A Powerful Tool in Computing Marketing Information

    Directory of Open Access Journals (Sweden)

    Constantin C.

    2014-12-01

    Full Text Available This paper presents instrumental research on a powerful multivariate data analysis method which can be used by researchers to obtain the valuable information decision makers need in order to solve the marketing problems a company faces. The literature stresses the need to avoid the multicollinearity phenomenon in multivariate analysis, and the ability of Principal Component Analysis (PCA) to reduce a number of variables that may be correlated with each other to a small number of uncorrelated principal components. In this respect, the paper presents step by step the process of applying PCA in marketing research when a large number of naturally collinear variables is used.

  19. Cytogenetic analysis of quinoa chromosomes using nanoscale imaging and spectroscopy techniques

    Science.gov (United States)

    Yangquanwei, Zhong; Neethirajan, Suresh; Karunakaran, Chithra

    2013-11-01

    Here we present a high-resolution chromosomal spectral map derived from synchrotron-based soft X-ray spectromicroscopy applied to a quinoa species. The label-free characterization of quinoa metaphase chromosomes shows that they consist of organized substructures of DNA-protein complex. The analysis of chromosome spectra using the scanning transmission X-ray microscope (STXM), and the superposition of the resulting pattern with atomic force microscopy (AFM) and scanning electron microscopy (SEM) images, proves that it is possible to precisely locate gene loci and the DNA packaging inside the chromosomes. STXM has been successfully used to distinguish and quantify the DNA and protein components inside the quinoa chromosomes by visualizing the interphase at up to 30-nm spatial resolution. Our study represents a successful attempt at non-intrusive interrogation of chromosomes, integrating synchrotron STXM and AFM imaging techniques. The methodology developed for 3-D imaging of chromosomes with chemical specificity and temporal resolution will allow these nanoscale imaging tools to emerge from scientific research and development into broad practical applications such as gene loci tools and biomarker libraries.

  20. Analysis of Surface Water Pollution in the Kinta River Using Multivariate Technique

    International Nuclear Information System (INIS)

    Hamza Ahmad Isiyaka; Hafizan Juahir

    2015-01-01

    This study aims to investigate the spatial variation in the characteristics of the water quality monitoring sites, identify the most significant parameters and the major possible sources of pollution, and apportion the source categories in the Kinta River. 31 parameters collected from eight monitoring sites over eight years (2006-2013) were employed. The eight monitoring stations were spatially grouped into three independent clusters in a dendrogram. A drastic reduction in the number of monitored parameters, from 31 to eight and nine significant parameters (P<0.05), was achieved using forward stepwise and backward stepwise discriminant analysis (DA). Principal component analysis (PCA) accounted for more than 76 % of the total variance and attributed the sources of pollution to anthropogenic and natural processes. Source apportionment using combined multiple linear regression and principal component scores indicates that 41 % of the total pollution load is from rock weathering and untreated waste water, 26 % from waste discharge, 24 % from surface runoff and 7 % from faecal waste. This study proposes a reduction in the number of monitoring stations and parameters for cost-effective and time-efficient monitoring, and shows that multivariate techniques can provide a simple representation of complex and dynamic water quality characteristics. (author)

  1. A noise reduction technique based on nonlinear kernel function for heart sound analysis.

    Science.gov (United States)

    Mondal, Ashok; Saxena, Ishan; Tang, Hong; Banerjee, Poulami

    2017-02-13

    The main difficulty encountered in the interpretation of cardiac sound is the interference of noise. The contaminating noise obscures relevant information that is useful for the recognition of heart diseases. The unwanted signals are produced mainly by the lungs and the surrounding environment. In this paper, a novel heart sound de-noising technique is introduced based on a combined framework of wavelet packet transform (WPT) and singular value decomposition (SVD). The most informative node of the wavelet tree is selected on the criterion of mutual information measurement. Next, the coefficients corresponding to the selected node are processed by the SVD technique to suppress the noisy component of the heart sound signal. To justify the efficacy of the proposed technique, several experiments have been conducted with a heart sound dataset, including normal and pathological cases at different signal-to-noise ratios. The significance of the method is validated by statistical analysis of the results. The biological information preserved in the de-noised heart sound (HS) signal is evaluated by the k-means clustering algorithm and Fit Factor calculation. The overall results show that the proposed method is superior to the baseline methods.
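
A hedged sketch of the WPT + SVD idea with pywt and numpy; node selection here uses maximum energy as a stand-in for the paper's mutual-information criterion, and the test signal, wavelet, depth, and retained rank are illustrative:

```python
import numpy as np
import pywt
from scipy.linalg import hankel

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 1024)
hs = np.sin(2 * np.pi * 30 * t) * np.exp(-((t - 0.3) / 0.05) ** 2)  # S1-like burst
noisy = hs + 0.2 * rng.standard_normal(t.size)                      # ambient noise

# Wavelet packet decomposition; pick the highest-energy level-3 node
# (energy stands in for the paper's mutual-information selection).
wp = pywt.WaveletPacket(data=noisy, wavelet="db4", maxlevel=3)
node = max(wp.get_level(3), key=lambda nd: np.sum(nd.data ** 2))

# Truncated-SVD filtering of the selected node's coefficients.
c = node.data
H = hankel(c[: len(c) // 2], c[len(c) // 2 - 1:])
U, s, Vt = np.linalg.svd(H, full_matrices=False)
s[2:] = 0.0                                      # keep the 2 dominant components
Hd = (U * s) @ Vt
# Average anti-diagonals to map the rank-reduced matrix back to a signal.
denoised_c = np.array([np.mean(Hd[::-1, :].diagonal(i))
                       for i in range(-Hd.shape[0] + 1, Hd.shape[1])])

# Reconstruct from the processed node alone.
wp_out = pywt.WaveletPacket(data=None, wavelet="db4", maxlevel=3)
wp_out[node.path] = denoised_c
clean = wp_out.reconstruct(update=False)[: t.size]
```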

  2. A Note on McDonald's Generalization of Principal Components Analysis

    Science.gov (United States)

    Shine, Lester C., II

    1972-01-01

    It is shown that McDonald's generalization of Classical Principal Components Analysis to groups of variables maximally channels the total variance of the original variables through the groups of variables acting as groups. An equation is obtained for determining the vectors of correlations of the L2 components with the original variables.…

  3. Separation of GRACE geoid time-variations using Independent Component Analysis

    Science.gov (United States)

    Frappart, F.; Ramillien, G.; Maisongrande, P.; Bonnet, M.

    2009-12-01

    Independent Component Analysis (ICA) is a blind separation method based on the simple assumptions of the independence of the sources and the non-Gaussianity of the observations. An approach based on this numerical method is used here to extract hydrological signals over land and ocean from the polluting striping noise, due to orbit repetitiveness, present in the GRACE global mass anomalies. We took advantage of the availability of monthly Level-2 solutions from three official providers (i.e., CSR, JPL and GFZ) that can be considered as different observations of the same phenomenon. The efficiency of the methodology is first demonstrated on a synthetic case. Applied to one month of GRACE solutions, it clearly separates the total water storage change from the meridionally oriented spurious gravity signals over the continents, but not over the oceans. For continental water storage, this technique gives results equivalent to those of the destriping method for the hydrological patterns, with less smoothing. The methodology is then used to filter the complete series of the 2002-2009 GRACE solutions.

  4. Advanced Techniques of Stress Analysis

    Directory of Open Access Journals (Sweden)

    Simion TATARU

    2013-12-01

    Full Text Available This article aims to verify a stress analysis technique based on 3D models, making a comparison with the traditional technique which utilizes a model built directly in the stress analysis program. This comparison of the two methods is made with reference to the rear fuselage of the IAR-99 aircraft, a structure with a high degree of complexity which allows a meaningful evaluation of both approaches. Three updated databases are envisaged: the database containing the idealized model obtained using ANSYS by working directly from the documentation, without automatic generation of nodes and elements (with few exceptions); the rear fuselage database (produced at this stage) obtained with Pro/ENGINEER; and the one obtained by using ANSYS with the second database. Then, each of the three databases will be used according to arising necessities. The main objective is to develop the parameterized model of the rear fuselage using the computer-aided design software Pro/ENGINEER. A review of research regarding the use of virtual reality with interactive analysis performed by the finite element method is made to show the state-of-the-art achieved in this field.

  5. Application of the failure modes and effects analysis technique to the emergency cooling system of an experimental nuclear power plant

    International Nuclear Information System (INIS)

    Conceicao Junior, Osmar; Silva, Antonio Teixeira e

    2009-01-01

    This study consists of the application of failure modes and effects analysis (FMEA), a hazard identification and risk assessment technique, to the emergency cooling system (ECS) of an experimental nuclear power plant. This technique was chosen because of its detailed analysis of each component of the system, enabling the identification of all possible failure modes and their related consequences (in order of importance), and allowing the designer to improve the system, maximizing its safety and reliability. Through the application of this methodology, it was observed that the ECS is an intrinsically safe system, notwithstanding the modifications proposed. (author)
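
The bookkeeping behind an FMEA ranking can be sketched in a few lines; the failure modes and 1-10 scores below are hypothetical, and RPN = severity x occurrence x detection is the usual risk priority number used to order the findings:

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    component: str
    mode: str
    severity: int      # 1 (negligible) .. 10 (catastrophic)
    occurrence: int    # 1 (remote)     .. 10 (frequent)
    detection: int     # 1 (certain)    .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        """Risk priority number: severity x occurrence x detection."""
        return self.severity * self.occurrence * self.detection

# Hypothetical ECS failure modes, not taken from the paper.
modes = [
    FailureMode("emergency pump", "fails to start", 9, 3, 4),
    FailureMode("check valve", "stuck closed", 8, 2, 6),
    FailureMode("heat exchanger", "tube fouling", 5, 4, 3),
]
for fm in sorted(modes, key=lambda m: -m.rpn):
    print(f"{fm.component:>16s} | {fm.mode:<15s} | RPN = {fm.rpn}")
```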

  6. EXAFS analysis of uranium speciation in plants using the difference technique

    International Nuclear Information System (INIS)

    Rossberg, A.; Reich, T.; Guenther, A.; Bernhard, G.

    2002-01-01

    An EXAFS spectrum is the sum of the signals of scattering contributions from single atoms and of multiple scattering (MS) between atoms. The scattering contributions with high intensity are the major components of an EXAFS spectrum, and those with low intensity are the minor components. A chi-squared fit algorithm is not sensitive enough to the minor components. A difference technique has been developed /1/ to isolate the minor components from the whole EXAFS spectrum and thus increase the accuracy in determining their EXAFS structural parameters. (orig.)

  7. Functional principal component analysis of glomerular filtration rate curves after kidney transplant.

    Science.gov (United States)

    Dong, Jianghu J; Wang, Liangliang; Gill, Jagbir; Cao, Jiguo

    2017-01-01

    This article is motivated by longitudinal clinical data on kidney transplant recipients, where kidney function progression is recorded as the estimated glomerular filtration rates at multiple time points post kidney transplantation. We propose to use the functional principal component analysis method to explore the major sources of variation in glomerular filtration rate curves. We find that the estimated functional principal component scores can be used to cluster glomerular filtration rate curves. Ordering functional principal component scores can detect abnormal glomerular filtration rate curves. Finally, functional principal component analysis can effectively estimate missing glomerular filtration rate values and predict future glomerular filtration rate values.

  8. Computer compensation for NMR quantitative analysis of trace components

    International Nuclear Information System (INIS)

    Nakayama, T.; Fujiwara, Y.

    1981-01-01

    A computer program has been written that determines trace components and separates overlapping components in multicomponent NMR spectra. This program uses the Lorentzian curve as the theoretical curve for NMR spectra. The coefficients of the Lorentzian are determined by the method of least squares. Systematic errors such as baseline/phase distortion are compensated, and random errors are smoothed by taking moving averages, so that these processes contribute substantially to decreasing the accumulation time of spectral data. The accuracy of quantitative analysis of trace components has been improved by two significant figures. This program was applied to determining the abundance of 13C and the saponification degree of PVA.
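
A minimal sketch of the underlying least-squares step with scipy: two overlapping Lorentzian lines plus a linear baseline are fitted so the weak (trace) line can be quantified; the positions, widths, and moving-average window are illustrative, not the program's actual parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, x0, gamma):
    return amp * gamma**2 / ((x - x0) ** 2 + gamma**2)

def model(x, a1, c1, g1, a2, c2, g2, b0, b1):
    """Two Lorentzian lines plus a linear baseline (drift compensation)."""
    return (lorentzian(x, a1, c1, g1) + lorentzian(x, a2, c2, g2)
            + b0 + b1 * x)

x = np.linspace(-5, 5, 800)
rng = np.random.default_rng(6)
y = model(x, 10.0, -0.5, 0.4, 0.3, 0.8, 0.3, 0.1, 0.02)   # trace amp = 0.3
y += 0.03 * rng.standard_normal(x.size)                   # random noise,
y = np.convolve(y, np.ones(5) / 5, mode="same")           # then moving average

p0 = [8, -0.4, 0.5, 0.2, 0.7, 0.4, 0, 0]                  # rough initial guesses
popt, pcov = curve_fit(model, x, y, p0=p0)
print("trace-component amplitude:", popt[3].round(4))     # close to 0.3
```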

  9. Profiling of Piper betle Linn. cultivars by direct analysis in real time mass spectrometric technique.

    Science.gov (United States)

    Bajpai, Vikas; Sharma, Deepty; Kumar, Brijesh; Madhusudanan, K P

    2010-12-01

    Piper betle Linn. is a traditional plant associated with the Asian and southeast Asian cultures. Its use is also recorded in folk medicines in these regions. Several of its medicinal properties have recently been proven. Phytochemical analysis showed the presence of mainly terpenes and phenols in betel leaves. These constituents vary in the different cultivars of Piper betle. In this paper we have attempted to profile eight locally available betel cultivars using the recently developed mass spectral ionization technique of direct analysis in real time (DART). Principal component analysis has also been employed to analyze the DART MS data of these betel cultivars. The results show that the cultivars of Piper betle could be differentiated using DART MS data. Copyright © 2010 John Wiley & Sons, Ltd.

  10. Study on determination of durability analysis process and fatigue damage parameter for rubber component

    International Nuclear Information System (INIS)

    Moon, Seong In; Cho, Il Je; Woo, Chang Su; Kim, Wan Doo

    2011-01-01

    Rubber components, which have been widely used in the automotive industry as anti-vibration components for many years, are subjected to fluctuating loads, often failing due to the nucleation and growth of defects or cracks. To prevent such failures, it is necessary to understand the fatigue failure mechanism of rubber materials and to evaluate the fatigue life of rubber components. The objective of this study is to develop a durability analysis process for vulcanized rubber components that can predict fatigue life at the initial product design step. A method for determining the nonlinear material constants for FE analysis is proposed. Also, to investigate the applicability of commonly used damage parameters, fatigue tests and corresponding finite element analyses were carried out, and normal and shear strain were proposed as the fatigue damage parameters for rubber components. Fatigue analysis for automotive rubber components was performed, and the durability analysis process was reviewed.

  11. What’s hampering measurement invariance: Detecting non-invariant items using clusterwise simultaneous component analysis

    Directory of Open Access Journals (Sweden)

    Kim eDe Roover

    2014-06-01

    Full Text Available The issue of measurement invariance is ubiquitous in the behavioral sciences nowadays as more and more studies yield multivariate multigroup data. When measurement invariance cannot be established across groups, this is often due to different loadings on only a few items. Within the multigroup CFA framework, methods have been proposed to trace such non-invariant items, but these methods have some disadvantages in that they require researchers to run a multitude of analyses and in that they imply assumptions that are often questionable. In this paper, we propose an alternative strategy which builds on clusterwise simultaneous component analysis (SCA). Clusterwise SCA, being an exploratory technique, assigns the groups under study to a few clusters based on differences and similarities in the covariance matrices, and thus based on the component structure of the items. Non-invariant items can then be traced by comparing the cluster-specific component loadings via congruence coefficients, which is far more parsimonious than comparing the component structures of all the separate groups. In this paper we present a heuristic for this procedure. Afterwards, one can return to the multigroup CFA framework and check whether removing the non-invariant items, or removing some of the equality restrictions for these items, yields satisfactory invariance test results. An empirical application concerning cross-cultural emotion data is used to demonstrate that this novel approach is useful and can co-exist with the traditional CFA approaches.
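
The loading comparison reduces to Tucker's congruence coefficient between cluster-specific loading vectors; a minimal sketch with hypothetical loadings, where a clearly lower coefficient flags the item driving non-invariance:

```python
import numpy as np

def congruence(x: np.ndarray, y: np.ndarray) -> float:
    """Tucker's congruence coefficient between two loading vectors."""
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

# Hypothetical loadings of one component in two clusters; item 5 deviates.
loadings_cluster1 = np.array([0.71, 0.68, 0.65, 0.70, 0.15])
loadings_cluster2 = np.array([0.69, 0.72, 0.66, 0.68, 0.74])

phi = congruence(loadings_cluster1, loadings_cluster2)
print(f"congruence = {phi:.3f}")   # values well below ~0.85-0.94 are commonly
                                   # read as weak similarity between loadings
```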

  12. A new analysis technique for microsamples

    International Nuclear Information System (INIS)

    Boyer, R.; Journoux, J.P.; Duval, C.

    1989-01-01

    For many decades, isotopic analysis of uranium or plutonium has been performed by mass spectrometry. The most recent analytical techniques, using the counting method or a plasma torch combined with a mass spectrometer (ICP-MS), have not yet reached a greater degree of precision than the older methods in this field. The two means of ionization for isotopic analysis - electron bombardment of atoms or molecules (gas ion source) and the thermal effect (thermoionic source) - are compared, revealing some inconsistency between the quantity of sample necessary for analysis and the luminosity. In fact, the quantity of sample necessary for the gas source mass spectrometer is 10 to 20 times greater than that for the thermoionization spectrometer, while the sample consumption is between 10^5 and 10^6 times greater. This proves that almost the entire sample is not needed for the measurement; it is only required because of the introduction system of the gas spectrometer. The new analysis technique, referred to as "Microfluorination", corrects this anomaly and exploits the advantages of the electron bombardment method of ionization.

  13. A component analysis of positive behaviour support plans.

    Science.gov (United States)

    McClean, Brian; Grey, Ian

    2012-09-01

    Positive behaviour support (PBS) emphasises multi-component interventions by natural intervention agents to help people overcome challenging behaviours. This paper investigates which components are most effective and which factors might mediate effectiveness. Sixty-one staff working with individuals with intellectual disability and challenging behaviours completed longitudinal competency-based training in PBS. Each staff participant conducted a functional assessment and developed and implemented a PBS plan for one prioritised individual. A total of 1,272 interventions were available for analysis. Measures of challenging behaviour were taken at baseline, after 6 months, and at an average of 26 months follow-up. There was a significant reduction in the frequency, management difficulty, and episodic severity of challenging behaviour over the duration of the study. Escape was identified by staff as the most common function, accounting for 77% of challenging behaviours. The most commonly implemented components of intervention were setting event changes and quality-of-life-based interventions. Only treatment acceptability was found to be related to decreases in behavioural frequency. No single intervention component was found to have a greater association with reductions in challenging behaviour.

  14. Motion Capture Technique Applied Research in Sports Technique Diagnosis

    Directory of Open Access Journals (Sweden)

    Zhiwu LIU

    2014-09-01

    Full Text Available The paper describes the definition of a motion capture technology system and investigates its components; key parameters are obtained from the motion technique, quantitative analyses are made of technical movements, and a motion capture method is proposed for sports technique diagnosis. The motion capture procedure includes calibrating the system, attaching landmarks to the tester, capturing the trajectories, and analyzing the collected data.

  15. Handbook of Qualitative Research Techniques and Analysis in Entrepreneurship

    DEFF Research Database (Denmark)

    One of the most challenging tasks in the research design process is choosing the most appropriate data collection and analysis techniques. This Handbook provides a detailed introduction to five qualitative data collection and analysis techniques pertinent to exploring entrepreneurial phenomena....

  16. New approach based on fuzzy logic and principal component analysis for the classification of two-dimensional maps in health and disease. Application to lymphomas.

    Science.gov (United States)

    Marengo, Emilio; Robotti, Elisa; Righetti, Pier Giorgio; Antonucci, Francesca

    2003-07-04

    Two-dimensional (2D) electrophoresis is the most widespread technique for the separation of proteins in biological systems. This technique produces 2D maps of high complexity, which creates difficulties in the comparison of different samples. The method proposed in this paper for the comparison of different 2D maps can be summarised in four steps: (a) digitalisation of the image; (b) fuzzification of the digitalised map in order to account for the variability of the two-dimensional electrophoretic separation; (c) decoding of the previously obtained fuzzy maps by principal component analysis, in order to reduce the system dimensionality; (d) classification analysis (linear discriminant analysis), in order to separate the samples contained in the dataset according to the classes present in said dataset. This method was applied to a dataset constituted by eight samples: four belonging to healthy human lymph-nodes and four deriving from non-Hodgkin lymphomas. The amount of fuzzification of the original map is governed by the sigma parameter. The larger the value, the fuzzier the resulting transformed map. The effect of the fuzzification parameter was investigated, the optimal results being obtained for sigma = 1.75 and 2.25. Principal component analysis and linear discriminant analysis allowed the separation of the two classes of samples without any misclassification.
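
    Steps (c) and (d) can be sketched with scikit-learn: PCA reduces the (fuzzified) map vectors to a few scores, and linear discriminant analysis separates the classes. The data below are random stand-ins for the eight map vectors, so only the pipeline, not the result, mirrors the paper.

    ```python
    # Hedged sketch of the PCA -> LDA pipeline on synthetic "map" vectors.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(1)
    healthy = rng.normal(0.0, 1.0, size=(4, 500))    # 4 samples, 500 map pixels
    lymphoma = rng.normal(0.5, 1.0, size=(4, 500))
    X = np.vstack([healthy, lymphoma])
    y = np.array([0] * 4 + [1] * 4)

    scores = PCA(n_components=3).fit_transform(X)    # decode maps into a few PCs
    lda = LinearDiscriminantAnalysis().fit(scores, y)
    print("training accuracy:", lda.score(scores, y))
    ```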

  17. Characterization of Deficiencies in the Frequency Domain Forced Response Analysis Technique for Turbine Bladed Disks

    Science.gov (United States)

    Brown, Andrew M.; Schmauch, Preston

    2012-01-01

    Turbine blades in rocket and jet engine turbomachinery experience enormous harmonic loading conditions. These loads result from the integer number of upstream and downstream stator vanes as well as the other turbine stages. The standard technique for forced response analysis to assess structural integrity is to decompose a CFD generated flow field into its harmonic components, and to then perform a frequency response analysis at the problematic natural frequencies. Recent CFD analysis and water-flow testing at NASA/MSFC, though, indicates that this technique may miss substantial harmonic and non-harmonic excitation sources that become present in complex flows. These complications suggest the question of whether frequency domain analysis is capable of capturing the excitation content sufficiently. Two studies comparing frequency response analysis with transient response analysis, therefore, have been performed. The first is of a bladed disk with each blade modeled by simple beam elements. It was hypothesized that the randomness and other variation from the standard harmonic excitation would reduce the blade structural response, but the results showed little reduction. The second study was of a realistic model of a bladed-disk excited by the same CFD used in the J2X engine program. The results showed that the transient analysis results were up to 10% higher for "clean" nodal diameter excitations and six times larger for "messy" excitations, where substantial Fourier content around the main harmonic exists.
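
    The standard frequency-domain procedure can be sketched in a few lines: decompose a periodic forcing into its Fourier harmonics and evaluate the steady-state response of a single-degree-of-freedom oscillator at each harmonic. This is a schematic illustration with assumed values for the natural frequency, damping, and forcing content, not the NASA/MSFC analysis.

    ```python
    # Schematic harmonic decomposition and per-harmonic SDOF frequency response.
    import numpy as np

    fn, zeta = 500.0, 0.01                 # assumed natural frequency [Hz] and damping
    wn = 2 * np.pi * fn
    fs, T = 20000.0, 1.0
    t = np.arange(0, T, 1 / fs)
    # forcing with a "clean" engine-order tone plus extra content that a
    # single-harmonic analysis could miss
    f = np.sin(2 * np.pi * 490 * t) + 0.3 * np.sin(2 * np.pi * 983 * t)

    F = np.fft.rfft(f) / len(t) * 2        # one-sided harmonic amplitudes
    freqs = np.fft.rfftfreq(len(t), 1 / fs)
    w = 2 * np.pi * freqs
    H = 1.0 / np.sqrt((wn**2 - w**2) ** 2 + (2 * zeta * wn * w) ** 2)
    x_amp = np.abs(F) * H                  # per-harmonic steady-state response
    print("dominant response harmonic [Hz]:", freqs[np.argmax(x_amp)])
    ```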

  18. Sampling and chemical analysis of smoke gas components from the SP Industry Calorimeter

    Energy Technology Data Exchange (ETDEWEB)

    Maansson, M.; Blomqvist, P.; Isaksson, I.; Rosell, L.

    1995-12-31

    This report describes the sampling and chemical analysis of smoke gas components from combustion experiments performed in the SP Industry Calorimeter, in which continuous measurement of oxygen, carbon dioxide and carbon monoxide is an integrated part of the calorimeter system. On-line measurements of nitrogen oxides and of the total amount of unburnt hydrocarbons were performed. Hydrogen cyanide, hydrogen chloride and ammonia in the smoke were sampled and absorbed in impinger bottles and subsequently analyzed using wet chemical techniques. An adsorbent sampling system was designed to allow the identification and quantitative analysis of individual organic compounds in the smoke. Gas chromatography was utilized with a mass spectrometric detector for identification and an FID for quantification of the total amounts as well as of individual components. A procedure for cleaning the smoke gas duct between combustion experiments was designed and found to be effective. The materials studied were Nylon 66, polypropylene, polystyrene (with and without fire retardant), PVC, and chlorobenzene. A total of 19 large-scale tests were carried out. The mass of sample burnt ranged from 20 kg to 125 kg per experiment. 14 refs, 11 tabs

  19. Principal Component Analysis of Working Memory Variables during Child and Adolescent Development.

    Science.gov (United States)

    Barriga-Paulino, Catarina I; Rodríguez-Martínez, Elena I; Rojas-Benjumea, María Ángeles; Gómez, Carlos M

    2016-10-03

    Correlation and Principal Component Analysis (PCA) of behavioral measures from two experimental tasks (Delayed Match-to-Sample and Oddball), and of standard scores from a neuropsychological test battery (Working Memory Test Battery for Children), were performed on data from participants between 6 and 18 years old. The scores of the first extracted component were significantly correlated (p < .05) with most behavioral measures, suggesting some commonality in the processes of age-related change across the measured variables. The results suggest that this first component is related to age but also to individual differences during the cognitive maturation process across the childhood and adolescence stages. The fourth component appears to represent the speed-accuracy trade-off phenomenon, as it presents loadings with opposite signs for reaction times and errors.

  20. Quality assurance techniques for activation analysis

    International Nuclear Information System (INIS)

    Becker, D.A.

    1984-01-01

    The principles and techniques of quality assurance are applied to the measurement method of activation analysis. Quality assurance is defined to include quality control and quality assessment. Plans for quality assurance include consideration of: personnel; facilities; analytical design; sampling and sample preparation; the measurement process; standards; and documentation. Activation analysis concerns include: irradiation; chemical separation; counting/detection; data collection and analysis; and calibration. The types of standards discussed include calibration materials and quality assessment materials.

  1. Dependence between fusion temperatures and chemical components of a certain type of coal using classical, non-parametric and bootstrap techniques

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez-Manteiga, W.; Prada-Sanchez, J.M.; Fiestras-Janeiro, M.G.; Garcia-Jurado, I. (Universidad de Santiago de Compostela, Santiago de Compostela (Spain). Dept. de Estadistica e Investigacion Operativa)

    1990-11-01

    A statistical study of the dependence between various critical fusion temperatures of a certain kind of coal and its chemical components is carried out. As well as using classical dependence techniques (multiple, stepwise and PLS regression, principal components, canonical correlation, etc.) together with the corresponding inference on the parameters of interest, non-parametric regression and bootstrap inference are also performed. 11 refs., 3 figs., 8 tabs.
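
    As a rough illustration of the bootstrap inference mentioned above, the sketch below computes a pairs-bootstrap confidence interval for a regression slope on made-up data; the variable names and values are hypothetical stand-ins for a fusion temperature and a chemical component.

    ```python
    # Pairs bootstrap for a regression slope, on invented coal data.
    import numpy as np

    rng = np.random.default_rng(2)
    ash = rng.uniform(5, 25, size=40)                  # hypothetical component (%)
    temp = 1200 + 8.0 * ash + rng.normal(0, 30, 40)    # hypothetical fusion temp [deg C]

    def slope(x, y):
        return np.polyfit(x, y, 1)[0]

    boots = []
    for _ in range(2000):
        idx = rng.integers(0, len(ash), len(ash))      # resample (x, y) pairs
        boots.append(slope(ash[idx], temp[idx]))

    lo, hi = np.percentile(boots, [2.5, 97.5])
    print(f"slope = {slope(ash, temp):.2f}, 95% bootstrap CI = [{lo:.2f}, {hi:.2f}]")
    ```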

  2. Identify the Effective Wells in Determination of Groundwater Depth in Urmia Plain Using Principle Component Analysis

    Directory of Open Access Journals (Sweden)

    Sahar Babaei Hessar

    2017-06-01

    Full Text Available Introduction: Groundwater is the most important source of sanitary water for potable and household consumption, so continuous monitoring of the groundwater level plays an important role in water resource management. But because of the large amount of information involved, evaluation of the water table is a costly and time-consuming process, and in many studies much of the data is redundant or uninformative and can be neglected. The PCA technique is an optimized mathematical method that retains the data with the highest share of the variance while identifying less important data, reducing the original variables to a few components. In this technique, variation factors called principal components are identified by considering the data structure, and the variables that have the highest correlation coefficients with the principal components are extracted as a result of identifying the components that create the greatest variance. Materials and Methods: The study region has an area of approximately 962 km2 and is located between 37º 21´ N to 37º 49´ N and 44º 57´ E to 45º 16´ E in the West Azerbaijan province of Iran. This area lies along the mountainous north-west of the country, ending at the Urmia Lake plain, and has vast groundwater resources. Recently, however, the water table has fallen considerably because of over-exploitation driven by urbanization and increased agricultural and horticultural land use. In the present study, the annual water table datasets for 51 wells monitored by the Ministry of Energy during the statistical period 2002-2011 were used for data analysis. In order to identify the effective wells in the determination of groundwater level, the PCA technique was used. In this research, to compute the relative importance of each well, the 10 nearest neighbouring wells were identified for each one. The number of wells (p as a general rule must be less than or equal to the maximum number of

  3. Scalable Robust Principal Component Analysis Using Grassmann Averages

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi

    2016-01-01

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortu...

  4. Blind source separation dependent component analysis

    CERN Document Server

    Xiang, Yong; Yang, Zuyuan

    2015-01-01

    This book provides readers a complete and self-contained set of knowledge about dependent source separation, including the latest development in this field. The book gives an overview on blind source separation where three promising blind separation techniques that can tackle mutually correlated sources are presented. The book further focuses on the non-negativity based methods, the time-frequency analysis based methods, and the pre-coding based methods, respectively.
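
    For a concrete feel for blind source separation, the snippet below recovers synthetic sources from linear mixtures using FastICA from scikit-learn. This is a generic ICA illustration for independent sources; the book's focus on mutually correlated (dependent) sources requires the more specialized techniques it describes.

    ```python
    # Generic blind source separation demo: mix three sources, unmix with FastICA.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(3)
    t = np.linspace(0, 8, 2000)
    S = np.c_[np.sin(2 * t),                  # sinusoid
              np.sign(np.sin(3 * t)),         # square wave
              rng.laplace(size=2000)]         # heavy-tailed noise source
    A = rng.uniform(0.5, 1.5, size=(3, 3))    # unknown mixing matrix
    X = S @ A.T                               # observed mixtures

    S_est = FastICA(n_components=3, random_state=0).fit_transform(X)
    print("recovered sources shape:", S_est.shape)
    ```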

  5. Analysis and Classification of Acoustic Emission Signals During Wood Drying Using the Principal Component Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Ho Yang [Korea Research Institute of Standards and Science, Daejeon (Korea, Republic of); Kim, Ki Bok [Chungnam National University, Daejeon (Korea, Republic of)

    2003-06-15

    In this study, acoustic emission (AE) signals due to surface cracking and moisture movement in flat-sawn boards of oak (Quercus variabilis) during drying under ambient conditions were analyzed and classified using principal component analysis. The AE signals corresponding to surface cracking showed higher peak amplitude and peak frequency, and shorter rise time, than those corresponding to moisture movement. To reduce the multicollinearity among AE features and to extract the significant AE parameters, correlation analysis was performed. Over 99% of the variance of the AE parameters could be accounted for by the first four principal components. The classification feasibility and success rate were investigated for two statistical classifiers, one using six independent variables (AE parameters) and the other using six principal components. The statistical classifier based on the AE parameters achieved a success rate of 70.0%, while the classifier based on the principal components achieved 87.5%, which was considerably higher.

  6. Analysis and Classification of Acoustic Emission Signals During Wood Drying Using the Principal Component Analysis

    International Nuclear Information System (INIS)

    Kang, Ho Yang; Kim, Ki Bok

    2003-01-01

    In this study, acoustic emission (AE) signals due to surface cracking and moisture movement in flat-sawn boards of oak (Quercus variabilis) during drying under ambient conditions were analyzed and classified using principal component analysis. The AE signals corresponding to surface cracking showed higher peak amplitude and peak frequency, and shorter rise time, than those corresponding to moisture movement. To reduce the multicollinearity among AE features and to extract the significant AE parameters, correlation analysis was performed. Over 99% of the variance of the AE parameters could be accounted for by the first four principal components. The classification feasibility and success rate were investigated for two statistical classifiers, one using six independent variables (AE parameters) and the other using six principal components. The statistical classifier based on the AE parameters achieved a success rate of 70.0%, while the classifier based on the principal components achieved 87.5%, which was considerably higher.

  7. Nonparametric inference in nonlinear principal components analysis : exploration and beyond

    NARCIS (Netherlands)

    Linting, Mariëlle

    2007-01-01

    In the social and behavioral sciences, data sets often do not meet the assumptions of traditional analysis methods. Therefore, nonlinear alternatives to traditional methods have been developed. This thesis starts with a didactic discussion of nonlinear principal components analysis (NLPCA),

  8. Reliability analysis of a phaser measurement unit using a generalized fuzzy lambda-tau(GFLT) technique.

    Science.gov (United States)

    Komal

    2018-05-01

    Nowadays, power consumption is increasing day by day. To meet the demand for failure-free power, planning and implementation of an effective and reliable power management system is essential. The phasor measurement unit (PMU) is one of the key devices in wide area measurement and control systems. The reliable performance of the PMU assures a failure-free power supply for any power system. The purpose of the present study is therefore to analyse the reliability of a PMU used for the controllability and observability of power systems, utilizing the available uncertain data. In this paper, a generalized fuzzy lambda-tau (GFLT) technique is proposed for this purpose. In the GFLT, the components' uncertain failure and repair rates are fuzzified using fuzzy numbers of different shapes, such as triangular, normal, Cauchy, sharp gamma and trapezoidal. To select a suitable fuzzy number for quantifying data uncertainty, the opinions of system experts have been considered. The GFLT technique applies fault-tree analysis, the lambda-tau method, data fuzzified using different membership functions, and alpha-cut-based fuzzy arithmetic operations to compute several important reliability indices. Furthermore, a ranking of the system's critical components using the RAM-Index and a sensitivity analysis have also been performed. The developed technique may help to improve system performance significantly and can be applied to analyse the fuzzy reliability of other engineering systems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
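
    The alpha-cut fuzzy arithmetic at the core of lambda-tau-style analyses can be sketched briefly. The example below is a simplified illustration rather than the paper's GFLT implementation: it propagates triangular fuzzy failure rates through a series system, where rates add, and reports interval bounds at several alpha levels. All numbers are hypothetical.

    ```python
    # Alpha-cut interval arithmetic on triangular fuzzy failure rates.
    def tri_alpha_cut(a, m, b, alpha):
        """Interval of a triangular fuzzy number (a, m, b) at level alpha."""
        return a + alpha * (m - a), b - alpha * (b - m)

    # hypothetical component failure rates [per hour] as triangular fuzzy numbers
    components = [(1e-4, 2e-4, 3e-4), (5e-5, 8e-5, 1.2e-4)]

    for alpha in (0.0, 0.5, 1.0):
        cuts = [tri_alpha_cut(*c, alpha) for c in components]
        lam_lo = sum(lo for lo, _ in cuts)      # series system: rates add
        lam_hi = sum(hi for _, hi in cuts)
        print(f"alpha={alpha}: lambda_sys in [{lam_lo:.2e}, {lam_hi:.2e}] /h, "
              f"MTBF in [{1 / lam_hi:.0f}, {1 / lam_lo:.0f}] h")
    ```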

  9. Design and Fabrication Technique of the Key Components for Very High Temperature Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Ho Jin; Song, Ki Nam; Kim, Yong Wan

    2006-12-15

    The gas outlet temperature of a Very High Temperature Reactor (VHTR) may be beyond the capability of conventional metallic materials. The required gas outlet temperature of 950 °C will result in operating temperatures for metallic core components that in some cases approach very high temperatures. Materials capable of withstanding these temperatures must be prepared, or nonmetallic materials will be required for certain components. Ni-base alloys such as Alloy 617, Hastelloy X, XR, Incoloy 800H, and Haynes 230 are being investigated for application to components operated at high temperature. Currently available national and international codes and procedures need to be reviewed in order to design the components for HTGR/VHTR. Seven codes and procedures were reviewed, including five ASME codes and code cases, one French code (RCC-MR), and one British procedure (R5). The scope of the codes and code cases needs to be expanded to include materials with allowable temperatures of 950 °C and higher. The selection of compact heat exchanger technology depends on the operating conditions such as pressure, flow rate and temperature, but also on other parameters such as fouling, corrosion, compactness, weight, maintenance and reliability. Welding, brazing, and diffusion bonding are considered suitable joining processes for a heat exchanger operating under high-temperature, high-pressure conditions without leakage. Because VHTRs require high-temperature operation, various controlled materials, thick vessels, dissimilar metal joints, and precise control of the microstructure in weldments, more advanced joining processes are needed than for PWRs. Improved solid-state joining techniques are considered for the IHX fabrication. The weldability of Alloy 617 and Haynes 230 using GTAW and SMAW processes was investigated by CEA.

  10. Design and Fabrication Technique of the Key Components for Very High Temperature Reactor

    International Nuclear Information System (INIS)

    Lee, Ho Jin; Song, Ki Nam; Kim, Yong Wan

    2006-12-01

    The gas outlet temperature of a Very High Temperature Reactor (VHTR) may be beyond the capability of conventional metallic materials. The required gas outlet temperature of 950 °C will result in operating temperatures for metallic core components that in some cases approach very high temperatures. Materials capable of withstanding these temperatures must be prepared, or nonmetallic materials will be required for certain components. Ni-base alloys such as Alloy 617, Hastelloy X, XR, Incoloy 800H, and Haynes 230 are being investigated for application to components operated at high temperature. Currently available national and international codes and procedures need to be reviewed in order to design the components for HTGR/VHTR. Seven codes and procedures were reviewed, including five ASME codes and code cases, one French code (RCC-MR), and one British procedure (R5). The scope of the codes and code cases needs to be expanded to include materials with allowable temperatures of 950 °C and higher. The selection of compact heat exchanger technology depends on the operating conditions such as pressure, flow rate and temperature, but also on other parameters such as fouling, corrosion, compactness, weight, maintenance and reliability. Welding, brazing, and diffusion bonding are considered suitable joining processes for a heat exchanger operating under high-temperature, high-pressure conditions without leakage. Because VHTRs require high-temperature operation, various controlled materials, thick vessels, dissimilar metal joints, and precise control of the microstructure in weldments, more advanced joining processes are needed than for PWRs. Improved solid-state joining techniques are considered for the IHX fabrication. The weldability of Alloy 617 and Haynes 230 using GTAW and SMAW processes was investigated by CEA

  11. Dynamic Principal Component Analysis with Nonoverlapping Moving Window and Its Applications to Epileptic EEG Classification

    Directory of Open Access Journals (Sweden)

    Shengkun Xie

    2014-01-01

    Full Text Available Classification of electroencephalography (EEG) is the most useful diagnostic and monitoring procedure in epilepsy study. A reliable algorithm that can be easily implemented is the key to this procedure. In this paper, a novel signal feature extraction method based on dynamic principal component analysis and a nonoverlapping moving window is proposed. Along with this new technique, two detection methods based on the extracted sparse features are applied to signal classification. The obtained results demonstrate that the proposed methodologies are able to differentiate control from interictal EEGs for epilepsy diagnosis and to separate interictal from ictal EEGs for seizure detection. The approach yields high classification accuracy for both single-channel short-term EEGs and multichannel long-term EEGs. The classification performance of the method is also compared with other state-of-the-art techniques on the same datasets, and the effect of signal variability on the presented methods is studied.
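
    The windowing idea can be illustrated compactly: split a signal into nonoverlapping windows, treat the windows as observations, and extract principal components as features. The sketch below uses a synthetic signal and generic PCA; the paper's dynamic-PCA details will differ.

    ```python
    # Nonoverlapping-window PCA feature extraction on a synthetic EEG-like signal.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(4)
    n, win = 4096, 256
    eeg = np.sin(2 * np.pi * 10 * np.arange(n) / 256.0) + rng.normal(0, 1, n)

    windows = eeg.reshape(n // win, win)        # each row = one nonoverlapping window
    pca = PCA(n_components=4).fit(windows)
    features = pca.transform(windows)           # compact feature matrix for a classifier
    print("feature matrix shape:", features.shape,
          "explained variance:", pca.explained_variance_ratio_.round(2))
    ```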

  12. PM10 and gaseous pollutants trends from air quality monitoring networks in Bari province: principal component analysis and absolute principal component scores on a two years and half data set

    Science.gov (United States)

    2014-01-01

    Background The chemical composition of aerosols and particle size distributions are the most significant factors affecting air quality. In particular, exposure to finer particles can cause short- and long-term effects on human health. In the present paper, the trends of PM10 (particulate matter with aerodynamic diameter lower than 10 μm), CO, NOx (NO and NO2), benzene and toluene monitored in six monitoring stations of Bari province are shown. The data set was composed of bi-hourly means for all parameters (12 bi-hourly means per day for each parameter) and refers to the period from January 2005 to May 2007. The main aim of the paper is to provide a clear illustration of how large data sets from monitoring stations can give information about the number and nature of the pollutant sources, and above all to assess the contribution of the traffic source to the PM10 concentration level by using multivariate statistical techniques such as Principal Component Analysis (PCA) and Absolute Principal Component Scores (APCS). Results Comparing the night and day mean concentrations for each parameter points out a different night and day behavior for some parameters, such as CO, benzene and toluene, than for PM10. This suggests that CO, benzene and toluene concentrations are mainly connected with transport systems, whereas PM10 is mostly influenced by other factors. The statistical techniques identified three recurrent sources, associated with vehicular traffic and particulate transport, covering over 90% of the variance. The simultaneous analysis of the gases and PM10 underlined the differences between the sources of these pollutants. Conclusions The analysis of pollutant trends from large data sets and the application of multivariate statistical techniques such as PCA and APCS can give useful information about air quality and pollutant sources. This knowledge can provide useful advice for environmental policies in

  13. Intelligent Prediction of Soccer Technical Skill on Youth Soccer Player's Relative Performance Using Multivariate Analysis and Artificial Neural Network Techniques

    OpenAIRE

    Abdullah, M. R; Maliki, A. B. H. M; Musa, R. M; Kosni, N. A; Juahir, H

    2016-01-01

    This study aims to predict the potential pattern of soccer technical skill in Malaysian youth soccer players' relative performance using multivariate analysis and artificial neural network techniques. 184 male youth soccer players were recruited from a Malaysian soccer academy (average age = 15.2±2.0) and underwent physical fitness tests and assessments of anthropometry, maturity, motivation and the level of soccer-related skill. Unsupervised pattern recognition by principal component analysis (PCA) was used to ident...

  14. Independent Component Analysis in Multimedia Modeling

    DEFF Research Database (Denmark)

    Larsen, Jan

    2003-01-01

    Modeling of multimedia and multimodal data becomes increasingly important with the digitalization of the world. The objective of this paper is to demonstrate the potential of independent component analysis and blind source separation methods for modeling and understanding of multimedia data, which largely refers to text, images/video, audio and combinations of such data. We review a number of applications within single and combined media with the hope that this might provide inspiration for further research in this area. Finally, we provide a detailed presentation of our own recent work on modeling....

  15. Statistical techniques applied to aerial radiometric surveys (STAARS): series introduction and the principal-components-analysis method

    International Nuclear Information System (INIS)

    Pirkle, F.L.

    1981-04-01

    STAARS is a new series which is being published to disseminate information concerning statistical procedures for interpreting aerial radiometric data. The application of a particular data interpretation technique to geologic understanding for delineating regions favorable to uranium deposition is the primary concern of STAARS. Statements concerning the utility of a technique on aerial reconnaissance data as well as detailed aerial survey data will be included

  16. Principal component and spatial correlation analysis of spectroscopic-imaging data in scanning probe microscopy

    International Nuclear Information System (INIS)

    Jesse, Stephen; Kalinin, Sergei V

    2009-01-01

    An approach for the analysis of multi-dimensional, spectroscopic-imaging data based on principal component analysis (PCA) is explored. PCA selects and ranks relevant response components based on variance within the data. It is shown that for examples with small relative variations between spectra, the first few PCA components closely coincide with results obtained using model fitting, and this is achieved at rates approximately four orders of magnitude faster. For cases with strong response variations, PCA allows an effective approach to rapidly process, de-noise, and compress data. The prospects for PCA combined with correlation function analysis of component maps as a universal tool for data analysis and representation in microscopy are discussed.
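
    A minimal sketch of this workflow, with a synthetic data cube standing in for real spectroscopic-imaging data: reshape the (x, y, spectrum) cube into a (pixels x spectrum) matrix, apply PCA, and reshape the scores back into per-component spatial maps.

    ```python
    # PCA of a spectroscopic-imaging data cube (synthetic stand-in data).
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(5)
    nx, ny, nspec = 32, 32, 100
    base = np.exp(-((np.arange(nspec) - 50) / 10.0) ** 2)     # common spectral response
    cube = base + 0.05 * rng.normal(size=(nx, ny, nspec))     # small pixel-to-pixel variation

    X = cube.reshape(nx * ny, nspec)            # pixels become observations
    pca = PCA(n_components=3).fit(X)
    maps = pca.transform(X).reshape(nx, ny, 3)  # per-component score maps
    print("variance captured:", pca.explained_variance_ratio_.round(3))
    ```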

  17. Probabilistic Structural Analysis Methods for select space propulsion system components (PSAM). Volume 2: Literature surveys of critical Space Shuttle main engine components

    Science.gov (United States)

    Rajagopal, K. R.

    1992-01-01

    The technical effort and computer code development are summarized. Several formulations for Probabilistic Finite Element Analysis (PFEA) are described, with emphasis on the selected formulation. The strategies being implemented in the first-version computer code to perform linear, elastic PFEA are described. The results of a series of select Space Shuttle Main Engine (SSME) component surveys are presented. These results identify the critical components and provide the information necessary for probabilistic structural analysis. Volume 2 is a summary of critical SSME components.

  18. The role of damage analysis in the assessment of service-exposed components

    International Nuclear Information System (INIS)

    Bendick, W.; Muesch, H.; Weber, H.

    1987-01-01

    Components in power stations are subjected to service conditions under which creep processes take place, limiting the component's lifetime through material exhaustion. To ensure safe and economic plant operation, it is necessary to obtain information about the degree of exhaustion of individual components as well as of the whole plant. A comprehensive lifetime assessment requires complete knowledge of the service parameters, the component's deformation behavior, and the change in material properties caused by long-term exposure to high service temperatures. A basis for evaluation is given by: 1) determination of material exhaustion by calculation, 2) investigation of the material properties, and 3) damage analysis. The purpose of this report is to show the role which damage analysis can play in the assessment of service-exposed components. As an example, the test results of a damaged pipe bend are discussed. (orig./MM)

  19. Techniques for the thermal/hydraulic analysis of LMFBR check valves

    International Nuclear Information System (INIS)

    Cho, S.M.; Kane, R.S.

    1979-01-01

    A thermal/hydraulic analysis of the check valves in liquid sodium service for LMFBR plants is required to provide temperature data for thermal stress analysis of the valves for specified transient conditions. Because of the complex three-dimensional flow pattern within the valve, the heat transfer analysis techniques for less complicated shapes could not be used. This paper discusses the thermal analysis techniques used to assure that the valve stress analysis is conservative. These techniques include a method for evaluating the recirculating flow patterns and for selecting appropriately conservative heat transfer correlations in various regions of the valve

  20. Small area analysis using micro-diffraction techniques

    International Nuclear Information System (INIS)

    Goehner, Raymond P.; Tissot, Ralph G. Jr.; Michael, Joseph R.

    2000-01-01

    An overall trend toward smaller electronic packages and devices makes it increasingly important and difficult to obtain meaningful diffraction information from small areas. X-ray micro-diffraction, electron back-scattered diffraction (EBSD) and Kossel are micro-diffraction techniques used for crystallographic analysis including texture, phase identification and strain measurements. X-ray micro-diffraction is primarily used for phase analysis and residual strain measurements of areas between 10 μm and 100 μm. For areas this small, glass capillary optics are used to produce a usable collimated x-ray beam. These optics are designed to reflect x-rays below the critical angle, allowing a larger solid acceptance angle at the x-ray source and resulting in brighter, smaller x-ray beams. The determination of residual strain using micro-diffraction techniques is very important to the semiconductor industry. Residual stresses have caused voiding of the interconnect metal, which destroys electrical continuity. Being able to determine the residual stress helps industry predict failures from the aging effects of interconnects due to this stress voiding. Stress measurements would be impossible using a conventional x-ray diffractometer; however, utilizing a 30 μm glass capillary makes these small areas readily accessible for analysis. Kossel produces a wide-angle diffraction pattern from fluorescent x-rays generated in the sample by an e-beam in an SEM. This technique can yield very precise lattice parameters for determining strain. Fig. 2 shows a Kossel pattern from a Ni specimen. Phase analysis on small areas is also possible using electron back-scattered diffraction (EBSD) and x-ray micro-diffraction techniques. EBSD has the advantage of allowing the user to observe the area of interest using the excellent imaging capabilities of the SEM. An EDS detector has been

  1. Robust LOD scores for variance component-based linkage analysis.

    Science.gov (United States)

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  2. Analysis of spiral components in 16 galaxies

    International Nuclear Information System (INIS)

    Considere, S.; Athanassoula, E.

    1988-01-01

    A Fourier analysis of the intensity distributions in the planes of 16 spiral galaxies of morphological types from 1 to 7 is performed. The galaxies processed are NGC 300, 598, 628, 2403, 2841, 3031, 3198, 3344, 5033, 5055, 5194, 5247, 6946, 7096, 7217, and 7331. The method, mathematically based upon a decomposition of a distribution into a superposition of individual logarithmic spiral components, is first used to determine for each galaxy the position angle PA and the inclination ω of the galaxy plane onto the sky plane. Our results, in good agreement with those obtained from the usual methods in the literature, are discussed. The decomposition of the deprojected galaxies into individual spiral components reveals that the two-armed component is everywhere dominant. Our pitch angles are then compared to previously published ones, and their quality is checked by drawing each individual logarithmic spiral on the actual deprojected galaxy image. Finally, the surface intensities for the angular periodicities of interest are calculated. A selection of the most important ones is used to elaborate a composite image well representing the main spiral features observed in the deprojected galaxies.
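
    The decomposition can be sketched as follows: in (u, theta) coordinates with u = ln r, the amplitude of the (m, p) logarithmic spiral component is proportional to the sum of I(u, theta) * exp(-i(m*theta + p*u)) over the image. The snippet below recovers the spiral parameter p of a synthetic two-armed pattern; grid sizes and values are arbitrary, and the pitch angle then follows from the ratio of m to p under the chosen sign convention.

    ```python
    # Log-spiral Fourier decomposition of a synthetic two-armed pattern.
    import numpy as np

    nu, ntheta = 64, 128
    u = np.linspace(0, 2, nu)                        # u = ln(r)
    theta = np.linspace(0, 2 * np.pi, ntheta, endpoint=False)
    U, TH = np.meshgrid(u, theta, indexing="ij")

    m_true, p_true = 2, -5.0                         # two-armed component
    img = 1 + 0.5 * np.cos(m_true * TH + p_true * U) # synthetic deprojected intensity

    def amplitude(img, m, p):
        return np.abs(np.sum(img * np.exp(-1j * (m * TH + p * U)))) / img.size

    p_grid = np.linspace(-15, 15, 301)
    amps = [amplitude(img, 2, p) for p in p_grid]
    print("recovered p for m=2:", p_grid[int(np.argmax(amps))])  # ~ -5
    ```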

  3. Elemental content in cigarette components and its distribution as determined by neutron activation analysis

    International Nuclear Information System (INIS)

    Metwally, Y.E.

    2003-01-01

    Cigarette smoking, a world-wide habit, has very harmful and hazardous effects on the human body. In the present study, different cigarette brands were collected from local and foreign markets, representing ten countries all over the world. All the selected samples were irradiated in the first Inshas reactor (IR-1) in Egypt. A comprehensive study of the elemental contents of the cigarette samples under investigation was carried out using the neutron activation analysis technique. Concentrations of the polluting elements and trace element contents of cigarette tobacco were determined. The elemental distribution among the components (tobacco, ash, filter before and after smoking, and wrapping paper) of some kinds of cigarettes was studied. The data obtained from the present work are discussed.

  4. Examination, characterisation and analysis techniques for the knowledge and the conservation / restoration of cultural heritage - importance of ionising radiation techniques

    International Nuclear Information System (INIS)

    Boutaine, J.L.

    2004-01-01

    For the examination, characterisation and analysis of cultural heritage artefacts or art objects and their component materials, the conservation scientist needs a palette of non-destructive and non-invasive techniques, in order to improve our knowledge of their elaboration and their evolution and/or degradation over time, and to give a rational basis for their restoration and conservation. A general survey and illustrations showing the usefulness of these techniques are presented. Among these methods, many are based on the use of ionising radiation: 1. radiography (using X-rays, gamma rays, beta particles, secondary electrons, neutrons), electron emission radiography, tomodensimetry; 2. scanning electron microscopy associated with X-ray spectrometry; 3. X-ray diffraction; 4. synchrotron radiation characterisation; 5. X-ray fluorescence analysis; 6. activation analysis; 7. ion beam analysis (PIXE, PIGE, RBS, secondary X-ray fluorescence); 8. thermoluminescence dating; 9. carbon-14 dating. These methods are used alone or in connection with other analytical methods. Any kind of material can be encountered, for instance: i. stones, gems, ceramics, terracotta, enamels, glasses; ii. wood, paper, textile, bone, ivory; iii. metals, jewellery; iv. paint layers, canvas and wooden backings, pigments, dyes, oils, binding media, varnishes, glues. Some examples are taken from recent work done at the Centre of Research and Restoration of the Museums of France (C2RMF), covering various geographical origins, various ages and different art disciplines. This illustrates the kind of assistance that science and technology can provide towards a better knowledge of mankind's cultural heritage and towards establishing a rational basis for its better conservation for future generations. (Author)

  5. Characterization of exposure to extremely low frequency magnetic fields using multidimensional analysis techniques.

    Science.gov (United States)

    Verrier, A; Souques, M; Wallet, F

    2005-05-01

    Our lack of knowledge about the biological mechanisms of 50 Hz magnetic fields makes it hard to improve exposure assessment. To provide better information about these exposure measures, we use multidimensional analysis techniques to examine the relations between different exposure metrics for a group of subjects. We used a two-stage principal component analysis (PCA) followed by an ascending hierarchical classification (AHC) to identify a set of measures that capture the characteristics of the total exposure. This analysis gives an indication of the aspects of the exposure that are important to capture in order to get a complete picture of the magnetic field environment. We calculated 44 exposure metrics for 16 exposed EDF employees and 15 control subjects, from approximately 20,000 recordings of magnetic field measurements taken every 30 s for 7 days with an EMDEX II dosimeter. These metrics included parameters used routinely or occasionally and some that were new. To eliminate those that expressed the least variability and those most highly correlated to one another, we began with an initial PCA. A second PCA of the remaining 12 metrics captured 82.7% of the variance in its foreground: the first component (62.0%) was characterized by central-tendency metrics, and the second (20.7%) by dispersion characteristics. We then used AHC to divide the entire sample of individuals into four groups according to the axes that emerged from the PCA. Finally, discriminant analysis tested the discriminant power of the variables in the exposed/control classification as well as in the AHC classification. The first showed that two subjects had been incorrectly classified, while no classification error was observed in the second. This exploratory study underscores the need to improve exposure measures by using at least two dimensions: intensity and dispersion. It also indicates the
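
    The two-stage design (PCA followed by AHC) can be sketched generically, here with random stand-ins for the exposure metrics and Ward linkage as the agglomeration rule; the original study's exact settings are not reproduced.

    ```python
    # PCA on exposure metrics, then hierarchical clustering of subjects on the scores.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(6)
    metrics = rng.normal(size=(31, 12))             # 31 subjects x 12 retained metrics
    scores = PCA(n_components=2).fit_transform(
        StandardScaler().fit_transform(metrics))    # intensity / dispersion axes

    Z = linkage(scores, method="ward")              # AHC on the PCA foreground
    groups = fcluster(Z, t=4, criterion="maxclust") # cut the tree into four groups
    print("group sizes:", np.bincount(groups)[1:])
    ```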

  6. On Bayesian Principal Component Analysis

    Czech Academy of Sciences Publication Activity Database

    Šmídl, Václav; Quinn, A.

    2007-01-01

    Roč. 51, č. 9 (2007), s. 4101-4123 ISSN 0167-9473 R&D Projects: GA MŠk(CZ) 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : Principal component analysis ( PCA ) * Variational bayes (VB) * von-Mises–Fisher distribution Subject RIV: BC - Control Systems Theory Impact factor: 1.029, year: 2007 http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V8V-4MYD60N-6&_user=10&_coverDate=05%2F15%2F2007&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=b8ea629d48df926fe18f9e5724c9003a

  7. Structural analysis of nuclear components

    International Nuclear Information System (INIS)

    Ikonen, K.; Hyppoenen, P.; Mikkola, T.; Noro, H.; Raiko, H.; Salminen, P.; Talja, H.

    1983-05-01

    The report describes the activities accomplished in the project 'Structural Analysis Project of Nuclear Power Plant Components' during the years 1974-1982 in the Nuclear Engineering Laboratory at the Technical Research Centre of Finland. The objective of the project has been to develop Finnish expertise in structural mechanics related to nuclear engineering. The report describes the starting point of the research work, the organization of the project, and the research activities in the various subareas. It further describes the work done with computer codes and the problems to which the developed expertise has been applied. Finally, the diploma works, publications and work reports, which are mainly in Finnish, are listed to give a view of the content of the project. (author)

  8. Microextraction sample preparation techniques in biomedical analysis.

    Science.gov (United States)

    Szultka, Malgorzata; Pomastowski, Pawel; Railean-Plugaru, Viorica; Buszewski, Boguslaw

    2014-11-01

    Biologically active compounds are found in biological samples at relatively low concentration levels. The sample preparation of target compounds from biological, pharmaceutical, environmental, and food matrices is one of the most time-consuming steps in the analytical procedure, and among sample preparation approaches the microextraction techniques are dominant. Metabolomic studies also require the application of a proper analytical technique for the determination of endogenous metabolites present in the biological matrix at trace concentration levels. Owing to the reproducibility of data, precision, relatively low cost of the appropriate analysis, simplicity of the determination, and the possibility of directly combining these techniques with other methods (both on-line and off-line), they have become the most widespread in routine determinations. Additionally, sample pretreatment procedures have to be more selective, cheap, quick, and environmentally friendly. This review summarizes the current achievements and applications of microextraction techniques. The main aim is to deal with the utilization of different types of sorbents for microextraction and to emphasize the use of newly synthesized sorbents, as well as to bring together studies concerning a systematic approach to method development. This review is dedicated to the description of microextraction techniques and their application in biomedical analysis. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Dynamic analysis of the radiolysis of binary component system

    International Nuclear Information System (INIS)

    Katayama, M.; Trumbore, C.N.

    1975-01-01

    Dynamic analysis was performed on a variety of combinations of components in the radiolysis of binary systems, taking the hydrogen-producing reaction with hydrocarbon RH2 as an example. A definite rule could be established from this analysis, which is useful for revealing the reaction mechanism. The combinations were as follows: 1) both components A and B do not interact but serve only as diluents; 2) A is a diluent and B is a radical captor; 3) both A and B are radical captors; 4-1) A is a diluent and B decomposes after receiving the excitation energy of A; 4-2) A is a diluent and B does not participate in decomposition after receiving the excitation energy of A; 5-1) A is a radical captor and B decomposes after receiving the excitation energy of A; 5-2) A is a radical captor and B does not participate in decomposition after receiving the excitation energy of A; 6-1) both A and B decompose after receiving the excitation energy of the partner component; and 6-2) both A and B do not decompose after receiving the excitation energy of the partner component. According to the dynamic analysis of the above nine combinations, it can be pointed out that when excitation transfer participates, phenomena apparently similar to radical capture are observed. It is therefore desirable to measure the yield of radicals experimentally with a system in which excitation transfer need not be considered; an isotope-substituted mixture is conceived as one such system. This analytical method was applied to systems containing cyclopentanone, such as the cyclopentanone-cyclohexane system. (Iwakiri, K.)

  10. Brand Components in Electoral Debates: Presidential Elections Romania 2009

    Directory of Open Access Journals (Sweden)

    Ovidiu-Aurel GHIUȚĂ

    2015-06-01

    Full Text Available The present paper illustrates the use of the brand and its components in the most important campaign debate of the presidential elections in Romania in 2009. The research method we selected is the case study. The research technique consisted of content analysis of the two speeches, and the analysis included all three types of content analysis: conceptual, relational and qualitative. The content analysis was conducted using the Nvivo software. The identification of a candidate's brand in a single debate particularly entails its presence throughout the electoral campaign. We can outline the main brand component each of the two candidates resorted to in this debate: Băsescu resorted to positioning, while Antonescu opted for differentiation and positioning.

  11. Component Pin Recognition Using Algorithms Based on Machine Learning

    Science.gov (United States)

    Xiao, Yang; Hu, Hong; Liu, Ze; Xu, Jiangchang

    2018-04-01

    The purpose of machine vision for a plug-in machine is to improve the machine's stability and accuracy, and recognition of the component pin is an important part of that vision system. This paper focuses on component pin recognition using three different techniques. The first technique involves traditional image processing using the core algorithm of binary large object (BLOB) analysis. The second technique uses the histogram of oriented gradients (HOG) to experimentally compare the performance of support vector machine (SVM) and adaptive boosting (AdaBoost) classifiers. The third technique uses a deep learning method, the convolutional neural network (CNN), which identifies the pin by comparing a sample against its training data. The main purpose of the research presented in this paper is to increase the knowledge of learning methods used in the plug-in machine industry in order to achieve better results.
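
    The second technique can be sketched with scikit-image's HOG descriptor and a linear SVM on toy patches; the "pin" here is just a bright bar painted into random images, so the sketch only shows the plumbing of the approach, assuming scikit-image and scikit-learn are available.

    ```python
    # HOG features + linear SVM on synthetic pin / no-pin patches.
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import SVC

    rng = np.random.default_rng(7)

    def make_patch(has_pin):
        img = rng.uniform(0, 0.2, size=(32, 32))
        if has_pin:
            img[8:24, 14:18] += 0.8          # bright vertical bar as a fake pin
        return img

    X = np.array([hog(make_patch(k), pixels_per_cell=(8, 8)) for k in (0, 1) * 20])
    y = np.array([0, 1] * 20)
    clf = SVC(kernel="linear").fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```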

  12. Statistical techniques for the identification of reactor component structural vibrations

    International Nuclear Information System (INIS)

    Kemeny, L.G.

    1975-01-01

    The identification, on-line and in near real-time, of the vibration frequencies, modes and amplitudes of selected key reactor structural components, and the visual monitoring of these phenomena by nuclear power plant operating staff, will serve to further the safety and control philosophy of nuclear systems and lead to design optimisation. The School of Nuclear Engineering has developed a data acquisition system for vibration detection and identification. The system is interfaced with the HIFAR research reactor of the Australian Atomic Energy Commission. The reactor serves to simulate noise and vibrational phenomena which might be pertinent in power reactor situations. The data acquisition system consists of a small computer interfaced with a digital correlator and a Fourier transform unit. An incremental tape recorder is utilised as a backing store and as a means of communication with other computers. A small analogue computer and an analogue statistical analyzer can be used in the pre- and post-computational analysis of signals received from neutron and gamma detectors, thermocouples, accelerometers, hydrophones and strain gauges. Investigations carried out to date include a study of the role of local and global pressure fields due to turbulence in coolant flow and pump-impeller-induced perturbations on the vibrations of (a) control absorbers, (b) fuel elements, and (c) the coolant external circuit and core tank structure. (Auth.)

  13. Performance Evaluation of Components Using a Granularity-based Interface Between Real-Time Calculus and Timed Automata

    Directory of Open Access Journals (Sweden)

    Karine Altisen

    2010-06-01

    Full Text Available To analyze complex and heterogeneous real-time embedded systems, recent works have proposed interface techniques between real-time calculus (RTC and timed automata (TA, in order to take advantage of the strengths of each technique for analyzing various components. But the time to analyze a state-based component modeled by TA may be prohibitively high, due to the state space explosion problem. In this paper, we propose a framework of granularity-based interfacing to speed up the analysis of a TA modeled component. First, we abstract fine models to work with event streams at coarse granularity. We perform analysis of the component at multiple coarse granularities and then based on RTC theory, we derive lower and upper bounds on arrival patterns of the fine output streams using the causality closure algorithm. Our framework can help to achieve tradeoffs between precision and analysis time.
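
    For flavor, the snippet below evaluates standard real-time-calculus arrival-curve bounds for a periodic-with-jitter event stream at two granularities of the time axis; it is a textbook RTC illustration, not the paper's interfacing framework.

    ```python
    # Upper/lower arrival curves of a periodic event stream with jitter,
    # sampled at a coarse and a fine granularity of the interval length delta.
    import math

    def alpha_upper(delta, p, j):
        return math.ceil((delta + j) / p) if delta > 0 else 0

    def alpha_lower(delta, p, j):
        return max(0, math.floor((delta - j) / p))

    p, j = 10.0, 4.0                        # period and jitter (assumed time units)
    for granularity in (5.0, 1.0):          # coarse vs fine abscissa sampling
        deltas = [granularity * k for k in range(1, 6)]
        bounds = [(alpha_lower(d, p, j), alpha_upper(d, p, j)) for d in deltas]
        print(f"granularity {granularity}: {bounds}")
    ```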

  14. Temporal and Spatial Independent Component Analysis for fMRI Data Sets Embedded in the AnalyzeFMRI R Package

    Directory of Open Access Journals (Sweden)

    Pierre Lafaye de Micheaux

    2011-10-01

    Full Text Available For statistical analysis of functional magnetic resonance imaging (fMRI data sets, we propose a data-driven approach based on independent component analysis (ICA implemented in a new version of the AnalyzeFMRI R package. For fMRI data sets, spatial dimension being much greater than temporal dimension, spatial ICA is the computationally tractable approach generally proposed. However, for some neuroscientific applications, temporal independence of source signals can be assumed and temporal ICA becomes then an attractive exploratory technique. In this work, we use a classical linear algebra result ensuring the tractability of temporal ICA. We report several experiments on synthetic data and real MRI data sets that demonstrate the potential interest of our R package.

  15. Kinematics analysis technique fouettes 720° classic ballet.

    Directory of Open Access Journals (Sweden)

    Li Bo

    2011-07-01

    Full Text Available Athletics practice has shown that the more complex the element, the more difficult its technique. The fouetté at 720° is one of the most difficult types of fouetté; its execution depends on highly refined technique during the performer's rotation. Performing this element requires not only good physical condition but also correct technique on the dancer's part. On the basis of the corresponding kinematic theory, this study presents a qualitative analysis and quantitative assessment of fouettés at 720° performed by the best Chinese dancers. The analysis used the method of stereoscopic imaging together with theoretical analysis.

  16. A Fast and Sensitive New Satellite SO2 Retrieval Algorithm based on Principal Component Analysis: Application to the Ozone Monitoring Instrument

    Science.gov (United States)

    Li, Can; Joiner, Joanna; Krotkov, A.; Bhartia, Pawan K.

    2013-01-01

    We describe a new algorithm to retrieve SO2 from satellite-measured hyperspectral radiances. We employ the principal component analysis technique in regions with no significant SO2 to capture radiance variability caused by both physical processes (e.g., Rayleigh and Raman scattering and ozone absorption) and measurement artifacts. We use the resulting principal components and SO2 Jacobians calculated with a radiative transfer model to directly estimate SO2 vertical column density in one step. Application to the Ozone Monitoring Instrument (OMI) radiance spectra in 310.5-340 nm demonstrates that this approach can greatly reduce biases in the operational OMI product and decrease the noise by a factor of 2, providing greater sensitivity to anthropogenic emissions. The new algorithm is fast, eliminates the need for instrument-specific radiance correction schemes, and can be easily adapted to other sensors. These attributes make it a promising technique for producing longterm, consistent SO2 records for air quality and climate research.
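
    The core step, fitting a measured spectrum with SO2-free principal components plus an SO2 Jacobian in one linear least-squares solve, can be sketched on synthetic data. The background spectra, Jacobian shape, and column amount below are all invented for illustration.

    ```python
    # One-step PCA-based retrieval: project out background variability, solve for SO2.
    import numpy as np

    rng = np.random.default_rng(8)
    nwav, nspec = 200, 500
    wav = np.linspace(310.5, 340.0, nwav)
    modes = np.array([np.ones(nwav), wav - 325, (wav - 325) ** 2,
                      np.sin(wav / 2.0), np.cos(wav / 3.0)])   # smooth background modes
    background = rng.normal(size=(nspec, 5)) @ modes           # SO2-free training radiances

    mean = background.mean(axis=0)
    _, _, Vt = np.linalg.svd(background - mean, full_matrices=False)
    pcs = Vt[:5]                                               # leading principal components

    jac = 1e-3 * np.sin(wav)                                   # hypothetical SO2 Jacobian
    true_vcd = 2.5                                             # arbitrary column amount
    measured = background[0] + true_vcd * jac                  # "observed" spectrum

    A = np.vstack([pcs, jac]).T                                # design matrix: PCs + Jacobian
    coef, *_ = np.linalg.lstsq(A, measured - mean, rcond=None)
    print(f"retrieved SO2 column: {coef[-1]:.2f} (true {true_vcd})")
    ```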

  17. [Sample preparation methods for chromatographic analysis of organic components in atmospheric particulate matter].

    Science.gov (United States)

    Hao, Liang; Wu, Dapeng; Guan, Yafeng

    2014-09-01

    The determination of organic composition in atmospheric particulate matter (PM) is of great importance in understanding how PM affects human health, environment, climate, and ecosystem. Organic components are also the scientific basis for emission source tracking, PM regulation and risk management. Therefore, the molecular characterization of the organic fraction of PM has become one of the priority research issues in the field of environmental analysis. Due to the extreme complexity of PM samples, chromatographic methods have been the chief selection. The common procedure for the analysis of organic components in PM includes several steps: sample collection on the fiber filters, sample preparation (transform the sample into a form suitable for chromatographic analysis), analysis by chromatographic methods. Among these steps, the sample preparation methods will largely determine the throughput and the data quality. Solvent extraction methods followed by sample pretreatment (e. g. pre-separation, derivatization, pre-concentration) have long been used for PM sample analysis, and thermal desorption methods have also mainly focused on the non-polar organic component analysis in PM. In this paper, the sample preparation methods prior to chromatographic analysis of organic components in PM are reviewed comprehensively, and the corresponding merits and limitations of each method are also briefly discussed.

  18. Novel approach of wavelet analysis for nonlinear ultrasonic measurements and fatigue assessment of jet engine components

    Science.gov (United States)

    Bunget, Gheorghe; Tilmon, Brevin; Yee, Andrew; Stewart, Dylan; Rogers, James; Webster, Matthew; Farinholt, Kevin; Friedersdorf, Fritz; Pepi, Marc; Ghoshal, Anindya

    2018-04-01

    Widespread damage in aging aircraft is becoming an increasing concern as both civil and military fleet operators are extending the service lifetime of their aircraft. Metallic components undergoing variable cyclic loadings eventually fatigue and form dislocations as precursors to ultimate failure. In order to characterize the progression of fatigue damage precursors (DP), the acoustic nonlinearity parameter is measured as the primary indicator. However, using proven standard ultrasonic technology for nonlinear measurements presents limitations for settings outside of the laboratory environment. This paper presents an approach for ultrasonic inspection through automated immersion scanning of hot section engine components where mature ultrasonic technology is used during periodic inspections. Nonlinear ultrasonic measurements were analyzed using wavelet analysis to extract multiple harmonics from the received signals. Measurements indicated strong correlations of nonlinearity coefficients and levels of fatigue in aluminum and Ni-based superalloys. This novel wavelet cross-correlation (WCC) algorithm is a potential technique to scan for fatigue damage precursors and identify critical locations for remaining life prediction.
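
    The underlying nonlinearity measurement can be illustrated simply: the acoustic nonlinearity parameter is proportional to A2/A1^2, the second-harmonic amplitude over the squared fundamental. The sketch below extracts the two amplitudes from a synthetic received signal with a plain FFT; the paper's wavelet cross-correlation extraction is more elaborate.

    ```python
    # Estimate the relative acoustic nonlinearity parameter from a synthetic signal.
    import numpy as np

    fs, f0 = 50e6, 5e6                     # sampling rate and drive frequency [Hz]
    t = np.arange(0, 20e-6, 1 / fs)
    a1, a2 = 1.0, 0.02                     # synthetic fundamental / second harmonic
    sig = a1 * np.sin(2 * np.pi * f0 * t) + a2 * np.sin(2 * np.pi * 2 * f0 * t)

    spec = np.abs(np.fft.rfft(sig * np.hanning(len(sig))))   # windowed spectrum
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    A1 = spec[np.argmin(np.abs(freqs - f0))]
    A2 = spec[np.argmin(np.abs(freqs - 2 * f0))]
    print("relative nonlinearity parameter ~", A2 / A1**2)   # arbitrary units
    ```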

  19. Body composition analysis techniques in adult and pediatric patients: how reliable are they? How useful are they clinically?

    Science.gov (United States)

    Woodrow, Graham

    2007-06-01

    Complex abnormalities of body composition occur in peritoneal dialysis (PD). These abnormalities reflect changes in hydration, nutrition, and body fat, and they are of major clinical significance. Clinical assessment of these body compartments is insensitive and inaccurate. Frequently, simultaneous changes of hydration, wasting, and body fat content can occur, confounding clinical assessment of each component. Body composition can be described by models of varying complexity that use one or more measurement techniques. "Gold standard" methods provide accurate and precise data, but are not practical for routine clinical use. Dual energy X-ray absorptiometry allows for measurement of regional as well as whole-body composition, which can provide further information of clinical relevance. Simpler techniques such as anthropometry and bioelectrical impedance analysis are suited to routine use in clinic or at the bedside, but may be less accurate. Body composition methodology sometimes makes assumptions regarding relationships between components, particularly in regard to hydration, which may be invalid in pathologic states. Uncritical application of these methods to the PD patient may result in erroneous interpretation of results. Understanding the foundations and limitations of body composition techniques allows for optimal application in clinical practice.

  20. PASCAL, Probabilistic Fracture Mechanics Analysis of Structural Components in Aging LWR

    International Nuclear Information System (INIS)

    Shibata, Katsuyuki; Onizawa, Kunio; Li, Yinsheng; Kato, Daisuke

    2005-01-01

    A - Description of program or function: PASCAL (PFM analysis of Structural Components in Aging LWR) is a PFM (Probabilistic Fracture Mechanics) code for evaluating the failure probability of aged pressure components. PASCAL has been developed as part of JAERI's research program on the aging and structural integrity of LWR components, in response to the increasing need for probabilistic methodology in the regulation and inspection of nuclear components, with the objective of providing a rational tool for evaluating the reliability and integrity of structural components. To improve the accuracy and reliability of the code, new fracture mechanics models and computational techniques were introduced, taking into account recent progress in the state of the art and in PC performance. These include an elastic-plastic fracture criterion based on the R6 method and a new crack extension model for semi-elliptical crack evaluation. Moreover, a function to evaluate the effect of embrittlement recovery by annealing of an irradiated RPV is included in the code, based on USNRC R.G. 1.162 (1996). The code has been verified against various failure analysis results and the international PTS round-robin analysis ICAS, organized by Principal Working Group 3 of OECD/NEA/CSNI. To attain high usability, PASCAL Ver.1 provides, with a GUI, an exclusive FEM pre-processor (Pre-PASCAL) for generating input load transient data, a GUI system for generating input data for the PASCAL main solver, and a post-processor for output data. - Pre-PASCAL: Pre-PASCAL is an exclusive 3-D FEM pre-processor for generating input transient data, provided with 3 RPV mesh models and two simple specimen mesh models, i.e. CT and CCP. It uses almost the same input data format as the PASCAL main processor. Output data of temperature and stress distribution
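
    PASCAL's fracture models are far more detailed, but the basic PFM computation such codes automate — propagating uncertainty in flaw size and material toughness through a stress-intensity comparison — can be illustrated with a direct Monte Carlo estimate. All distributions and loading values below are hypothetical placeholders, not PASCAL inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Hypothetical input distributions (illustrative only, not PASCAL data):
a = rng.lognormal(mean=np.log(0.010), sigma=0.5, size=N)   # crack depth (m)
K_Ic = rng.normal(loc=120.0, scale=15.0, size=N)           # fracture toughness (MPa*m^0.5)
sigma_m = 300.0                                            # membrane stress (MPa)
Y = 1.12                                                   # surface-crack geometry factor

K_I = Y * sigma_m * np.sqrt(np.pi * a)                     # applied stress intensity factor
p_f = np.mean(K_I >= K_Ic)                                 # fraction of failed samples
print(f"estimated failure probability: {p_f:.2e}")
```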

  1. Development of the Risk-Based Inspection Techniques and Pilot Plant Activities

    International Nuclear Information System (INIS)

    Phillips, J.H.

    1997-01-01

    Risk-based techniques have been developed for commercial nuclear power plants. System boundaries and success criteria are defined using the probabilistic risk analysis or probabilistic safety analysis developed for the individual plant evaluation. Final ranking of components is performed by a plant expert panel similar to the one developed for the maintenance rule. Components are identified as either high-risk-significant or low-risk-significant. Maintenance and resources are focused on those components that have the highest risk significance. The techniques have been developed and applied at a number of pilot plants. Results from the first risk-based inspection pilot plant indicate that safety with respect to pipe failure can be doubled while the number of inspections is reduced to about 80% of that in current inspection programs. The reduction in inspections lowers person-rem exposure, resulting in further increases in safety. These techniques have been documented in publications by the ASME CRTD

  2. Numerical analysis of magnetoelastic coupled buckling of fusion reactor components

    International Nuclear Information System (INIS)

    Demachi, K.; Yoshida, Y.; Miya, K.

    1994-01-01

    For a tokamak fusion reactor, one of the most important structural design subjects is to ensure that components can withstand the strong magnetic forces induced by plasma disruption. A number of magnetostructural analyses of fusion reactor components have been performed recently. In these studies, however, the structural behavior was calculated based on small-deformation theory, in which nonlinearity is neglected. It is known that some kinds of structures readily exhibit geometric nonlinearity. In this paper, the deflection and the magnetoelastic buckling load of fusion reactor components during plasma disruption were calculated

  3. Nuclear techniques for bulk and surface analysis of materials

    International Nuclear Information System (INIS)

    D'Agostino, M.D.; Kamykowski, E.A.; Kuehne, F.J.; Padawer, G.M.; Schneid, E.J.; Schulte, R.L.; Stauber, M.C.; Swanson, F.R.

    1978-01-01

    A review is presented summarizing several nondestructive bulk and surface analysis nuclear techniques developed in the Grumman Research Laboratories. Bulk analysis techniques include 14-MeV-neutron activation analysis and accelerator-based neutron radiography. The surface analysis techniques include resonant and non-resonant nuclear microprobes for the depth-profile analysis of light elements (H, He, Li, Be, C, N, O and F) in the surface of materials. Emphasis is placed on the description and discussion of the unique nuclear microprobe analytical capabilities of immediate importance to a number of current problems facing materials specialists. The resolution and contrast of neutron radiography were illustrated with an operating heat pipe system. The figure shows that the neutron radiograph has a resolution of better than 0.04 cm, with sufficient contrast to indicate Freon 21 on the inner capillaries of the heat pipe and pooling of the liquid at the bottom. (T.G.)

  4. Protein structure similarity from principle component correlation analysis

    Directory of Open Access Journals (Sweden)

    Chou James

    2006-01-01

    Background: Owing to the rapid expansion of protein structure databases in recent years, methods of structure comparison are becoming increasingly effective and important in revealing novel information on the functional properties of proteins and their roles in the grand scheme of evolutionary biology. Currently, the structural similarity between two proteins is measured by the root-mean-square deviation (RMSD) of their best-superimposed atomic coordinates. RMSD is the gold standard for measuring structural similarity when the structures are nearly identical; it fails, however, to detect higher-order topological similarities in proteins that have evolved into different shapes. We propose new algorithms for extracting geometrical invariants of proteins that can be effectively used to identify homologous protein structures or topologies in order to quantify both close and remote structural similarities. Results: We measure structural similarity between proteins by correlating the principal components of their secondary structure interaction matrices. In our approach, the Principal Component Correlation (PCC) analysis, a symmetric interaction matrix for a protein structure is constructed with relationship parameters between secondary elements that can take the form of distance, orientation, or other relevant structural invariants. When using a distance-based construction, in the presence or absence of encoded N-to-C terminal sense, there are strong correlations between the principal components of the interaction matrices of structurally or topologically similar proteins. Conclusion: The PCC method is extensively tested for protein structures that belong to the same topological class but are significantly different by the RMSD measure. The PCC analysis can also differentiate proteins having similar shapes but different topological arrangements. Additionally, we demonstrate that when using two independently defined interaction matrices, comparison of their maximum
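
    A minimal sketch of the distance-matrix flavor of this idea: build a symmetric distance matrix over secondary-structure element centroids for each protein, take the leading eigenvectors as principal components, and correlate them. The toy centroids, the truncation-based alignment, and the averaging of component correlations are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def leading_components(D, k=3):
    """Leading k eigenvectors (principal components) of a symmetric interaction matrix."""
    vals, vecs = np.linalg.eigh(D)               # eigenvalues in ascending order
    return vecs[:, np.argsort(vals)[::-1][:k]]

def pcc_similarity(D1, D2, k=3):
    """Mean absolute correlation between matched principal components."""
    U, V = leading_components(D1, k), leading_components(D2, k)
    n = min(U.shape[0], V.shape[0])              # crude alignment: truncate to common size
    return float(np.mean([abs(np.corrcoef(U[:n, i], V[:n, i])[0, 1]) for i in range(k)]))

rng = np.random.default_rng(1)
# toy interaction matrices: pairwise distances between secondary-structure centroids
cent_a = rng.normal(size=(8, 3))
cent_b = cent_a + 0.1 * rng.normal(size=(8, 3))  # slightly perturbed homologue
D_a = np.linalg.norm(cent_a[:, None] - cent_a[None], axis=-1)
D_b = np.linalg.norm(cent_b[:, None] - cent_b[None], axis=-1)
print(f"PCC similarity: {pcc_similarity(D_a, D_b):.3f}")   # near 1 for similar structures
```

    Taking absolute correlations sidesteps the sign ambiguity of eigenvectors; a fuller treatment would also handle insertions and deletions rather than truncating to a common size.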

  5. Evolution of optical fibre cabling components at CERN: Performance and technology trends analysis

    Science.gov (United States)

    Shoaie, Mohammad Amin; Meroli, Stefano; Machado, Simao; Ricci, Daniel

    2018-05-01

    CERN's optical fibre infrastructure has been growing constantly over the past decade due to ever-increasing connectivity demands. The provisioning plan and fibre installation for this vast laboratory are performed by the Fibre Optics and Cabling Section of the Engineering Department. In this paper we analyze the procurement data for essential fibre cabling components over a five-year interval to extract existing trends and anticipate future directions. The analysis predicts a high contribution of LC connectors and an increasing usage of multi-fibre connectors. It is foreseen that single-mode fibres will become the main fibre type for mid- and long-range installations, while air blowing will be the major installation technique. Performance assessment of various connectors shows that the expanded-beam ferrule is favored for emerging on-board optical interconnections thanks to its scalable density and stable return loss.

  6. Global sensitivity analysis of bogie dynamics with respect to suspension components

    Energy Technology Data Exchange (ETDEWEB)

    Mousavi Bideleh, Seyed Milad, E-mail: milad.mousavi@chalmers.se; Berbyuk, Viktor, E-mail: viktor.berbyuk@chalmers.se [Chalmers University of Technology, Department of Applied Mechanics (Sweden)

    2016-06-15

    The effects of bogie primary and secondary suspension stiffness and damping components on the dynamic behavior of a high-speed train are scrutinized based on the multiplicative dimensional reduction method (M-DRM). A one-car railway vehicle model is chosen for the analysis at two levels of the bogie suspension system: symmetric and asymmetric configurations. Several operational scenarios, including straight and circular curved tracks, are considered, and measurement data are used as the track irregularities in different directions. Ride comfort, safety, and wear objective functions are specified to evaluate the vehicle's dynamic performance in the prescribed operational scenarios. In order to obtain an appropriate cut center for the sensitivity analysis, a genetic algorithm optimization routine is employed to optimize the primary and secondary suspension components in terms of wear and comfort, respectively. Global sensitivity indices are introduced, and Gaussian quadrature integrals are employed to evaluate the simplified sensitivity indices associated with the objective functions. In each scenario, the most influential suspension components on bogie dynamics are identified, and a thorough analysis of the results is given. The outcomes of the current research provide informative data that can be beneficial in the design and optimization of passive and active suspension components for high-speed train bogies.

  7. Global sensitivity analysis of bogie dynamics with respect to suspension components

    International Nuclear Information System (INIS)

    Mousavi Bideleh, Seyed Milad; Berbyuk, Viktor

    2016-01-01

    The effects of bogie primary and secondary suspension stiffness and damping components on the dynamic behavior of a high-speed train are scrutinized based on the multiplicative dimensional reduction method (M-DRM). A one-car railway vehicle model is chosen for the analysis at two levels of the bogie suspension system: symmetric and asymmetric configurations. Several operational scenarios, including straight and circular curved tracks, are considered, and measurement data are used as the track irregularities in different directions. Ride comfort, safety, and wear objective functions are specified to evaluate the vehicle's dynamic performance in the prescribed operational scenarios. In order to obtain an appropriate cut center for the sensitivity analysis, a genetic algorithm optimization routine is employed to optimize the primary and secondary suspension components in terms of wear and comfort, respectively. Global sensitivity indices are introduced, and Gaussian quadrature integrals are employed to evaluate the simplified sensitivity indices associated with the objective functions. In each scenario, the most influential suspension components on bogie dynamics are identified, and a thorough analysis of the results is given. The outcomes of the current research provide informative data that can be beneficial in the design and optimization of passive and active suspension components for high-speed train bogies.
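
    The full M-DRM machinery is beyond an abstract, but its key ingredient — one-dimensional Gauss-Hermite quadrature along each input around a fixed cut center — can be sketched as a simplified variance-based sensitivity estimate. The surrogate objective and parameter distributions below are hypothetical stand-ins, not the paper's vehicle model:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss    # probabilists' Gauss-Hermite rule

def cut_center_sensitivity(h, center, stds, deg=7):
    """Normalized one-dimensional variance contributions of h around a cut center
    (a simplified, quadrature-based variant in the spirit of M-DRM)."""
    nodes, weights = hermegauss(deg)
    weights = weights / weights.sum()                # normalize to a probability measure
    var = np.empty(len(center))
    for i in range(len(center)):
        x = np.tile(center, (deg, 1))
        x[:, i] = center[i] + stds[i] * nodes        # vary only coordinate i
        hi = np.array([h(row) for row in x])
        var[i] = weights @ hi**2 - (weights @ hi) ** 2
    return var / var.sum()

# hypothetical ride-comfort surrogate of two stiffnesses and one damping parameter
h = lambda p: 1.0 / (1.0 + 0.5 * p[0] + 0.1 * p[1] + 2.0 * p[2] ** 2)
idx = cut_center_sensitivity(h, center=np.array([1.0, 1.0, 0.5]),
                             stds=np.array([0.2, 0.2, 0.1]))
print("normalized sensitivity indices:", idx.round(3))
```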

  8. Event tree analysis using artificial intelligence techniques

    International Nuclear Information System (INIS)

    Dixon, B.W.; Hinton, M.F.

    1985-01-01

    Artificial Intelligence (AI) techniques used in Expert Systems and Object Oriented Programming are discussed as they apply to Event Tree Analysis. A SeQUence IMPortance calculator, SQUIMP, is presented to demonstrate the implementation of these techniques. Benefits of using AI methods include ease of programming, efficiency of execution, and flexibility of application. The importance of an appropriate user interface is stressed. 5 figs

  9. Characterization of the volatile components in green tea by IRAE-HS-SPME/GC-MS combined with multivariate analysis.

    Science.gov (United States)

    Yang, Yan-Qin; Yin, Hong-Xu; Yuan, Hai-Bo; Jiang, Yong-Wen; Dong, Chun-Wang; Deng, Yu-Liang

    2018-01-01

    In the present work, a novel infrared-assisted extraction coupled to headspace solid-phase microextraction (IRAE-HS-SPME) followed by gas chromatography-mass spectrometry (GC-MS) was developed for the rapid determination of the volatile components in green tea. The extraction parameters, such as fiber type, sample amount, infrared power, extraction time, and infrared lamp distance, were optimized by orthogonal experimental design. Under optimum conditions, a total of 82 volatile compounds in 21 green tea samples from different geographical origins were identified. Compared with classical water-bath heating, the proposed technique has the remarkable advantages of considerably reduced analysis time and high efficiency. In addition, an effective classification of green teas based on their volatile profiles was achieved by partial least squares-discriminant analysis (PLS-DA) and hierarchical clustering analysis (HCA). Furthermore, the application of a dual criterion based on the variable importance in the projection (VIP) values of the PLS-DA models and on one-way analysis of variance (ANOVA) allowed the identification of 12 potential volatile markers, which were considered to make the most important contribution to the discrimination of the samples. The results suggest that the IRAE-HS-SPME/GC-MS technique combined with multivariate analysis offers a valuable tool to assess the geographical traceability of different tea varieties.
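
    The VIP filtering step generalizes beyond this dataset. A minimal sketch of PLS-DA with VIP-based marker screening, using scikit-learn's PLSRegression on one-hot class labels and the standard VIP formula; the synthetic compound table and the VIP > 1 cutoff are illustrative assumptions:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """Variable Importance in Projection for a fitted PLSRegression model."""
    t, w, q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
    p = w.shape[0]
    ss = np.sum(t**2, axis=0) * np.sum(q**2, axis=0)   # variance explained per component
    w_norm = w / np.linalg.norm(w, axis=0)
    return np.sqrt(p * (w_norm**2 @ ss) / ss.sum())

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))              # 60 tea samples x 20 volatile compounds
y = np.repeat([0, 1, 2], 20)               # three geographical origins
X[:, 3] += y                               # make compound 3 discriminative
Y = np.eye(3)[y]                           # one-hot labels turn PLS into PLS-DA

pls = PLSRegression(n_components=2).fit(X, Y)
vip = vip_scores(pls)
print("candidate markers (VIP > 1):", np.flatnonzero(vip > 1.0))
```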

  10. Characterization of the volatile components in green tea by IRAE-HS-SPME/GC-MS combined with multivariate analysis.

    Directory of Open Access Journals (Sweden)

    Yan-Qin Yang

    In the present work, a novel infrared-assisted extraction coupled to headspace solid-phase microextraction (IRAE-HS-SPME) followed by gas chromatography-mass spectrometry (GC-MS) was developed for the rapid determination of the volatile components in green tea. The extraction parameters, such as fiber type, sample amount, infrared power, extraction time, and infrared lamp distance, were optimized by orthogonal experimental design. Under optimum conditions, a total of 82 volatile compounds in 21 green tea samples from different geographical origins were identified. Compared with classical water-bath heating, the proposed technique has the remarkable advantages of considerably reduced analysis time and high efficiency. In addition, an effective classification of green teas based on their volatile profiles was achieved by partial least squares-discriminant analysis (PLS-DA) and hierarchical clustering analysis (HCA). Furthermore, the application of a dual criterion based on the variable importance in the projection (VIP) values of the PLS-DA models and on one-way analysis of variance (ANOVA) allowed the identification of 12 potential volatile markers, which were considered to make the most important contribution to the discrimination of the samples. The results suggest that the IRAE-HS-SPME/GC-MS technique combined with multivariate analysis offers a valuable tool to assess the geographical traceability of different tea varieties.

  11. Creative design-by-analysis solutions applied to high-temperature components

    International Nuclear Information System (INIS)

    Dhalla, A.K.

    1993-01-01

    Elevated-temperature design has evolved over the last two decades from the design-by-formula philosophy of the ASME Boiler and Pressure Vessel Code, Sections I and VIII (Division 1), to the design-by-analysis philosophy of Section III, Code Case N-47. The benefits of design-by-analysis procedures, which were developed under a US-DOE-sponsored high-temperature structural design (HTSD) program, are illustrated in the paper through five design examples taken from two U.S. liquid metal reactor (LMR) plants. Emphasis is placed on the use of detailed, nonlinear finite element analysis to understand the structural response and to suggest design optimizations that comply with Code Case N-47 criteria. A detailed analysis, if used selectively, is cost-effective for qualifying an LMR component for service when long-lead-time structural forgings, procured on the basis of simplified preliminary analysis, do not meet the design criteria, or when operational loads are increased after the components have been fabricated. In the future, the overall cost of a detailed analysis will be reduced even further with the availability of finite element software for workstations or PCs

  12. Priority of VHS Development Based in Potential Area using Principal Component Analysis

    Science.gov (United States)

    Meirawan, D.; Ana, A.; Saripudin, S.

    2018-02-01

    The current condition of vocational high schools (VHS) is still inadequate in quality, quantity, and relevance. The purpose of this research is to analyse the development of VHS based on regional potential by using principal component analysis (PCA) in Bandung, Indonesia. The study used a descriptive analysis of secondary data, with PCA as the data reduction method, carried out with Minitab statistical software. The results indicate that the areas with the lowest component scores are the priorities for establishing new VHS, with major programs matched to the development of regional potential. Based on the PCA scores, the main priority for VHS development in Bandung is Saguling, which has the lowest PCA value of 416.92 in area 1, followed by Cihampelas with the lowest PCA value in region 2 and Padalarang with the lowest PCA value.

  13. Principle component analysis and linear discriminant analysis of multi-spectral autofluorescence imaging data for differentiating basal cell carcinoma and healthy skin

    Science.gov (United States)

    Chernomyrdin, Nikita V.; Zaytsev, Kirill I.; Lesnichaya, Anastasiya D.; Kudrin, Konstantin G.; Cherkasova, Olga P.; Kurlov, Vladimir N.; Shikunova, Irina A.; Perchik, Alexei V.; Yurchenko, Stanislav O.; Reshetov, Igor V.

    2016-09-01

    In the present paper, the ability to differentiate basal cell carcinoma (BCC) and healthy skin by combining multi-spectral autofluorescence imaging, principal component analysis (PCA), and linear discriminant analysis (LDA) has been demonstrated. For this purpose, an experimental setup comprising excitation and detection branches was assembled. The excitation branch utilizes a mercury arc lamp equipped with a 365-nm narrow-linewidth excitation filter, a beam homogenizer, and a mechanical chopper. The detection branch employs a set of bandpass filters with central wavelengths of spectral transparency of λ = 400, 450, 500, and 550 nm, and a digital camera. The setup was used to study three samples of freshly excised BCC. PCA and LDA were implemented to analyze the multi-spectral fluorescence imaging data. The results of this pilot study highlight the advantages of the proposed imaging technique for skin cancer diagnosis.
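
    A minimal sketch of the PCA-then-LDA pattern on per-pixel multispectral features. The four-band feature vectors, class means, and component count below are synthetic assumptions for illustration, not the pilot-study data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# per-pixel autofluorescence intensities at 400/450/500/550 nm (synthetic)
healthy = rng.normal(loc=[1.0, 0.8, 0.6, 0.5], scale=0.1, size=(200, 4))
bcc = rng.normal(loc=[0.9, 0.9, 0.5, 0.4], scale=0.1, size=(200, 4))
X = np.vstack([healthy, bcc])
y = np.array([0] * 200 + [1] * 200)        # 0 = healthy skin, 1 = BCC

clf = make_pipeline(PCA(n_components=3), LinearDiscriminantAnalysis())
print(f"cross-validated accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2f}")
```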

  14. The development of human behavior analysis techniques

    International Nuclear Information System (INIS)

    Lee, Jung Woon; Lee, Yong Hee; Park, Geun Ok; Cheon, Se Woo; Suh, Sang Moon; Oh, In Suk; Lee, Hyun Chul; Park, Jae Chang.

    1997-07-01

    In this project, a study of man-machine interaction in Korean nuclear power plants, we developed SACOM (Simulation Analyzer with a Cognitive Operator Model), a tool for assessing task performance in control rooms using software simulation, and also developed human error analysis and application techniques. SACOM was developed to assess an operator's physical workload, workload in information navigation at VDU workstations, and cognitive workload in procedural tasks. We developed a trip analysis system, including a procedure based on man-machine interaction analysis and a classification system. We analyzed a total of 277 trips that occurred from 1978 to 1994 to produce trip summary information and, for 79 cases induced by human errors, time-lined man-machine interactions. INSTEC, a database system for our analysis results, was developed. MARSTEC, a multimedia authoring and representation system for trip information, was also developed, and techniques for human error detection in human factors experiments were established. (author). 121 refs., 38 tabs., 52 figs

  15. The development of human behavior analysis techniques

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jung Woon; Lee, Yong Hee; Park, Geun Ok; Cheon, Se Woo; Suh, Sang Moon; Oh, In Suk; Lee, Hyun Chul; Park, Jae Chang

    1997-07-01

    In this project, a study of man-machine interaction in Korean nuclear power plants, we developed SACOM (Simulation Analyzer with a Cognitive Operator Model), a tool for assessing task performance in control rooms using software simulation, and also developed human error analysis and application techniques. SACOM was developed to assess an operator's physical workload, workload in information navigation at VDU workstations, and cognitive workload in procedural tasks. We developed a trip analysis system, including a procedure based on man-machine interaction analysis and a classification system. We analyzed a total of 277 trips that occurred from 1978 to 1994 to produce trip summary information and, for 79 cases induced by human errors, time-lined man-machine interactions. INSTEC, a database system for our analysis results, was developed. MARSTEC, a multimedia authoring and representation system for trip information, was also developed, and techniques for human error detection in human factors experiments were established. (author). 121 refs., 38 tabs., 52 figs.

  16. Spectroscopic analysis technique for arc-welding process control

    Science.gov (United States)

    Mirapeix, Jesús; Cobo, Adolfo; Conde, Olga; Quintela, María Ángeles; López-Higuera, José-Miguel

    2005-09-01

    The spectroscopic analysis of the light emitted by thermal plasmas has found many applications, from chemical analysis to monitoring and control of industrial processes. In particular, it has been demonstrated that analysis of the thermal plasma generated during arc or laser welding can supply information about the process and, thus, about the quality of the weld. In some critical applications (e.g. the aerospace sector), early, real-time detection of defects in the weld seam (oxidation, porosity, lack of penetration, ...) is highly desirable, as it can reduce expensive non-destructive testing (NDT). Among other techniques, full spectroscopic analysis of the plasma emission is known to offer rich information about the process itself, but it is also very demanding in terms of real-time implementation. In this paper, we propose a technique for the analysis of the plasma emission spectrum that is able to detect, in real time, changes in the process parameters that could lead to the formation of defects in the weld seam. It is based on the estimation of the electronic temperature of the plasma through the analysis of emission peaks from multiple atomic species. Unlike traditional techniques, which usually involve fitting peaks to Voigt functions using the recursive Levenberg-Marquardt method, we employ the LPO (Linear Phase Operator) sub-pixel algorithm to accurately estimate the central wavelength of the peaks (allowing automatic identification of each atomic species) and cubic-spline interpolation of the noisy data to obtain the intensity and width of the peaks. Experimental tests on TIG welding, using fiber-optic capture of light and a low-cost CCD-based spectrometer, show that some typical defects can be easily detected and identified with this technique, whose typical processing time for multiple-peak analysis is less than 20 ms running on a conventional PC.
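
    Once peak wavelengths and intensities are in hand, the electronic-temperature step is typically a Boltzmann plot: for lines of one species, ln(Iλ/gA) is linear in the upper-level energy with slope -1/(kT). A minimal sketch; the line list (wavelengths, energies, gA products, intensities) is entirely hypothetical:

```python
import numpy as np

k_B = 8.617333e-5                          # Boltzmann constant (eV/K)

# hypothetical emission lines of a single species:
# wavelength (nm), upper-level energy E_k (eV), g_k*A_ki (arb. units), peak intensity
lines = np.array([
    [404.6, 4.63, 3.0, 1200.0],
    [426.0, 4.79, 1.5,  480.0],
    [438.3, 4.31, 5.0, 3100.0],
])
lam, E_k, gA, I = lines.T

# Boltzmann plot: ln(I * lam / gA) = -E_k / (k_B * T_e) + const
slope, _ = np.polyfit(E_k, np.log(I * lam / gA), 1)
T_e = -1.0 / (k_B * slope)
print(f"estimated electronic temperature: {T_e:.0f} K")
```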

  17. Classification of peacock feather reflectance using principal component analysis similarity factors from multispectral imaging data.

    Science.gov (United States)

    Medina, José M; Díaz, José A; Vukusic, Pete

    2015-04-20

    Iridescent structural colors in biology exhibit sophisticated spatially-varying reflectance properties that depend on both the illumination and viewing angles. The classification of such spectral and spatial information in iridescent structurally colored surfaces is important to elucidate the functional role of irregularity and to improve understanding of color pattern formation at different length scales. In this study, we propose a non-invasive method for the spectral classification of spatial reflectance patterns at the micron scale based on the multispectral imaging technique and the principal component analysis similarity factor (PCASF). We demonstrate the effectiveness of this approach and its component methods by detailing its use in the study of the angle-dependent reflectance properties of Pavo cristatus (the common peacock) feathers, a species of peafowl very well known to exhibit bright and saturated iridescent colors. We show that multispectral reflectance imaging and PCASF approaches can be used as effective tools for spectral recognition of iridescent patterns in the visible spectrum and provide meaningful information for spectral classification of the irregularity of the microstructure in iridescent plumage.
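
    A minimal sketch of a PCA similarity factor between two sets of multispectral pixel spectra, in the Krzanowski form (mean squared cosine between the leading principal subspaces). The synthetic six-band patches are assumptions for illustration, not feather data:

```python
import numpy as np

def pca_loadings(X, k):
    """Leading k principal directions (columns) of mean-centered data X."""
    _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    return Vt[:k].T                                  # shape: (n_bands, k)

def pcasf(X1, X2, k=2):
    """PCA similarity factor: mean squared cosine between leading subspaces (0..1)."""
    L, M = pca_loadings(X1, k), pca_loadings(X2, k)
    return float(np.trace(L.T @ M @ M.T @ L) / k)

rng = np.random.default_rng(0)
# hypothetical multispectral pixel stacks: n_pixels x n_bands reflectance
patch_a = rng.normal(size=(500, 6)) @ rng.normal(size=(6, 6))
patch_b = patch_a + 0.05 * rng.normal(size=(500, 6))   # similar reflectance pattern
patch_c = rng.normal(size=(500, 6))                    # unrelated pattern
print(f"similar patches:   {pcasf(patch_a, patch_b):.3f}")
print(f"unrelated patches: {pcasf(patch_a, patch_c):.3f}")
```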

  18. The Intercultural Component in Textbooks for Teaching a Service Technical Writing Course

    Science.gov (United States)

    Matveeva, Natalia

    2007-01-01

    This research article investigates new developments in the representation of the intercultural component in textbooks for a service technical writing course. Through textual analysis, using quantitative and qualitative techniques, I report a discourse analysis of 15 technical writing textbooks published during 1993-2006. The theoretical and…

  19. Solid phase microextraction as a reliable alternative to conventional extraction techniques to evaluate the pattern of hydrolytically released components in Vitis vinifera L. grapes.

    Science.gov (United States)

    Perestrelo, Rosa; Caldeira, Michael; Câmara, José S

    2012-06-15

    In the present research, headspace solid-phase microextraction (HS-SPME) followed by gas chromatography-mass spectrometry (GC-qMS) was evaluated as a reliable and improved alternative to the commonly used liquid-liquid extraction (LLE) technique for establishing the pattern of hydrolytically released components in 7 Vitis vinifera L. grape varieties commonly used to produce the world-famous Madeira wine. Since no data are available on their glycosidic fractions, as a first step two hydrolysis procedures, acid and enzymatic, were carried out using Boal grapes as the matrix. Several parameters susceptible of influencing the hydrolytic process were studied. The best results, expressed as GC peak area, number of identified components, and reproducibility, were obtained using ProZym M with β-glucosidase activity at 35°C for 42 h. For the extraction of hydrolytically released components, the HS-SPME technique was evaluated as a reliable and improved alternative to the conventional extraction technique, LLE (ethyl acetate). HS-SPME using DVB/CAR/PDMS as the coating fiber displayed an extraction capacity twofold higher than LLE (ethyl acetate). The hydrolyzed fraction was mainly characterized by the occurrence of aliphatic and aromatic alcohols, followed by acids, esters, carbonyl compounds, terpenoids, and volatile phenols. The terpenoid contribution to the total hydrolyzed fraction is highest for Malvasia Cândida (23%) and Malvasia Roxa (13%), and their presence, according to previous studies, is important from a sensorial point of view even at low concentration (they can impart floral notes to the wines) due to their low odor thresholds (μg/L). According to the principal component analysis (PCA) data, the sensorial properties of Madeira wines produced from Malvasia Cândida and Malvasia Roxa could be improved by the hydrolysis procedure, since their hydrolyzed fraction is mainly characterized by terpenoids (e.g. linalool, geraniol), which are responsible

  20. Summary of component reliability data for probabilistic safety analysis of Korean standard nuclear power plant

    International Nuclear Information System (INIS)

    Choi, S. Y.; Han, S. H.

    2004-01-01

    Component reliability data that reflect plant-specific characteristics are necessary for the PSA of Korean nuclear power plants. We have performed a study to develop a component reliability database and software for component reliability analysis. Based on this system, we collected component operation data and failure/repair data from the start of plant operation to 1998 and 2000 for YGN 3,4 and UCN 3,4, respectively. Recently, we upgraded the database by collecting additional data through 2002 for the Korean standard nuclear power plants and performed component reliability analysis and Bayesian analysis again. In this paper, we summarize the component reliability data for the probabilistic safety analysis of the Korean standard nuclear power plant and describe the plant-specific characteristics compared to generic data
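
    The Bayesian step in such studies is commonly a conjugate gamma-Poisson update of a generic prior failure rate with plant-specific failure counts and exposure time. A minimal sketch with hypothetical numbers, not values from the Korean database:

```python
from scipy import stats

# generic prior for a component failure rate: Gamma(shape a0, rate b0), hypothetical
a0, b0 = 0.5, 1.0e5            # prior mean a0/b0 = 5e-6 failures per hour

n_fail = 2                     # plant-specific evidence: observed failures ...
T_hours = 4.0e5                # ... over cumulative component operating time

a1, b1 = a0 + n_fail, b0 + T_hours          # conjugate gamma-Poisson update
post = stats.gamma(a=a1, scale=1.0 / b1)
print(f"posterior mean rate: {post.mean():.2e} /h")
print(f"90% credible interval: ({post.ppf(0.05):.2e}, {post.ppf(0.95):.2e}) /h")
```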

  1. Techniques and Applications of Urban Data Analysis

    KAUST Repository

    AlHalawani, Sawsan

    2016-01-01

    Digitization and characterization of urban spaces are essential components as we move to an ever-growing 'always connected' world. Accurate analysis of such digital urban spaces has become more important as we continue to get spatial and social

  2. Shaking table test and dynamic response analysis of 3-D component base isolation system using multi-layer rubber bearings and coil springs

    Energy Technology Data Exchange (ETDEWEB)

    Tsutsumi, Hideaki; Yamada, Hiroyuki; Ebisawa, Katsumi; Shibata, Katsuyuki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Fujimoto, Shigeru [Toshiba Corp., Tokyo (Japan)

    2001-06-01

    Introduction of the base isolation technique into the seismic design of nuclear power plant components, as well as buildings, has been expected to be one of the effective countermeasures to reduce the seismic force applied to components. A research program on the base isolation of nuclear components has been carried out at the Japan Atomic Energy Research Institute (JAERI) since 1991. A methodology and a computer code (EBISA: Equipment Base Isolation System Analysis) for evaluating the failure frequency of base-isolated nuclear components were developed. In addition, a test program related to the above development, aimed at improving the failure frequency analysis models in the code, has been conducted since 1996 to investigate the dynamic behavior and to verify the effectiveness of component base isolation systems. Two base isolation test systems with different characteristics were fabricated, and their static and dynamic characteristics were measured by static loading and free vibration tests. One, which consists of ball bearings and air springs, was installed on the test bed to observe the dynamic response under natural earthquake motion. The effect of the base isolation system has been observed during several earthquakes. The three-dimensional response and base isolation effect of another system, using multi-layer rubber bearings and coil springs, have been investigated under various large earthquake motions by shaking table tests. This report describes the results of the shaking table tests and the dynamic response analysis. (author)

  3. Determining the Number of Factors in P-Technique Factor Analysis

    Science.gov (United States)

    Lo, Lawrence L.; Molenaar, Peter C. M.; Rovine, Michael

    2017-01-01

    Determining the number of factors is a critical first step in exploratory factor analysis. Although various criteria and methods for determining the number of factors have been evaluated in the usual between-subjects R-technique factor analysis, there is still the question of how these methods perform in within-subjects P-technique factor analysis. A…

  4. 10th Australian conference on nuclear techniques of analysis. Proceedings

    International Nuclear Information System (INIS)

    1998-01-01

    These proceedings contain abstracts and extended abstracts of 80 lectures and posters presented at the 10th Australian conference on nuclear techniques of analysis, hosted by the Australian National University in Canberra, Australia, from 24-26 November 1997. The conference was divided into sessions on the following topics: ion beam analysis and its applications; surface science; novel nuclear techniques of analysis; characterization of thin films; electronic and optoelectronic materials formed by ion implantation; nanometre science and technology; plasma science and technology. A special session was dedicated to new nuclear techniques of analysis, future trends and developments. Separate abstracts were prepared for the individual presentations included in this volume

  5. 10th Australian conference on nuclear techniques of analysis. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-06-01

    These proceedings contain abstracts and extended abstracts of 80 lectures and posters presented at the 10th Australian conference on nuclear techniques of analysis, hosted by the Australian National University in Canberra, Australia, from 24-26 November 1997. The conference was divided into sessions on the following topics: ion beam analysis and its applications; surface science; novel nuclear techniques of analysis; characterization of thin films; electronic and optoelectronic materials formed by ion implantation; nanometre science and technology; plasma science and technology. A special session was dedicated to new nuclear techniques of analysis, future trends and developments. Separate abstracts were prepared for the individual presentations included in this volume.

  6. Multigroup Moderation Test in Generalized Structured Component Analysis

    Directory of Open Access Journals (Sweden)

    Angga Dwi Mulyanto

    2016-05-01

    Generalized Structured Component Analysis (GSCA) is an alternative method in structural modeling using alternating least squares. GSCA can be used for complex analyses, including multigroup analysis. GSCA can be run with free software called GeSCA, but GeSCA provides no multigroup moderation test to compare effects between groups. In this research we propose using the t test from PLS for testing multigroup moderation in GSCA. The t test requires only the sample size, estimated path coefficient, and standard error of each group, all of which are available in the GeSCA output, and the formula is simple, so the analysis requires little time.
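
    A minimal sketch of the parametric two-group t test on a path coefficient, in the pooled-variance form commonly used for PLS multigroup analysis; the coefficients, standard errors, and sample sizes are hypothetical GeSCA outputs:

```python
import numpy as np
from scipy import stats

def multigroup_t(b1, se1, n1, b2, se2, n2):
    """Parametric t test for a path-coefficient difference between two groups
    (pooled-variance form commonly used in PLS multigroup analysis)."""
    df = n1 + n2 - 2
    pooled = np.sqrt(((n1 - 1)**2 * se1**2 + (n2 - 1)**2 * se2**2) / df)
    t = (b1 - b2) / (pooled * np.sqrt(1.0 / n1 + 1.0 / n2))
    return t, 2 * stats.t.sf(abs(t), df)             # two-sided p value

# hypothetical GeSCA output for one path in two groups
t, p = multigroup_t(b1=0.45, se1=0.08, n1=120, b2=0.21, se2=0.10, n2=110)
print(f"t = {t:.2f}, p = {p:.4f}")
```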

  7. Comparison of correlation analysis techniques for irregularly sampled time series

    Directory of Open Access Journals (Sweden)

    K. Rehfeld

    2011-06-01

    Geoscientific measurements often provide time series with irregular time sampling, requiring either data reconstruction (interpolation) or sophisticated methods to handle irregular sampling. We compare the linear interpolation technique and different approaches for analyzing the correlation functions and persistence of irregularly sampled time series, such as Lomb-Scargle Fourier transformation and kernel-based methods. In a thorough benchmark test we investigate the performance of these techniques.

    All methods have comparable root mean square errors (RMSEs) for low skewness of the inter-observation time distribution. For high skewness, i.e. very irregular data, interpolation bias and RMSE increase strongly. We find a 40% lower RMSE for the lag-1 autocorrelation function (ACF) for the Gaussian kernel method vs. the linear interpolation scheme in the analysis of highly irregular time series. For the cross correlation function (CCF) the RMSE is then lower by 60%. The application of the Lomb-Scargle technique gave results comparable to the kernel methods for the univariate case, but poorer results in the bivariate case. Especially the high-frequency components of the signal, where classical methods show a strong bias in ACF and CCF magnitude, are preserved when using the kernel methods.

    We illustrate the performance of interpolation vs. the Gaussian kernel method by applying both to paleo-data from four locations, reflecting late Holocene Asian monsoon variability as derived from speleothem δ18O measurements. Cross correlation results are similar for both methods, which we attribute to the long time scales of the common variability. The persistence time (memory) is strongly overestimated when using the standard, interpolation-based approach. Hence, the Gaussian kernel is a reliable and more robust estimator with significant advantages compared to other techniques, and it is suitable for large-scale application to paleo-data.
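
    A minimal sketch of the Gaussian-kernel cross-correlation idea: every observation pair contributes to the estimate at lag τ with a weight that decays with the mismatch between its time difference and τ, so no interpolation is needed. The bandwidth choice and the synthetic series are illustrative assumptions, not the paper's benchmark settings:

```python
import numpy as np

def gaussian_kernel_ccf(tx, x, ty, y, lags, h):
    """Cross-correlation of two irregularly sampled series: every observation
    pair is weighted by a Gaussian kernel of (time difference - lag)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    dt = ty[None, :] - tx[:, None]          # all pairwise time differences
    xy = x[:, None] * y[None, :]
    out = []
    for lag in lags:
        w = np.exp(-0.5 * ((dt - lag) / h) ** 2)
        out.append(np.sum(w * xy) / np.sum(w))
    return np.array(out)

rng = np.random.default_rng(0)
tx = np.sort(rng.uniform(0, 100, 300))
ty = np.sort(rng.uniform(0, 100, 300))
x = np.sin(2 * np.pi * tx / 20) + 0.3 * rng.normal(size=300)
y = np.sin(2 * np.pi * (ty - 3) / 20) + 0.3 * rng.normal(size=300)  # y lags x by 3

lags = np.arange(-10, 11)
ccf = gaussian_kernel_ccf(tx, x, ty, y, lags, h=0.5)   # bandwidth ~ mean spacing
print("estimated lag:", lags[np.argmax(ccf)])
```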

  8. Integrity evaluation of power plant components by fracture mechanics and related techniques

    International Nuclear Information System (INIS)

    Mukherjee, B.; Vanderglas, M.L.; Davies, P.H.

    1982-01-01

    Power plant components can be subject to unexpected failures with serious consequences, unless careful attention is paid to minute crack defects and their possible growth. The Linear Elastic Fracture Mechanics approach to structural integrity evaluation, as it appears in the ASME Code, is discussed. Projects related to material data generation and the development of structural analysis methods to make the above method usable are described. Several integrity-related questions outside the scope of the Code guidelines are documented, concluding with comments on possible future developments

  9. Magnetic Flux Leakage and Principal Component Analysis for metal loss approximation in a pipeline

    International Nuclear Information System (INIS)

    Ruiz, M; Mujica, L E; Quintero, M; Florez, J; Quintero, S

    2015-01-01

    Safety and reliability of hydrocarbon transportation pipelines represent a critical aspect for the Oil and Gas industry. Pipeline failures caused by corrosion, external agents, and other factors can develop into leaks or even rupture, which can negatively impact population, natural environment, infrastructure, and economy. It is imperative to have accurate inspection tools traveling through the pipeline to diagnose its integrity. Accordingly, over the last few years, different techniques under the concept of structural health monitoring (SHM) have continuously been in development. This work is based on a hybrid methodology that combines the Magnetic Flux Leakage (MFL) and Principal Component Analysis (PCA) approaches. The MFL technique induces a magnetic field in the pipeline's walls, and the data are recorded by sensors measuring the leakage magnetic field in segments with metal loss due to cracking, corrosion, and other defects. The data provide information on a pipeline with approximately 15 years of operation, which transports gas, has a diameter of 20 inches, and has a total length of 110 km (with several changes in the topography). PCA, in turn, is a well-known technique that compresses the data and extracts the most relevant information, facilitating the detection of damage in several types of structures. The goal of this work is to detect and localize critical metal loss in a pipeline currently in operation. (paper)

  10. Magnetic Flux Leakage and Principal Component Analysis for metal loss approximation in a pipeline

    Science.gov (United States)

    Ruiz, M.; Mujica, L. E.; Quintero, M.; Florez, J.; Quintero, S.

    2015-07-01

    Safety and reliability of hydrocarbon transportation pipelines represent a critical aspect for the Oil and Gas industry. Pipeline failures caused by corrosion, external agents, and other factors can develop into leaks or even rupture, which can negatively impact population, natural environment, infrastructure, and economy. It is imperative to have accurate inspection tools traveling through the pipeline to diagnose its integrity. Accordingly, over the last few years, different techniques under the concept of structural health monitoring (SHM) have continuously been in development. This work is based on a hybrid methodology that combines the Magnetic Flux Leakage (MFL) and Principal Component Analysis (PCA) approaches. The MFL technique induces a magnetic field in the pipeline's walls, and the data are recorded by sensors measuring the leakage magnetic field in segments with metal loss due to cracking, corrosion, and other defects. The data provide information on a pipeline with approximately 15 years of operation, which transports gas, has a diameter of 20 inches, and has a total length of 110 km (with several changes in the topography). PCA, in turn, is a well-known technique that compresses the data and extracts the most relevant information, facilitating the detection of damage in several types of structures. The goal of this work is to detect and localize critical metal loss in a pipeline currently in operation.
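
    A common way to operationalize the PCA side of such a hybrid is a baseline subspace model of healthy segments plus a squared-prediction-error (Q) statistic for new segments. A minimal sketch on synthetic sensor features; the sensor count, model order, and 99th-percentile threshold are assumptions, not the paper's settings:

```python
import numpy as np

def fit_baseline(X, k):
    """PCA model of healthy-segment MFL features (rows: segments, cols: sensors)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k].T                       # mean and retained loadings P

def q_statistic(X, mu, P):
    """Squared prediction error of segments against the baseline PCA model."""
    E = (X - mu) - (X - mu) @ P @ P.T         # residual outside the PCA subspace
    return np.sum(E**2, axis=1)

rng = np.random.default_rng(0)
mix = rng.normal(size=(12, 12))               # correlations among 12 Hall sensors
healthy = rng.normal(size=(400, 12)) @ mix    # baseline inspection segments
mu, P = fit_baseline(healthy, k=4)

test = rng.normal(size=(6, 12)) @ mix         # new inspection run
test[2] += 5.0                                # simulated metal-loss signature
q = q_statistic(test, mu, P)
threshold = np.percentile(q_statistic(healthy, mu, P), 99)
print("flagged segments:", np.flatnonzero(q > threshold))
```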

  11. Radionuclide X-ray fluorescence analysis of components of the environment

    International Nuclear Information System (INIS)

    Toelgyessy, J.; Havranek, E.; Dejmkova, E.

    1983-12-01

    The physical foundations and methodology of radionuclide X-ray fluorescence analysis are described. The sources of air, water and soil pollution are listed, and the transfer of impurities into biological materials is described. A detailed description is presented of the sampling of air, soil and biological materials and their preparation for analysis. Greatest attention is devoted to radionuclide X-ray fluorescence analysis of the components of the environment. (ES)

  12. Sparse principal component analysis in medical shape modeling

    Science.gov (United States)

    Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus

    2006-03-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach in which each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and to sparse PCA obtained by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
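
    A minimal sketch of one such reordering: fit a sparse PCA (here scikit-learn's SparsePCA, not the article's Matlab implementation), then rank the modes by the variance of their scores. Because sparse components are not orthogonal, per-mode score variance is only an approximate "variance explained"; the synthetic data are an assumption:

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))
X[:, :5] += 3.0 * rng.normal(size=(100, 1))   # one strong, spatially localized effect

spca = SparsePCA(n_components=4, alpha=1.0, random_state=0).fit(X)
scores = spca.transform(X)

var = scores.var(axis=0)                      # approximate variance per sparse mode
order = np.argsort(var)[::-1]                 # reorder modes by decreasing variance
components = spca.components_[order]
print("variance per reordered mode:", var[order].round(2))
print("nonzero loadings per mode:  ", (components != 0).sum(axis=1))
```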

  13. Review and classification of variability analysis techniques with clinical applications

    Science.gov (United States)

    2011-01-01

    Analysis of patterns of variation of time-series, termed variability analysis, represents a rapidly evolving discipline with increasing applications in different fields of science. In medicine and in particular critical care, efforts have focussed on evaluating the clinical utility of variability. However, the growth and complexity of techniques applicable to this field have made interpretation and understanding of variability more challenging. Our objective is to provide an updated review of variability analysis techniques suitable for clinical applications. We review more than 70 variability techniques, providing for each technique a brief description of the underlying theory and assumptions, together with a summary of clinical applications. We propose a revised classification for the domains of variability techniques, which include statistical, geometric, energetic, informational, and invariant. We discuss the process of calculation, often necessitating a mathematical transform of the time-series. Our aims are to summarize a broad literature, promote a shared vocabulary that would improve the exchange of ideas, and the analyses of the results between different studies. We conclude with challenges for the evolving science of variability analysis. PMID:21985357

  14. Review and classification of variability analysis techniques with clinical applications.

    Science.gov (United States)

    Bravi, Andrea; Longtin, André; Seely, Andrew J E

    2011-10-10

    Analysis of patterns of variation of time-series, termed variability analysis, represents a rapidly evolving discipline with increasing applications in different fields of science. In medicine and in particular critical care, efforts have focussed on evaluating the clinical utility of variability. However, the growth and complexity of techniques applicable to this field have made interpretation and understanding of variability more challenging. Our objective is to provide an updated review of variability analysis techniques suitable for clinical applications. We review more than 70 variability techniques, providing for each technique a brief description of the underlying theory and assumptions, together with a summary of clinical applications. We propose a revised classification for the domains of variability techniques, which include statistical, geometric, energetic, informational, and invariant. We discuss the process of calculation, often necessitating a mathematical transform of the time-series. Our aims are to summarize a broad literature, promote a shared vocabulary that would improve the exchange of ideas, and the analyses of the results between different studies. We conclude with challenges for the evolving science of variability analysis.

  15. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis

    Science.gov (United States)

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Due to advances in sensor technology, growing volumes of large medical image data make it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes, and the characterization of disease progression. At the same time, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. Widely used methods for removing irrelevant features are sparse clustering algorithms that use a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which are difficult to determine in practice. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both the liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms current sparse clustering algorithms in image cluster analysis. PMID:26196383

  16. Fast principal component analysis for stacking seismic data

    Science.gov (United States)

    Wu, Juan; Bai, Min

    2018-04-01

    Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and enhance the principal components to a great extent. Traditional average-based seismic stacking methods cannot obtain optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data that is not sensitive to the noise level. Considering the computational bottleneck of the classic PCA algorithm in processing massive seismic data, we propose an efficient PCA algorithm to make the method readily applicable for industrial applications. Two numerically designed examples and one real seismic data set are used to demonstrate the performance of the presented method.
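
    A minimal sketch of the underlying idea: treat a moveout-corrected gather as a traces-by-samples matrix, take its rank-1 (first principal component) approximation with a randomized SVD for speed, and average that model instead of the raw traces. The synthetic gather and the use of scikit-learn's randomized_svd are assumptions, not the authors' fast PCA algorithm:

```python
import numpy as np
from sklearn.utils.extmath import randomized_svd

def pca_stack(gather):
    """Stack a moveout-corrected gather (n_traces x n_samples) along its
    first principal component rather than by plain averaging."""
    U, S, Vt = randomized_svd(gather, n_components=1, random_state=0)
    weights = U[:, 0]
    if weights.mean() < 0:                  # resolve the SVD sign ambiguity
        weights, Vt = -weights, -Vt
    # rank-1 model: gather ~ outer(weights, S[0] * Vt[0]); average its rows
    return S[0] * weights.mean() * Vt[0]

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
wavelet = np.exp(-((t - 0.4) / 0.02) ** 2) * np.cos(2 * np.pi * 30.0 * (t - 0.4))
amps = rng.uniform(0.5, 1.5, size=(40, 1))          # trace-to-trace amplitude variation
gather = amps * wavelet + 1.0 * rng.normal(size=(40, 500))

corr = lambda s: np.corrcoef(s, wavelet)[0, 1]
print(f"mean stack corr: {corr(gather.mean(axis=0)):.3f}")
print(f"PCA stack corr:  {corr(pca_stack(gather)):.3f}")
```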

  17. Quantitative Analysis of TDLUs using Adaptive Morphological Shape Techniques.

    Science.gov (United States)

    Rosebrock, Adrian; Caban, Jesus J; Figueroa, Jonine; Gierach, Gretchen; Linville, Laura; Hewitt, Stephen; Sherman, Mark

    2013-03-29

    Within the complex branching system of the breast, terminal duct lobular units (TDLUs) are the anatomical location where most cancer originates. With aging, TDLUs undergo physiological involution, reflected in a loss of structural components (acini) and a reduction in total number. Data suggest that women undergoing benign breast biopsies that do not show age-appropriate involution are at increased risk of developing breast cancer. To date, TDLU assessments have generally been made by qualitative visual assessment rather than by objective quantitative analysis. This paper introduces a technique to automatically estimate a set of quantitative measurements and use those variables to more objectively describe and classify TDLUs. To validate the accuracy of our system, we computed the morphological properties of 51 TDLUs in breast tissues donated for research by volunteers in the Susan G. Komen Tissue Bank and compared the results to those of a pathologist, demonstrating 70% agreement. Secondly, to show that our method is applicable to a wider range of datasets, we analyzed 52 TDLUs from biopsies performed for clinical indications in the National Cancer Institute's Breast Radiology Evaluation and Study of Tissues (BREAST) Stamp Project and obtained 82% correlation with visual assessment. Lastly, we demonstrate the ability to uncover novel measures when researching the structural properties of the acini by applying machine learning and clustering techniques. Through our study we found that while the number of acini per TDLU increases exponentially with the TDLU diameter, the average elongation and roundness remain constant.

  18. Path and correlation analysis of perennial ryegrass (Lolium perenne L.) seed yield components

    DEFF Research Database (Denmark)

    Abel, Simon; Gislum, René; Boelt, Birte

    2017-01-01

    Maximum perennial ryegrass seed production potential is substantially greater than harvested yields, with harvested yields representing only 20% of calculated potential. As in wheat, maize and other agriculturally important crops, seed yield is highly dependent on a number of interacting seed yield components. This research was performed to apply and describe path analysis of perennial ryegrass seed yield components in relation to harvested seed yields. Utilising extensive yield component data, which included subdividing reproductive inflorescences into five size categories, path analysis was undertaken assuming a unidirectional causal-admissible relationship between seed yield components and harvested seed yield in six commercial seed production fields. Both spikelets per inflorescence and florets per spikelet had a significant direct effect on seed yield; however, total

  19. Failure trend analysis for safety related components of Korean standard NPPs

    International Nuclear Information System (INIS)

    Choi, Sun Yeong; Han, Sang Hoon

    2005-01-01

    Component reliability data reflecting plant-specific characteristics are required for the PSA of Korean nuclear power plants. We have performed a project to develop the component reliability database KIND (Korea Integrated Nuclear Reliability Database) and software for database management and component reliability analysis. Based on this system, we collected component operation data and failure/repair data from the start of plant operation to 2002 for YGN 3,4 and UCN 3,4. Recently, we provided the component failure rate data for the UCN 3,4 standard PSA model from KIND. We evaluated the components with high-ranking failure rates using the component reliability data from the start of plant operation to 1998 and 2000 for YGN 3,4 and UCN 3,4, respectively, and identified their frequently occurring failure modes. In this study, we analyze component failure trends and perform a site comparison against generic data, using the component reliability data extended to 2002 for UCN 3,4 and YGN 3,4, respectively. We focus on major safety-related rotating components such as pumps and EDGs

  20. A Genealogical Interpretation of Principal Components Analysis

    Science.gov (United States)

    McVean, Gil

    2009-01-01

    Principal components analysis, PCA, is a statistical method commonly used in population genetics to identify structure in the distribution of genetic variation across geographical location and ethnic background. However, while the method is often used to inform about historical demographic processes, little is known about the relationship between fundamental demographic parameters and the projection of samples onto the primary axes. Here I show that for SNP data the projection of samples onto the principal components can be obtained directly from considering the average coalescent times between pairs of haploid genomes. The result provides a framework for interpreting PCA projections in terms of underlying processes, including migration, geographical isolation, and admixture. I also demonstrate a link between PCA and Wright's F_ST and show that SNP ascertainment has a largely simple and predictable effect on the projection of samples. Using examples from human genetics, I discuss the application of these results to empirical data and the implications for inference. PMID:19834557
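
    The coalescent-time result has a compact computational reading: double-center the matrix of average pairwise coalescent times (classical multidimensional scaling) and eigendecompose it to recover the sample projections. A minimal sketch on a toy two-deme time matrix; the within/between times are illustrative assumptions:

```python
import numpy as np

def pca_from_coalescent_times(T):
    """Sample projections from a matrix of average pairwise coalescent times,
    via Gower double-centering (classical MDS) and eigendecomposition."""
    n = T.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ T @ J                          # centered similarity matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    return vecs * np.sqrt(np.maximum(vals, 0.0))  # coordinates on principal axes

# toy example: two demes, longer coalescent times between demes than within
labels = np.array([0] * 5 + [1] * 5)
T = np.where(labels[:, None] == labels[None, :], 1.0, 3.0)
np.fill_diagonal(T, 0.0)
coords = pca_from_coalescent_times(T)
print("PC1 separates the demes:", coords[:, 0].round(2))
```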

  1. Representation for dialect recognition using topographic independent component analysis

    Science.gov (United States)

    Wei, Qu

    2004-10-01

    In dialect speech recognition, the feature of tone in a dialect is subject to changes in pitch frequency as well as in the length of the tone. It is beneficial for recognition if a representation can be derived that accounts for the frequency and length changes of tone in an effective and meaningful way. In this paper, we propose a method for learning such a representation from a set of unlabeled speech sentences containing the dialect features varied over pitch frequency and time length. Topographic independent component analysis (TICA) is applied for the unsupervised learning to produce an emergent result that is a topographic matrix made up of basis components. The dialect speech is topographic in the following sense: the basis components, as the units of the speech, are ordered in the feature matrix such that components of one dialect are grouped along one axis and changes in time windows are accounted for along the other axis. This provides a meaningful set of basis vectors that may be used to construct dialect subspaces for dialect speech recognition.

  2. Sensitivity analysis on the component cooling system of the Angra 1 NPP

    International Nuclear Information System (INIS)

    Castro Silva, Luiz Euripedes Massiere de

    1995-01-01

    The component cooling system has been studied within the scope of the Probabilistic Safety Analysis of the Angra I NPP in order to assure that the proposed modelling matches the functioning system and its availability aspects as closely as possible. To this end, a sensitivity analysis was performed on the equivalence between the operating modes of the component cooling system, and its results show the fitness of the model. (author). 4 refs, 3 figs, 3 tabs

  3. Artificial intelligence techniques used in respiratory sound analysis--a systematic review.

    Science.gov (United States)

    Palaniappan, Rajkumar; Sundaraj, Kenneth; Sundaraj, Sebastian

    2014-02-01

    Artificial intelligence (AI) has recently been established as an alternative to many conventional methods. The implementation of AI techniques for respiratory sound analysis can assist medical professionals in the diagnosis of lung pathologies. This article highlights the importance of AI techniques in the implementation of computer-based respiratory sound analysis. Articles on computer-based respiratory sound analysis using AI techniques were identified through searches of various electronic resources, such as the IEEE, Springer, Elsevier, PubMed, and ACM digital library databases. Brief descriptions of the types of respiratory sounds and their respective characteristics are provided. We then analyzed each of the previous studies to determine the specific respiratory sounds/pathology analyzed, the number of subjects, the signal processing method used, the AI techniques used, and the performance of the AI technique in the analysis of respiratory sounds. A detailed description of each of these studies is provided. In conclusion, this article provides recommendations for further advancements in respiratory sound analysis.

  4. A methodological comparison of customer service analysis techniques

    Science.gov (United States)

    James Absher; Alan Graefe; Robert Burns

    2003-01-01

    Techniques used to analyze customer service data need to be studied. Two primary analysis protocols, importance-performance analysis (IP) and gap score analysis (GA), are compared in a side-by-side comparison using data from two major customer service research projects. A central concern is what, if any, conclusion might be different due solely to the analysis...

  5. Reliability-based sensitivity of mechanical components with arbitrary distribution parameters

    International Nuclear Information System (INIS)

    Zhang, Yi Min; Yang, Zhou; Wen, Bang Chun; He, Xiang Dong; Liu, Qiaoling

    2010-01-01

    This paper presents a reliability-based sensitivity method for mechanical components with arbitrary distribution parameters. Techniques from the perturbation method, the Edgeworth series, the reliability-based design theory, and the sensitivity analysis approach were employed directly to calculate the reliability-based sensitivity of mechanical components on the condition that the first four moments of the original random variables are known. The reliability-based sensitivity information of the mechanical components can be accurately and quickly obtained using a practical computer program. The effects of the design parameters on the reliability of mechanical components were studied. The method presented in this paper provides the theoretic basis for the reliability-based design of mechanical components

  6. Automated thermal mapping techniques using chromatic image analysis

    Science.gov (United States)

    Buck, Gregory M.

    1989-01-01

    Thermal imaging techniques are introduced using a chromatic image analysis system and temperature sensitive coatings. These techniques are used for thermal mapping and surface heat transfer measurements on aerothermodynamic test models in hypersonic wind tunnels. Measurements are made on complex vehicle configurations in a timely manner and at minimal expense. The image analysis system uses separate wavelength filtered images to analyze surface spectral intensity data. The system was initially developed for quantitative surface temperature mapping using two-color thermographic phosphors but was found useful in interpreting phase change paint and liquid crystal data as well.

  7. Reliability Analysis of 6-Component Star Markov Repairable System with Spatial Dependence

    Directory of Open Access Journals (Sweden)

    Liying Wang

    2017-01-01

    Full Text Available Star repairable systems with spatial dependence consist of a center component and several peripheral components. The peripheral components are arranged around the center component, and the performance of each component depends on its spatial “neighbors.” A vector Markov process is adopted to describe the performance of the system. The state space and transition rate matrix corresponding to the 6-component star Markov repairable system with spatial dependence are presented via a probability analysis method. Several reliability indices, such as the availability and the probabilities of visiting the safety, degradation, alert, and failed state sets, are obtained by the Laplace transform method, and a numerical example is provided to illustrate the results.
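
    The workflow in the record — enumerate a state space, write the transition rate matrix, and extract reliability indices — can be sketched on a much smaller repairable system. Below is a toy two-component example with illustrative rates (not the paper's 6-component star model); the matrix exponential plays the numerical role of the Laplace-transform solution for transient probabilities.

```python
import numpy as np
from scipy.linalg import expm

# States: 0 = both components up, 1 = one down, 2 = both down (system failed).
lam, mu = 0.01, 0.5                      # illustrative failure and repair rates
Q = np.array([
    [-2 * lam,       2 * lam,   0.0],
    [      mu, -(mu + lam),     lam],
    [     0.0,          mu,     -mu],
])

# Steady-state distribution: solve pi @ Q = 0 with probabilities summing to 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
availability = pi[0] + pi[1]             # up unless both components are down

# Transient state probabilities: P(t) = p0 @ expm(Q t).
p0 = np.array([1.0, 0.0, 0.0])
p_t = p0 @ expm(Q * 10.0)                # state distribution after 10 time units
print(availability, p_t)
```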

  8. Nonlinear Ultrasonic Techniques to Monitor Radiation Damage in RPV and Internal Components

    Energy Technology Data Exchange (ETDEWEB)

    Jacobs, Laurence [Georgia Inst. of Technology, Atlanta, GA (United States); Kim, Jin-Yeon [Georgia Inst. of Technology, Atlanta, GA (United States); Qu, Jisnmin [Northwestern Univ., Evanston, IL (United States); Ramuhalli, Pradeep [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wall, Joe [Electric Power Research Inst. (EPRI), Knoxville, TN (United States)

    2015-11-02

    The objective of this research is to demonstrate that nonlinear ultrasonics (NLU) can be used to directly and quantitatively measure the remaining life in radiation damaged reactor pressure vessel (RPV) and internal components. Specific damage types to be monitored are irradiation embrittlement and irradiation assisted stress corrosion cracking (IASCC). Our vision is to develop a technique that allows operators to assess damage by making a limited number of NLU measurements in strategically selected critical reactor components during regularly scheduled outages. This measured data can then be used to determine the current condition of these key components, from which remaining useful life can be predicted. Methods to unambiguously characterize radiation related damage in reactor internals and RPVs remain elusive. NLU technology has demonstrated great potential to be used as a material sensor – a sensor that can continuously monitor a material’s damage state. The physical effect being monitored by NLU is the generation of higher harmonic frequencies in an initially monochromatic ultrasonic wave. The degree of nonlinearity is quantified with the acoustic nonlinearity parameter, β, which is an absolute, measurable material constant. Recent research has demonstrated that nonlinear ultrasound can be used to characterize material state and changes in microscale characteristics such as internal stress states, precipitate formation and dislocation densities. Radiation damage reduces the fracture toughness of RPV steels and internals, and can leave them susceptible to IASCC, which may in turn limit the lifetimes of some operating reactors. The ability to characterize radiation damage in the RPV and internals will enable nuclear operators to set operation time thresholds for vessels and prescribe and schedule replacement activities for core internals. Such a capability will allow a clearer definition of reactor safety margins. The research consists of three tasks.

  9. Nonlinear Ultrasonic Techniques to Monitor Radiation Damage in RPV and Internal Components

    International Nuclear Information System (INIS)

    Jacobs, Laurence; Kim, Jin-Yeon; Qu, Jisnmin; Ramuhalli, Pradeep; Wall, Joe

    2015-01-01

    The objective of this research is to demonstrate that nonlinear ultrasonics (NLU) can be used to directly and quantitatively measure the remaining life in radiation damaged reactor pressure vessel (RPV) and internal components. Specific damage types to be monitored are irradiation embrittlement and irradiation assisted stress corrosion cracking (IASCC). Our vision is to develop a technique that allows operators to assess damage by making a limited number of NLU measurements in strategically selected critical reactor components during regularly scheduled outages. This measured data can then be used to determine the current condition of these key components, from which remaining useful life can be predicted. Methods to unambiguously characterize radiation related damage in reactor internals and RPVs remain elusive. NLU technology has demonstrated great potential to be used as a material sensor - a sensor that can continuously monitor a material's damage state. The physical effect being monitored by NLU is the generation of higher harmonic frequencies in an initially monochromatic ultrasonic wave. The degree of nonlinearity is quantified with the acoustic nonlinearity parameter, β, which is an absolute, measurable material constant. Recent research has demonstrated that nonlinear ultrasound can be used to characterize material state and changes in microscale characteristics such as internal stress states, precipitate formation and dislocation densities. Radiation damage reduces the fracture toughness of RPV steels and internals, and can leave them susceptible to IASCC, which may in turn limit the lifetimes of some operating reactors. The ability to characterize radiation damage in the RPV and internals will enable nuclear operators to set operation time thresholds for vessels and prescribe and schedule replacement activities for core internals. Such a capability will allow a clearer definition of reactor safety margins. The research consists of three tasks.
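
    In practice the acoustic nonlinearity parameter described in the two records above is estimated from the fundamental and second-harmonic amplitudes of the received wave, with β proportional to A₂/A₁². A minimal sketch on a synthetic waveform (all values illustrative; a real measurement needs calibrated amplitudes and diffraction/attenuation corrections):

```python
import numpy as np

fs, f0, x = 100e6, 5e6, 0.02      # sample rate [Hz], drive frequency [Hz], path [m]
t = np.arange(0, 20e-6, 1 / fs)   # 2000 samples, so f0 falls exactly on an FFT bin
sig = 1.0 * np.sin(2 * np.pi * f0 * t) + 1e-3 * np.sin(2 * np.pi * 2 * f0 * t)

spec = np.abs(np.fft.rfft(sig)) * 2 / len(sig)
freqs = np.fft.rfftfreq(len(sig), 1 / fs)
A1 = spec[np.argmin(np.abs(freqs - f0))]        # fundamental amplitude
A2 = spec[np.argmin(np.abs(freqs - 2 * f0))]    # second-harmonic amplitude

# beta = 8 * A2 / (k**2 * x * A1**2), assuming ~5900 m/s longitudinal wave speed.
k = 2 * np.pi * f0 / 5900.0
beta = 8 * A2 / (k**2 * x * A1**2)
print(A1, A2, beta)
```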

  10. Modular techniques for dynamic fault-tree analysis

    Science.gov (United States)

    Patterson-Hine, F. A.; Dugan, Joanne B.

    1992-01-01

    It is noted that current approaches used to assess the dependability of complex systems such as Space Station Freedom and the Air Traffic Control System are incapable of handling the size and complexity of these highly integrated designs. A novel technique for modeling such systems which is built upon current techniques in Markov theory and combinatorial analysis is described. It enables the development of a hierarchical representation of system behavior which is more flexible than either technique alone. A solution strategy which is based on an object-oriented approach to model representation and evaluation is discussed. The technique is virtually transparent to the user since the fault tree models can be built graphically and the objects defined automatically. The tree modularization procedure allows the two model types, Markov and combinatoric, to coexist and does not require that the entire fault tree be translated to a Markov chain for evaluation. This effectively reduces the size of the Markov chain required and enables solutions with less truncation, making analysis of longer mission times possible. Using the fault-tolerant parallel processor as an example, a model is built and solved for a specific mission scenario and the solution approach is illustrated in detail.
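
    The combinatorial half of such a modular scheme is easy to sketch: static subtrees with independent basic events are evaluated bottom-up, while dynamic modules would be solved separately as Markov chains and re-enter the tree as single pseudo-events carrying their computed probabilities. A toy example with hypothetical event names and probabilities:

```python
# Bottom-up evaluation of a static fault tree with independent basic events.
def gate_prob(node, basic):
    kind, children = node
    probs = [basic[c] if isinstance(c, str) else gate_prob(c, basic)
             for c in children]
    if kind == "AND":                      # all children must fail
        out = 1.0
        for p in probs:
            out *= p
        return out
    out = 1.0                              # OR gate: at least one child fails
    for p in probs:
        out *= 1.0 - p
    return 1.0 - out

basic = {"pump_a": 1e-3, "pump_b": 1e-3, "valve": 5e-4}   # hypothetical events
tree = ("OR", [("AND", ["pump_a", "pump_b"]), "valve"])   # loss-of-flow top event
print(gate_prob(tree, basic))                             # ~5.01e-4
```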

  11. Optimization benefits analysis in production process of fabrication components

    Science.gov (United States)

    Prasetyani, R.; Rafsanjani, A. Y.; Rimantho, D.

    2017-12-01

    The determination of an optimal number of product combinations is important. The main problem at the part and service department of PT. United Tractors Pandu Engineering (shortened to PT. UTPE) is the optimization of the combination of fabrication component products (known as liner plates), which influences the profit obtained by the company. A liner plate is a fabrication component that serves as a protector of the core structure of a heavy duty attachment, such as an HD Vessel, HD Bucket, HD Shovel, or HD Blade. The graph of liner plate sales from January to December 2016 fluctuates, and no direct conclusion can be drawn from it about the optimal production of these fabrication components. The optimal product combination can be achieved by calculating and plotting the amounts of production output and input appropriately. The method used in this study is linear programming with primal, dual, and sensitivity analysis, using QM software for Windows to obtain the optimal fabrication components. At the optimal combination of components, PT. UTPE gains a profit increase of Rp. 105,285,000.00, for a total of Rp. 3,046,525,000.00 per month, with a total production across the product variants of 71 units per month.
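
    The primal side of such a product-mix problem is a standard linear program. A sketch with SciPy, using illustrative placeholder coefficients rather than the study's data; the dual and sensitivity analysis the study mentions would start from the constraint marginals reported by the solver:

```python
from scipy.optimize import linprog

c = [-40.0, -30.0]                 # negated per-unit profits: linprog minimizes
A_ub = [[2.0, 1.0],                # cutting hours per unit of each variant
        [1.0, 3.0]]                # welding hours per unit of each variant
b_ub = [100.0, 90.0]               # hours available per month

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)             # optimal mix and total profit
# With the HiGHS-based solvers, res.ineqlin.marginals holds the constraint
# duals, the starting point for dual and sensitivity analysis.
```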

  12. Key-space analysis of double random phase encryption technique

    Science.gov (United States)

    Monaghan, David S.; Gopinathan, Unnikrishnan; Naughton, Thomas J.; Sheridan, John T.

    2007-09-01

    We perform a numerical analysis on the double random phase encryption/decryption technique. The key-space of an encryption technique is the set of possible keys that can be used to encode data using that technique. In the case of a strong encryption scheme, many keys must be tried in any brute-force attack on that technique. Traditionally, designers of optical image encryption systems demonstrate only how a small number of arbitrary keys cannot decrypt a chosen encrypted image in their system. However, this type of demonstration does not discuss the properties of the key-space nor refute the feasibility of an efficient brute-force attack. To clarify these issues we present a key-space analysis of the technique. For a range of problem instances we plot the distribution of decryption errors in the key-space indicating the lack of feasibility of a simple brute-force attack.
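
    The encryption scheme under analysis is compact enough to state in a few lines: a random phase mask in the input plane, a second in the Fourier plane, and decryption by conjugating the Fourier-plane key. A minimal sketch of the transform and of the error measure a brute-force sweep would record (synthetic stand-in image):

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((64, 64))                     # stand-in for the plaintext image

# Encryption: random phase masks in the input plane and the Fourier plane.
phi1 = np.exp(2j * np.pi * rng.random(img.shape))
phi2 = np.exp(2j * np.pi * rng.random(img.shape))
enc = np.fft.ifft2(np.fft.fft2(img * phi1) * phi2)

# Decryption with the correct Fourier-plane key recovers the image amplitude.
dec = np.abs(np.fft.ifft2(np.fft.fft2(enc) * np.conj(phi2)))

# A brute-force attack sweeps candidate keys and records the decryption error.
wrong = np.exp(2j * np.pi * rng.random(img.shape))
bad = np.abs(np.fft.ifft2(np.fft.fft2(enc) * np.conj(wrong)))
print(np.linalg.norm(dec - img), np.linalg.norm(bad - img))   # ~0 vs. large
```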

  13. An assessment of independent component analysis for detection of military targets from hyperspectral images

    Science.gov (United States)

    Tiwari, K. C.; Arora, M. K.; Singh, D.

    2011-10-01

    Hyperspectral data acquired over hundreds of narrow contiguous wavelength bands are extremely suitable for target detection due to their high spectral resolution. Though the spectral response of every material is expected to be unique, in practice it exhibits variations, which is known as spectral variability. Most target detection algorithms depend on spectral modelling using a priori available target spectra. In practice, however, target spectra are seldom available a priori. Independent component analysis (ICA) is a new evolving technique that aims at finding components which are statistically independent, or as independent as possible. The technique therefore has the potential of being used for target detection applications. An assessment of target detection from hyperspectral images using ICA and other algorithms based on spectral modelling may be of immense interest, since ICA does not require a priori target information. The aim of this paper is, thus, to assess the potential of an ICA based algorithm vis-a-vis other prevailing algorithms for military target detection. Four spectral matching algorithms, namely Orthogonal Subspace Projection (OSP), Constrained Energy Minimisation (CEM), Spectral Angle Mapper (SAM) and Spectral Correlation Mapper (SCM), and four anomaly detection algorithms, namely the OSP anomaly detector (OSPAD), the Reed-Xiaoli anomaly detector (RXD), the Uniform Target Detector (UTD) and a combination of the Reed-Xiaoli anomaly detector and the Uniform Target Detector (RXD-UTD), were considered. The experiments were conducted using a set of synthetic and AVIRIS hyperspectral images containing aircraft as military targets. A comparison of the true positive and false positive rates of target detections obtained from ICA and the other algorithms, plotted in receiver operating characteristic (ROC) space, indicates the superior performance of ICA over the other algorithms.
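
    Of the spectral-matching detectors listed, the Spectral Angle Mapper is the simplest to write down, and it makes the paper's point concrete: SAM needs an a priori target spectrum, which is exactly what ICA-based detection avoids. A sketch on synthetic stand-in data:

```python
import numpy as np

def spectral_angle(pixel, target):
    """Spectral Angle Mapper: angle between pixel and target spectra."""
    cos = pixel @ target / (np.linalg.norm(pixel) * np.linalg.norm(target))
    return np.arccos(np.clip(cos, -1.0, 1.0))

rng = np.random.default_rng(2)
cube = rng.random((100, 100, 50))      # hypothetical 50-band hyperspectral scene
target = rng.random(50)                # a priori target spectrum required by SAM
angles = np.apply_along_axis(spectral_angle, 2, cube, target)
detections = angles < 0.1              # small angle = close spectral match
```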

  14. Inspection of disposal canisters components

    International Nuclear Information System (INIS)

    Pitkaenen, J.

    2013-12-01

    This report presents the inspection techniques for disposal canister components. Manufacturing methods and the defects related to the different manufacturing methods are described briefly. The defect types form a basis for the design of non-destructive testing, because the defect types that occur in the inspected components affect the choice of inspection methods. The canister components are the nodular cast iron insert, the steel lid, the lid screw, the metal gasket, the copper tube with integrated or separate bottom, and the copper lid. The inspection of the copper material is challenging due to the anisotropic properties of the material and local changes in the grain size of the copper. The cast iron insert shows some variation in acoustic material properties (attenuation, velocity changes, scattering properties), which makes the ultrasonic inspection demanding from a calibration point of view. Three methods are mainly used for inspection: ultrasonic testing for the volume, eddy current testing (for copper components only) and visual testing for the surface and near-surface area.

  15. TECHNIQUE OF THE STATISTICAL ANALYSIS OF INVESTMENT APPEAL OF THE REGION

    Directory of Open Access Journals (Sweden)

    А. А. Vershinina

    2014-01-01

    Full Text Available The article presents a technique for the statistical analysis of the investment appeal of a region for foreign direct investment. A definition of the statistical analysis technique is given, the stages of the analysis are described, and the mathematical-statistical tools are considered.

  16. Determination of the archaeological origin of ceramic fragments characterized by neutron activation analysis, by means of the application of multivariable statistical analysis techniques

    International Nuclear Information System (INIS)

    Almazan T, M. G.; Jimenez R, M.; Monroy G, F.; Tenorio, D.; Rodriguez G, N. L.

    2009-01-01

    The elemental composition of archaeological ceramic fragments obtained during the explorations in San Miguel Ixtapan, Mexico State, was determined by the neutron activation analysis technique. The samples were irradiated in the TRIGA Mark III research reactor with a neutron flux of 1×10¹³ n·cm⁻²·s⁻¹. The irradiation time was 2 hours. Prior to the acquisition of the gamma-ray spectra, the samples were allowed to decay for 12 to 14 days. The analyzed elements were: Nd, Ce, Lu, Eu, Yb, Pa(Th), Tb, La, Cr, Hf, Sc, Co, Fe, Cs, Rb. The statistical treatment of the data, consisting of cluster analysis and principal components analysis, allowed three different origins of the archaeological ceramics to be identified, designated as local, foreign and regional. (Author)

  17. A Fault Prognosis Strategy Based on Time-Delayed Digraph Model and Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Ningyun Lu

    2012-01-01

    Full Text Available Because of the interlinking of process equipment in the process industry, event information may propagate through the plant and affect many downstream process variables. Specifying the causality and estimating the time delays among process variables are critically important for data-driven fault prognosis. They are not only helpful for finding the root cause when a plant-wide disturbance occurs, but also reveal the evolution of an abnormal event propagating through the plant. This paper is concerned with the information-flow directionality and time-delay estimation problems in the process industry and presents an information synchronization technique to assist fault prognosis. Time-delayed mutual information (TDMI) is used for both causality analysis and time-delay estimation. To represent the causality structure of high-dimensional process variables, a time-delayed signed digraph (TD-SDG) model is developed. Then, a general fault prognosis strategy is developed based on the TD-SDG model and principal component analysis (PCA). The proposed method is applied to an air separation unit and has achieved satisfying results in predicting the frequently occurring “nitrogen-block” fault.
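
    Time-delayed mutual information can be sketched with a plain histogram estimator: compute the MI between one variable and lagged copies of another and read the propagation delay off the maximizing lag (its sign indicating direction). Toy data with a known 15-sample delay, not the air-separation-unit signals:

```python
import numpy as np

def mutual_info(x, y, bins=16):
    """Histogram estimate of the mutual information between two 1-D series."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

rng = np.random.default_rng(3)
x = np.cumsum(rng.standard_normal(2000))               # upstream variable
y = np.roll(x, 15) + 0.1 * rng.standard_normal(2000)   # downstream, delayed 15

lags = list(range(40))
tdmi = [mutual_info(x[:len(x) - lag], y[lag:]) for lag in lags]
delay = lags[int(np.argmax(tdmi))]                     # estimated delay (~15)
print(delay)
```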

  18. Characterization of aluminum/steel components from recycled swarf using the powder metallurgy as technique

    International Nuclear Information System (INIS)

    Souza, V.E.S.; Masieiro, F.R.S.; Lourenco, J.M.; Felipe, R.C.T.S.

    2009-01-01

    Full text: The powder metallurgy process consists of producing metallic or ceramic components by pressing a powder mass. The compacts are then sintered in order to consolidate them and improve their mechanical properties. Industry generates swarf from its various manufacturing processes. The main goal of this paper is the reutilization of aluminum and steel swarf using powder metallurgy. The methodology used in this work consists of compacting Al 6060 with steel SAE 1045 as reinforcement material at 250 MPa, 400 MPa and 600 MPa. The compacts contain 30%, 40% and 50% of steel in the aluminum matrix. The hardness is analyzed as a function of the compressibility and the quantity of steel. The samples are sintered at 500 °C for 45 minutes in a resistive furnace under a hydrogen atmosphere. Micrographs of the sintered samples are obtained using a scanning electron microscope and an optical microscope. X-ray diffraction is also used to characterize the phases formed by diffusion between the steel and the aluminum. (author)

  19. Development of fault diagnostic technique using reactor noise analysis

    International Nuclear Information System (INIS)

    Park, Jin Ho; Kim, J. S.; Oh, I. S.; Ryu, J. S.; Joo, Y. S.; Choi, S.; Yoon, D. B.

    1999-04-01

    The ultimate goal of this project is to establish the analysis technique to diagnose the integrity of reactor internals using reactor noise. The reactor noise analysis techniques for PWR and CANDU NPPs (nuclear power plants) were established, by which the dynamic characteristics of reactor internals and SPND instrumentation could be identified, and a noise database for each plant (both Korean and foreign) was constructed and compared. The changes in the dynamic characteristics of the Ulchin 1 and 2 reactor internals were also simulated under presumed fault conditions. Additionally, a portable reactor noise analysis system was developed so that real-time noise analysis can be performed directly at the plant site. The reactor noise analysis techniques developed, together with the database obtained from the fault simulations, can be used to establish a knowledge-based expert system to diagnose abnormal conditions in NPPs. The portable reactor noise analysis system may also be utilized as a substitute for a plant IVMS (internal vibration monitoring system). (author)

  20. Development of fault diagnostic technique using reactor noise analysis

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jin Ho; Kim, J. S.; Oh, I. S.; Ryu, J. S.; Joo, Y. S.; Choi, S.; Yoon, D. B

    1999-04-01

    The ultimate goal of this project is to establish the analysis technique to diagnose the integrity of reactor internals using reactor noise. The reactor noise analysis techniques for PWR and CANDU NPPs (nuclear power plants) were established, by which the dynamic characteristics of reactor internals and SPND instrumentation could be identified, and a noise database for each plant (both Korean and foreign) was constructed and compared. The changes in the dynamic characteristics of the Ulchin 1 and 2 reactor internals were also simulated under presumed fault conditions. Additionally, a portable reactor noise analysis system was developed so that real-time noise analysis can be performed directly at the plant site. The reactor noise analysis techniques developed, together with the database obtained from the fault simulations, can be used to establish a knowledge-based expert system to diagnose abnormal conditions in NPPs. The portable reactor noise analysis system may also be utilized as a substitute for a plant IVMS (internal vibration monitoring system). (author)

  1. Measurement of activated rCBF by the 133Xe inhalation technique: a comparison of total versus partial curve analysis

    International Nuclear Information System (INIS)

    Leli, D.A.; Katholi, C.R.; Hazelrig, J.B.; Falgout, J.C.; Hannay, H.J.; Wilson, E.M.; Wills, E.L.; Halsey, J.H. Jr.

    1985-01-01

    An initial assessment of the differential sensitivity of total versus partial curve analysis in estimating task related focal changes in cortical blood flow measured by the 133 Xe inhalation technique was accomplished by comparing the patterns during the performance of two sensorimotor tasks by normal subjects. The validity of these patterns was evaluated by comparing them to the activation patterns expected from activation studies with the intra-arterial technique and the patterns expected from neuropsychological research literature. Subjects were 10 young adult nonsmoking healthy male volunteers. They were administered two tasks having identical sensory and cognitive components but different response requirements (oral versus manual). The regional activation patterns produced by the tasks varied with the method of curve analysis. The activation produced by the two tasks was very similar to that predicted from the research literature only for total curve analysis. To the extent that the predictions are correct, these data suggest that the 133 Xe inhalation technique is more sensitive to regional flow changes when flow parameters are estimated from the total head curve. The utility of the total head curve analysis will be strengthened if similar sensitivity is demonstrated in future studies assessing normal subjects and patients with neurological and psychiatric disorders

  2. A simple iterative independent component analysis algorithm for vibration source signal identification of complex structures

    Directory of Open Access Journals (Sweden)

    Dong-Sup Lee

    2015-01-01

    Full Text Available Independent Component Analysis (ICA), one of the blind source separation methods, can be applied to extract unknown source signals from received signals alone. This is accomplished by finding statistical independence of signal mixtures, and it has been successfully applied to myriad fields such as medical science, image processing, and numerous others. Nevertheless, inherent problems have been reported when using this technique: instability and invalid ordering of the separated signals, particularly when a conventional ICA technique is used for vibratory source signal identification of complex structures. In this study, a simple iterative algorithm based on the conventional ICA is proposed to mitigate these problems. The proposed method extracts more stable source signals in a valid order by iteratively reordering the extracted mixing matrix to reconstruct the finally converged source signals, guided by the magnitudes of the correlation coefficients between the intermediately separated signals and signals measured on or near the sources. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses were carried out for a virtual response model and a 30 m class submarine model. Moreover, in order to investigate the applicability of the proposed method to a real problem on a complex structure, an experiment was carried out with a scaled submarine mockup. The results show that the proposed method can resolve the inherent problems of the conventional ICA technique.
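
    The reordering idea — anchoring each separated component to the reference signal measured on or near a source via the magnitude of their correlation — can be sketched with ordinary FastICA standing in for the paper's iterative algorithm (synthetic two-source data; all names and values illustrative):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 4000)
s = np.c_[np.sin(2 * np.pi * 50 * t),              # two hypothetical sources
          np.sign(np.sin(2 * np.pi * 13 * t))]
refs = s + 0.05 * rng.standard_normal(s.shape)     # near-source measurements
x = s @ np.array([[1.0, 0.6], [0.4, 1.0]])         # mixtures at remote sensors

est = FastICA(n_components=2, random_state=0).fit_transform(x)

# Reordering step: assign each estimated component to the reference with which
# it is most strongly correlated (conventional ICA order/sign is arbitrary).
corr = np.abs(np.corrcoef(est.T, refs.T)[:2, 2:])  # |corr|: est_i vs ref_j
order = corr.argmax(axis=0)                        # best component per reference
aligned = est[:, order]
```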

  3. Polyspectral signal analysis techniques for condition based maintenance of helicopter drive-train system

    Science.gov (United States)

    Hassan Mohammed, Mohammed Ahmed

    For an efficient maintenance of a diverse fleet of air- and rotorcraft, effective condition based maintenance (CBM) must be established based on rotating components monitored vibration signals. In this dissertation, we present theory and applications of polyspectral signal processing techniques for condition monitoring of critical components in the AH-64D helicopter tail rotor drive train system. Currently available vibration-monitoring tools are mostly built around auto- and cross-power spectral analysis which have limited performance in detecting frequency correlations higher than second order. Studying higher order correlations and their Fourier transforms, higher order spectra, provides more information about the vibration signals which helps in building more accurate diagnostic models of the mechanical system. Based on higher order spectral analysis, different signal processing techniques are developed to assess health conditions of different critical rotating-components in the AH-64D helicopter drive-train. Based on cross-bispectrum, quadratic nonlinear transfer function is presented to model second order nonlinearity in a drive-shaft running between the two hanger bearings. Then, quadratic-nonlinearity coupling coefficient between frequency harmonics of the rotating shaft is used as condition metric to study different seeded shaft faults compared to baseline case, namely: shaft misalignment, shaft imbalance, and combination of shaft misalignment and imbalance. The proposed quadratic-nonlinearity metric shows better capabilities in distinguishing the four studied shaft settings than the conventional linear coupling based on cross-power spectrum. We also develop a new concept of Quadratic-Nonlinearity Power-Index spectrum, QNLPI(f), that can be used in signal detection and classification, based on bicoherence spectrum. The proposed QNLPI(f) is derived as a projection of the three-dimensional bicoherence spectrum into two-dimensional spectrum that
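
    The bicoherence underlying these metrics — the normalized bispectrum quantifying quadratic phase coupling — can be estimated directly from segment-averaged FFTs. A minimal estimator (windowing, band selection, and bias handling simplified):

```python
import numpy as np

def bicoherence(x, nfft=256, noverlap=128):
    """Segment-averaged bicoherence b^2(f1, f2) of a 1-D signal (sketch)."""
    step = nfft - noverlap
    segs = np.asarray([x[i:i + nfft] for i in range(0, len(x) - nfft + 1, step)])
    X = np.fft.fft(np.hanning(nfft) * segs, axis=1)
    f = nfft // 4                                   # lower quarter band only
    idx = np.arange(f)[:, None] + np.arange(f)[None, :]
    B = np.zeros((f, f), complex)
    P12, P3 = np.zeros((f, f)), np.zeros((f, f))
    for Xi in X:
        x1, x2, x3 = Xi[:f, None], Xi[None, :f], Xi[idx]
        B += x1 * x2 * np.conj(x3)
        P12 += np.abs(x1 * x2) ** 2
        P3 += np.abs(x3) ** 2
    return np.abs(B) ** 2 / (P12 * P3)              # phase coupling in [0, 1]

# A quadratically coupled triplet (f1, f2, f1+f2 with locked phases) produces a
# bicoherence peak at (f1, f2); independent tones do not.
n = np.arange(1 << 14)
sig = (np.cos(2 * np.pi * 16 / 256 * n) + np.cos(2 * np.pi * 24 / 256 * n)
       + 0.5 * np.cos(2 * np.pi * 40 / 256 * n)
       + np.random.default_rng(8).standard_normal(len(n)))
b2 = bicoherence(sig)
```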

  4. [Principal component analysis and cluster analysis of inorganic elements in sea cucumber Apostichopus japonicus].

    Science.gov (United States)

    Liu, Xiao-Fang; Xue, Chang-Hu; Wang, Yu-Ming; Li, Zhao-Jie; Xue, Yong; Xu, Jie

    2011-11-01

    The present study investigates the feasibility of multi-element analysis for determining the geographical origin of the sea cucumber Apostichopus japonicus, and selects effective tracers for assessing its geographical origin. The contents of the elements Al, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, As, Se, Mo, Cd, Hg and Pb in sea cucumber Apostichopus japonicus samples from seven places of geographical origin were determined by means of ICP-MS. The results were used for the development of an elements database. Cluster analysis (CA) and principal component analysis (PCA) were applied to differentiate the geographical origins of the sea cucumbers. Three principal components, which accounted for over 89% of the total variance, were extracted from the standardized data. The results of Q-type cluster analysis showed that the 26 samples could be clustered reasonably into five groups, and the classification results were significantly associated with the marine distribution of the sea cucumber samples. CA and PCA were effective methods for the elemental analysis of sea cucumber Apostichopus japonicus samples. The mineral element contents of sea cucumber Apostichopus japonicus samples are good chemical descriptors for differentiating their geographical origins.

  5. Functional Connectivity Parcellation of the Human Thalamus by Independent Component Analysis.

    Science.gov (United States)

    Zhang, Sheng; Li, Chiang-Shan R

    2017-11-01

    As a key structure to relay and integrate information, the thalamus supports multiple cognitive and affective functions through the connectivity between its subnuclei and cortical and subcortical regions. Although extant studies have largely described thalamic regional functions in anatomical terms, evidence accumulates to suggest a more complex picture of subareal activities and connectivities of the thalamus. In this study, we aimed to parcellate the thalamus and examine whole-brain connectivity of its functional clusters. With resting state functional magnetic resonance imaging data from 96 adults, we used independent component analysis (ICA) to parcellate the thalamus into 10 components. On the basis of the independence assumption, ICA helps to identify how subclusters overlap spatially. Whole-brain functional connectivity of each subdivision was computed for the independent component's time course (ICtc), which is a unique time series representing an IC. For comparison, we computed seed-region-based functional connectivity using the averaged time course across all voxels within a thalamic subdivision. The results showed that, compared with seed-region analysis, ICtc analysis revealed patterns of connectivity that were more clearly distinguished between thalamic clusters. ICtc analysis demonstrated thalamic connectivity to the primary motor cortex, which has eluded seed-region analysis as well as previous studies based on averaged time series, and clarified thalamic connectivity to the hippocampus, caudate nucleus, and precuneus. The new findings elucidate the functional organization of the thalamus and suggest that ICA clustering in combination with ICtc, rather than seed-region analysis, better distinguishes whole-brain connectivities among the functional clusters of a brain region.

  6. Analysis for a two-dissimilar-component cold standby repairable system with repair priority

    International Nuclear Information System (INIS)

    Leung, Kit Nam Francis; Zhang Yuanlin; Lai, Kin Keung

    2011-01-01

    In this paper, a cold standby repairable system consisting of two dissimilar components and one repairman is studied. Assume that working time distributions and repair time distributions of the two components are both exponential, and Component 1 has repair priority when both components are broken down. After repair, Component 1 follows a geometric process repair while Component 2 obeys a perfect repair. Under these assumptions, using the perfect repair model, the geometric process repair model and the supplementary variable technique, we not only study some important reliability indices, but also consider a replacement policy T, under which the system is replaced when the working age of Component 1 reaches T. Our problem is to determine an optimal policy T* such that the long-run average loss per unit time (i.e. average loss rate) of the system is minimized. The explicit expression for the average loss rate of the system is derived, and the corresponding optimal replacement policy T* can be found numerically. Finally, a numerical example for replacement policy T is given to illustrate some theoretical results and the model's applicability. - Highlights: → A two-dissimilar-component cold standby system with repair priority is formulated. → The successive up/repair times of Component 1 form a decreasing/increasing geometric process. → Not only some reliability indices but also a replacement policy are studied.

  7. Characterization and Discrimination of Gram-Positive Bacteria Using Raman Spectroscopy with the Aid of Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Alia Colniță

    2017-09-01

    Full Text Available Raman scattering and its particular effect, surface-enhanced Raman scattering (SERS), are whole-organism fingerprinting spectroscopic techniques that are gaining more and more popularity in bacterial detection. In this work, two relevant Gram-positive bacterial species, Lactobacillus casei (L. casei) and Listeria monocytogenes (L. monocytogenes), were characterized based on their Raman and SERS spectral fingerprints. The SERS spectra were used to identify the biochemical structures of the bacterial cell wall. Two synthesis methods for the SERS-active nanomaterials were used and the recorded spectra were analyzed. L. casei and L. monocytogenes were successfully discriminated by applying Principal Component Analysis (PCA) to their specific spectral data.

  8. A Cure for Variance Inflation in High Dimensional Kernel Principal Component Analysis

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    Small sample high-dimensional principal component analysis (PCA) suffers from variance inflation and lack of generalizability. It has earlier been pointed out that a simple leave-one-out variance renormalization scheme can cure the problem. In this paper we generalize the cure in two directions: first, we propose a computationally less intensive approximate leave-one-out estimator; secondly, we show that variance inflation is also present in kernel principal component analysis (kPCA) and we provide a non-parametric renormalization scheme which can quite efficiently restore generalizability in kPCA. As for PCA, our analysis also suggests a simplified approximate expression. © 2011 Trine J. Abrahamsen and Lars K. Hansen.

  9. Sensitivity Analysis on Elbow Piping Components in Seismically Isolated NPP under Seismic Loading

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Hee Kun; Hahm, Dae Gi; Kim, Min Kyu [KAERI, Daejeon (Korea, Republic of); Jeon, Bub Gyu; Kim, Nam Sik [Pusan National University, Busan (Korea, Republic of)

    2016-05-15

    In this study, the FE model is verified using specimen test results, and simulations with parameter variations are conducted. Effective parameters are randomly sampled and used as input values for the simulations to be applied to the fragility analysis. Pipelines are representative of such interface components because they can undergo larger displacements when they are supported on both isolated and non-isolated structures simultaneously. Elbows in particular are critical components of pipes under severe loading conditions such as earthquake action, because strain accumulates in them during the repeated bending of the pipe. Therefore, the seismic performance of pipe elbow components should be examined thoroughly based on fragility analysis. Fragility assessment of interface piping should take different sources of uncertainty into account. However, selecting the important sources and repeating tests with many random input values is very time consuming and expensive, so numerical analysis is commonly used. In the present study, the finite element (FE) model of the elbow component is validated using the dynamic test results of elbow components. Using the verified model, sensitivity analysis is implemented as a preliminary step for the seismic fragility analysis of the piping system. Several important input parameters are selected, and how their uncertainty is apportioned to the uncertainty of the elbow response is studied. Piping elbows are critical components under cyclic loading conditions as they are subjected to large displacements. In a seismically isolated NPP, the seismic capacity of the piping system should be evaluated with caution. Seismic fragility assessment preliminarily needs a parameter sensitivity analysis of the output of interest with different input parameter values.

  10. Fault Diagnosis Method Based on Information Entropy and Relative Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Xiaoming Xu

    2017-01-01

    Full Text Available In traditional principal component analysis (PCA), because the influence of the dimensions of the different variables in the system is neglected, the selected principal components (PCs) often fail to be representative. While relative transformation PCA is able to solve this problem, it is not easy to calculate the weight for each characteristic variable. In order to solve this, this paper proposes a fault diagnosis method based on information entropy and relative principal component analysis. Firstly, the algorithm calculates the information entropy for each characteristic variable in the original dataset based on the information gain algorithm. Secondly, it standardizes every variable's dimension in the dataset. Then, according to the information entropy, it allocates the weight for each standardized characteristic variable. Finally, it utilizes the established relative-principal-components model for fault diagnosis. Furthermore, simulation experiments based on the Tennessee Eastman process and Wine datasets demonstrate the feasibility and effectiveness of the new method.
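
    A minimal sketch of the proposed flow: standardize, weight each variable by an information measure, then run PCA on the weighted data. The histogram entropy below is a simple stand-in for the paper's information-gain weights; data and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.random((200, 6))                         # hypothetical process dataset

Z = (X - X.mean(axis=0)) / X.std(axis=0)         # remove dimension effects

def entropy(col, bins=16):
    """Shannon entropy of one variable from a histogram (stand-in weight)."""
    p, _ = np.histogram(col, bins=bins)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

w = np.array([entropy(Z[:, j]) for j in range(Z.shape[1])])
w /= w.sum()                                     # normalized per-variable weights

# Relative PCA: eigendecomposition of the covariance of the weighted variables.
Zw = Z * w
evals, evecs = np.linalg.eigh(np.cov(Zw.T))
scores = Zw @ evecs[:, ::-1]                     # components, largest variance first
```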

  11. VENUS-III two-dimensional multi-component thermal hydraulic techniques

    International Nuclear Information System (INIS)

    Weber, D.P.

    1979-01-01

    Recent analyses of the initiating phase of LMFBR core disruptive accidents suggest that the energy deposition rate may not be nearly as high as originally thought, and that the development of material motion and interaction may take place on a time scale considerably longer than the classic disassembly time scale of milliseconds. This introduces a considerably different twist to the problem, and it becomes apparent that processes heretofore ignored, such as differential motion and heat exchange, may become important. In addition, time scales may become long enough that substantial core material motion takes place, and since rearrangement into more critical configurations cannot be absolutely precluded, a capability for extended motion analysis, not easily performed with Lagrangian techniques in multi-dimensions, becomes desirable. Such considerations provided the motivation for developing a hydrodynamic algorithm to resolve these questions, and an Eulerian rather than Lagrangian frame of reference was chosen, primarily to handle extended motion and interpenetration. The results of the study are described.

  12. A case study of the sensitivity of forecast skill to data and data analysis techniques

    Science.gov (United States)

    Baker, W. E.; Atlas, R.; Halem, M.; Susskind, J.

    1983-01-01

    A series of experiments have been conducted to examine the sensitivity of forecast skill to various data and data analysis techniques for the 0000 GMT case of January 21, 1979. These include the individual components of the FGGE observing system, the temperatures obtained with different satellite retrieval methods, and the method of vertical interpolation between the mandatory pressure analysis levels and the model sigma levels. It is found that NESS TIROS-N infrared retrievals seriously degrade a rawinsonde-only analysis over land, resulting in a poorer forecast over North America. Less degradation in the 72-hr forecast skill at sea level and some improvement at 500 mb is noted, relative to the control with TIROS-N retrievals produced with a physical inversion method which utilizes a 6-hr forecast first guess. NESS VTPR oceanic retrievals lead to an improved forecast over North America when added to the control.

  13. Multivariate Analysis Techniques for Optimal Vision System Design

    DEFF Research Database (Denmark)

    Sharifzadeh, Sara

    The present thesis considers optimization of the spectral vision systems used for quality inspection of food items. The relationship between food quality, vision based techniques and spectral signatures is described. The vision instruments for food analysis as well as the datasets of the food items used in this thesis are described. The methodological strategies are outlined, including sparse regression and pre-processing based on feature selection and extraction methods, supervised versus unsupervised analysis, and linear versus non-linear approaches. These include a supervised feature selection algorithm (SSPCA) and DCT based characterization of the spectral diffused reflectance images for wavelength selection and discrimination. These methods, together with some other state-of-the-art statistical and mathematical analysis techniques, are applied on datasets of different food items: meat, dairy products, fruits...

  14. Chromatographic Techniques for Rare Earth Elements Analysis

    Science.gov (United States)

    Chen, Beibei; He, Man; Zhang, Huashan; Jiang, Zucheng; Hu, Bin

    2017-04-01

    The present capability of rare earth element (REE) analysis has been achieved by the development of two instrumental techniques. The efficiency of spectroscopic methods has been extraordinarily improved for the detection and determination of REE traces in various materials. On the other hand, the determination of REEs very often depends on their preconcentration and separation, and chromatographic techniques are very powerful tools for the separation of REEs. By coupling with sensitive detectors, many ambitious analytical tasks can be fulfilled. Liquid chromatography is the most widely used technique. Different combinations of stationary phases and mobile phases can be used in ion exchange chromatography, ion chromatography, ion-pair reverse-phase chromatography and some other techniques. The application of gas chromatography is limited because only volatile compounds of REEs can be separated. Thin-layer and paper chromatography are techniques that cannot be directly coupled with suitable detectors, which limits their applications. For special demands, separations can be performed by capillary electrophoresis, which has very high separation efficiency.

  15. A novel model-free data analysis technique based on clustering in a mutual information space: application to resting-state fMRI

    Directory of Open Access Journals (Sweden)

    Simon Benjaminsson

    2010-08-01

    Full Text Available Non-parametric data-driven analysis techniques can be used to study datasets with few assumptions about the data and underlying experiment. Variations of Independent Component Analysis (ICA) have been the methods mostly used on fMRI data, e.g. in finding resting-state networks thought to reflect the connectivity of the brain. Here we present a novel data analysis technique and demonstrate it on resting-state fMRI data. It is a generic method with few underlying assumptions about the data. The results are built from the statistical relations between all input voxels, resulting in a whole-brain analysis on a voxel level. It has good scalability properties and the parallel implementation is capable of handling large datasets and databases. From the mutual information between the activities of the voxels over time, a distance matrix is created for all voxels in the input space. Multidimensional scaling is used to put the voxels in a lower-dimensional space reflecting the dependency relations based on the distance matrix. By performing clustering in this space we can find the strong statistical regularities in the data, which for the resting-state data turn out to be the resting-state networks. The decomposition is performed in the last step of the algorithm and is computationally simple. This opens up for rapid analysis and visualization of the data on different spatial levels, as well as automatically finding a suitable number of decomposition components.
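
    The pipeline — pairwise mutual information between voxel time courses, conversion to distances, multidimensional scaling, then clustering — can be sketched on toy data. The MI-to-distance mapping below is one illustrative choice, not the paper's exact transform:

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
base = rng.standard_normal((3, 300))                  # three latent "networks"
voxels = np.repeat(base, 20, axis=0) + 0.5 * rng.standard_normal((60, 300))

def mi(x, y, bins=8):
    """Histogram mutual information between two time series."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

n = len(voxels)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = 1.0 / (1.0 + mi(voxels[i], voxels[j]))  # high MI = near

emb = MDS(n_components=2, dissimilarity="precomputed",
          random_state=0).fit_transform(D)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(emb)
```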

  16. Structural mechanics research and development for main components of chinese 300 MWe PWR NPPs: from design to life management

    International Nuclear Information System (INIS)

    Yao Weida; Dou Yikang; Xie Yongcheng; He Yinbiao; Zhang Ming; Liang Xingyun

    2005-01-01

    Qinshan Nuclear Power Plant (Unit I) is a 300 MWe prototype PWR developed independently through China's own efforts, from design, manufacture, construction, installation and commissioning to operation, inspection, maintenance, ageing management and lifetime assessment. The Shanghai Nuclear Engineering Research and Design Institute (SNERDI) has been deeply involved in the R and D to tackle the problems of this type of reactor from the very beginning in the early 1970s. Structural mechanics is one of the important disciplines for ensuring the safety and reliability of NPP components. This paper summarizes the role of structural mechanics in component safety and reliability assessment during the different stages of design, commissioning and operation, as well as in lifetime assessment, for this type of PWR NPP, covering Qinshan-I and Chashma-I, a sister plant in Pakistan designed by SNERDI. The main contents of the paper cover design by analysis for key components of the NSSS; mechanical problems relating to safety analysis; special problems relating to pressure retaining components, such as fracture mechanics, sealing analysis and its test verifications, etc.; experimental research on flow-induced vibration; seismic qualification of components; component failure diagnosis and root cause analysis; vibration qualification and diagnosis techniques; component online monitoring techniques; development of defect assessment; and methodology of ageing management and lifetime assessment for key components of NPPs. (authors)

  17. MULTI-COMPONENT ANALYSIS OF POSITION-VELOCITY CUBES OF THE HH 34 JET

    International Nuclear Information System (INIS)

    Rodríguez-González, A.; Esquivel, A.; Raga, A. C.; Cantó, J.; Curiel, S.; Riera, A.; Beck, T. L.

    2012-01-01

    We present an analysis of Hα spectra of the HH 34 jet with two-dimensional spectral resolution. We carry out multi-Gaussian fits to the spatially resolved line profiles and derive maps of the intensity, radial velocity, and velocity width of each of the components. We find that close to the outflow source we have three components: a high (negative) radial velocity component with a well-collimated, jet-like morphology; an intermediate velocity component with a broader morphology; and a positive radial velocity component with a non-collimated morphology and large linewidth. We suggest that this positive velocity component is associated with jet emission scattered in stationary dust present in the circumstellar environment. Farther away from the outflow source, we find only two components (a high, negative radial velocity component, which has a narrower spatial distribution than an intermediate velocity component). The fitting procedure was carried out with the new AGA-V1 code, which is available online and is described in detail in this paper.

  18. Separating complex compound patient motion tracking data using independent component analysis

    Science.gov (United States)

    Lindsay, C.; Johnson, K.; King, M. A.

    2014-03-01

    In SPECT imaging, motion from respiration and body motion can reduce image quality by introducing motion-related artifacts. A minimally-invasive way to track patient motion is to attach external markers to the patient's body and record their locations throughout the imaging study. If a patient exhibits multiple movements simultaneously, such as respiration and body movement, each marker's location data will contain a mixture of these motions. Decomposing this complex compound motion into separate simplified motions has the benefit that a more robust motion correction can be applied to each specific type of motion. Most motion tracking and correction techniques target a single type of motion and either ignore compound motion or treat it as noise. A few methods that account for compound motion exist, but they fail to disambiguate superposition in the compound motion (i.e. inspiration in addition to body movement in the positive anterior/posterior direction). We propose a new method for decomposing complex compound patient motion using an unsupervised learning technique called Independent Component Analysis (ICA). Our method can automatically detect and separate different motions while preserving nuanced features of the motion, without the drawbacks of previous methods. Our main contributions are the development of a method for addressing multiple compound motions, the novel use of ICA in detecting and separating mixed independent motions, and the generation of motion transforms with 12 DOFs to account for twisting and shearing. We show that our method works with clinical datasets and can be employed to improve motion correction in single photon emission computed tomography (SPECT) images.
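
    The core idea — marker traces as linear mixtures of statistically independent motion types that ICA can unmix — in a toy form, with FastICA standing in for the study's pipeline (synthetic respiration plus an abrupt body shift; all values illustrative):

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 60, 3000)                 # 60 s of marker tracking at 50 Hz
resp = np.sin(2 * np.pi * 0.25 * t)          # ~15 breaths per minute
shift = (t > 30).astype(float)               # body movement halfway through
markers = np.c_[0.8 * resp + 0.3 * shift,    # three markers couple to the two
                0.5 * resp + 1.0 * shift,    # motions with different strengths
                1.0 * resp + 0.1 * shift]

comps = FastICA(n_components=2, random_state=0).fit_transform(markers)
# comps columns approximate resp and shift, up to order, sign, and scale;
# each can then receive its own motion-correction treatment.
```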

  19. Analysis of results obtained with different cutting techniques and associated filtration systems for the dismantling of radioactive metallic components

    International Nuclear Information System (INIS)

    Bach, F.W.; Steiner, H.; Schreck, G.

    1993-01-01

    The present joint study performed by the Commissariat a l'energie atomique and the Universitaet Hannover and coordinated by the Commission of the European Communities was intended to analyse the results generated in a number of research contracts concerned with cutting tests in air and underwater, with consideration of the prevailing working conditions. The analysis has led to a large database, giving broadly-assessed information for the dismantling of radioactive components. The range of study was enlarged, where possible, to include recently obtained results outside the present research programme, consideration also being given to supplementary cutting tools and filtration systems not covered by the present programme. Data was concentrated in structured information packages on practical experience available for a series of cutting tools and filters. These were introduced into a computerized user-friendly databank, to be considered as a first-stage development, which should be continuously updated and possibly oriented in the future to an expert system

  20. Error Ellipsoid Analysis for the Diameter Measurement of Cylindroid Components Using a Laser Radar Measurement System

    Directory of Open Access Journals (Sweden)

    Zhengchun Du

    2016-05-01

    Full Text Available The use of three-dimensional (3D) data in the industrial measurement field is becoming increasingly popular because of the rapid development of laser scanning techniques based on the time-of-flight principle. However, the accuracy and uncertainty of these types of measurement methods are seldom investigated. In this study, a mathematical uncertainty evaluation model for the diameter measurement of standard cylindroid components has been proposed and applied to a 3D laser radar measurement system (LRMS). First, a single-point error ellipsoid analysis for the LRMS was established. An error ellipsoid model and algorithm for the diameter measurement of cylindroid components was then proposed based on the single-point error ellipsoid. Finally, four experiments were conducted using the LRMS to measure the diameter of a standard cylinder in the laboratory. The experimental results of the uncertainty evaluation consistently matched the predictions well. The proposed uncertainty evaluation model for cylindrical diameters can provide a reliable method for actual measurements and support further accuracy improvement of the LRMS.
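
    A single-point error ellipsoid of the kind the study builds on follows from first-order propagation of the instrument's range/azimuth/elevation variances through the spherical-to-Cartesian transform. A sketch with illustrative numbers (not the LRMS specification):

```python
import numpy as np

r, az, el = 5.0, np.deg2rad(30.0), np.deg2rad(10.0)   # measured point
var = np.diag([0.5e-3, 1e-5, 1e-5]) ** 2              # (r [m], az [rad], el [rad])

# x = r cos(el) cos(az), y = r cos(el) sin(az), z = r sin(el)
J = np.array([
    [np.cos(el) * np.cos(az), -r * np.cos(el) * np.sin(az), -r * np.sin(el) * np.cos(az)],
    [np.cos(el) * np.sin(az),  r * np.cos(el) * np.cos(az), -r * np.sin(el) * np.sin(az)],
    [np.sin(el),               0.0,                          r * np.cos(el)],
])
cov_xyz = J @ var @ J.T                 # first-order Cartesian covariance

# The error ellipsoid's semi-axis lengths scale with the square roots of the
# eigenvalues; the eigenvectors give the axis directions.
axes = np.sqrt(np.linalg.eigvalsh(cov_xyz))
print(axes)
```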

  1. Applying independent component analysis to clinical fMRI at 7 T

    Directory of Open Access Journals (Sweden)

    Simon Daniel Robinson

    2013-09-01

    Full Text Available Increased BOLD sensitivity at 7 T offers the possibility to increase the reliability of fMRI, but ultra-high field is also associated with an increase in artifacts related to head motion, Nyquist ghosting and parallel imaging reconstruction errors. In this study, the ability of Independent Component Analysis (ICA) to separate activation from these artifacts was assessed in a 7 T study of neurological patients performing chin and hand motor tasks. ICA was able to isolate primary motor activation with negligible contamination by motion effects. The results of General Linear Model (GLM) analysis of these data were, in contrast, heavily contaminated by motion. Secondary motor areas, basal ganglia and thalamus involvement were apparent in ICA results, but there was low capability to isolate activation in the same brain regions in the GLM analysis, indicating that ICA was more sensitive as well as more specific. A method was developed to simplify the assessment of the large number of independent components. Task-related activation components could be automatically identified via intuitive and effective features. These findings demonstrate that ICA is a practical and sensitive analysis approach in high field fMRI studies, particularly where motion is evoked. Promising applications of ICA in clinical fMRI include presurgical planning and the study of pathologies affecting subcortical brain areas.

  2. Study of relationship of selenium concentration in blood components and tumor tissues of breast and GI tract cancers using neutron activation analysis technique

    International Nuclear Information System (INIS)

    Othman, I.; Bakir, M. A.; Yassine, T.; Sarhel, A.

    2001-12-01

    The purpose of this study was to investigate the relationship between selenium (Se) concentration in blood components and tumour tissues of breast and GI tract cancers using neutron activation analysis. Red blood cell (RBC) and serum Se concentrations were determined in 50 healthy volunteers aged 25-84 years, 70 breast cancer patients aged 25-70 years and 34 GI tract cancer patients aged 31-85 years. Se levels were also determined in malignant and adjacent normal tissues from the breast cancer and GI tract cancer patients. The results showed that Se concentrations in serum and RBC were significantly lower among breast and GI tract cancer patients than among healthy volunteers. The results also showed that Se concentrations were significantly higher in the cancer tissues than in the adjacent normal tissues. These data demonstrate a relationship between selenium status in blood components and both cancers. Selenium is enriched in cancer tissue, possibly in an effort of the body to inhibit the growth of tumours. (author)
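
    The group comparisons described above reduce to standard two-sample and paired tests; a sketch with made-up Se concentrations (not the study's data) follows.

```python
# Illustrative only: Welch's t-test for patients vs. controls and a paired
# t-test for tumour vs. adjacent normal tissue, on made-up Se values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
serum_controls = rng.normal(95, 12, 50)    # ng/mL, invented
serum_patients = rng.normal(78, 14, 70)
tumour = rng.normal(0.55, 0.10, 34)        # ug/g, invented
adjacent = tumour - rng.normal(0.12, 0.05, 34)

t1, p1 = stats.ttest_ind(serum_patients, serum_controls, equal_var=False)
t2, p2 = stats.ttest_rel(tumour, adjacent)
print(f"serum: t = {t1:.2f}, p = {p1:.1e}; tissue: t = {t2:.2f}, p = {p2:.1e}")
```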

  3. Identification of components of fibroadenoma in cytology preparations using texture analysis: a morphometric study.

    Science.gov (United States)

    Singh, S; Gupta, R

    2012-06-01

    To evaluate the utility of image analysis using textural parameters obtained from a co-occurrence matrix in differentiating the three components of fibroadenoma of the breast in fine needle aspirate smears. Sixty cases of histologically proven fibroadenoma were included in this study. Of these, 40 cases were used as a training set and 20 cases were taken as a test set for the discriminant analysis. Digital images were acquired from cytological preparations of all the cases and three components of fibroadenoma (namely, monolayered cell clusters, stromal fragments and background with bare nuclei) were selected for image analysis. A co-occurrence matrix was generated and a texture parameter vector (sum mean, energy, entropy, contrast, cluster tendency and homogeneity) was calculated for each pixel. The percentage of pixels correctly classified to a component of fibroadenoma on discriminant analysis was noted. The textural parameters, when considered in isolation, showed considerable overlap in their values across the three cytological components of fibroadenoma. However, stepwise discriminant analysis revealed that all six textural parameters contributed significantly to the discriminant functions. Discriminant analysis using all six parameters showed that the numbers of pixels correctly classified in the training and test sets were 96.7% and 93.0%, respectively. Textural analysis using a co-occurrence matrix appears to be useful in differentiating the three cytological components of fibroadenoma. These results could further be utilized in developing algorithms for image segmentation and automated diagnosis, but need to be confirmed in further studies.
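
    A minimal sketch of co-occurrence texture features in the spirit of the study, using scikit-image on a stand-in tile; the parameter names below follow skimage conventions rather than the paper's exact definitions.

```python
# A minimal sketch, not the study's pipeline: grey-level co-occurrence
# texture features with scikit-image. skimage provides contrast, homogeneity
# and energy directly; entropy is derived from the normalised GLCM, and sum
# mean / cluster tendency would need analogous hand-written sums.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(3)
patch = rng.integers(0, 64, (64, 64), dtype=np.uint8)  # stand-in image tile

# Symmetric, normalised co-occurrence matrix at distance 1, two angles.
glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=64, symmetric=True, normed=True)

features = {p: float(graycoprops(glcm, p).mean())
            for p in ("contrast", "homogeneity", "energy")}
entropy = -np.sum(glcm * np.log2(glcm + 1e-12), axis=(0, 1))  # per (d, angle)
features["entropy"] = float(entropy.mean())
print(features)
```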

  4. Neutron activation analysis: an emerging technique for conservation/preservation

    International Nuclear Information System (INIS)

    Sayre, E.V.

    1976-01-01

    The diverse applications of neutron activation in the analysis, preservation, and documentation of art works and artifacts are described, with illustrations for each application. The uses of this technique to solve problems of attribution and authentication, to reveal the inner structure and composition of art objects, and, in some instances, to recreate details of the objects are described. A brief discussion of the theory and techniques of neutron activation analysis is also included.
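
    As a worked illustration of the underlying theory, the standard activation relation A = N·σ·φ·(1 − e^(−λ·t_irr)) can be evaluated for round, illustrative nuclide values (not evaluated nuclear data):

```python
# Back-of-envelope activation: A = N * sigma * phi * (1 - exp(-lambda * t)).
# Round, illustrative values; not evaluated nuclear data.
import numpy as np

N = 1.0e18            # target atoms in the sample
sigma = 5.0e-24       # capture cross-section (cm^2), i.e. 5 barns
phi = 1.0e13          # thermal neutron flux (n cm^-2 s^-1)
half_life = 3600.0    # product half-life (s)
lam = np.log(2) / half_life

for t_irr in (60.0, 3600.0, 6 * 3600.0):
    activity = N * sigma * phi * (1 - np.exp(-lam * t_irr))
    print(f"t_irr = {t_irr / 3600:4.1f} h -> A = {activity:.3e} Bq")
```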

  5. Diffraction analysis of customized illumination technique

    Science.gov (United States)

    Lim, Chang-Moon; Kim, Seo-Min; Eom, Tae-Seung; Moon, Seung Chan; Shin, Ki S.

    2004-05-01

    Various enhancement techniques such as alternating PSM, chrome-less phase lithography, double exposure, etc. have been considered as driving forces to push the production k1 factor below 0.35. Among them, layer-specific optimization of the illumination mode, the so-called customized illumination technique, has recently received considerable attention from lithographers. A new approach to illumination customization based on diffraction spectrum analysis is suggested in this paper. The illumination pupil is divided into various diffraction domains by comparing the similarity of the confined diffraction spectrum. The singular imaging property of each individual diffraction domain makes it easier to build and understand the customized illumination shape. By comparing the goodness of the image in each domain, it was possible to achieve the customized shape of illumination. With the help of this technique, it was found that layout changes would not change the shape of the customized illumination mode.
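
    A toy version of the diffraction bookkeeping behind such an analysis: which grating orders of a line/space pattern fall inside the projection pupil for a given wavelength, pitch and NA, assuming scalar diffraction and on-axis illumination (all values illustrative).

```python
# Scalar, on-axis toy model: order m is captured when |m * lambda / pitch| <= NA.
wavelength = 193e-9   # ArF exposure wavelength (m)
na = 0.75             # projection lens numerical aperture (illustrative)

for pitch in (130e-9, 180e-9, 260e-9):
    orders = [m for m in range(-3, 4)
              if abs(m * wavelength / pitch) <= na]
    print(f"pitch {pitch * 1e9:.0f} nm: captured orders {orders}")
```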

  6. Component analysis and initial validity of the exercise fear avoidance scale.

    Science.gov (United States)

    Wingo, Brooks C; Baskin, Monica; Ard, Jamy D; Evans, Retta; Roy, Jane; Vogtle, Laura; Grimley, Diane; Snyder, Scott

    2013-01-01

    To develop the Exercise Fear Avoidance Scale (EFAS) to measure fear of exercise-induced discomfort. We conducted principal component analysis to determine component structure and Cronbach's alpha to assess internal consistency of the EFAS. Relationships between EFAS scores, BMI, physical activity, and pain were analyzed using multivariate regression. The best fit was a 3-component structure: weight-specific fears, cardiorespiratory fears, and musculoskeletal fears. Cronbach's alpha for the EFAS was α=.86. EFAS scores significantly predicted BMI, physical activity, and PDI scores. Psychometric properties of this scale suggest it may be useful for tailoring exercise prescriptions to address fear of exercise-related discomfort.
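
    A schematic sketch of the two analyses reported, principal component extraction and Cronbach's alpha, run on a simulated 9-item, 3-factor response matrix rather than EFAS data:

```python
# A schematic sketch (simulated items, not EFAS data): PCA eigenvalues on the
# item correlation matrix plus Cronbach's alpha for internal consistency.
import numpy as np

rng = np.random.default_rng(4)
latent = rng.normal(size=(200, 3))               # 3 underlying fear factors
loadings = np.kron(np.eye(3), np.ones((1, 3)))   # 3 items per factor
items = latent @ loadings + rng.normal(0, 0.5, (200, 9))

# Eigenvalues of the correlation matrix; components > 1 would be retained.
eigvals = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))[::-1]
print("leading eigenvalues:", np.round(eigvals[:4], 2))

def cronbach_alpha(x):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum()
                          / x.sum(axis=1).var(ddof=1))

print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
```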

  7. Machine Learning Techniques for Arterial Pressure Waveform Analysis

    Directory of Open Access Journals (Sweden)

    João Cardoso

    2013-05-01

    The Arterial Pressure Waveform (APW) can provide essential information about arterial wall integrity and arterial stiffness. Most APW analysis frameworks process each hemodynamic parameter individually and do not evaluate inter-dependencies in the overall pulse morphology. The key contribution of this work is the use of machine learning algorithms to deal with vectorized features extracted from the APW. With this purpose, we follow a five-step evaluation methodology: (1) a custom-designed, non-invasive, electromechanical device was used in the data collection from 50 subjects; (2) the acquired position and amplitude of the onset, Systolic Peak (SP), Point of Inflection (Pi) and Dicrotic Wave (DW) were used for the computation of several morphological attributes; (3) pre-processing work on the datasets was performed in order to reduce the number of input features and increase the model accuracy by selecting the most relevant ones; (4) classification of the dataset was carried out using four different machine learning algorithms: Random Forest, BayesNet (probabilistic), J48 (decision tree) and RIPPER (rule-based induction); and (5) we evaluated the trained models, using a majority-voting system, against the respective calculated Augmentation Index (AIx). The classification algorithms proved to be efficient; in particular, Random Forest showed good accuracy (96.95%) and a high area under the curve (AUC) of the Receiver Operating Characteristic (ROC) curve (0.961). Finally, during validation tests, a correlation between high-risk labels, retrieved from the multi-parametric approach, and positive AIx values was verified. This approach allows the design of new hemodynamic morphology vectors and techniques for multiple APW analysis, thus improving the understanding of the arterial pulse, especially when compared to traditional single-parameter analysis, where failure in the measurement of one parameter, such as Pi, can jeopardize the whole evaluation.
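
    A minimal sketch of step (4): training and cross-validating a Random Forest on vectorized features. The feature matrix below is random stand-in data, not APW morphology vectors.

```python
# A minimal sketch of step (4) with stand-in features, not APW vectors:
# a Random Forest evaluated by cross-validated accuracy and ROC AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(50, 8))    # e.g. onset/SP/Pi/DW positions and amplitudes
y = (X[:, 2] + 0.5 * X[:, 5] > 0).astype(int)   # synthetic risk label

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"accuracy = {acc:.3f}, AUC = {auc:.3f}")
```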

  8. Principal component analysis of the CT density histogram to generate parametric response maps of COPD

    Science.gov (United States)

    Zha, N.; Capaldi, D. P. I.; Pike, D.; McCormack, D. G.; Cunningham, I. A.; Parraga, G.

    2015-03-01

    Pulmonary x-ray computed tomography (CT) may be used to characterize emphysema and airways disease in patients with chronic obstructive pulmonary disease (COPD). One analysis approach, parametric response mapping (PRM), utilizes registered inspiratory and expiratory CT image volumes and CT-density-histogram thresholds, but there is no consensus regarding the threshold values used, or their clinical meaning. Principal component analysis (PCA) of the CT density histogram can be exploited to quantify emphysema using data-driven CT-density-histogram thresholds. Thus, the objective of this proof-of-concept demonstration was to develop a PRM approach using PCA-derived thresholds in COPD patients and ex-smokers without airflow limitation. Methods: Fifteen COPD ex-smokers and 5 normal ex-smokers were evaluated. Thoracic CT images were acquired at full inspiration and full expiration, and these images were non-rigidly co-registered. PCA was performed on the CT density histograms, and the components with eigenvalues greater than one were summed. Since the values of the principal component curve correlate directly with the variability in the sample, the maximum and minimum points on the curve were used as threshold values for the PCA-adjusted PRM technique. Results: A significant correlation was determined between conventional and PCA-adjusted PRM with 3He MRI apparent diffusion coefficient (p<0.001), with CT RA950 (p<0.0001), as well as with 3He MRI ventilation defect percent, a measurement of both small airways disease (p=0.049 and p=0.06, respectively) and emphysema (p=0.02). Conclusions: PRM generated using PCA thresholds of the CT density histogram showed significant correlations with CT and 3He MRI measurements of emphysema, but not airways disease.
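
    A schematic sketch of the threshold-selection idea: PCA across subjects' CT density histograms, a summed component curve, and thresholds read off its extrema. The histograms below are simulated, and the relative eigenvalue cutoff stands in for the eigenvalue-greater-than-one rule.

```python
# A schematic sketch on simulated histograms (not patient data): PCA across
# CT density histograms, a summed component curve, thresholds at its extrema.
import numpy as np

rng = np.random.default_rng(6)
hu = np.linspace(-1000, -500, 101)               # HU bin centres
subjects = np.array([np.exp(-0.5 * ((hu - mu) / 60) ** 2)
                     for mu in rng.normal(-850, 40, 20)])
subjects /= subjects.sum(axis=1, keepdims=True)  # normalised histograms

centred = subjects - subjects.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(centred, rowvar=False))
keep = eigvals > 1e-6 * eigvals.max()   # stand-in for the eigenvalue > 1 rule
curve = eigvecs[:, keep].sum(axis=1)

print(f"thresholds at {hu[curve.argmin()]:.0f} HU and {hu[curve.argmax()]:.0f} HU")
```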

  9. Fusion-component lifetime analysis

    International Nuclear Information System (INIS)

    Mattas, R.F.

    1982-09-01

    A one-dimensional computer code has been developed to examine the lifetime of first-wall and impurity-control components. The code incorporates the operating and design parameters, the material characteristics, and the appropriate failure criteria for the individual components. The major emphasis of the modeling effort has been to calculate the temperature-stress-strain-radiation effects history of a component so that the synergistic effects between sputtering erosion, swelling, creep, fatigue, and crack growth can be examined. The general forms of the property equations are the same for all materials in order to provide the greatest flexibility for materials selection in the code. The individual coefficients within the equations are different for each material. The code is capable of determining the behavior of a plate, composed of either a single- or dual-material structure, that is either totally constrained or constrained from bending but not from expansion. The code has been utilized to analyze the first walls for FED/INTOR and DEMO and to analyze the limiter for FED/INTOR.
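
    A toy sketch of one ingredient such a lifetime code needs, the closed-form Paris-law fatigue life for a crack growing from an initial to a critical depth; all constants are illustrative, not the code's property equations.

```python
# Closed-form Paris-law life for da/dN = C * dK^m, dK = Y * d_sigma * sqrt(pi*a),
# valid for m != 2. All constants are illustrative round numbers.
import numpy as np

C, m = 1.0e-12, 3.0        # Paris constants (SI units, MPa*sqrt(m))
d_sigma = 150.0            # cyclic stress range (MPa)
Y = 1.12                   # geometry factor, shallow surface crack
a0, a_crit = 0.5e-3, 5e-3  # initial and critical crack depths (m)

e = 1 - m / 2              # exponent from integrating a**(-m/2)
N = (a_crit**e - a0**e) / (C * (Y * d_sigma * np.sqrt(np.pi))**m * e)
print(f"cycles to critical depth: {N:,.0f}")
```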

  10. Decision Analysis Technique

    Directory of Open Access Journals (Sweden)

    Hammad Dabo Baba

    2014-01-01

    One of the most significant steps in building maintenance decisions is the physical inspection of the facility to be maintained. The physical inspection involves a cursory assessment of the structure and ratings of the identified defects based on expert evaluation. The objective of this paper is to present a novel approach to prioritizing the criticality of physical defects in a residential building system using a multi-criteria decision analysis approach. A residential building constructed in 1985 was considered in this study. Four criteria were considered in the inspection: Physical Condition of the building system (PC), Effect on Asset (EA), Effect on Occupants (EO) and Maintenance Cost (MC). The building was divided into nine systems, regarded as alternatives. Expert Choice software was used in comparing the importance of the criteria against the main objective, whereas a structured proforma was used in quantifying the defects observed in all building systems against each criterion. The defect severity score of each building system was identified and then multiplied by the weight of the criteria, and the final hierarchy was derived. The final ranking indicates that the electrical system was the most critical system, with a risk value of 0.134, while the ceiling system scored the lowest risk value of 0.066. The technique is often used in prioritizing mechanical equipment for maintenance planning; the results of this study indicate that it could also be used in prioritizing building systems for maintenance planning.
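
    A condensed sketch of the AHP-style calculation described above, in plain NumPy rather than Expert Choice; the pairwise comparison matrix and severity scores are illustrative, not the study's survey data.

```python
# A condensed AHP-style sketch: criteria weights from the principal
# eigenvector of a pairwise comparison matrix, then weighted severity scores.
import numpy as np

# Pairwise importance of the four criteria: PC, EA, EO, MC (illustrative).
pairwise = np.array([[1,   3,   3,   5],
                     [1/3, 1,   1,   3],
                     [1/3, 1,   1,   3],
                     [1/5, 1/3, 1/3, 1]])

eigvals, eigvecs = np.linalg.eig(pairwise)
w = np.real(eigvecs[:, eigvals.real.argmax()])
w /= w.sum()                  # criteria weights, normalised to sum to 1

systems = ["electrical", "ceiling", "roof"]
severity = np.array([[0.8, 0.7, 0.6, 0.9],   # scores against PC, EA, EO, MC
                     [0.3, 0.2, 0.4, 0.2],
                     [0.5, 0.6, 0.4, 0.5]])

for name, risk in sorted(zip(systems, severity @ w), key=lambda p: -p[1]):
    print(f"{name}: {risk:.3f}")
```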

  11. Value impact analysis utilizing PRA techniques combined with a hybrid plant model

    International Nuclear Information System (INIS)

    Edson, J.L.; Stillwell, D.W.

    1989-01-01

    A value impact analysis (VIA) has been performed by the INEL to support an NRC Regulatory Analysis for resolution of Generic Issue (GI) 29, Bolting Degradation or Failure in Nuclear Power Plants. A VIA for replacing the reactor coolant pressure boundary (RCPB) bolts of BWRs and PWRs was previously prepared by Pacific Northwest Laboratories in 1985 under instructions limiting the VIA to the potential for failure of primary pressure boundary bolting. Subsequently, the INEL was requested to perform a VIA that included non-primary systems and component support bolts, to be compatible with the resolution of the broader issue. Because the initial list of systems and bolting applications that could be included in the VIA was very large, including them all would likely result in analyzing some that have little, if any, effect on public risk. This paper discusses how PRA techniques combined with a hybrid plant model were used to determine which bolts have the potential to be significant contributors to public risk if they were to fail, and which were therefore included in the VIA.

  12. Component Analysis and Identification of Black Tahitian Cultured Pearls From the Oyster Pinctada margaritifera Using Spectroscopic Techniques

    Science.gov (United States)

    Shi, L.; Wang, Y.; Liu, X.; Mao, J.

    2018-03-01

    Raman spectroscopy; ultraviolet, visible, and near-infrared (UV-Vis-NIR) reflectance spectroscopy; and X-ray fluorescence (XRF) spectroscopy were used to characterize black Tahitian cultured pearls and imitations of these saltwater cultured pearls produced by γ-irradiation or by coloring cultured pearls with silver nitrate or organic dyes. Raman spectra indicated that aragonite was the major constituent of all four types of pearl. Using Raman spectroscopy at an excitation wavelength of 514 nm, black Tahitian cultured pearls exhibited characteristic bands at 1100-1700 cm⁻¹. These bands were attributed to various organic components, including conchiolin and other black biological pigments. The peaks shown by saltwater cultured pearls colored with organic dyes varied with the type of dye used. Tahitian cultured and organic-dye-treated saltwater cultured pearls were thus easily distinguished by Raman spectroscopy. UV-Vis-NIR reflectance spectra showed bands at 408, 497, and 700 nm derived from porphyrin and other black pigments. The spectra of dye-treated black saltwater pearls showed absorption peaks at 216, 261, 300, and 578 nm. The 261-nm absorption band disappeared from the spectra of γ-irradiated saltwater cultured pearls, suggesting degradation of conchiolin in these pearls. XRF analysis revealed the presence of Ag on the surface of silver nitrate-dyed saltwater cultured pearls.

  13. Applications of ultrasonic phased array technique during fabrication of nuclear tubing and other components for the Indian nuclear power program

    International Nuclear Information System (INIS)

    Kapoor, K.

    2015-01-01

    The ultrasonic phased array technique has been applied in the fabrication of nuclear fuel and structural components at NFC. The integrity of nuclear fuel and structural components is crucial, as they are exposed to a severe environment during operation, leading to rapid degradation of their properties over the lifecycle. Nuclear Fuel Complex has the mandate for fabricating the nuclear fuel and core structurals for Indian PHWRs/BWR, sub-assemblies for the PFBR, and steam generator tubing for the PFBR and PHWRs, which are among the most critical materials for the Indian nuclear power program. NDE during fabrication of these materials is thus crucial, as it provides the designer with confidence in safe operation over the component lifetime. Many of these techniques had to be developed in-house to meet unique requirements of high sensitivity, high resolution and component shape. Among the advancements in NDE during fabrication is the use of ultrasonic phased arrays, which is detailed in this paper.
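
    A small sketch of the core of phased-array UT: the focal law, i.e. the per-element firing delays that make a linear array focus at a chosen point. Geometry and sound speed below are illustrative assumptions, not NFC inspection settings.

```python
# A focal-law sketch: delays that focus a 16-element linear array at a point
# 20 mm deep in steel (all values illustrative).
import numpy as np

pitch = 0.6e-3                       # element spacing (m)
n_elem = 16
c = 5900.0                           # longitudinal velocity in steel (m/s)
focus = np.array([0.0, 20e-3])       # focal point (x, z) in metres

x = (np.arange(n_elem) - (n_elem - 1) / 2) * pitch  # element x-positions
tof = np.hypot(x - focus[0], focus[1]) / c          # time of flight to focus
delays = tof.max() - tof             # fire the farthest elements first

print(np.round(delays * 1e9).astype(int), "ns")
```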

  14. Aeromagnetic Compensation Algorithm Based on Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Peilin Wu

    2018-01-01

    Aeromagnetic exploration is an important exploration method in geophysics. The data are typically measured by an optically pumped magnetometer mounted on an aircraft. However, any aircraft produces significant levels of magnetic interference, so aeromagnetic compensation is essential in aeromagnetic exploration. Multicollinearity of the aeromagnetic compensation model, however, degrades the performance of the compensation. To address this issue, a novel aeromagnetic compensation method based on principal component analysis is proposed. Using the algorithm, the correlation in the feature matrix is eliminated and the principal components are used to construct the hyperplane that compensates the platform-generated magnetic fields. The algorithm was tested using a helicopter, and an improvement ratio of 9.86 was obtained. The compensation quality is almost the same as, or slightly better than, ridge regression. The validity of the proposed method was experimentally demonstrated.
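
    A minimal sketch of the underlying idea, principal-component regression against multicollinearity in a compensation model; the 16-term feature matrix and interference signal are simulated, not flight data, and the component cutoff is a placeholder.

```python
# A PCR sketch against multicollinearity: simulated near-collinear model
# terms, regression on the leading principal components only.
import numpy as np

rng = np.random.default_rng(7)
n = 2000
base = rng.normal(size=(n, 4))
extra = base @ rng.normal(size=(4, 12)) * 0.01 + rng.normal(0, 1e-3, (n, 12))
X = np.column_stack([base, extra])     # 16 near-collinear model terms
beta = rng.normal(size=16)
y = X @ beta + rng.normal(0, 0.05, n)  # interference + measurement noise

# PCA on the standardised features; regress on the leading components only.
Xs = (X - X.mean(0)) / X.std(0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
k = int((s > 0.05 * s[0]).sum())       # placeholder cutoff for weak directions
scores = Xs @ Vt[:k].T
coef, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
resid = y - y.mean() - scores @ coef

print(f"kept {k}/16 components, improvement ratio = {y.std() / resid.std():.1f}")
```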

  15. Cloud Masking for Remotely Sensed Data Using Spectral and Principal Components Analysis

    Directory of Open Access Journals (Sweden)

    A. Ahmad

    2012-06-01

    Two methods of cloud masking tuned to tropical conditions have been developed, based on spectral analysis and Principal Components Analysis (PCA) of Moderate Resolution Imaging Spectroradiometer (MODIS) data. In the spectral approach, thresholds were applied to four reflective bands (1, 2, 3, and 4), three thermal bands (29, 31 and 32), the band 2/band 1 ratio, and the difference between bands 29 and 31 in order to detect clouds. The PCA approach applied a threshold to the first principal component derived from the seven quantities used for spectral analysis. Cloud detections were compared with the standard MODIS cloud mask, and their accuracy was assessed using reference images and geographical information on the study area.
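
    A schematic sketch of the two masks on a synthetic MODIS-like array: per-band thresholds combined by OR, and a threshold on the first principal component of the same stacked quantities. All threshold values are placeholders, not the tuned values of the study.

```python
# A schematic sketch, not the paper's tuned masks: spectral thresholds and a
# first-principal-component threshold on synthetic "MODIS-like" data.
import numpy as np

rng = np.random.default_rng(8)
h, w = 64, 64
refl = rng.uniform(0, 1, (4, h, w))       # stand-ins for bands 1, 2, 3, 4
bt = rng.uniform(250, 300, (3, h, w))     # stand-ins for bands 29, 31, 32 (K)

feats = np.vstack([refl,
                   (refl[1] / refl[0])[None],   # band 2 / band 1 ratio
                   (bt[0] - bt[1])[None],       # band 29 - band 31 difference
                   bt[1][None]])                # a thermal band

# Spectral mask: a cloud is flagged when any test fires (placeholder values).
spectral = (refl[0] > 0.8) | (bt[1] < 265) | (feats[4] > 1.5)

# PCA mask: threshold the first principal component of the stacked quantities.
flat = feats.reshape(feats.shape[0], -1).T
z = (flat - flat.mean(0)) / flat.std(0)
pc1 = z @ np.linalg.eigh(np.cov(z, rowvar=False))[1][:, -1]
pca_mask = (pc1 > pc1.mean() + pc1.std()).reshape(h, w)

print(f"spectral mask: {spectral.mean():.1%} cloudy, "
      f"PCA mask: {pca_mask.mean():.1%} cloudy")
```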

  16. 48 CFR 15.404-1 - Proposal analysis techniques.

    Science.gov (United States)

    2010-10-01

    48 CFR 15.404-1 (Federal Acquisition Regulations System, 2010-10-01): Proposal analysis techniques. ... assistance of other experts to ensure that an appropriate analysis is performed. (6) Recommendations or...

  17. Analyses of component degradation to evaluate maintenance effectiveness and aging effects

    International Nuclear Information System (INIS)

    Samanta, P.K.; Hsu, F.; Subudhi, M.; Vesely, W.E.

    1991-01-01

    This paper describes degradation modeling, an approach for analyzing the degradation and failure of components in order to understand the aging process of components. As used in our study, degradation modeling is the analysis of information on component degradation in order to develop models of the degradation process and its implications. The modeling focuses on analyzing the times at which component degradations occur, to model how the rate of degradation changes with the age of the component. With this methodology we also determine the effectiveness of maintenance as applicable to aging evaluations. The specific applications performed give quantitative models of component degradation rates and failure rates from plant-specific data. The statistical techniques allow aging trends to be identified in the degradation data and in the failure data. Initial estimates of the effectiveness of maintenance in limiting degradations from becoming failures are developed. These results are important first steps in degradation modeling, and show that degradation can be modeled to identify aging trends. 2 refs., 8 figs., 1 tab
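
    A toy sketch of the trend-fitting step: degradation occurrence times binned by component age and a log-linear rate fitted to the bin counts. The simulated ages are drawn with a linearly increasing rate, so a positive trend should be recovered; none of this is the paper's plant data.

```python
# A toy aging-trend fit: log-linear degradation rate vs. component age,
# via weighted least squares on log bin counts (simulated occurrences).
import numpy as np

rng = np.random.default_rng(9)
ages = 20 * np.sqrt(rng.uniform(size=400))   # occurrence density grows with age
counts, edges = np.histogram(ages, bins=10, range=(0, 20))
mid = 0.5 * (edges[:-1] + edges[1:])

mask = counts > 0
slope, intercept = np.polyfit(mid[mask], np.log(counts[mask]), 1,
                              w=np.sqrt(counts[mask]))
print(f"fitted aging trend: rate ~ exp({slope:+.3f} * age)")
```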

  18. Structural analysis of NPP components and structures

    International Nuclear Information System (INIS)

    Saarenheimo, A.; Keinaenen, H.; Talja, H.

    1998-01-01

    Capabilities for effective structural integrity assessment have been created and extended in several important cases. The applications presented in this paper deal with pressurised thermal shock (PTS) loading and with severe dynamic loading cases of the containment, reinforced concrete structures and piping components. Hydrogen combustion within the containment is considered in some severe accident scenarios: can a steel containment withstand the postulated hydrogen detonation loads and still maintain its integrity? This is the topic of Chapter 2. The following Chapter 3 deals with a reinforced concrete floor subjected to jet impingement caused by a postulated rupture of a nearby high-energy pipe, and Chapter 4 deals with the dynamic loading resistance of pipe lines under postulated pressure transients due to water hammer. The reliability of the structural integrity analysis methods and capabilities developed for application in NPP component assessment must be evaluated and verified. The resources available within the RATU2 programme alone do not allow the large-scale experiments needed for that purpose; thus, the verification of the PTS analysis capabilities has been conducted through participation in international co-operative programmes. Participation in the European Network for Evaluating Steel Components (NESC) is the topic of a parallel paper in this symposium. The results obtained in two other international programmes are summarised in Chapters 5 and 6 of this paper, where PTS tests with a model vessel and a benchmark assessment of RPV nozzle integrity are described. (author)

  19. Quantitative descriptive analysis and principal component analysis for sensory characterization of Indian milk product cham-cham.

    Science.gov (United States)

    Puri, Ritika; Khamrui, Kaushik; Khetra, Yogesh; Malhotra, Ravinder; Devraja, H C

    2016-02-01

    Promising development and expansion in the market of cham-cham, a traditional Indian dairy product, is expected in the coming future with the organized production of this milk product by some large dairies. The objective of this study was to document the extent of variation in the sensory properties of market samples of cham-cham collected from four different locations known for their excellence in cham-cham production, and to find out the attributes that govern much of the variation in the sensory scores of this product, using quantitative descriptive analysis (QDA) and principal component analysis (PCA). QDA revealed significant (p < 0.05) differences in the sensory attributes of cham-cham among the market samples. PCA identified four significant principal components that accounted for 72.4% of the variation in the sensory data. Factor scores of each of the four principal components, which primarily correspond to sweetness/shape/dryness of interior, surface appearance/surface dryness, rancid and firmness attributes, specify the location of each market sample along each of the axes in 3-D graphs. These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring the attributes of cham-cham that contribute most to its sensory acceptability.
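
    A brief sketch of the PCA step: components with eigenvalues greater than one retained from a panel-score matrix and their explained variance reported, on simulated scores rather than the cham-cham panel data.

```python
# A brief PCA sketch on simulated panel scores (not the cham-cham data):
# retain components with eigenvalue > 1 and report explained variance.
import numpy as np

rng = np.random.default_rng(10)
scores = rng.normal(6, 1, (40, 12))     # 40 market samples x 12 attributes
R = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(R)[::-1]   # descending eigenvalues

kept = eigvals[eigvals > 1]
print(f"{kept.size} PCs retained, explaining "
      f"{100 * kept.sum() / eigvals.sum():.1f}% of the variance")
```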

  20. The Impact of Gas Turbine Component Leakage Fault on GPA Performance Diagnostics

    Directory of Open Access Journals (Sweden)

    E. L. Ntantis

    2016-01-01

    Leakage analysis is a key factor in determining energy loss from a gas turbine. Once a component assembly fails, air leakage through the opening increases, resulting in a performance loss. The performance efficiency of the engine therefore cannot be reliably determined without good estimates and analysis of leakage faults. Consequently, the implementation of a leakage fault within a gas turbine engine model is necessary for any performance diagnostic technique that aims to expand its diagnostic capabilities towards more accurate predictions. This paper explores the impact of gas turbine component leakage faults on GPA (Gas Path Analysis) performance diagnostics. The analysis is demonstrated with a test case in which the gas turbine performance simulation and diagnostics code TURBOMATCH is used to build a performance model of an engine similar to the Rolls-Royce Trent 500 turbofan and to carry out the diagnostic analysis in the presence of different component fault cases. Finally, to improve the reliability of the diagnostic results, a leakage fault analysis of the implemented faults is made. The diagnostic tool used for the analysis of the implemented component faults is a model-based method utilizing non-linear GPA.
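
    A toy, linearized gas-path-analysis step (a stand-in for TURBOMATCH's non-linear GPA): measurement deltas mapped back to component-health deltas through an assumed influence-coefficient matrix.

```python
# A toy, linearized GPA step with an assumed influence matrix H (illustrative
# numbers): rows are measurement deltas, columns are health-parameter deltas.
import numpy as np

H = np.array([[ 0.8, -0.4],    # spool speed sensitivity
              [-1.2,  0.9],    # exhaust temperature sensitivity
              [ 0.5,  0.7]])   # fuel flow sensitivity

z = np.array([-0.014, 0.021, 0.009])   # observed fractional deltas
x, *_ = np.linalg.lstsq(H, z, rcond=None)
print(f"efficiency delta = {x[0]:+.3%}, leakage fraction = {x[1]:+.3%}")
```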