Least-squares variance component estimation
Teunissen, P.J.G.; Amiri-Simkooei, A.R.
2007-01-01
Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight
Gene set analysis using variance component tests.
Huang, Yen-Tsung; Lin, Xihong
2013-06-28
Gene set analyses have become increasingly important in genomic research, as many complex diseases arise from the joint alteration of numerous genes. Genes often act together as a functional repertoire, e.g., a biological pathway/network, and are highly correlated. However, most of the existing gene set analysis methods do not fully account for the correlation among the genes. Here we propose to tackle this important feature of a gene set to improve statistical power in gene set analyses. We propose to model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects that assumes a common distribution for the regression coefficients in the multivariate linear regression model, and calculate the p-values using permutation and a scaled chi-square approximation. We show using simulations that the type I error is protected under different choices of working covariance matrices and that power improves as the working covariance approaches the true covariance. The global test is a special case of TEGS in which the correlation among genes in a gene set is ignored. Using both simulation data and a published diabetes dataset, we show that our test outperforms the commonly used approaches, the global test and gene set enrichment analysis (GSEA). In summary, we develop a gene set analysis method (TEGS) under the multivariate regression framework, which directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and the global test, in both simulations and a diabetes microarray dataset.
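The permutation strategy behind such tests can be illustrated with a toy sketch. The statistic here (sum of squared per-gene mean differences between groups) is a simplified stand-in for the actual TEGS statistic, and all expression values are invented:

```python
import random

random.seed(42)

# Invented expression values: 4 cases and 4 controls, 3 genes in the set.
cases = [[2.1, 1.9, 2.3], [2.4, 2.0, 2.2], [2.2, 2.1, 2.5], [2.3, 1.8, 2.4]]
controls = [[1.0, 1.1, 0.9], [0.8, 1.2, 1.0], [1.1, 0.9, 1.1], [0.9, 1.0, 1.2]]

def set_statistic(a, b):
    """Sum of squared per-gene mean differences between two groups."""
    g = len(a[0])
    mean_a = [sum(row[j] for row in a) / len(a) for j in range(g)]
    mean_b = [sum(row[j] for row in b) / len(b) for j in range(g)]
    return sum((x - y) ** 2 for x, y in zip(mean_a, mean_b))

observed = set_statistic(cases, controls)
pooled = cases + controls
n_perm, hits = 999, 0
for _ in range(n_perm):
    random.shuffle(pooled)  # permute the group labels
    if set_statistic(pooled[:4], pooled[4:]) >= observed - 1e-9:
        hits += 1
p_value = (hits + 1) / (n_perm + 1)  # add-one permutation p-value
print(p_value)
```

With the two groups this well separated, only the original labeling (or its mirror image) reproduces the observed statistic, so the p-value lands near 2/70 ≈ 0.03.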
Variance components and genetic parameters for live weight
Against this background the present study estimated the (co)variance ... Starting values for the (co)variance components of two-trait models were ... Estimates of genetic parameters for weaning weight of beef accounting for direct-maternal.
Robust LOD scores for variance component-based linkage analysis.
Blangero, J; Williams, J T; Almasy, L
2000-01-01
The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.
Variance Component Selection With Applications to Microbiome Taxonomic Data
Jing Zhai
2018-03-01
High-throughput sequencing technology has enabled population-based studies of the role of the human microbiome in disease etiology and exposure response. Microbiome data are summarized as counts or composition of the bacterial taxa at different taxonomic levels. An important problem is to identify the bacterial taxa that are associated with a response. One method is to test the association of a specific taxon with phenotypes in a linear mixed-effects model, which incorporates phylogenetic information among bacterial communities. Another type of approach considers all taxa in a joint model and achieves selection via a penalization method, which ignores the phylogenetic information. In this paper, we consider regression analysis that treats bacterial taxa at different levels as multiple random effects. For each taxon, a kernel matrix is calculated based on distance measures in the phylogenetic tree and acts as one variance component in the joint model. Taxonomic selection is then achieved by the lasso (least absolute shrinkage and selection operator) penalty on the variance components. Our method integrates biological information into the variable selection problem and greatly improves selection accuracy. Simulation studies demonstrate the superiority of our method over existing methods such as the group lasso. Finally, we apply our method to a longitudinal microbiome study of Human Immunodeficiency Virus (HIV)-infected patients. We implement our method in the high-performance computing language Julia. Software and detailed documentation are freely available at https://github.com/JingZhai63/VCselection.
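The selection mechanism of a lasso penalty can be sketched with its proximal operator: components whose magnitude falls below the penalty level are set exactly to zero. The variance-component values and penalty below are invented for illustration and do not reproduce the paper's full algorithm:

```python
def soft_threshold(x, lam):
    """Soft-thresholding operator, the proximal map of the lasso penalty."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

# Hypothetical variance-component estimates for five taxa; the penalty
# zeroes out the weakly supported components, performing selection.
sigma2 = [0.50, 0.02, 0.30, 0.01, 0.12]
lam = 0.05
selected = [round(soft_threshold(s, lam), 2) for s in sigma2]
print(selected)
```

Taxa 2 and 4 are dropped entirely, while the remaining components are shrunk toward zero by the penalty amount.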
Heritability, variance components and genetic advance of some ...
Heritability, variance components and genetic advance of some yield and yield related traits in Ethiopian ... African Journal of Biotechnology ... randomized complete block design at Adet Agricultural Research Station in 2008 cropping season.
Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction
Ling Huang
2017-02-01
Ionospheric delay is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have recently been introduced to model the spatial correlation and variability of the ionosphere; they intrinsically assume that the ionosphere field is stochastically stationary but do not take the random observational errors into account. In this paper, by treating the spatial statistical information on the ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and the measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the ionospheric signals and the TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with ordinary Kriging and with polynomial interpolations using spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of the results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach are in good agreement with the other methods, ranging from 10 to 80 TEC Units (TECU, 1 TECU = 1 × 10^16 electrons/m^2) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as that of ordinary Kriging but with a smaller standard deviation, around 3 TECU, than the others. The residual results show that the interpolation precision of the
Variance components for body weight in Japanese quails (Coturnix japonica)
RO Resende
2005-03-01
The objective of this study was to estimate the variance components for body weight in Japanese quails by Bayesian procedures. The body weight at hatch (BWH) and at 7 (BW07), 14 (BW14), 21 (BW21) and 28 (BW28) days of age of 3,520 quails was recorded from August 2001 to June 2002. A multiple-trait animal model with additive genetic, maternal environment and residual effects was implemented by Gibbs sampling methodology. A single Gibbs sampling chain with 80,000 rounds was generated by the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Model). Normal and inverted Wishart distributions were used as prior distributions for the random effects and the variance components, respectively. Variance components were estimated based on the 500 samples that remained after elimination of 30,000 rounds in the burn-in period and thinning at intervals of 100 rounds. The posterior means of the additive genetic variance components were 0.15, 4.18, 14.62, 27.18 and 32.68; the posterior means of the maternal environment variance components were 0.23, 1.29, 2.76, 4.12 and 5.16; and the posterior means of the residual variance components were 0.084, 6.43, 22.66, 31.21 and 30.85 at hatch and 7, 14, 21 and 28 days of age, respectively. The posterior means of heritability were 0.33, 0.35, 0.36, 0.43 and 0.47 at hatch and 7, 14, 21 and 28 days of age, respectively. These results indicate that heritability increased with age. On the other hand, after hatch there was a marked reduction in the proportion of the phenotypic variance due to the maternal environment, whose estimates were 0.50, 0.11, 0.07, 0.07 and 0.08 for BWH, BW07, BW14, BW21 and BW28, respectively. The genetic correlation between weights at different ages was high, except for the estimates between BWH and weight at other ages. Changes in body weight of quails can therefore be efficiently achieved by selection.
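The heritabilities quoted above can be sanity-checked from the quoted posterior means. Note that the ratio of posterior means only approximates the reported posterior mean of heritability, so small discrepancies (one or two hundredths) are expected:

```python
# Posterior means of variance components at hatch, 7, 14, 21 and 28 days
# (values quoted from the abstract above).
additive = [0.15, 4.18, 14.62, 27.18, 32.68]
maternal = [0.23, 1.29, 2.76, 4.12, 5.16]
residual = [0.084, 6.43, 22.66, 31.21, 30.85]

def heritability(va, vm, ve):
    """Narrow-sense heritability: additive over total phenotypic variance."""
    return va / (va + vm + ve)

h2 = [round(heritability(va, vm, ve), 2)
      for va, vm, ve in zip(additive, maternal, residual)]
print(h2)
```

The ratios come out as [0.32, 0.35, 0.37, 0.43, 0.48], close to the reported 0.33, 0.35, 0.36, 0.43 and 0.47, and reproducing the increasing trend with age.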
Variance Reduction Techniques in Monte Carlo Methods
Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.
2010-01-01
Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the
Improving precision in gel electrophoresis by stepwisely decreasing variance components.
Schröder, Simone; Brandmüller, Asita; Deng, Xi; Ahmed, Aftab; Wätzig, Hermann
2009-10-15
Many methods have been developed to increase selectivity and sensitivity in proteome research. However, gel electrophoresis (GE), one of the major techniques in this area, is still known for its often unsatisfactory precision. Relative standard deviations (RSD%) of up to 60% have been reported. In this case the improvement of precision and sensitivity is absolutely essential, particularly for the quality control of biopharmaceuticals. Our work reflects the remarkable and completely irregular changes of the background signal from gel to gel. This irregularity was identified as one of the governing error sources. These background changes can be strongly reduced by using signal detection in the near-infrared (NIR) range. This detection method provides the most sensitive approach for conventional CCB (Colloidal Coomassie Blue) stained gels, which is reflected in a total error of just 5% (RSD%). To further investigate variance components in GE, an experimental Plackett-Burman screening design was performed. The influence of seven potential factors on the precision was investigated using 10 proteins with different properties analyzed by NIR detection. The results emphasized the individuality of the proteins: completely different factors were identified as significant for each protein. However, of the seven investigated parameters, just four showed a significant effect on some proteins, namely destaining time, staining temperature, changes of detergent additives (SDS and LDS) in the sample buffer, and the age of the gels. As a result, precision can only be improved individually for each protein or protein class. Further understanding of the unique properties of proteins should enable us to improve the precision in gel electrophoresis.
Analysis of conditional genetic effects and variance components in developmental genetics.
Zhu, J
1995-12-01
A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.
An elementary components of variance analysis for multi-center quality control
Munson, P.J.; Rodbard, D.
1977-01-01
The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality control (QC) studies. Statistical analysis methods for such studies using an 'analysis of variance with components of variance estimation' are discussed. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Components of variance analysis also provides an intelligent way to combine the results of several QC samples run at different levels, from which we may decide whether any component varies systematically with dose level; if not, pooling of estimates becomes possible. We consider several possible relationships of standard deviation to the laboratory mean. Each relationship corresponds to an underlying statistical model and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine if an appropriate model has been chosen, although the exact functional relationship of standard deviation to laboratory mean may be difficult to establish. Appropriate graphical display of the data aids in visual understanding of the data. A plot of the ranked standard deviation vs. the ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean.
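The components-of-variance computation for a balanced one-way layout can be sketched as follows; the laboratory data are invented for illustration:

```python
# Hypothetical QC data: replicate assay results from three laboratories.
data = {
    "lab_A": [10.1, 9.8, 10.3, 10.0],
    "lab_B": [11.2, 11.0, 11.5, 11.3],
    "lab_C": [9.5, 9.9, 9.6, 9.8],
}

def anova_variance_components(groups):
    """One-way random-effects ANOVA estimators for a balanced design:
    sigma2_within = MS_within, sigma2_between = (MS_between - MS_within)/n."""
    k = len(groups)                       # number of laboratories
    n = len(next(iter(groups.values())))  # replicates per laboratory
    grand = sum(sum(g) for g in groups.values()) / (k * n)
    means = {lab: sum(g) / n for lab, g in groups.items()}
    ss_between = n * sum((m - grand) ** 2 for m in means.values())
    ss_within = sum((x - means[lab]) ** 2
                    for lab, g in groups.items() for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (k * (n - 1))
    # Truncate at zero: the moment estimator can go negative by chance.
    return max((ms_between - ms_within) / n, 0.0), ms_within

between, within = anova_variance_components(data)
print(round(between, 3), round(within, 3))
```

Here the between-laboratory component (about 0.651) dominates the within-assay component (0.04), which is exactly the situation that motivates collaborative QC studies.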
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy
Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.jp; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro [Department of Radiation Oncology and Image-applied Therapy, Kyoto University, 54 Shogoin-Kawaharacho, Sakyo, Kyoto 606-8507 (Japan)
2016-09-15
Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
Principal component approach in variance component estimation for international sire evaluation
Jakobsen Jette
2011-05-01
Background: The dairy cattle breeding industry is a highly globalized business, which needs internationally comparable and reliable breeding values of sires. The international Bull Evaluation Service, Interbull, was established in 1983 to respond to this need. Currently, Interbull performs multiple-trait across country evaluations (MACE) for several traits and breeds in dairy cattle and provides international breeding values to its member countries. Estimating parameters for MACE is challenging, since the structure of the datasets and the conventional use of multiple-trait models easily result in over-parameterized genetic covariance matrices. The number of parameters to be estimated can be reduced by taking into account only the leading principal components of the traits considered. For MACE, this is readily implemented in a random regression model. Methods: This article compares two principal component approaches to estimate variance components for MACE using real datasets. The methods tested were a REML approach that directly estimates the genetic principal components (direct PC) and the so-called bottom-up REML approach (bottom-up PC), in which traits are sequentially added to the analysis and the statistically significant genetic principal components are retained. Furthermore, this article evaluates the utility of the bottom-up PC approach to determine the appropriate rank of the (co)variance matrix. Results: Our study demonstrates the usefulness of both approaches and shows that they can be applied to large multi-country models considering all concerned countries simultaneously. These strategies can thus replace the current practice of estimating the covariance components required through a series of analyses involving selected subsets of traits. Our results support the importance of using the appropriate rank in the genetic (co)variance matrix. Using too low a rank resulted in biased parameter estimates, whereas too high a rank did not result in
Genetic variance components for residual feed intake and feed ...
Feeding costs of animals is a major determinant of profitability in livestock production enterprises. Genetic selection to improve feed efficiency aims to reduce feeding cost in beef cattle and thereby improve profitability. This study estimated genetic (co)variances between weaning weight and other production, reproduction ...
Estimates of variance components for postweaning feed intake and ...
2013-03-09
Transformation of RFIp and RDGp to z-scores (mean = 0.0, variance = 1.0) and then ... generation pedigree (n = 9 653) used for this analysis. ... Nkrumah, J.D., Basarab, J.A., Wang, Z., Li, C., Price, M.A., Okine, E.K., Crews Jr., ...
Variance component and heritability estimates of early growth traits ...
as selection criteria for meat production in sheep (Anon, 1970; Olson et al., 1976; Lasslo et al., 1985; Badenhorst et al., 1991). If these traits are to be included in a breeding programme, accurate estimates of breeding values will be needed to optimize selection programmes. This requires a knowledge of variance and co-.
An elementary components of variance analysis for multi-centre quality control
Munson, P.J.; Rodbard, D.
1978-01-01
The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality-control (QC) studies. Simple graphical display of data in the form of histograms is useful but insufficient. The paper discusses statistical analysis methods for such studies using an ''analysis of variance with components of variance estimation''. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Problems with RIA data, e.g. severe non-uniformity of variance and/or departure from a normal distribution violate some of the usual assumptions underlying analysis of variance. In order to correct these problems, it is often necessary to transform the data before analysis by using a logarithmic, square-root, percentile, ranking, RIDIT, ''Studentizing'' or other transformation. Ametric transformations such as ranks or percentiles protect against the undue influence of outlying observations, but discard much intrinsic information. Several possible relationships of standard deviation to the laboratory mean are considered. Each relationship corresponds to an underlying statistical model and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine whether an appropriate model has been chosen, although the exact functional relationship of standard deviation to laboratory mean may be difficult to establish. Appropriate graphical display aids visual understanding of the data. A plot of the ranked standard deviation versus ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean
Advanced methods of analysis variance on scenarios of nuclear prospective
Blazquez, J.; Montalvo, C.; Balbas, M.; Garcia-Berrocal, A.
2011-01-01
Traditional variance propagation techniques are not very reliable when relative uncertainties reach 100%; less conventional methods are therefore used, such as the Beta distribution, fuzzy logic and the Monte Carlo method.
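The point can be illustrated with a one-line model: for y = x² with a 100% relative uncertainty on x, first-order propagation predicts sd(y) = |dy/dx|·sd(x) = 2, whereas sampling recovers the exact value √6 ≈ 2.449. The model and numbers are illustrative, not taken from the paper:

```python
import math
import random

random.seed(1)

# Propagate x -> y = x**2 with x ~ N(1, 1), i.e. 100 % relative uncertainty.
mu, sigma, n = 1.0, 1.0, 100_000
ys = [random.gauss(mu, sigma) ** 2 for _ in range(n)]
mean_y = sum(ys) / n
sd_mc = math.sqrt(sum((y - mean_y) ** 2 for y in ys) / (n - 1))
sd_linear = abs(2 * mu) * sigma  # first-order (linearized) propagation
print(round(sd_linear, 3), round(sd_mc, 3))
```

The linearized estimate (2.0) understates the true spread by about 20%, which is why Monte Carlo propagation is preferred when uncertainties are this large.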
Estimates of variance components for postweaning feed intake and ...
Feed efficiency is of major economic importance in beef production. The objective of this work was to evaluate alternative measures of feed efficiency for use in genetic evaluation. To meet this objective, genetic parameters were estimated for the components of efficiency. These parameters were then used in multiple-trait ...
Sinclair, D.F.; Williams, J.
1979-01-01
There have been significant developments in the design and use of neutron moisture meters since Hewlett et al. (1964) investigated the sources of variance when using this instrument to estimate soil moisture. There appears to be little in the literature, however, that updates these findings. This paper aims to isolate the components of variance when moisture content and moisture change are estimated using the neutron scattering method with current technology and methods.
VARIANCE COMPONENTS AND SELECTION FOR FEATHER PECKING BEHAVIOR IN LAYING HENS
Su, Guosheng; Kjaer, Jørgen B.; Sørensen, Poul
2005-01-01
Variance components and selection response for feather pecking behaviour were studied by analysing data from a divergent selection experiment. An investigation showed that a Box-Cox transformation with power = -0.2 made the data approximately normally distributed and best fitted by the given model. Variance components and selection response were estimated using Bayesian analysis with the Gibbs sampling technique. The total variation was rather large for the two traits in both low feather peckin...
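The effect of such a power transform can be sketched on invented count data (these are not the study's pecking records): a Box-Cox transform with power -0.2 compresses the long right tail and brings the sample skewness close to zero.

```python
import math

def box_cox(y, lam):
    """Box-Cox power transform; lam = 0 reduces to the natural log."""
    if lam == 0:
        return [math.log(v) for v in y]
    return [(v ** lam - 1.0) / lam for v in y]

def skewness(y):
    """Moment-based sample skewness g1 = m3 / m2**1.5."""
    n = len(y)
    mean = sum(y) / n
    m2 = sum((v - mean) ** 2 for v in y) / n
    m3 = sum((v - mean) ** 3 for v in y) / n
    return m3 / m2 ** 1.5

# Hypothetical right-skewed pecking counts (illustrative only).
counts = [1, 1, 2, 2, 3, 5, 8, 13, 21]
before = skewness(counts)
after = skewness(box_cox(counts, -0.2))
print(round(before, 2), round(after, 2))
```

The raw counts have a skewness around 1.3; after the transform it falls to roughly zero, consistent with the normalizing role the transform plays in the analysis above.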
Christensen, Ole Fredslund; Frydenberg, Morten; Jensen, Jens Ledet
2005-01-01
The large deviation modified likelihood ratio statistic is studied for testing a variance component equal to a specified value. Formulas are presented in the general balanced case, whereas in the unbalanced case only the one-way random effects model is studied. Simulation studies are presented, showing that the normal approximation to the large deviation modified likelihood ratio statistic gives confidence intervals for variance components with coverage probabilities very close to the nominal confidence coefficient.
Variance components estimation for farrowing traits of three purebred pigs in Korea
Bryan Irvine Lopez
2017-09-01
Objective: This study was conducted to estimate breed-specific variance components for total number born (TNB), number born alive (NBA) and mortality rate from birth through weaning including stillbirths (MORT) of three main swine breeds in Korea. In addition, the importance of including maternal genetic and service sire effects in estimation models was evaluated. Methods: Records of farrowing traits from 6,412 Duroc, 18,020 Landrace, and 54,254 Yorkshire sows collected from January 2001 to September 2016 from different farms in Korea were used in the analysis. Animal models and the restricted maximum likelihood method were used to estimate the additive genetic, permanent environmental, maternal genetic, service sire and residual variances. Results: The heritability estimates ranged from 0.072 to 0.102, 0.090 to 0.099, and 0.109 to 0.121 for TNB; 0.087 to 0.110, 0.088 to 0.100, and 0.099 to 0.107 for NBA; and 0.027 to 0.031, 0.050 to 0.053, and 0.073 to 0.081 for MORT in the Duroc, Landrace and Yorkshire breeds, respectively. The proportion of the total variation due to permanent environmental effects, maternal genetic effects, and service sire effects ranged from 0.042 to 0.088, 0.001 to 0.031, and 0.001 to 0.021, respectively. Spearman rank correlations among models ranged from 0.98 to 0.99, demonstrating that the maternal genetic and service sire effects have small effects on the precision of the breeding values. Conclusion: Models that include additive genetic and permanent environmental effects are suitable for farrowing traits in the Duroc, Landrace, and Yorkshire populations in Korea. These breed-specific variance component estimates for litter traits can be utilized for pig improvement programs in Korea.
The Threat of Common Method Variance Bias to Theory Building
Reio, Thomas G., Jr.
2010-01-01
The need for more theory building scholarship remains one of the pressing issues in the field of HRD. Researchers can employ quantitative, qualitative, and/or mixed methods to support vital theory-building efforts, understanding however that each approach has its limitations. The purpose of this article is to explore common method variance bias as…
Application of effective variance method for contamination monitor calibration
Goncalez, O.L.; Freitas, I.S.M. de.
1990-01-01
In this report, the calibration of a thin-window Geiger-Müller type monitor for alpha surface contamination is presented. The calibration curve is obtained by least-squares fitting with effective variance. The method and the approach used for the calculation are briefly discussed.
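A minimal sketch of an effective-variance straight-line fit: each point is weighted by 1/(σ_y² + (b·σ_x)²), and the weights are updated iteratively with the current slope b. The calibration points below are invented and lie exactly on y = 1 + 2x, so the fit should recover intercept 1 and slope 2:

```python
def effective_variance_fit(x, y, sx, sy, iters=20):
    """Weighted straight-line fit where each point's weight uses the
    effective variance sy^2 + (b*sx)^2, refreshed with the current
    slope b on every pass (a minimal sketch of the method)."""
    b = 0.0  # initial slope guess
    for _ in range(iters):
        w = [1.0 / (syi ** 2 + (b * sxi) ** 2) for sxi, syi in zip(sx, sy)]
        sw = sum(w)
        xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
        ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
        b = (sum(wi * (xi - xbar) * (yi - ybar)
                 for wi, xi, yi in zip(w, x, y))
             / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x)))
        a = ybar - b * xbar
    return a, b

# Hypothetical calibration points lying exactly on y = 1 + 2x.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.0, 3.0, 5.0, 7.0, 9.0]
sx = [0.1] * 5  # uncertainty in x (e.g. reference source activity)
sy = [0.2] * 5  # uncertainty in y (e.g. measured count rate)
intercept, slope = effective_variance_fit(x, y, sx, sy)
print(intercept, slope)
```

With equal per-point uncertainties the weights cancel and the fit reproduces the line exactly; with heterogeneous sx and sy the iteration is what distinguishes the effective-variance fit from ordinary weighted least squares.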
A Hold-out method to correct PCA variance inflation
Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Hansen, Lars Kai
2012-01-01
In this paper we analyze the problem of variance inflation experienced by the PCA algorithm when working in an ill-posed scenario where the dimensionality of the training set is larger than its sample size. In an earlier article a correction method based on a Leave-One-Out (LOO) procedure...
Gravity interpretation of dipping faults using the variance analysis method
Essa, Khalid S
2013-01-01
A new algorithm is developed to estimate simultaneously the depth and the dip angle of a buried fault from the normalized gravity gradient data. This algorithm utilizes numerical first horizontal derivatives computed from the observed gravity anomaly, using filters of successive window lengths to estimate the depth and the dip angle of a buried dipping fault structure. For a fixed window length, the depth is estimated using a least-squares sense for each dip angle. The method is based on computing the variance of the depths determined from all horizontal gradient anomaly profiles using the least-squares method for each dip angle. The minimum variance is used as a criterion for determining the correct dip angle and depth of the buried structure. When the correct dip angle is used, the variance of the depths is always less than the variances computed using wrong dip angles. The technique can be applied not only to the true residuals, but also to the measured Bouguer gravity data. The method is applied to synthetic data with and without random errors and two field examples from Egypt and Scotland. In all cases examined, the estimated depths and other model parameters are found to be in good agreement with the actual values.
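The minimum-variance selection step can be sketched with a toy stand-in: depth estimates from several window lengths agree only at the true dip angle, so the dip minimizing their variance is selected. The depth_estimate function below is purely illustrative and is not the gravity-gradient formula:

```python
import statistics

TRUE_DIP, TRUE_DEPTH = 40.0, 2.0  # hypothetical true model parameters

def depth_estimate(window, dip_deg):
    # Hypothetical: a wrong trial dip biases each window length differently,
    # so the spread of depths across windows vanishes only at the true dip.
    return TRUE_DEPTH + 0.05 * window * (dip_deg - TRUE_DIP) / TRUE_DIP

windows = [2, 3, 4, 5, 6]  # successive window lengths

def depth_spread(dip):
    """Variance of the window-wise depth estimates for a trial dip."""
    return statistics.variance(depth_estimate(w, dip) for w in windows)

best_dip = min(range(10, 81, 10), key=depth_spread)
print(best_dip)
```

The scan over trial dips of 10° to 80° picks 40°, where the window-wise depth estimates coincide and the variance criterion reaches zero.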
Bian, Zunjian; Du, Yongming; Li, Hua
2016-04-01
Land surface temperature (LST) is a key variable that plays an important role in hydrological, meteorological and climatological studies. Thermal infrared directional anisotropy is one of the essential factors in LST retrieval and its application to longwave radiance estimation. Many approaches have been proposed to estimate directional brightness temperatures (DBT) over natural and urban surfaces, while fewer efforts have focused on 3-D scenes, and the surface component temperatures used in DBT models are quite difficult to acquire. Therefore, a combined 3-D model of TRGM (Thermal-region Radiosity-Graphics combined Model) and an energy balance method is proposed in this paper for the synchronous simulation of component temperatures and DBT in a row-planted canopy. The surface thermodynamic equilibrium is determined by an iteration strategy coupling TRGM and the energy balance method. The combined model was validated against top-of-canopy DBTs from airborne observations. The results indicate that the proposed model performs well in simulating directional anisotropy, especially the hotspot effect. Although we find that the model overestimates the DBT with a bias of 1.2 K, it can serve as a reference for studying the temporal variation of component temperatures and DBTs when field measurements are inaccessible.
Variance reduction methods applied to deep-penetration problems
Cramer, S.N.
1984-01-01
All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course
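One of the simplest variance reduction techniques, antithetic variates, can be shown in a few lines: pairing each uniform draw U with 1 - U produces negatively correlated function values whose average has lower variance than independent draws. The integrand E[exp(U)] is a standard textbook example, not one of the shielding problems discussed above:

```python
import math
import random

random.seed(0)

def plain_mc(n):
    """Crude Monte Carlo estimate of E[exp(U)], U ~ Uniform(0, 1)."""
    return sum(math.exp(random.random()) for _ in range(n)) / n

def antithetic_mc(n):
    """Antithetic variates: average exp(U) with exp(1 - U) for each draw."""
    total = 0.0
    for _ in range(n // 2):
        u = random.random()
        total += 0.5 * (math.exp(u) + math.exp(1.0 - u))
    return total / (n // 2)

exact = math.e - 1.0  # the integral of exp over [0, 1]
p_err = abs(plain_mc(10_000) - exact)
a_err = abs(antithetic_mc(10_000) - exact)
print(p_err, a_err)
```

For this integrand the antithetic estimator cuts the variance by roughly a factor of 60 at the same sample budget; deep-penetration problems need far stronger techniques (importance sampling, splitting/roulette), but the accounting is the same.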
Variability of indoor and outdoor VOC measurements: An analysis using variance components
Jia, Chunrong; Batterman, Stuart A.; Relyea, George E.
2012-01-01
This study examines concentrations of volatile organic compounds (VOCs) measured inside and outside of 162 residences in southeast Michigan, U.S.A. Nested analyses apportioned four sources of variation: city, residence, season, and measurement uncertainty. Indoor measurements were dominated by seasonal and residence effects, accounting for 50 and 31%, respectively, of the total variance. Contributions from measurement uncertainty (<20%) and city effects (<10%) were small. For outdoor measurements, season, city and measurement variation accounted for 43, 29 and 27% of variance, respectively, while residence location had negligible impact (<2%). These results show that, to obtain representative estimates of indoor concentrations, measurements in multiple seasons are required. In contrast, outdoor VOC concentrations can use multi-seasonal measurements at centralized locations. Error models showed that uncertainties at low concentrations might obscure effects of other factors. Variance component analyses can be used to interpret existing measurements, design effective exposure studies, and determine whether the instrumentation and protocols are satisfactory.
A Cure for Variance Inflation in High Dimensional Kernel Principal Component Analysis
Abrahamsen, Trine Julie; Hansen, Lars Kai
2011-01-01
Small-sample high-dimensional principal component analysis (PCA) suffers from variance inflation and lack of generalizability. It has earlier been pointed out that a simple leave-one-out variance renormalization scheme can cure the problem. In this paper we generalize the cure in two directions: first, we propose a computationally less intensive approximate leave-one-out estimator; second, we show that variance inflation is also present in kernel principal component analysis (kPCA) and we provide a non-parametric renormalization scheme which can quite efficiently restore generalizability in kPCA. As for PCA, our analysis also suggests a simplified approximate expression. © 2011 Trine J. Abrahamsen and Lars K. Hansen.
Krag, Kristian
The composition of bovine milk fat used for human consumption is far from the recommendations for human fat nutrition. The aim of this PhD thesis was to describe the variance components and prediction probabilities of individual fatty acids (FA) in bovine milk, and to evaluate the possibilities...
Variance and covariance components for liability of piglet survival during different periods
Su, G; Sorensen, D; Lund, M S
2008-01-01
Variance and covariance components for piglet survival in different periods were estimated from individual records of 133 004 Danish Landrace piglets and 89 928 Danish Yorkshire piglets, using a liability threshold model including both direct and maternal additive genetic effects. At the individu...
Double Minimum Variance Beamforming Method to Enhance Photoacoustic Imaging
Paridar, Roya; Mozaffarzadeh, Moein; Nasiriavanaki, Mohammadreza; Orooji, Mahdi
2018-01-01
One of the common algorithms used to reconstruct photoacoustic (PA) images is the non-adaptive Delay-and-Sum (DAS) beamformer. However, the quality of the reconstructed PA images obtained by DAS is not satisfactory due to its high sidelobe levels and wide mainlobe. In contrast, adaptive beamformers, such as minimum variance (MV), result in an improved image compared to DAS. In this paper, a novel beamforming method, called Double MV (D-MV), is proposed to enhance the image quality compared to...
Harrison, Jay M; Howard, Delia; Malven, Marianne; Halls, Steven C; Culler, Angela H; Harrigan, George G; Wolfinger, Russell D
2013-07-03
Compositional studies on genetically modified (GM) and non-GM crops have consistently demonstrated that their respective levels of key nutrients and antinutrients are remarkably similar and that other factors such as germplasm and environment contribute more to compositional variability than transgenic breeding. We propose that graphical and statistical approaches that can provide meaningful evaluations of the relative impact of different factors to compositional variability may offer advantages over traditional frequentist testing. A case study on the novel application of principal variance component analysis (PVCA) in a compositional assessment of herbicide-tolerant GM cotton is presented. Results of the traditional analysis of variance approach confirmed the compositional equivalence of the GM and non-GM cotton. The multivariate approach of PVCA provided further information on the impact of location and germplasm on compositional variability relative to GM.
Variance components and selection response for feather-pecking behavior in laying hens.
Su, G; Kjaer, J B; Sørensen, P
2005-01-01
Variance components and selection response for feather-pecking behavior were studied by analyzing the data from a divergent selection experiment. An investigation indicated that a Box-Cox transformation with power lambda = -0.2 made the data approximately normally distributed and gave the best fit for the model. Variance components and selection response were estimated using Bayesian analysis with the Gibbs sampling technique. The total variation was rather large for the investigated traits in both the low feather-pecking line (LP) and the high feather-pecking line (HP). Based on the mean of the marginal posterior distribution, on the Box-Cox transformed scale, heritability for the number of feather-pecking bouts (FP bouts) was 0.174 in line LP and 0.139 in line HP. For the number of feather-pecking pecks (FP pecks), heritability was 0.139 in line LP and 0.105 in line HP. No full-sib group effect or observation pen effect was found for the 2 traits. After 4 generations of selection, the total response for the number of FP bouts on the transformed scale was 58 and 74% of the mean of the first generation in line LP and line HP, respectively. The total response for the number of FP pecks was 47 and 46% of the mean of the first generation in line LP and line HP, respectively. The variance components and the realized selection response together suggest that genetic selection can be effective in minimizing FP behavior. This would be expected to reduce one of the major welfare problems in laying hens.
Some variance reduction methods for numerical stochastic homogenization.
Blanc, X; Le Bris, C; Legoll, F
2016-04-28
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).
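The variance reduction techniques this abstract surveys are specific to stochastic homogenization, but the underlying idea is generic: reshape the sampling so the Monte Carlo average converges with a smaller spread. A minimal sketch using antithetic variates, one of the standard techniques borrowed from engineering contexts (the integrand `f` and all parameter values here are illustrative, not from the paper):

```python
import random
import statistics

def plain_mc(f, n, rng):
    # Ordinary Monte Carlo samples of f(U), U uniform on [0, 1].
    return [f(rng.random()) for _ in range(n)]

def antithetic_mc(f, n, rng):
    # Pair each draw u with 1 - u; for a monotone integrand the pair is
    # negatively correlated, so the pair average has a lower variance.
    return [(f(u) + f(1 - u)) / 2
            for u in (rng.random() for _ in range(n // 2))]

f = lambda u: u ** 2            # integral over [0, 1] is 1/3
plain = plain_mc(f, 10000, random.Random(42))
anti = antithetic_mc(f, 10000, random.Random(42))

# Same target, but the antithetic samples scatter far less around it.
print(statistics.mean(anti), statistics.variance(anti) < statistics.variance(plain))
```

The same principle (construct correlated replicas of the random medium so their errors partially cancel) is what the corrector-problem variants in the paper exploit.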
Heritability and variance components of some morphological and agronomic in alfalfa
Ates, E.; Tekeli, S.
2005-01-01
Four alfalfa cultivars were investigated using a randomized complete-block design with three replications. Variance components, variance coefficients and heritability values of some morphological characters, herbage yield, dry matter yield and seed yield were determined. Maximum main stem height (78.69 cm), main stem diameter (4.85 mm), leaflet width (0.93 cm), seeds/pod (6.57), herbage yield (75.64 t ha⁻¹), dry matter yield (20.06 t ha⁻¹) and seed yield (0.49 t ha⁻¹) were obtained from cv. Marina. Leaflet length varied from 1.65 to 2.08 cm. The raceme length measured 3.15 to 4.38 cm in the alfalfa cultivars. The highest 1000-seed weight values (2.42-2.49 g) were found for the Marina and Sitel cultivars. Heritability values of the various traits were: 91.0% for main stem height, 97.6% for main stem diameter, 81.8% for leaflet length, 88.8% for leaflet width, 90.4% for leaf/stem ratio, 28.3% for racemes/main stem, 99.0% for raceme length, 99.2% for seeds/pod, 88.0% for 1000-seed weight, 97.2% for herbage yield, 99.6% for dry matter yield and 95.4% for seed yield. (author)
Variance bias analysis for the Gelbard's batch method
Seo, Jae Uk; Shim, Hyung Jin [Seoul National Univ., Seoul (Korea, Republic of)]
2014-05-15
In this paper, the variances and the bias that arise when Gelbard's batch method is applied are derived analytically, and the real variance estimated from this bias is compared with the real variance calculated from replicas. When the batch method is used to calculate the sample variance, the covariance terms between tallies within a batch are eliminated from the bias. With the 2 by 2 fission matrix problem, the real variance could be calculated whether or not the batch method was applied; however, as the batch size grew larger, the standard deviation of the real variance increased. A Monte Carlo estimation yields a sample variance as its statistical uncertainty, but this value is smaller than the real variance because the sample variance is biased. To reduce this bias, Gelbard devised what is called the Gelbard batch method. It has been demonstrated that the sample variance approaches the real variance when the batch method is applied; in other words, the bias is reduced. This fact is well known in the Monte Carlo field, but until now no analytical interpretation of it has been given.
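The batching idea above can be sketched in a few lines: when successive tallies are correlated, the naive independent-sample formula underestimates the standard error of the mean, while averaging tallies into batches first recovers much of the hidden correlation. The AR(1) tally stream and all parameter values below are illustrative stand-ins, not the paper's fission matrix problem:

```python
import random
import statistics

def naive_se(x):
    # Standard error assuming independent samples (biased low when correlated).
    return (statistics.variance(x) / len(x)) ** 0.5

def batch_se(x, batch_size):
    # Batch method: average tallies inside each batch, then estimate the
    # variance of the overall mean from the spread of the batch means.
    means = [statistics.mean(x[i:i + batch_size])
             for i in range(0, len(x) - batch_size + 1, batch_size)]
    return (statistics.variance(means) / len(means)) ** 0.5

random.seed(0)
# AR(1) sequence as a stand-in for correlated generation-to-generation tallies.
x, prev = [], 0.0
for _ in range(10000):
    prev = 0.9 * prev + random.gauss(0, 1)
    x.append(prev)

print(naive_se(x) < batch_se(x, 100))  # batching exposes the hidden correlation
```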
Luthria, Devanand L; Mukhopadhyay, Sudarsan; Robbins, Rebecca J; Finley, John W; Banuelos, Gary S; Harnly, James M
2008-07-23
UV spectral fingerprints, in combination with analysis of variance-principal components analysis (ANOVA-PCA), can differentiate between cultivars and growing conditions (or treatments) and can be used to identify sources of variance. Broccoli samples, composed of two cultivars, were grown under seven different conditions or treatments (four levels of Se-enriched irrigation waters, organic farming, and conventional farming with 100 and 80% irrigation based on crop evaporation and transpiration rate). Freeze-dried powdered samples were extracted with methanol-water (60:40, v/v) and analyzed with no prior separation. Spectral fingerprints were acquired for the UV region (220-380 nm) using a 50-fold dilution of the extract. ANOVA-PCA was used to construct subset matrices that permitted easy verification of the hypothesis that cultivar and treatment contributed to a difference in the chemical expression of the broccoli. The sums of the squares of the same matrices were used to show that cultivar, treatment, and analytical repeatability contributed 30.5, 68.3, and 1.2% of the variance, respectively.
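The ANOVA step of ANOVA-PCA partitions the data into factor-effect matrices whose sums of squares give percentage contributions like the 30.5/68.3/1.2% split quoted above. A scalar sketch of that partition for a balanced two-factor design (the toy table, numbers, and helper name are invented for illustration, not the broccoli data):

```python
def variance_contributions(table):
    # table[c][t] = replicate measurements for cultivar c under treatment t
    # (balanced design). Returns % of total sum of squares per source — the
    # scalar analogue of summing squared entries of each ANOVA-PCA matrix.
    cs = sorted(table)
    ts = sorted(table[cs[0]])
    vals = [v for c in cs for t in ts for v in table[c][t]]
    grand = sum(vals) / len(vals)
    reps = len(table[cs[0]][ts[0]])
    c_mean = {c: sum(v for t in ts for v in table[c][t]) / (len(ts) * reps) for c in cs}
    t_mean = {t: sum(v for c in cs for v in table[c][t]) / (len(cs) * reps) for t in ts}
    ss_c = sum(len(ts) * reps * (c_mean[c] - grand) ** 2 for c in cs)
    ss_t = sum(len(cs) * reps * (t_mean[t] - grand) ** 2 for t in ts)
    ss_tot = sum((v - grand) ** 2 for v in vals)
    ss_res = ss_tot - ss_c - ss_t
    return {k: 100 * s / ss_tot for k, s in
            [("cultivar", ss_c), ("treatment", ss_t), ("residual", ss_res)]}

table = {"A": {"t1": [1, 1], "t2": [5, 5]},
         "B": {"t1": [2, 2], "t2": [6, 6]}}
shares = variance_contributions(table)   # treatment dominates in this toy data
```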
Muhammad Cahyadi
2016-01-01
A quantitative trait locus (QTL) is a particular region of the genome containing one or more genes associated with economically important quantitative traits. This study was conducted to identify QTL regions for body weight and growth traits in purebred Korean native chicken (KNC). F1 samples (n = 595) were genotyped using 127 microsatellite markers and 8 single nucleotide polymorphisms that covered 2,616.1 centimorgans (cM) of map length for 26 autosomal linkage groups. Body weight traits were measured every 2 weeks from hatch to 20 weeks of age. Weight of half carcass was also collected together with growth rate. A multipoint variance component linkage approach was used to identify QTLs for the body weight traits. Two significant QTLs for growth were identified, on chicken chromosome 3 (GGA3) for growth from 16 to 18 weeks (logarithm of the odds [LOD] = 3.24, nominal p value = 0.0001) and on GGA4 for growth from 6 to 8 weeks (LOD = 2.88, nominal p value = 0.0003). Additionally, one significant QTL and three suggestive QTLs were detected for body weight traits in KNC: a significant QTL for body weight at 4 weeks (LOD = 2.52, nominal p value = 0.0007) and a suggestive QTL for body weight at 8 weeks (LOD = 1.96, nominal p value = 0.0027) were detected on GGA4; QTLs were also detected for two different body weight traits, body weight at 16 weeks on GGA3 and body weight at 18 weeks on GGA19. Additionally, two suggestive QTLs for carcass weight were detected at 0 and 70 cM on GGA19. In conclusion, the current study identified several significant and suggestive QTLs that affect growth-related traits in a unique resource pedigree in purebred KNC. This information will contribute to improving body weight traits in native chicken breeds, especially Asian native chicken breeds.
Widyas, Nuzul; Jensen, Just; Nielsen, Vivi Hunnicke
A selection experiment was performed for weight gain over 13 generations of outbred mice. A total of 18 lines were included in the experiment. Nine lines were allotted to each of the two treatment diets (19.3 and 5.1% protein). Within each diet, three lines were selected upwards, three lines were selected downwards and three lines were kept as controls. Bayesian statistical methods are used to estimate the genetic variance components. The mixed model analysis is modified to include a mutation effect following the methods of Wray (1990). DIC was used to compare the models. Models including a mutation effect have better fit compared to the model with only an additive effect. Mutation as a direct effect contributes 3.18% of the total phenotypic variance, while in the model with interactions between additive and mutation effects, it contributes 1.43% as a direct effect and 1.36% as an interaction effect of the total variance...
How Reliable Are Students' Evaluations of Teaching Quality? A Variance Components Approach
Feistauer, Daniela; Richter, Tobias
2017-01-01
The inter-rater reliability of university students' evaluations of teaching quality was examined with cross-classified multilevel models. Students (N = 480) evaluated lectures and seminars over three years with a standardised evaluation questionnaire, yielding 4224 data points. The total variance of these student evaluations was separated into the…
Matheus Costa dos Reis
2014-01-01
This study was carried out to obtain estimates of the genetic variance and covariance components related to intra- and interpopulation effects in the original populations (C0) and in the third cycle (C3) of reciprocal recurrent selection (RRS), which allows breeders to define the best breeding strategy. For that purpose, the half-sib progenies of intrapopulation (P11 and P22) and interpopulation (P12 and P21) crosses from populations 1 and 2, derived from single-cross hybrids, in the 0 and 3 cycles of the reciprocal recurrent selection program were used. The intra- and interpopulation progenies were evaluated in a 10×10 triple lattice design in two separate locations. The data for unhusked ear weight (ear weight without husk) and plant height were collected. All genetic variance and covariance components were estimated from the expected mean squares. The breakdown of additive variance into intrapopulation and interpopulation additive deviations (στ²) and the covariance between these and their intrapopulation additive effects (CovAτ) found predominance of the dominance effect for unhusked ear weight. For plant height, these components show that the intrapopulation additive effect explains most of the variation. Estimates of the intrapopulation and interpopulation additive genetic variances confirm that populations derived from single-cross hybrids have potential for recurrent selection programs.
Zahodne, Laura B.; Manly, Jennifer J.; Brickman, Adam M.; Narkhede, Atul; Griffith, Erica Y.; Guzman, Vanessa A.; Schupf, Nicole; Stern, Yaakov
2016-01-01
Cognitive reserve describes the mismatch between brain integrity and cognitive performance. Older adults with high cognitive reserve are more resilient to age-related brain pathology. Traditionally, cognitive reserve is indexed indirectly via static proxy variables (e.g., years of education). More recently, cross-sectional studies have suggested that reserve can be expressed as residual variance in episodic memory performance that remains after accounting for demographic factors and brain pathology (whole brain, hippocampal, and white matter hyperintensity volumes). The present study extends these methods to a longitudinal framework in a community-based cohort of 244 older adults who underwent two comprehensive neuropsychological and structural magnetic resonance imaging sessions over 4.6 years. On average, residual memory variance decreased over time, consistent with the idea that cognitive reserve is depleted over time. Individual differences in change in residual memory variance predicted incident dementia, independent of baseline residual memory variance. Multiple-group latent difference score models revealed tighter coupling between brain and language changes among individuals with decreasing residual memory variance. These results suggest that changes in residual memory variance may capture a dynamic aspect of cognitive reserve and could be a useful way to summarize individual cognitive responses to brain changes. Change in residual memory variance among initially non-demented older adults was a better predictor of incident dementia than residual memory variance measured at one time-point. PMID:26348002
Analysis of force variance for a continuous miner drum using the Design of Experiments method
S. Somanchi; V.J. Kecojevic; C.J. Bise [Pennsylvania State University, University Park, PA (United States)]
2006-06-15
Continuous miners (CMs) are excavating machines designed to extract a variety of minerals by underground mining. The variance in force experienced by the cutting drum is a very important aspect that must be considered during drum design. A uniform variance essentially means that an equal load is applied on the individual cutting bits and this, in turn, enables better cutting action, greater efficiency, and longer bit and machine life. There are certain input parameters used in the drum design whose exact relationships with force variance are not clearly understood. This paper determines (1) the factors that have a significant effect on the force variance of the drum and (2) the values that can be assigned to these factors to minimize the force variance. A computer program, Continuous Miner Drum (CMD), was developed in collaboration with Kennametal, Inc. to facilitate the mechanical design of CM drums. CMD also facilitated data collection for determining significant factors affecting force variance. Six input parameters, namely centre pitch, outer pitch, balance angle, shift angle, set angle and relative angle, were tested at two levels. Trials were configured using the Design of Experiments (DoE) method, where a 2⁶ full-factorial experimental design was selected to investigate the effect of these factors on force variance. Results from the analysis show that all parameters except balance angle, as well as their interactions, significantly affect the force variance.
Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†
Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia
2015-01-01
Meta-analyses are typically used to estimate an overall mean effect for an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance, has long been challenged. Our aim is to identify known methods for estimation of the between-study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between-study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and, for continuous data, the restricted maximum likelihood estimator are better alternatives for estimating the between-study variance. Based on the scenarios and results presented in the published studies, we recommend the Q-profile method and the alternative approach based on a 'generalised Cochran between-study variance statistic' to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence-based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
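As a concrete anchor for the discussion, the DerSimonian and Laird moment estimator mentioned above can be written in a few lines (a standard textbook form of the estimator; the three-study toy inputs are invented):

```python
def dersimonian_laird_tau2(y, v):
    # y: study effect estimates; v: within-study variances.
    w = [1.0 / vi for vi in v]           # fixed-effect (inverse-variance) weights
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    c = sw - sum(wi * wi for wi in w) / sw
    # Moment estimate of the between-study variance, truncated at zero.
    return max(0.0, (q - (len(y) - 1)) / c)

# Three studies with equal within-study variance 0.01.
tau2 = dersimonian_laird_tau2([0.1, 0.3, 0.5], [0.01, 0.01, 0.01])
print(tau2)  # → 0.03
```

The truncation at zero is one of the behaviours for which the estimator has been criticised, and part of why the alternatives surveyed in the abstract exist.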
Poivey Jean-Paul
2011-09-01
Background: The pre-weaning growth rate of lambs, an important component of meat market production, is affected by maternal and direct genetic effects. The French genetic evaluation model takes into account the number of lambs suckled by applying a multiplicative factor (1 for a lamb reared as a single, 0.7 for twin-reared lambs) to the maternal genetic effect, in addition to including the birth*rearing type combination as a fixed effect, which acts on the mean. However, little evidence has been provided to justify the use of this multiplicative model. The two main objectives of the present study were to determine, by comparing models of analysis, (1) whether pre-weaning growth is the same trait in single- and twin-reared lambs and (2) whether the multiplicative coefficient represents a good approach for taking this possible difference into account. Methods: Data on the pre-weaning growth rate, defined as the average daily gain from birth to 45 days of age, of 29,612 Romane lambs born between 1987 and 2009 at the experimental farm of La Sapinière (INRA-France) were used to compare eight models that account for the number of lambs per dam reared in various ways. Models were compared using the Akaike information criterion. Results: The model that best fitted the data assumed that (1) direct (maternal) effects correspond to the same trait regardless of the number of lambs reared, (2) the permanent environmental effects and variances associated with the dam depend on the number of lambs reared and (3) the residual variance depends on the number of lambs reared. Even though this model fitted the data better than a model that included a multiplicative coefficient, little difference was found between EBV from the different models (the correlation between EBV varied from 0.979 to 0.999). Conclusions: Based on experimental data, the current genetic evaluation model can be improved to better take into account the number of lambs reared. Thus, it would be of
Variance-to-mean method generalized by linear difference filter technique
Hashimoto, Kengo; Ohsaki, Hiroshi; Horiguchi, Tetsuo; Yamane, Yoshihiro; Shiroya, Seiji
1998-01-01
The conventional variance-to-mean method (Feynman-α method) seriously suffers from divergence of the variance under such transient conditions as a reactor power drift. Strictly speaking, then, the use of the Feynman-α method is restricted to a steady state. To apply the method to more practical uses, it is desirable to overcome this kind of difficulty. For this purpose, we propose the use of a higher-order difference filter technique to reduce the effect of the reactor power drift, and derive several new formulae taking account of the filtering. The capability of the proposed formulae was demonstrated through experiments in the Kyoto University Critical Assembly. The experimental results indicate that the divergence of the variance can be effectively suppressed by the filtering technique, and that the higher-order filter becomes necessary with increasing variation rate in power.
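The effect described above can be sketched numerically: a power drift inflates the Feynman variance-to-mean statistic, while a first-order difference filter suppresses the drift. The drift model, gate counts, and the normalization of the differenced statistic below are illustrative assumptions, not the paper's formulae:

```python
import random
import statistics

def feynman_y(counts):
    # Feynman-Y: variance-to-mean ratio of gated counts, minus one.
    return statistics.variance(counts) / statistics.mean(counts) - 1.0

def filtered_y(counts):
    # First-difference filter removes a linear drift; for independent
    # Poisson-like counts, var(diff) is about twice the mean count.
    d = [b - a for a, b in zip(counts, counts[1:])]
    return statistics.variance(d) / (2.0 * statistics.mean(counts)) - 1.0

random.seed(1)
# Poisson-like gate counts riding on a linear power drift (illustrative).
counts = [round(random.gauss(100 + 2 * i, (100 + 2 * i) ** 0.5))
          for i in range(50)]

y_raw = feynman_y(counts)    # strongly inflated by the drift
y_filt = filtered_y(counts)  # drift suppressed, stays near zero
```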
The Variance-covariance Method using IOWGA Operator for Tourism Forecast Combination
Liangping Wu
2014-08-01
Three combination methods commonly used in tourism forecasting are the simple average method, the variance-covariance method and the discounted MSFE method. These methods assign to each individual forecasting model weights that cannot change at each time point. In this study, we introduce into tourism forecasting the IOWGA operator combination method, which can overcome this defect of the three previous combination methods. Moreover, we further investigate the performance of the four combination methods through theoretical evaluation and forecasting evaluation. The results of the theoretical evaluation show that the IOWGA operator combination method performs extremely well and outperforms the other forecast combination methods. Furthermore, in the forecasting evaluation the IOWGA operator combination method achieves good forecast performance and performs almost the same as the variance-covariance combination method. The IOWGA operator combination method mainly reflects the maximization of forecasting accuracy, whereas the variance-covariance combination method mainly reflects the reduction of forecast error. For future research, it may be worthwhile to introduce and examine other new combination methods that may improve forecasting accuracy, or to employ other techniques to control the timing of weight updates in combined forecasts.
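The variance-covariance combination referred to throughout assigns fixed minimum-variance weights computed from past forecast errors; for two models the weight on the first model has a closed form. A sketch with invented error series (all numbers and names are illustrative):

```python
def min_variance_weight(e1, e2):
    # e1, e2: past forecast errors of two individual models.
    # Returns the minimum-variance weight on model 1; model 2 gets 1 - w.
    n = len(e1)
    v1 = sum(x * x for x in e1) / n          # error variance of model 1
    v2 = sum(x * x for x in e2) / n          # error variance of model 2
    c12 = sum(x * y for x, y in zip(e1, e2)) / n   # error covariance
    return (v2 - c12) / (v1 + v2 - 2 * c12)

e1 = [0.5, -0.4, 0.6, -0.5]   # noisier model
e2 = [0.1, 0.1, -0.2, 0.1]    # more accurate model
w = min_variance_weight(e1, e2)

f1, f2 = 102.0, 98.0          # current-period forecasts from the two models
combined = w * f1 + (1 - w) * f2   # weighted combined forecast
```

Because these weights stay fixed over time, methods like the IOWGA operator approach, which reorders and reweights forecasts at each time point, can adapt where this one cannot.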
Donoghue, K A; Bird-Gardiner, T; Arthur, P F; Herd, R M; Hegarty, R F
2016-04-01
Ruminants contribute 80% of the global livestock greenhouse gas (GHG) emissions mainly through the production of methane, a byproduct of enteric microbial fermentation primarily in the rumen. Hence, reducing enteric methane production is essential in any GHG emissions reduction strategy in livestock. Data on 1,046 young bulls and heifers from 2 performance-recording research herds of Angus cattle were analyzed to provide genetic and phenotypic variance and covariance estimates for methane emissions and production traits and to examine the interrelationships among these traits. The cattle were fed a roughage diet at 1.2 times their estimated maintenance energy requirements and measured for methane production rate (MPR) in open circuit respiration chambers for 48 h. Traits studied included DMI during the methane measurement period, MPR, and methane yield (MY; MPR/DMI), with means of 6.1 kg/d (SD 1.3), 132 g/d (SD 25), and 22.0 g/kg (SD 2.3) DMI, respectively. Four forms of residual methane production (RMP), which is a measure of actual minus predicted MPR, were evaluated. For the first 3 forms, predicted MPR was calculated using published equations. For the fourth (RMP), predicted MPR was obtained by regression of MPR on DMI. Growth and body composition traits evaluated were birth weight (BWT), weaning weight (WWT), yearling weight (YWT), final weight (FWT), and ultrasound measures of eye muscle area, rump fat depth, rib fat depth, and intramuscular fat. Heritability estimates were moderate for MPR (0.27 [SE 0.07]), MY (0.22 [SE 0.06]), and the RMP traits (0.19 [SE 0.06] for each), indicating that genetic improvement to reduce methane emissions is possible. The RMP traits and MY were strongly genetically correlated with each other (0.99 ± 0.01). The genetic correlation of MPR with MY as well as with the RMP traits was moderate (0.32 to 0.63). The genetic correlation between MPR and the growth traits (except BWT) was strong (0.79 to 0.86). These results indicate that
Noack, K.
1982-01-01
The perturbation source method may be a powerful Monte Carlo means of calculating small effects in a particle field. In a preceding paper we formulated this method for inhomogeneous linear particle transport problems, describing the particle fields by solutions of Fredholm integral equations, and derived formulae for the second moment of the difference event point estimator. In the present paper we analyse the general structure of its variance, point out the variance peculiarities, discuss the dependence on certain transport games and on generation procedures of the auxiliary particles, and draw conclusions on how to improve this method.
Variance of a potential of mean force obtained using the weighted histogram analysis method.
Cukier, Robert I
2013-11-27
A potential of mean force (PMF) that provides the free energy of a thermally driven system along some chosen reaction coordinate (RC) is a useful descriptor of systems characterized by complex, high dimensional potential energy surfaces. Umbrella sampling window simulations use potential energy restraints to provide more uniform sampling along a RC so that potential energy barriers that would otherwise make equilibrium sampling computationally difficult can be overcome. Combining the results from the different biased window trajectories can be accomplished using the Weighted Histogram Analysis Method (WHAM). Here, we provide an analysis of the variance of a PMF along the reaction coordinate. We assume that the potential restraints used for each window lead to Gaussian distributions for the window reaction coordinate densities and that the data sampling in each window is from an equilibrium ensemble sampled so that successive points are statistically independent. Also, we assume that neighbor window densities overlap, as required in WHAM, and that further-than-neighbor window density overlap is negligible. Then, an analytic expression for the variance of the PMF along the reaction coordinate at a desired level of spatial resolution can be generated. The variance separates into a sum over all windows with two kinds of contributions: One from the variance of the biased window density normalized by the total biased window density and the other from the variance of the local (for each window's coordinate range) PMF. Based on the desired spatial resolution of the PMF, the former variance can be minimized relative to that from the latter. The method is applied to a model system that has features of a complex energy landscape evocative of a protein with two conformational states separated by a free energy barrier along a collective reaction coordinate. The variance can be constructed from data that is already available from the WHAM PMF construction.
Haley Christopher S
2009-01-01
Introduction: Variance component QTL methodology was used to analyse three candidate regions on chicken chromosomes 1, 4 and 5 for dominant and parent-of-origin QTL effects. Data were available for bodyweight and conformation score measured at 40 days from a two-generation commercial broiler dam line. One hundred dams were nested within 46 sires, with phenotypes and genotypes on 2708 offspring. Linear models were constructed to simultaneously estimate fixed, polygenic and QTL effects. Different genetic models were compared using likelihood ratio test statistics derived from the comparison of full with reduced or null models. Empirical thresholds were derived by permutation analysis. Results: Dominant QTL were found for bodyweight on chicken chromosome 4 and for bodyweight and conformation score on chicken chromosome 5. Suggestive evidence for a maternally expressed QTL for bodyweight and conformation score was found on chromosome 1 in a region corresponding to orthologous imprinted regions in the human and mouse. Conclusion: Initial results suggest that variance component analysis can be applied within commercial populations for the direct detection of segregating dominant and parent-of-origin effects.
Estimating Mean and Variance Through Quantiles : An Experimental Comparison of Different Methods
Moors, J.J.A.; Strijbosch, L.W.G.; van Groenendaal, W.J.H.
2002-01-01
If estimates of mean and variance are needed and only experts' opinions are available, the literature agrees that it is wise to ask the experts only for their (subjective) estimates of quantiles: from these, estimates of the desired parameters are calculated. Quite a number of methods have been
Isolating Trait and Method Variance in the Measurement of Callous and Unemotional Traits.
Paiva-Salisbury, Melissa L; Gill, Andrew D; Stickle, Timothy R
2017-09-01
To examine hypothesized influence of method variance from negatively keyed items in measurement of callous-unemotional (CU) traits, nine a priori confirmatory factor analysis model comparisons of the Inventory of Callous-Unemotional Traits were evaluated on multiple fit indices and theoretical coherence. Tested models included a unidimensional model, a three-factor model, a three-bifactor model, an item response theory-shortened model, two item-parceled models, and three correlated trait-correlated method minus one models (unidimensional, correlated three-factor, and bifactor). Data were self-reports of 234 adolescents (191 juvenile offenders, 43 high school students; 63% male; ages 11-17 years). Consistent with hypotheses, models accounting for method variance substantially improved fit to the data. Additionally, bifactor models with a general CU factor better fit the data compared with correlated factor models, suggesting a general CU factor is important to understanding the construct of CU traits. Future Inventory of Callous-Unemotional Traits analyses should account for method variance from item keying and response bias to isolate trait variance.
Peter Celec
2004-01-01
Cyclic variations of variables are ubiquitous in biomedical science. A number of methods for detecting rhythms have been developed, but they are often difficult to interpret. A simple procedure for detecting cyclic variations in biological time series and quantifying their probability is presented here. Analysis of rhythmic variance (ANORVA) is based on the premise that the variance in groups of data from rhythmic variables is low when a time distance of one period exists between the data entries. A detailed stepwise calculation is presented, including data entry and preparation, variance calculation, and difference testing. An example of the application of the procedure is provided, and a real dataset of the number of papers published per day in January 2003 using selected keywords is compared to randomized datasets. Randomized datasets show no cyclic variations. The number of papers published daily, however, shows a clear and significant (p<0.03) circaseptan (period of 7 days) rhythm, probably of social origin.
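The premise — low variance within groups of entries spaced one period apart — can be sketched as follows (a simplified stand-in for the paper's stepwise procedure; the synthetic 7-day signal and the candidate periods are illustrative):

```python
import numpy as np

def anorva(series, periods):
    """For each candidate period p, average the within-phase variances:
    entries one period apart fall in the same group, so a true rhythm of
    period p yields a low value relative to other candidate periods."""
    series = np.asarray(series, dtype=float)
    out = {}
    for p in periods:
        groups = [series[i::p] for i in range(p)]   # one group per phase
        out[p] = np.mean([g.var() for g in groups if len(g) > 1])
    return out

rng = np.random.default_rng(0)
t = np.arange(84)                                    # 12 weeks of daily data
rhythmic = np.sin(2 * np.pi * t / 7) + 0.1 * rng.standard_normal(t.size)
scores = anorva(rhythmic, periods=[5, 6, 7, 8])
assert min(scores, key=scores.get) == 7              # variance dips at the true period
```

In the paper's procedure the observed dip is then compared against randomized datasets to attach a probability; that permutation step is omitted here.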
Jiang, Zhehan; Skorupski, William
2017-12-12
In many behavioral research areas, multivariate generalizability theory (mG theory) has been typically used to investigate the reliability of certain multidimensional assessments. However, traditional mG-theory estimation-namely, using frequentist approaches-has limits, leading researchers to fail to take full advantage of the information that mG theory can offer regarding the reliability of measurements. Alternatively, Bayesian methods provide more information than frequentist approaches can offer. This article presents instructional guidelines on how to implement mG-theory analyses in a Bayesian framework; in particular, BUGS code is presented to fit commonly seen designs from mG theory, including single-facet designs, two-facet crossed designs, and two-facet nested designs. In addition to concrete examples that are closely related to the selected designs and the corresponding BUGS code, a simulated dataset is provided to demonstrate the utility and advantages of the Bayesian approach. This article is intended to serve as a tutorial reference for applied researchers and methodologists conducting mG-theory studies.
Partial Variance of Increments Method in Solar Wind Observations and Plasma Simulations
Greco, A.; Matthaeus, W. H.; Perri, S.; Osman, K. T.; Servidio, S.; Wan, M.; Dmitruk, P.
2018-02-01
The method called "PVI" (Partial Variance of Increments) has been increasingly used in the analysis of spacecraft and numerical simulation data since its inception in 2008. The purpose of the method is to study the kinematics and formation of coherent structures in space plasmas, a topic that has gained considerable attention, leading to the development of identification methods, observations, and associated theoretical research based on numerical simulations. This review paper summarizes key features of the method and provides a synopsis of the main results obtained by various groups using it. This will enable new users, or those considering methods of this type, to find details and background collected in one place.
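The PVI statistic itself is simple: the magnitude of the signal increment at a given lag, normalized by its rms value. A sketch on a synthetic vector series (the lag and the Brownian-like test field are assumptions of this sketch, not data from the review):

```python
import numpy as np

def pvi(b, lag=1):
    """Partial Variance of Increments of a vector time series b with
    shape (N, d): increment magnitude normalized by its rms value,
    PVI(t) = |b(t+lag) - b(t)| / sqrt(<|db|^2>)."""
    db = b[lag:] - b[:-lag]
    mag = np.linalg.norm(db, axis=1)
    return mag / np.sqrt(np.mean(mag ** 2))

rng = np.random.default_rng(1)
field = np.cumsum(rng.standard_normal((4096, 3)), axis=0)  # synthetic 3-component field
series = pvi(field, lag=1)
assert abs(np.mean(series ** 2) - 1.0) < 1e-12             # normalized by construction
```

In practice, intervals where the PVI series exceeds a chosen threshold (several times its rms) are flagged as candidate coherent structures such as current sheets.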
Vidal-Codina, F.; Nguyen, N.C.; Giles, M.B.; Peraire, J.
2015-01-01
We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method
Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad
2018-02-01
The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of CVEs follows a Chi-squared distribution. Furthermore, an a posteriori noise variance factor is derived from the quadratic form of CVEs. In order to detect blunders in the observations, the estimated standardized CVE is proposed as the test statistic, which can be applied when noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence in the northern coast of the Gulf of Mexico. The results show that after detecting and removing outliers, the root mean square (RMS) of CVEs and the estimated noise standard deviation are reduced by about 51% and 59%, respectively. In addition, the RMS of the LSC prediction error at data points and the RMS of the estimated noise of observations are decreased by 39% and 67%, respectively. However, the RMS of the LSC prediction error on a regular grid of interpolation points covering the area is reduced by only about 4%, a consequence of the sparse distribution of data points in this case study. The influence of gross errors on LSC prediction results is also investigated by lower cutoff CVEs. After elimination of outliers, the RMS of this type of error is also reduced, by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of the dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using the restricted maximum-likelihood method via the Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the
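The flavor of a direct (non-element-wise) cross-validation error formula can be illustrated with the analogous ordinary least squares identity e_i = r_i / (1 - h_ii), where h_ii are the leverages; this is a sketch of the general idea under an OLS stand-in, not the paper's LSC derivation:

```python
import numpy as np

def loo_cv_errors(X, y):
    """Direct leave-one-out cross-validation errors for ordinary least
    squares: e_i = r_i / (1 - h_ii), computed in one pass instead of
    refitting the model n times."""
    H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix
    r = y - H @ y                          # ordinary residuals
    return r / (1.0 - np.diag(H))

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(30), rng.standard_normal(30)])
y = X @ np.array([1.0, 2.0]) + 0.3 * rng.standard_normal(30)

# brute-force check: refit with point i held out and predict it
e_direct = loo_cv_errors(X, y)
i = 5
mask = np.arange(30) != i
beta = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
assert abs(e_direct[i] - (y[i] - X[i] @ beta)) < 1e-10
```

The speed advantage is the same in spirit as in the paper: one linear solve replaces n separate hold-one-out refits.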
Analysis of inconsistent source sampling in monte carlo weight-window variance reduction methods
David P. Griesheimer
2017-09-01
The application of Monte Carlo (MC) methods to large-scale fixed-source problems has recently become possible with new hybrid methods that automate the generation of parameters for variance reduction techniques. Two common variance reduction techniques, weight windows and source biasing, have been automated and popularized by the consistent adjoint-driven importance sampling (CADIS) method. This method uses the adjoint solution from an inexpensive deterministic calculation to define a consistent set of weight windows and source particles for a subsequent MC calculation. One of the motivations for source consistency is to avoid the splitting or rouletting of particles at birth, which requires computational resources. However, it is not always possible or desirable to implement such consistency, which results in inconsistent source biasing. This paper develops an original framework that mathematically expresses the coupling of the weight window and source biasing techniques, allowing the authors to explore the impact of inconsistent source sampling on the variance of MC results. A numerical experiment supports this new framework and suggests that certain classes of problems may be relatively insensitive to inconsistent source sampling schemes with moderate levels of splitting and rouletting.
Dynamic Allan Variance Analysis Method with Time-Variant Window Length Based on Fuzzy Control
Shanshan Gu
2015-01-01
To solve the problem that the dynamic Allan variance (DAVAR) with a fixed window length cannot meet the identification accuracy requirement of a fiber optic gyro (FOG) signal over all time domains, a dynamic Allan variance analysis method with time-variant window length based on fuzzy control is proposed. According to the characteristics of the FOG signal, a fuzzy controller taking the first and second derivatives of the FOG signal as inputs is designed to estimate the window length of the DAVAR. The Allan variances of the signals within the time-variant window are then computed to obtain the DAVAR of the FOG signal and describe its dynamic characteristics. Additionally, a performance evaluation index based on a radar chart is proposed. Experimental results show that, compared with fixed-window-length DAVAR methods, the proposed method identifies the change of the FOG signal over time effectively and improves the performance evaluation index by at least 30%.
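A fixed-window DAVAR sketch makes the underlying computation concrete (the fuzzy-controlled, time-variant window length is the paper's contribution and is not reproduced here; the window length, cluster size, and white-noise FOG-like test signal are all assumptions of this sketch):

```python
import numpy as np

def allan_variance(rate, m):
    """Non-overlapping Allan variance of rate samples at cluster size m:
    half the mean squared difference of successive cluster averages."""
    K = len(rate) // m
    means = rate[:K * m].reshape(K, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

def davar(rate, window, m):
    """Fixed-window DAVAR: Allan variance inside a window slid along
    the signal (the paper instead adapts `window` via fuzzy control)."""
    half = window // 2
    return np.array([allan_variance(rate[t - half:t + half], m)
                     for t in range(half, len(rate) - half, half)])

rng = np.random.default_rng(3)
gyro = rng.standard_normal(6000)     # white-noise FOG-like signal
gyro[3000:] *= 3.0                   # noise level steps up mid-record
track = davar(gyro, window=1000, m=10)
assert track[-1] > 4 * track[0]      # the DAVAR track reveals the change
```

A fixed window trades tracking speed against estimation accuracy uniformly over the record, which is exactly the limitation the fuzzy-controlled window length targets.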
Hu, Pingsha; Maiti, Tapabrata
2011-01-01
Microarray is a powerful tool for genome-wide gene expression analysis. In microarray expression data, often mean and variance have certain relationships. We present a non-parametric mean-variance smoothing method (NPMVS) to analyze differentially expressed genes. In this method, a nonlinear smoothing curve is fitted to estimate the relationship between mean and variance. Inference is then made upon shrinkage estimation of posterior means assuming variances are known. Different methods have been applied to simulated datasets, in which a variety of mean and variance relationships were imposed. The simulation study showed that NPMVS outperformed the other two popular shrinkage estimation methods in some mean-variance relationships; and NPMVS was competitive with the two methods in other relationships. A real biological dataset, in which a cold stress transcription factor gene, CBF2, was overexpressed, has also been analyzed with the three methods. Gene ontology and cis-element analysis showed that NPMVS identified more cold and stress responsive genes than the other two methods did. The good performance of NPMVS is mainly due to its shrinkage estimation for both means and variances. In addition, NPMVS exploits a non-parametric regression between mean and variance, instead of assuming a specific parametric relationship between mean and variance. The source code written in R is available from the authors on request.
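The idea of borrowing strength across genes via a fitted mean-variance trend can be sketched with a simple running-average smoother (a stand-in for the paper's nonlinear smoothing curve; the simulated mean-variance relationship and window size are illustrative assumptions):

```python
import numpy as np

def smooth_mean_variance(means, variances, window=101):
    """Nonparametric mean-variance trend: sort genes by mean expression
    and take a running average of their sample variances, then map the
    smoothed values back to the original gene order."""
    order = np.argsort(means)
    kernel = np.ones(window) / window
    trend = np.convolve(variances[order], kernel, mode='same')
    out = np.empty_like(trend)
    out[order] = trend
    return out

rng = np.random.default_rng(8)
mu = rng.uniform(1, 10, size=5000)                 # true gene means
true_var = 0.1 * mu ** 2                           # variance grows with mean
data = rng.normal(mu[:, None], np.sqrt(true_var)[:, None], size=(5000, 6))

fitted = smooth_mean_variance(data.mean(axis=1), data.var(axis=1, ddof=1))
raw_err = np.mean((data.var(axis=1, ddof=1) - true_var) ** 2)
fit_err = np.mean((fitted - true_var) ** 2)
assert fit_err < raw_err   # smoothing tracks the trend better than raw 6-replicate variances
```

The paper goes further by shrinking both means and variances toward such a fitted trend; only the trend-estimation step is sketched here.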
Gerke, Oke; Vilstrup, Mie Holm; Segtnan, Eivind Antonsen; Halekoh, Ulrich; Høilund-Carlsen, Poul Flemming
2016-01-01
Quantitative measurement procedures need to be accurate and precise to justify their clinical use. Precision reflects the deviation of groups of measurements from one another, often expressed as proportions of agreement, standard errors of measurement, coefficients of variation, or the Bland-Altman plot. We suggest variance component analysis (VCA) to estimate the influence of errors due to single elements of a PET scan (scanner, time point, observer, etc.), to express the composite uncertainty of repeated measurements, and to obtain relevant repeatability coefficients (RCs), which have a unique relation to Bland-Altman plots. Here, we present this approach for the assessment of intra- and inter-observer variation with PET/CT, exemplified with data from two clinical studies. In study 1, 30 patients were scanned pre-operatively for the assessment of ovarian cancer, and their scans were assessed twice by the same observer to study intra-observer agreement. In study 2, 14 patients with glioma were scanned up to five times. The resulting 49 scans were assessed by three observers to examine inter-observer agreement. Outcome variables were SUVmax in study 1 and cerebral total hemispheric glycolysis (THG) in study 2. In study 1, we found an RC of 2.46, equalling half the width of the Bland-Altman limits of agreement. In study 2, the RC for identical conditions (same scanner, patient, time point, and observer) was 2392; allowing for different scanners increased the RC to 2543. Inter-observer differences were negligible compared to differences owing to other factors: between observers 1 and 2, −10 (95% CI: −352 to 332), and between observers 1 and 3, 28 (95% CI: −313 to 370). VCA is an appealing approach for weighing different sources of variation against each other, summarised as RCs. The involved linear mixed effects models require carefully considered sample sizes to account for the challenge of sufficiently accurately estimating variance components. The online version of this article (doi:10
Zhu, Ge; Yao, Xu-Ri; Qiu, Peng; Mahmood, Waqas; Yu, Wen-Kai; Sun, Zhi-Bin; Zhai, Guang-Jie; Zhao, Qing
2018-02-01
In general, sound waves cause vibrations in the objects encountered along their propagation path. If a laser beam illuminates the rough surface of such an object, it is scattered into a speckle pattern that vibrates with these sound waves. Here, an efficient variance-based method is proposed to recover the sound information from speckle patterns captured by a high-speed camera. This method selects, from a small region of the speckle patterns, the pixels that have large variances of gray-value variation over time. The gray-value variations of these pixels are summed together according to a simple model to recover the sound with a high signal-to-noise ratio. Meanwhile, our method significantly simplifies the computation compared with the traditional digital-image-correlation technique. The effectiveness of the proposed method has been verified on a variety of objects. The experimental results illustrate that the proposed method is robust to the quality of the speckle patterns and requires over an order of magnitude less time to process the same number of speckle patterns. In our experiment, a sound signal of 1.876 s duration is recovered from various objects with a time consumption of only 5.38 s.
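The pixel-selection step can be sketched on a synthetic speckle video (the number of selected pixels, the multiplicative modulation model, and the sign-alignment step are assumptions of this sketch, not necessarily the authors' exact model):

```python
import numpy as np

def recover_sound(frames, n_pixels=50):
    """Variance-based sketch: pick the pixels whose gray values vary most
    over time and sum their mean-removed traces to estimate the sound."""
    T = frames.shape[0]
    flat = frames.reshape(T, -1)
    var = flat.var(axis=0)
    idx = np.argsort(var)[-n_pixels:]               # highest-variance pixels
    traces = flat[:, idx] - flat[:, idx].mean(axis=0)
    # align signs so pixels moving in antiphase do not cancel in the sum
    ref = traces[:, -1]
    signs = np.sign(traces.T @ ref)
    return (traces * signs).sum(axis=1)

# synthetic speckle video: a 20 Hz tone modulates a random speckle pattern
rng = np.random.default_rng(4)
t = np.arange(2000) / 2000.0
tone = np.sin(2 * np.pi * 20 * t)
speckle = rng.random((32, 32))
frames = (speckle[None] * (1 + 0.2 * tone[:, None, None])
          + 0.01 * rng.standard_normal((2000, 32, 32)))

rec = recover_sound(frames)
corr = np.corrcoef(rec, tone)[0, 1]
assert abs(corr) > 0.99     # the summed traces track the tone closely
```

Summing many high-variance pixels averages down the per-pixel camera noise, which is where the method's signal-to-noise advantage comes from.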
Yang Jinan; Mihara, Takatsugu
1998-12-01
This report presents a variance reduction technique to estimate the reliability and availability of highly complex systems during a phased mission time using Monte Carlo simulation. In this study, we introduced a variance reduction technique based on the concept of distance between the present system state and the cut set configurations. Using this technique, it becomes possible to bias the transition from the operating states to the failed states of components towards the closest cut set, so that a component failure can drive the system towards a cut set configuration more effectively. JNC developed the PHAMMON (Phased Mission Analysis Program with Monte Carlo Method) code, which involved two kinds of variance reduction techniques: (1) forced transition and (2) failure biasing. However, these techniques did not guarantee an effective reduction in variance. For further improvement, a variance reduction technique incorporating the distance concept was introduced into the PHAMMON code, and numerical calculations were carried out for different design cases of the decay heat removal system in a large fast breeder reactor. Our results indicate that adding this distance-based technique is an effective means of further reducing the variance. (author)
Downside Variance Risk Premium
Feunou, Bruno; Jahan-Parvar, Mohammad; Okou, Cedric
2015-01-01
We propose a new decomposition of the variance risk premium in terms of upside and downside variance risk premia. The difference between upside and downside variance risk premia is a measure of skewness risk premium. We establish that the downside variance risk premium is the main component of the variance risk premium, and that the skewness risk premium is a priced factor with significant prediction power for aggregate excess returns. Our empirical investigation highlights the positive and s...
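The realized-variance side of such a decomposition can be illustrated directly: squared negative returns give the downside semivariance, squared positive returns the upside, and their sum is the total realized variance (a sketch of the decomposition only; the paper's risk premia involve risk-neutral expectations that are not modeled here):

```python
import numpy as np

def semivariances(returns):
    """Decompose realized variance into upside and downside components:
    RV = RV_up + RV_down. Their difference is the realized-skewness
    counterpart of the upside/downside split used for the risk premia."""
    r = np.asarray(returns, dtype=float)
    rv_up = np.sum(r[r > 0] ** 2)
    rv_down = np.sum(r[r < 0] ** 2)
    return rv_up, rv_down

rng = np.random.default_rng(5)
rets = rng.standard_normal(10_000) * 0.01          # synthetic daily returns
up, down = semivariances(rets)
assert abs((up + down) - np.sum(rets ** 2)) < 1e-10  # exact decomposition
skew_proxy = up - down                               # signed asymmetry measure
```

For symmetric returns the two components are close; a dominant downside component is what makes the downside premium the main driver in the paper's results.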
Spence, Jeffrey S; Brier, Matthew R; Hart, John; Ferree, Thomas C
2013-03-01
Linear statistical models are used very effectively to assess task-related differences in EEG power spectral analyses. Mixed models, in particular, accommodate more than one variance component in a multisubject study, where many trials of each condition of interest are measured on each subject. Generally, intra- and intersubject variances are both important to determine correct standard errors for inference on functions of model parameters, but it is often assumed that intersubject variance is the most important consideration in a group study. In this article, we show that, under common assumptions, estimates of some functions of model parameters, including estimates of task-related differences, are properly tested relative to the intrasubject variance component only. A substantial gain in statistical power can arise from the proper separation of variance components when there is more than one source of variability. We first develop this result analytically, then show how it benefits a multiway factoring of spectral, spatial, and temporal components from EEG data acquired in a group of healthy subjects performing a well-studied response inhibition task. Copyright © 2011 Wiley Periodicals, Inc.
Gebregziabher Gebreyohannes
2013-09-01
The objective of this study was to estimate variance components and genetic parameters for lactation milk yield (LY), lactation length (LL), average milk yield per day (YD), initial milk yield (IY), peak milk yield (PY), days to peak (DP) and the parameters (ln(a) and c) of the modified incomplete gamma function (MIG) in an Ethiopian multibreed dairy cattle population. The dataset was composed of 5,507 lactation records collected from 1,639 cows in three locations (Bako, Debre Zeit and Holetta) in Ethiopia from 1977 to 2010. Parameters for MIG were obtained from regression analysis of monthly test-day milk data on days in milk. The cows were purebred (Bos indicus) Boran (B) and Horro (H) and their crosses with different fractions of Friesian (F), Jersey (J) and Simmental (S). There were 23 breed groups (B, H, and their crossbreds with F, J, and S) in the population. Fixed and mixed models were used to analyse the data. The fixed model considered herd-year-season, parity and breed group as fixed effects, and residual as random. The single- and two-trait mixed animal repeatability models considered the fixed effects of herd-year-season and parity subclasses, breed as a function of cow H, F, J, and S breed fractions and general heterosis as a function of heterozygosity, and the random additive animal, permanent environment, and residual effects. For the analysis of LY, LL was added as a fixed covariate to all models. Variance components and genetic parameters were estimated using average information restricted maximum likelihood procedures. The results indicated that all traits were affected (p<0.001) by the considered fixed effects. High-grade B×F cows (3/16B 13/16F) had the highest least squares means (LSM) for LY (2,490±178.9 kg), IY (10.5±0.8 kg), PY (12.7±0.9 kg), YD (7.6±0.55 kg) and LL (361.4±31.2 d), while B cows had the lowest LSM values for these traits. The LSM of LY, IY, YD, and PY tended to increase from the first to the fifth parity. Single-trait analyses
A framework for sequential multiblock component methods
Smilde, A.K.; Westerhuis, J.A.; Jong, S.de
2003-01-01
Multiblock or multiset methods are starting to be used in chemistry and biology to study complex data sets. In chemometrics, sequential multiblock methods are popular; that is, methods that calculate one component at a time and use deflation for finding the next component. In this paper a framework
Mean-Variance-CvaR Model of Multiportfolio Optimization via Linear Weighted Sum Method
Younes Elahi
2014-01-01
We propose a new approach to optimizing portfolios under the mean-variance-CVaR (MVC) model. Although several studies have examined optimal MVC portfolio models, the linear weighted sum method (LWSM) has not been applied in this area. The aim of this paper is to investigate the optimal portfolio model based on MVC via LWSM. With this method, the solution of the MVC portfolio model as a multiobjective problem is presented. In the data analysis section, the approach is investigated for investment in two assets. An MVC model of the multiportfolio was implemented in MATLAB and tested on the presented problem. It is shown that using three objective functions helps investors to manage their portfolio better, minimizing the risk and maximizing the return of the portfolio. The main goal of this study is to modify the current models and simplify them by using LWSM to obtain better results.
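A scenario-based sketch of the weighted-sum scalarization for two assets (in Python rather than the paper's MATLAB; the objective weights, scenario distributions, confidence level, and grid search are all illustrative assumptions):

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Scenario CVaR: average loss in the worst (1 - alpha) tail."""
    q = np.quantile(losses, alpha)
    return losses[losses >= q].mean()

def lwsm_portfolio(returns, lambdas=(1.0, 1.0, 1.0), grid=101):
    """Linear weighted sum scalarization of the three MVC objectives for
    two assets: minimize  -l1*mean + l2*variance + l3*CVaR  over the
    weight w placed on asset 0 (the rest in asset 1)."""
    l1, l2, l3 = lambdas
    best_w, best_val = None, np.inf
    for w in np.linspace(0.0, 1.0, grid):
        port = w * returns[:, 0] + (1 - w) * returns[:, 1]
        val = -l1 * port.mean() + l2 * port.var() + l3 * cvar(-port)
        if val < best_val:
            best_w, best_val = w, val
    return best_w

rng = np.random.default_rng(9)
# asset 0: higher mean, higher risk; asset 1: low mean, low risk
scenarios = np.column_stack([
    rng.normal(0.08, 0.20, 20_000),
    rng.normal(0.03, 0.05, 20_000),
])
w = lwsm_portfolio(scenarios)
assert 0.0 < w < 1.0   # the scalarized optimum diversifies across both assets
```

Sweeping the weight vector (l1, l2, l3) traces out different compromise portfolios on the mean-variance-CVaR efficient surface.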
Method of nickel-plating large components
Wilbuer, K.
1997-01-01
The invention concerns a method of nickel-plating components, according to which even large components can be provided with an adequate nickel layer that is pore- and stress-free and deposited without water loss. According to the invention, the component is heated and, after heating, is pickled, rinsed, scoured, plated in an electrolysis process, and rinsed again. (author)
Rawat, K.K.; Subbaiah, K.V.
1996-01-01
The general-purpose Monte Carlo code MCNP is widely employed for solving deep penetration problems by applying variance reduction techniques. These techniques depend on the nature and type of the problem being solved. The application of geometry splitting and the implicit capture method is examined to study deep penetration problems of neutron, gamma and coupled neutron-gamma transport in thick shielding materials. The typical problems chosen are: (i) a point isotropic monoenergetic 1 MeV gamma-ray source in a nearly infinite water medium, (ii) a 252Cf spontaneous fission source at the centre of 140 cm thick water and concrete, and (iii) 14 MeV fast neutrons incident on the axis of a 100 cm thick concrete disk. (author). 7 refs., 5 figs
Garcia-Pareja, S. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario 'Carlos Haya', Avda. Carlos Haya, s/n, E-29010 Malaga (Spain)], E-mail: garciapareja@gmail.com; Vilches, M. [Servicio de Fisica y Proteccion Radiologica, Hospital Regional Universitario 'Virgen de las Nieves', Avda. de las Fuerzas Armadas, 2, E-18014 Granada (Spain); Lallena, A.M. [Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)
2007-09-21
The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators of use in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and permits, in addition, to investigate the 'hot' regions of the accelerator, an information which is basic to develop a source model for this therapy tool.
Zhai, Qingqing; Yang, Jun; Zhao, Yu
2014-01-01
Variance-based sensitivity analysis has been widely studied and has asserted itself among practitioners. Monte Carlo simulation methods are well developed for the calculation of variance-based sensitivity indices, but they do not make full use of each model run. Recently, several works mentioned a scatter-plot partitioning method to estimate the variance-based sensitivity indices from given data, where a single bunch of samples is sufficient to estimate all the sensitivity indices. This paper focuses on the space-partition method for the estimation of variance-based sensitivity indices, and investigates its convergence and other performance characteristics. Since the method depends heavily on the partition scheme, the influence of the scheme is discussed and an optimal partition scheme is proposed based on minimizing the estimator's variance. A decomposition and integration procedure is proposed to improve the estimation quality for higher-order sensitivity indices. The proposed space-partition method is compared with the more traditional method, and test cases show that it outperforms the traditional one.
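A minimal given-data estimator of a first-order index by input-space partitioning, the basic scheme the paper refines (the equal-probability binning, bin count, and additive test model are assumptions of this sketch):

```python
import numpy as np

def first_order_index(x, y, n_bins=50):
    """Given-data (scatter-plot partition) estimate of the first-order
    variance-based sensitivity index: partition the range of input x into
    equal-probability bins and compute Var(E[y | bin]) / Var(y)."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    which = np.clip(np.searchsorted(edges, x, side='right') - 1, 0, n_bins - 1)
    cond_means = np.array([y[which == b].mean() for b in range(n_bins)])
    sizes = np.array([(which == b).sum() for b in range(n_bins)])
    v_cond = np.sum(sizes * (cond_means - y.mean()) ** 2) / len(y)
    return v_cond / y.var()

# additive test model y = x1 + 0.1*x2, so the true S1 is 1/(1 + 0.01)
rng = np.random.default_rng(6)
x1 = rng.standard_normal(200_000)
x2 = rng.standard_normal(200_000)
y = x1 + 0.1 * x2
s1 = first_order_index(x1, y)
assert abs(s1 - 1 / 1.01) < 0.03   # close to the analytic value
```

Note that a single sample set yields estimates for every input by re-binning on each input column, which is the "full use of each model run" the paper emphasizes; the choice of bin count is exactly the partition-scheme question it studies.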
Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué; Angulo-Molina, Aracely
2017-04-01
In this paper a new methodology to detect and differentiate melanoma cells from normal cells through the averaged variances of 1D signatures calculated with a binary mask is presented. The sample images were obtained from histological sections of mouse melanoma tumors of 4 [Formula: see text] thickness and contrasted with normal cells. The results show that melanoma cells present a well-defined range of averaged variance values obtained from the signatures in the four conditions used.
Modified cleaning method for biomineralized components
Tsutsui, Hideto; Jordan, Richard W.
2018-02-01
The extraction and concentration of biomineralized components from sediment or living materials is time consuming and laborious and often involves steps that remove either the calcareous or siliceous part, in addition to organic matter. However, a relatively quick and easy method using a commercial cleaning fluid for kitchen drains, sometimes combined with a kerosene soaking step, can produce remarkable results. In this study, the method is applied to sediments and living materials bearing calcareous (e.g., coccoliths, foraminiferal tests, holothurian ossicles, ichthyoliths, and fish otoliths) and siliceous (e.g., diatom valves, silicoflagellate skeletons, and sponge spicules) components. The method preserves both components in the same sample, without etching or partial dissolution, but is not applicable to unmineralized components such as dinoflagellate thecae, tintinnid loricae, pollen, or plant fragments.
MCNP variance reduction overview
Hendricks, J.S.; Booth, T.E.
1985-01-01
The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code
Yamauchi, Hideto; Kitamura, Yasunori; Yamane, Yoshihiro; Misawa, Tsuyoshi; Unesaki, Hironobu
2003-01-01
Two types of variance-to-mean methods for subcritical systems driven by a periodic, pulsed neutron source were developed, and their experimental examination was performed with the Kyoto University Critical Assembly and a pulsed neutron generator. As a result, it was demonstrated that the prompt neutron decay constant can be measured by these methods. From this fact, it was concluded that the present variance-to-mean methods have potential for use in a subcriticality monitor for future accelerator-driven systems operated in pulsed mode. (author)
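The core statistic of a variance-to-mean (Feynman-α) method is the excess of the gated count variance over the Poisson expectation. A sketch (the pure-Poisson test source is an illustrative assumption; in a real measurement the dependence of Y on gate width is fitted to extract the prompt neutron decay constant):

```python
import numpy as np

def feynman_y(counts):
    """Variance-to-mean ratio minus one (Feynman-Y) of gated neutron
    counts. It is zero for an uncorrelated (Poisson) source; any excess
    indicates correlated fission chains in the multiplying system."""
    counts = np.asarray(counts, dtype=float)
    return counts.var() / counts.mean() - 1.0

rng = np.random.default_rng(7)
# uncorrelated source: counts in each gate are Poisson distributed
poisson_counts = rng.poisson(100.0, size=100_000)
assert abs(feynman_y(poisson_counts)) < 0.02   # Y ~ 0 for a Poisson source
```

The periodic pulsed source treated in the paper requires synchronizing the count gates with the pulse structure, a refinement omitted from this sketch.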
Methods of measuring residual stresses in components
Rossini, N.S.; Dassisti, M.; Benyounis, K.Y.; Olabi, A.G.
2012-01-01
Highlights: ► Defining the different methods of measuring residual stresses in manufactured components. ► Comprehensive study of the hole drilling, neutron diffraction and other techniques. ► Evaluating the advantages and disadvantages of each method. ► Advising the reader on the appropriate method to use. -- Abstract: Residual stresses occur in many manufactured structures and components. A large number of investigations have been carried out to study this phenomenon and its effect on the mechanical characteristics of these components. Over the years, different methods have been developed to measure residual stress in different types of components in order to obtain reliable assessments. The various specific methods have evolved over several decades and their practical applications have greatly benefited from the development of complementary technologies, notably in material cutting, full-field deformation measurement techniques, numerical methods and computing power. These complementary technologies have stimulated advances not only in measurement accuracy and reliability, but also in range of application; much greater detail in residual stress measurement is now available. This paper aims to classify the different residual stress measurement methods and to provide an overview of some of the recent advances in this area, to help researchers select among destructive, semi-destructive and non-destructive techniques depending on their application and the availability of those techniques. For each method the scope, physical limitations, advantages and disadvantages are summarized. In the end, this paper indicates some promising directions for future developments.
Forward-Weighted CADIS Method for Variance Reduction of Monte Carlo Reactor Analyses
Wagner, John C.; Mosher, Scott W.
2010-01-01
Current state-of-the-art tools and methods used to perform 'real' commercial reactor analyses use high-fidelity transport codes to produce few-group parameters at the assembly level for use in low-order methods applied at the core level. Monte Carlo (MC) methods, which allow detailed and accurate modeling of the full geometry and energy details and are considered the 'gold standard' for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the several-decade-old methodology used in current practice. However, the prohibitive computational requirements associated with obtaining fully converged system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. A goal of current research at Oak Ridge National Laboratory (ORNL) is to change this paradigm by enabling the direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome is the slow non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, research has focused on development in the following two areas: (1) a hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. The focus of this paper is limited to the first area mentioned above. It describes the FW-CADIS method applied to variance reduction of MC reactor analyses and provides initial results for calculating
Sung, Yun Ju; Di, Yanming; Fu, Audrey Q; Rothstein, Joseph H; Sieh, Weiva; Tong, Liping; Thompson, Elizabeth A; Wijsman, Ellen M
2007-01-01
We performed multipoint linkage analyses with multiple programs and models for several gene expression traits in the Centre d'Etude du Polymorphisme Humain families. All analyses provided consistent results for both peak location and shape. Variance-components (VC) analysis gave wider peaks and Bayes factors gave fewer peaks. Among programs from the MORGAN package, lm_multiple performed better than lm_markers, resulting in less Markov-chain Monte Carlo (MCMC) variability between runs, and the program lm_twoqtl provided higher LOD scores by also including either a polygenic component or an additional quantitative trait locus.
Analysis Method for Integrating Components of Product
Choi, Jun Ho [Inzest Co. Ltd, Seoul (Korea, Republic of); Lee, Kun Sang [Kookmin Univ., Seoul (Korea, Republic of)
2017-04-15
This paper presents methods for integrating the parts constituting a product. A new relation function concept and its structure are introduced to analyze the relationships of component parts. This relation function carries three types of information, which can be used to establish a relation function structure. The relation function structure of the analysis criteria was established to analyze and present the data. The priority components determined by the analysis criteria can be integrated. The analysis criteria were divided based on their number and orientation, as well as on their direct or indirect characteristics. This paper presents a design algorithm for component integration. This algorithm was applied to actual products, and the components inside the products were integrated. The proposed algorithm was then used in research to improve bicycle brake discs. As a result, an improved product conforming to the relation function structure was actually created.
Kirton, A
2010-08-01
Full Text Available. A method for calculating the variance and prediction intervals for biomass estimates obtained from allometric equations (A. Kirton, B. Scholes, S. Archibald; CSIR Ecosystem Processes and Dynamics, Natural Resources and the Environment, P.O. Box 395, Pretoria, 0001, South...). The paper shows how prediction intervals (confidence intervals for predicted values) for allometric estimates can be obtained, using an example of estimating tree biomass from stem diameter. It explains how to deal with relationships which are in the power function form - a common form...
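As a hedged sketch of the procedure the abstract describes (synthetic data, not the paper's; the power-law coefficients, noise level, and the approximate t-quantile are all assumptions), a prediction interval for a power-function allometric model can be computed on the log scale and back-transformed:

```python
import numpy as np

# Sketch: prediction interval for a power-law allometric model M = a * D^b,
# fitted on the log scale as ln(M) = ln(a) + b*ln(D) + e.
rng = np.random.default_rng(0)
D = rng.uniform(5.0, 50.0, 80)                           # stem diameters (cm), synthetic
M = 0.1 * D**2.4 * np.exp(rng.normal(0, 0.2, D.size))    # biomass (kg), synthetic

x, y = np.log(D), np.log(M)
n = x.size
b1, b0 = np.polyfit(x, y, 1)          # slope and intercept on the log scale
resid = y - (b0 + b1 * x)
s2 = resid @ resid / (n - 2)          # residual variance

# 95% prediction interval for a new tree of diameter D0
D0 = 25.0
x0 = np.log(D0)
se_pred = np.sqrt(s2 * (1 + 1/n + (x0 - x.mean())**2 / ((x - x.mean())**2).sum()))
t975 = 1.99                           # approximate t quantile, df = 78
point = np.exp(b0 + b1 * x0)
lo, hi = np.exp(b0 + b1*x0 - t975*se_pred), np.exp(b0 + b1*x0 + t975*se_pred)
print(f"point estimate {point:.1f} kg, 95% PI [{lo:.1f}, {hi:.1f}] kg")
```

Back-transforming the interval endpoints keeps the interval valid for the median prediction; bias corrections for the mean are a further refinement the paper discusses.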
Morales P, J.R.; Avila P, P.
1996-01-01
If we consider the maximum permissible levels shown for the case of oysters, collecting oysters must be forbidden at the four stations of the El Chijol Channel (Veracruz, Mexico), as well as along the channel itself, because the concentrations of the metals studied exceed these limits. In this case the application of Welch tests was not necessary. For the water hyacinth, the treatment means were unequal for Fe, Cu, Ni, and Zn. This case is more illustrative, for the conclusion was reached through the application of Welch tests to treatments with heterogeneous variances. (Author)
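A minimal illustration of why Welch's procedure suits treatments with heterogeneous variances (synthetic concentrations; the site labels and values are assumptions, not the study's data):

```python
import numpy as np
from scipy import stats

# Welch's t-test does not assume equal variances, unlike the pooled t-test,
# which is why it is appropriate when treatment variances are heterogeneous.
rng = np.random.default_rng(1)
site_a = rng.normal(5.0, 0.5, 30)   # e.g. Fe concentration, low variance
site_b = rng.normal(8.0, 2.0, 30)   # higher mean AND much higher variance

t_welch, p_welch = stats.ttest_ind(site_a, site_b, equal_var=False)  # Welch
t_pooled, p_pooled = stats.ttest_ind(site_a, site_b, equal_var=True) # pooled
print(f"Welch: t={t_welch:.2f} p={p_welch:.4f}; pooled: t={t_pooled:.2f} p={p_pooled:.4f}")
```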
The spatial variance of hill slope erosion in Loess Hilly Area by 137Cs tracing method
Li Mian; Yang Jianfeng; Shen Zhenzhou; Hou Jiancai
2009-01-01
Based on analysis of 137Cs activities in soil profiles on hill slopes of different lengths in the Loess Hilly Area of China, the spatial variance of erosion was studied. The results show that slope length has a great impact on the spatial distribution of soil erosion intensity, and that the erosion intensity on loess hill slopes fluctuates along the slope. In the runoff influx process in a small watershed, net soil loss intensity first increased and then decreased with flow distance. (authors)
Variance component estimation with longitudinal data: a simulation study with alternative methods
Simone Inoe Araujo
2009-01-01
Full Text Available. A pedigree structure distributed in three different places was generated. For each offspring, phenotypic information was generated for five different ages (12, 30, 48, 66 and 84 months). The data file was simulated allowing some information to be lost (10, 20, 30 and 40%) by a random process and by selecting the animals with lower phenotypic values, representing the selection effect. Three alternative analyses were used: the repeatability model, the random regression model and the multiple-trait model. Random regression proved more adequate for continually describing the covariance structure of growth over time than the single-trait and repeatability models, when correlations between successive measurements on the same individual differed from one another. Without selection, the random regression and multiple-trait models were very similar.
Orth, Ulrich
2013-10-01
Previous research suggests that the personality of a relationship partner predicts not only the individual's own satisfaction with the relationship but also the partner's satisfaction. Based on the actor-partner interdependence model, the present research tested whether actor and partner effects of personality are biased when the same method (e.g., self-report) is used for the assessment of personality and relationship satisfaction and, consequently, shared method variance is not controlled for. Data came from 186 couples, of whom both partners provided self- and partner reports on the Big Five personality traits. Depending on the research design, actor effects were larger than partner effects (when using only self-reports), smaller than partner effects (when using only partner reports), or of about the same size as partner effects (when using self- and partner reports). The findings attest to the importance of controlling for shared method variance in dyadic data analysis.
Method of formation of thin film component
Wada, Chikara; Kato, Kinya
1988-04-16
In the production process of components carrying thin film devices, such as thin film transistors, acid treatment is applied for etching or for preventing contamination. In the case of a barium borosilicate glass base, the base is affected by the acid treatment, resulting in decreased transparency. To avoid this effect, deposition of an SiO2 layer on the surface of the base is usually applied. This invention relates to a method of protecting the barium borosilicate surface by harnessing the effect of coexisting ions in the acid treatment bath. The method is to add 0.03-5 mol/l of phosphoric acid or its salt to the bath. By the effect of the coexisting ions, the barium borosilicate glass surface is protected from damage. (2 figs)
Etzel, C J; Shete, S; Beasley, T M; Fernandez, J R; Allison, D B; Amos, C I
2003-01-01
Non-normality of the phenotypic distribution can affect power to detect quantitative trait loci in sib pair studies. Previously, we observed that Winsorizing the sib pair phenotypes increased the power of quantitative trait locus (QTL) detection for both Haseman-Elston (H-E) least-squares tests [Hum Hered 2002;53:59-67] and maximum likelihood-based variance components (MLVC) analysis [Behav Genet (in press)]. Winsorizing the phenotypes led to a slight increase in type I error in H-E tests and a slight decrease in type I error for MLVC analysis. Herein, we considered transforming the sib pair phenotypes using the Box-Cox family of transformations. Data were simulated for normal and non-normal (skewed and kurtic) distributions. Phenotypic values were replaced by Box-Cox transformed values. Twenty thousand replications were performed for three H-E tests of linkage and the likelihood ratio test (LRT), the Wald test and other robust versions based on the MLVC method. We calculated the relative nominal inflation rate as the ratio of the observed empirical type I error divided by the set alpha level (5, 1 and 0.1% alpha levels). MLVC tests applied to non-normal data had inflated type I errors (rate ratio greater than 1.0), which were controlled best by Box-Cox transformation and to a lesser degree by Winsorizing. For example, for non-transformed, skewed phenotypes (derived from a chi-squared distribution with 2 degrees of freedom), the rates of empirical type I error with respect to the set alpha level of 0.01 were 0.80, 4.35 and 7.33 for the original H-E test, LRT and Wald test, respectively. For the same alpha level of 0.01, these rates were 1.12, 3.095 and 4.088 after Winsorizing and 0.723, 1.195 and 1.905 after Box-Cox transformation. Winsorizing reduced inflated error rates for the leptokurtic distribution (derived from a Laplace distribution with mean 0 and variance 8). Further, power (adjusted for empirical type I error) at the 0.01 alpha level ranged from 4.7 to 17.3% across all tests
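The Box-Cox step can be sketched as follows (illustrative only; a chi-squared phenotype with 2 degrees of freedom as in the paper's example, with lambda chosen by maximum likelihood via SciPy rather than the authors' pipeline):

```python
import numpy as np
from scipy import stats

# Normalizing a right-skewed phenotype with the Box-Cox transform before a
# variance-components style analysis. Box-Cox requires strictly positive data.
rng = np.random.default_rng(2)
pheno = rng.chisquare(df=2, size=2000)        # skewed, like the paper's example

transformed, lam = stats.boxcox(pheno)        # lambda estimated by max likelihood
skew_before = stats.skew(pheno)
skew_after = stats.skew(transformed)
print(f"lambda = {lam:.2f}, skew before = {skew_before:.2f}, after = {skew_after:.2f}")
```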
A practical look at Monte Carlo variance reduction methods in radiation shielding
Olsher, Richard H. [Los Alamos National Laboratory, Los Alamos (United States)
2006-04-15
With the advent of inexpensive computing power over the past two decades, applications of Monte Carlo radiation transport techniques have proliferated dramatically. At Los Alamos, the Monte Carlo codes MCNP5 and MCNPX are used routinely on personal computer platforms for radiation shielding analysis and dosimetry calculations. These codes feature a rich palette of Variance Reduction (VR) techniques. The motivation of VR is to exchange user efficiency for computational efficiency. It has been said that a few hours of user time often reduces computational time by several orders of magnitude. Unfortunately, user time can stretch into many hours, as most VR techniques require significant user experience and intervention for proper optimization. It is the purpose of this paper to outline VR strategies, tested in practice, optimized for several common radiation shielding tasks, with the hope of reducing user setup time for similar problems. A strategy is defined in this context to mean a collection of MCNP radiation transport physics options and VR techniques that work synergistically to optimize a particular shielding task. Examples are offered in the areas of source definition, skyshine, streaming, and transmission.
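The core trade the abstract describes, exchanging user effort for computational efficiency, can be shown with a toy importance-sampling example (not MCNP; the attenuation model and the biasing parameter are assumptions chosen for illustration):

```python
import numpy as np

# Toy variance-reduction illustration: estimate the probability that an
# exponentially distributed path length exceeds 10 mean free paths,
# p = exp(-10) ~ 4.5e-5, far in the tail of the analog distribution.
rng = np.random.default_rng(3)
n = 100_000
t = 10.0

# Analog Monte Carlo: almost no samples reach the tail.
x = rng.exponential(1.0, n)
p_analog = np.mean(x > t)

# Importance sampling: draw from a stretched exponential (mean 10) and
# reweight by the likelihood ratio f(x)/g(x) -- the analogue of biasing
# particle transport toward the region of interest.
x_b = rng.exponential(10.0, n)
w = np.exp(-x_b) / (np.exp(-x_b / 10.0) / 10.0)
p_is = np.mean((x_b > t) * w)

print(f"true {np.exp(-t):.3e}  analog {p_analog:.3e}  importance {p_is:.3e}")
```

With the same sample budget, the biased estimator resolves the tail probability to about 1% relative error while the analog estimator sees only a handful of tail events.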
K M Mohammed
2018-01-01
Full Text Available. Objective: To study the genetic and non-genetic factors and their interactions affecting growth rate and body weights at birth, weaning and 6 months of age in Saudi Ardi goats, Damascus goats and their crosses. Methods: A crossbreeding program between Saudi Ardi (A) and Damascus (D) goats was carried out to improve the meat productivity of Ardi goats. The pedigree records of the body weights were obtained from 754 kids (397 males and 357 females) produced from 46 sires and 279 dams. Birth weight, weaning weight and 6-month weight, as well as average daily gain during the different growth stages, from birth to weaning (D1), weaning to 6 months (D2) and birth to 6 months of age (D3), were recorded during winter/autumn and summer/spring. Data were classified according to breed, generation, sex, season, year, and type of birth. Data were analyzed using the GLM procedure for the least-squares means of the fixed factors. Heritability and genetic parameters were estimated with derivative-free restricted maximum likelihood procedures using the MTDFREML program. Results: The percentages of variation were moderate for body weights and high for daily gains. Genetic groups had a highly significant (P<0.01) effect on the body weight traits. Damascus goats had higher (P<0.01) birth and weaning weights, but ½D½A kids had a higher (P<0.01) body weight at 6 months. The genetic groups had significant effects on the daily weight gains for the D1 (P<0.01) and D3 (P<0.05) periods, whereas they had no effect on the D2 period. The fixed effects of sex, season, year and type of birth showed significant differences for body weights. Male kids were heavier (P<0.01) than females at the different growth stages. Body weights and daily gains during winter/autumn were significantly higher (P<0.01) than during summer/spring. Kids born and raised as singles were significantly (P<0.01) heavier than those born as twins or triplets. The genetic and phenotypic correlations between birth
Anomaly detection in OECD Benchmark data using co-variance methods
Srinivasan, G.S.; Krinizs, K.; Por, G.
1993-02-01
OECD Benchmark data distributed for the SMORN VI Specialists Meeting on Reactor Noise were investigated for anomaly detection in artificially generated reactor noise benchmark analysis. It was observed that statistical features extracted from the covariance matrix of frequency components are very sensitive in terms of the anomaly detection level. It is possible to create well-defined alarm levels. (R.P.) 5 refs.; 23 figs.; 1 tab
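One hedged reading of covariance-based anomaly detection with alarm levels, on synthetic features rather than the benchmark data (the feature dimensions and the 99% alarm quantile are assumptions):

```python
import numpy as np

# Learn the covariance of spectral features under normal operation, then
# flag records whose Mahalanobis distance exceeds a fixed alarm level.
rng = np.random.default_rng(4)
normal = rng.multivariate_normal([0, 0, 0], np.diag([1.0, 2.0, 0.5]), 500)

mu = normal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))

def mahalanobis2(x):
    """Squared Mahalanobis distance of a feature vector from normal operation."""
    d = x - mu
    return float(d @ cov_inv @ d)

# Alarm level: 99th percentile of the score under normal conditions.
alarm = np.quantile([mahalanobis2(x) for x in normal], 0.99)
anomaly = np.array([6.0, -6.0, 3.0])          # hypothetical shifted record
score = mahalanobis2(anomaly)
print(f"alarm level {alarm:.1f}, anomaly score {score:.1f}")
```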
A variance-reduced electrothermal Monte Carlo method for semiconductor device simulation
Muscato, Orazio; Di Stefano, Vincenza [Univ. degli Studi di Catania (Italy). Dipt. di Matematica e Informatica; Wagner, Wolfgang [Weierstrass-Institut fuer Angewandte Analysis und Stochastik (WIAS) Leibniz-Institut im Forschungsverbund Berlin e.V., Berlin (Germany)
2012-11-01
This paper is concerned with electron transport and heat generation in semiconductor devices. An improved version of the electrothermal Monte Carlo method is presented. This modification has better approximation properties due to reduced statistical fluctuations. The corresponding transport equations are provided and results of numerical experiments are presented.
Blazquez, J.; Montalvo, C.; Balbas, M.; Garcia-Berrocal, A.
2011-07-01
Traditional techniques for the propagation of variance are not very reliable, because there are uncertainties of 100% relative value; for this reason, less conventional methods are used, such as the Beta distribution, fuzzy logic and the Monte Carlo method.
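A minimal sketch of the Monte Carlo alternative to traditional variance propagation (the model y = a*b and the input uncertainties are assumptions for illustration):

```python
import numpy as np

# Monte Carlo propagation of variance: sample the uncertain inputs, push
# them through the model, and read the output spread directly, instead of
# relying on a first-order variance-propagation formula.
rng = np.random.default_rng(5)
n = 200_000

a = rng.normal(10.0, 3.0, n)       # input with 30% relative uncertainty
b = rng.normal(5.0, 2.0, n)        # input with 40% relative uncertainty
y = a * b                          # model output

# First-order formula for a product: var ~ (sd_a*mu_b)^2 + (mu_a*sd_b)^2,
# which drops the cross term and underestimates the true spread here.
sd_linear = np.sqrt((3.0 * 5.0)**2 + (10.0 * 2.0)**2)
print(f"MC mean {y.mean():.1f}, MC sd {y.std():.1f}, first-order sd {sd_linear:.1f}")
```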
Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander
2013-01-01
The Nordic Holstein yield evaluation model describes all available milk, protein and fat test-day yields from Denmark, Finland and Sweden. In its current form all variance components are estimated from observations recorded under conventional milking systems, and the model for heterogeneity of variance correction is developed for the same observations. As automated milking systems are becoming more popular, the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study, different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield the same genetic...
Method of using infrared radiation for assembling a first component with a second component
Sikka, Vinod K.; Whitson, Barry G.; Blue, Craig A.
1999-01-01
A method of assembling a first component for assembly with a second component involves a heating device which includes an enclosure having a cavity for inserting a first component. An array of infrared energy generators is disposed within the enclosure. At least a portion of the first component is inserted into the cavity, exposed to infrared energy and thereby heated to a temperature wherein the portion of the first component is sufficiently softened and/or expanded for assembly with a second component.
Study To Build Method For Analyzing Some Component Of Airborne Which Cause Respiratory Disease
Vo Thi Anh; Nguyen Thuy Binh; Vuong Thu Bac; Ha Lan Anh; Nguyen Hong Thinh; Duong Van Thang; Nguyen Mai Anh; Vo Tuong Hanh
2013-01-01
An aerosol sampler was located at the top of the three-story building of INST. The amounts of PM particles and components such as black carbon, chemical elements, volatile organic compounds and microorganisms are analyzed by appropriate methods. Regression and analysis of variance (ANOVA) were used to find correlations between these pollution components and patients treated at the Department of Respiratory Medicine in Hanoi E-Hospital. It was shown that microorganisms, benzene, toluene, elemental sulfur and elemental silica affect the monthly number of patients treated for respiratory diseases at the E-Hospital. (author)
A balancing method for calculating a component RAW involving CCF
Kim, K.; Kang, D.; Yang, J.E.
2004-01-01
In this paper, a method called the 'Balancing Method' for deriving a component RAW (Risk Achievement Worth) from basic event RAWs, including a CCF (Common Cause Failure) RAW, is summarized and compared with the method proposed by the NEI (Nuclear Energy Institute) by mathematically checking the background on which the two methods are based. It is proved that the Balancing Method has a strong mathematical background. While the NEI method significantly underestimates the component RAW and is somewhat ad hoc in handling the CCF RAW, the Balancing Method estimates the true component RAW very closely. The validity of the Balancing Method rests on the fact that if a component is out-of-service, it does not mean that the component is non-existent; rather, the method integrates the possibility that the component might fail due to CCF. The validity of the Balancing Method is proved by comparing it to the exact component RAW generated from the fault tree model
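The RAW concept the paper builds on can be sketched on a toy two-pump fault tree (hypothetical failure probabilities; this is not the Balancing Method itself, only the basic-event RAW definition it starts from):

```python
# RAW of a basic event is P(top | event certain) / P(top). Here two
# redundant pumps fail the system if both fail independently or a common
# cause failure (CCF) takes out both at once.
P_A, P_B, P_CCF = 0.01, 0.01, 0.001   # hypothetical failure probabilities

def p_top(pa, pb, pccf):
    # Top = (A and B) or CCF, with independence assumed between the events
    return pa * pb + pccf - pa * pb * pccf

base = p_top(P_A, P_B, P_CCF)
raw_A = p_top(1.0, P_B, P_CCF) / base       # basic-event RAW for pump A
raw_CCF = p_top(P_A, P_B, 1.0) / base       # RAW of the CCF event
print(f"P(top)={base:.2e}, RAW(A)={raw_A:.1f}, RAW(CCF)={raw_CCF:.1f}")
```

The paper's point is precisely that combining such basic-event RAWs into a *component* RAW must keep the CCF contribution in play when the component is taken out of service, which is what the Balancing Method formalizes.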
Method to chemically decontaminate nuclear reactor components
Bertholdt, H.O.
1984-01-01
Large-scale decontamination of primary circuit components, removing activated corrosion products from the oxide layer of the structural materials, first involves an approximately 1-hour oxidation treatment with an alkaline permanganate solution. Following intermediate rinsing with deionized water, the components are etched with an inhibited citrate-oxalate solution for 5-20 hours. This is followed by post-treatment with a citric acid/H2O2 solution containing suspended fiber particles. (orig./PW)
Reddy, L Ram Gopal; Kuntamalla, Srinivas
2011-01-01
Heart rate variability analysis is fast gaining acceptance as a potential non-invasive means of autonomic nervous system assessment in research as well as clinical domains. In this study, a new nonlinear analysis method is used to detect the degree of nonlinearity and the stochastic nature of heart rate variability signals during two forms of meditation (Chi and Kundalini). The data obtained from an online and widely used public database (i.e., the MIT/BIH PhysioNet database) are used in this study. The method used is the delay vector variance (DVV) method, which is a unified method for detecting the presence of determinism and nonlinearity in a time series and is based upon the examination of local predictability of a signal. From the results it is clear that there is a significant change in the nonlinearity and stochastic nature of the signal before and during the meditation (p value > 0.01). During Chi meditation there is an increase in the stochastic nature and a decrease in the nonlinear nature of the signal. There is a significant decrease in the degree of nonlinearity and stochastic nature during Kundalini meditation.
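A simplified sketch of the DVV idea (the embedding dimension, span grid, and 30-neighbour threshold are assumptions; the published DVV method adds surrogate-data comparisons that are omitted here):

```python
import numpy as np

# Delay-vector-variance sketch: embed the series, gather for each delay
# vector its neighbours within a distance span, and measure how well
# neighbours predict the next sample (normalized target variance). Low
# values at small spans indicate local predictability, i.e. determinism.
def dvv(signal, m=3, n_spans=10):
    x = (signal - signal.mean()) / signal.std()
    emb = np.array([x[i:i + m] for i in range(len(x) - m)])
    tgt = x[m:]                                   # one-step-ahead targets
    d = np.sqrt(((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1))
    spans = np.linspace(d.mean() - d.std(), d.mean() + d.std(), n_spans)
    out = []
    for r in spans:
        var_sets = []
        for i in range(len(emb)):
            nbr = tgt[d[i] < r]                   # targets of nearby vectors
            if nbr.size >= 30:                    # skip unreliable small sets
                var_sets.append(nbr.var())
        out.append(np.mean(var_sets) / tgt.var() if var_sets else np.nan)
    return np.array(out)

rng = np.random.default_rng(6)
noise_dvv = dvv(rng.normal(size=400))             # stochastic signal: stays near 1
det_dvv = dvv(np.sin(0.2 * np.arange(400)))       # deterministic signal: dips low
print(np.nanmin(noise_dvv), np.nanmin(det_dvv))
```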
Ezzati, A.O.; Sohrabpour, M.
2013-01-01
In this study, azimuthal particle redistribution (APR) and azimuthal particle rotational splitting (APRS) methods are implemented in the MCNPX 2.4 source code. First, the efficiency of these methods was compared for two tallying methods. The APRS is more efficient than the APR method in track length estimator tallies; in the energy deposition tally, however, both methods have nearly the same efficiency. Latent variance reduction factors were obtained for 6, 10 and 18 MV photons as well. The APRS relative efficiency contours were obtained. These contours reveal that with increasing photon energy, the contour depth and the surrounding areas increase further. The relative efficiency contours indicated that the variance reduction factor is position and energy dependent. The out-of-field voxel relative efficiency contours showed that the latent variance reduction methods increased the Monte Carlo (MC) simulation efficiency in the out-of-field voxels. The APR and APRS average variance reduction factors differed by less than 0.6% for a splitting number of 1000. Highlights: (1) The efficiency of the APR and APRS methods was compared for two tallying methods. (2) The APRS is more efficient than the APR method in track length estimator tallies. (3) In the energy deposition tally, both methods have nearly the same efficiency. (4) Variance reduction factors of these methods are position and energy dependent.
Software Components and Formal Methods from a Computational Viewpoint
Lambertz, Christian
2012-01-01
Software components and the methodology of component-based development offer a promising approach to master the design complexity of huge software products because they separate the concerns of software architecture from individual component behavior and allow for reusability of components. In combination with formal methods, the specification of a formal component model of the later software product or system allows for establishing and verifying important system properties in an automatic a...
Approximation errors during variance propagation
Dinsmore, Stephen
1986-01-01
Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurrence are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated, and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given.
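The error source can be made concrete for a single AND gate (the input moments are hypothetical; the exact product-variance formula holds for independent inputs):

```python
# First-order (Taylor) propagation of the inputs' variance to an AND-gate
# top event P = p1 * p2, compared against the exact variance of a product
# of independent random variables. The first-order result drops the v1*v2
# cross term and therefore underestimates the true variance.
mu1, v1 = 0.10, 0.05**2    # mean and variance of basic-event probability 1
mu2, v2 = 0.20, 0.10**2    # mean and variance of basic-event probability 2

var_taylor = (mu2**2) * v1 + (mu1**2) * v2                   # first-order
var_exact = (v1 + mu1**2) * (v2 + mu2**2) - (mu1 * mu2)**2   # exact, independent

print(f"first-order {var_taylor:.2e}  exact {var_exact:.2e}")
print(f"underestimate by {(1 - var_taylor / var_exact) * 100:.1f}%")
```

The relative error grows with the input variances, which is why the paper maps it over a wide range of input means and variances.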
Conte, Elio [Department of Pharmacology and Human Physiology and Tires, Center for Innovative Technologies for Signal Detection and Processing, University of Bari, Bari (Italy); School of Advanced International Studies on Nuclear, Theoretical and Nonlinear Methodologies-Bari (Italy)], E-mail: fisio2@fisiol.uniba.it; Federici, Antonio [Department of Pharmacology and Human Physiology and Tires, Center for Innovative Technologies for Signal Detection and Processing, University of Bari, Bari (Italy); Zbilut, Joseph P. [Department of Molecular Biophysics and Physiology, Rush University Medical Center, 1653W Congress, Chicago, IL 60612 (United States)
2009-08-15
It is known that R-R time series calculated from a recorded ECG are strongly correlated with sympathetic and vagal regulation of the sinus pacemaker activity. In human physiology it is a crucial question to estimate such components with accuracy. Fourier analysis still dominates the analysis of such data today, ignoring that the FFT is valid only under crucial restrictions, such as linearity and stationarity, that are largely violated in R-R time series data. In order to go beyond this approach, we introduce a new method, called CZF. It is based on variogram analysis. It stems from a profound link with Recurrence Quantification Analysis, a basic tool for the investigation of nonlinear and nonstationary time series. A relevant feature of the method is therefore that it may also be applied to nonlinear and nonstationary time series analysis. In addition, the method enables analysis of the fractal variance function, the Generalized Fractal Dimension and, finally, the relative probability density function of the data. The CZF gives very satisfactory results. In the present paper it has been applied to direct experimental cases of normal subjects, patients with hypertension before and after therapy, and children under different conditions of experimentation.
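The variogram that CZF builds on can be sketched as follows (synthetic signals; this is the standard empirical semivariogram, not the authors' full CZF implementation):

```python
import numpy as np

# Empirical variogram of a time series: gamma(h) = 0.5 * mean_t (x[t+h] - x[t])^2.
# Unlike the FFT, it makes no stationarity assumption: for white noise it is
# flat near the variance, while for a non-stationary random walk it grows with lag.
def variogram(x, max_lag):
    return np.array([0.5 * np.mean((x[h:] - x[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])

rng = np.random.default_rng(7)
white = rng.normal(size=5000)              # stationary, unit variance
walk = np.cumsum(rng.normal(size=5000))    # non-stationary random walk

g_white = variogram(white, 20)
g_walk = variogram(walk, 20)
print(g_white[:3].round(2), g_walk[:3].round(2))
```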
Jaksic, Vesna; Mandic, Danilo P.; Karoumi, Raid; Basu, Bidroha; Pakrashi, Vikram
2016-01-01
Analysis of the variability in the responses of large structural systems and quantification of their linearity or nonlinearity as a potential non-invasive means of structural system assessment from output-only condition remains a challenging problem. In this study, the Delay Vector Variance (DVV) method is used for full scale testing of both pseudo-dynamic and dynamic responses of two bridges, in order to study the degree of nonlinearity of their measured response signals. The DVV detects the presence of determinism and nonlinearity in a time series and is based upon the examination of local predictability of a signal. The pseudo-dynamic data is obtained from a concrete bridge during repair while the dynamic data is obtained from a steel railway bridge traversed by a train. We show that DVV is promising as a marker in establishing the degree to which a change in the signal nonlinearity reflects the change in the real behaviour of a structure. It is also useful in establishing the sensitivity of instruments or sensors deployed to monitor such changes.
Dual phase magnetic material component and method of forming
Dial, Laura Cerully; DiDomizio, Richard; Johnson, Francis
2017-04-25
A magnetic component having intermixed first and second regions, and a method of preparing that magnetic component are disclosed. The first region includes a magnetic phase and the second region includes a non-magnetic phase. The method includes mechanically masking pre-selected sections of a surface portion of the component by using a nitrogen stop-off material and heat-treating the component in a nitrogen-rich atmosphere at a temperature greater than about 900°C. Both the first and second regions are substantially free of carbon, or contain only limited amounts of carbon; and the second region includes greater than about 0.1 weight % of nitrogen.
Bouwman, R; Broeders, M; Van Engen, R; Young, K; Lazzari, B; Ravaglia, V
2009-01-01
According to the European Guidelines for quality assured breast cancer screening and diagnosis, noise analysis is one of the measurements that need to be performed as part of quality control procedures on digital mammography systems. However, the method recommended in the European Guidelines does not discriminate sufficiently between systems with and without additional noise besides quantum noise. This paper attempts to give an alternative and relatively simple method for noise analysis which can divide noise into electronic noise, structured noise and quantum noise. Quantum noise needs to be the dominant noise source in clinical images for optimal performance of a digital mammography system, and therefore the amount of electronic and structured noise should be minimal. For several digital mammography systems, the noise was separated into components based on the measured pixel value, the standard deviation (SD) of the image and the detector entrance dose. The results showed that differences between systems exist. Our findings confirm that the proposed method is able to discriminate systems based on their noise performance and is able to detect possible quality problems. Therefore, we suggest replacing the current method for noise analysis described in the European Guidelines with the alternative method described in this paper.
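The separation idea can be sketched as a quadratic fit of image variance against detector entrance dose (all numbers synthetic and noiseless for clarity; the real procedure fits measured SDs, and the component model is a common assumption rather than the paper's exact formulation):

```python
import numpy as np

# Total image variance as a function of detector entrance dose D decomposes as
# SD^2 = e^2 + q^2*D + s^2*D^2: electronic noise is dose-independent, quantum
# noise variance grows linearly with dose, structured noise grows with dose
# itself. A quadratic fit in D recovers the three components.
dose = np.array([10., 25., 50., 100., 200., 400.])   # entrance doses, synthetic
e2, q2, s2 = 4.0, 0.9, 0.004                          # assumed true components
sd2 = e2 + q2 * dose + s2 * dose**2                   # noiseless for clarity

coef = np.polyfit(dose, sd2, 2)                       # returns [s2, q2, e2]
print(f"structured {coef[0]:.4f}, quantum {coef[1]:.3f}, electronic {coef[2]:.2f}")
```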
Multilayer electronic component systems and methods of manufacture
Thompson, Dane (Inventor); Wang, Guoan (Inventor); Kingsley, Nickolas D. (Inventor); Papapolymerou, Ioannis (Inventor); Tentzeris, Emmanouil M. (Inventor); Bairavasubramanian, Ramanan (Inventor); DeJean, Gerald (Inventor); Li, RongLin (Inventor)
2010-01-01
Multilayer electronic component systems and methods of manufacture are provided. In this regard, an exemplary system comprises a first layer of liquid crystal polymer (LCP), first electronic components supported by the first layer, and a second layer of LCP. The first layer is attached to the second layer by thermal bonds. Additionally, at least a portion of the first electronic components are located between the first layer and the second layer.
Blonigen, Daniel M.; Patrick, Christopher J.; Douglas, Kevin S.; Poythress, Norman G.; Skeem, Jennifer L.; Lilienfeld, Scott O.; Edens, John F.; Krueger, Robert F.
2010-01-01
Research to date has revealed divergent relations across factors of psychopathy measures with criteria of "internalizing" (INT; anxiety, depression) and "externalizing" (EXT; antisocial behavior, substance use). However, failure to account for method variance and suppressor effects has obscured the consistency of these findings…
Fletcher, B. C.
1972-01-01
The critical point of any Bayesian analysis concerns the choice and quantification of the prior information. The effects of prior data on a Bayesian analysis are studied. Comparisons of the maximum likelihood estimator, the Bayesian estimator, and the known failure rate are presented. The results of the many simulated trials are then analyzed to show the region of criticality for prior information being supplied to the Bayesian estimator. In particular, effects of prior mean and variance are determined as a function of the amount of test data available.
Fundamentals of exploratory analysis of variance
Hoaglin, David C; Tukey, John W
2009-01-01
The analysis of variance is presented as an exploratory component of data analysis, while retaining the customary least squares fitting methods. Balanced data layouts are used to reveal key ideas and techniques for exploration. The approach emphasizes both the individual observations and the separate parts that the analysis produces. Most chapters include exercises, and the appendices give selected percentage points of the Gaussian, t, F, chi-squared, and studentized range distributions.
Training Methods for Image Noise Level Estimation on Wavelet Components
A. De Stefano
2004-12-01
The estimation of the standard deviation of noise contaminating an image is a fundamental step in wavelet-based noise reduction techniques. The method widely used is based on the mean absolute deviation (MAD). This model-based method assumes specific characteristics of the noise-contaminated image component. Three novel and alternative methods for estimating the noise standard deviation are proposed in this work and compared with the MAD method. Two of these methods rely on a preliminary training stage in order to extract parameters which are then used in the application stage. The sets used for training and testing, 13 and 5 images, respectively, are fully disjoint. The third method assumes specific statistical distributions for image and noise components. Results showed the prevalence of the training-based methods for the images and the range of noise levels considered.
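A minimal sketch of the MAD baseline the paper compares against: the noise standard deviation is estimated from the diagonal (HH) detail coefficients of a one-level wavelet transform as median(|HH|)/0.6745, where 0.6745 is the MAD of a standard Gaussian. A one-level orthonormal Haar transform is hand-rolled here to keep the example dependency-free; a real pipeline would typically use a wavelet library.

```python
import numpy as np

def haar_hh(img):
    """Diagonal (HH) detail coefficients of a one-level orthonormal Haar transform."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    return (a - b - c + d) / 2.0

def mad_sigma(img):
    """MAD-based estimate of the Gaussian noise standard deviation."""
    hh = haar_hh(img)
    return np.median(np.abs(hh)) / 0.6745

# Pure-noise image with known sigma = 10: the estimate should land close to it.
rng = np.random.default_rng(0)
noisy = rng.normal(0.0, 10.0, size=(256, 256))
sigma_hat = mad_sigma(noisy)
```

The median makes the estimator robust to the sparse large coefficients contributed by image edges, which is why the HH subband and the MAD (rather than the sample SD) are used.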
Multistage principal component analysis based method for abdominal ECG decomposition
Petrolis, Robertas; Krisciukaitis, Algimantas; Gintautas, Vladas
2015-01-01
Reflection of fetal heart electrical activity is present in registered abdominal ECG signals. However, this signal component has noticeably less energy than concurrent signals, especially the maternal ECG. Therefore the traditionally recommended independent component analysis fails to separate these two ECG signals. Multistage principal component analysis (PCA) is proposed for step-by-step extraction of abdominal ECG signal components. Truncated representation and subsequent subtraction of cardio cycles of the maternal ECG are the first steps. The energy of the fetal ECG component then becomes comparable to, or even exceeds, the energy of other components in the remaining signal. Second-stage PCA concentrates the energy of the sought signal in one principal component, assuring its maximal amplitude regardless of the orientation of the fetus in multilead recordings. Third-stage PCA is performed on signal excerpts representing detected fetal heart beats, with the aim of producing a truncated representation that reconstructs their shape for further analysis. The algorithm was tested with PhysioNet Challenge 2013 signals and signals recorded in the Department of Obstetrics and Gynecology, Lithuanian University of Health Sciences. Results of our method on the PhysioNet Challenge 2013 open data set were: average score: 341.503 bpm² and 32.81 ms.
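A minimal sketch of the building block used in the first stage above: a truncated (rank-k) principal-component representation of aligned signal excerpts, computed via SVD. Subtracting the reconstruction from the data leaves a residual in which weaker components (such as a fetal ECG hidden under the maternal one) become relatively stronger. The synthetic "cycles" below merely stand in for aligned maternal cardio cycles.

```python
import numpy as np

def truncated_pca(X, k):
    """Rank-k PCA reconstruction of the rows of X (observations x samples)."""
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k] + mu

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
strong = np.sin(2 * np.pi * 5 * t)            # stand-in for the dominant (maternal) waveform
cycles = np.array([a * strong for a in rng.uniform(0.8, 1.2, 30)])

# Subtracting the rank-1 representation removes the dominant component almost entirely.
residual = cycles - truncated_pca(cycles, 1)
```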
Benefits of balancing method for component RAW importance measure
Kim, Kil Yoo; Yang, Joon Eon
2005-01-01
In the Risk Informed Regulation and Applications (RIR and A), the determination of risk significant Structures, Systems and Components (SSCs) plays an important role, and importance measures such as Fussell-Vesely (FV) and RAW (Risk Achievement Worth) are widely used in the determination of risk significant SSCs. For example, in the Maintenance Rule, Graded Quality Assurance (GQA) and Option 2, FV and RAW are used in the categorization of SSCs. Especially in GQA and Option 2, the number of SSCs to be categorized is too large to handle individually, so the FVs and RAWs of the components are in practice derived from those of the basic events, which have already been acquired as PSA (Probabilistic Safety Assessment) results, instead of by reevaluating the fault tree/event tree of the PSA model. That is, the group FVs and RAWs for the components are derived from the FVs and RAWs of the basic events which constitute the group. Here, the basic events include random failure, Common Cause Failure (CCF), test and maintenance, etc., which make the system unavailable. A method called the 'Balancing Method', which can practically and correctly derive the component RAW from the basic event FVs and RAWs even if CCFs exist as basic events, was introduced in Ref. Moreover, the Balancing Method has another advantage: it can also fairly accurately derive the component RAW using the fault tree, without using the basic event FVs and RAWs
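A minimal sketch of the two importance measures discussed above, evaluated on a toy risk model (two redundant trains A and B whose joint failure causes the top event), not the paper's Balancing Method itself. The standard definitions are FV_i = (R − R(q_i = 0))/R and RAW_i = R(q_i = 1)/R, where R is the baseline risk and q_i the unavailability of component i.

```python
def risk(qa, qb):
    """Top-event probability of a toy two-train parallel (redundant) system."""
    return qa * qb

def fussell_vesely(r_base, r_without):
    """Fraction of baseline risk involving the component (set its q to 0)."""
    return (r_base - r_without) / r_base

def raw(r_failed, r_base):
    """Risk Achievement Worth: risk ratio with the component assumed failed (q = 1)."""
    return r_failed / r_base

qa, qb = 0.01, 0.02
r = risk(qa, qb)
fv_a = fussell_vesely(r, risk(0.0, qb))   # every cut set contains A, so FV = 1
raw_a = raw(risk(1.0, qb), r)             # = 1/qa for this toy model
```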
Means and Variances without Calculus
Kinney, John J.
2005-01-01
This article gives a method of finding discrete approximations to continuous probability density functions and shows examples of its use, allowing students without calculus access to the calculation of means and variances.
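A minimal sketch of the article's idea: replace a continuous density f(x) by probability masses on a fine grid, then apply the ordinary discrete formulas for the mean and variance. Shown for the density f(x) = 2x on [0, 1], whose exact mean is 2/3 and exact variance 1/18; the grid size is an arbitrary illustrative choice.

```python
import numpy as np

# 1000 equal bins on [0, 1]; put each bin's probability mass at its midpoint.
w = 0.001                                  # bin width
x = np.linspace(0.0005, 0.9995, 1000)      # bin midpoints
p = 2 * x * w                              # mass per bin under f(x) = 2x
p = p / p.sum()                            # renormalise so the masses sum to 1

# Discrete formulas only -- no integration required.
mean = np.sum(p * x)
var = np.sum(p * (x - mean) ** 2)
```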
Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto
2017-03-01
Acid rain causes many harmful effects. It is formed by two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where sulfuric acid is derived from SO2 and nitric acid from NOx (x = 1, 2). The purpose of the research is to find out the influence of SO4 and NO3 levels contained in the rain on the acidity (pH) of rainwater. The data are incomplete panel data with a two-way error component model. Panel data are a collection of observations recorded over time; they are said to be incomplete if individuals have different numbers of observations. The model used in this research is a random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the variance error components, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 - 0.00107302X1 + 0.00215470X2.
16 CFR 1509.6 - Component-spacing test method.
2010-01-01
... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Component-spacing test method. 1509.6 Section 1509.6 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FEDERAL HAZARDOUS SUBSTANCES ACT... applied to the wedge perpendicular to the plane of the crib side. ...
Introduction to variance estimation
Wolter, Kirk M
2007-01-01
We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who...
Lillhök, J.; Persson, L.; Andersen, Claus E.
2017-01-01
, the dose-average lineal energy, the dose-average quality factor and the dose equivalent. The neutron component measured by the detectors at the proton beam was studied through Monte Carlo simulations using the code MCNP6. In the photon beam the stray absorbed dose ranged between 0.3 and 2.4 μGy per monitor...
Yang Kai; Huang, Shih-Ying; Packard, Nathan J.; Boone, John M. [Department of Radiology, University of California, Davis Medical Center, 4860 Y Street, Suite 3100 Ellison Building, Sacramento, California 95817 (United States); Department of Radiology, University of California, Davis Medical Center, 4860 Y Street, Suite 3100 Ellison Building, Sacramento, California 95817 (United States) and Department of Biomedical Engineering, University of California, Davis, Davis, California, 95616 (United States)
2010-07-15
Purpose: A simplified linear model approach was proposed to accurately model the response of a flat panel detector used for breast CT (bCT). Methods: Individual detector pixel mean and variance were measured from bCT projection images acquired both in air and with a polyethylene cylinder, with the detector operating in both fixed low gain and dynamic gain mode. Once the coefficients of the linear model are determined, the fractional additive noise can be used as a quantitative metric to evaluate the system's efficiency in utilizing x-ray photons, including the performance of different gain modes of the detector. Results: Fractional additive noise increases as the object thickness increases or as the radiation dose to the detector decreases. For bCT scan techniques on the UC Davis prototype scanner (80 kVp, 500 views total, 30 frames/s), in the low gain mode, additive noise contributes 21% of the total pixel noise variance for a 10 cm object and 44% for a 17 cm object. With the dynamic gain mode, additive noise only represents approximately 2.6% of the total pixel noise variance for a 10 cm object and 7.3% for a 17 cm object. Conclusions: The existence of the signal-independent additive noise is the primary cause for a quadratic relationship between bCT noise variance and the inverse of radiation dose at the detector. With the knowledge of the additive noise contribution to experimentally acquired images, system modifications can be made to reduce the impact of additive noise and improve the quantum noise efficiency of the bCT system.
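A minimal sketch of the quantitative metric described above, under the assumption (consistent with the abstract, but with made-up numbers) that the pixel noise variance follows a linear model var(d) = a + b·d in detector dose d, where a is the signal-independent additive noise and b·d the quantum noise. The fractional additive noise at a given dose is then a/(a + b·d), which grows as the dose reaching the detector drops, e.g. behind a thicker object.

```python
def additive_fraction(a, b, dose):
    """Fraction of total pixel variance contributed by additive (dose-independent) noise."""
    return a / (a + b * dose)

# Illustrative coefficients and doses only (not the paper's measurements):
frac_low_dose = additive_fraction(4.0, 0.1, 50.0)    # thick object, little dose at detector
frac_high_dose = additive_fraction(4.0, 0.1, 500.0)  # thin object, more dose at detector
```

The abstract's conclusion follows from this model: because the additive term does not scale with dose, the reconstructed-image noise variance becomes quadratic in the inverse of the detector dose rather than linear.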
Zou, Wenli; Filatov, Michael; Cremer, Dieter, E-mail: dcremer@smu.edu [Computational and Theoretical Chemistry Group (CATCO), Department of Chemistry, Southern Methodist University, 3215 Daniel Ave, Dallas, Texas 75275-0314 (United States)
2015-06-07
The analytical gradient for the two-component Normalized Elimination of the Small Component (2c-NESC) method is presented. The 2c-NESC is a Dirac-exact method that employs the exact two-component one-electron Hamiltonian and thus leads to exact Dirac spin-orbit (SO) splittings for one-electron atoms. For many-electron atoms and molecules, the effect of the two-electron SO interaction is modeled by a screened nucleus potential using effective nuclear charges as proposed by Boettger [Phys. Rev. B 62, 7809 (2000)]. The effect of spin-orbit coupling (SOC) on molecular geometries is analyzed utilizing the properties of the frontier orbitals and calculated SO couplings. It is shown that bond lengths can either be lengthened or shortened under the impact of SOC where in the first case the influence of low lying excited states with occupied antibonding orbitals plays a role and in the second case the jj-coupling between occupied antibonding and unoccupied bonding orbitals dominates. In general, the effect of SOC on bond lengths is relatively small (≤5% of the scalar relativistic changes in the bond length). However, large effects are found for van der Waals complexes Hg2 and Cn2, which are due to the admixture of more bonding character to the highest occupied spinors.
Comparing estimates of genetic variance across different relationship models.
Legarra, Andres
2016-02-01
Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities".
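A minimal sketch of the comparison statistic defined above: Dk is the average self-relationship (diagonal of the relationship matrix K) minus the average over all (self- and across-) relationships, and the genetic variance referred to the chosen reference population is the estimated variance component times Dk. The toy 3×3 relationship matrix below is illustrative only.

```python
import numpy as np

def dk(K):
    """Average self-relationship minus average overall relationship."""
    return np.mean(np.diag(K)) - np.mean(K)

def referred_variance(sigma2_hat, K):
    """Estimated variance component rescaled to the reference population defined by K."""
    return sigma2_hat * dk(K)

# Toy relationship matrix for three individuals (hypothetical values):
K = np.array([[1.0, 0.2, 0.1],
              [0.2, 1.0, 0.3],
              [0.1, 0.3, 1.0]])
```

Two variance estimates obtained under different relationship matrices become comparable once each is multiplied by its own Dk, which is the point of the paper's rescaling.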
Noack, K.
1981-01-01
The perturbation source method is used in the Monte Carlo method for calculating small effects in a particle field. It offers promising possibilities for introducing positive correlation between the estimates to be subtracted, even in cases where other methods fail, such as geometrical variations of a given arrangement. The perturbation source method is formulated on the basis of integral equations for the particle fields. The formulae for the second moment of the difference of events are derived. Explicitly, a certain class of transport games and different procedures for generating the so-called perturbation particles are considered
Wedemeyer, Gary A.; Nelson, Nancy C.
1975-01-01
Gaussian and nonparametric (percentile estimate and tolerance interval) statistical methods were used to estimate normal ranges for blood chemistry (bicarbonate, bilirubin, calcium, hematocrit, hemoglobin, magnesium, mean cell hemoglobin concentration, osmolality, inorganic phosphorus, and pH) for juvenile rainbow trout (Salmo gairdneri, Shasta strain) held under defined environmental conditions. The percentile estimate and Gaussian methods gave similar normal ranges, whereas the tolerance interval method gave consistently wider ranges for all blood variables except hemoglobin. If the underlying frequency distribution is unknown, the percentile estimate procedure would be the method of choice.
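A minimal sketch of two of the interval types compared above: a Gaussian normal range (mean ± 1.96 SD) and a distribution-free percentile estimate (2.5th to 97.5th percentile) for the same variable. The data are synthetic stand-ins for a blood-chemistry measurement; for truly Gaussian data the two ranges nearly coincide, which is the behavior the abstract reports.

```python
import numpy as np

def gaussian_range(x):
    """Parametric 95% normal range: mean +/- 1.96 sample SD."""
    m, s = np.mean(x), np.std(x, ddof=1)
    return m - 1.96 * s, m + 1.96 * s

def percentile_range(x):
    """Distribution-free 95% normal range: central 95% of the sample."""
    lo, hi = np.percentile(x, [2.5, 97.5])
    return lo, hi

rng = np.random.default_rng(42)
hematocrit = rng.normal(35.0, 3.0, size=5000)   # hypothetical values, in %
g_lo, g_hi = gaussian_range(hematocrit)
p_lo, p_hi = percentile_range(hematocrit)
```

When the underlying distribution is skewed, the two ranges diverge, and the percentile estimate is the safer choice, matching the abstract's recommendation.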
Estimation of measurement variances
Anon.
1981-01-01
In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated for different situations. The various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standard data; (2) estimate random error variances from replicate measurement data; (3) perform a simple analysis of variance to characterize the measurement error structure when biases vary over time.
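A minimal sketch of item (2) above: estimating a random error variance from replicate measurements. Each item is measured several times, the within-item sample variances are pooled, and the pooled value estimates the random (repeatability) variance component. The items, replicate counts, and noise level are illustrative only.

```python
import numpy as np

def pooled_random_variance(replicates):
    """Pool within-item sample variances; replicates is a list of 1-D arrays, one per item."""
    ss = sum((len(r) - 1) * np.var(r, ddof=1) for r in replicates)
    dof = sum(len(r) - 1 for r in replicates)
    return ss / dof

rng = np.random.default_rng(7)
true_values = [10.0, 12.5, 9.0]                                # three measured items
data = [v + rng.normal(0.0, 0.2, size=8) for v in true_values]  # 8 replicates, sigma = 0.2
var_hat = pooled_random_variance(data)                          # estimates 0.2**2 = 0.04
```

Because each item's mean is subtracted out, any fixed bias per item cancels and only the random component remains, which is exactly why replicate data isolate the random error variance.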
Wagner, J. C.; Blakeman, E. D.; Peplow, D. E.
2009-01-01
This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is a variation on the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for some time to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain approximately uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented in the ADVANTG/MCNP framework and has been fully automated within the MAVRIC sequence of SCALE 6. Results of the application of the method to enabling the calculation of dose rates throughout an entire full-scale pressurized-water reactor facility are presented and discussed. (authors)
Additive manufacturing method for SRF components of various geometries
Rimmer, Robert; Frigola, Pedro E; Murokh, Alex Y
2015-05-05
An additive manufacturing method is disclosed for forming nearly monolithic SRF niobium cavities and end-group components of arbitrary shape, with features such as optimized wall thickness and integral stiffeners, greatly reducing the cost and technical variability of conventional cavity construction. The additive manufacturing method for forming an SRF cavity includes atomizing niobium to form a niobium powder, feeding the niobium powder into an electron beam melter under a vacuum, melting the niobium powder under a vacuum in the electron beam melter to form an SRF cavity, and polishing the inside surface of the SRF cavity.
João Batista Duarte
2001-09-01
This work compares, by simulation, the estimates of variance components produced by the ANOVA (analysis of variance), ML (maximum likelihood), REML (restricted maximum likelihood), and MIVQUE(0) (minimum variance quadratic unbiased estimator) methods for the augmented block design with additional treatments (progenies) stemming from one or more origins (crosses). Results showed the relative superiority of the MIVQUE(0) estimation. The ANOVA method, although unbiased, showed estimates with lower precision. The ML and REML methods produced downward-biased estimates of the error variance and upward-biased estimates of the genotypic variances, particularly the ML method. Biases in the REML estimation became negligible when progenies were derived from a single cross and the experiments were of larger size, with ratios above 0.5. This method, however, provided the worst estimates of genotypic variances when progenies were derived from several crosses and the experiments were of small size (n < 120 observations).
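A minimal sketch of the simplest of the four estimators compared above: the ANOVA (method-of-moments) estimator in a balanced one-way random model y_ij = mu + g_i + e_ij, where sigma²_e = MSW and sigma²_g = (MSB − MSW)/n. This is a textbook special case, not the augmented block design of the paper; the group counts and true variances below are illustrative.

```python
import numpy as np

def anova_components(y):
    """ANOVA variance component estimates for a balanced (groups x n) one-way layout."""
    g, n = y.shape
    group_means = y.mean(axis=1)
    msw = ((y - group_means[:, None]) ** 2).sum() / (g * (n - 1))   # within mean square
    msb = n * ((group_means - y.mean()) ** 2).sum() / (g - 1)        # between mean square
    return msw, (msb - msw) / n                                      # sigma2_e, sigma2_g

rng = np.random.default_rng(3)
g_eff = rng.normal(0.0, 2.0, size=40)                   # genotypic effects, sigma2_g = 4
y = g_eff[:, None] + rng.normal(0.0, 1.0, (40, 10))     # errors, sigma2_e = 1
s2e, s2g = anova_components(y)
```

The unbiasedness the paper attributes to ANOVA holds by construction here, while its lower precision shows up in the sampling spread of (MSB − MSW)/n across repeated simulations.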
Development of motion image prediction method using principal component analysis
Chhatkuli, Ritu Bhusal; Demachi, Kazuyuki; Kawai, Masaki; Sakakibara, Hiroshi; Kamiaka, Kazuma
2012-01-01
Respiratory motion limits the accuracy of the area irradiated during lung cancer radiation therapy. Many methods have been introduced to minimize the irradiation of healthy tissue due to lung tumor motion. The purpose of this research is to develop an algorithm for the improvement of image-guided radiation therapy by the prediction of motion images. We predict the motion images by using principal component analysis (PCA) and the multi-channel singular spectral analysis (MSSA) method. The images/movies were successfully predicted and verified using the developed algorithm. With the proposed prediction method it is possible to forecast the tumor images over the next breathing period. The implementation of this method in real time is believed to be significant for a higher level of tumor tracking, including the detection of sudden abdominal changes during radiation therapy.
Improved computation method in residual life estimation of structural components
Maksimović Stevan M.
2013-01-01
This work considers the numerical computation methods and procedures for predicting fatigue crack growth in cracked, notched structural components. The computation method is based on fatigue life prediction using the strain energy density approach. Based on the strain energy density (SED) theory, a fatigue crack growth model is developed to predict the lifetime of fatigue crack growth for single or mixed-mode cracks. The model is based on an equation expressed in terms of low-cycle fatigue parameters. Attention is focused on crack growth analysis of structural components under variable amplitude loads. Crack growth is largely influenced by the effect of the plastic zone at the front of the crack. To obtain an efficient computation model, the plasticity-induced crack closure phenomenon is considered during fatigue crack growth. The use of the strain energy density method is efficient for fatigue crack growth prediction under cyclic loading in damaged structural components. The strain energy density method is convenient for engineering applications since it does not require any additional determination of fatigue parameters (which would need to be separately determined for the fatigue crack propagation phase); low-cycle fatigue parameters are used instead. Accurate determination of fatigue crack closure has been a complex task for years. The influence of this phenomenon can be considered by means of experimental and numerical methods, and both are considered here. Finite element analysis (FEA) has been shown to be a powerful and useful tool to analyze crack growth and crack closure effects. Computation results are compared with available experimental results. [Projekat Ministarstva nauke Republike Srbije, br. OI 174001]
An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests
Attali, Yigal
2010-01-01
Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…
Yang, M; Zhu, X R; Mohan, R; Dong, L; Virshup, G; Clayton, J
2010-01-01
We discovered an empirical relationship between the logarithm of mean excitation energy (ln I_m) and the effective atomic number (EAN) of human tissues, which allows for computing patient-specific proton stopping power ratios (SPRs) using dual-energy CT (DECT) imaging. The accuracy of the DECT method was evaluated for 'standard' human tissues as well as their variance. The DECT method was compared to the existing standard clinical practice, a procedure introduced by Schneider et al at the Paul Scherrer Institute (the stoichiometric calibration method). In this simulation study, SPRs were derived from calculated CT numbers of known material compositions, rather than from measurement. For standard human tissues, both methods achieved good accuracy with the root-mean-square (RMS) error well below 1%. For human tissues with small perturbations from standard human tissue compositions, the DECT method was shown to be less sensitive than the stoichiometric calibration method. The RMS error remained below 1% for most cases using the DECT method, which implies that the DECT method might be more suitable for measuring patient-specific tissue compositions to improve the accuracy of treatment planning for charged particle therapy. In this study, the effects of CT imaging artifacts due to the beam hardening effect, scatter, noise, patient movement, etc. were not analyzed. The true potential of the DECT method achieved in theoretical conditions may not be fully achievable in clinical settings. Further research and development may be needed to take advantage of the DECT method to characterize individual human tissues.
Fast and accurate methods of independent component analysis: A survey
Tichavský, Petr; Koldovský, Zbyněk
2011-01-01
Roč. 47, č. 3 (2011), s. 426-438 ISSN 0023-5954 R&D Projects: GA MŠk 1M0572; GA ČR GA102/09/1278 Institutional research plan: CEZ:AV0Z10750506 Keywords : Blind source separation * artifact removal * electroencephalogram * audio signal processing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.454, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/tichavsky-fast and accurate methods of independent component analysis a survey.pdf
Three-Component Forward Modeling for Transient Electromagnetic Method
Bin Xiong
2010-01-01
In general, only the time derivative of the vertical magnetic field is considered in the data interpretation of the transient electromagnetic (TEM) method. However, for surveys in complex geological structures, this conventional technique has gradually become unable to satisfy the demands of field exploration. To improve the integrated interpretation precision of TEM, it is necessary to study three-component forward modeling and inversion. In this paper, a three-component forward algorithm for 2.5D TEM based on the independent electric and magnetic fields has been developed. The main advantage of the new scheme is that it reduces the size of the global system matrix to the utmost extent; it is only one fourth of that in the conventional algorithm. In order to illustrate the feasibility and usefulness of the present algorithm, several typical geoelectric models of the TEM responses produced by loop sources at the air-earth interface are presented. The results of the numerical experiments show that the computation speed of the present scheme is increased considerably, and that three-component interpretation gets the most out of the collected data, from which the spatial characteristics of the anomalous object can be analyzed and interpreted more comprehensively.
Methods for integrating a functional component into a microfluidic device
Simmons, Blake; Domeier, Linda; Woo, Noble; Shepodd, Timothy; Renzi, Ronald F.
2014-08-19
Injection molding is used to form microfluidic devices with integrated functional components. One or more functional components are placed in a mold cavity, which is then closed. Molten thermoplastic resin is injected into the mold and then cooled, thereby forming a solid substrate including the functional component(s). The solid substrate including the functional component(s) is then bonded to a second substrate, which may include microchannels or other features.
Article, component, and method of forming an article
Lacy, Benjamin Paul; Itzel, Gary Michael; Kottilingam, Srikanth Chandrudu; Dutta, Sandip; Schick, David Edward
2018-05-22
An article and method of forming an article are provided. The article includes a body portion separating an inner region and an outer region, an aperture in the body portion, the aperture fluidly connecting the inner region to the outer region, and a conduit extending from an outer surface of the body portion at the aperture and being arranged and disposed to controllably direct fluid from the inner region to the outer region. The method includes providing a body portion separating an inner region and an outer region, providing an aperture in the body portion, and forming a conduit over the aperture, the conduit extending from an outer surface of the body portion and being arranged and disposed to controllably direct fluid from the inner region to the outer region. The article is arranged and disposed for insertion within a hot gas path component.
Improved methods of creep-fatigue life assessment of components
Scholz, Alfred; Berger, Christina [Inst. fuer Werkstoffkunde (IfW), Technische Univ. Darmstadt (Germany)
2009-07-01
The improvement of life assessment methods contributes to reduced design effort and effective long-term operation of high-temperature components, reduces technical risk and yields substantial economic advantages. Creep-fatigue under multi-stage loading, covering cold-start, warm-start and hot-start cycles in typical loading sequences, e.g. for medium-loaded power plants, was investigated here. During hold times, creep and stress relaxation, respectively, lead to an acceleration of crack initiation. Creep-fatigue lifetime can be calculated by a modified damage accumulation rule, which applies the fatigue fraction rule for fatigue damage and the life fraction rule for creep damage. Mean stress effects, internal stress and interaction effects of creep and fatigue are considered. Along with the generation of advanced creep, fatigue and creep-fatigue data, scatter-band analyses are necessary in order to generate design curves, including lower-bound properties. Besides, in order to improve lifing methods, the enhancement of modelling activities for deformation and lifetime is important. For verification purposes, complex experiments at variable creep conditions as well as at creep-fatigue interaction under multi-stage loading are of interest. Generally, the development of methods to transfer uniaxial material properties to multiaxial loading situations is a current challenge. For specific design purposes, a constitutive material model is introduced, implemented as a user subroutine for Finite Element applications covering start-up and shut-down phases of components. Identification of material parameters has been performed by Neural Networks. (orig.)
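The modified damage accumulation rule described in this abstract combines a fatigue fraction sum with a creep life fraction sum. A minimal Python sketch of that linear accumulation idea follows; the cycle counts, allowable lives, hold times, rupture times and damage threshold are illustrative assumptions, not values from the study.

```python
# Linear creep-fatigue damage accumulation: fatigue fraction rule (n/N per
# cycle type) plus creep life fraction rule (t/t_rupture per hold condition).
# All numbers below are made up for illustration.

def creep_fatigue_damage(cycles, holds, d_crit=1.0):
    """cycles: list of (n_applied, N_allowable); holds: list of (t_hold, t_rupture)."""
    d_fatigue = sum(n / N for n, N in cycles)
    d_creep = sum(t / tr for t, tr in holds)
    total = d_fatigue + d_creep
    return total, total >= d_crit

# Example: cold/warm/hot start cycles plus hold-time creep at two stress levels.
damage, failed = creep_fatigue_damage(
    cycles=[(200, 5000), (500, 20000), (1000, 100000)],
    holds=[(3000.0, 40000.0), (1500.0, 60000.0)],
)
```

Interaction and mean-stress corrections of the kind the study mentions would modify the fractions before summation; this sketch keeps the plain linear rule.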
Probabilistic structural analysis methods for select space propulsion system components
Millwater, H. R.; Cruse, T. A.
1989-01-01
The Probabilistic Structural Analysis Methods (PSAM) project developed at the Southwest Research Institute integrates state-of-the-art structural analysis techniques with probability theory for the design and analysis of complex large-scale engineering structures. An advanced efficient software system (NESSUS) capable of performing complex probabilistic analysis has been developed. NESSUS contains a number of software components to perform probabilistic analysis of structures. These components include: an expert system, a probabilistic finite element code, a probabilistic boundary element code and a fast probability integrator. The NESSUS software system is shown. An expert system is included to capture and utilize PSAM knowledge and experience. NESSUS/EXPERT is an interactive menu-driven expert system that provides information to assist in the use of the probabilistic finite element code NESSUS/FEM and the fast probability integrator (FPI). The expert system menu structure is summarized. The NESSUS system contains a state-of-the-art nonlinear probabilistic finite element code, NESSUS/FEM, to determine the structural response and sensitivities. A broad range of analysis capabilities and an extensive element library are provided.
Enterprise’s employment potential: concept, components and evaluation methods
Korbut K.Ye.
2017-06-01
The present study deals with the main interpretations and views of scientists on the economic category of «labor potential». The conditions and factors affecting labor potential are given. The author classifies and generally characterizes the factors that shape the mechanism of formation of labor potential. A detailed description of the main components and constituent elements of labor potential at the enterprise is determined, analyzed and provided. The levels at which labor potential manifests itself are summarized and examined, and an explanation is given for each of them. A general characteristic of the constituent elements of workers' labor potential is provided, along with the principal data on labor potential at the micro level. The main types of labor potential at the enterprise are singled out and characterized in detail by the level of aggregated estimates, by the range of coverage of opportunities, by the nature of participation in the production and economic process, and by the place in the socio-economic system of the enterprise. Considerable attention is paid to the views of scientists on the main methods of assessing the labor potential of the enterprise.
Development of impact design methods for ceramic gas turbine components
Song, J.; Cuccio, J.; Kington, H.
1990-01-01
Impact damage prediction methods are being developed to aid in the design of ceramic gas turbine engine components with improved impact resistance. Two impact damage modes were characterized: local, near the impact site, and structural, usually fast fracture away from the impact site. Local damage to Si3N4 impacted by Si3N4 spherical projectiles consists of ring and/or radial cracks around the impact point. In a mechanistic model being developed, impact damage is characterized as microcrack nucleation and propagation. The extent of damage is measured as volume fraction of microcracks. Model capability is demonstrated by simulating plate impact tests. Structural failure is caused by tensile stress during impact exceeding material strength. The EPIC3 code was successfully used to predict blade structural failures in different size particle impacts on radial and axial blades.
System and method for manufacture of airfoil components
Moors, Thomas Michael
2016-11-29
Embodiments of the present disclosure relate generally to systems and methods for manufacturing an airfoil component. The system can include: a geometrical mold; an elongated flexible sleeve having a closed-off interior and positioned within the geometrical mold, wherein the elongated flexible sleeve is further positioned to have a desired geometry; an infusing channel in fluid communication with the closed-off interior of the elongated flexible sleeve and configured to communicate a resinous material thereto; a vacuum channel in fluid communication with the closed-off interior of the elongated flexible sleeve and configured to vacuum seal the closed-off interior of the elongated flexible sleeve; and a glass fiber layer positioned within the closed-off interior of the elongated flexible sleeve.
Anomaly Monitoring Method for Key Components of Satellite
Jian Peng
2014-01-01
This paper presented a fault diagnosis method for key components of a satellite, called the Anomaly Monitoring Method (AMM), which is made up of state estimation based on Multivariate State Estimation Techniques (MSET) and anomaly detection based on the Sequential Probability Ratio Test (SPRT). On the basis of failure analysis of lithium-ion batteries (LIBs), we divided LIB failures into internal failure, external failure, and thermal runaway, and selected the electrolyte resistance (Re) and the charge transfer resistance (Rct) as the key parameters for state estimation. Then, from the actual in-orbit telemetry data of the key parameters of the LIBs, we obtained the actual residual value (RX) and healthy residual value (RL) of the LIBs based on MSET state estimation, and from the residual values (RX and RL) we detected anomalous states based on SPRT anomaly detection. Lastly, we conducted an example of AMM for LIBs, and, according to the results, we validated the feasibility and effectiveness of AMM by comparing it with the results of the threshold detection method (TDM).
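The SPRT stage of the pipeline described above can be sketched in a few lines: a Wald sequential test on state-estimation residuals, deciding between a healthy (zero-mean) and a degraded (shifted-mean) Gaussian hypothesis. The noise level, mean shift and error rates below are illustrative assumptions, not the paper's settings.

```python
import math

# Wald SPRT on residuals: accumulate the Gaussian log-likelihood ratio and
# stop when it crosses either decision threshold.

def sprt(residuals, sigma=1.0, m1=2.0, alpha=0.01, beta=0.01):
    """Decide between H0: mean 0 and H1: mean m1 for Gaussian residuals."""
    a = math.log(beta / (1 - alpha))      # accept-H0 (healthy) threshold
    b = math.log((1 - beta) / alpha)      # accept-H1 (anomaly) threshold
    llr = 0.0
    for i, r in enumerate(residuals):
        llr += (m1 / sigma**2) * (r - m1 / 2.0)  # log-likelihood ratio increment
        if llr >= b:
            return "anomaly", i
        if llr <= a:
            return "healthy", i
    return "undecided", len(residuals) - 1

print(sprt([0.1, -0.2, 0.0, 0.1, -0.1, 0.05]))   # small residuals -> healthy
print(sprt([1.8, 2.2, 1.9, 2.1, 2.0, 1.7]))      # shifted residuals -> anomaly
```

In AMM the residuals would come from the MSET state estimate of Re and Rct rather than being supplied directly.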
Estimation of measurement variances
Jaech, J.L.
1984-01-01
The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variances to characterize the measurement error structure with biases varying over time is presented.
Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A.
2011-01-01
Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance'. This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ∼38%, while it is ∼27% for GEMS and ∼12% for COSMOS. For galaxies of m* ∼ 10^10 M_sun, the relative cosmic variance is ∼19% for GOODS, ∼13% for GEMS, and ∼6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while it is less of a concern for larger fields and intermediate-mass galaxies.
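In the linear regime the recipe above reduces to a product: the relative cosmic variance of a galaxy sample equals the galaxy bias times the relative cosmic variance of the dark matter for the same field geometry and redshift bin. A tiny sketch; the bias and dark-matter values are illustrative placeholders, not the paper's tabulated numbers.

```python
# Linear-regime relation: sigma_galaxy = bias * sigma_dark_matter.

def galaxy_cosmic_variance(sigma_dm, bias):
    return bias * sigma_dm

# A small field (large sigma_dm) with a strongly biased massive sample,
# versus a wide field (small sigma_dm) with a mildly biased sample:
small_field = galaxy_cosmic_variance(0.09, 4.0)   # 0.36, i.e. 36% uncertainty
wide_field = galaxy_cosmic_variance(0.02, 1.5)    # 0.03, i.e. 3% uncertainty
```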
Total error components - isolation of laboratory variation from method performance
Bottrell, D.; Bleyler, R.; Fisk, J.; Hiatt, M.
1992-01-01
The consideration of total error across sampling and analytical components of environmental measurements is relatively recent. The U.S. Environmental Protection Agency (EPA), through the Contract Laboratory Program (CLP), provides complete analyses and documented reports on approximately 70,000 samples per year. The quality assurance (QA) functions of the CLP procedures provide an ideal database, the CLP Automated Results Data Base (CARD), with which to evaluate program performance relative to quality control (QC) criteria and to evaluate the analysis of blind samples. Repetitive analyses of blind samples within each participating laboratory provide a mechanism to separate laboratory performance from method performance. Isolation of error sources is necessary to identify effective options, to establish performance expectations, and to improve procedures. In addition, optimized method performance is necessary to identify significant effects that result from the selection among alternative procedures in the data collection process (e.g., sampling device, storage container, mode of sample transit, etc.). This information is necessary to evaluate data quality, to understand overall quality, and to provide appropriate, cost-effective information required to support a specific decision.
Method for determination of focal plane location of focusing components
A. I. Ivashko
2017-01-01
Mass production of different laser systems often requires use of the focal spot size method for determination of the output laser beam's spatial characteristics. The main challenge of this method is maintaining a CCD camera beam profiler in the collecting lens focal plane with high accuracy. The aim of our work is the development of a new method for placing a photodetector array in the collecting lens focal plane with high accuracy. The proposed technique is based on focusing several parallel laser beams. Determination of the focal plane position requires only longitudinal translation of the CCD camera to find the point where the laser beams intersect. A continuous-wave (CW) diode-pumped laser emitting in the spectral region near 1 μm was created to satisfy the requirements of the developed technique. The designed microchip laser generates two stigmatic Gaussian beams with automatically parallel beam axes due to independent pumping of different areas of one microchip crystal sharing the same cavity mirrors. It was theoretically demonstrated that the developed method makes it possible to determine the lens focal plane with 1 % accuracy. The microchip laser generates two parallel Gaussian beams with divergence of about 10 mrad. The laser output power can be varied in the range of 0.1–1.5 W by changing the pump laser diode's electrical current. The distance between the two beam axes can be changed in the range of 0.5–5.0 mm. We have proposed a method for determining the focal plane location of a positive lens using a CCD camera and two laser beams with parallel axes, without additional optical devices. We have developed a CW longitudinally diode-pumped microchip laser emitting in the 1 μm spectral region that can be used in a measuring instrument that does not require precision mechanical components to determine the focal plane location with 1 % accuracy. The overall dimensions of the laser head were 70 × 40 × 40 mm³ and the maximum power consumption was
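The geometry behind the method: parallel input beams converge to a single point in the lens's back focal plane, so the measured beam-spot separation shrinks linearly with camera position and vanishes exactly at the focal plane. A sketch that locates the plane by fitting the separation-versus-position line and finding its zero; the sample positions and separations are made up.

```python
# Least-squares line fit sep = a*z + b over camera positions, then the
# focal plane sits at the zero crossing z = -b/a.

def focal_plane_position(z, sep):
    n = len(z)
    zm = sum(z) / n
    sm = sum(sep) / n
    a = sum((zi - zm) * (si - sm) for zi, si in zip(z, sep)) / sum((zi - zm) ** 2 for zi in z)
    b = sm - a * zm
    return -b / a

# camera positions (mm) and measured beam-spot separations (mm)
z_mm = [80.0, 90.0, 100.0, 110.0]
sep_mm = [2.0, 1.5, 1.0, 0.5]                  # linear, crossing zero at 120 mm
z_focal = focal_plane_position(z_mm, sep_mm)   # -> 120.0
```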
Guillaume Calmettes
The genetic expression of cloned fluorescent proteins coupled to time-lapse fluorescence microscopy has opened the door to the direct visualization of a wide range of molecular interactions in living cells. In particular, the dynamic translocation of proteins can now be explored in real time at the single-cell level. Here we propose a reliable, easy-to-implement, quantitative image processing method to assess protein translocation in living cells based on the computation of spatial variance maps of time-lapse images. The method is first illustrated and validated on simulated images of a fluorescently labeled protein translocating from mitochondria to cytoplasm, and then applied to experimental data obtained with fluorescently labeled hexokinase 2 in different cell types imaged by regular or confocal microscopy. The method was found to be robust with respect to cell morphology changes and mitochondrial dynamics (fusion, fission, movement) during the time-lapse imaging. Its ease of implementation should facilitate its application to a broad spectrum of time-lapse imaging studies.
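The core quantity here is a local spatial-variance map: punctate (e.g. mitochondrial) fluorescence gives high local variance, while diffuse cytosolic fluorescence gives low local variance, so the map tracks translocation over time. A toy sketch on a list-of-lists "image" with a 3x3 window; the pixel values are invented.

```python
# Compute a 3x3-window spatial variance map of a small 2D image.

def spatial_variance_map(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = [img[i + di][j + dj] for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            m = sum(win) / 9.0
            out[i][j] = sum((v - m) ** 2 for v in win) / 9.0
    return out

punctate = [[0, 0, 0, 0], [0, 9, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
diffuse = [[1, 1, 1, 1] for _ in range(4)]
vmap_p = spatial_variance_map(punctate)   # bright spot -> high central variance
vmap_d = spatial_variance_map(diffuse)    # uniform signal -> zero variance
```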
A versatile omnibus test for detecting mean and variance heterogeneity.
Cao, Ying; Wei, Peng; Bailey, Matthew; Kauwe, John S K; Maxwell, Taylor J
2014-01-01
Recent research has revealed loci that display variance heterogeneity through various means such as biological disruption, linkage disequilibrium (LD), gene-by-gene (G × G), or gene-by-environment interaction. We propose a versatile likelihood ratio test that allows joint testing for mean and variance heterogeneity (LRT(MV)) or either effect alone (LRT(M) or LRT(V)) in the presence of covariates. Using extensive simulations for our method and others, we found that all parametric tests were sensitive to nonnormality regardless of any trait transformations; coupling our test with the parametric bootstrap solves this issue. Using simulations and empirical data from a known mean-only functional variant, we demonstrate how LD can produce variance-heterogeneity loci (vQTL) in a predictable fashion based on differential allele frequencies, high D', and relatively low r² values. We propose that a joint test for mean and variance heterogeneity is more powerful than a variance-only test for detecting vQTL, since it takes advantage of loci that also have mean effects without sacrificing much power to detect variance-only effects. We discuss using vQTL as an approach to detect G × G interactions, how vQTL are related to relationship loci, and how both can create prior hypotheses for each other and reveal the relationships between traits and possibly between components of a composite trait.
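As a toy illustration of the idea behind LRT(MV), here is a two-group Gaussian likelihood ratio test of "common mean and variance" against "group-specific mean and variance", using the asymptotic chi-square(2) distribution for the p-value. This is a simplification of the paper's covariate-adjusted, genotype-based test, and the data are simulated.

```python
import math

def loglik(x, mu, var):
    n = len(x)
    return -0.5 * n * math.log(2 * math.pi * var) - sum((xi - mu) ** 2 for xi in x) / (2 * var)

def lrt_mv(g0, g1):
    """LRT of common vs. group-specific Gaussian mean and variance (2 df)."""
    pooled = g0 + g1
    mu_p = sum(pooled) / len(pooled)
    var_p = sum((x - mu_p) ** 2 for x in pooled) / len(pooled)   # MLE variance
    ll_null = loglik(pooled, mu_p, var_p)
    ll_alt = 0.0
    for g in (g0, g1):
        mu = sum(g) / len(g)
        var = sum((x - mu) ** 2 for x in g) / len(g)
        ll_alt += loglik(g, mu, var)
    stat = 2 * (ll_alt - ll_null)
    p = math.exp(-stat / 2)          # chi-square(2) survival function
    return stat, p

control = [0.1, -0.2, 0.05, 0.0, -0.1, 0.15]    # tight around the common mean
exposed = [1.0, -1.2, 0.9, -0.8, 1.4, -1.1]     # same mean, inflated variance
stat, p = lrt_mv(control, exposed)               # detects variance heterogeneity
```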
Abbasi, F.; Nabbi, R.; Thomauske, B.; Ulrich, J.
2014-01-01
For the decommissioning of nuclear facilities, activity and dose rate atlases (ADAs) are required to create and manage a decommissioning plan and optimize the radiation protection measures. By the example of the research reactor FRJ-2, a detailed MCNP model for Monte-Carlo neutron and radiation transport calculations based on a full scale outer core CAD-model was generated. To cope with the inadequacies of the MCNP code for the simulation of a large and complex system like FRJ-2, the FW-CADIS method was embedded in the MCNP simulation runs to optimise particle sampling and weighting. The MAVRIC sequence of the SCALE6 program package, capable of generating importance maps, was applied for this purpose. The application resulted in a significant increase in efficiency and performance of the whole simulation method and in optimised utilization of the computer resources. As a result, the distribution of the neutron flux in the entire reactor structures - as a basis for the generation of the detailed activity atlas - was produced with a low level of variance and a high level of spatial, numerical and statistical precision.
Elmer Francisco Valencia Tapia
2011-06-01
The heterogeneity of variance components and its effect on estimates of heritability and repeatability of milk yield in Holstein cattle were evaluated. Herds were grouped according to production level (low, medium and high) and evaluated on the untransformed, square-root and logarithmic scales. Variance components were estimated by the restricted maximum likelihood method. The animal model included the fixed effects of herd-year-season and the covariates lactation length (linear effect) and cow's age at calving (linear and quadratic effects), as well as the random direct additive genetic, permanent environment and residual effects. On the untransformed scale, all variance components were heterogeneous across the three production levels. On this scale, the residual and phenotypic variances were positively associated with production level, whereas on the logarithmic scale the association was negative. The heterogeneity of the phenotypic variance and its components affected the heritability estimates more than the repeatability estimates. The efficiency of the selection process for milk yield may be affected by the production level at which the genetic parameters are estimated.
Neal, Benjamin P; Lin, Tsung-Han; Winter, Rivah N; Treibitz, Tali; Beijbom, Oscar; Kriegman, David; Kline, David I; Greg Mitchell, B
2015-08-01
Size and growth rates for individual colonies are some of the most essential descriptive parameters for understanding coral communities, which are currently experiencing worldwide declines in health and extent. Accurately measuring coral colony size and changes over multiple years can reveal demographic, growth, or mortality patterns often not apparent from short-term observations and can expose environmental stress responses that may take years to manifest. Describing community size structure can reveal population dynamics patterns, such as periods of failed recruitment or patterns of colony fission, which have implications for the future sustainability of these ecosystems. However, rapidly and non-invasively measuring coral colony sizes in situ remains a difficult task, as three-dimensional underwater digital reconstruction methods are currently not practical for large numbers of colonies. Two-dimensional (2D) planar area measurements from projection of underwater photographs are a practical size proxy, although this method presents operational difficulties in obtaining well-controlled photographs in the highly rugose environment of the coral reef, and requires extensive time for image processing. Here, we present and test the measurement variance for a method of making rapid planar area estimates of small to medium-sized coral colonies using a lightweight monopod image-framing system and a custom semi-automated image segmentation analysis program. This method demonstrated a coefficient of variation of 2.26% for repeated measurements in realistic ocean conditions, a level of error appropriate for rapid, inexpensive field studies of coral size structure, inferring change in colony size over time, or measuring bleaching or disease extent of large numbers of individual colonies.
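The 2.26% repeatability figure above is a coefficient of variation (CV): the sample standard deviation of repeated planar-area measurements divided by their mean. A minimal sketch; the five repeated measurements are made-up values, not data from the study.

```python
import statistics

# CV = sample standard deviation / mean, for repeated measurements of the
# same coral colony's planar area.

def coefficient_of_variation(measurements):
    return statistics.stdev(measurements) / statistics.mean(measurements)

areas_cm2 = [102.0, 99.5, 101.2, 100.4, 98.9]   # repeated areas of one colony
cv = coefficient_of_variation(areas_cm2)        # about 0.012, i.e. 1.2%
```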
Jingyu Sun
2014-07-01
To survive in the current shipbuilding industry, it is of vital importance for shipyards to have the accuracy of ship components evaluated efficiently during most of the manufacturing steps. Evaluating component accuracy by comparing each component's point cloud data, scanned by laser scanners, against the ship's design data in CAD format cannot be processed efficiently when (1) the components extracted from the point cloud data include irregular obstacles, or when (2) the registration of the two data sets has no clear direction setting. This paper presents reformative point cloud data processing methods to solve these problems. K-d tree construction of the point cloud data speeds up the neighbor search for each point. A region growing method performed on the neighbor points of a seed point extracts the continuous part of the component, while curved surface fitting and B-spline curve fitting at the edge of the continuous part recognize neighboring domains of the same component divided by obstacles' shadows. The ICP (Iterative Closest Point) algorithm conducts a registration of the two data sets after the proper registration direction is decided by principal component analysis. In experiments conducted at the shipyard, 200 curved shell plates were extracted from the scanned point cloud data, and registrations were conducted between them and the designed CAD data using the proposed methods for an accuracy evaluation. The results show that the proposed methods support efficient, accuracy-evaluation-oriented point cloud data processing in practice.
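The ICP step in the pipeline above repeatedly matches each scanned point to its nearest design point and updates the alignment. A pure-Python, translation-only sketch of one such iteration (real implementations use k-d trees for the matching and estimate a full rigid transform; the 2D point sets here are invented):

```python
# One point-to-point ICP-style iteration: brute-force nearest-neighbor
# matching, then the translation aligning the matched centroids.

def nearest(p, cloud):
    return min(cloud, key=lambda q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def icp_translation(scan, design):
    matches = [nearest(p, design) for p in scan]
    dx = sum(q[0] for q in matches) / len(matches) - sum(p[0] for p in scan) / len(scan)
    dy = sum(q[1] for q in matches) / len(matches) - sum(p[1] for p in scan) / len(scan)
    return dx, dy

design = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
scan = [(x - 0.3, y + 0.1) for x, y in design]   # design shifted by (-0.3, +0.1)
dx, dy = icp_translation(scan, design)           # recovers (+0.3, -0.1)
```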
Zeeshan Ali Siddiqui
2016-01-01
Component-based software system (CBSS) development is an emerging discipline that promises to take software development into a new era. Just as hardware systems are presently constructed from kits of parts, software systems may also be assembled from components. It is more reliable to reuse software than to create it anew. It is the reliability of the glue code and of the individual components that contributes to the reliability of the overall system. Every component contributes to overall system reliability according to the number of times it is used; some components are of critical usage, captured by the usage frequency of the component. The usage frequency decides the weight of each component, and according to these weights each component contributes to the overall reliability of the system. Therefore, a ranking of components may be obtained by analyzing their reliability impacts on the overall application. In this paper, we propose the application of fuzzy multi-objective optimization on the basis of ratio analysis, Fuzzy-MOORA. The method helps find the most suitable alternative (software component) from a set of feasible alternatives. It is an accurate and easy-to-understand tool for solving multi-criteria decision-making problems that have imprecise and vague evaluation data. By the use of ratio analysis, the proposed method determines the most suitable alternative among all possible alternatives, and a dimensionless measurement performs the ranking of components for estimating CBSS reliability in a non-subjective way. Finally, three case studies are shown to illustrate the use of the proposed technique.
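The crisp (non-fuzzy) MOORA ratio-analysis step behind the proposed method can be sketched directly: normalize each criterion column by its Euclidean norm, then score each alternative as the sum of normalized benefit criteria minus the sum of normalized cost criteria. The component matrix and criterion types below are illustrative, not from the paper's case studies.

```python
import math

# MOORA ratio analysis: vector-normalize criteria, score, and rank.

def moora_rank(matrix, benefit):
    """matrix[i][j]: alternative i, criterion j; benefit[j]: True if higher is better."""
    ncrit = len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncrit)]
    scores = []
    for row in matrix:
        s = sum((1 if benefit[j] else -1) * row[j] / norms[j] for j in range(ncrit))
        scores.append(s)
    return sorted(range(len(matrix)), key=lambda i: scores[i], reverse=True)

# Three components scored on (reliability, usage frequency, defect density):
components = [[0.95, 120, 0.02], [0.90, 300, 0.05], [0.99, 80, 0.01]]
ranking = moora_rank(components, benefit=[True, True, False])   # best first
```

The fuzzy variant replaces the crisp matrix entries with fuzzy numbers before the same normalization and ratio step.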
Restricted Variance Interaction Effects
Cortina, Jose M.; Köhler, Tine; Keeler, Kathleen R.
2018-01-01
Although interaction hypotheses are increasingly common in our field, many recent articles point out that authors often have difficulty justifying them. The purpose of this article is to describe a particular type of interaction: the restricted variance (RV) interaction. The essence of the RV int...
Portfolio optimization with mean-variance model
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve a target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, measured by the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the optimal portfolio assigns different weights to the stocks. Moreover, investors can attain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
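For two assets the mean-variance idea has a closed form: the weight minimizing portfolio variance depends only on the variances and covariance. A sketch with made-up return statistics (not FBMKLCI data); the 20-stock case replaces this formula with a quadratic program.

```python
# Two-asset minimum-variance portfolio:
#   w1* = (var2 - cov12) / (var1 + var2 - 2*cov12)

def min_variance_weights(var1, var2, cov12):
    w1 = (var2 - cov12) / (var1 + var2 - 2 * cov12)
    return w1, 1 - w1

def portfolio_stats(w1, w2, mu1, mu2, var1, var2, cov12):
    mean = w1 * mu1 + w2 * mu2
    var = w1**2 * var1 + w2**2 * var2 + 2 * w1 * w2 * cov12
    return mean, var

w1, w2 = min_variance_weights(0.04, 0.09, 0.006)
mean, var = portfolio_stats(w1, w2, 0.08, 0.12, 0.04, 0.09, 0.006)
# diversification: portfolio variance sits below either asset's variance
```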
Empirical projection-based basis-component decomposition method
Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland
2009-02-01
Advances in the development of semiconductor based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which in addition to the conventional approach of Alvarez and Macovski a third basis component is introduced, e.g., a gadolinium based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood-function of the measurements. This procedure is time consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image-domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line-integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach considering image noise and image bias (artifacts) and see that only moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.
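In an idealized, noiseless setting the projection-domain decomposition described above reduces to a small linear solve per ray: each energy bin's log-measurement is a linear combination of the basis-material line integrals. A sketch of the two-bin, two-material (Alvarez-Macovski) case with made-up effective attenuation coefficients; the paper's maximum-likelihood and empirical methods handle noise, nonlinearity and a third (contrast-material) basis, which this sketch does not.

```python
# Solve a @ t = m for the two basis-material line integrals t, where a holds
# effective per-bin attenuation coefficients (bin x basis). Coefficients and
# measurements are illustrative, not from a real spectral model.

def decompose_2x2(a, m):
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    t1 = (m[0] * a[1][1] - a[0][1] * m[1]) / det
    t2 = (a[0][0] * m[1] - m[0] * a[1][0]) / det
    return t1, t2

a = [[0.5, 1.2], [0.3, 0.4]]                          # bin x basis coefficients
true_t = (2.0, 1.5)                                   # "true" line integrals
m = [0.5 * 2.0 + 1.2 * 1.5, 0.3 * 2.0 + 0.4 * 1.5]    # noiseless forward model
t = decompose_2x2(a, m)                               # recovers (2.0, 1.5)
```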
Surface modification method for reactor incore structural component
Obata, Minoru; Sudo, Akira.
1996-01-01
A large number of metal or ceramic small spheres accelerated by pressurized air are collided against the surface of reactor incore structures or the welded surfaces of structural components, and finishing is then applied by polishing to form compressive stresses on the surface. This can change residual stresses into compressive stresses without increasing the strength of the surface. Accordingly, stress corrosion cracking of the incore structural components or their welded portions can be prevented, thereby extending the working life of the equipment. (T.M.)
Probabilistic methods in nuclear power plant component ageing analysis
Simola, K.
1992-03-01
Nuclear power plant ageing research aims to ensure that plant safety and reliability are maintained at a desired level throughout the designed, and possibly extended, lifetime. In ageing studies, the reliability of components, systems and structures is evaluated taking into account a possible time-dependent decrease in reliability. The results of the analyses can be used in the evaluation of the remaining lifetime of components and in the development of preventive maintenance, testing and replacement programmes. The report discusses the use of probabilistic models in evaluating the ageing of nuclear power plant components. The principles of nuclear power plant ageing studies are described and examples of ageing management programmes in foreign countries are given. The use of time-dependent probabilistic models to evaluate the ageing of various components and structures is described, and the application of the models is demonstrated with two case studies. In the case study of motor-operated closing valves, the analyses are based on failure data obtained from a power plant. In the second example, environmentally assisted crack growth is modelled with a computer code developed in the United States, and the applicability of the model is evaluated on the basis of operating experience.
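A standard way to express time-dependent reliability decrease of the kind discussed above is a Weibull model: a shape parameter greater than one gives a failure rate that increases with component age. A sketch with illustrative parameter values (not taken from the report's case studies):

```python
import math

# Weibull reliability and hazard: beta > 1 models ageing (wear-out).

def weibull_reliability(t, beta, eta):
    """Probability the component survives to time t."""
    return math.exp(-((t / eta) ** beta))

def weibull_hazard(t, beta, eta):
    """Instantaneous failure rate at age t."""
    return (beta / eta) * (t / eta) ** (beta - 1)

beta, eta = 2.5, 200000.0                           # shape, characteristic life (h)
r_10y = weibull_reliability(87600.0, beta, eta)     # survival over ~10 years
h_young = weibull_hazard(10000.0, beta, eta)
h_old = weibull_hazard(100000.0, beta, eta)         # larger: failure rate grows with age
```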
Methods for microwave heat treatment of manufactured components
Ripley, Edward B.
2010-08-03
An apparatus for heat treating manufactured components using microwave energy and microwave susceptor material. Heat treating medium such as eutectic salts may be employed. A fluidized bed introduces process gases which may include carburizing or nitriding gases. The process may be operated in a batch mode or continuous process mode. A microwave heating probe may be used to restart a frozen eutectic salt bath.
Variance estimation for generalized Cavalieri estimators
Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen
2011-01-01
The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts.
Local variances in biomonitoring
Wolterbeek, H.Th; Verburg, T.G.
2001-01-01
The present study was undertaken to explore possibilities for judging survey quality on the basis of a limited and restricted number of a priori observations. Here, quality is defined as the ratio between survey and local variance (signal-to-noise ratio). The results indicate that the presented surveys do not permit such a judgement; the discussion also suggests that the 5-fold local sampling strategies do not support any sound judgement. As it stands, uncertainties in local determinations may largely obscure possibilities to judge survey quality. The results further imply that surveys will benefit from procedures, controls and approaches in sampling and sample handling that assess the average, the variance and the nature of the distribution of elemental concentrations at local sites. This reasoning is compatible with the idea of the site as a basic homogeneous survey unit, which implicitly and conceptually underlies any survey performed. (author)
Local variances in biomonitoring
Wolterbeek, H.T.
1999-01-01
The present study deals with the (larger-scale) biomonitoring survey and specifically focuses on the sampling site. In most surveys, the sampling site is simply selected or defined as a spot of (geographical) dimensions which is small relative to the dimensions of the total survey area. Implicitly it is assumed that the sampling site is essentially homogeneous with respect to the investigated variation in survey parameters. As such, the sampling site is mostly regarded as 'the basic unit' of the survey. As a logical consequence, the local (sampling site) variance should also be seen as a basic and important characteristic of the survey. In this study, work is carried out to gain more knowledge of the local variance. Multiple sampling is carried out at a specific site (tree bark, mosses, soils), multi-elemental analyses are carried out by NAA, and local variances are investigated by conventional statistics, factor analytical techniques, and bootstrapping. Consequences of the outcomes are discussed in the context of sampling, sample handling and survey quality. (author)
Analysis methods for structure reliability of piping components
Schimpfke, T.; Grebner, H.; Sievers, J.
2004-01-01
In the frame of the German reactor safety research program of the Federal Ministry of Economics and Labour (BMWA) GRS has started to develop an analysis code named PROST (PRObabilistic STructure analysis) for estimating the leak and break probabilities of piping systems in nuclear power plants. The long-term objective of this development is to provide failure probabilities of passive components for probabilistic safety analysis of nuclear power plants. Up to now the code can be used for calculating fatigue problems. The paper mentions the main capabilities and theoretical background of the present PROST development and presents some of the results of a benchmark analysis in the frame of the European project NURBIM (Nuclear Risk Based Inspection Methodologies for Passive Components). (orig.)
Sharma, D; Badano, A [Division of Imaging, Diagnostics and Software Reliability, OSEL/CDRH, Food & Drug Administration, MD (United States); Sempau, J [Technical University of Catalonia, Barcelona (Spain)
2016-06-15
Purpose: Variance reduction techniques (VRTs) are employed in Monte Carlo simulations to obtain estimates with reduced statistical uncertainty for a given simulation time. In this work, we study the bias and efficiency of a VRT for estimating the response of imaging detectors. Methods: We implemented Directed Sampling (DS), preferentially directing a fraction of emitted optical photons directly towards the detector by altering the isotropic model. The weight of each optical photon is modified appropriately to keep the simulation estimates unbiased. We use a Monte Carlo tool called fastDETECT2 (part of the hybridMANTIS open-source package) for optical transport, modified for VRT. The weight of each photon is calculated as the ratio of the original probability (no VRT) to the new probability for a particular direction. For our analysis of bias and efficiency, we use pulse height spectra, point response functions, and Swank factors. We obtain results for a variety of cases, including analog (no VRT, isotropic distribution) and DS with fractions of 0.2 and 0.8 of the optical photons directed towards the sensor plane. We used 10,000 25-keV primaries. Results: The Swank factor for all cases in our simplified model converged quickly (within the first 100 primaries) to a stable value of 0.9. The root mean square error per pixel of the point response function between the analog and VRT cases was approximately 5e-4. Conclusion: Our preliminary results suggest that DS VRT does not affect the estimate of the mean of the Swank factor. Our findings indicate that it may be possible to design VRTs for imaging detector simulations that increase computational efficiency without introducing bias.
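The weighting idea described here, where each photon carries the ratio of the original (isotropic) density to the modified sampling density, can be sketched in a toy one-dimensional model. This is an illustrative reconstruction, not the fastDETECT2 implementation; the cone cutoff and the directed fraction are hypothetical parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_hit_fraction(n, p_directed, cone_cos=0.9):
    """Estimate the fraction of photons emitted into the detector cone
    (mu > cone_cos) with Directed Sampling: a fraction p_directed of the
    photons is drawn from the cone instead of the isotropic distribution,
    and each photon carries weight p_iso(mu) / p_mix(mu) to stay unbiased."""
    directed = rng.random(n) < p_directed
    mu = np.where(directed,
                  rng.uniform(cone_cos, 1.0, n),   # aimed at the detector
                  rng.uniform(-1.0, 1.0, n))       # isotropic in cos(theta)
    p_iso = 0.5                                    # uniform density on [-1, 1]
    p_cone = np.where(mu >= cone_cos, 1.0 / (1.0 - cone_cos), 0.0)
    p_mix = p_directed * p_cone + (1.0 - p_directed) * p_iso
    weight = p_iso / p_mix
    return np.mean(weight * (mu > cone_cos))
```

Both the analog call (`p_directed=0.0`) and the DS call (`p_directed=0.8`) target the same true value, (1 - 0.9)/2 = 0.05, but the DS estimate concentrates samples where they matter, which is the point of the technique.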
Validation of consistency of Mendelian sampling variance.
Tyrisevä, A-M; Fikse, W F; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Lidauer, M H
2018-03-01
Experiences from international sire evaluation indicate that the multiple-trait across-country evaluation method is sensitive to changes in genetic variance over time. Top bulls from birth year classes with inflated genetic variance will benefit, hampering reliable ranking of bulls. However, none of the methods available today enable countries to validate their national evaluation models for heterogeneity of genetic variance. We describe a new validation method to fill this gap comprising the following steps: estimating within-year genetic variances using Mendelian sampling and its prediction error variance, fitting a weighted linear regression between the estimates and the years under study, identifying possible outliers, and defining a 95% empirical confidence interval for a possible trend in the estimates. We tested the specificity and sensitivity of the proposed validation method with simulated data using a real data structure. Moderate (M) and small (S) size populations were simulated under 3 scenarios: a control with homogeneous variance and 2 scenarios with yearly increases in phenotypic variance of 2 and 10%, respectively. Results showed that the new method was able to estimate genetic variance accurately enough to detect bias in genetic variance. Under the control scenario, the trend in genetic variance was practically zero in setting M. Testing cows with an average birth year class size of more than 43,000 in setting M showed that tolerance values are needed for both the trend and the outlier tests to detect only cases with a practical effect in larger data sets. Regardless of the magnitude (yearly increases in phenotypic variance of 2 or 10%) of the generated trend, it deviated statistically significantly from zero in all data replicates for both cows and bulls in setting M. In setting S with a mean of 27 bulls in a year class, the sampling error and thus the probability of a false-positive result clearly increased. Still, overall estimated genetic
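The regression step of the proposed validation can be sketched as follows. This is a schematic reconstruction under assumed inputs (yearly variance estimates and their prediction error variances), not the authors' software; the outlier test and the empirical confidence interval are omitted.

```python
import numpy as np

def variance_trend(years, var_est, pev):
    """Weighted linear regression of yearly genetic-variance estimates on
    birth year, weighting each estimate by the inverse of its prediction
    error variance. Returns (intercept, slope)."""
    w = 1.0 / np.asarray(pev, dtype=float)
    t = np.asarray(years, dtype=float)
    y = np.asarray(var_est, dtype=float)
    tbar = np.average(t, weights=w)
    ybar = np.average(y, weights=w)
    slope = np.sum(w * (t - tbar) * (y - ybar)) / np.sum(w * (t - tbar) ** 2)
    return ybar - slope * tbar, slope

years = np.arange(2000, 2010)
pev = np.full(10, 0.01)                                   # hypothetical PEVs
_, b_flat = variance_trend(years, np.full(10, 1.0), pev)  # homogeneous variance
_, b_trend = variance_trend(years, 1.0 + 0.02 * (years - 2000), pev)  # 2%/year
```

Under the control scenario the fitted slope is zero, while a generated 2% yearly increase is recovered as a slope of 0.02, mirroring the simulation design described above.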
Spectral Ambiguity of Allan Variance
Greenhall, C. A.
1996-01-01
We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
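As an illustration of the quantities involved (a standard textbook definition, not code from the paper), the Allan variance of fractional-frequency data at averaging factor m can be computed as:

```python
import numpy as np

def allan_variance(y, m=1):
    """Non-overlapping Allan variance of fractional-frequency data y at
    averaging factor m (tau = m * tau0): half the mean squared difference
    of consecutive m-sample averages."""
    y = np.asarray(y, dtype=float)
    n = len(y) // m
    avg = y[: n * m].reshape(n, m).mean(axis=1)  # averages over blocks of m
    d = np.diff(avg)
    return 0.5 * np.mean(d * d)
```

Note that the same alternating data give 0.5 at m = 1 but 0 at m = 2: the family of such finite-difference variances carries spectral information, and the paper asks exactly how much of the spectrum it pins down.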
Adaptive ACMS: A robust localized Approximated Component Mode Synthesis Method
Madureira, Alexandre L.; Sarkis, Marcus
2017-01-01
We consider finite element methods of multiscale type to approximate solutions of two-dimensional symmetric elliptic partial differential equations with heterogeneous $L^\\infty$ coefficients. The methods are of Galerkin type and follow the Variational Multiscale and Localized Orthogonal Decomposition (LOD) approaches in the sense that they decouple the spaces into multiscale and fine subspaces. In a first method, the multiscale basis functions are obtained by mapping coarse basis functions, based...
Novel method for detecting the hadronic component of extensive air showers
Gromushkin, D. M.; Volchenko, V. I.; Petrukhin, A. A.; Stenkin, Yu. V.; Stepanov, V. I.; Shchegolev, O. B.; Yashin, I. I.
2015-01-01
A novel method for studying the hadronic component of extensive air showers (EAS) is proposed. The method is based on recording the thermal neutrons accompanying EAS with en-detectors that are sensitive to two EAS components: the electromagnetic (e) component and the hadronic component in the form of neutrons (n). In contrast to the hadron calorimeters used in some arrays, the proposed method makes it possible to record the hadronic component over the whole area of the array. The efficiency of a prototype array consisting of 32 en-detectors was tested over a long period, and some parameters of the neutron EAS component were determined.
New methods for the characterization of pyrocarbon; The two component model of pyrocarbon
Luhleich, H.; Sutterlin, L.; Hoven, H.; Nickel, H.
1972-04-19
In the first part, new experiments to clarify the origin of different pyrocarbon components are described. Three new methods (plasma oxidation, wet oxidation, ultrasonic method) are presented to expose the carbon-black-like component in pyrocarbon deposited in fluidized beds. In the second part, a two-component model of pyrocarbon is proposed and illustrated by examples.
A method for evaluation the activity of the reactor components
Gugiu, E.D.; Roth, Cs.
2003-01-01
The ability to predict the radioactivity levels of reactor components is important from the waste-management point of view as well as for radiation-protection purposes. A special case is represented by research reactors, where one of the major contributions to the radioactivity inventory is due to the experimental devices involved in various research works during the reactor's life. Generally, aluminum and aluminum alloys are used in manufacturing these devices; as a result, the work presented in this paper is focused on the qualitative and quantitative analysis of the radioactive isotopes contained in these materials. A device used for silicon doping by neutron transmutation that was placed near the TRIGA reactor core is investigated. The isotopic content of samples drawn from various points of the device was analyzed by gamma spectrometry using an HPGe detector. Computations using the MCNP5 code are also performed in order to evaluate the reaction rates for all the isotopes and their reactions. The Monte Carlo simulations are performed for a detailed geometry and material composition of the reactor core and the device. The Origen-S code is also used to evaluate the isotopic inventory and the activity values. A detailed analysis regarding the possibility of estimating, by computation and/or by gamma spectrometry, the activity values of the isotopes that are of interest for decommissioning is presented in the paper. (authors)
Optical Methods For Automatic Rating Of Engine Test Components
Pritchard, James R.; Moss, Brian C.
1989-03-01
In recent years, increasing commercial and legislative pressure on automotive engine manufacturers, including increased oil drain intervals, cleaner exhaust emissions and high specific power outputs, has led to increasing demands on lubricating oil performance. Lubricant performance is defined by bench engine tests run under closely controlled conditions. After a test, engines are dismantled and the parts rated for wear and accumulation of deposits. This rating must be carried out consistently in laboratories throughout the world in order to ensure lubricant quality meets the specified standards. To this end, rating technicians evaluate components following closely defined procedures. This process is time consuming, inaccurate and subject to drift, requiring regular recalibration of raters by means of international rating workshops. This paper describes two instruments for automatic rating of engine parts. The first uses a laser to determine the degree of polishing of the engine cylinder bore caused by the reciprocating action of the piston. This instrument has been developed to prototype stage by the NDT Centre at Harwell under contract to Exxon Chemical, and is planned for production within the next twelve months. The second instrument uses red and green filtered light to determine the type, quality and position of deposits formed on the piston surfaces. The latter device has undergone a feasibility study, but no prototype exists.
An efficient method for facial component detection in thermal images
Paul, Michael; Blanik, Nikolai; Blazek, Vladimir; Leonhardt, Steffen
2015-04-01
A method to detect certain regions in thermal images of human faces is presented. In this approach, the following steps are necessary to locate the periorbital and the nose regions: First, the face is segmented from the background by thresholding and morphological filtering. Subsequently, a search region within the face, around its center of mass, is evaluated. Automatically computed temperature thresholds are used per subject and image or image sequence to generate binary images, in which the periorbital regions are located by integral projections. Then, the located positions are used to approximate the nose position. It is possible to track features in the located regions. Therefore, these regions are interesting for different applications like human-machine interaction, biometrics and biomedical imaging. The method is easy to implement and does not rely on any training images or templates. Furthermore, the approach saves processing resources due to simple computations and restricted search regions.
Methods and apparatus for radially compliant component mounting
Bulman, David Edward [Cincinnati, OH; Darkins, Jr., Toby George; Stumpf, James Anthony [Columbus, IN; Schroder, Mark S [Greenville, SC; Lipinski, John Joseph [Simpsonville, SC
2012-03-27
Methods and apparatus for a mounting assembly for a liner of a gas turbine engine combustor are provided. The combustor includes a combustor liner and a radially outer annular flow sleeve. The mounting assembly includes an inner ring surrounding a radially outer surface of the liner and including a plurality of axially extending fingers. The mounting assembly also includes a radially outer ring coupled to the inner ring through a plurality of spacers that extend radially from a radially outer surface of the inner ring to the outer ring.
Baixia Zhang
2016-01-01
Identification of bioactive components is an important area of research in traditional Chinese medicine (TCM) formula research. The reported identification methods consider only the interaction between the components and the target proteins, which is not sufficient to explain the influence of TCM on gene expression. Here, we propose the Initial Transcription Process-based Identification (ITPI) method for the discovery of bioactive components that influence transcription factors (TFs). In this method, genome-wide chip detection technology was used to identify differentially expressed genes (DEGs). The TFs of the DEGs were derived from GeneCards. The components influencing the TFs were derived from STITCH. The bioactive components in the formula were identified by evaluating the molecular similarity between the components in the formula and the components that influence the TFs of the DEGs. Using the formula of Tian-Zhu-San (TZS) as an example, the reliability and limitations of ITPI were examined and 16 bioactive components that influence TFs were identified.
Development on methods for evaluating structure reliability of piping components
Schimpfke, T.; Grebner, H.; Peschke, J.; Sievers, J.
2003-01-01
In the frame of the German reactor safety research program of the Federal Ministry of Economics and Labour, GRS has started to develop an analysis code named PROST (PRObabilistic STructure analysis) for estimating the leak and break probabilities of piping systems in nuclear power plants. The development is based on experience gained with applications of the publicly available US code PRAISE 3.10 (Piping Reliability Analysis Including Seismic Events), which was supplemented by additional features regarding the statistical evaluation and the crack orientation. PROST is designed to be more flexible with respect to changes and extensions. Up to now it can be used for calculating fatigue problems. The paper mentions the main capabilities and theoretical background of the present PROST development and presents a parametric study on the influence of the choice of stress intensity factor method, limit load calculation and statistical evaluation options on the leak probability of an exemplary pipe with a postulated axial crack distribution. Furthermore, the resulting leak probability of an exemplary pipe with a postulated circumferential crack distribution is compared with the results of the modified PRAISE computer program. The intention of this investigation is to show trends; therefore, the resulting absolute values of the probabilities should not be considered realistic evaluations. (author)
Romano, Rosaria; Næs, Tormod; Brockhoff, Per Bruun
2015-01-01
Data from descriptive sensory analysis are essentially three‐way data with assessors, samples and attributes as the three ways in the data set. Because of this, there are several ways that the data can be analysed. The paper focuses on the analysis of sensory characteristics of products while … in the use of the scale with reference to the existing structure of relationships between sensory descriptors. The multivariate assessor model will be tested on a data set from milk. Relations between the proposed model and other multiplicative models like parallel factor analysis and analysis of variance...
The Variance Composition of Firm Growth Rates
Luiz Artur Ledur Brito
2009-04-01
Firms exhibit a wide variability in growth rates. This can be seen as another manifestation of the fact that firms are different from one another in several respects. This study investigated this variability using the variance components technique previously used to decompose the variance of financial performance. The main source of variation in growth rates, responsible for more than 40% of total variance, corresponds to individual, idiosyncratic firm aspects and not to industry, country, or macroeconomic conditions prevailing in specific years. Firm growth, similar to financial performance, is mostly unique to specific firms and not an industry or country related phenomenon. This finding also justifies using growth as an alternative outcome of superior firm resources and as a complementary dimension of competitive advantage. This also links this research with the resource-based view of strategy. Country was the second source of variation with around 10% of total variance. The analysis was done using the Compustat Global database with 80,320 observations, comprising 13,221 companies in 47 countries, covering the years 1994 to 2002. It also compared the variance structure of growth to the variance structure of financial performance in the same sample.
Filatov, Michael; Zou, Wenli; Cremer, Dieter
2013-07-01
A new algorithm for the two-component Normalized Elimination of the Small Component (2cNESC) method is presented and tested in the calculation of spin-orbit (SO) splittings for a series of heavy atoms and their molecules. The 2cNESC is a Dirac-exact method that employs the exact two-component one-electron Hamiltonian and thus leads to exact Dirac SO splittings for one-electron atoms. For many-electron atoms and molecules, the effect of the two-electron SO interaction is modeled by a screened nucleus potential using effective nuclear charges as proposed by Boettger [Phys. Rev. B 62, 7809 (2000), 10.1103/PhysRevB.62.7809]. The use of the screened nucleus potential for the two-electron SO interaction leads to accurate spinor energy splittings, for which the deviations from the accurate Dirac Fock-Coulomb values are on the average far below the deviations observed for other effective one-electron SO operators. For hydrogen halides HX (X = F, Cl, Br, I, At, and Uus) and mercury dihalides HgX2 (X = F, Cl, Br, I) trends in spinor energies and SO splittings as obtained with the 2cNESC method are analyzed and discussed on the basis of coupling schemes and the electronegativity of X.
Nau, Andreas; Scholtes, B.
2014-01-01
Residual stresses can have both detrimental and beneficial consequences for a component's strength and lifetime. Detailed knowledge of the residual stress state is a prerequisite for assessing a component's performance. Mechanical methods for residual stress measurement are classified into non-destructive, destructive and semi-destructive methods. The two commonly used (semi-destructive) mechanical methods are the hole-drilling and the ring-core method. In the context of the reactor safety research of the Federal Ministry of Economic Affairs and Energy (BMWi), two fundamental and interacting weak points of the hole-drilling and the ring-core method are investigated: on the one hand, effects of the geometrical boundary conditions of the components, and on the other hand, influences of plasticity due to notch effects. Both aspects affect the released strain field when the material is removed and, finally, the calculated residual stresses. The first issue is under the responsibility of the Institute of Materials Engineering - Metallic Materials (Kassel University), and the second is investigated by the University of Stuttgart-Otto-Graf-Institut - materials testing institute. Within the framework of this project it could be demonstrated that updated calibration coefficients lead to more reliable residual stress calculations than existing ones. These findings are valid for measurement points on components without geometrical boundary effects such as edges or shoulders. The reasons are highly developed finite-element software packages and the possibility of modelling the measurement point (hole geometry, layout of the strain gauges) and its vicinity in more detail. Special challenges are multi-axial residual stress depth distributions and the geometry of components featuring edges and claddings. Unlike existing analyses considering uni-axial and homogeneous stress states, bi
Decomposition of variance in terms of conditional means
Alessandro Figà Talamanca
2013-05-01
Two different sets of data are used to test an apparently new approach to the analysis of the variance of a numerical variable which depends on qualitative variables. We suggest that this approach be used to complement other existing techniques for studying the interdependence of the variables involved. According to our method, the variance is expressed as a sum of orthogonal components, obtained as differences of conditional means with respect to the qualitative characters. The resulting expression for the variance depends on the ordering in which the characters are considered. We suggest an algorithm which leads to an ordering that is deemed natural. The first set of data concerns the scores achieved by a population of students on an entrance examination based on a multiple-choice test with 30 questions. In this case the qualitative characters are dyadic and correspond to a correct or incorrect answer to each question. The second set of data concerns the delay in obtaining the degree for a population of graduates of Italian universities. The variance in this case is analyzed with respect to a set of seven specific qualitative characters of the population studied (gender, previous education, working condition, parents' educational level, field of study, etc.).
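The decomposition described, variance as a sum of orthogonal components built from differences of successive conditional means, can be sketched for a given ordering of the characters. This is an illustrative sketch with made-up data, not the authors' algorithm or ordering heuristic.

```python
import numpy as np
import pandas as pd

def variance_components(df, y, chars):
    """Decompose the (population) variance of df[y] along an ordering of
    qualitative characters. Component i is the mean squared difference
    between conditional means given chars[:i+1] and given chars[:i];
    the last component is the residual variance within the finest cells."""
    prev = pd.Series(df[y].mean(), index=df.index)   # grand mean
    parts = []
    for i in range(len(chars)):
        cur = df.groupby(chars[: i + 1])[y].transform("mean")
        parts.append(((cur - prev) ** 2).mean())     # orthogonal increment
        prev = cur
    parts.append(((df[y] - prev) ** 2).mean())       # within-cell residual
    return parts

df = pd.DataFrame({"A": [0, 0, 0, 0, 1, 1, 1, 1],
                   "B": [0, 0, 1, 1, 0, 0, 1, 1],
                   "y": [1, 2, 3, 4, 5, 6, 7, 8]})
parts = variance_components(df, "y", ["A", "B"])
```

Because the increments are orthogonal, the components sum exactly to the total variance; permuting `chars` changes the individual components but not the sum, which is the ordering dependence noted in the abstract.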
A general mixed boundary model reduction method for component mode synthesis
Voormeeren, S.N.; Van der Valk, P.L.C.; Rixen, D.J.
2010-01-01
A classic issue in component mode synthesis (CMS) methods is the choice of fixed or free boundary conditions at the interface degrees of freedom (DoF) and of the associated vibration modes in the component reduction bases. In this paper, a novel mixed-boundary CMS method called the “Mixed
Concept of a new method for fatigue monitoring of nuclear power plant components
Zafosnik, M.; Cizelj, L.
2007-01-01
Fatigue is one of the well-understood ageing mechanisms affecting mechanical components in many industrial facilities, including nuclear power plants. Operational experience of nuclear power plants worldwide has to date confirmed the adequate design of safety-related components against fatigue. In some cases, however, for example when plant life extension is envisioned, it may be very useful to monitor the remaining fatigue life of safety-related components. Nuclear power plant components are classified into safety classes according to their importance in mitigating the consequences of hypothetical accidents. The service life of components subjected to fatigue loading can be estimated with a usage factor U_k. A concept for a new method aimed both at monitoring the current state of the component and at predicting its remaining lifetime under life-extension conditions is presented. The method is based on the determination of partial usage factors of components, in which operating transients are considered and compared to design transients. (author)
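The abstract does not define the usage factor; a common way such factors are accumulated is Miner's linear damage rule, sketched below. The transient types and cycle counts are hypothetical, not from the paper.

```python
def usage_factor(cycles):
    """Cumulative fatigue usage factor by Miner's rule:
    U = sum over transient types of n_i / N_i, where n_i is the number of
    experienced cycles and N_i the allowable cycles at that stress range.
    The component's fatigue allowance is considered exhausted as U -> 1."""
    return sum(n / N for n, N in cycles)

# (hypothetical operating-transient counts: (experienced, allowable))
U = usage_factor([(200, 10_000), (50, 2_000), (5, 400)])
```

Comparing operating transients against the design transients, as the proposed method does, amounts to tracking how fast partial terms of this sum accumulate relative to the design assumption.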
A virtual component method in numerical computation of cascades for isotope separation
Zeng Shi; Cheng Lu
2014-01-01
The analysis, optimization, design and operation of cascades for isotope separation involve computations of cascades. In the analytical treatment of cascades, virtual components are a very useful tool. For complicated cascades, numerical analysis has to be employed. However, bound to the conventional idea that the concentration of a virtual component should be vanishingly small, virtual components have not yet been applied to numerical computations. Here a way of introducing virtual components into numerical computations is elucidated, and its application to a few types of cascades is explained and tested by means of numerical experiments. The results show that the concentration of a virtual component is not restrained at all by the 'vanishingly small' idea. For the same requirements on cascades, the cascades obtained do not depend on the concentrations of the virtual components. (authors)
Driss Sarsri
2014-05-01
In this paper, we propose a method to calculate the first two moments (mean and variance) of the structural dynamic response of a structure with uncertain variables subjected to random excitation. For this, the Newmark method is used to transform the equation of motion of the structure into a quasistatic equilibrium equation in the time domain. The Neumann expansion method is coupled with Monte Carlo simulations to calculate the statistical values of the random response. The use of modal synthesis methods can reduce the dimensions of the model before integration of the equation of motion. Numerical applications have been developed to highlight the effectiveness of the method in analyzing the stochastic response of large structures.
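The Newmark step that turns the equation of motion into a quasistatic equilibrium at each time instant can be sketched for a single deterministic degree of freedom. This is a generic textbook form (average-acceleration parameters beta = 1/4, gamma = 1/2), not the authors' code, and it omits the uncertainty propagation.

```python
import numpy as np

def newmark_sdof(m, c, k, f, dt, beta=0.25, gamma=0.5, u0=0.0, v0=0.0):
    """Newmark integration of m*u'' + c*u' + k*u = f(t) for one DoF.
    Each step solves a quasistatic equation keff * u[i+1] = effective force."""
    n = len(f)
    u = np.zeros(n); v = np.zeros(n); a = np.zeros(n)
    u[0], v[0] = u0, v0
    a[0] = (f[0] - c * v0 - k * u0) / m
    keff = m / (beta * dt**2) + gamma * c / (beta * dt) + k
    for i in range(n - 1):
        rhs = (f[i + 1]
               + m * (u[i] / (beta * dt**2) + v[i] / (beta * dt)
                      + (0.5 / beta - 1) * a[i])
               + c * (gamma * u[i] / (beta * dt) + (gamma / beta - 1) * v[i]
                      + dt * (gamma / (2 * beta) - 1) * a[i]))
        u[i + 1] = rhs / keff
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt**2)
                    - v[i] / (beta * dt) - (0.5 / beta - 1) * a[i])
        v[i + 1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i + 1])
    return u, v, a
```

In the stochastic setting described above, this deterministic kernel would be evaluated repeatedly inside the Neumann/Monte Carlo loop with sampled values of the uncertain parameters.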
A novel method for detecting second harmonic ultrasonic components generated from fastened bolts
Fukuda, Makoto; Imano, Kazuhiko
2012-09-01
This study examines the use of ultrasonic second harmonic components in the quality control of bolt-fastened structures. An improved method for detecting the second harmonic components from a bolt fastened with a nut, using the transmission method, is constructed. A hexagon-head iron bolt (12 mm diameter and 25 mm long) was used in the experiments. The bolt was fastened using a digital torque wrench. The second harmonic component increased by approximately 20 dB after the bolt was fastened. The sources of the second harmonic components were contact acoustic nonlinearity at the screw-thread interfaces between bolt and nut, and plastic deformation in the bolt caused by fastening. This result was improved by approximately 10 dB compared with our previous method. Consequently, the usefulness of the novel method for detecting second harmonic ultrasonic components generated from a fastened bolt was confirmed.
Improvement of extraction method of coagulation active components from Moringa oleifera seed
Okuda, Tetsuji; Baes, Aloysius U.; Nishijima, Wataru; Okada, Mitsumasa
1999-01-01
A new method for the extraction of the active coagulation component from Moringa oleifera seeds was developed and compared with the ordinary water extraction method (MOC–DW). In the new method, a 1.0 mol l⁻¹ solution of sodium chloride (MOC–SC) and other salts were used for extraction of the active coagulation component. Batch coagulation experiments were conducted using 500 ml of low-turbidity water (50 NTU). Coagulation efficiencies were evaluated based on the dosage required to remove kaolinite...
Research on criticality analysis method of CNC machine tools components under fault rate correlation
Gui-xiang, Shen; Xian-zhuo, Zhao; Zhang, Ying-zhi; Chen-yu, Han
2018-02-01
In order to determine the key components of CNC machine tools under fault rate correlation, a system component criticality analysis method is proposed. Based on fault mechanism analysis, the component fault relations are determined, and an adjacency matrix is introduced to describe them. Then, the fault structure relations are arranged hierarchically using the interpretive structure model (ISM). Assuming that the propagation of faults obeys a Markov process, the fault association matrix is described and transformed, and the PageRank algorithm is used to determine the relative influence values; combined with the component fault rate under time correlation, a comprehensive fault rate can be obtained. Based on the fault mode frequency and fault influence, the criticality of the components under fault rate correlation is determined, and the key components are identified, providing a sound basis for formulating reliability assurance measures. Finally, taking machining centers as an example, the effectiveness of the method is verified.
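The ranking step can be sketched with a plain power-iteration PageRank over a fault-propagation adjacency matrix. This is a generic sketch; the damping value and the example graph are assumptions, not taken from the paper.

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-12):
    """Power-iteration PageRank: adj[i, j] = 1 if a fault of component i
    can induce a fault of component j. Returns relative influence values."""
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    rowsum = A.sum(axis=1, keepdims=True)
    # Row-stochastic transition matrix; components with no outgoing
    # fault links spread their influence uniformly.
    P = np.where(rowsum > 0, A / np.where(rowsum > 0, rowsum, 1.0), 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(1000):
        r_new = (1 - d) / n + d * (P.T @ r)
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r_new

# Hypothetical fault-propagation graph: 0 -> 1, 0 -> 2, 1 -> 2
adj = np.array([[0, 1, 1],
                [0, 0, 1],
                [0, 0, 0]])
r = pagerank(adj)
```

A component that many fault chains feed into (here component 2) receives the highest influence value, which is the intuition behind using PageRank for criticality.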
Method of forming a ceramic matrix composite and a ceramic matrix component
de Diego, Peter; Zhang, James
2017-05-30
A method of forming a ceramic matrix composite component includes providing a formed ceramic member having a cavity and filling at least a portion of the cavity with a ceramic foam. The ceramic foam is deposited on a barrier layer covering at least one internal passage of the cavity. The method includes processing the formed ceramic member and ceramic foam to obtain a ceramic matrix composite component. Also provided are a method of forming a ceramic matrix composite blade and a ceramic matrix composite component.
A simple component-connection method for building binary decision diagrams encoding a fault tree
Way, Y.-S.; Hsia, D.-Y.
2000-01-01
A simple new method for building binary decision diagrams (BDDs) encoding a fault tree (FT) is provided in this study. We first decompose the FT into FT-components, each of which is a single-descendant (SD) gate sequence. Following the node-connection rule, the BDD-component encoding an SD FT-component can be found to be an SD node sequence. By successively connecting the BDD-components one by one, the BDD for the entire FT is obtained. During node connection and component connection, reduction rules might need to be applied. An example FT is used throughout the article to explain the procedure step by step. The proposed method is a hybrid one for FT analysis: some algorithms or techniques used in conventional FT analysis or in the newer BDD approach may be applied to our case, and the ideas presented in this article may in turn be useful to both methods.
Rising, M. E.; Prinja, A. K. [Univ. of New Mexico, Dept. of Chemical and Nuclear Engineering, Albuquerque, NM 87131 (United States)
2012-07-01
A critical neutron transport problem with random material properties is introduced. The total cross section and the average neutron multiplicity are assumed to be uncertain, characterized by their mean and variance with a log-normal distribution. The average neutron multiplicity and the total cross section are assumed to be uncorrelated, and the material properties of differing materials are also assumed to be uncorrelated. The principal component analysis method is used to decompose the covariance matrix into eigenvalues and eigenvectors, from which 'realizations' of the material properties can be computed. A simple brute-force Monte Carlo sampling of the decomposed covariance matrix is employed to obtain a benchmark result for each test problem. In order to save computational time and to characterize the moments and probability density function of the multiplication factor, the polynomial chaos expansion method is employed along with the stochastic collocation method. A Gauss-Hermite quadrature set is convolved into a multidimensional tensor-product quadrature set and is successfully used to compute the polynomial chaos expansion coefficients of the multiplication factor. Finally, for a particular critical fuel pin assembly, the appropriate number of random variables and the polynomial expansion order are investigated. (authors)
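The PCA-based sampling of correlated log-normal properties can be sketched as follows; the two-variable mean vector and covariance matrix are illustrative placeholders, not the paper's nuclear data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mean vector and covariance matrix of the *log* of the
# uncertain material properties (log-normal => Gaussian in log space).
mu = np.array([0.0, 0.5])
cov = np.array([[0.04, 0.00],
                [0.00, 0.09]])        # uncorrelated, as assumed in the paper

# Principal component decomposition of the covariance matrix.
eigvals, eigvecs = np.linalg.eigh(cov)

# Draw standard normal samples along each principal direction and map
# them back: log(x) = mu + V * sqrt(L) * xi, then exponentiate.
n_samples = 10000
xi = rng.standard_normal((n_samples, len(mu)))
log_x = mu + (xi * np.sqrt(eigvals)) @ eigvecs.T
realizations = np.exp(log_x)          # log-normal property realizations
```

Each row of `realizations` is one 'realization' of the material properties that could be fed to a transport solve in a brute-force Monte Carlo benchmark.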
Review of seismic tests for qualification of components and validation of methods
Buland, P.; Gantenbein, F.; Gibert, R.J.; Hoffmann, A.; Queval, J.C.
1988-01-01
Seismic tests have been performed at CEA-DEMT for many years in order to demonstrate the qualification of components and to provide experimental validation of the calculation methods used for the seismic design of components. The paper presents examples of these two types of tests, a description of the existing facilities, and details of the new TAMARIS facility under construction. (author)
2015-01-01
and thereby the resulting inner structure of the component 1 is arranged in a controlled and reproducible manner. The sacrificial material 2 and possibly also the component material 3 may e.g. be arranged by use of a 3D-printer or manually. The method may e.g. be used to manufacture a three...
Review of seismic tests for qualification of components and validation of methods
Buland, P; Gantenbein, F; Gibert, R J; Hoffmann, A; Queval, J C [CEA-CEN SACLAY-DEMT, Gif sur Yvette-Cedex (France)
1988-07-01
Seismic tests have been performed at CEA-DEMT for many years in order to demonstrate the qualification of components and to provide experimental validation of the calculation methods used for the seismic design of components. The paper presents examples of these two types of tests, a description of the existing facilities, and details of the new TAMARIS facility under construction. (author)
Monaghan, Philip Harold; Delvaux, John McConnell; Taxacher, Glenn Curtis
2015-06-09
A pre-form CMC cavity and method of forming pre-form CMC cavity for a ceramic matrix component includes providing a mandrel, applying a base ply to the mandrel, laying-up at least one CMC ply on the base ply, removing the mandrel, and densifying the base ply and the at least one CMC ply. The remaining densified base ply and at least one CMC ply form a ceramic matrix component having a desired geometry and a cavity formed therein. Also provided is a method of forming a CMC component.
NDE of stresses in thick-walled components by ultrasonic methods
Goebbels, K.; Pitsch, H.; Schneider, E.; Nowack, H.
1985-01-01
The possibility of measuring stresses - especially residual stresses - by ultrasonic methods was presented at the 4th and 5th International Conferences on NDE in the Nuclear Industry. This contribution presents results of several applications to thick-walled components such as turbines and generators for power plants. The measurement technique, using linearly polarized shear waves, allows one to characterize the homogeneity of the residual stress state along and around cylindrically shaped components. Some important results show that the stress distribution integrated over the cross section of the component does not in every case follow the simple relations derived by stress analysts. Conclusions regarding the stress state inside the components are discussed.
Advanced Materials Test Methods for Improved Life Prediction of Turbine Engine Components
Stubbs, Jack
2000-01-01
Phase I final report developed under an SBIR contract for Topic # AF00-149, "Durability of Turbine Engine Materials/Advanced Material Test Methods for Improved Life Prediction of Turbine Engine Components...
Methods for designing building envelope components prepared for repair and maintenance
Rudbeck, Claus Christian
2000-01-01
... the deterministic and probabilistic approach. Based on an investigation of the data requirements, user-friendliness and supposed accuracy (the accuracy of the different methods has not been evaluated due to the absence of field data), the method which combines the deterministic factor method with statistical ... to be prepared for repair and maintenance. Both of these components are insulation systems for flat roofs and low-slope roofs; components where repair or replacement is very expensive if the roofing material fails in its function. The principle of both roofing insulation systems is that the insulation can ... of issues which are specified below: further development of methods for designing building envelope components prepared for repair and maintenance, and ways of tracking and predicting performance through time once the components have been designed, implemented in a building design and built ...
V. E. Strizhius
2015-01-01
Methods for the approximate estimation of the fatigue durability of typical elements of composite airframe components, which can be recommended for application at the outline design stage of an airplane, are developed and presented.
Yoshizawa, Terutaka; Zou, Wenli; Cremer, Dieter
2017-04-01
A new method for calculating nuclear magnetic resonance shielding constants of relativistic atoms based on the two-component (2c), spin-orbit-coupling-including, Dirac-exact NESC (Normalized Elimination of the Small Component) approach is developed, in which each term of the diamagnetic and paramagnetic contributions to the isotropic shielding constant σ_iso is expressed in terms of analytical energy derivatives with respect to the magnetic field B and the nuclear magnetic moment μ. The picture change caused by renormalization of the wave function is correctly described. 2c-NESC/HF (Hartree-Fock) results for the σ_iso values of 13 atoms with a closed-shell ground state deviate from 4c-DHF (Dirac-HF) values by 0.01%-0.76%. Since the 2-electron part is effectively calculated using a modified screened nuclear shielding approach, the calculation is efficient and based on a series of matrix manipulations scaling as (2M)^3 (M: number of basis functions).
Saedtler, E.
1981-01-01
The method for controlling the vibration behaviour of primary circuit components, or for general system control, is a combination of methods from statistical systems theory, optimal filter theory, statistical decision theory and pattern recognition. It is appropriate for the automatic control of complex systems and stochastic events. (DG)
Methods of Si based ceramic components volatilization control in a gas turbine engine
Garcia-Crespo, Andres Jose; Delvaux, John; Dion Ouellet, Noemie
2016-09-06
A method of controlling volatilization of silicon based components in a gas turbine engine includes measuring, estimating and/or predicting a variable related to operation of the gas turbine engine; correlating the variable to determine an amount of silicon to control volatilization of the silicon based components in the gas turbine engine; and injecting silicon into the gas turbine engine to control volatilization of the silicon based components. A gas turbine with a compressor, combustion system, turbine section and silicon injection system may be controlled by a controller that implements the control method.
Methods of producing epoxides from alkenes using a two-component catalyst system
Kung, Mayfair C.; Kung, Harold H.; Jiang, Jian
2013-07-09
Methods for the epoxidation of alkenes are provided. The methods include the steps of exposing the alkene to a two-component catalyst system in an aqueous solution in the presence of carbon monoxide and molecular oxygen under conditions in which the alkene is epoxidized. The two-component catalyst system comprises a first catalyst that generates peroxides or peroxy intermediates during oxidation of CO with molecular oxygen and a second catalyst that catalyzes the epoxidation of the alkene using the peroxides or peroxy intermediates. A catalyst system composed of particles of suspended gold and titanium silicalite is one example of a suitable two-component catalyst system.
Development of computational methods of design by analysis for pressure vessel components
Bao Shiyi; Zhou Yu; He Shuyan; Wu Honglin
2005-01-01
Stress classification is not only one of the key steps when a pressure vessel component is designed by analysis, but also a difficulty which has long puzzled engineers and designers. At present, several computational methods of design by analysis, such as Stress Equivalent Linearization, the Two-Step Approach, the Primary Structure method, the Elastic Compensation method and the GLOSS R-Node method, have been developed and applied for calculating and categorizing the stress field of pressure vessel components. Moreover, the ASME code also gives an inelastic method of design by analysis for limiting gross plastic deformation only. When pressure vessel components are designed by analysis, there are sometimes large differences between the results obtained with the different calculation and analysis methods mentioned above. This is the main reason limiting the wide application of the design-by-analysis approach. Recently, a new approach, presented in the new proposal of a European standard, CEN's unfired pressure vessel standard EN 13445-3, tries to avoid the problems of stress classification by directly analyzing the various failure mechanisms of the pressure vessel structure on the basis of elastic-plastic theory. In this paper, some of the stress classification methods mentioned above are described briefly. The computational methods cited in the European pressure vessel standard, such as the Deviatoric Map, and nonlinear analysis methods (plastic analysis and limit analysis) are also outlined. Furthermore, the characteristics of the computational methods of design by analysis are summarized to aid selection of the proper computational method when designing a pressure vessel component by analysis. (authors)
Jovan Putranda
2016-09-01
Water quality monitoring is prone to error in its recording or measuring process. Monitoring of river water quality aims not only to recognize water quality dynamics, but also to evaluate the data in order to create river management and water pollution policies, so as to maintain human health and sanitation requirements and to preserve biodiversity. Evaluation of water quality monitoring needs to start by identifying the important water quality parameters. This research aimed to identify the significant parameters by using two transformation or standardization methods on the water quality data: the river Water Quality Index (WQI; Indeks Kualitas Air Sungai, IKAs) method, and standardization to mean 0 and variance 1, so that the variability of the water quality parameters could be aggregated with one another. Both methods were applied to water quality monitoring data whose validity and reliability had been tested. Principal Component Analysis (PCA; Analisa Komponen Utama, AKU), with the help of the Scilab software, was used to process the secondary data on the water quality parameters of the Gadjah Wong river in 2004-2013. The Scilab result was cross-examined with the result from the Excel-based Biplot Add-In software. The research showed that only 18 of the total 35 water quality parameters have passable data quality. The two transformation or standardization methods gave different types and numbers of significant parameters. With standardization to mean 0 and variance 1, the significant water quality parameters, relative to the mean concentration of each parameter, were TDS, SO4, EC, TSS, NO3N, COD, BOD5, grease/oil and NH3N. With the river WQI standardization, the significant water quality parameters showed the level of
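The mean-0, variance-1 standardization followed by PCA described above can be sketched with NumPy; the data below are random placeholders, not the Gadjah Wong measurements:

```python
import numpy as np

rng = np.random.default_rng(42)
# Placeholder "water quality parameters" on very different scales.
X = rng.normal(size=(120, 5)) * [1, 10, 100, 2, 50]

# Standardize each parameter to mean 0 and variance 1 so that
# differently scaled parameters can be aggregated with one another.
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# PCA via eigendecomposition of the correlation matrix (= covariance of Z).
C = np.cov(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]            # sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

scores = Z @ eigvecs                         # principal component scores
explained = eigvals / eigvals.sum()          # variance explained per component
```

Parameters with large loadings (entries of `eigvecs`) on the leading components would then be flagged as significant.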
Saccenti, E.; Camacho, J.
2015-01-01
Principal component analysis is one of the most commonly used multivariate tools to describe and summarize data. Determining the optimal number of components in a principal component model is a fundamental problem in many fields of application. In this paper we compare the performance of several
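A common, simple criterion among those compared in such studies is the cumulative explained-variance threshold; a minimal sketch with made-up eigenvalues (the paper itself compares more sophisticated selection methods):

```python
import numpy as np

def n_components_for(eigvals, threshold=0.90):
    """Smallest k whose cumulative explained-variance ratio >= threshold."""
    ratios = np.sort(eigvals)[::-1] / np.sum(eigvals)
    k = np.searchsorted(np.cumsum(ratios), threshold) + 1
    return int(min(k, len(ratios)))

# Example: eigenvalues of some hypothetical covariance matrix.
eigvals = np.array([6.0, 2.0, 1.0, 0.6, 0.4])
k = n_components_for(eigvals, threshold=0.85)   # first three explain 90%
```

Here the first three components explain 60% + 20% + 10% = 90% of the variance, so `k` is 3 at an 85% threshold.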
R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization.
Dazard, Jean-Eudes; Xu, Hua; Rao, J Sunil
2011-01-01
We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited to the difficult problems posed by high-dimensional multivariate datasets (the p ≫ n paradigm), such as 'omics'-type data, in which the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features, including: (i) a normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real 'omics' test datasets, (v) a computationally efficient implementation, using C interfacing, with an option for parallel computing, and (vi) a manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality needs to be handled by the end-user. It is available as an R package, called MVR ('Mean-Variance Regularization'), downloadable from CRAN.
New component-based normalization method to correct PET system models
Kinouchi, Shoko; Miyoshi, Yuji; Suga, Mikio; Yamaya, Taiga; Yoshida, Eiji; Nishikido, Fumihiko; Tashima, Hideaki
2011-01-01
Normalization correction is necessary to obtain high-quality reconstructed images in positron emission tomography (PET). There are two basic types of normalization methods: the direct method and component-based methods. The former suffers from the problem that a huge count number is required in the blank scan data. The latter methods have therefore been proposed to obtain normalization coefficients of high statistical accuracy from a small count number in the blank scan data. In iterative image reconstruction methods, on the other hand, the quality of the reconstructed images depends on the accuracy of the system model. Therefore, the normalization weighting approach, in which normalization coefficients are applied directly to the system matrix instead of to a sinogram, has been proposed. In this paper, we propose a new component-based normalization method to correct system model accuracy. In the proposed method, two components are defined and calculated iteratively in such a way as to minimize the errors of the system model. To compare the proposed method and the direct method, we applied both to our small OpenPET prototype system. We achieved acceptable statistical accuracy of the normalization coefficients while reducing the count number of the blank scan data to one-fortieth of that required by the direct method. (author)
Eduard, W
1996-09-01
Exposure to micro-organisms can be measured by different methods. Traditionally, viable methods and light microscopy have been used for detection of micro-organisms. Most viable methods measure micro-organisms that are able to grow in culture, and these methods are also common for the identification of micro-organisms. More recently, non-viable methods have been developed for the measurement of bioaerosol components originating from micro-organisms that are based on microscopic techniques, bioassays, immunoassays and chemical methods. These methods are important for the assessment of exposure to bioaerosols in work environments as non-infectious micro-organisms and microbial components may cause allergic and toxic reactions independent of viability. It is not clear to what extent micro-organisms should be identified because exposure-response data are limited and many different micro-organisms and microbial components may cause similar health effects. Viable methods have also been used in indoor environments for the detection of specific organisms as markers of indoor growth of micro-organisms. At present, the validity of measurement methods can only be assessed by comparative laboratory and field studies because standard materials of microbial bioaerosol components are not available. Systematic errors may occur especially when results obtained by different methods are compared. Differences between laboratories that use the same methods may also occur as quality assurance schemes of analytical methods for bioaerosol components do not exist. Measurement methods may also have poor precision, especially the viable methods. It therefore seems difficult to meet the criteria for accuracy of measurement methods of workplace exposure that have recently been adopted by the CEN. Risk assessment is limited by the lack of generally accepted reference values or guidelines for microbial bioaerosol components. The cost of measurements of exposure to microbial bioaerosol components
A feeder protection method against the phase-phase fault using symmetrical components
Ciontea, Catalin-Iosif; Bak, Claus Leth; Blaabjerg, Frede
2017-01-01
The method of symmetrical components simplifies the analysis of an electric circuit during a fault and represents an important tool for protection engineers. In this paper, the symmetrical components of the fault current are used in a new feeder protection method for maritime applications ... generation and relatively reduced short-circuit currents, thus resembling the electric network on a ship. The simulation results demonstrate that the proposed method of protection provides improved performance compared to conventional overcurrent relays in a radial feeder with variable short...
Variance function estimation for immunoassays
Raab, G.M.; Thompson, R.; McKenzie, I.
1980-01-01
A computer program is described which implements a recently described, modified likelihood method of determining an appropriate weighting function to use when fitting immunoassay dose-response curves. The relationship between the variance of the response and its mean value is assumed to have an exponential form, and the best fit to this model is determined from the within-set variability of many small sets of repeated measurements. The program estimates the parameter of the exponential function with its estimated standard error, and tests the fit of the experimental data to the proposed model. Output options include a list of the actual and fitted standard deviation of the set of responses, a plot of actual and fitted standard deviation against the mean response, and an ordered list of the 10 sets of data with the largest ratios of actual to fitted standard deviation. The program has been designed for a laboratory user without computing or statistical expertise. The test-of-fit has proved valuable for identifying outlying responses, which may be excluded from further analysis by being set to negative values in the input file. (Auth.)
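A frequently used 'exponential' variance-mean model in immunoassay work is the power-of-the-mean form Var(y) = φ·μ^θ. The sketch below estimates θ by a crude log-log regression on simulated replicate sets, purely for illustration (the paper's modified likelihood estimator, and possibly its exact model form, differ):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate many small sets of replicate immunoassay responses whose
# variance follows an assumed power-of-the-mean law: Var = phi * mean**theta.
phi, theta = 0.02, 1.8
means = rng.uniform(5, 500, size=200)
sets = [rng.normal(m, np.sqrt(phi * m**theta), size=4) for m in means]

# Within-set sample means and variances.
m_hat = np.array([s.mean() for s in sets])
v_hat = np.array([s.var(ddof=1) for s in sets])

# Crude estimate of theta: slope of log-variance vs log-mean
# (the paper instead uses a modified likelihood estimator).
slope, intercept = np.polyfit(np.log(m_hat), np.log(v_hat), 1)
theta_est = slope
```

The fitted variance function 1/Var(μ) would then supply the weights for weighted least-squares fitting of the dose-response curve.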
Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria
Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong
2017-08-01
In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated, using the perturbation method, the response surface method, the Edgeworth series and the sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability-analysis-based finite element modeling in engineering practice.
29 CFR 1926.405 - Wiring methods, components, and equipment for general use.
2010-07-01
... Electrical Installation Safety Requirements § 1926.405 Wiring methods, components, and equipment for general... lighting wiring methods which may be of a class less than would be required for a permanent installation... subpart for permanent wiring shall apply to temporary wiring installations. Temporary wiring shall be...
Harmonic Stability Analysis of Offshore Wind Farm with Component Connection Method
Hou, Peng; Ebrahimzadeh, Esmaeil; Wang, Xiongfei
2017-01-01
In this paper, an eigenvalue-based harmonic stability analysis method for offshore wind farms is proposed. Considering the internal cable connection layout, a component connection method (CCM) is adopted to divide the system into individual blocks, such as the current controllers of converters, LCL filters...
Method to map individual electromagnetic field components inside a photonic crystal
Denis, T.; Reijnders, B.; Lee, J.H.H.; van der Slot, Petrus J.M.; Vos, Willem L.; Boller, Klaus J.
2012-01-01
We present a method to map the absolute electromagnetic field strength inside photonic crystals. We apply the method to map the dominant electric field component Ez of a two-dimensional photonic crystal slab at microwave frequencies. The slab is placed between two mirrors to select Bloch standing
Method for bonding a thermoplastic polymer to a thermosetting polymer component
Van Tooren, M.J.L.
2012-01-01
The invention relates to a method for bonding a thermoplastic polymer to a thermosetting polymer component, the thermoplastic polymer having a melting temperature that exceeds the curing temperature of the thermosetting polymer. The method comprises the steps of providing a cured thermosetting
Fault Diagnosis Method Based on Information Entropy and Relative Principal Component Analysis
Xiaoming Xu
2017-01-01
In traditional principal component analysis (PCA), because the influence of the differing dimensions of the variables in the system is neglected, the selected principal components (PCs) often fail to be representative. While relative-transformation PCA is able to solve this problem, it is not easy to calculate the weight for each characteristic variable. To address this, this paper proposes a fault diagnosis method based on information entropy and relative principal component analysis. First, the algorithm calculates the information entropy of each characteristic variable in the original dataset based on the information gain algorithm. Second, it standardizes the dimension of every variable in the dataset. Then, according to the information entropy, it allocates a weight to each standardized characteristic variable. Finally, it utilizes the established relative principal component model for fault diagnosis. Furthermore, simulation experiments based on the Tennessee Eastman process and Wine datasets demonstrate the feasibility and effectiveness of the new method.
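The standardize-then-weight-by-entropy idea can be sketched as follows; the histogram-based entropy estimate and the placeholder data are illustrative assumptions, since the paper's exact information-gain formulation is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(7)
# Placeholder process variables on different scales (dimensions).
X = rng.normal(size=(500, 4)) * [1, 3, 0.5, 10]

def shannon_entropy(x, bins=16):
    """Histogram-based Shannon entropy (in bits) of one variable."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Standardize every variable's dimension, then weight by entropy.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
H = np.array([shannon_entropy(Z[:, j]) for j in range(Z.shape[1])])
w = H / H.sum()                  # entropy-based weight per variable
Zw = Z * w                       # weighted data fed to the relative-PC model
```

The weighted matrix `Zw` would then replace the raw data in the subsequent relative principal component model.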
Variance decomposition-based sensitivity analysis via neural networks
Marseguerra, Marzio; Masini, Riccardo; Zio, Enrico; Cojazzi, Giacomo
2003-01-01
This paper illustrates a method for efficiently performing multiparametric sensitivity analyses of the reliability model of a given system. These analyses are of great importance for the identification of critical components in highly hazardous plants, such as nuclear or chemical ones, thus providing significant insights for their risk-based design and management. The technique used to quantify the importance of a component parameter with respect to the system model is based on a classical decomposition of the variance. When the model of the system is realistically complicated (e.g. by aging, stand-by, maintenance, etc.), its analytical evaluation soon becomes impractical and one is better off resorting to Monte Carlo simulation techniques, which, however, can be computationally burdensome. Therefore, since the variance decomposition method requires a large number of system evaluations, each one to be performed by Monte Carlo, the need arises for possibly substituting the Monte Carlo simulation model with a fast, approximate algorithm. Here we investigate an approach which makes use of neural networks, appropriately trained on the results of a Monte Carlo system reliability/availability evaluation, to quickly provide, with reasonable approximation, the values of the quantities of interest for the sensitivity analyses. The work was a joint effort between the Department of Nuclear Engineering of the Polytechnic of Milan, Italy, and the Institute for Systems, Informatics and Safety, Nuclear Safety Unit of the Joint Research Centre in Ispra, Italy, which sponsored the project.
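The classical first-order variance decomposition S_i = Var(E[Y|X_i]) / Var(Y) underlying the importance measure can be sketched with a cheap analytic stand-in for the reliability model (the neural-network surrogate is omitted, and the toy function below is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

def model(x1, x2):
    # Cheap analytic stand-in for the system reliability model.
    return x1 + 0.5 * x2**2 + 0.1 * x1 * x2

# First-order sensitivity S_i = Var(E[Y|X_i]) / Var(Y), estimated by a
# brute-force double loop over U(0,1) inputs (in the paper, the costly
# inner Monte Carlo evaluations are replaced by a trained neural network).
n_outer, n_inner = 400, 400
x1o = rng.uniform(0, 1, n_outer)
x2o = rng.uniform(0, 1, n_outer)

cond_mean_1 = np.array([model(a, rng.uniform(0, 1, n_inner)).mean() for a in x1o])
cond_mean_2 = np.array([model(rng.uniform(0, 1, n_inner), b).mean() for b in x2o])

y = model(rng.uniform(0, 1, 200000), rng.uniform(0, 1, 200000))
S1 = cond_mean_1.var() / y.var()     # importance of parameter x1
S2 = cond_mean_2.var() / y.var()     # importance of parameter x2
```

For this toy function, x1 carries most of the output variance, so S1 comes out well above S2.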
Law, Emily F; Beals-Erickson, Sarah E; Fisher, Emma; Lang, Emily A; Palermo, Tonya M
2017-01-01
Internet-delivered treatment has the potential to expand access to evidence-based cognitive-behavioral therapy (CBT) for pediatric headache, and has demonstrated efficacy in small trials for some youth with headache. We used a mixed methods approach to identify effective components of CBT for this population. In Study 1, component profile analysis identified common interventions delivered in published RCTs of effective CBT protocols for pediatric headache delivered face-to-face or via the Internet. We identified a core set of three treatment components that were common across face-to-face and Internet protocols: 1) headache education, 2) relaxation training, and 3) cognitive interventions. Biofeedback was identified as an additional core treatment component delivered in face-to-face protocols only. In Study 2, we conducted qualitative interviews to describe the perspectives of youth with headache and their parents on successful components of an Internet CBT intervention. Eleven themes emerged from the qualitative data analysis, which broadly focused on patient experiences using the treatment components and suggestions for new treatment components. In the Discussion, these mixed methods findings are integrated to inform the adaptation of an Internet CBT protocol for youth with headache.
Wilson, R. B.; Banerjee, P. K.
1987-01-01
This Annual Status Report presents the results of work performed during the third year of the 3-D Inelastic Analysis Methods for Hot Sections Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of computer codes that permit more accurate and efficient three-dimensional analyses of selected hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The computer codes embody a progression of mathematical models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components.
Hao, Liang; Wu, Dapeng; Guan, Yafeng
2014-09-01
The determination of the organic composition of atmospheric particulate matter (PM) is of great importance in understanding how PM affects human health, environment, climate, and ecosystems. Organic components are also the scientific basis for emission source tracking, PM regulation and risk management. Therefore, the molecular characterization of the organic fraction of PM has become one of the priority research issues in the field of environmental analysis. Due to the extreme complexity of PM samples, chromatographic methods have been the principal choice. The common procedure for the analysis of organic components in PM includes several steps: sample collection on fiber filters, sample preparation (transforming the sample into a form suitable for chromatographic analysis), and analysis by chromatographic methods. Among these steps, the sample preparation methods largely determine the throughput and the data quality. Solvent extraction methods followed by sample pretreatment (e.g. pre-separation, derivatization, pre-concentration) have long been used for PM sample analysis, while thermal desorption methods have mainly focused on the analysis of non-polar organic components in PM. In this paper, the sample preparation methods prior to chromatographic analysis of organic components in PM are reviewed comprehensively, and the corresponding merits and limitations of each method are briefly discussed.
Pengyu Gao
2016-03-01
It is difficult to forecast well productivity because of the complexity of vertical and horizontal development in fluvial facies reservoirs. This paper proposes a method based on principal component analysis and an artificial neural network to predict the well productivity of fluvial facies reservoirs. The method summarizes the statistical reservoir factors and engineering factors that affect well productivity, extracts information by applying principal component analysis, and exploits the neural network's ability to approximate arbitrary functions to realize an accurate and efficient prediction of fluvial facies reservoir well productivity. The method provides an effective way of forecasting the productivity of fluvial facies reservoirs, which is affected by multiple factors and complex mechanisms. The study shows that this is a practical, effective and accurate indirect productivity forecasting method suitable for field application.
Linearly decoupled energy-stable numerical methods for multi-component two-phase compressible flow
Kou, Jisheng
2017-12-06
In this paper, for the first time we propose two linear, decoupled, energy-stable numerical schemes for multi-component two-phase compressible flow with a realistic equation of state (e.g. Peng-Robinson equation of state). The methods are constructed based on the scalar auxiliary variable (SAV) approaches for Helmholtz free energy and the intermediate velocities that are designed to decouple the tight relationship between velocity and molar densities. The intermediate velocities are also involved in the discrete momentum equation to ensure a consistency relationship with the mass balance equations. Moreover, we propose a component-wise SAV approach for a multi-component fluid, which requires solving a sequence of linear, separate mass balance equations. We prove that the methods have the unconditional energy-dissipation feature. Numerical results are presented to verify the effectiveness of the proposed methods.
Kuramoto, R.Y.R.Renato Yoichi Ribeiro.; Appoloni, Carlos Roberto
2002-01-01
The two-media method permits the application of Beer's law (Thesis (Master Degree), Universidade Estadual de Londrina, PR, Brazil, p. 23) for the determination of the linear attenuation coefficient of samples of irregular thickness by gamma-ray transmission. However, the use of this methodology introduces experimental complexity due to the great number of variables to be measured. As a consequence of this complexity, the uncertainties associated with each of these variables may be correlated. In this paper, we examine the covariance terms in the uncertainty propagation and quantify the correlation among the uncertainties of each of the variables in question.
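The role of the covariance terms can be illustrated numerically. The sketch below propagates an assumed input covariance through a single-medium Beer's-law attenuation coefficient, a simplified stand-in for the two-media expression; all numerical values are made up:

```python
import numpy as np

# Illustrative measurand: mu = ln(I0 / I) / t (Beer's law, single medium),
# standing in for the paper's two-media expression.
def f(x):
    I0, I, t = x
    return np.log(I0 / I) / t

x0 = np.array([10000.0, 4000.0, 2.0])    # I0 (counts), I (counts), t (cm)
cov = np.array([[100.0, 40.0, 0.0],      # assumed input covariance matrix,
                [40.0,  64.0, 0.0],      # including an I0-I covariance term
                [0.0,   0.0,  1e-4]])

# Sensitivity coefficients by central differences.
h = 1e-6 * x0
g = np.empty(3)
for i in range(3):
    dx = np.zeros(3)
    dx[i] = h[i]
    g[i] = (f(x0 + dx) - f(x0 - dx)) / (2 * h[i])

# Propagated variance without and with covariance terms:
# u^2 = sum_i g_i^2 u_i^2 + 2 sum_{i<j} g_i g_j u_ij = g^T C g.
u2_no_cov = float(np.sum(g**2 * np.diag(cov)))
u2_full = float(g @ cov @ g)
```

With a positive I0-I covariance and opposite-sign sensitivities, the full propagation gives a smaller combined variance than the diagonal-only formula, which is exactly the effect the covariance terms capture.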
Guoqing An
2017-01-01
Nowadays, stator current analysis for detecting incipient faults in squirrel cage motors has received much attention. However, in the case of an inter-turn short circuit in the stator, harmonics and noise invalidate the precondition of the traditional symmetrical component method, so the negative sequence component (NSC) is hard to obtain accurately. For broken rotor bars, the added fault feature, masked by the fundamental component, is also difficult to discriminate in the current spectrum. To solve these problems, a fundamental component extraction (FCE) method is proposed in this paper. On one hand, via the anti-synchronous speed coordinate (ASC) transformation, the NSC of the extracted signals is transformed into a DC value, and the amplitude of the synthetic vector of the NSC is used to evaluate the severity of the stator fault. On the other hand, the extracted fundamental component can be filtered out so that the rotor fault feature emerges from the stator current spectrum. Experimental results indicate that this method is feasible and effective for diagnosing both inter-turn short circuits and broken rotor bars. Furthermore, only the stator currents and the voltage frequency need to be recorded, and the method is easy to implement.
The n-component cubic model and flows: subgraph break-collapse method
Essam, J.W.; Magalhaes, A.C.N. de.
1988-01-01
We generalise to the n-component cubic model the subgraph break-collapse method which we previously developed for the Potts model. The relations used are based on expressions which we recently derived for the Z(λ) model in terms of mod-λ flows. Our recursive algorithm is similar, for n = 2, to the break-collapse method for the Z(4) model proposed by Mariz and coworkers. It allows the exact calculation of the partition function and correlation functions for n-component cubic clusters, with n as a variable, without the need to examine all of the spin configurations. (author)
Determination of biogenic component in waste and liquid fuels by the 14C method
Krajcar Bronić, Ines; Barešić, Jadranka; Horvatinčić, Nada
2015-01-01
Intensive use of fossil fuels for energy production and transport during the 20th century caused an increase of the CO2 concentration in the atmosphere. This increase can be slowed down by the use of biogenic materials for energy production and/or transport. One method for determining the fraction of the biogenic component in any type of fuel or waste is the 14C method, which is based on the different content of 14C in the biogenic and fossil components: the biogenic component carries the contemporary atmospheric 14C level, while the fossil component contains none.
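Since fossil carbon is 14C-free, the biogenic fraction follows from a simple ratio of the measured 14C content to a reference level for fully biogenic material. The sketch below uses invented percent-modern-carbon (pMC) values purely for illustration:

```python
# Biogenic fraction from 14C content (all values are illustrative).
# Fossil carbon contains no 14C; contemporary biomass carries the modern level.
pmc_sample = 54.0      # measured 14C content of the fuel, percent modern carbon (pMC)
pmc_biogenic = 108.0   # assumed reference 14C level of fully biogenic material

f_biogenic = pmc_sample / pmc_biogenic   # mixing is linear in carbon content
print(round(100 * f_biogenic, 1))        # biogenic share in percent -> 50.0
```

In practice the reference level must account for the year of growth of the biomass, which is why it sits above 100 pMC here.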
Detailed finite element method modeling of evaporating multi-component droplets
Diddens, Christian, E-mail: C.Diddens@tue.nl
2017-07-01
The evaporation of sessile multi-component droplets is modeled with an axisymmetric finite element method. The model comprises the coupled processes of mixture evaporation, multi-component flow with composition-dependent fluid properties, and thermal effects. Based on representative examples of water-glycerol and water-ethanol droplets, regular and chaotic examples of solutal Marangoni flows are discussed. Furthermore, the relevance of the substrate thickness for the evaporative cooling of volatile binary mixture droplets is pointed out. It is shown how the evaporation of the more volatile component can drastically decrease the interface temperature, so that ambient vapor of the less volatile component condenses on the droplet. Finally, results of this model are compared with corresponding results of a lubrication theory model, showing that the application of lubrication theory can cause considerable errors even for moderate contact angles of 40°.
Computed tomography (CT) as a nondestructive test method used for composite helicopter components
Oster, Reinhold
1991-09-01
The first components of primary helicopter structures to be made of glass fiber reinforced plastics were the main and tail rotor blades of the Bo105 and BK 117 helicopters. These blades are now successfully produced in series. New developments in rotor components, e.g., the rotor blade technology of the Bo108 and PAH2 programs, make use of very complex fiber reinforced structures to achieve simplicity and strength. Computed tomography was found to be an outstanding nondestructive test method for examining the internal structure of components. A CT scanner generates x-ray attenuation measurements which are used to produce computer-reconstructed images of any desired part of an object. The system images a range of flaws in composites in a number of views and planes. Several CT investigations and their results are reported, taking composite helicopter components as an example.
Seichter, Johannes; Reese, Sven H.; Klucke, Dietmar
2012-01-01
In recent years, environmental effects on the fatigue behavior of nuclear power plant components have been discussed controversially worldwide, in particular with respect to the transferability of laboratory data to real components. A publication from Argonne National Laboratory on experimental results concerning environmental effects (air and LWR coolant) on the fatigue of austenitic steels included a proposal for calculation methods concerning the lifetime reduction due to environmental effects. This calculation method, i.e., multiplication of the usage factor by a factor Fen, has been included in the ASME Code, Section III, Division 1, as Code Case N-792 (fatigue evaluations including environmental effects). The present contribution evaluates the practical application of this calculation procedure and demonstrates the determination of the usage factor of an austenitic component under environmental exposure.
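The Fen adjustment described above amounts to weighting each partial usage factor before summation. The partial usage factors and Fen values below are invented for illustration and are not Code Case N-792 data:

```python
# Environmentally assisted fatigue bookkeeping: each load set's partial usage
# factor computed from the air curve is multiplied by its environmental factor
# Fen, then summed to a cumulative usage factor (values are illustrative only).
partial_usage_air = [0.05, 0.12, 0.03]   # per-load-set usage factors in air
fen = [2.5, 4.1, 1.8]                    # environmental correction factors

cuf_env = sum(u * f for u, f in zip(partial_usage_air, fen))
print(round(cuf_env, 3))                 # 0.671, versus 0.20 in air
```

The example shows why the discussion matters: an air-based cumulative usage factor well below 1 can approach the limit once environmental factors are applied.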
Le Jeune, L.; Robert, S.; Dumas, P.; Membre, A.; Prada, C.
2015-03-01
In this paper, we propose an ultrasonic adaptive imaging method based on phased-array technology and the synthetic focusing algorithm known as the Total Focusing Method (TFM). The general principle is to image the surface by applying the TFM algorithm in a semi-infinite water medium. The reconstructed surface is then taken into account to compute a second TFM image inside the component. In the surface reconstruction step, the TFM algorithm has been optimized to decrease computation time and to limit noise in water. In the second step, the ultrasonic paths through the reconstructed surface are calculated from Fermat's principle with an iterative algorithm, and the classical TFM is applied to obtain an image inside the component. This paper presents several results of TFM imaging in components of different geometries, and a result obtained with a new type of probe equipped with a flexible wedge filled with water (manufactured by Imasonic).
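A minimal single-medium delay-and-sum TFM sketch in numpy. The adaptive two-step version described above additionally traces refracted paths through the reconstructed surface; the array geometry, wave speed, sampling, and impulse-like data here are all assumptions made for illustration:

```python
import numpy as np

c = 6300.0                               # assumed wave speed in the part, m/s
fs = 50e6                                # sampling frequency, Hz
elems = np.linspace(-0.008, 0.008, 16)   # element x-positions of the array, m
scatterer = (0.002, 0.015)               # hypothetical point reflector (x, z), m

# Synthetic full matrix capture: one unit impulse per transmit-receive pair,
# placed at that pair's round-trip time of flight to the scatterer.
fmc = np.zeros((16, 16, 2000))
for i, xi in enumerate(elems):
    for j, xj in enumerate(elems):
        tof = (np.hypot(scatterer[0] - xi, scatterer[1]) +
               np.hypot(scatterer[0] - xj, scatterer[1])) / c
        fmc[i, j, int(round(tof * fs))] = 1.0

# TFM: for every pixel, sum each pair's sample at the pixel's round-trip time.
xs = np.linspace(-0.008, 0.008, 33)
zs = np.linspace(0.005, 0.025, 41)
img = np.zeros((zs.size, xs.size))
rows, cols = np.arange(16)[:, None], np.arange(16)[None, :]
for iz, z in enumerate(zs):
    for ix, x in enumerate(xs):
        d = np.hypot(x - elems, z)                        # element-pixel distances
        idx = np.round((d[:, None] + d[None, :]) / c * fs).astype(int)
        img[iz, ix] = fmc[rows, cols, idx].sum()

iz, ix = np.unravel_index(int(np.argmax(img)), img.shape)
print(round(xs[ix], 4), round(zs[iz], 4))                 # peak at the reflector
```

All 256 transmit-receive pairs add coherently only at the true reflector position, which is the focusing property the paper exploits.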
Revision: Variance Inflation in Regression
D. R. Jensen
2013-01-01
the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, which is often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reassess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.
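For reference, the conventional VIF discussed above is VIF_j = 1/(1 - R_j²), where R_j² comes from regressing the j-th regressor on the others. A small numpy sketch on synthetic data (not the article's case studies):

```python
import numpy as np

# Variance inflation factors on a synthetic design where x2 is nearly
# collinear with x1 and x3 is independent of both.
rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.1 * rng.normal(size=n)       # nearly collinear with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

def vif(X, j):
    # Regress column j on the remaining columns (with intercept).
    y = X[:, j]
    Z = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    return 1.0 / (1.0 - r2)

print([round(vif(X, j), 1) for j in range(3)])   # large, large, near 1
```

The linked pair (x1, x2) inflates each other's variance while the unlinked regressor x3 is untouched, which is the distinction the article's models formalize.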
Smerdova, Olga; Graham, Richard S; Gasser, Urs; Hutchings, Lian R; De Focatiis, Davide S A
2014-05-01
A new method is presented for the extraction of single-chain form factors and interchain interference functions from a range of small-angle neutron scattering (SANS) experiments on bimodal homopolymer blends. The method requires a minimum of three blends, made up of hydrogenated and deuterated components with matched degree of polymerization at two different chain lengths, but with carefully varying deuteration levels. The method is validated through an experimental study on polystyrene homopolymer bimodal blends with [Formula: see text]. By fitting Debye functions to the structure factors, it is shown that there is good agreement between the molar mass of the components obtained from SANS and from chromatography. The extraction method also enables, for the first time, interchain scattering functions to be produced for scattering between chains of different lengths. [Formula: see text].
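The Debye-function fitting step mentioned above can be sketched as a one-parameter fit of the ideal-chain form factor P(q) = 2(e^(-x) + x - 1)/x² with x = (qRg)²; the q-range, Rg, and noise level below are assumptions, not the SANS data of the study:

```python
import numpy as np

# Debye form factor of an ideal chain.
def debye(q, rg):
    x = (q * rg) ** 2
    return 2.0 * (np.exp(-x) + x - 1.0) / x ** 2

q = np.linspace(0.005, 0.15, 60)     # scattering vector range (illustrative)
rg_true = 20.0                       # assumed radius of gyration
rng = np.random.default_rng(2)
data = debye(q, rg_true) * (1 + 0.01 * rng.normal(size=q.size))  # 1% noise

# Recover Rg by a simple 1-D least-squares grid search.
grid = np.linspace(5.0, 50.0, 901)
sse = [np.sum((data - debye(q, rg)) ** 2) for rg in grid]
rg_fit = grid[int(np.argmin(sse))]
print(round(rg_fit, 1))
```

Consistency between the Rg (or molar mass) recovered this way and the chromatography value is the validation check reported in the abstract.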
An Empirical Temperature Variance Source Model in Heated Jets
Khavaran, Abbas; Bridges, James
2012-01-01
An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is subsequently written using a Green's function method, while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determines the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.
29 CFR 1910.305 - Wiring methods, components, and equipment for general use.
2010-07-01
... temporary wiring installations. (i) Temporary electrical power and lighting installations of 600 volts ... project or purpose for which the wiring was installed. (iii) Temporary electrical installations of more ... [Title 29 (Labor), Vol. 5, revised as of 2010-07-01]
System reliability with correlated components: Accuracy of the Equivalent Planes method
Roscoe, K.; Diermanse, F.; Vrouwenvelder, A.C.W.M.
2015-01-01
Computing system reliability when system components are correlated presents a challenge because it usually requires solving multi-fold integrals numerically, which is generally infeasible due to the computational cost. In Dutch flood defense reliability modeling, an efficient method for computing
Gennadij V. Ganin
2011-05-01
This article presents a new integrated approach to the objective, computerized evaluation of the cognitive component that delays the latent period of the sensorimotor reaction to specific visual stimuli carrying different semantic information. The method is recommended for the clinical diagnosis of pathologies associated with disorders of cognitive activity and for the assessment of mental fatigue.
A general mixed boundary model reduction method for component mode synthesis
Voormeeren, S N; Van der Valk, P L C; Rixen, D J
2010-01-01
A classic issue in component mode synthesis (CMS) methods is the choice of fixed or free boundary conditions at the interface degrees of freedom (DoF) and the associated vibration modes in the component reduction basis. In this paper, a novel mixed boundary CMS method called the 'Mixed Craig-Bampton' method is proposed. The method is derived by dividing the substructure DoF into a set of internal DoF, free interface DoF and fixed interface DoF. To this end a simple but effective scheme is introduced that, for every pair of interface DoF, selects a free or fixed boundary condition for each DoF individually. Based on this selection a reduction basis is computed consisting of vibration modes, static constraint modes and static residual flexibility modes. In order to assemble the reduced substructures a novel mixed assembly procedure is developed. It is shown that this approach leads to relatively sparse reduced matrices, whereas other mixed boundary methods often lead to full matrices. As such, the Mixed Craig-Bampton method forms a natural generalization of the classic Craig-Bampton and more recent Dual Craig-Bampton methods. Finally, the method is applied to a finite element test model. Analysis reveals that the proposed method has comparable or better accuracy and superior versatility with respect to the existing methods.
Reich, M.; Esztergar, E.P.; Ellison, E.G.; Erdogan, F.; Gray, T.G.F.; Wells, C.W.
1977-03-01
A survey and review program for the application of fracture mechanics methods in elevated temperature design and safety analysis was initiated in December 1976. This is the first of a series of reports whose aim is to provide a critical review of the theories of fracture and the application of fracture mechanics methods to life prediction, reliability, and safety analysis of piping components in nuclear plants under sub-creep and elevated temperature service conditions.
Development of strength evaluation method for high-pressure ceramic components
Takegami, Hiroaki, E-mail: takegami.hiroaki@jaea.go.jp; Terada, Atsuhiko; Inagaki, Yoshiyuki
2014-05-01
Japan Atomic Energy Agency is conducting R&D on nuclear hydrogen production by the Iodine-Sulfur (IS) process. Since highly corrosive materials such as sulfuric and hydriodic acids are used in the IS process, it is very important to develop components made of corrosion-resistant materials. Therefore, we have been developing a sulfuric acid decomposer made of a ceramic material, silicon carbide (SiC), which shows excellent corrosion resistance to sulfuric acid. One of the key technological challenges for the practical use of a ceramic sulfuric acid decomposer made of SiC is licensing in accordance with the High Pressure Gas Safety Act for high-pressure operation of the IS process. Since the strength of a ceramic material depends on its geometric form, the strength evaluation method required for pressure design has not been established. Therefore, we propose a novel strength evaluation method for SiC structures based on the effective volume theory, in order to extend the range of application of the effective volume. We also developed a design method for ceramic apparatus based on this strength evaluation method, in order to obtain a license in accordance with the High Pressure Gas Safety Act. In this paper, the minimum strength of SiC components was calculated by Monte Carlo simulation, and the minimum strength evaluation method for SiC components was developed using the results of the simulation. The method was confirmed by fracture tests of a tube model and by reference data.
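The weakest-link size effect behind effective-volume theory can be reproduced with a small Monte Carlo experiment: a component made of n unit volumes fails at the minimum of n Weibull strengths, so larger parts are weaker on average. The Weibull modulus and scale below are illustrative, not SiC data:

```python
import numpy as np

m, s0 = 10.0, 400.0        # assumed Weibull modulus and unit-volume scale (MPa)
rng = np.random.default_rng(6)

def mean_strength(n_units, n_sim=20000):
    # Each simulated component is the minimum of n_units unit-volume strengths.
    s = s0 * rng.weibull(m, size=(n_sim, n_units))
    return s.min(axis=1).mean()

small, large = mean_strength(1), mean_strength(100)
# Weakest-link theory predicts a (V2/V1)^(1/m) strength ratio between sizes.
print(round(small / large, 2), round(100 ** (1 / m), 2))
```

The simulated ratio matches the analytical (V2/V1)^(1/m) scaling, which is the relation a geometry-dependent strength evaluation must capture.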
Modelling volatility by variance decomposition
Amado, Cristina; Teräsvirta, Timo
In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the variance of the model to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterisations describe both nonlinearity and structural change in the conditional variance.
Gini estimation under infinite variance
A. Fontanari (Andrea); N.N. Taleb (Nassim Nicholas); P. Cirillo (Pasquale)
2018-01-01
We study the problems related to the estimation of the Gini index in the presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α∈(1,2)). We show that, in such a case, the Gini coefficient
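The small-sample behaviour at issue can be demonstrated numerically: under an infinite-variance Pareto process, the nonparametric Gini estimator sits systematically below the theoretical value. The tail index, sample size, and replication count below are chosen for illustration:

```python
import numpy as np

# Standard nonparametric Gini estimator on a sorted sample.
def gini(x):
    x = np.sort(x)
    n = x.size
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (n * np.sum(x))

rng = np.random.default_rng(3)
a = 1.2                               # tail index in the infinite-variance range (1, 2)
g_theory = 1.0 / (2 * a - 1)          # Gini of a classical Pareto(a) distribution
# numpy's pareto() draws Lomax variates; adding 1 gives classical Pareto(x_m = 1).
est = np.mean([gini(rng.pareto(a, 500) + 1.0) for _ in range(200)])
print(round(g_theory, 3), round(est, 3))   # estimate falls below the true value
```

The averaged estimate is visibly below 1/(2a - 1), illustrating the downward small-sample bias that the paper analyzes.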
Seo JunHyeok
2016-12-01
A PL (product liability) response system is an enterprise-wide system that prevents a company's financial loss due to PL-related accidents. Existing research on PL response systems is mainly focused on preventive and/or defensive strategies for companies. It is also clear that each industry has its own characteristics with respect to PL issues, which means industry-specific characteristics should be considered when adopting PL response strategies. Thus, this paper discusses industry-specific PL response systems and their components. Based on prior research, we examined the applicability of existing PL response strategies to manufacturing companies using the Delphi method with PL experts. Based on the first-round results, we classified the existing PL strategies of manufacturing companies into several categories. To validate our suggestion for the essential components of a PL response system, a second Delphi round was applied, and the analytic hierarchy process (AHP) technique was used to produce a prioritized list of components and strategies. The existing PL response strategies could be categorized into six components: strategy, technology, investment, training, awareness, and organization. Among the six components, technology, which represents the technology needed to improve the safety of all products, is the most important component in preparing for PL accidents. The limitation of this paper lies in the size of the survey and the variety of examples; future studies will enhance the potential of the proposed method. Despite rich research efforts to identify PL response strategies, there has been no effort to categorize and prioritize them. Well-coordinated and actionable PL response strategies and their priorities could help small- and medium-sized enterprises (SMEs) develop their own PL response systems with limited resources.
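The AHP prioritization step can be sketched as extracting the principal eigenvector of a reciprocal pairwise comparison matrix. The matrix below is a hypothetical comparison of the six components named above; it is not the study's survey data:

```python
import numpy as np

labels = ["strategy", "technology", "investment",
          "training", "awareness", "organization"]
# Hypothetical reciprocal pairwise comparison matrix (a_ji = 1 / a_ij).
A = np.array([
    [1,   1/3, 2,   2,   3,   2],
    [3,   1,   4,   5,   5,   4],
    [1/2, 1/4, 1,   1,   2,   1],
    [1/2, 1/5, 1,   1,   2,   1],
    [1/3, 1/5, 1/2, 1/2, 1,   1/2],
    [1/2, 1/4, 1,   1,   2,   1],
])

# Principal eigenvector by power iteration, normalized to sum to 1.
w = np.ones(6) / 6
for _ in range(100):
    w = A @ w
    w /= w.sum()

best = labels[int(np.argmax(w))]
print(best, np.round(w, 3))
```

With this (invented) matrix the dominant weight lands on technology, mirroring the ranking reported in the abstract.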
The Improved Methods of Critical Component Classification for the SSCs of New NPP
Lee, Sang Dae; Yeom, Dong Un; Hyun, Jin Woo
2010-01-01
The Functional Importance Determination (FID) process classifies the components of a plant into four groups: Critical A, Critical B, Minor, and No Impact. The output of FID can be used as a decision-making tool for maintenance work prioritization and as input data for preventive maintenance implementation. FID for a new Nuclear Power Plant (NPP) can be accomplished by utilizing the function analysis results and safety significance determination results of the Maintenance Rule (MR) program. Using Shin-Kori NPP as an example, this paper proposes improved critical component classification methods for FID utilizing MR scoping results.
Feynman variance-to-mean method
Dowdy, E.J.; Hansen, G.E.; Robba, A.A.
1985-01-01
The Feynman and other fluctuation techniques have been shown to be useful for determining the multiplication of subcritical systems. The moments of the counting distribution from neutron detectors are analyzed to yield the multiplication value. The authors present the methodology, selected applications and results, and comparisons with Monte Carlo calculations.
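The core statistic is the Feynman-Y, the variance-to-mean ratio of gated counts minus one: Y = var(C)/mean(C) - 1, which vanishes for a Poisson (uncorrelated) source and rises above zero when fission chains correlate the counts. A synthetic illustration (the compound-Poisson "chain" model below is a crude stand-in, not a multiplication model):

```python
import numpy as np

rng = np.random.default_rng(4)

def feynman_y(c):
    # Variance-to-mean ratio minus one; zero for a Poisson counting process.
    return c.var() / c.mean() - 1.0

# Uncorrelated source: counts per gate are Poisson, so Y is near zero.
poisson_counts = rng.poisson(10.0, 20000)

# Crude stand-in for fission-chain correlations: each gate collects a Poisson
# number of bursts, each burst contributing a Poisson number of counts.
bursts = rng.poisson(5.0, 20000)
chain_counts = np.array([rng.poisson(2.0, b).sum() for b in bursts])

print(round(feynman_y(poisson_counts), 3), round(feynman_y(chain_counts), 3))
```

The excess variance of the clustered counts drives Y well above zero, which is the signature used to infer multiplication.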
Dawidowicz, Andrzej L; Czapczyńska, Natalia B
2011-11-01
Essential oils are one of nature's most precious gifts, with surprisingly potent and outstanding properties. Coniferous oils, for instance, are nowadays used extensively to treat or prevent many types of infections, modify immune responses, soothe inflammations, stabilize moods, and help ease all forms of non-acute pain. Given the broad spectrum of usage of coniferous essential oils, a fast, safe, simple, and efficient sample-preparation method is needed for the estimation of essential oil components in fresh plant material. Generally, the time- and energy-consuming steam distillation (SD) is applied for this purpose. This paper compares SD, pressurized liquid extraction (PLE), matrix solid-phase dispersion (MSPD), and the sea sand disruption method (SSDM) as isolation techniques to obtain aroma components from Scots pine (Pinus sylvestris), spruce (Picea abies), and Douglas fir (Pseudotsuga menziesii). According to the obtained data, SSDM is the most efficient sample preparation method for determining the essential oil composition of conifers. Moreover, SSDM requires small amounts of organic solvent and a short extraction time, which makes it an advantageous alternative procedure for the routine analysis of coniferous oils. The superiority of SSDM over MSPD in efficiency is ascertained, as there are no chemical interactions between the plant cell components and the sand. This fact confirms the reliability and efficacy of SSDM for the analysis of volatile oil components. Copyright © 2011 Verlag Helvetica Chimica Acta AG, Zürich.
Zhisheng Xie
2013-01-01
Volatile components from Exocarpium Citri Grandis (ECG) were extracted by three methods: steam distillation (SD), headspace solid-phase microextraction (HS-SPME), and solvent extraction (SE). A total of 81 compounds were identified by gas chromatography-mass spectrometry, including 77 (SD), 56 (HS-SPME), and 48 (SE) compounds, respectively. Regardless of the extraction method, terpenes (39.98~57.81%) were the main volatile components of ECG, mainly germacrene-D, limonene, 2,6,8,10,14-hexadecapentaene, 2,6,11,15-tetramethyl-, (E,E,E)-, and trans-caryophyllene. The three methods were compared in terms of extraction profile and properties. SD gave a relatively complete profile of the volatiles in ECG at the cost of a long extraction time; SE enabled the analysis of low-volatility, high-molecular-weight compounds but lost some volatile components; HS-SPME achieved satisfactory extraction efficiency and gave results similar to those of SD at the analytical level, while consuming less sample, a shorter extraction time, and a simpler procedure. Although SD and SE are treated as traditional preparative extraction techniques for volatiles in both small batches and large scale, HS-SPME coupled with GC-MS could be useful and appropriate for the rapid extraction and qualitative analysis of volatile components from medicinal plants at the analytical level.
Gauss Seidel-type methods for energy states of a multi-component Bose Einstein condensate
Chang, Shu-Ming; Lin, Wen-Wei; Shieh, Shih-Feng
2005-01-01
In this paper, we propose two iterative methods, a Jacobi-type iteration (JI) and a Gauss-Seidel-type iteration (GSI), for the computation of energy states of the time-independent vector Gross-Pitaevskii equation (VGPE), which describes a multi-component Bose-Einstein condensate (BEC). A discretization of the VGPE leads to a nonlinear algebraic eigenvalue problem (NAEP). We prove that the GSI method converges locally and linearly to a solution of the NAEP if and only if the associated minimized energy functional problem has a strictly local minimum. The GSI method can thus be used to compute ground states and positive bound states, as well as the corresponding energies, of a multi-component BEC. Numerical experiments show that GSI converges much faster than JI and converges globally within 10-20 steps.
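The component-wise sweep that gives Gauss-Seidel-type iterations their name, immediately reusing freshly updated components within a sweep, can be shown on a plain linear system (this is a generic sketch, not the discretized VGPE eigenproblem of the paper):

```python
import numpy as np

# Solve A x = b by Gauss-Seidel sweeps on a diagonally dominant system.
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([15.0, 10.0, 10.0])

x = np.zeros(3)
for _ in range(50):
    for i in range(3):
        # Update component i using the latest values of all other components.
        x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]

print(np.round(x, 6), bool(np.allclose(A @ x, b)))
```

A Jacobi-type iteration would instead update all components from the previous sweep's values, which is why it typically needs more iterations, mirroring the JI-versus-GSI comparison above.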
Kang, Won-Hee; Kliese, Alyce
2014-01-01
Lifeline networks, such as transportation, water supply, sewers, telecommunications, and electrical and gas networks, are essential elements for the economic and societal functions of urban areas, but their components are highly susceptible to natural or man-made hazards. In this context, it is essential to provide effective pre-disaster hazard mitigation strategies and prompt post-disaster risk management efforts based on rapid system reliability assessment. This paper proposes a rapid reliability estimation method for node-pair connectivity analysis of lifeline networks, especially when the network components are statistically correlated. Recursive procedures are proposed to compound all network nodes until they become a single super node representing the connectivity between the origin and destination nodes. The proposed method is applied to numerical network examples and to benchmark interconnected power and water networks in Memphis, Shelby County. The connectivity analysis results show the proposed method's reasonable accuracy and remarkable efficiency compared to Monte Carlo simulations.
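A brute-force reference computation of the kind such methods aim to avoid: Monte Carlo node-pair connectivity with correlated component failures, here using a one-factor Gaussian copula. The network layout, failure probability, and correlation are hypothetical:

```python
import numpy as np
from statistics import NormalDist

edges = [(0, 1), (1, 3), (0, 2), (2, 3), (1, 2)]   # small bridged network
p_fail, n_sim = 0.1, 20000
thresh = NormalDist().inv_cdf(p_fail)              # latent failure threshold
rng = np.random.default_rng(5)

def connected(up):
    # Depth-first search from origin node 0 to destination node 3.
    adj = {i: [] for i in range(4)}
    for (a, b), ok in zip(edges, up):
        if ok:
            adj[a].append(b); adj[b].append(a)
    seen, stack = {0}, [0]
    while stack:
        for v in adj[stack.pop()]:
            if v not in seen:
                seen.add(v); stack.append(v)
    return 3 in seen

def reliability(rho):
    # Equicorrelated Gaussian latent variables via a one-factor model.
    common = rng.normal(size=(n_sim, 1))
    z = np.sqrt(rho) * common + np.sqrt(1 - rho) * rng.normal(size=(n_sim, len(edges)))
    return np.mean([connected(row) for row in z >= thresh])

rel_ind, rel_corr = reliability(0.0), reliability(0.6)
print(rel_ind > rel_corr)   # clustered failures hurt this redundant layout
```

Positive correlation makes failures cluster, defeating the redundancy of parallel paths, which is exactly why correlation must enter the system reliability computation.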
Evaluation of functional scintigraphy of gastric emptying by the principal component method
Haeussler, M.; Eilles, C.; Reiners, C.; Moll, E.; Boerner, W.
1980-10-01
Gastric emptying of a standard semifluid test meal, labeled with 99mTc-DTPA, was studied by functional scintigraphy in 88 subjects (normals, and patients with duodenal and gastric ulcer before and after selective proximal vagotomy with and without pyloroplasty). The gastric emptying curves were analysed by the method of principal components. Patients after selective proximal vagotomy with pyloroplasty showed a rapid initial emptying, whereas this was a rare finding in patients after selective proximal vagotomy without pyloroplasty. The method of principal components is well suited for the mathematical analysis of gastric emptying; nevertheless, the results are difficult to interpret. The method has advantages when looking at larger collectives and allows a separation into groups with different gastric emptying.
The Theory of Variances in Equilibrium Reconstruction
Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, Darren
2008-01-01
The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature.
Yu. S. Barash
2015-06-01
Purpose. This paper develops a methodical approach for determining the infrastructure component of the cost of operating a particular passenger train, taking into account the individual characteristics of that train's traffic. Methodology. To achieve this, a method was used that apportions expenses to the traffic of a particular passenger train while accounting for the factors affecting the magnitude of the costs. This methodology allows infrastructure costs to be allocated properly to a particular train and, consequently, the profitability of each train to be determined accurately. Findings. All expenditures relating to long-distance passenger traffic were separated from the first cost of passenger and freight traffic and divided into four components. Three groups of expenses were allocated to the infrastructure component; they are calculated according to a principle that takes into account the individual characteristics of the particular train's traffic. Originality. The method of allocating the passenger transportation costs of all Ukrzaliznytsia departments to a particular passenger train was improved. It is based on principles for forming the general cost indicators of each department that correspond to the main factors influencing train operation. The methodical approach to determining the infrastructure cost component was improved to account for the effect of the speed and weight of a passenger train on the wear of the railway track superstructure and the contact network. All this makes it possible to allocate reasonably the costs of a particular passenger train's traffic and to determine its profitability. Practical value. Implementing these methods allows the real, economically justified costs of a particular train to be calculated, which makes it possible to determine correctly the profitability of a particular passenger train and, on this basis, to make management decisions.
Genetic factors explain half of all variance in serum eosinophil cationic protein
Elmose, Camilla; Sverrild, Asger; van der Sluis, Sophie
2014-01-01
with variation in serum ECP and to determine the relative proportion of the variation in ECP due to genetic and non-genetic factors, in an adult twin sample. METHODS: A sample of 575 twins, selected through a proband with self-reported asthma, had serum ECP, lung function, airway responsiveness to methacholine, exhaled nitric oxide, and skin test reactivity measured. Linear regression analysis and variance component models were used to study factors associated with variation in ECP and the relative genetic influence on ECP levels. RESULTS: Sex (regression coefficient = -0.107, P ...) ... was statistically non-significant (r = -0.11, P = 0.50). CONCLUSION: Around half of all variance in serum ECP is explained by genetic factors. Serum ECP is influenced by sex, BMI, and airway responsiveness. Serum ECP and airway responsiveness seem not to share genetic variance.
Syahrial Syahrial
2010-06-01
An isolation and identification of the volatile components in tempe after 2, 5, and 8 days of fermentation by the simultaneous distillation-extraction method was carried out. The simultaneous distillation-extraction apparatus was modified by Muchalal from the basic Likens-Nickerson design. Steam distillation with benzene as the extraction solvent was used in this system. The isolation was carried out continuously for 3 hours, with a maximum water temperature in the Liebig condenser of 8 °C. The extract was concentrated by the freeze concentration method, and the volatile components were analyzed and identified by combined gas chromatography-mass spectrometry (GC-MS). Muchalal's simultaneous distillation-extraction apparatus has some disadvantages in its cold finger condenser, and its extractor does not have a condenser. At least 47, 13, and 5 volatile components were found after 2, 5, and 8 days of fermentation, respectively. The volatile components after 2 days of fermentation were nonanal, α-pinene, 2,4-decadienal, 5-phenyldecane, 5-phenylundecane, 4-phenylundecane, 5-phenyldodecane, 4-phenyldodecane, 3-phenyldodecane, 2-phenyldodecane, 5-phenyltridecane, and caryophyllene; after 5 days they were nonanal, caryophyllene, 4-phenylundecane, 5-phenyldodecane, 4-phenyldodecane, 3-phenyldodecane, and 2-phenyldodecane; and after 8 days they were ethenyl butanoate, 2-methyl-3-(methylethenyl)cyclohexyl ethanoate, and 3,7-dimethyl-5-octenyl ethanoate.
Evaluation of in-core measurements by means of principal components method
Makai, M.; Temesvari, E.
1996-01-01
Surveillance of a nuclear reactor core comprises determination of the assemblies' three-dimensional (3D) power distribution. The power of a non-measured assembly is derived from the measured values of other assemblies; this is calculated for every assembly with the help of the principal components method (PCM), which is also presented. The measured values are interpolated for different geometrical coverings of the WWER-440 core. Different procedures have been elaborated and investigated, and the most successful methods are discussed. Each method offers self-consistent means to determine the numerical errors of the interpolated values. (author). 13 refs, 7 figs, 2 tabs
Shao, Xueguang; Yu, Zhengliang; Ma, Chaoxiong
2004-06-01
An improved method is proposed for the quantitative determination of multicomponent overlapping chromatograms based on a known transmutation method. To overcome the main limitation of the transmutation method caused by the oscillation generated in the transmutation process, two techniques--wavelet transform smoothing and the cubic spline interpolation for reducing data points--were adopted, and a new criterion was also developed. By using the proposed algorithm, the oscillation can be suppressed effectively, and quantitative determination of the components in both the simulated and experimental overlapping chromatograms is successfully obtained.
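The two data-conditioning techniques can be sketched as follows on a synthetic two-peak chromatogram. This is a hedged illustration: a plain moving average stands in for the wavelet-transform smoothing step, and scipy's CubicSpline handles the data-point reduction:

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(1)

# A smooth "chromatogram" of two overlapping Gaussian peaks plus noise.
t = np.linspace(0.0, 10.0, 2001)
clean = np.exp(-((t - 4.0) / 0.8) ** 2) + 0.6 * np.exp(-((t - 6.0) / 0.8) ** 2)
noisy = clean + rng.normal(scale=0.01, size=t.size)

# Step 1: smoothing (a moving average here, standing in for the paper's
# wavelet-transform smoothing step).
w = 25
smooth = np.convolve(noisy, np.ones(w) / w, mode="same")

# Step 2: reduce the number of data points, then represent the signal with
# a cubic spline through the retained points.
keep = np.arange(0, t.size, 20)
spline = CubicSpline(t[keep], smooth[keep])
reduced = spline(t)

print(float(np.max(np.abs(reduced - clean))))  # close to the noise-free signal
```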
Method of forming components for a high-temperature secondary electrochemical cell
Mrazek, Franklin C.; Battles, James E.
1983-01-01
A method of forming a component for a high-temperature secondary electrochemical cell having a positive electrode including a sulfide selected from the group consisting of iron sulfides, nickel sulfides, copper sulfides and cobalt sulfides, a negative electrode including an alloy of aluminum, and an electrically insulating porous separator between said electrodes. The improvement comprises forming a slurry of solid particles dispersed in a liquid electrolyte such as the lithium chloride-potassium chloride eutectic, casting the slurry into a form having the shape of one of the components and smoothing the exposed surface of the slurry, cooling the cast slurry to form the solid component, and removing the same. Electrodes and separators can thus be formed.
A Component-Based Modeling and Validation Method for PLC Systems
Rui Wang
2014-05-01
Programmable logic controllers (PLCs) are complex embedded systems that are widely used in industry. This paper presents a component-based modeling and validation method for PLC systems using the behavior-interaction-priority (BIP) framework. We designed a general system architecture and a component library for a type of device control system. The control software and the hardware of the environment were all modeled as BIP components. System requirements were formalized as monitors. Simulation was carried out to validate the system model. A realistic industrial example, a gates control system, was employed to illustrate our strategies. We found a couple of design errors during the simulation, which helped us to improve the dependability of the original systems. The experimental results demonstrate the effectiveness of our approach.
Baleine, Erwan; Sheldon, Danny M
2014-06-10
Method and system for calibrating a thermal radiance map of a turbine component in a combustion environment. At least one spot (18) of material is disposed on a surface of the component. An infrared (IR) imager (14) is arranged so that the spot is within the field of view of the imager to acquire imaging data of the spot. A processor (30) is configured to process the imaging data to generate a sequence of images as the temperature of the combustion environment is increased. A monitor (42, 44) may be coupled to the processor to monitor the sequence of images to determine an occurrence of a physical change of the spot as the temperature is increased. A calibration module (46) may be configured to assign a first temperature value to the surface of the turbine component when the occurrence of the physical change of the spot is determined.
[Study of Determination of Oil Mixture Components Content Based on Quasi-Monte Carlo Method].
Wang, Yu-tian; Xu, Jing; Liu, Xiao-fei; Chen, Meng-han; Wang, Shi-tao
2015-05-01
Gasoline, kerosene and diesel are processed from crude oil with different distillation ranges. The boiling range of gasoline is 35~205 °C, that of kerosene is 140~250 °C, and that of diesel is 180~370 °C. At the same time, the carbon chain lengths of different mineral oils differ: gasoline lies within the range C7 to C11, kerosene within C12 to C15, and diesel within C15 to C18. The recognition and quantitative measurement of the three kinds of mineral oil is based on the different fluorescence spectra formed by their different carbon number distribution characteristics. Mineral oil pollution occurs frequently, so monitoring the mineral oil content in the ocean is very important. A new method for determining the component content of mineral oil mixtures with overlapping spectra is proposed: the characteristic peak power of the three-dimensional fluorescence spectrum is integrated using the Quasi-Monte Carlo method, combined with an optimization algorithm to choose the optimal number of characteristic peaks and the range of the integration region, and the resulting nonlinear equations are solved with the BFGS method (a rank-two update method named after the initials of its inventors, Broyden, Fletcher, Goldfarb and Shanno). The accumulated peak power over the points in the selected area is sensitive to small changes in the fluorescence spectral line, so the measurement of small changes in component content is sensitive. At the same time, compared with single-point measurement, the sensitivity is improved by the reduced influence of random error due to the selection of many points. Three-dimensional fluorescence spectra and fluorescence contour spectra of single mineral oils and of the mixture are measured, taking kerosene, diesel and gasoline as research objects, with each single mineral oil regarded as a whole rather than in terms of its individual components. Six characteristic peaks are
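The core numerical idea, integrating a characteristic-peak region with a low-discrepancy (Quasi-Monte Carlo) point set, can be sketched on a synthetic 2-D peak. The Gaussian peak and the unit-square region below are assumptions for illustration, not the paper's spectra:

```python
import math
import numpy as np
from scipy.stats import qmc

# A synthetic 2-D fluorescence "characteristic peak": an isotropic Gaussian
# centred in the unit square (a stand-in for a real 3-D spectrum region).
sigma = 0.1
def peak(x, y):
    return (np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / (2 * sigma ** 2))
            / (2 * math.pi * sigma ** 2))

# Quasi-Monte Carlo estimate of the peak power over the integration region
# using a scrambled Sobol sequence.
sampler = qmc.Sobol(d=2, scramble=True, seed=7)
pts = sampler.random_base2(m=14)          # 2**14 low-discrepancy points
estimate = float(np.mean(peak(pts[:, 0], pts[:, 1])))  # region area is 1

exact = math.erf(0.5 / (sigma * math.sqrt(2))) ** 2    # mass inside the square
print(estimate, exact)
```

Averaging the integrand over a low-discrepancy point set converges faster than plain Monte Carlo for smooth peaks, which is the property the paper exploits for the peak-power accumulation.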
An Assessment of Remote Visual Methods to Detect Cracking in Reactor Components
Cumblidge, Stephen E.; Anderson, Michael T.; Doctor, Steven R.; Simonen, Fredric A.; Elliot, Anthony J.
2008-01-01
Recently, the U.S. nuclear industry has proposed replacing the current volumetric and/or surface examinations of certain components in commercial nuclear power plants, as required by the American Society of Mechanical Engineers Boiler and Pressure Vessel Code Section XI, Inservice Inspection of Nuclear Power Plant Components, with a simpler visual testing (VT) method. The advantages of VT are that these tests generally involve much less radiation exposure and less time to perform than do volumetric examinations such as ultrasonic testing. The issues relating to the reliability of VT in determining the structural integrity of reactor components were examined. Some piping and pressure vessel components in a nuclear power station are examined using VT either because they are in high radiation fields or because component geometry precludes the use of ultrasonic testing (UT) methodology. Remote VT with radiation-hardened video systems has been used by nuclear utilities to find cracks in pressure vessel cladding in pressurized water reactors and in core shrouds in boiling water reactors, and to investigate leaks in piping and reactor components. These visual tests are performed using a wide variety of procedures and equipment. The techniques for remote VT use submersible closed-circuit video cameras to examine reactor components and welds. PNNL conducted a parametric study that examined the important variables influencing the effectiveness of a remote visual test. Tested variables included lighting techniques, camera resolution, camera movement, and magnification. PNNL also conducted a limited laboratory test using a commercial visual testing camera system to experimentally determine the ability of the camera system to detect cracks of various widths under ideal conditions. The results of these studies and their implications are presented in this paper.
Prosthetic component segmentation with blur compensation: a fast method for 3D fluoroscopy.
Tarroni, Giacomo; Tersi, Luca; Corsi, Cristiana; Stagni, Rita
2012-06-01
A new method for prosthetic component segmentation from fluoroscopic images is presented. The hybrid approach we propose combines diffusion filtering, region growing and level-set techniques without exploiting any a priori knowledge of the analyzed geometry. The method was evaluated on a synthetic dataset including 270 images of knee and hip prostheses merged with real fluoroscopic data simulating different conditions of blurring and illumination gradient. The performance of the method was assessed by comparing estimated contours to references using different metrics. Results showed that the segmentation procedure is fast, accurate, independent of the operator as well as of the specific geometrical characteristics of the prosthetic component, and able to compensate for the amounts of blurring and illumination gradient. Importantly, the method allows a strong reduction of the required user interaction time when compared to traditional segmentation techniques. Its effectiveness and robustness under different image conditions, together with its simplicity and fast implementation, make this prosthetic component segmentation procedure promising and suitable for multiple clinical applications, including the assessment of in vivo joint kinematics in a variety of cases.
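The region-growing ingredient of the hybrid pipeline can be sketched as follows: a minimal 4-connected, intensity-based grower on a synthetic image (the paper combines this with diffusion filtering and level sets):

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, adding 4-connected pixels whose intensity
    is within `tol` of the seed intensity."""
    mask = np.zeros(image.shape, dtype=bool)
    ref = image[seed]
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < image.shape[0] and 0 <= cc < image.shape[1]
                    and not mask[rr, cc] and abs(image[rr, cc] - ref) <= tol):
                mask[rr, cc] = True
                queue.append((rr, cc))
    return mask

# A bright rectangular "prosthetic component" on a dark background.
img = np.zeros((32, 32))
img[8:20, 10:22] = 1.0
mask = region_grow(img, seed=(12, 15), tol=0.5)
print(int(mask.sum()))   # 12 * 12 = 144 pixels
```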
Kroeger, J.
1988-01-01
The invention concerns a method for determining the radiation activity of contaminated structural components by means of detectors. The measured values are processed and displayed by a computer. From the given parameters, the current and error-oriented threshold contamination or threshold impulse rate is determined continuously as the gate time elapses and is then compared with the actual measured value of the specific surface activity, mass-specific activity or impulse rate. (orig.)
The ORC method. Effective modelling of thermal performance of multilayer building components
Akander, Jan
2000-02-01
The ORC Method (Optimised RC-networks) provides a means of modelling one- or multidimensional heat transfer in building components, in this context within building simulation environments. The methodology is shown, primarily applied to heat transfer in multilayer building components. For multilayer building components, the analytical thermal performance is known, given layer thickness and material properties. The aim of the ORC Method is to optimise the values of the thermal resistances and heat capacities of an RC-model so that the model performance agrees well with the analytical performance over a wide range of frequencies. The optimisation procedure is carried out in the frequency domain, where the overall deviation between the model and the analytical frequency response, in terms of admittance and dynamic transmittance, is minimised. It is shown that ORCs are effective in terms of accuracy and computational time in comparison to finite difference models when used in building simulations, in this case with IDA/ICE. An ORC configuration of five mass nodes has been found to model building components in Nordic countries well, within the application of thermal comfort and energy requirement simulations. Simple RC-networks, such as the surface heat capacity and the simple R-C configuration, are not appropriate for detailed building simulation. However, these can be used as a basis for defining the effective heat capacity of a building component. An approximate method is suggested for determining the effective heat capacity without the use of complex numbers. This entity can be calculated on the basis of layer thickness and material properties with the help of two time constants. The approximate method can give inaccuracies of around 20%. In-situ measurements have been carried out in an experimental building with the purpose of establishing the effective heat capacity of external building components that are subjected to normal thermal conditions. The auxiliary
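The agreement between an RC network and the analytical layer response can be illustrated for a single homogeneous layer. The sketch below compares the periodic transmittance obtained from an EN ISO 13786-style slab transmission matrix with that of a simple R/2-C-R/2 network; the material values are hypothetical, and the ORC method proper optimises multi-node networks over a frequency range rather than using this fixed lumping:

```python
import numpy as np

# Analytical transmission matrix of a homogeneous slab: thickness L [m],
# conductivity lam [W/(m K)], diffusivity a [m^2/s], angular frequency omega.
def slab_matrix(omega, L, lam, a):
    k = np.sqrt(1j * omega / a)
    return np.array([[np.cosh(k * L), np.sinh(k * L) / (lam * k)],
                     [lam * k * np.sinh(k * L), np.cosh(k * L)]])

# A simple R/2 - C - R/2 network model of the same slab.
def rc_matrix(omega, R, C):
    half = np.array([[1.0, R / 2], [0.0, 1.0]])
    cap = np.array([[1.0, 0.0], [1j * omega * C, 1.0]])
    return half @ cap @ half

L, lam, rho_c = 0.2, 1.7, 2.0e6          # a 20 cm concrete-like layer
a = lam / rho_c
R, C = L / lam, rho_c * L                # total resistance and heat capacity

for period_h in (24.0, 1.0):             # daily and hourly excitation
    omega = 2 * np.pi / (period_h * 3600)
    t_analytic = 1.0 / abs(slab_matrix(omega, L, lam, a)[0, 1])
    t_rc = 1.0 / abs(rc_matrix(omega, R, C)[0, 1])
    print(period_h, t_analytic, t_rc)    # RC model degrades at high frequency
```

In the zero-frequency limit both matrices reduce to the steady-state resistance L/lam, which is the sanity check the comparison rests on.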
Variance based OFDM frame synchronization
Z. Fedra
2012-04-01
The paper deals with a new frame synchronization scheme for OFDM systems and calculates the complexity of this scheme. The scheme is based on computing the variance of the detection window. The variance is computed at two delayed time instants, so a modified Early-Late loop is used for the frame position detection. The proposed algorithm deals with different variants of OFDM parameters, including the guard interval and cyclic prefix, and has good properties regarding the choice of the algorithm's parameters, since the parameters may be chosen within a wide range without a strong influence on system performance. The functionality of the proposed algorithm has been verified in a development environment using universal software radio peripheral (USRP) hardware.
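The detection-window-variance idea can be sketched with a null gap in the sample stream and a sliding window: the minimum of the windowed power marks the frame position. This is a simplified stand-in (for zero-mean symbols the window variance equals the mean power), not the paper's Early-Late loop:

```python
import numpy as np

rng = np.random.default_rng(3)

# A stream of QPSK-like samples with a silent (null) gap marking the frame
# start, as in null-symbol based OFDM synchronization.
n, gap_start, gap_len = 4000, 1500, 64
signal = (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)
signal[gap_start:gap_start + gap_len] = 0.0
signal += rng.normal(scale=0.05, size=n) + 1j * rng.normal(scale=0.05, size=n)

# Sliding-window variance of the detection window; the minimum marks the gap.
power = np.abs(signal) ** 2
window = np.ones(gap_len) / gap_len
win_var = np.convolve(power, window, mode="valid")  # E|x|^2 per window position
detected = int(np.argmin(win_var))

print(detected)   # ~gap_start
```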
Variance decomposition in stochastic simulators.
Le Maître, O P; Knio, O M; Moraes, A
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
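The reformulation can be sketched on the birth-death model: each reaction channel is driven by its own unit-rate Poisson stream (identified here by an RNG seed), and first-order variance contributions are estimated by the pick-freeze device of reusing one channel's stream while resampling the other. Rates, horizon and sample sizes below are illustrative choices, not the paper's settings:

```python
import numpy as np

def simulate(T, x0, birth_rate, death_coef, seed_birth, seed_death):
    """Birth-death process X -> X+1 (rate b), X -> X-1 (rate d*X), simulated
    by the next-reaction method with one independent unit-rate Poisson
    stream per channel (the seeds identify the streams)."""
    rngs = [np.random.default_rng(seed_birth), np.random.default_rng(seed_death)]
    t, x = 0.0, x0
    internal = [0.0, 0.0]                        # internal time consumed
    next_fire = [r.exponential() for r in rngs]  # next internal firing time
    while True:
        rates = [birth_rate, death_coef * x]
        waits = [(next_fire[k] - internal[k]) / rates[k] if rates[k] > 0
                 else np.inf for k in range(2)]
        k = int(np.argmin(waits))
        if t + waits[k] > T:
            return x
        t += waits[k]
        internal = [internal[j] + rates[j] * waits[k] for j in range(2)]
        x += 1 if k == 0 else -1
        next_fire[k] += rngs[k].exponential()

# Pick-freeze estimates of first-order variance contributions: reuse the
# Poisson stream of one channel while resampling the other.
N = 400
rng = np.random.default_rng(0)
seeds = rng.integers(0, 2**31, size=(N, 4))     # (birth, death, birth', death')
f = np.array([simulate(5.0, 10, 2.0, 0.3, s[0], s[1]) for s in seeds])
f_b = np.array([simulate(5.0, 10, 2.0, 0.3, s[0], s[3]) for s in seeds])  # same birth stream
f_d = np.array([simulate(5.0, 10, 2.0, 0.3, s[2], s[1]) for s in seeds])  # same death stream
var = f.var()
S_birth = (np.mean(f * f_b) - f.mean() * f_b.mean()) / var
S_death = (np.mean(f * f_d) - f.mean() * f_d.mean()) / var
print(S_birth, S_death)   # each in [0, 1] up to Monte Carlo error
```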
Li, Yang; Pirvu, Traian A
2011-01-01
This paper considers the mean-variance portfolio management problem. We examine portfolios which contain both primary and derivative securities. The challenge in this context is due to the portfolio's nonlinearities; the delta-gamma approximation is employed to overcome it. Thus, the optimization problem is reduced to a well-posed quadratic program. The methodology developed in this paper can also be applied to pricing and hedging in incomplete markets.
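Once the delta-gamma approximation has reduced the problem to a quadratic program, the unconstrained case has a closed-form solution. A sketch with hypothetical numbers (the paper's setting includes derivative positions and constraints that this omits):

```python
import numpy as np

# After the delta-gamma approximation, the mean-variance problem reduces to
# a quadratic program; unconstrained, it has a closed-form solution.
mu = np.array([0.05, 0.08, 0.03])            # (approximate) expected returns
Sigma = np.array([[0.040, 0.006, 0.002],
                  [0.006, 0.090, 0.004],
                  [0.002, 0.004, 0.020]])    # covariance of the delta-gamma P&L
risk_aversion = 4.0

# maximize  mu' w - risk_aversion * w' Sigma w
w = np.linalg.solve(2 * risk_aversion * Sigma, mu)

# First-order optimality: the gradient mu - 2*lambda*Sigma*w vanishes.
print(w, float(np.max(np.abs(mu - 2 * risk_aversion * Sigma @ w))))
```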
Borawski Andrzej
2016-09-01
The braking system is one of the most important systems in any vehicle. Its proper functioning may determine the health and lives of the people inside the vehicle as well as of other road users. Therefore, it is important that the parameters which characterise the functioning of brakes change as little as possible throughout their lifespan. Repeated heating and cooling of the working components of the brake system, as well as the environment they work in, may affect their tribological properties. This article describes a method of evaluating the coefficient of friction and the rate of abrasive wear of the friction components of brakes. The methodology was developed on the basis of Taguchi's method of process optimization.
An estimation method of system failure frequency using both structure and component failure data
Takaragi, Kazuo; Sasaki, Ryoichi; Shingai, Sadanori; Tominaga, Kenji
1981-01-01
In recent years, the importance of reliability analysis has been appreciated for large systems such as nuclear power plants. A reliability analysis method is described for a whole system, using structure failure data for its main working subsystem and component failure data for its safety protection subsystem. The subsystem named the main working system operates normally, and the subsystem named the safety protection system acts as standby or protection. Thus the main and protection systems are given mutually different failure data; between the subsystems there exists common mode failure, i.e. component failure affecting the reliability of both. A calculation formula for system failure frequency is first derived. Then, a calculation method with digraphs is proposed for the conditional system failure probability. Finally, the results of numerical calculations are given for the purpose of explanation. (J.P.N.)
State of the Art in Life Assessment for High Temperature Components Using Replication Method
Kim, Duck Hee; Choi, Hyun Sun
2010-01-01
The power generation and chemical industries have been subject to increasing material degradation over long-term operation and need to predict the remaining service life of components, such as reformer tubes and turbine rotors, that have operated at elevated temperatures. As a non-destructive technique, the replication method has been recognized as a highly useful method for reliable metallurgical life and microstructural soundness assessment. Developments of this method have been variously accomplished through new quantitative approaches, such as carbide analysis, the A-parameter and the grain deformation method. An overview of replication and some new techniques for material degradation and life assessment are introduced in this paper. On-site applications and their reasonableness are also described. Analysis of the microstructure by the replication method showed the carbide approach to be quantitatively useful for life assessment
Monika eFleischhauer
2013-09-01
Meta-analytic data highlight the value of the Implicit Association Test (IAT) as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling, latent Big-Five personality factors (based on self- and peer-report) were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined in the sense of method-specific variance which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense, defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign), biases that might result, for example, from the IAT's stimulus or category features. None of the explicit Big-Five factors was predictive of method-specific variance in the IATs (first analysis). However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis), a substantial effect of neuroticism occurred that may have been driven by the affective valence of the IAT attribute categories and the facilitated processing of negative stimuli typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to
Grahić Jasmin
2013-01-01
In order to analyze the morphological characteristics of locally cultivated common bean landraces from Bosnia and Herzegovina (B&H), thirteen quantitative and qualitative traits of 40 P. vulgaris accessions, collected from four geographical regions (Northwest B&H, Northeast B&H, Central B&H and Sarajevo) and maintained at the Gene bank of the Faculty of Agriculture and Food Sciences in Sarajevo, were examined. Principal component analysis (PCA) showed that the proportion of variance retained in the first two principal components was 54.35%. The first principal component had high contributing factor loadings from seed width, seed height and seed weight, whilst the second principal component had high contributing factor loadings from seeds per pod and pod length. The PCA plot, based on the first two principal components, displayed a high level of variability among the analyzed material. The discriminant analysis of principal components (DAPC) created 3 discriminant functions (DFs), whereby the first two discriminant functions accounted for 90.4% of the variance retained. Based on the retained DFs, DAPC provided group membership probabilities which showed that 70% of the accessions examined were correctly classified between the geographically defined groups. Based on the taxonomic distance, the 40 common bean accessions analyzed in this study formed two major clusters, whereas two accessions, Acc304 and Acc307, didn't group into either of them. Acc360 and Acc362, as well as Acc324 and Acc371, displayed a high level of similarity and are probably the same landrace. The present diversity of Bosnia and Herzegovina's common bean landraces could be useful in future breeding programs.
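The PCA step can be reproduced in outline with numpy's SVD. The data below are synthetic, with three traits sharing a common "seed size" factor to mimic the loading pattern reported for PC1; they are not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(5)

# 40 "accessions" x 5 traits: three size-related traits (width, height,
# weight) share a common factor; two further traits are independent.
size = rng.normal(size=(40, 1))
traits = np.hstack([size + 0.3 * rng.normal(size=(40, 1)) for _ in range(3)]
                   + [rng.normal(size=(40, 2))])

# PCA via SVD of the standardized data.
Z = (traits - traits.mean(axis=0)) / traits.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)   # proportion of variance per component
scores = Z @ Vt.T                     # PC scores for plotting accessions

print(np.round(explained, 3))         # PC1 dominated by the three size traits
```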
A Method for Evaluating Information Security Governance (ISG) Components in Banking Environment
Ula, M.; Ula, M.; Fuadi, W.
2017-02-01
As modern banking increasingly relies on the internet and computer technologies to operate businesses and market interactions, threats and security breaches have greatly increased in recent years. Insider and outsider attacks have cost global businesses trillions of dollars a year. Therefore, there is a need for a proper framework to govern information security in the banking system. The aim of this research is to propose and design an enhanced method to evaluate information security governance (ISG) implementation in a banking environment. This research examines and compares elements from commonly used information security governance frameworks, standards and best practices, considering their strengths and weaknesses. An initial framework for governing information security in the banking system was constructed from a document review. The framework was categorized into three levels: governance, managerial, and technical. The study further conducted an online survey of banking security professionals to obtain their professional judgment about the most critical ISG components and the importance of each ISG component that should be implemented in a banking environment. Data from the survey were used to construct a mathematical model for ISG evaluation, with component importance data used as weighting coefficients for the related components in the model. The research further develops a method for evaluating ISG implementation in banking based on this mathematical model. The proposed method was tested through a real bank case study in an Indonesian local bank. The study shows that the proposed method has sufficient coverage of ISG in the banking environment and effectively evaluates ISG implementation.
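The weighted evaluation model can be sketched as a survey-weighted score: importance ratings become weighting coefficients, and assessed component scores are combined into an overall value. The component names, scales and numbers here are hypothetical stand-ins, not the paper's model:

```python
# Survey-derived importance ratings per ISG level (hypothetical values) and
# assessed implementation scores on a 0-4 maturity scale.
importance = {"governance": 9.0, "managerial": 7.0, "technical": 8.0}
scores = {"governance": 3.0, "managerial": 2.0, "technical": 4.0}

# Normalize importance ratings into weighting coefficients.
total = sum(importance.values())
weights = {k: v / total for k, v in importance.items()}

# Overall weighted maturity score.
overall = sum(weights[k] * scores[k] for k in scores)
print(round(overall, 3))   # → 3.042 on the 0-4 scale
```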
A method, device and application for the dynamic balancing of a rotating component
Voinis, P.
1995-01-01
The dynamic balancing method is based on the detection of the vibrations generated by an unbalance; two satellites are then displaced in order to create a counter-unbalance, and their position is measured. Their position is then adjusted so that the differences in phase and intensity between the unbalance and the counter-unbalance fall below predetermined reference values, in order to balance the rotating component dynamically. Application to superpower turbogenerator shafting systems. 4 fig
1986-01-01
The 12 papers discuss topics of strength and safety in the field of materials technology and engineering. Conclusions for NPP component safety and materials are drawn. Measurements and studies relate to fracture mechanics methods (oscillation, burst, material strength, characteristics). The dynamic analysis of the behaviour of large test specimens, the influence of load velocity on the crack resistance curve, and the development of forged parts from austenitic steel for fast breeder reactors are presented. (DG)
1986-01-01
24 papers discuss various methods for nondestructive testing of materials, e.g. eddy current measurement, EMAG analyser, tomography, ultrasound, holographic interferometry, and optical sound field camera. Special consideration is given to mathematical programmes and tests allowing determination of fracture-mechanical parameters and assessment of cracks in various components, system parts and individual specimens, both in pressurized systems and in NPP systems. Studies focus on weld seams and adjacent areas. (DG)
Creep/fatigue damage prediction of fast reactor components using shakedown methods
Buckthorpe, D.E.
1997-01-01
The present status of the shakedown method is reviewed, the application of shakedown-based principles to complex hardening and creep behaviour is described and justified, and the prediction of damage against design criteria is outlined. Comparisons are made with full inelastic analysis solutions where these are available and against damage assessments using elastic and inelastic design code methods. Current and future developments of the method are described, including a summary of the advances made in the development of the post-processor ADAPT, which has enabled the method to be applied to complex geometry features and loading cases. The paper includes a review of applications of the method to typical Fast Reactor structural example cases within the primary and secondary circuits. For the primary circuit this includes structures such as the large-diameter internal shells which are surrounded by hot sodium and subject to slow and rapid thermal transient loadings. One specific case is the damage assessment associated with thermal stratification within sodium and the effects of moving sodium surfaces arising from reactor trip conditions. Other structures covered are geometric features within components such as the Above Core Structure and the Intermediate Heat Exchanger. For the secondary circuit the method has been applied to alternative and more complex forms of geometry, namely the thick-section tubeplates of the Steam Generator and a typical secondary circuit piping run. Both of these applications are at an early stage of development but are expected to show significant advantages with respect to creep and fatigue damage estimation compared with existing code methods. The principal application of the method to design has so far been focused on austenitic stainless steel components; however, current work shows that significant benefits may be possible from applying the method to structures made from ferritic steels such as Modified 9Cr 1Mo. This aspect is briefly
Discussion on variance reduction technique for shielding
Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1998-03-01
As part of the engineering design activity of the International Thermonuclear Experimental Reactor (ITER), a shielding experiment on 316-type stainless steel (SS316) and on the compound system of SS316 and water has been carried out using the D-T neutron source of FNS at the Japan Atomic Energy Research Institute. In these analyses, however, enormous working time and computing time were required for determining the Weight Window parameters, and limitations and complications were encountered when variance reduction with the Weight Window method of the MCNP code was carried out. To avoid this difficulty, the effectiveness of variance reduction by the cell importance method was investigated. The conditions of calculation in all cases are shown. As the results, the distribution of the fractional standard deviation (FSD) of the neutron and gamma-ray fluxes along the shield depth is reported. There is an optimal importance variation: when the importance is increased at the same rate as the attenuation of the neutron or gamma-ray flux, optimal variance reduction can be achieved. (K.I.)
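The benefit of importance-based variance reduction can be illustrated on a toy deep-penetration problem: doubling the cell importance with depth and splitting survivors 2-for-1 keeps the scoring population alive where the analog simulation loses nearly all its histories. This is a simplified sketch of the splitting idea, not an MCNP calculation:

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy deep-penetration problem: a particle must survive 5 successive cells,
# each with survival probability 0.3; the transmitted fraction is 0.3**5.
p, cells, n = 0.3, 5, 40000
exact = p ** cells

# Analog Monte Carlo: most histories die early, so scores are rare.
analog_est = float((rng.random((n, cells)) < p).all(axis=1).mean())

# Cell-importance splitting: the importance doubles cell by cell, so every
# survivor entering the next cell is split into two half-weight copies.
weights = np.full(n, 1.0)
for _ in range(cells):
    weights = weights[rng.random(weights.size) < p]   # survival in this cell
    weights = np.repeat(weights / 2.0, 2)             # 2-for-1 splitting
split_est = float(weights.sum() / n)

print(analog_est, split_est, exact)  # both unbiased; splitting is less noisy
```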
Felleki, M; Lee, D; Lee, Y; Gilmour, A R; Rönnegård, L
2012-12-01
The possibility of breeding for uniform individuals by selecting animals expressing a small response to environment has been studied extensively in animal breeding. Bayesian methods for fitting models with genetic components in the residual variance have been developed for this purpose, but have limitations due to the computational demands. We use the hierarchical (h)-likelihood from the theory of double hierarchical generalized linear models (DHGLM) to derive an estimation algorithm that is computationally feasible for large datasets. Random effects for both the mean and residual variance parts of the model are estimated together with their variance/covariance components. An important feature of the algorithm is that it can fit a correlation between the random effects for mean and variance. An h-likelihood estimator is implemented in the R software and an iterative reweighted least squares (IRWLS) approximation of the h-likelihood is implemented using ASReml. The difference in variance component estimates between the two implementations is investigated, as well as the potential bias of the methods, using simulations. IRWLS gives the same results as h-likelihood in simple cases with no severe indication of bias. For more complex cases, only IRWLS could be used, and bias did appear. IRWLS is applied to the pig litter size data previously analysed by Sorensen & Waagepetersen (2003) using Bayesian methodology. The estimates we obtained by using IRWLS are similar to theirs, with the estimated correlation between the random genetic effects being -0·52 for IRWLS and -0·62 in Sorensen & Waagepetersen (2003).
Shirley, Natalie R; Ramirez Montes, Paula Andrea
2015-01-01
The purpose of this study was to assess observer error in phase versus component-based scoring systems used to develop age estimation methods in forensic anthropology. A method preferred by forensic anthropologists in the AAFS was selected for this evaluation (the Suchey-Brooks method for the pubic symphysis). The Suchey-Brooks descriptions were used to develop a corresponding component-based scoring system for comparison. Several commonly used reliability statistics (kappa, weighted kappa, and the intraclass correlation coefficient) were calculated to assess observer agreement between two observers and to evaluate the efficacy of each of these statistics for this study. The linear weighted kappa was determined to be the most suitable measure of observer agreement. The results show that a component-based system offers the possibility for more objective scoring than a phase system as long as the coding possibilities for each trait do not exceed three states of expression, each with as little overlap as possible. © 2014 American Academy of Forensic Sciences.
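The linear weighted kappa that this study settled on can be computed directly from paired ordinal scores; a minimal sketch with toy scores (not the study's pubic symphysis data):

```python
def linear_weighted_kappa(ratings_a, ratings_b, n_categories):
    """Cohen's kappa with linear disagreement weights."""
    n = len(ratings_a)
    # Observed joint distribution of the two observers' scores.
    obs = [[0.0] * n_categories for _ in range(n_categories)]
    for a, b in zip(ratings_a, ratings_b):
        obs[a][b] += 1.0 / n
    pa = [sum(obs[i][j] for j in range(n_categories)) for i in range(n_categories)]
    pb = [sum(obs[i][j] for i in range(n_categories)) for j in range(n_categories)]
    # Linear weights: full credit on the diagonal, partial credit for near-misses.
    w = [[1 - abs(i - j) / (n_categories - 1) for j in range(n_categories)]
         for i in range(n_categories)]
    po = sum(w[i][j] * obs[i][j] for i in range(n_categories) for j in range(n_categories))
    pe = sum(w[i][j] * pa[i] * pb[j] for i in range(n_categories) for j in range(n_categories))
    return (po - pe) / (1 - pe)

# Two observers scoring six specimens into 3 states (toy data):
k = linear_weighted_kappa([0, 1, 2, 2, 1, 0], [0, 1, 2, 1, 1, 0], 3)
```

The partial credit for adjacent categories is what makes the linear weighted kappa suited to ordinal component scores, unlike unweighted kappa, which treats a one-state disagreement the same as a maximal one.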
Xuewu Zhang
2013-01-01
This paper proposes a new method for surface defect detection of photovoltaic modules based on an independent component analysis (ICA) reconstruction algorithm. First, a faultless image is used as the training image, and the de-mixing matrix and corresponding ICs are obtained by applying ICA to it. The ICs are then reordered according to their range values and the de-mixing matrix is reformed. The reformed de-mixing matrix is used to reconstruct the defect image. The resulting image removes the background structures and enhances local anomalies. Experimental results show that the proposed method can effectively detect the presence of defects in periodically patterned surfaces.
A method for evaluating the funding of components of natural resource and conservation projects
Wellington, John F., E-mail: welling@ipfw.edu [Indiana University – Purdue University Fort Wayne (IPFW), Doermer School of Business, 203 Stonegate Drive Erie, PA 16505 (United States); Lewis, Stephen A., E-mail: lewis.sa07@gmail.com [Mongrel Works, LLC., Columbus, OH 43209 (United States)
2016-02-15
Many public and private entities such as government agencies and private foundations have missions related to the improvement, protection, and sustainability of the environment. In pursuit of their missions, they fund projects with related outcomes. Typically, the funding scene consists of scarce funding dollars for the many project requests. In light of funding limitations and funders' search for innovative funding schemes, a method to support the allocation of scarce dollars among project components is presented. The proposed scheme has similarities to methods in the project selection literature but differs in its focus on project components and its connection to and enumeration of the universe of funding possibilities. The value of having access to the universe is demonstrated with illustrations. The presentation includes Excel implementations that should appeal to a broad spectrum of project evaluators and reviewers. Access to the space of funding possibilities facilitates a rich analysis of funding alternatives. - Highlights: • Method is given for allocating scarce funding dollars among competing projects. • Allocations are made to fund parts of projects. • Proposed method provides access to the universe of funding possibilities. • Proposed method facilitates a rich analysis of funding possibilities. • Excel spreadsheet implementations are provided.
A method for uncertainty quantification in the life prediction of gas turbine components
Lodeby, K.; Isaksson, O.; Jaervstraat, N. [Volvo Aero Corporation, Trolhaettan (Sweden)
1998-12-31
A failure in an aircraft jet engine can have severe consequences which cannot be accepted, and high requirements are therefore placed on engine reliability. Consequently, assessment of the reliability of the life predictions used in design and maintenance is important. To assess the validity of the predicted life, a method is developed to quantify the contribution to the total uncertainty in the life prediction from different uncertainty sources. The method is a structured approach for uncertainty quantification that uses a generic description of the life prediction process. It is based on an approximate error propagation theory combined with a unified treatment of random and systematic errors. The result is an approximate statistical distribution for the predicted life. The method is applied to life predictions for three different jet engine components. The total uncertainty was of a reasonable order of magnitude, and a good qualitative picture of the distribution of the uncertainty contributions from the different sources was obtained. The relative importance of the uncertainty sources differs between the three components. It is also highly dependent on the methods and assumptions used in the life prediction. Advantages and disadvantages of this method are discussed. (orig.) 11 refs.
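The "approximate error propagation" step can be sketched as standard first-order propagation of independent uncertainty sources through a life-prediction function. The Basquin-type life model below is a generic, hypothetical stand-in, not the paper's actual model or sources:

```python
import math

def propagate(f, x, sigma, rel_step=1e-6):
    """First-order uncertainty propagation: Var[f] ~ sum (df/dx_i)^2 * s_i^2."""
    y = f(x)
    var = 0.0
    for i, (xi, si) in enumerate(zip(x, sigma)):
        h = rel_step * max(abs(xi), 1.0)
        xp = list(x)
        xp[i] = xi + h
        dfdx = (f(xp) - y) / h          # forward-difference sensitivity
        var += (dfdx * si) ** 2
    return y, math.sqrt(var)

# Hypothetical Basquin-type life model N = C * stress**(-m), p = [C, stress, m]:
life = lambda p: p[0] * p[1] ** (-p[2])
n, sd = propagate(life, [1e12, 300.0, 3.0], [1e11, 15.0, 0.1])
```

Comparing the individual `(dfdx * si)**2` terms is what ranks the uncertainty sources, which is how the relative importance reported in the abstract would be obtained.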
(Co) variance Components and Genetic Parameter Estimates for Re
Mapula
The magnitude of heritability estimates obtained in the current study ... traits were recently introduced to supplement progeny testing programmes or for usage as sole source of ..... VCE-5 User's Guide and Reference Manual Version 5.1.
Variance components and genetic parameters for body weight and ...
model included a direct as well as a maternal additive genetic effect, while only the direct additive genetic effect had a sig- .... deviations from the log likelihood value obtained under the ... (1995). It would therefore be fair to assume that a.
Genetic variance of sunflower yield components - Heliantus annuus L.
Hladni Nada
2003-01-01
The main goals of sunflower breeding in Yugoslavia and abroad are increased seed yield and oil content per unit area and increased resistance to diseases, insects and stress conditions via an optimization of plant architecture. In order to determine the mode of inheritance, gene effects and correlations of total leaf number per plant, total leaf area and plant height, six genetically divergent inbred lines of sunflower were subjected to half-diallel crosses. Significant differences in the mean values of all the traits were found in the F1 and F2 generations. Additive gene effects were more important in the inheritance of total leaf number per plant and plant height, while for total leaf area per plant the nonadditive ones were more important, looking at all the combinations in the F1 and F2 generations. The average degree of dominance (H1/D)^(1/2) was lower than one for total leaf number per plant and plant height, so the mode of inheritance was partial dominance, while for total leaf area the value was higher than one, indicating superdominance as the mode of inheritance. Significant positive correlations were found between total leaf area per plant and total leaf number per plant (0.285*) and plant height (0.278*). The results of the study are of importance for further sunflower breeding work.
A method for independent component graph analysis of resting-state fMRI
de Paula, Demetrius Ribeiro; Ziegler, Erik; Abeyasinghe, Pubuditha M.
2017-01-01
Introduction: Independent component analysis (ICA) has been extensively used for reducing task-free BOLD fMRI recordings into spatial maps and their associated time-courses. The spatially identified independent components can be considered as intrinsic connectivity networks (ICNs) of non-contiguous regions. To date, the spatial patterns of the networks have been analyzed with techniques developed for volumetric data. Objective: Here, we detail a graph building technique that allows these ICNs to be analyzed with graph theory. Methods: First, ICA was performed at the single-subject level in 15 healthy ... parcellated regions. Third, between-node functional connectivity was established by building edge weights for each network. Group-level graph analysis was finally performed for each network and compared to the classical network. Results: Network graph comparison between the classically constructed network ...
Hori, Hajime; Ishidao, Toru; Ishimatsu, Sumiyo
2010-12-01
We measured vapor concentrations continuously evaporated from two-component organic solvents in a reservoir and proposed a method to estimate and predict the evaporation rate or generated vapor concentrations. Two kinds of organic solvents were put into a small reservoir made of glass (3 cm in diameter and 3 cm high) that was installed in a cylindrical glass vessel (10 cm in diameter and 15 cm high). Air was introduced into the glass vessel at a flow rate of 150 ml/min, and the generated vapor concentrations were intermittently monitored for up to 5 hours with a gas chromatograph equipped with a flame ionization detector. The solvent systems tested in this study were the methanol-toluene system and the ethyl acetate-toluene system. The vapor concentrations of the more volatile component, that is, methanol in the methanol-toluene system and ethyl acetate in the ethyl acetate-toluene system, were high at first, and then decreased with time. On the other hand, the concentrations of the less volatile component were low at first, and then increased with time. A model for estimating multicomponent organic vapor concentrations was developed, based on a theory of vapor-liquid equilibria and a theory of the mass transfer rate, and estimated values were compared with experimental ones. The estimated vapor concentrations were in relatively good agreement with the experimental ones. The results suggest that changes in concentrations of two-component organic vapors continuously evaporating from a liquid reservoir can be estimated by the proposed model.
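The vapor-liquid-equilibrium side of such a model can be sketched with Raoult's law under an ideal-solution assumption. This is a simplification: the paper's model also includes mass-transfer-rate theory, which is collapsed here into a single hypothetical rate constant `k`.

```python
def evaporate(n, p_sat, k, dt, steps):
    """Two-component evaporation: each component leaves the liquid at a rate
    proportional to its Raoult partial pressure x_i * p_sat_i."""
    n = list(n)
    history = []
    for _ in range(steps):
        total = n[0] + n[1]
        x = [n[0] / total, n[1] / total]             # liquid mole fractions
        p = [x[0] * p_sat[0], x[1] * p_sat[1]]       # Raoult partial pressures
        history.append(p)
        for i in range(2):
            n[i] = max(n[i] - k * p[i] * dt, 1e-12)  # mass-transfer-limited loss
    return history

# Hypothetical methanol(0)/toluene(1)-like pair: saturation pressures 13 vs 3 kPa.
hist = evaporate([1.0, 1.0], [13.0, 3.0], k=0.002, dt=1.0, steps=100)
```

Because the more volatile component depletes proportionally faster, its partial pressure starts high and falls while the less volatile one rises, reproducing the qualitative behaviour the experiment observed.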
He, A.; Quan, C.
2018-04-01
The combined principal component analysis (PCA) and region-matching method is effective for fringe direction estimation. However, its mask construction algorithm for region matching fails in some circumstances, and its algorithm for the conversion of orientation to direction in mask areas is computationally heavy and unoptimized. We propose an improved PCA-based region-matching method for fringe direction estimation, which includes an improved and robust mask construction scheme, and a fast and optimized orientation-to-direction conversion algorithm for the mask areas. Along with the estimated fringe direction map, the fringe pattern filtered by automatic selective reconstruction modification and enhanced fast empirical mode decomposition (ASRm-EFEMD) is used for the Hilbert spiral transform (HST) to demodulate the phase. Subsequently, the windowed Fourier ridge (WFR) method is used for the refinement of the phase. The robustness and effectiveness of the proposed method are demonstrated by both simulated and experimental fringe patterns.
Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance
Liyun Zhuang
2017-01-01
This paper puts forward a novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image with brightness and details well preserved, compared with some other methods based on histogram equalization (HE). Firstly, the histogram of the input image is divided into four segments based on the mean and variance of the luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via the concatenation of the processed subhistograms. Lastly, the normalization method is deployed on intensity levels, and the integration of the processed image with the input image is performed. 100 benchmark images from a public image database named CVG-UGR-Database are used for comparison with other state-of-the-art methods. The experimental results show that the algorithm can not only enhance image information effectively but also well preserve the brightness and details of the original image.
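The segmentation step can be sketched as follows. This is a simplified reading of MVSIHE: taking the four segment boundaries at mean - std, mean, and mean + std is an assumption about the exact split, and the final blending with the input image is omitted.

```python
import numpy as np

def subimage_hist_eq(img):
    """Equalize four luminance segments split at mean-std, mean, mean+std."""
    m, s = img.mean(), img.std()
    bounds = [img.min(), max(m - s, img.min()), m,
              min(m + s, img.max()), img.max() + 1]
    out = img.astype(float).copy()
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mask = (img >= lo) & (img < hi)
        if not mask.any():
            continue
        vals = img[mask].astype(float)
        # Equalize within [lo, hi): spread ranks uniformly over the segment range.
        ranks = vals.argsort().argsort()
        out[mask] = lo + (hi - 1 - lo) * ranks / max(len(vals) - 1, 1)
    return out.astype(img.dtype)

rng = np.random.default_rng(0)
img = rng.integers(80, 180, size=(32, 32), dtype=np.int64)  # low-contrast toy image
eq = subimage_hist_eq(img)
```

Equalizing each segment separately is what preserves overall brightness: pixels cannot migrate across the mean-based boundaries, unlike in global HE.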
Confidence Interval Approximation For Treatment Variance In ...
In a random effects model with a single factor, variation is partitioned into two as residual error variance and treatment variance. While a confidence interval can be imposed on the residual error variance, it is not possible to construct an exact confidence interval for the treatment variance. This is because the treatment ...
Temperature variance study in Monte-Carlo photon transport theory
Giorla, J.
1985-10-01
We study different Monte-Carlo methods for solving radiative transfer problems, and particularly Fleck's Monte-Carlo method. We first give the different time-discretization schemes and the corresponding stability criteria. Then we write the temperature variance as a function of the variances of temperature and absorbed energy at the previous time step. Finally we obtain some stability criteria for the Monte-Carlo method in the stationary case. [fr]
State of the art seismic analysis for CANDU reactor structure components using condensation method
Soliman, S A; Ibraham, A M; Hodgson, S [Atomic Energy of Canada Ltd., Saskatoon, SK (Canada)]
1996-12-31
The reactor structure assembly seismic analysis is a relatively complex process because of the intricate geometry with many different discontinuities, and because of the hydraulic attached mass which follows the structure during its vibration. In order to simulate the behaviour of the reactor structure assembly reasonably accurately, detailed finite element models are generated and used for both modal and stress analysis. The Guyan reduction condensation method was used in the analysis. The attached mass, which includes the fluid mass contained in the components plus the added mass accounting for the inertia of the surrounding fluid entrained by the accelerating structure immersed in the fluid, was calculated and attached to the vibrating structures. The masses of the attached components supported partly or totally by the assembly, which include piping, reactivity control units, end fittings, etc., are also considered in the analysis. (author). 4 refs., 6 tabs., 4 figs.
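The Guyan reduction step can be sketched for a toy stiffness/mass system (generic static condensation, not the CANDU model): slave degrees of freedom are assumed to follow the masters statically, x_s = -Kss⁻¹ Ksm x_m, and the same transformation condenses the mass matrix.

```python
import numpy as np

def guyan_reduce(K, M, master):
    """Static (Guyan) condensation of stiffness K and mass M to master DOFs:
    slaves follow statically, x_s = -inv(Kss) @ Ksm @ x_m."""
    n = K.shape[0]
    slave = [i for i in range(n) if i not in master]
    cols = list(range(len(master)))
    T = np.zeros((n, len(master)))           # maps master DOFs to all DOFs
    T[master, cols] = 1.0
    T[np.ix_(slave, cols)] = -np.linalg.solve(K[np.ix_(slave, slave)],
                                              K[np.ix_(slave, master)])
    return T.T @ K @ T, T.T @ M @ T

# Hypothetical 4-DOF spring chain (stiffness k), keeping DOFs 0 and 3 as masters:
k = 1000.0
K = k * np.array([[ 2., -1.,  0.,  0.],
                  [-1.,  2., -1.,  0.],
                  [ 0., -1.,  2., -1.],
                  [ 0.,  0., -1.,  2.]])
M = np.eye(4)
Kr, Mr = guyan_reduce(K, M, [0, 3])
```

For static loads applied at the master DOFs the condensation is exact; for dynamics it is an approximation that degrades at higher frequencies, which is why master selection matters in seismic models.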
Köhler, Michael; Pötter, Kurt; Zenner, Harald
2017-01-01
Understanding the fatigue behaviour of structural components under variable load amplitude is an essential prerequisite for safe and reliable light-weight design. For designing and dimensioning, the expected stress (load) is compared with the capacity to withstand loads (fatigue strength). In this process, the safety necessary for each particular application must be ensured. A prerequisite for ensuring the required fatigue strength is a reliable load assumption. The authors describe the transformation of the stress- and load-time functions which have been measured under operational conditions to spectra or matrices with the application of counting methods. The aspects which must be considered for ensuring a reliable load assumption for designing and dimensioning are discussed in detail. Furthermore, the theoretical background for estimating the fatigue life of structural components is explained, and the procedures are discussed for numerous applications in practice. One of the prime intentions of the authors ...
Zhang, Z; Werner, F.; Cho, H. -M.; Wind, Galina; Platnick, S.; Ackerman, A. S.; Di Girolamo, L.; Marshak, A.; Meyer, Kerry
2017-01-01
The so-called bi-spectral method retrieves cloud optical thickness (τ) and cloud droplet effective radius (r_e) simultaneously from a pair of cloud reflectance observations, one in a visible or near-infrared (VIS/NIR) band and the other in a shortwave-infrared (SWIR) band. A cloudy pixel is usually assumed to be horizontally homogeneous in the retrieval. Ignoring sub-pixel variations of cloud reflectances can lead to a significant bias in the retrieved τ and r_e. In this study, we use the Taylor expansion of a two-variable function to understand and quantify the impacts of sub-pixel variances of VIS/NIR and SWIR cloud reflectances and their covariance on the τ and r_e retrievals. This framework takes into account the fact that the retrievals are determined by both VIS/NIR and SWIR band observations in a mutually dependent way. In comparison with previous studies, it provides a more comprehensive understanding of how sub-pixel cloud reflectance variations impact the τ and r_e retrievals based on the bi-spectral method. In particular, our framework provides a mathematical explanation of how the sub-pixel variation in the VIS/NIR band influences the r_e retrieval and why it can sometimes outweigh the influence of variations in the SWIR band and dominate the error in r_e retrievals, leading to a potential contribution of positive bias to the r_e retrieval.
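The core of the Taylor-expansion argument — averaging reflectances before a nonlinear retrieval is not the same as averaging the retrievals — can be demonstrated with a one-variable toy function. The real retrieval inverts a two-band lookup table; `f` here is a hypothetical convex stand-in used only to expose the second-order bias term.

```python
import numpy as np

# Hypothetical convex stand-in for the retrieval curve; the bias of retrieving
# from the pixel-mean reflectance is approximately 0.5 * f''(R_mean) * Var(R).
f = lambda R: 1.0 / (1.05 - R)

rng = np.random.default_rng(1)
R = np.clip(rng.normal(0.5, 0.1, 100_000), 0.0, 1.0)   # sub-pixel reflectances

retrieval_of_mean = f(R.mean())     # plane-parallel: retrieve from mean reflectance
mean_of_retrievals = f(R).mean()    # average of the sub-pixel retrievals

# Second-order Taylor estimate of the homogeneity bias:
h = 1e-4
fpp = (f(R.mean() + h) - 2 * f(R.mean()) + f(R.mean() - h)) / h ** 2
taylor_bias = 0.5 * fpp * R.var()
```

The sign and size of `fpp` along each band's direction is what decides which band's sub-pixel variance dominates the retrieval error, which is the question the paper's two-variable expansion answers.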
The Purification Method of Matching Points Based on Principal Component Analysis
DONG Yang
2017-02-01
The traditional purification method for matching points usually uses a small number of the points as initial input. Though it can meet most of the requirements of point constraints, the iterative purification solution easily falls into a local extreme, which results in the omission of correct matching points. To solve this problem, we introduce the principal component analysis method and use the whole point set as the initial input. Through stepwise elimination of mismatching points and robust solving, a more accurate global optimal solution can be obtained, which reduces the omission rate of correct matching points and thus achieves a better purification effect. Experimental results show that this method can obtain the global optimal solution under a certain original false matching rate, and can decrease or avoid the omission of correct matching points.
Measuring multiple residual-stress components using the contour method and multiple cuts
Prime, Michael B. [Los Alamos National Laboratory]; Swenson, Hunter [Los Alamos National Laboratory]; Pagliaro, Pierluigi [U. Palermo]; Zuccarello, Bernardo [U. Palermo]
2009-01-01
The conventional contour method determines one component of stress over the cross section of a part. The part is cut into two, the contour of the exposed surface is measured, and Bueckner's superposition principle is analytically applied to calculate stresses. In this paper, the contour method is extended to the measurement of multiple stress components by making multiple cuts with subsequent applications of superposition. The theory and limitations are described. The theory is experimentally tested on a 316L stainless steel disk with residual stresses induced by plastically indenting the central portion of the disk. The stress results are validated against independent measurements using neutron diffraction. The theory has implications beyond just multiple cuts. The contour method measurements and calculations for the first cut reveal how the residual stresses have changed throughout the part. Subsequent measurements of partially relaxed stresses by other techniques, such as laboratory x-rays, hole drilling, or neutron or synchrotron diffraction, can be superimposed back to the original state of the body.
Harding, D.J.; Collins, J.P.; Kobliska, G.R.; Chester, N.S.; Pewitt, E.G.; Fowler, W.B.
1993-01-01
Fermilab has adopted the Source Evaluation Board (SEB) method for procuring certain major technical components of the Fermilab Main Injector. The SEB procedure is designed to ensure the efficient and effective expenditure of Government funds at the same time that it optimizes the opportunity for attainment of project objectives. A qualitative trade-off is allowed between price and technical factors. The process involves a large amount of work and is only justified for a very limited number of procurements. Fermilab has gained experience with the SEB process in awarding subcontracts for major subassemblies of the Fermilab Main Injector dipoles.
Methods of assessing the leak-before-break behaviour of pressurized components
Goerner, F.
1984-01-01
A general overview of the parameters is first given, which are important for the stress and service life of a pressurized component. The individual parameters are discussed, where the main points are the calculation of stress intensity factors, the fatigue behaviour and the calculation of plastic limiting loads and elastic-plastic failure factors (COD and J integral), using the Dugdale model. In a final chapter, the leak-before-break diagrams are given and compared for different methods of calculation for pipes with longitudinal and circumferential cracks and for flat plates. (orig./HP) [de]
Reifman, J.
1997-01-01
A comprehensive survey of computer-based systems that apply artificial intelligence methods to detect and identify component faults in nuclear power plants is presented. Classification criteria are established that categorize artificial intelligence diagnostic systems according to the types of computing approaches used (e.g., computing tools, computer languages, and shell and simulation programs), the types of methodologies employed (e.g., types of knowledge, reasoning and inference mechanisms, and diagnostic approach), and the scope of the system. The major issues of process diagnostics and computer-based diagnostic systems are identified and cross-correlated with the various categories used for classification. Ninety-five publications are reviewed
Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods
Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.
2011-12-01
Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on the parameterization and the problems undertaken. This makes inverse estimation and uncertainty quantification very challenging, especially for those problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential of mitigating the curse of dimensionality by reducing the total number of unknowns while describing the complex subsurface systems adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performances of three different but closely related model reduction approaches: (1) PCA with geometric sampling (referred to as 'Method 1'), (2) PCA with MCMC sampling (referred to as 'Method 2'), and (3) PCA with MCMC sampling and inclusion of random effects (referred to as 'Method 3'). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing their inversion results with the corresponding analytical solutions. We generated synthetic data with noise added and inverted them under two different situations: (1) the noisy data and the covariance matrix for PCA analysis are consistent (referred to as the unbiased case), and (2) the noisy data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values, and Method 1 is computationally more efficient. In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to inclusion
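'Method 2' above — MCMC sampling in a PCA-reduced parameter space of a convolution model — can be sketched as follows. The two-component basis, kernel, and Metropolis step size are hypothetical illustrations, not the paper's five-parameter setup or covariance choices.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy forward model: a known smoothing kernel convolved with an unknown signal.
kernel = np.array([0.25, 0.5, 0.25])
basis = rng.normal(size=(2, 20))        # assumed 2-component PCA basis (rows = PCs)
true_coeff = np.array([1.5, -0.7])      # 'true' reduced coordinates
signal = true_coeff @ basis
data = np.convolve(signal, kernel, mode="same") + 0.05 * rng.normal(size=20)

def log_post(c, sigma=0.05):
    """Gaussian log-likelihood of the data given reduced coefficients c."""
    pred = np.convolve(c @ basis, kernel, mode="same")
    return -0.5 * np.sum((data - pred) ** 2) / sigma ** 2

# Metropolis sampling directly in the 2-D reduced space:
c, lp = np.zeros(2), log_post(np.zeros(2))
samples = []
for _ in range(6000):
    prop = c + 0.05 * rng.normal(size=2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        c, lp = prop, lp_prop
    samples.append(c.copy())
post = np.array(samples[2000:])                # discard burn-in
```

Sampling over two coefficients instead of twenty signal values is the dimension-reduction payoff; posterior summaries (mean, spread) then come straight from `post`.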
A Layered Searchable Encryption Scheme with Functional Components Independent of Encryption Methods
Luo, Guangchun; Qin, Ke
2014-01-01
Searchable encryption technique enables the users to securely store and search their documents over the remote semitrusted server, which is especially suitable for protecting sensitive data in the cloud. However, various settings (based on symmetric or asymmetric encryption) and functionalities (ranked keyword query, range query, phrase query, etc.) are often realized by different methods with different searchable structures that are generally not compatible with each other, which limits the scope of application and hinders the functional extensions. We prove that asymmetric searchable structure could be converted to symmetric structure, and functions could be modeled separately apart from the core searchable structure. Based on this observation, we propose a layered searchable encryption (LSE) scheme, which provides compatibility, flexibility, and security for various settings and functionalities. In this scheme, the outputs of the core searchable component based on either symmetric or asymmetric setting are converted to some uniform mappings, which are then transmitted to loosely coupled functional components to further filter the results. In such a way, all functional components could directly support both symmetric and asymmetric settings. Based on LSE, we propose two representative and novel constructions for ranked keyword query (previously only available in symmetric scheme) and range query (previously only available in asymmetric scheme). PMID:24719565
THE STUDY OF THE CHARACTERIZATION INDICES OF FABRICS BY PRINCIPAL COMPONENT ANALYSIS METHOD
HRISTIAN Liliana
2017-05-01
The paper seeks to rank types of worsted fabrics for the manufacture of outerwear products by their characterization indices, using the mathematical model of Principal Component Analysis (PCA). There are a number of variables with a certain influence on the quality of fabrics, but some of these variables are more important than others, so it is useful to identify those variables for a better understanding of the factors which can lead to improved fabric quality. A solution to this problem is the application of a method of factorial analysis, the so-called Principal Component Analysis, with the final goal of establishing and analyzing those variables which significantly influence the internal structure of combed wool fabrics according to weave (armure) type. By applying PCA, a small number of linear combinations (principal components) is obtained from a set of variables describing the internal structure of the fabrics, which can hold as much information as possible from the original variables. Data analysis is an important initial step in decision making, allowing identification of the causes that lead to decision-making situations: transforming the initial data in order to extract useful information and to facilitate reaching conclusions. The process of data analysis can be defined as a sequence of steps aimed at formulating hypotheses, collecting primary information and validating it, constructing the mathematical model describing the phenomenon, and reaching conclusions about the behavior of this model.
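The PCA step described above — extracting a few linear combinations that retain most of the variance of the original indices — can be sketched generically (toy data driven by one latent factor, not the fabric measurements from the paper):

```python
import numpy as np

def pca(X, n_components):
    """Top principal components via SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)                       # center each index/variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T             # the linear combinations
    explained = (s ** 2) / (s ** 2).sum()         # variance ratio per component
    return scores, explained[:n_components]

# Toy data: six correlated 'characterization indices' driven by one latent factor.
rng = np.random.default_rng(2)
latent = rng.normal(size=(50, 1))
loadings = np.array([[1.0, 0.8, -0.6, 1.2, 0.9, -1.1]])
X = latent @ loadings + 0.1 * rng.normal(size=(50, 6))
scores, ratio = pca(X, 2)
```

When a handful of components explains most of the variance, as the first one does here, the remaining indices add little independent information — which is exactly the variable-screening argument the paper makes.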
Minimum Variance Portfolios in the Brazilian Equity Market
Alexandre Rubesam
2013-03-01
We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally weighted portfolio, (iii) the maximum Sharpe ratio portfolio, and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, making it easily replicable by individual and institutional investors alike.
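Given an estimated covariance matrix Σ, the global minimum-variance weights used in studies like this one follow in closed form as w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A small sketch with a hypothetical 3-asset covariance matrix (illustrative numbers, not Brazilian market data):

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio: w = inv(cov) @ 1, normalized to sum to 1."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # solve instead of explicit inverse
    return w / w.sum()

# Hypothetical annualized covariance of 3 asset returns
cov = np.array([[0.10, 0.02, 0.04],
                [0.02, 0.08, 0.01],
                [0.04, 0.01, 0.12]])

w = min_variance_weights(cov)
port_var = w @ cov @ w               # variance of the resulting portfolio
print(w, port_var)
```

By construction this portfolio's variance is no larger than that of any other fully invested portfolio under the same covariance estimate, including the equally weighted benchmark.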
Comparison of variance estimators for meta-analysis of instrumental variable estimates
Schmidt, A. F.; Hingorani, A. D.; Jefferis, B. J.; White, J.; Groenwold, R. H. H.; Dudbridge, F.; Ben-Shlomo, Y.; Chaturvedi, N.; Engmann, J.; Hughes, A.; Humphries, S.; Hypponen, E.; Kivimaki, M.; Kuh, D.; Kumari, M.; Menon, U.; Morris, R.; Power, C.; Price, J.; Wannamethee, G.; Whincup, P.
2016-01-01
Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two
Some methods of analysis and diagnostics of corroded components from nuclear power plant
Mogosan, S.; Radulescu, M.; Fulger, M.; Stefanescu, D.
2010-01-01
In nuclear power plants (NPPs) it is necessary to ensure long and safe operation, since the maintenance of these very complex installations and equipment is difficult and expensive. In this regard, the Analysis and Diagnostics Laboratory for Corroded Metal Components in Nuclear Facilities (LADICON) was accredited by RENAR and notified by CNCAN (National Commission for Nuclear Activities Control) as a testing laboratory for nuclear-grade materials. As part of the investigation and evaluation of the corrosion behavior of these materials, two types of test methods are used: long-duration corrosion tests, such as autoclaving at high temperature and pressure in different chemical media specific to NPPs, and accelerated methods, such as electrochemical techniques and accelerated chemical tests. This paper presents some methods of analysis of materials corrosion, and methods of assessing the corrosion of structural materials exposed to the specific operating conditions and environment of NPPs. The electrochemical measurements offer the following advantages: a) they provide a direct way to accelerate the corrosion processes without altering the environment; b) they can be used as a nondestructive tool for assessing the rate of corrosion; and c) they allow such investigations to be conducted both in situ and ex situ. By corroborating the environmental chemistry with the results obtained by the above methods on the films formed on the samples, it is possible to identify the types of corrosion of the materials and sometimes even the corrosion processes and mechanisms. (authors)
Beyond the Mean: Sensitivities of the Variance of Population Growth.
Trotter, Meredith V; Krishna-Kumar, Siddharth; Tuljapurkar, Shripad
2013-03-01
Populations in variable environments are described by both a mean growth rate and a variance of stochastic population growth. Increasing variance will increase the width of confidence bounds around estimates of population size, growth, and probability of and time to quasi-extinction. However, traditional sensitivity analyses of stochastic matrix models only consider the sensitivity of the mean growth rate. We derive an exact method for calculating the sensitivity of the variance in population growth to changes in demographic parameters. Sensitivities of the variance also allow a new sensitivity calculation for the cumulative probability of quasi-extinction. We apply this new analysis tool to an empirical dataset on at-risk polar bears to demonstrate its utility in conservation biology. We find that in many cases a change in life history parameters will increase both the mean and variance of population growth of polar bears. This counterintuitive behaviour of the variance complicates predictions about overall population impacts of management interventions. Sensitivity calculations for cumulative extinction risk factor in changes to both mean and variance, providing a highly useful quantitative tool for conservation management. The mean stochastic growth rate and its sensitivities do not fully describe the dynamics of population growth. The use of variance sensitivities gives a more complete understanding of population dynamics and facilitates the calculation of new sensitivities for extinction processes.
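The quantities involved can be illustrated numerically. The sketch below simulates a hypothetical 2-stage matrix model with randomly varying fecundity (illustrative parameters, not the paper's polar bear data) and approximates the sensitivity of the mean and variance of the stochastic log growth rate to a survival parameter by finite differences with common random numbers, rather than by the paper's exact derivation.

```python
import numpy as np

def growth_moments(s, f_mean=1.2, f_sd=0.4, T=5000, seed=0):
    """Mean and variance of the one-step stochastic log growth rate
    for a toy 2-stage matrix model with survival s and random fecundity."""
    rng = np.random.default_rng(seed)
    n = np.array([1.0, 1.0])
    logs = []
    for _ in range(T):
        f = max(rng.normal(f_mean, f_sd), 0.0)   # fecundity varies each year
        A = np.array([[0.0, f],
                      [s,   0.8]])
        n = A @ n
        g = n.sum()
        logs.append(np.log(g))
        n /= g                                   # renormalize to avoid overflow
    logs = np.array(logs[100:])                  # drop the transient
    return logs.mean(), logs.var()

# Finite-difference sensitivities (same seed = common random numbers)
m0, v0 = growth_moments(0.50)
m1, v1 = growth_moments(0.51)
print((m1 - m0) / 0.01, (v1 - v0) / 0.01)
```

A management action that raises survival here raises the mean growth rate, while its effect on the variance must be checked separately, which is the point the abstract makes.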
A. Bhushan
2015-07-01
In this paper, we address outliers in spatiotemporal data streams obtained from sensors placed across geographically distributed locations. Outliers may appear in such sensor data for various reasons, such as instrumental error and environmental change. Real-time detection of these outliers is essential to prevent the propagation of errors in subsequent analyses and results. Incremental Principal Component Analysis (IPCA) is one possible approach for detecting outliers in this type of spatiotemporal data stream. IPCA has been widely used in many real-time applications such as credit card fraud detection, pattern recognition, and image analysis. However, the suitability of applying IPCA to outlier detection in spatiotemporal data streams is unknown and needs to be investigated. To fill this research gap, this paper contributes by presenting two new IPCA-based outlier detection methods and performing a comparative analysis with the existing IPCA-based outlier detection methods to assess their suitability for spatiotemporal sensor data streams.
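The core idea behind PCA-based outlier detection is flagging points with a large reconstruction error in the learned subspace. The sketch below is a batch stand-in for the incremental (IPCA) variant the paper studies (in a true streaming setting the mean and components would be updated per mini-batch), run on synthetic correlated sensor readings with one injected outlier.

```python
import numpy as np

def reconstruction_outliers(X, n_components, threshold):
    """Flag rows whose PCA reconstruction error exceeds a threshold."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components]
    recon = (Xc @ V.T) @ V                    # project and back-project
    err = np.linalg.norm(Xc - recon, axis=1)  # residual outside the subspace
    return np.flatnonzero(err > threshold)

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=300)   # two correlated sensors
X[42] += np.array([0.0, 8.0, 0.0, 0.0, 0.0])      # inject one outlier

idx = reconstruction_outliers(X, n_components=4, threshold=3.0)
print(idx)
```

The outlier breaks the correlation between the two sensors, so it sticks out in the low-variance residual direction even though its raw value is not extreme in every coordinate.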
A possible method of carbon deposit mapping on plasma facing components using infrared thermography
Mitteau, R.; Spruytte, J.; Vallet, S.; Travere, J.M.; Guilhem, D.; Brosset, C.
2007-01-01
The material eroded from the surface of plasma facing components is partly redeposited close to high heat flux areas. At these locations, the deposit is heated by the plasma and the deposition pattern evolves depending on the operation parameters. The mapping of the deposit is still a matter of intense scientific activity, especially during the course of experimental campaigns. A method based on the comparison of surface temperature maps, obtained in situ by infrared cameras and by theoretical modelling, is proposed. The difference between the two is attributed to the thermal resistance added by deposited material, and is expressed as a deposit thickness. The method benefits from elaborate imaging techniques such as possibility theory and fuzzy logic. The results are consistent with deposit maps obtained by visual inspection during shutdowns.
Perrin, Stephane; Baranski, Maciej; Froehly, Luc; Albero, Jorge; Passilly, Nicolas; Gorecki, Christophe
2015-11-01
We report a simple method, based on intensity measurements, for the characterization of the wavefront and aberrations produced by micro-optical focusing elements. This method employs the setup presented earlier in [Opt. Express 22, 13202 (2014)] for measurements of the 3D point spread function, on which a basic phase-retrieval algorithm is applied. This combination allows for retrieval of the wavefront generated by the micro-optical element and, in addition, quantification of the optical aberrations through the wavefront decomposition with Zernike polynomials. The optical setup requires only an in-motion imaging system. The technique, adapted for the optimization of micro-optical component fabrication, is demonstrated by characterizing a planoconvex microlens.
Lattice Boltzmann method for multi-component, non-continuum mass diffusion
Joshi, Abhijit S; Peracchio, Aldo A; Grew, Kyle N; Chiu, Wilson K S
2007-01-01
Recently, there has been a great deal of interest in extending the lattice Boltzmann method (LBM) to model transport phenomena in the non-continuum regime. Most of these studies have focused on single-component flows through simple geometries. This work examines an ad hoc extension of a recently developed LBM model for multi-component mass diffusion (Joshi et al 2007 J. Phys. D: Appl. Phys. 40 2961) to model mass diffusion in the non-continuum regime. In order to validate the method, LBM results for ternary diffusion in a two-dimensional channel are compared with predictions of the dusty gas model (DGM) over a range of Knudsen numbers. A calibration factor based on the DGM is used in the LBM to correlate Knudsen diffusivity to pore size. Results indicate that the LBM can be a useful tool for predicting non-continuum mass diffusion (Kn > 0.001), but additional research is needed to extend the range of applicability of the algorithm for a larger parameter space. Guidelines are given on using the methodology described in this work to model non-continuum mass transport in more complex geometries where the DGM is not easily applicable. In addition, the non-continuum LBM methodology can be extended to three dimensions. An envisioned application of this technique is to model non-continuum mass transport in porous solid oxide fuel cell electrodes.
An optimized ensemble local mean decomposition method for fault detection of mechanical components
Zhang, Chao; Chen, Shuai; Wang, Jianguo; Li, Zhixiong; Hu, Chao; Zhang, Xiaogang
2017-01-01
Mechanical transmission systems have been widely adopted in most industrial applications, and issues related to the maintenance of these systems have attracted considerable attention in the past few decades. The recently developed ensemble local mean decomposition (ELMD) method shows satisfactory performance in fault detection of mechanical components for preventing catastrophic failures and reducing maintenance costs. However, the performance of ELMD often depends heavily on proper selection of its model parameters. To this end, this paper proposes an optimized ensemble local mean decomposition (OELMD) method to determine an optimum set of ELMD parameters for vibration signal analysis. In OELMD, an error index termed the relative root-mean-square error (Relative RMSE) is used to evaluate the decomposition performance of ELMD with a certain amplitude of the added white noise. Once a maximum Relative RMSE, corresponding to an optimal noise amplitude, is determined, OELMD then identifies the optimal noise bandwidth and ensemble number based on the Relative RMSE and signal-to-noise ratio (SNR), respectively. Thus, all three critical parameters of ELMD (i.e. noise amplitude, noise bandwidth, and ensemble number) are optimized by OELMD. The effectiveness of OELMD was evaluated using experimental vibration signals measured from three different mechanical components (i.e. the rolling bearing, gear and diesel engine) under faulty operating conditions.
Method to eliminate flux linkage DC component in load transformer for static transfer switch.
He, Yu; Mao, Chengxiong; Lu, Jiming; Wang, Dan; Tian, Bing
2014-01-01
Many industrial and commercial sensitive loads are subject to voltage sags and interruptions. The static transfer switch (STS), based on thyristors, is applied to improve power quality and reliability. However, the transfer will result in severe inrush current in the load transformer because of the DC component in the magnetic flux generated in the transfer process. The inrush current, which can reach 2-30 p.u., can cause misoperation of relay protective devices and bring potential damage to the transformer. The way to eliminate the DC component is to transfer the related phases when the residual flux linkage of the load transformer and the prospective flux linkage of the alternate source are equal. This paper analyzes how the flux linkage of each winding in the load transformer changes in the transfer process. Based on the residual flux linkage when the preferred source is completely disconnected, a method to calculate the proper time point to close each phase of the alternate source is developed. Simulation and laboratory experiment results are presented to show the effectiveness of the transfer method.
Application of risk-based methods for inspection of nuclear power plant components
Balkey, K.R.
1992-01-01
In-service inspections (ISIs) can play a significant role in minimizing equipment and structural failures. All aspects of inspections, i.e., objectives, method, timing, and the acceptance criteria for detected flaws, can affect the probability of component failure. Where ISI programs exist, they are primarily based on prior experience and engineering judgment. At best, some include an implicit consideration of risk (probability of failure multiplied by consequence). Since late 1988, a multidisciplinary American Society of Mechanical Engineers (ASME) Research Task Force on Risk-Based Inspection Guidelines has been addressing the general question of how to formally incorporate risk considerations into plans and requirements for the ISI of components and structural systems. The task force and steering committee that guided the project have concluded that appropriate analytical methods exist for evaluating and quantifying risks associated with pressure boundary and structural failures. With the support of about a dozen industry and government organizations, the research group has recommended a general methodology for establishing a risk-based inspection program that could be applied to any nuclear system or structural system.
A method for the preparation of a fuel, by the addition of one or more components to a base fuel
2013-01-01
The present invention relates to a method for the preparation of a fuel by the addition of one or more components to a base fuel, wherein the method comprises the following steps: i) providing a base fuel; ii) withdrawing aromatic components from a styrene/propylene oxide production plant; iii)
Rigge, Matthew B.; Gass, Leila; Homer, Collin G.; Xian, George Z.
2017-10-26
The National Land Cover Database (NLCD) provides thematic land cover and land cover change data at 30-meter spatial resolution for the United States. Although the NLCD is considered to be the leading thematic land cover/land use product and overall classification accuracy across the NLCD is high, performance and consistency in the vast shrub and grasslands of the Western United States is lower than desired. To address these issues and fulfill the needs of stakeholders requiring more accurate rangeland data, the USGS has developed a method to quantify these areas in terms of the continuous cover of several cover components. These components include the cover of shrub, sagebrush (Artemisia spp.), big sagebrush (Artemisia tridentata spp.), herbaceous, annual herbaceous, litter, and bare ground, and shrub and sagebrush height. To produce maps of component cover, we collected field data that were then associated with spectral values in WorldView-2 and Landsat imagery using regression tree models. The current report outlines the procedures and results of converting these continuous cover components to three thematic NLCD classes: barren, shrubland, and grassland. To accomplish this, we developed a series of indices and conditional models using continuous cover of shrub, bare ground, herbaceous, and litter as inputs. The continuous cover data are currently available for two large regions in the Western United States. Accuracy of the “cross-walked” product was assessed relative to that of NLCD 2011 at independent validation points (n=787) across these two regions. Overall thematic accuracy of the “cross-walked” product was 0.70, compared to 0.63 for NLCD 2011. The kappa value was considerably higher for the “cross-walked” product at 0.41 compared to 0.28 for NLCD 2011. Accuracy was also evaluated relative to the values of training points (n=75,000) used in the development of the continuous cover components. Again, the “cross-walked” product outperformed NLCD
The use of principal component, discriminant and rough sets analysis methods for radiological data
Seddeek, M.K.; Kozae, A.M.; Sharshar, T.; Badran, H.M.
2006-01-01
In this work, computational methods for finding clusters of multivariate data points were explored using principal component analysis (PCA), discriminant analysis (DA) and rough set analysis (RSA). The variables were the concentrations of four natural isotopes and the texture characteristics of 100 sand samples from the coast of North Sinai, Egypt. Beach and dune sands are the two types of samples included. These methods were used to reduce the dimensionality of the multivariate data and as classification and clustering methods. The results showed that the classification of sands in the environment of North Sinai depends on the radioactivity contents of the naturally occurring radioactive materials and not on the characteristics of the sand. The application of DA enables the creation of a classification rule for sand type, and it revealed that samples with highly negative values of the first score have the highest contamination of black sand. PCA revealed that radioactivity concentrations alone can be used to predict the classification of other samples. The results of RSA showed that only one of the concentrations of 238U, 226Ra and 232Th, together with the 40K content, can characterize the clusters along with the characteristics of the sand. Both PCA and RSA lead to the following conclusion: 238U, 226Ra and 232Th behave similarly. RSA revealed that one or two of them may be left out without affecting the body of knowledge.
Variance estimation in the analysis of microarray data
Wang, Yuedong
2009-04-01
Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.
Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.
Dazard, Jean-Eudes; Rao, J Sunil
2012-07-01
The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data: parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, and (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
Defect recognition in CFRP components using various NDT methods within a smart manufacturing process
Schumacher, David; Meyendorf, Norbert; Hakim, Issa; Ewert, Uwe
2018-04-01
The manufacturing process of carbon fiber reinforced polymer (CFRP) components is gaining a more and more significant role given the increasing amount of CFRPs used in industry today. The monitoring of the manufacturing process, and hence the reliability of the manufactured products, is one of the major challenges we need to face in the near future. Common defects which arise during the manufacturing process are, e.g., porosity and voids, which may lead to delaminations during operation and under load. Finding irregularities and classifying them as possible defects in an early stage of the manufacturing process is of high importance for the safety and reliability of the finished products, as well as of significant impact from an economical point of view. In this study we compare various NDT methods which were applied to similar CFRP laminate samples in order to detect and characterize regions of defective volume. Besides ultrasound, thermography and eddy current, different X-ray methods like radiography, laminography and computed tomography are used to investigate the samples. These methods are compared with the intention of evaluating their capability to reliably detect and characterize defective volume. Beyond the detection and evaluation of defects, we also investigate possibilities to combine various NDT methods within a smart manufacturing process in which the decision of which method shall be applied is inherent within the process. Is it possible to design an in-line or at-line testing process which can recognize defects reliably and reduce testing time and costs? This study aims to highlight opportunities for designing a smart NDT process synchronized with production, based on the concepts of smart production (Industry 4.0). A set of defective CFRP laminate samples and different NDT methods were used to demonstrate how effectively defects are recognized and how communication between interconnected NDT sensors and the manufacturing process could be organized.
Variance-based Salt Body Reconstruction
Ovcharenko, Oleg
2017-05-26
Seismic inversions of salt bodies are challenging when updating velocity models based on Born approximation-inspired gradient methods. We propose a variance-based method for velocity model reconstruction in regions complicated by massive salt bodies. The novel idea lies in retrieving useful information from simultaneous updates corresponding to different single frequencies. Instead of the commonly used averaging of single-iteration monofrequency gradients, our algorithm iteratively reconstructs salt bodies in an outer loop based on updates from a set of multiple frequencies after a few iterations of full-waveform inversion. The variance among these updates is used to identify areas where considerable cycle-skipping occurs. In such areas, we update velocities by interpolating maximum velocities within a certain region. The result of several recursive interpolations is later used as a new starting model to improve results of conventional full-waveform inversion. An application on part of the BP 2004 model highlights the evolution of the proposed approach and demonstrates its effectiveness.
Heterogeneity of variance and its implications on dairy cattle breeding
Milk yield data (n = 12307) from 116 Holstein-Friesian herds were grouped into three production environments based on the mean and standard deviation of herd 305-day milk yield and evaluated for within-herd variation using univariate animal model procedures. Variance components were estimated by derivative-free REML ...
Raju, P.P.
1980-05-01
This report summarizes the results of the study program to assess the benefits of nonlinear analysis methods in Light Water Reactor (LWR) component designs. The current study reveals that, despite its increased cost and other complexities, nonlinear analysis is a practical and valuable tool for the design of LWR components, especially under ASME Level D service conditions (faulted conditions), and it will greatly assist in the evaluation of the ductile fracture potential of pressure boundary components. Since nonlinear behavior is generally a local phenomenon, the design of complex components can be accomplished by substructuring isolated localized regions and evaluating them in detail using nonlinear analysis methods.
Speed Variance and Its Influence on Accidents.
Garber, Nicholas J.; Gadirau, Ravi
A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…
esfandiar Hassani Moghadam
2010-03-01
Few reports exist on the volatile oil components of the petals and herb, and on the components of the seed oil, of borage. This research was carried out to analyze and identify the volatile oil in the herb and petals, and the seed oil composition, of Borago officinalis L. in Lorestan province. Material and methods: Extraction of the essential oil from the petals was carried out by steam distillation using a Clevenger apparatus. A new SPME-GC/MS method was used for the extraction and identification of the volatile oil compounds in the borage herb. The oil of the seeds was extracted using a cold-press method. The chemical composition of the extracted oil was identified by GC/MS. Results: In the petals of borage only the carvacrol component was identified, and in the herb three components: carvacrol, bisabolone oxide and 2-phenylethyl benzoate. In the seed oil of borage 16 different components were separated and identified. The following components had the highest amounts in the seed oil: hexadecane, N,N-dimethylethanolamine, beta-D-glycoside, 3,6-glucurono-methyl, benzaldehyde 4-methyl, 3-hydroxytetrahydrofuran, hexadecanoic acid, heptanoic acid, gamma-butyrolactone and ethyl octadec-9-enoate. These major components account for 63.4% of all components in the borage seed oil, while the 7 remaining components account for only 9.5%; one unknown component (27.1%) was also detected. Conclusion: According to the results of this research, the volatile oil accounts for only a small fraction of the borage chemical composition. The results show that the seed oil of this species can be used for medicinal preparations. The cold-press method was found to be rapid and simple for the identification of seed oil components.
James M. Cheverud
2007-03-01
Comparisons of covariance patterns are becoming more common as interest in the evolution of relationships between traits and in the evolutionary phenotypic diversification of clades has grown. We present parallel analyses of covariance matrix similarity for cranial traits in 14 New World monkey genera using the Random Skewers (RS), T-statistics, and Common Principal Components (CPC) approaches. We find that the CPC approach is very powerful in that, with adequate sample sizes, it can be used to detect significant differences in matrix structure, even between matrices that are virtually identical in their evolutionary properties, as indicated by the RS results. We suggest that in many instances the assumption that population covariance matrices are identical be rejected out of hand. The more interesting and relevant question is: how similar are two covariance matrices with respect to their predicted evolutionary responses? This issue is addressed by the random skewers method described here.
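The random skewers comparison follows the standard definition: apply many random selection gradients b to both covariance matrices and average the vector correlation of the responses z = Gb. A minimal sketch with toy 2x2 matrices (the paper's cranial-trait matrices are much larger):

```python
import numpy as np

def random_skewers(G1, G2, n_vectors=1000, seed=0):
    """Mean correlation of the response vectors z = G @ b produced by two
    covariance matrices for random unit-length selection gradients b."""
    rng = np.random.default_rng(seed)
    p = G1.shape[0]
    B = rng.normal(size=(n_vectors, p))
    B /= np.linalg.norm(B, axis=1, keepdims=True)   # unit-length skewers
    R1, R2 = B @ G1, B @ G2                          # predicted responses
    cos = np.sum(R1 * R2, axis=1) / (
        np.linalg.norm(R1, axis=1) * np.linalg.norm(R2, axis=1))
    return cos.mean()

G = np.array([[1.0, 0.5],
              [0.5, 1.0]])
print(random_skewers(G, G))          # identical matrices give similarity 1
print(random_skewers(G, np.eye(2)))  # removing the covariance lowers it
```

Significance is usually judged by comparing the observed mean correlation against the distribution of correlations between random vectors, which is omitted here for brevity.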
Near-surface thermal characterization of plasma facing components using the 3-omega method
Dechaumphai, Edward; Barton, Joseph L.; Tesmer, Joseph R.; Moon, Jaeyun; Wang, Yongqiang; Tynan, George R.; Doerner, Russell P.; Chen, Renkun
2014-01-01
The near-surface regime plays an important role in the thermal management of plasma facing components in fusion reactors. Here, we applied a technique referred to as the '3ω' method to measure the thermal conductivity of near-surface regions damaged by ion irradiation. By modulating the frequency of the heating current in a micro-fabricated heater strip, the technique enables the probing of near-surface thermal properties. The technique was applied to measure the thermal conductivity of a thin ion-irradiated layer on a tungsten substrate, which was found to decrease by nearly 60% relative to pristine tungsten for a Cu ion dosage of 0.2 dpa.
Approximate zero-variance Monte Carlo estimation of Markovian unreliability
Delcoux, J.L.; Labeau, P.E.; Devooght, J.
1997-01-01
Monte Carlo simulation has become an important tool for the estimation of reliability characteristics, since conventional numerical methods are no longer efficient as the size of the system to be solved increases. However, evaluating the probability of occurrence of very rare events by simulation means playing a very large number of histories of the system, which leads to unacceptable computation times. Acceleration and variance reduction techniques therefore have to be worked out. We show in this paper how to write the equations of Markovian reliability as a transport problem, and how the well-known zero-variance scheme can be adapted to this application. However, such a method is always specific to the estimation of one quantity, while a Monte Carlo simulation allows estimates of several quantities to be performed simultaneously. Therefore, making the estimation of one of them more accurate could at the same time degrade the variance of the other estimations. We propose here a method to reduce the variance for several quantities simultaneously, by using probability laws that would lead to zero variance in the estimation of a mean of these quantities. Like the zero-variance scheme itself, the proposed method cannot be performed exactly. However, we show that simple approximations of it may be very efficient. (author)
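The change-of-measure idea behind zero-variance schemes can be shown on a generic rare-event problem rather than the Markovian reliability equations themselves: estimating P(X > 4) for a standard normal X. Sampling from a proposal shifted to where the event lives and reweighting by the likelihood ratio gives a usable estimate where naive sampling sees almost no hits.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Naive Monte Carlo: the event {X > 4} is so rare that few histories hit it
x = rng.normal(size=n)
naive = (x > 4).mean()

# Importance sampling from N(4, 1): likelihood ratio phi(y) / phi(y - 4)
y = rng.normal(loc=4.0, size=n)
w = np.exp(-4.0 * y + 8.0)          # exp(-y^2/2) / exp(-(y-4)^2/2)
is_est = np.mean((y > 4) * w)

print(naive, is_est)                # true value is about 3.17e-5
```

An exactly zero-variance scheme would sample from the (unknown) optimal density proportional to the integrand; practical schemes, like the approximate one proposed in the paper, use computable approximations of it.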
Kashani, Jamal; Pettet, Graeme John; Gu, YuanTong; Zhang, Lihai; Oloyede, Adekunle
2017-10-01
Single-phase porous materials contain multiple components that intermingle up to the ultramicroscopic level. Although the structures of porous materials have been simulated with agent-based methods, the available methods continue to produce patterns of distinguishable solid and fluid agents, which do not represent materials with indistinguishable phases. This paper introduces a new agent (hybrid agent) and a new category of rules (intra-agent rules) that can be used to create emergent structures that more accurately represent single-phase structures and materials. The novel hybrid agent carries the characteristics of the system's elements and is capable of changing within itself, while also responding to its neighbours as they change. As an example, the hybrid agent under one-dimensional cellular automata formalism in a two-dimensional domain is used to generate patterns that demonstrate striking morphological and characteristic similarities with porous saturated single-phase structures, where each agent of the 'structure' carries a semi-permeability property and consists of both fluid and solid in space and at all times. We conclude that the ability of the hybrid agent to change locally provides an enhanced protocol for simulating complex porous structures such as biological tissues, which could facilitate models for agent-based techniques and numerical methods.
Burdekin, F M
1988-12-31
This document deals with fracture mechanics methods used for the assessment of Light Water Reactor (LWR) components. The background to analysis methods using elastic plastic parameters is described. Several results obtained with these methods are presented as well as results of reliability analysis methods. (TEC). 27 refs.
Problems of variance reduction in the simulation of random variables
Lessi, O.
1987-01-01
The definition of the uniform linear generator is given and some of the most commonly used tests to evaluate the uniformity and the independence of the obtained draws are listed. The problem of calculating, through simulation, some moment W of a function of a random variable is then considered. The Monte Carlo method enables the moment W to be estimated and the estimator variance to be obtained. Some techniques for the construction of other estimators of W with reduced variance are introduced.
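As an illustration of the reduced-variance estimator constructions this record refers to, antithetic variates is one classical technique: pairing each uniform draw u with 1 - u induces negative correlation for monotone integrands. The exponential integrand and names below are assumed for the example, not taken from the record:

```python
import math
import random

def mean_and_estimator_variance(samples):
    n = len(samples)
    mean = sum(samples) / n
    sample_var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, sample_var / n  # variance of the sample mean itself

random.seed(1)
n = 20_000
g = math.exp  # target moment: W = E[g(U)] = e - 1 for U ~ Uniform(0, 1)

# Crude estimator: n independent draws.
crude = [g(random.random()) for _ in range(n)]

# Antithetic estimator: n/2 pairs (u, 1 - u) use the same total number of
# g-evaluations; g is monotone, so the two halves of each pair are negatively
# correlated and the variance of the pair average drops sharply.
antithetic = []
for _ in range(n // 2):
    u = random.random()
    antithetic.append(0.5 * (g(u) + g(1.0 - u)))

m_crude, v_crude = mean_and_estimator_variance(crude)
m_anti, v_anti = mean_and_estimator_variance(antithetic)
# Both means approximate e - 1 ≈ 1.718; the antithetic estimator variance is far smaller.
```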
Cumulative prospect theory and mean variance analysis. A rigorous comparison
Hens, Thorsten; Mayer, Janos
2012-01-01
We compare asset allocations derived for cumulative prospect theory (CPT) based on two different methods: maximizing CPT along the mean-variance efficient frontier and maximizing it without that restriction. We find that with normally distributed returns the difference is negligible. However, using standard asset allocation data of pension funds, the difference is considerable. Moreover, with derivatives like call options, the restriction to the mean-variance efficient frontier results in a siza...
Zhang, Z.; Werner, F.; Cho, H. -M.; Wind, G.; Platnick, S.; Ackerman, A. S.; Di Girolamo, L.; Marshak, A.; Meyer, Kerry
2016-01-01
The bi-spectral method retrieves cloud optical thickness (τ) and cloud droplet effective radius (re) simultaneously from a pair of cloud reflectance observations, one in a visible or near-infrared (VIS/NIR) band and the other in a shortwave infrared (SWIR) band. A cloudy pixel is usually assumed to be horizontally homogeneous in the retrieval. Ignoring sub-pixel variations of cloud reflectances can lead to a significant bias in the retrieved τ and re. In the literature, the retrievals of τ and re are often assumed to be independent and considered separately when investigating the impact of sub-pixel cloud reflectance variations on the bi-spectral method. As a result, the impact on τ is attributed only to the sub-pixel variation of VIS/NIR band reflectance and the impact on re only to the sub-pixel variation of SWIR band reflectance. In our new framework, we use the Taylor expansion of a two-variable function to understand and quantify the impacts of sub-pixel variances of VIS/NIR and SWIR cloud reflectances and their covariance on the τ and re retrievals. This framework takes into account the fact that the retrievals are determined by both VIS/NIR and SWIR band observations in a mutually dependent way. In comparison with previous studies, it provides a more comprehensive understanding of how sub-pixel cloud reflectance variations impact the τ and re retrievals based on the bi-spectral method. In particular, our framework provides a mathematical explanation of how the sub-pixel variation in the VIS/NIR band influences the re retrieval and why it can sometimes outweigh the influence of variations in the SWIR band and dominate the error in re retrievals, leading to a potential contribution of positive bias to the re retrieval. We test our framework using synthetic cloud fields from a large-eddy simulation and real observations from the Moderate Resolution Imaging Spectroradiometer. The predicted results based on our framework agree very well with the numerical simulations. Our framework can be used
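The second-order Taylor machinery described above can be sketched numerically: for a smooth two-variable retrieval r, the sub-pixel bias of a plane-parallel retrieval is approximately 0.5·r_xx·Var(x) + 0.5·r_yy·Var(y) + r_xy·Cov(x, y), evaluated at the pixel-mean reflectances. The quadratic "retrieval" below is a stand-in for the real bi-spectral lookup, so the function and variable names are purely illustrative:

```python
import numpy as np

def subpixel_bias(r, x, y, h=1e-4):
    """Second-order Taylor estimate of E[r(x, y)] - r(E[x], E[y]):
    bias ≈ 0.5*r_xx*Var(x) + 0.5*r_yy*Var(y) + r_xy*Cov(x, y),
    with second derivatives taken at the pixel-mean reflectances."""
    mx, my = x.mean(), y.mean()
    r0 = r(mx, my)
    # Central finite differences for the second derivatives.
    r_xx = (r(mx + h, my) - 2.0 * r0 + r(mx - h, my)) / h**2
    r_yy = (r(mx, my + h) - 2.0 * r0 + r(mx, my - h)) / h**2
    r_xy = (r(mx + h, my + h) - r(mx + h, my - h)
            - r(mx - h, my + h) + r(mx - h, my - h)) / (4.0 * h**2)
    cov = np.cov(x, y)
    return 0.5 * r_xx * cov[0, 0] + 0.5 * r_yy * cov[1, 1] + r_xy * cov[0, 1]

# Hypothetical smooth "retrieval" standing in for the bi-spectral lookup table.
retrieval = lambda vis, swir: vis**2 + 3.0 * vis * swir

# Synthetic sub-pixel reflectances with correlated VIS/NIR and SWIR variations.
rng = np.random.default_rng(0)
vis = 0.5 + 0.05 * rng.standard_normal(100_000)
swir = 0.3 + 0.04 * rng.standard_normal(100_000) + 0.5 * (vis - 0.5)

predicted = subpixel_bias(retrieval, vis, swir)
actual = retrieval(vis, swir).mean() - retrieval(vis.mean(), swir.mean())
# For a quadratic retrieval the Taylor prediction reproduces the true bias.
```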
HEYDER DINIZ SILVA
2000-01-01
The efficiency of four alternatives for the analysis of experiments in square lattices, with respect to the precision of variance component estimation, was studied through computational simulation of data: (i) intrablock analysis of the lattice with adjusted treatments (first analysis); (ii) analysis of the lattice as a randomized complete block design (second analysis); (iii) intrablock analysis of the lattice with non-adjusted treatments (third analysis); (iv) analysis of the lattice as a randomized complete block design, using the adjusted treatment means obtained from the analysis with recovery of interblock information, and taking the average effective variance of that lattice analysis as the residual mean square (fourth analysis). The results show that the intrablock model of lattice analysis should be used to estimate variance components whenever the relative efficiency of the lattice design over the randomized complete block design exceeds 100%; otherwise, the randomized complete block model should be chosen. The fourth alternative of analysis is not recommended in either of the two situations.
Linearly decoupled energy-stable numerical methods for multi-component two-phase compressible flow
Kou, Jisheng; Sun, Shuyu; Wang, Xiuhua
2017-01-01
involved in the discrete momentum equation to ensure a consistency relationship with the mass balance equations. Moreover, we propose a component-wise SAV approach for a multi-component fluid, which requires solving a sequence of linear, separate mass
Hu, Y.; Li, H.; Liao, X
2016-01-01
This study determines the early deterioration condition of critical components for a wind turbine generator system (WTGS). Due to the uncertain nature of the fluctuation and intermittence of wind, early deterioration condition evaluation poses a challenge to traditional vibration-based methods; this study instead proposes an evaluation method of early deterioration condition for critical components based only on temperature characteristic parameters. First, the dynamic threshold of the deterioration degree function was proposed by analyzing the operational data between temperature and rotor speed. Second, a probability evaluation method of early deterioration condition was presented. Finally, two cases showed the validity of the proposed probability evaluation method in detecting early deterioration condition and in tracking further deterioration of the critical components.
Hybrid biasing approaches for global variance reduction
Wu, Zeyun; Abdel-Khalik, Hany S.
2013-01-01
A new variant of Monte Carlo—deterministic (DT) hybrid variance reduction approach based on Gaussian process theory is presented for accelerating convergence of Monte Carlo simulation and compared with Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) approach implemented in the SCALE package from Oak Ridge National Laboratory. The new approach, denoted the Gaussian process approach, treats the responses of interest as normally distributed random processes. The Gaussian process approach improves the selection of the weight windows of simulated particles by identifying a subspace that captures the dominant sources of statistical response variations. Like the FW-CADIS approach, the Gaussian process approach utilizes particle importance maps obtained from deterministic adjoint models to derive weight window biasing. In contrast to the FW-CADIS approach, the Gaussian process approach identifies the response correlations (via a covariance matrix) and employs them to reduce the computational overhead required for global variance reduction (GVR) purpose. The effective rank of the covariance matrix identifies the minimum number of uncorrelated pseudo responses, which are employed to bias simulated particles. Numerical experiments, serving as a proof of principle, are presented to compare the Gaussian process and FW-CADIS approaches in terms of the global reduction in standard deviation of the estimated responses. - Highlights: ► Hybrid Monte Carlo Deterministic Method based on Gaussian Process Model is introduced. ► Method employs deterministic model to calculate responses correlations. ► Method employs correlations to bias Monte Carlo transport. ► Method compared to FW-CADIS methodology in SCALE code. ► An order of magnitude speed up is achieved for a PWR core model.
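The effective-rank idea at the core of this abstract can be sketched in a few lines: when many responses are driven by a small number of latent sources, the eigenvalue spectrum of their covariance matrix reveals how many uncorrelated pseudo responses are needed. The data below are an invented illustration, not output of the SCALE package:

```python
import numpy as np

# Assumed illustration: 50 responses generated from only 3 latent sources,
# so the response covariance matrix has low effective rank.
rng = np.random.default_rng(3)
latent = rng.standard_normal((3, 1_000))            # 3 uncorrelated pseudo responses
mixing = rng.standard_normal((50, 3))               # how responses mix the sources
responses = mixing @ latent + 1e-6 * rng.standard_normal((50, 1_000))

cov = np.cov(responses)                             # 50 x 50 response covariance
eigvals = np.linalg.eigvalsh(cov)[::-1]             # eigenvalues, descending
effective_rank = int(np.sum(eigvals > 1e-6 * eigvals[0]))
# effective_rank recovers 3: the minimum number of uncorrelated pseudo
# responses that capture the dominant sources of response variation.
```

In the paper's setting these pseudo responses, rather than all individual responses, are then used to bias the simulated particles.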
Schubert, Peter; Culibrk, Brankica; Karwal, Simrath; Serrano, Katherine; Levin, Elena; Bu, Daniel; Bhakta, Varsha; Sheffield, William P; Goodrich, Raymond P; Devine, Dana V
2015-04-01
Pathogen inactivation (PI) technologies are currently licensed for use with platelet (PLT) and plasma components. Treatment of whole blood (WB) would be of benefit to the blood banking community by saving time and costs compared to individual component treatment. However, no paired, pool-and-split study directly assessing the impact of WB PI on the subsequently produced components has yet been reported. In a "pool-and-split" study, WB either was treated with riboflavin and ultraviolet (UV) light or was kept untreated as control. The buffy coat (BC) method produced plasma, PLT, and red blood cell (RBC) components. PLT units arising from the untreated WB study arm were treated with riboflavin and UV light on day of production and compared to PLT concentrates (PCs) produced from the treated WB units. A panel of common in vitro variables for the three types of components was used to monitor quality throughout their respective storage periods. PCs derived from the WB PI treatment were of significantly better quality than treated PLT components for most variables. RBCs produced from the WB treatment deteriorated earlier during storage than untreated units. Plasma components showed a 3% to 44% loss in activity for several clotting factors. Treatment of WB with riboflavin and UV before production of components by the BC method shows a negative impact on all three blood components. PLT units produced from PI-treated WB exhibited less damage compared to PLT component treatment. © 2014 AABB.
Effects of nitrogen application method and weed control on corn yield and yield components.
Sepahvand, Pariya; Sajedi, Nurali; Mousavi, Seyed Karim; Ghiasvand, Mohsen
2014-04-01
The effects of nitrogen fertilizer application and different methods of weed control on yield and yield components of corn were evaluated in Khorramabad in 2011. The experiment was conducted as a split plot based on a randomized complete block design with 3 replications. Nitrogen application was the main plot at 4 levels (no nitrogen, broadcast nitrogen, banded nitrogen and sprayed nitrogen) and weed control method was the subplot at 4 levels (no weed control, application of Equip herbicide, one hand weeding, and application of Equip herbicide + one hand weeding). Results showed that the effects of nitrogen fertilizer application were significant on grain and forage yield, 100-seed weight, harvest index, grain number per row and cob weight per plant. Grain yield increased by 91.4 and 3.9% with banded and broadcast nitrogen application, respectively, compared to the no-fertilizer treatment, indicating the improved efficiency of nitrogen utilization with banding. Grain yield, harvest index, seed rows per cob, seeds per row and cob weight were increased by weed control. With the Equip herbicide + hand weeding treatment, corn grain yield increased by 126% in comparison to the weedy control, reflecting the intensity of weed competition with corn. The highest corn grain yield (6758 kg ha-1) was obtained with banded nitrogen fertilizer and Equip herbicide + one hand weeding.
Jian, Le; Cao, Wang; Jintao, Yang; Yinge, Wang
2018-04-01
This paper describes the design of a dynamic voltage restorer (DVR) that can simultaneously protect several sensitive loads from voltage sags in a region of an MV distribution network. A novel reference voltage calculation method based on zero-sequence voltage optimisation is proposed for this DVR to optimise cost-effectiveness in compensation of voltage sags with different characteristics in an ungrounded neutral system. Based on a detailed analysis of the characteristics of voltage sags caused by different types of faults and the effect of the wiring mode of the transformer on these characteristics, the optimisation target of the reference voltage calculation is presented with several constraints. The reference voltages under all types of voltage sags are calculated by optimising the zero-sequence component, which can reduce the degree of swell in the phase-to-ground voltage after compensation to the maximum extent and can improve the symmetry degree of the output voltages of the DVR, thereby effectively increasing the compensation ability. The validity and effectiveness of the proposed method are verified by simulation and experimental results.
Meuschke, R.E.; Wolfe, D.L.
1982-01-01
This invention relates to an apparatus and a method for cutting, within a shielding confinement, the irradiated components of a nuclear steam generator to reduce such components to a size to permit their subsequent removal from the containment structure of the generator
Variance squeezing and entanglement of the XX central spin model
El-Orany, Faisal A A; Abdalla, M Sebawe
2011-01-01
In this paper, we study the quantum properties of a system that consists of a central atom interacting with surrounding spins through Heisenberg XX couplings of equal strength. Employing the Heisenberg equations of motion, we derive an exact solution for the dynamical operators. We consider the central atom and its surroundings to be initially prepared in the excited state and in the coherent spin state, respectively. For this system, we investigate the evolution of variance squeezing and entanglement. Nonclassical effects are observed in the behavior of all components of the system. The atomic variance can exhibit a revival-collapse phenomenon depending on the value of the detuning parameter.
A finite element method based microwave heat transfer modeling of frozen multi-component foods
Pitchai, Krishnamoorthy
Microwave heating is fast and convenient, but is highly non-uniform. Non-uniform heating in microwave cooking affects not only food quality but also food safety. Most food industries develop microwavable food products based on a "cook-and-look" approach. This approach is time-consuming, labor-intensive and expensive, and may not result in an optimal food product design that assures food safety and quality. Design of microwavable food can be realized through a simulation model which describes the physical mechanisms of microwave heating in mathematical expressions. The objective of this study was to develop a microwave heat transfer model to predict spatial and temporal temperature profiles of various heterogeneous foods, such as a multi-component meal (chicken nuggets and mashed potato), a multi-component and multi-layered meal (lasagna), and a multi-layered food with active packages (pizza), during microwave heating. A microwave heat transfer model was developed by solving electromagnetic and heat transfer equations using the finite element method in commercially available COMSOL Multiphysics v4.4 software. The model included detailed geometry of the cavity, phase change, and rotation of the food on the turntable. The predicted spatial surface temperature patterns and temporal profiles were validated against experimental temperature profiles obtained using a thermal imaging camera and fiber-optic sensors. The predicted spatial surface temperature profile of different multi-component foods was in good agreement with the corresponding experimental profiles in terms of hot and cold spot patterns. The root mean square error values of temporal profiles ranged from 5.8 °C to 26.2 °C in chicken nuggets, compared to 4.3 °C to 4.7 °C in mashed potatoes. In frozen lasagna, root mean square error values at six locations ranged from 6.6 °C to 20.0 °C for 6 min of heating. A microwave heat transfer model was developed to include susceptor-assisted microwave heating of a
A Ardakani
2016-12-01
flowering (25P+25V+50F), and 25% at planting + 50% at vegetative stage and 25% at early boll development (25P+50V+25B) as the subplot. The seeds planted had been acid-delinted and treated with chemicals against seed and seedling diseases. Plots consisted of six rows, spaced 0.5 m between rows and 0.2 m between plants (10 plants m-2), and 6 m in length. To evaluate yield components of cotton, including plant height, sympodial branch number, boll number and boll weight, 10 individual plants were selected randomly from the final harvest area. At harvesting time, one square meter at the beginning and half a meter around each plot was removed to exclude marginal effects. The remaining area was harvested by hand to determine lint and biological yield. Seed-cotton samples were ginned to separate the fiber (lint) from the seed. Lint percentage (%) was calculated as the weight of lint relative to the weight of the seed-cotton. The statistical analyses were performed with SAS software Ver. 9.1. Mean separation was done through the Fisher least significant difference (FLSD) test at alpha 0.05. Results and Discussion: Analysis of variance showed that boll weight, seed cotton yield and biological yield were significantly affected by potassium rate, whereas plant height, number of sympodial branches, boll number and lint percentage were not. All traits were affected by potassium application method except plant height and lint percentage. Plant height, boll weight, seed cotton yield and lint percentage were affected by the interaction of potassium rate and application method. Increasing the K level up to 150 kg ha-1 increased boll weight (23.64%), seed cotton yield (17.67%) and biological yield (9.86%) in comparison with the application of 75 kg ha-1. Plant height, sympodial branch number and lint percentage did not respond to K rate.
K application as 25% at planting + 25% at vegetative stage (5-8 leaf stage), 25% at first flowering and 25% at early boll development (25P+25V+25F+25B) had the highest boll weight, seed
Robust estimation of the noise variance from background MR data
Sijbers, J.; Den Dekker, A.J.; Poot, D.; Bos, R.; Verhoye, M.; Van Camp, N.; Van der Linden, A.
2006-01-01
In the literature, many methods are available for estimation of the variance of the noise in magnetic resonance (MR) images. A commonly used method, based on the maximum of the background mode of the histogram, is revisited and a new, robust, and easy to use method is presented based on maximum
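A minimal sketch of the histogram-mode idea mentioned in this abstract: the background of a magnitude MR image is Rayleigh distributed, and the Rayleigh density peaks exactly at its σ parameter, so the location of the background histogram maximum estimates the noise standard deviation. This is a simplification of the revisited method; the sample size and bin count below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma_true = 20.0
n = 200_000

# Background of a magnitude MR image: the magnitude of complex Gaussian noise
# with standard deviation sigma per channel is Rayleigh(sigma) distributed.
background = np.hypot(sigma_true * rng.standard_normal(n),
                      sigma_true * rng.standard_normal(n))

# The Rayleigh pdf r/sigma^2 * exp(-r^2 / (2 sigma^2)) peaks at r = sigma,
# so the bin at the maximum of the background histogram estimates sigma.
counts, edges = np.histogram(background, bins=100)
k = int(np.argmax(counts))
sigma_hat = 0.5 * (edges[k] + edges[k + 1])  # centre of the modal bin
```

The raw histogram mode is noisy, which is why robust variants (e.g. fitting around the maximum) are preferred in practice.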
Yoshizawa, Terutaka; Zou, Wenli; Cremer, Dieter
2016-11-01
The analytical energy gradient and Hessian of the two-component Normalized Elimination of the Small Component (2c-NESC) method with regard to the components of the electric field are derived and used to calculate spin-orbit coupling (SOC) corrected dipole moments and dipole polarizabilities of molecules, which contain elements with high atomic number. Calculated 2c-NESC dipole moments and isotropic polarizabilities agree well with the corresponding four-component-Dirac Hartree-Fock or density functional theory values. SOC corrections for the electrical properties are in general small, but become relevant for the accurate prediction of these properties when the molecules in question contain sixth and/or seventh period elements (e.g., the SO effect for At2 is about 10% of the 2c-NESC polarizability). The 2c-NESC changes in the electric molecular properties are rationalized in terms of spin-orbit splitting and SOC-induced mixing of frontier orbitals with the same j = l + s quantum numbers.
Allowable variance set on left ventricular function parameter
Zhou Li'na; Qi Zhongzhi; Zeng Yu; Ou Xiaohong; Li Lin
2010-01-01
Purpose: To evaluate the influence of allowable variance settings on left ventricular function parameters in arrhythmia patients during gated myocardial perfusion imaging. Method: 42 patients with evident arrhythmia underwent myocardial perfusion SPECT; three different allowable variances (20%, 60% and 100%) were set before acquisition for every patient, and the acquisitions were performed simultaneously. After reconstruction with Astonish, end-diastolic volume (EDV), end-systolic volume (ESV) and left ventricular ejection fraction (LVEF) were computed with Quantitative Gated SPECT (QGS). The EDV, ESV and LVEF values were compared by analysis of variance using SPSS software. Result: There was no statistical difference between the three groups. Conclusion: For arrhythmia patients undergoing gated myocardial perfusion imaging, the allowable variance setting has no statistically significant effect on the EDV, ESV and LVEF values. (authors)
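The comparison described in this abstract can be sketched as a one-way analysis of variance (the study's simultaneous acquisitions on the same patients would strictly call for a repeated-measures design, and the EDV numbers below are invented for illustration, not the study's data):

```python
from scipy.stats import f_oneway

# Hypothetical EDV values (mL) under the three allowable-variance settings;
# these numbers are invented for illustration, not taken from the study.
edv_20 = [82, 95, 110, 74, 121, 98, 105, 88]
edv_60 = [84, 93, 112, 75, 119, 99, 103, 90]
edv_100 = [83, 96, 109, 76, 122, 97, 106, 87]

stat, p = f_oneway(edv_20, edv_60, edv_100)
# A p-value well above 0.05 mirrors the paper's finding of no statistical
# difference between the three allowable-variance groups.
```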
Minimum variance Monte Carlo importance sampling with parametric dependence
Ragheb, M.M.H.; Halton, J.; Maynard, C.W.
1981-01-01
An approach for Monte Carlo importance sampling with parametric dependence is proposed. It depends upon obtaining, by proper weighting over a single stage, the overall functional dependence of the variance on the importance function parameter over a broad range of its values. Results corresponding to minimum variance are adopted and other results rejected. Numerical calculations for the estimation of integrals are compared to crude Monte Carlo. The results explain the occurrence of effective biases (even though the theoretical bias is zero) and infinite variances which arise in calculations involving severe biasing and a moderate number of histories. Extension to particle transport applications is briefly discussed. The approach constitutes an extension of a theory on the application of Monte Carlo to the calculation of functional dependences, introduced by Frolov and Chentsov, to biasing or importance sampling calculations; and is a generalization which avoids nonconvergence to the optimal values in some cases of a multistage method for variance reduction introduced by Spanier. (orig.)
Variance Function Partially Linear Single-Index Models
Lian, Heng; Liang, Hua; Carroll, Raymond J
2015-01-01
We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function.
Evolution of Genetic Variance during Adaptive Radiation.
Walter, Greg M; Aguirre, J David; Blows, Mark W; Ortiz-Barrientos, Daniel
2018-04-01
Genetic correlations between traits can concentrate genetic variance into fewer phenotypic dimensions that can bias evolutionary trajectories along the axis of greatest genetic variance and away from optimal phenotypes, constraining the rate of evolution. If genetic correlations limit adaptation, rapid adaptive divergence between multiple contrasting environments may be difficult. However, if natural selection increases the frequency of rare alleles after colonization of new environments, an increase in genetic variance in the direction of selection can accelerate adaptive divergence. Here, we explored adaptive divergence of an Australian native wildflower by examining the alignment between divergence in phenotype mean and divergence in genetic variance among four contrasting ecotypes. We found divergence in mean multivariate phenotype along two major axes represented by different combinations of plant architecture and leaf traits. Ecotypes also showed divergence in the level of genetic variance in individual traits and the multivariate distribution of genetic variance among traits. Divergence in multivariate phenotypic mean aligned with divergence in genetic variance, with much of the divergence in phenotype among ecotypes associated with changes in trait combinations containing substantial levels of genetic variance. Overall, our results suggest that natural selection can alter the distribution of genetic variance underlying phenotypic traits, increasing the amount of genetic variance in the direction of natural selection and potentially facilitating rapid adaptive divergence during an adaptive radiation.
Variance-based sensitivity indices for models with dependent inputs
Mara, Thierry A.; Tarantola, Stefano
2012-01-01
Computational models are intensively used in engineering for risk analysis or prediction of future outcomes. Uncertainty and sensitivity analyses are of great help in these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs only a few are proposed in the literature in the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is set and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and ANOVA-representations of the model output. In the applications, we show the interest of the new sensitivity indices for model simplification setting. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.
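For context, a pick-freeze sketch of the classical first-order Sobol indices in the independent-input case, which the proposed dependent-input indices generalise (the linear toy model and function names are assumptions for illustration):

```python
import numpy as np

def first_order_sobol(model, d, n=100_000, seed=0):
    """Pick-freeze estimator of the first-order Sobol indices S_i = V_i / V(Y)
    for a model with d independent standard-normal inputs (the classical
    independent-input setting that dependent-input indices generalise)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, d))
    B = rng.standard_normal((n, d))
    yA = model(A)
    varY = yA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = B.copy()
        ABi[:, i] = A[:, i]  # keep ("freeze") input i, resample all the others
        S[i] = np.cov(yA, model(ABi))[0, 1] / varY
    return S

# Toy additive model Y = X1 + 2*X2, for which S1 = 1/5 and S2 = 4/5 exactly.
S = first_order_sobol(lambda X: X[:, 0] + 2.0 * X[:, 1], d=2)
```

With dependent inputs this decomposition is no longer univocal, which is exactly the gap the abstract's orthogonalisation-based indices address.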
Power Estimation in Multivariate Analysis of Variance
Jean François Allaire
2007-09-01
Power is often overlooked in designing multivariate studies for the simple reason that it is believed to be too complicated. In this paper, it is shown that power estimation in multivariate analysis of variance (MANOVA) can be approximated using an F distribution for the three popular statistics (Hotelling-Lawley trace, Pillai-Bartlett trace, Wilks' likelihood ratio). Consequently, the same procedure as in any statistical test can be used: computation of the critical F value, computation of the noncentrality parameter (as a function of the effect size) and, finally, estimation of power using a noncentral F distribution. Various numerical examples are provided which help to understand and to apply the method. Problems related to post hoc power estimation are discussed.
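The three-step procedure described in this abstract (critical F value, noncentrality parameter, noncentral F tail probability) can be sketched directly. The degrees of freedom and noncentrality below are illustrative assumptions, not the paper's worked examples:

```python
from scipy.stats import f, ncf

# Sketch of the procedure above: power of an F-approximated test.
def f_test_power(df1, df2, noncentrality, alpha=0.05):
    """Power = P(F' > F_crit), where F' is noncentral F."""
    f_crit = f.ppf(1.0 - alpha, df1, df2)   # critical value under H0
    return 1.0 - ncf.cdf(f_crit, df1, df2, noncentrality)

# e.g. a Pillai-trace F approximation might give df of this order
# for 3 groups, 4 response variables, 20 subjects per group:
power = f_test_power(df1=8, df2=104, noncentrality=20.0)
print(round(power, 3))
```

Power rises with the noncentrality parameter (i.e., with effect size and sample size) and collapses to the significance level alpha when the noncentrality is zero.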
Durocher, A.; Vignal, N.; Escourbiac, F.; Farjon, J.L.; Schlosser, J. [CEA Cadarache, Dept. de Recherches sur la Fusion Controlee, 13 - Saint-Paul-lez-Durance (France); Cismondi, F. [Toulon Univ., 83 - La Garde (France)
2004-07-01
Among Non-Destructive Examination (NDE) techniques, active infrared thermography is becoming recognised as a method available today for improving quality control of many materials and structures involved in heat transfer. Infrared thermography makes it possible to characterise the bond between two materials having different thermophysical properties. In order to increase the defect detection limit of the SATIR test bed, several possibilities have been evaluated to improve the infrared thermography inspection. The implementation in 2003 of a micro-bolometer camera and improvements to thermo-signal processing considerably increased the detection sensitivity of the SATIR facility. The quality and spatial stability of the infrared image and the detection of edge defects have also been improved. The coupling on the same test bed of the SATIR method with lock-in thermography is evaluated in this paper. An improvement of the global reliability is expected from merging the data produced by the two thermal excitation sources. A new enhanced facility named SATIRPACA has been designed for the full Non-Destructive Examination of the High Heat Flux ITER components, taking these main improvements into account. Such systematic acceptance tests clearly need tools for quality control of critical parts. (authors)
A method for the purification of natural gas from acidic components
Grinman, B.Kh.
1981-01-01
In this method of purifying natural gas of acidic components by injecting it into a layer of natural absorbers, in order to increase the level of CO2 recovery, a water-bearing terrigenous layer containing silicates, carbonates and sulfates of alkaline-earth metals, with formation water of pH 7.0 to 9.0, is used. Example: specimens of rock from the Shatlyk deposit, saturated with formation water with a general mineralization of 19.8 grams per liter, and natural gas from the Northern Dengiskul deposit with a CO2 content of 3.2 volume percent were used. The installation with the core specimens was placed in a thermostat and blown through with helium until the air was completely displaced; afterwards, 14.1 liters of natural gas were supplied. The initial pressure was 40.1 kilograms-force per square centimeter, the test temperature was 50°C, and the duration of the experiment was 20 days. After finishing the test, the amount of CO2 left in the gas and the amount of CO2 absorbed by the rock were determined. The amount of CO2 which reacted with the rock was 60% of the total CO2 in the natural gas; the amount of CO2 absorbed by the rock was 10%.
Olivecrona, H. [Soedersjukhuset, Stockholm (Sweden). Dept. of Hand Surgery; Weidenhielm, L. [Karolinska Hospital, Stockholm (Sweden). Dept. of Orthopedics; Olivecrona, L. [Karolinska Hospital, Stockholm (Sweden). Dept. of Radiology; Noz, M.E. [New York Univ. School of Medicine, NY (United States). Dept. of Radiology; Maguire, G.Q. [Royal Inst. of Tech., Kista (Sweden). Inst. for Microelectronics and Information Technology; Zeleznik, M. P. [Univ. of Utah, Salt Lake City, UT (United States). Dept. of Radiation Oncology; Svensson, L. [Royal Inst. of Tech., Stockholm (Sweden). Dept. of Mathematics; Jonson, T. [Eskadern Foeretagsutveckling AB, Goeteborg (Sweden)
2003-03-01
Purpose: 3D detection of centerpoints of prosthetic cup and head after total hip arthroplasty (THA) using CT. Material and Methods: Two CT examinations, 10 min apart, were obtained from each of 10 patients after THA. Two independent examiners placed landmarks in images of the prosthetic cup and head. All landmarking was repeated after 1 week. Centerpoints were calculated and compared. Results: Within volumes, all measurements of centerpoints of cup and head fell, with a 95% confidence, within one CT-voxel of any other measurement of the same object. Across two volumes, the mean error of distance between center of cup and prosthetic head was 1.4 mm (SD 0.73). Intra- and interobserver 95% accuracy limit was below 2 mm within and below 3 mm across volumes. No difference between intra- and interobserver measurements occurred. A formula for converting finite sets of point landmarks in the radiolucent tread of the cup to a centerpoint was stable. The percent difference of the landmark distances from a calculated spherical surface was within one CT-voxel. This data was normally distributed and not dependent on observer or trial. Conclusion: The true 3D position of the centers of cup and prosthetic head can be detected using CT. Spatial relationship between the components can be analyzed visually and numerically.
Someya, Harushi; Mori, Yuichi; Abe, Masahiro; Machida, Isamu; Hasegawa, Atsushi; Yoshie, Osamu
Due to the deregulation of the financial industry, bank branches need to shift from an operations-oriented to a sales-oriented footing. To support this movement, new banking branch systems are being developed. The main characteristic of the new systems is that form operations traditionally performed at each branch are brought into a centralized operation center, for the purpose of rationalization and efficiency of form processing. The branches handle a wide variety of forms. The forms can in many cases be described by common items, but the items embody different business logic and each form has different relations among its items. There is also a need for users to develop the client applications themselves. Consequently, the challenge is to arrange a development environment that is highly reusable, easily customizable, and developable by users. We propose a client application architecture that has a loosely coupled component connection method and allows applications to be developed by describing only the screen configurations and their transitions in XML documents. By adopting our architecture, we developed client applications of the centralized operation center for the latest banking branch system. Our experiments demonstrate good performance.
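The idea of declaring screen configurations and transitions in XML can be sketched as follows. The element names, screen ids, and events here are hypothetical illustrations, not the authors' schema: a tiny driver reads an XML declaration and interprets it as a screen-transition state machine.

```python
import xml.etree.ElementTree as ET

# Hypothetical screen/transition declaration (our own element names,
# not the paper's schema), interpreted by a minimal driver.
CONFIG = """
<application start="login">
  <screen id="login">
    <transition on="ok" to="menu"/>
  </screen>
  <screen id="menu">
    <transition on="forms" to="form-entry"/>
    <transition on="logout" to="login"/>
  </screen>
  <screen id="form-entry">
    <transition on="submit" to="menu"/>
  </screen>
</application>
"""

root = ET.fromstring(CONFIG)
# Flatten the declaration into a (screen, event) -> next-screen table
transitions = {
    (s.get("id"), t.get("on")): t.get("to")
    for s in root.findall("screen")
    for t in s.findall("transition")
}

def run(events):
    state = root.get("start")
    for e in events:
        state = transitions[(state, e)]
    return state

print(run(["ok", "forms", "submit", "logout"]))  # back at "login"
```

The point of the design is that adding a screen or rewiring a flow changes only the XML document, not the driver code, which is what makes such an architecture customizable by users.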
Methods and means of the radioisotope flaw detection of the nuclear power reactors components
Dekopov, A.S.; Majorov, A.N.; Firsov, V.G.
1979-01-01
Methods and means for the radioisotope flaw detection of nuclear reactor pressure vessels and structural components of the reactor circuit are considered. Control methods are described both for the fabrication process of power reactor assemblies and for the scheduled preventive repair of nuclear power station equipment during operation. A methodological basis is given for the technology of radiation control of welded joints of the pressure vessel branch pipes of the WWER-440 and WWER-1000 reactors during assembly and operation, and of the joints of tubes with the tube plate of the steam generator during fabrication. The radioisotope flaw detection methods used during operation take into consideration the influence of the radioisotope background and ensure the sensitivity demanded by the control regulations. Control methods for welded joints of the steam generators of nuclear power plants are based on the simultaneous examination of all joints using shaped radiographic plate-holders. Special gamma flaw detection equipment has been developed for control of the welded joints of the main branch pipes. Design peculiarities of the flaw detection installations are given. These installations are equipped with a system for emergency return of the radiation source from the exposure position to the storage position, and have automatic exposure meters for determining the exposure time. Successful operation of such installations in Finland, during assembly of the equipment for the nuclear reactor of the nuclear power plant ''Loviisa-1'', and in the USSR at the Novovoronezh nuclear power plant, has shown the possibility of detecting flaws with dimensions of about 1% of the equipment used. For control of welded joints of tubes with tube plates at the steam generators, portable flaw detectors are used. The sensitivity of these flaw detectors towards detection of the wire standards has
Zerbst, U.; Beeck, F.; Scheider, I.; Brocks, W. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Werkstofforschung
1998-11-01
Under the roof of SINTAP (Structural Integrity Assessment Procedures for European Industry), a European BRITE-EURAM project, a study is being carried out into the possibility of establishing, on the basis of existing models, a standard European flaw assessment method. The R6 routine and the ETM are important existing examples in this context. The paper presents the two methods, explaining their advantages and shortcomings as well as their common features. Their applicability is shown by experiments with two pressure vessels subject to internal pressure and flawed by a surface crack or a through-wall crack, respectively. Both the R6 and ETM results have been compared with results of component tests carried out in the 1980s at TWI and are found to yield acceptably conservative, i.e. sufficiently safe, lifetime predictions, as they do not give lifetime assessments which unduly underestimate the effects of flaws under operational loads. (orig./CB)
Powder Injection Molding - An innovative manufacturing method for He-cooled DEMO divertor components
Antusch, Steffen; Norajitra, Prachai; Piotter, Volker; Ritzhaupt-Kleissl, Hans-Joachim; Spatafora, Luigi
2011-01-01
At Karlsruhe Institute of Technology (KIT), a He-cooled divertor design for future fusion power plants has been developed. This concept is based on the use of modular cooling fingers made from tungsten and tungsten alloy, which are presently considered the most promising divertor materials to withstand the specific heat load of 10 MW/m². Since a large number of finger modules (n > 250,000) are needed for the whole reactor, developing a mass-production-oriented manufacturing method is indispensable. In this regard, an innovative manufacturing technology, Powder Injection Molding (PIM), has been adapted to tungsten processing at KIT over the past several years. This production method is deemed promising in view of large-scale production of tungsten parts with high near-net-shape precision, hence offering a cost advantage over conventional machining. The complete technological PIM process for tungsten materials and its application to the manufacturing of real divertor components, including the design of a new PIM tool, is outlined, and results of the examination of the finished product after heat treatment are discussed. A binary tungsten powder feedstock with a solid load of 50 vol.% was developed and successfully tested in molding experiments. After design, simulation and manufacturing of a new PIM tool, real divertor parts were produced. After heat treatment (pre-sintering and HIP), the finished samples showed a sintered density of approximately 99%, a hardness of 457 HV0.1, a grain size of approximately 5 μm and a microstructure without cracks or porosity.
Bryan, B.J.; Flanders, H.E. Jr.
1992-01-01
Seismic qualification of Class I nuclear components is accomplished using a variety of analytical methods. This paper compares the results of time-history dynamic analyses of a heat exchanger support structure using response spectrum and time-history direct integration analysis methods. Dynamic analysis is performed on the detailed component models using the two methods. A nonlinear elastic model is used for both the response spectrum and direct integration methods. A nonlinear model, which includes friction and nonlinear springs, is analyzed using time-history input by direct integration. The loads from the three cases are compared.
D. Sarsri
2016-03-01
This paper presents a methodological approach to compute the stochastic eigenmodes of large FE models with parameter uncertainties, based on coupling the second-order perturbation method with component mode synthesis methods. Various component mode synthesis methods are used to optimally reduce the size of the model. The first two statistical moments of the dynamic response of the reduced system are obtained by the second-order perturbation method. Numerical results illustrating the accuracy and efficiency of the proposed coupled methodological procedures for large FE models with uncertain parameters are presented.
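The perturbation idea behind such methods can be illustrated on a toy problem. The matrices below are our own construction, not the paper's FE models: a first-order eigenvalue update under a small stiffness perturbation, which is the building block that second-order perturbation schemes extend.

```python
import numpy as np

# Toy sketch (our matrices, not the paper's models): first-order
# perturbation of eigenvalues of a stiffness matrix K under a small
# symmetric change dK, via dλ_i ≈ v_iᵀ dK v_i.
rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)            # nominal stiffness (SPD)
B = rng.standard_normal((n, n))
dK = 0.01 * (B @ B.T) / n              # small symmetric perturbation

lam, V = np.linalg.eigh(K)             # nominal eigenpairs, V orthonormal
lam_pred = lam + np.einsum('ji,jk,ki->i', V, dK, V)  # 1st-order update
lam_true = np.linalg.eigvalsh(K + dK)

err_first = np.max(np.abs(lam_pred - lam_true))
err_zeroth = np.max(np.abs(lam - lam_true))
print(err_first, err_zeroth)
```

For uncertain parameters one propagates such expansions to second order to obtain the mean and variance of the response; component mode synthesis enters by shrinking K before the expansion is applied.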
Effect of Different Tillage Methods and Cover Crop Types on Yield and Yield Components of Wheat
Z Sharefee
2018-05-01
Introduction: Conservation agriculture is an appropriate strategy for maintaining and improving agricultural resources which increases crop production and stability and also provides environmental protection. This attitude contributes to the conservation of natural resources (soil, water, and air) and is one of the most effective ways to overcome the drought crisis, manage water and compensate for soil organic matter in arid and semi-arid regions. The practice of zero tillage decreases the mineralization of organic matter and contributes to the sequestration of organic carbon in the soil. Higher amounts of organic matter in the soil improve soil structure and root growth, water infiltration and retention, and cation exchange capacity. In addition, zero tillage reduces soil compaction and crop production costs. Cover crops are cultivated to protect the soil from erosion and from nutrient loss by leaching or runoff, and also improve soil moisture and temperature. Given that South Khorasan farmers still use traditional methods of wheat cultivation, and cover crops have no place in their farming systems, the aim of this study was to investigate the effect of cover crop types and tillage systems on yield and yield components of wheat in the Birjand region. Materials and Methods: A split-plot field experiment was conducted based on a randomized complete block design with three replications at the Research Farm of the University of Birjand over the growing season of 2014-2015. The main factor was the type of tillage (no-till, reduced tillage and conventional tillage), and cover crop type (chickling pea (Lathyrus sativus), rocket salad (Eruca sativa), triticale (X Triticosecale Wittmack), barley (Hordeum vulgare) and control (no cover crop)) was considered as the subplot factor. Cover crops were planted in July 2014. Before planting wheat, cover crops were desiccated by spraying paraquat herbicide using a backpack sprayer at a rate of 3 L ha-1. Then the three tillage
Spot Variance Path Estimation and its Application to High Frequency Jump Testing
Bos, C.S.; Janus, P.; Koopman, S.J.
2012-01-01
This paper considers spot variance path estimation from datasets of intraday high-frequency asset prices in the presence of diurnal variance patterns, jumps, leverage effects, and microstructure noise. We rely on parametric and nonparametric methods. The estimated spot variance path can be used to
Influence of Family Structure on Variance Decomposition
Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter
Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained ge...... capturing pure noise. Therefore it is necessary to use both criteria, high likelihood ratio in favor of a more complex genetic model and proportion of genetic variance explained, to identify biologically important gene groups...
Efficient Cardinality/Mean-Variance Portfolios
Brito, R. Pedro; Vicente, Luís Nunes
2014-01-01
We propose a novel approach to handle cardinality in portfolio selection, by means of a biobjective cardinality/mean-variance problem, allowing the investor to analyze the efficient tradeoff between return-risk and number of active positions. Recent progress in multiobjective optimization without derivatives allows us to robustly compute (in-sample) the whole cardinality/mean-variance efficient frontier, for a variety of data sets and mean-variance models. Our results s...
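For context, the full-cardinality end of the tradeoff studied above has a closed form. This is a textbook background sketch with a toy covariance matrix of our choosing, not the authors' biobjective algorithm: the fully invested minimum-variance portfolio w = Σ⁻¹1 / (1ᵀΣ⁻¹1).

```python
import numpy as np

# Background sketch (toy covariance, not the paper's data): the
# unconstrained minimum-variance portfolio, i.e. all positions active.
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])   # asset return covariance
ones = np.ones(3)
w = np.linalg.solve(sigma, ones)        # Σ⁻¹ 1
w /= w @ ones                           # normalize: weights sum to 1
risk = float(np.sqrt(w @ sigma @ w))    # portfolio volatility
print(w.round(3), round(risk, 3))
```

Imposing a cardinality bound (few active positions) breaks this closed form and makes the problem combinatorial, which is why the paper resorts to derivative-free multiobjective optimization to trace the frontier.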
Study by the disco method of critical components of a P.W.R. normal feedwater system
Duchemin, B.; Villeneuve, M.J. de; Vallette, F.; Bruna, J.G.
1983-03-01
The objective of the DISCO (Determination of Importance Sensitivity of COmponents) method is to rank the components of a system so as to identify the most important ones with respect to availability. The method uses the fault tree description of the system and the cut-set technique, ranking the components by ordering the importances attributed to each one. The DISCO method was applied to the study of the 900 MWe P.W.R. normal feedwater system with insufficient flow in the steam generator. In order to take operating experience into account, several data banks were used and the results compared. This study identified the most critical component (the turbo-pumps) and allowed modifications of the system to be proposed and quantified in order to improve its availability.
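The cut-set ranking idea can be sketched on a toy system. The component names, failure probabilities, and cut sets below are our own invented example, not the feedwater-system model: system unavailability via the rare-event approximation over minimal cut sets, and component ranking by Birnbaum importance.

```python
# Toy illustration (invented components, not the P.W.R. model):
# Q_sys ≈ sum over minimal cut sets of the product of component
# unavailabilities; Birnbaum importance = dQ_sys/dq_i.
q = {"turbopump": 1e-2, "valve": 1e-3, "sensor": 5e-3, "backup_pump": 1e-2}
cut_sets = [{"turbopump"}, {"valve", "sensor"}, {"backup_pump", "sensor"}]

def q_sys(probs):
    total = 0.0
    for cs in cut_sets:
        p = 1.0
        for comp in cs:
            p *= probs[comp]
        total += p
    return total

def birnbaum(comp):
    # Q(system | comp failed) - Q(system | comp working)
    return q_sys({**q, comp: 1.0}) - q_sys({**q, comp: 0.0})

ranking = sorted(q, key=birnbaum, reverse=True)
print(ranking)
```

A single-component cut set (here the turbopump) dominates the ranking, mirroring the study's finding that the turbo-pumps were the most critical component.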
Multiscale principal component analysis
Akinduko, A A; Gorban, A N
2014-01-01
Principal component analysis (PCA) is an important tool in exploring data. The conventional approach to PCA leads to a solution which favours the structures with large variances. This is sensitive to outliers and could obfuscate interesting underlying structures. One of the equivalent definitions of PCA is that it seeks the subspaces that maximize the sum of squared pairwise distances between data projections. This definition opens up more flexibility in the analysis of principal components which is useful in enhancing PCA. In this paper we introduce scales into PCA by maximizing only the sum of pairwise distances between projections for pairs of datapoints with distances within a chosen interval of values [l,u]. The resulting principal component decompositions in Multiscale PCA depend on point (l,u) on the plane and for each point we define projectors onto principal components. Cluster analysis of these projectors reveals the structures in the data at various scales. Each structure is described by the eigenvectors at the medoid point of the cluster which represent the structure. We also use the distortion of projections as a criterion for choosing an appropriate scale especially for data with outliers. This method was tested on both artificial distribution of data and real data. For data with multiscale structures, the method was able to reveal the different structures of the data and also to reduce the effect of outliers in the principal component analysis
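The pairwise-distance definition of PCA invoked above can be checked numerically. The data below are simulated for illustration (not from the paper): among unit directions, the first principal direction maximizes the sum of squared pairwise distances between the one-dimensional projections.

```python
import numpy as np

# Numerical check of the pairwise-distance view of PCA (simulated data).
rng = np.random.default_rng(42)
X = rng.standard_normal((200, 3)) @ np.diag([3.0, 1.0, 0.5])
X -= X.mean(axis=0)

def pairwise_ssq(z):
    # identity: sum_{i,j} (z_i - z_j)^2 = 2 n sum_i (z_i - mean)^2
    return 2 * len(z) * np.sum((z - z.mean()) ** 2)

_, _, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = Vt[0]                          # first principal direction
rand = rng.standard_normal(3)
rand /= np.linalg.norm(rand)         # a random unit direction

print(pairwise_ssq(X @ pc1) > pairwise_ssq(X @ rand))  # True
```

Multiscale PCA restricts that sum to pairs whose distances fall in a chosen interval [l,u], which is what makes the decomposition scale-dependent.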
No-migration variance petition
1990-03-01
Volume IV contains the following attachments: TRU mixed waste characterization database; hazardous constituents of Rocky Flats transuranic waste; summary of waste components in TRU waste sampling program at INEL; total volatile organic compound (VOC) analyses at Rocky Flats Plant; total metals analyses from Rocky Flats Plant; results of toxicity characteristic leaching procedure (TCLP) analyses; results of extraction procedure (EP) toxicity data analyses; summary of headspace gas analysis in Rocky Flats Plant (RFP) sampling program FY 1988; waste drum gas generation sampling program at Rocky Flats Plant during FY 1988; TRU waste sampling program, volume one; TRU waste sampling program, volume two; summary of headspace gas analyses in TRU waste sampling program; and summary of volatile organic compound (VOC) analyses in TRU waste sampling program
Genetic Variance in Homophobia: Evidence from Self- and Peer Reports.
Zapko-Willmes, Alexandra; Kandler, Christian
2018-01-01
The present twin study combined self- and peer assessments of twins' general homophobia targeting gay men in order to replicate previous behavior genetic findings across different rater perspectives and to disentangle self-rater-specific variance from common variance in self- and peer-reported homophobia (i.e., rater-consistent variance). We hypothesized rater-consistent variance in homophobia to be attributable to genetic and nonshared environmental effects, and self-rater-specific variance to be partially accounted for by genetic influences. A sample of 869 twins and 1329 peer raters completed a seven item scale containing cognitive, affective, and discriminatory homophobic tendencies. After correction for age and sex differences, we found most of the genetic contributions (62%) and significant nonshared environmental contributions (16%) to individual differences in self-reports on homophobia to be also reflected in peer-reported homophobia. A significant genetic component, however, was self-report-specific (38%), suggesting that self-assessments alone produce inflated heritability estimates to some degree. Different explanations are discussed.
Noh, J. M.; Yoo, J. W.; Joo, H. K.
2004-01-01
In this study, we invented a method of component decomposition to derive the systematic inter-nodal coupled equations of the refined AFEN method, and developed an object-oriented nodal code to solve the derived coupled equations. The method of component decomposition decomposes the intra-nodal flux expansion of a nodal method into even and odd components in three dimensions, reducing the large coupled linear system into several small single equations. This method requires no additional technique to accelerate the iteration process for solving the inter-nodal coupled equations, since the derived equations can automatically act as the coarse-mesh rebalance equations. By utilizing object-oriented programming concepts such as abstraction, encapsulation, inheritance and polymorphism, together with dynamic memory allocation and operator overloading, we developed an object-oriented nodal code that facilitates input/output and dynamic control of memory, and eases maintenance. (authors)
The phenotypic variance gradient - a novel concept.
Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton
2014-11-01
Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely "a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added". This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a "phenotypic variance gradient", are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization.
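The proposed plot of log(variance) against log(mean) can be sketched on simulated data. The environmental means and constant-CV assumption below are our own illustration, not the paper's data; under a constant coefficient of variation, variance scales with the square of the mean, so the fitted slope is 2, a natural reference line for the gradient.

```python
import numpy as np

# Sketch of the proposed log-variance vs log-mean plot (simulated
# data, constant CV assumed), with the slope fitted by least squares.
rng = np.random.default_rng(7)
env_means = np.array([2.0, 4.0, 8.0, 16.0])   # environmental gradient
cv = 0.1                                      # constant coefficient of variation
samples = [rng.normal(m, cv * m, size=5000) for m in env_means]

log_mean = np.log([s.mean() for s in samples])
log_var = np.log([s.var(ddof=1) for s in samples])
slope, intercept = np.polyfit(log_mean, log_var, 1)
print(round(slope, 2))   # ≈ 2 under constant CV
```

Deviations of the fitted slope from the reference line are exactly the "phenotypic variance gradient" the authors propose to interpret: a slope above 2 indicates variance growing faster than the scaling effect of the mean alone would predict.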
Stahlberg, R.
1977-01-01
On the basis of fracture mechanics calculations and experimental investigations, it is shown how cracks of different shape and location behave under given static and cyclic loads. In particular, component safety with regard to spontaneous failure and crack growth behaviour in different components are discussed.
Continuous-Time Mean-Variance Portfolio Selection under the CEV Process
Ma, Hui-qiang
2014-01-01
We consider a continuous-time mean-variance portfolio selection model when stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance effici...
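For context, the CEV dynamics and the mean-variance criterion referred to above can be written out. These are standard textbook forms, not equations reproduced from the paper:

```latex
% CEV price dynamics: \beta = 1 recovers geometric Brownian motion.
dS_t = \mu S_t\,dt + \sigma S_t^{\beta}\,dW_t
% Mean-variance criterion over admissible strategies \pi,
% for a prescribed expected terminal wealth d:
\min_{\pi}\ \operatorname{Var}\!\left[X_T^{\pi}\right]
\quad\text{subject to}\quad \mathbb{E}\!\left[X_T^{\pi}\right] = d
```

The Lagrange multiplier method mentioned in the abstract turns the constrained problem into a family of unconstrained stochastic control problems, whose solutions trace out the efficient frontier as the multiplier varies.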
Baraich, A.A.K.; Gandahi, A.W.
2016-01-01
The sunflower (Helianthus annuus L.) has been recognized as a crop with high potential that can successfully meet future oil requirements of the country. Formulation of micronutrient (MN) based fertilizer, in terms of application rate and method, and uptake of MN by sunflower can not only ensure nutrient availability to plants, particularly in MN-limiting environments, but can also mitigate the environmental hazards associated with excessive inorganic fertilization. To support this view, clear experimental evidence is still lacking. The current experiments therefore aimed to evaluate the influence of MN and their method of application on yield and yield components of sunflower cultivars/hybrids. Three sunflower cultivars (HO-1, Hysun-39 and Ausigold-62) along with three MN (Zn, B and Fe) and two application methods (soil and foliar) were used in the experiment. Three Zn application rates (3, 5 and 8 kg ha-1) along with 0.75 kg ha-1 B and 0.30 kg ha-1 Fe were used in combinations of 0-0-0, 0-0.75-0.30, 3-0.75-0.30, 5-0.75-0.30 and 8-0.75-0.30 kg Zn, B and Fe ha-1, respectively. A control (no MN) treatment was also included for comparison. The two-year averaged study showed that foliar application of Zn, B and Fe at the rate of 8-0.75-0.30 kg ha-1 increased stem girth, head diameter, number of seeds head-1, seed weight head-1, seed index, oil content and seed yield by 21%, 27%, 13%, 34%, 19%, 24% and 31%, respectively, over the control. Among cultivars/hybrids, HO-1 and Hysun-39 had taller plants, greater seed weight head-1 and more seeds head-1, and were earlier in flowering and maturity. Flowering and maturity were delayed in Ausigold-62, which had higher seed index and oil content. It is concluded that foliar application of micronutrients at the rate of 8-0.75-0.30 kg ha-1 of Zn, B and Fe substantially improved yield and yield-related traits of the sunflower cultivars HO-1, Hysun-39 and Ausigold-62. (author)
Riedell, James A. (Inventor); Easler, Timothy E. (Inventor)
2009-01-01
A precursor of a ceramic adhesive suitable for use in a vacuum, thermal, and microgravity environment. The precursor of the ceramic adhesive includes a silicon-based, preceramic polymer and at least one ceramic powder selected from the group consisting of aluminum oxide, aluminum nitride, boron carbide, boron oxide, boron nitride, hafnium boride, hafnium carbide, hafnium oxide, lithium aluminate, molybdenum silicide, niobium carbide, niobium nitride, silicon boride, silicon carbide, silicon oxide, silicon nitride, tin oxide, tantalum boride, tantalum carbide, tantalum oxide, tantalum nitride, titanium boride, titanium carbide, titanium oxide, titanium nitride, yttrium oxide, zirconium diboride, zirconium carbide, zirconium oxide, and zirconium silicate. Methods of forming the ceramic adhesive and of repairing a substrate in a vacuum and microgravity environment are also disclosed, as is a substrate repaired with the ceramic adhesive.
R. Amini
2017-08-01
Introduction: Corn (Zea mays L.) is cultivated widely throughout the world and has the highest production among the cereals after rice and wheat. In Iran, the total production of corn in 2013 was more than 2,540,000 tons. Weeds are one of the greatest factors limiting corn yield in Iran; the average yield loss due to weeds in the fields of Kermanshah in 2009 was 17.32%. Herbicides are the main weed control method in conventional cropping systems, but their application has increased herbicide-resistant weeds and environmental pollution. Integrated weed management combines all applicable methods, both chemical and non-chemical, to reduce the effect of weeds in cropping systems. Thus, weed control strategies such as tillage, mulch, cover crops and intercropping could be used for integrated weed management of corn. Previous studies showed that crop residues such as rye (Secale cereale L.), wheat (Triticum aestivum L.), barley (Hordeum vulgare L.) and clover (Trifolium sp.), cover crops and living mulches could inhibit weed germination and growth. Therefore, the objective of this study was to evaluate the effects of some integrated weed management treatments on weed characteristics, yield components and grain yield of corn. Materials and methods: In order to evaluate the effect of some weed management treatments on corn (Zea mays L.) yield, an experiment was conducted in 2014 in Ravansar, Kermanshah, Iran. The study was arranged in a randomized complete block design with 10 treatments and three replications. The weed management treatments included: 1) chemical control followed by mechanical control (application of nicosulfuron at a dose of 80 g a.i. ha-1 + cultivator 40 days after emergence); 2) chemical control followed by mechanical control (application of 2,4-D+MCPA at a dose of 675 g a.i. ha-1 + cultivator 40 days after emergence); 3) cultural control followed by mechanical control (planting hairy vetch (Vicia villosa) in the fall
Frank M. You; Qijian Song; Gaofeng Jia; Yanzhao Cheng; Scott Duguid; Helen Booker; Sylvie Cloutier
2016-01-01
The type 2 modified augmented design (MAD2) is an efficient unreplicated experimental design used for evaluating large numbers of lines in plant breeding and for assessing genetic variation in a population. Statistical methods and data adjustment for soil heterogeneity have been previously described for this design. In the absence of replicated test genotypes in MAD2, their total variance cannot be partitioned into genetic and error components as required to estimate heritability and genetic ...
Starting design for use in variance exchange algorithms | Iwundu ...
A new method of constructing the initial design for use in variance exchange algorithms is presented. The method chooses support points to go into the design as measures of distances of the support points from the centre of the geometric region and of permutation-invariant sets. The initial design is as close as possible to ...
Ridluan, Artit; Tokuhiro, Akira; Manic, Milos; Patterson, Michael; Danchus, William
2009-01-01
In order to meet the global energy demand and also mitigate climate change, we anticipate a significant resurgence of nuclear power in the next 50 years. Globally, Generation III plants (ABWR) have been built; Gen III+ plants (EPR, AP1000 and others) are anticipated in the near term. The U.S. DOE and Japan are respectively pursuing the NGNP and MSFR. There is renewed interest in closing the fuel cycle and gradually introducing the fast reactor into the LWR-dominated global fleet. In order to meet Generation IV criteria, i.e. thermal efficiency, inherent safety, proliferation resistance and economic competitiveness, plant and energy conversion system engineering designs increasingly have to meet strict design criteria with reduced margins for reliable safety and uncertainties. Here, we considered a design optimization approach using an anticipated NGNP thermal system component as a Case Study. A systematic, efficient methodology is needed to reduce time-consuming trial-and-error and computationally-intensive analyses. We thus developed a design optimization method linking three elements; that is, benchmarked CFD used as a 'design tool', artificial neural networks (ANN) to accommodate non-linear system behavior and enhancement of the 'design space', and finally, response surface methodology (RSM) to optimize the design solution with targeted constraints. The paper presents the methodology including guiding principles, an integration of CFD into design theory and practice, consideration of system non-linearities (such as fluctuating operating conditions) and systematic enhancement of the design space via application of ANN, and a stochastic optimization approach (RSM) with targeted constraints. Results from a Case Study optimizing the printed circuit heat exchanger for the NGNP energy conversion system will be presented. (author)
Turbine component having surface cooling channels and method of forming same
Miranda, Carlos Miguel; Trimmer, Andrew Lee; Kottilingam, Srikanth Chandrudu
2017-09-05
A component for a turbine engine includes a substrate that includes a first surface, and an insert coupled to the substrate proximate the substrate first surface. The component also includes a channel. The channel is defined by a first channel wall formed in the substrate and a second channel wall formed by at least one coating disposed on the substrate first surface. The component further includes an inlet opening defined in flow communication with the channel. The inlet opening is defined by a first inlet wall formed in the substrate and a second inlet wall defined by the insert.
Turbomachine combustor nozzle including a monolithic nozzle component and method of forming the same
Stoia, Lucas John; Melton, Patrick Benedict; Johnson, Thomas Edward; Stevenson, Christian Xavier; Vanselow, John Drake; Westmoreland, James Harold
2016-02-23
A turbomachine combustor nozzle includes a monolithic nozzle component having a plate element and a plurality of nozzle elements. Each of the plurality of nozzle elements includes a first end extending from the plate element to a second end. The plate element and plurality of nozzle elements are formed as a unitary component. A plate member is joined with the nozzle component. The plate member includes an outer edge that defines first and second surfaces and a plurality of openings extending between the first and second surfaces. The plurality of openings are configured and disposed to register with and receive the second end of corresponding ones of the plurality of nozzle elements.
Bright, Molly G.; Murphy, Kevin
2015-01-01
Noise correction is a critical step towards accurate mapping of resting state BOLD fMRI connectivity. Noise sources related to head motion or physiology are typically modelled by nuisance regressors, and a generalised linear model is applied to regress out the associated signal variance. In this study, we use independent component analysis (ICA) to characterise the data variance typically discarded in this pre-processing stage in a cohort of 12 healthy volunteers. The signal variance removed ...
Elkhoudary, Mahmoud M; Naguib, Ibrahim A; Abdel Salam, Randa A; Hadad, Ghada M
2017-05-01
Four accurate, sensitive and reliable stability-indicating chemometric methods were developed for the quantitative determination of Agomelatine (AGM), whether in pure form or in pharmaceutical formulations. Two supervised learning machine methods, a linear artificial neural network preceded by principal component analysis (PC-linANN) and linear support vector regression (linSVR), were compared with two principal-component-based methods, principal component regression (PCR) and partial least squares (PLS), for the spectrofluorimetric determination of AGM and its degradants. The results showed the benefits of linear learning machine methods and the inherent merits of their algorithms in handling overlapped noisy spectral data, especially during the challenging determination of the AGM alkaline and acidic degradants (DG1 and DG2). The relative mean squared error of prediction (RMSEP) for the proposed models in the determination of AGM was 1.68, 1.72, 0.68 and 0.22 for PCR, PLS, SVR and PC-linANN, respectively. The results showed the superiority of supervised learning machine methods over principal-component-based methods and suggested that linANN is the method of choice for determination of components in low amounts with similar overlapped spectra and a narrow linearity range. Comparison between the proposed chemometric models and a reported HPLC method revealed the comparable performance and quantification power of the proposed models.
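Principal component regression, one of the component-based methods compared above, reduces overlapped spectra to a few orthogonal scores before regression. A minimal sketch on synthetic two-component spectra (the band positions, concentration ranges and noise level below are invented for illustration and are not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical overlapped spectra: 30 mixtures x 100 wavelengths,
# two components with broad, overlapping Gaussian bands plus noise.
wl = np.linspace(0, 1, 100)
band1 = np.exp(-((wl - 0.4) ** 2) / 0.01)
band2 = np.exp(-((wl - 0.5) ** 2) / 0.01)
conc = rng.uniform(0.1, 1.0, size=(30, 2))
spectra = conc @ np.vstack([band1, band2]) + 0.01 * rng.normal(size=(30, 100))

# Principal component regression: project onto the leading PCs of the
# mean-centred spectra, then fit OLS from scores to analyte-1 concentration.
Xc = spectra - spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                      # first two PC scores
A = np.column_stack([np.ones(30), scores])  # add intercept
coef, *_ = np.linalg.lstsq(A, conc[:, 0], rcond=None)
pred = A @ coef
rmsep = np.sqrt(np.mean((pred - conc[:, 0]) ** 2))
print(f"RMSEP on the synthetic data: {rmsep:.4f}")
```

With two true components, two PC scores span the signal subspace, so the prediction error is governed only by the spectral noise.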
Electronic spectra of DyF studied by four-component relativistic configuration interaction methods
Yamamoto, Shigeyoshi, E-mail: syamamot@lets.chukyo-u.ac.jp [School of International Liberal Studies, Chukyo University, 101-2 Yagoto-Honmachi, Showa-ku, Nagoya 466-8666 (Japan); Tatewaki, Hiroshi [Institute of Advanced Studies in Artificial Intelligence, Chukyo University, Toyota 470-0393 (Japan); Graduate School of Natural Sciences, Nagoya City University, Aichi 467-8501 (Japan)
2015-03-07
The electronic states of the DyF molecule below 3.0 eV are studied using 4-component relativistic CI methods. Spinors generated by the average-of-configuration Hartree-Fock method with the Dirac-Coulomb Hamiltonian were used in CI calculations by the KRCI (Kramers-restricted configuration interaction) program. The CI reference space was generated by distributing 11 electrons among the 11 Kramers pairs composed mainly of Dy [4f], [6s], [6p] atomic spinors, and double excitations are allowed from this space to the virtual molecular spinors. The CI calculations indicate that the ground state has the dominant configuration (4f^9)(6s^2) (Ω = 7.5). Above this ground state, 4 low-lying excited states (Ω = 8.5, 7.5, 7.5, 7.5) are found with dominant configurations (4f^10)(6s). These results are consistent with the experimental studies of McCarthy et al. Above these 5 states, 2 states were observed at T_0 = 2.39 eV, 2.52 eV by McCarthy et al. and were named as [19.3]8.5 and [20.3]8.5. McCarthy et al. proposed that both states have dominant configurations (4f^9)(6s)(6p), but these configurations are not consistent with the large R_e's (∼3.9 a.u.) estimated from the observed rotational constants. The present CI calculations provide near-degenerate states of (4f^10)(6p_{3/2,1/2}), (4f^10)(6p_{3/2,3/2}), and (4f^9)(6s)(6p_{3/2,1/2}) at around 3 eV. The former two states have larger R_e (3.88 a.u.) than the third, so that it is reasonable to assign (4f^10)(6p_{3/2,1/2}) to [19.3]8.5 and (4f^10)(6p_{3/2,3/2}) to [20.3]8.5.
Touma, M.; Rajab, A.; Seuleiman, M.
2007-01-01
A new chromatographic method was developed for the quantitative determination of Omeprazole in its pharmaceutical products. Omeprazole and its degradation components were well separated in the same chromatogram using high performance liquid chromatography (HPLC). The new analytical method was validated for accuracy, precision, range, linearity, specificity/selectivity, limit of detection (LOD) and limit of quantitation (LOQ). (author)
Touma, M.; Rajab, A.
2009-01-01
A new chromatographic method was developed for the quantitative determination of Lansoprazole in its pharmaceutical products. Lansoprazole and its degradation components were well separated in the same chromatogram using high performance liquid chromatography (HPLC). The new analytical method was validated for accuracy, precision, range, linearity, specificity/selectivity, limit of detection (LOD) and limit of quantitation (LOQ). (author)
Qiong Wu
2016-04-01
The deformation of aeronautical monolithic components due to CNC machining is a bottleneck issue in the aviation industry. Residual stress is released and redistributed during material removal, generating distortion of the monolithic component. The traditional one-side machining method produces excessive deformation. Based on the three-stage CNC machining method, a quasi-symmetric machining method is developed in this study to reduce deformation through symmetric material removal, exploiting the M-symmetry distribution law of residual stress. The mechanism of milling deformation due to residual stress is investigated. A deformation experiment was conducted using the traditional one-side machining method and the quasi-symmetric machining method for comparison with the finite element method (FEM). The deformation parameters are validated by the comparative results; most of the errors are within 10%, and the sources of these errors are identified to improve the reliability of the method. Moreover, the maximum deformation value using the quasi-symmetric machining method is within 20% of that using the traditional one-side machining method. This result shows that the quasi-symmetric machining method is effective in reducing deformation caused by residual stress. This research thus introduces an effective method for reducing the deformation of monolithic thin-walled components in the CNC milling process.
Expected Stock Returns and Variance Risk Premia
Bollerslev, Tim; Zhou, Hao
risk premium with the P/E ratio results in an R2 for the quarterly returns of more than twenty-five percent. The results depend crucially on the use of "model-free", as opposed to standard Black-Scholes, implied variances, and realized variances constructed from high-frequency intraday, as opposed...
Nonlinear Epigenetic Variance: Review and Simulations
Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.
2010-01-01
We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…
Enomoto, Kunio; Otaka, Masahiro; Kurosawa, Koichi; Saito, Hideyo; Tsujimura, Hiroshi; Tamai, Yasukata; Urashiro, Keiichi; Mochizuki, Masato
1996-09-03
The present invention is applied to a BWR type reactor, in which a high speed jetting flow incorporating cavities is collided against the surface of reactor structural components to form residual compression stresses on the surface layer of the reactor structural components thereby improving the stresses on the surface. Namely, a water jetting means is inserted into the reactor container filled with reactor water. Purified water is pressurized by a pump and introduced to the water jetting means. The purified water jetted from the water jetting means and entraining cavities is abutted against the surface of the reactor structural components. With such procedures, since the purified water is introduced to the water jetting means by the pump, the pump is free from contamination of radioactive materials. As a result, maintenance and inspection for the pump can be facilitated. Further, since the purified water injection flow entraining cavities is abutted against the surface of the reactor structural components being in contact with reactor water, residual compression stresses are exerted on the surface of the reactor structural components. As a result, occurrence of stress corrosion crackings of reactor structural components is suppressed. (I.S.)
Aoki, Takayuki; Takagi, Toshiyuki; Kodama, Noriko
2014-01-01
Safety risk importance of components in nuclear power plants has been evaluated on the basis of probabilistic risk assessment and used for decisions in various aspects of plant management, but the economic risk importance of the components has not been discussed very much. This paper therefore discusses the risk importance of components from the viewpoint of plant economic efficiency and proposes a simplified evaluation method for the economic risk importance (or economic maintenance importance). The following results were obtained. (1) The unit cost of power generation is selected as a performance indicator and can be related to the failure rate of components in a nuclear power plant, which is a result of maintenance. (2) The economic maintenance importance has two major factors, i.e., the repair cost at component failure and the production loss associated with plant outage due to component failure. (3) The developed method enables easy understanding of the economic impacts of plant shutdown or power reduction due to component failures on a plane with the repair cost on the vertical axis and the production loss on the horizontal axis. (author)
Wolfrum, E.A.; Nickel, H.
1977-01-01
The chemical behavior of the surface of pyrocarbon (PyC) coatings of nuclear fuel particles was investigated in aqueous suspension by reaction with oxygen at room temperature. The concentration of the disordered material component, which has a large internal surface, can be identified by means of a pH change. Using this fact, a chemical method was developed that can be used for the quantitative determination of the concentration of this carbon component in the PyC coating
Dominance genetic variance for traits under directional selection in Drosophila serrata.
Sztepanacz, Jacqueline L; Blows, Mark W
2015-05-01
In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Theoretical and empirical evidence suggests that the contribution of dominance variance to phenotypic variance may increase with the correlation between a trait and fitness; however, direct tests of this hypothesis are few. Using a multigenerational breeding design in an unmanipulated population of Drosophila serrata, we estimated additive and dominance genetic covariance matrices for multivariate wing-shape phenotypes, together with a comprehensive measure of fitness, to determine whether there is an association between directional selection and dominance variance. Fitness, a trait unequivocally under directional selection, had no detectable additive genetic variance, but significant dominance genetic variance contributing 32% of the phenotypic variance. For single and multivariate morphological traits, however, no relationship was observed between trait-fitness correlations and dominance variance. A similar proportion of additive and dominance variance was found to contribute to phenotypic variance for single traits, and double the amount of additive compared to dominance variance was found for the multivariate trait combination under directional selection. These data suggest that for many fitness components a positive association between directional selection and dominance genetic variance may not be expected. Copyright © 2015 by the Genetics Society of America.
Mispagel, T.; Phlippen, P.W.; Rose, J. [Wissenschaftlich-Technische Ingenieurberatung GmbH (WTI), Juelich (Germany)
2013-07-01
During nuclear power plant operation, components and materials are exposed to the neutron flux from the reactor core and radionuclides are produced. After removal of the fuel elements, the radioactivity of these radionuclides in the reactor pressure vessel and the core internals provides more than 99% of the activity of the power plant. For the transport, interim storage and final disposal of these radioactive components, the radioactive inventories have to be declared with respect to radiation and nuclides. The declaration of the nuclide and activity inventories requires a reliable calculation of the neutron-induced activation of reactor components. These activation calculations describe the build-up of nuclides due to irradiation and the decay of nuclides. For optimum usage of the activity capacities of the Konrad repository it is necessary to have a qualified calculation procedure that keeps the conservatism as low as possible.
Zhang, Baixia; He, Shuaibing; Lv, Chenyang; Zhang, Yanling; Wang, Yun
2018-01-01
The identification of bioactive components in traditional Chinese medicine (TCM) is an important part of the TCM material foundation research. Recently, molecular docking technology has been extensively used for the identification of TCM bioactive components. However, target proteins that are used in molecular docking may not be the actual TCM target. For this reason, the bioactive components would likely be omitted or incorrect. To address this problem, this study proposed the GEPSI method that identified the target proteins of TCM based on the similarity of gene expression profiles. The similarity of the gene expression profiles affected by TCM and small molecular drugs was calculated. The pharmacological action of TCM may be similar to that of small molecule drugs that have a high similarity score. Indeed, the target proteins of the small molecule drugs could be considered TCM targets. Thus, we identified the bioactive components of a TCM by molecular docking and verified the reliability of this method by a literature investigation. Using the target proteins that TCM actually affected as targets, the identification of the bioactive components was more accurate. This study provides a fast and effective method for the identification of TCM bioactive components.
Conte, Elio; Khrennikov, Andrei; Federici, Antonio; Zbilut, Joseph P.
2009-01-01
We develop a new method for analysis of fundamental brain waves as recorded by the EEG. To this purpose we introduce a Fractal Variance Function that is based on the calculation of the variogram. The method is completed by using Random Matrix Theory. Some examples are given. We also discuss the link of such formulation with H. Weiss and V. Weiss golden ratio found in the brain, and with El Naschie fractal Cantorian space-time theory.
Spatial analysis based on variance of moving window averages
Wu, B M; Subbarao, K V; Ferrandino, F J; Hao, J J
2006-01-01
A new method for analysing spatial patterns was designed based on the variance of moving window averages (VMWA), which can be directly calculated in geographical information systems or a spreadsheet program (e.g. MS Excel). Different types of artificial data were generated to test the method. Regardless of data types, the VMWA method correctly determined the mean cluster sizes. This method was also employed to assess spatial patterns in historical plant disease survey data encompassing both a...
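As the abstract notes, the VMWA statistic is simple enough for a spreadsheet. A minimal Python sketch of the idea (the sliding-window handling and the example data below are assumptions for illustration, not the paper's exact procedure):

```python
import numpy as np

def vmwa(transect, window):
    """Variance of moving-window averages along a 1-D transect.

    A sliding window of length `window` is averaged at every position
    and the sample variance of those averages is returned.
    """
    transect = np.asarray(transect, dtype=float)
    means = np.convolve(transect, np.ones(window) / window, mode="valid")
    return means.var(ddof=1)

# Clustered incidence keeps window-average variance high; a regularly
# alternating pattern averages out, so its VMWA collapses toward zero.
clustered = [1, 1, 1, 1, 0, 0, 0, 0] * 2
alternating = [1, 0] * 8
print(vmwa(clustered, 2), vmwa(alternating, 2))
```

Scanning `window` over a range of sizes and looking for where the variance drops is the kind of calculation the method uses to indicate cluster size.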
Triple-component nanocomposite films prepared using a casting method: Its potential in drug delivery
Sadia Gilani
2018-04-01
The purpose of this study was to fabricate a triple-component nanocomposite system consisting of chitosan, polyethylene glycol (PEG) and drug, to assess the application of chitosan–PEG nanocomposites in drug delivery and the effect of different molecular weights of PEG on nanocomposite characteristics. The casting/solvent evaporation method was used to prepare chitosan–PEG nanocomposite films incorporating piroxicam-β-cyclodextrin. The morphology and structure of the nanocomposites were characterized by X-ray diffraction, scanning electron microscopy, thermogravimetric analysis and Fourier transform infrared spectroscopy. A drug content uniformity test, swelling studies, water content, erosion studies, dissolution studies and anti-inflammatory activity were also performed. Permeation studies across rat skin were performed on the nanocomposite films using a Franz diffusion cell. The release behavior of the films was found to be sensitive to the pH and ionic strength of the release medium. The maximum swelling ratio and water content were found in HCl buffer pH 1.2 as compared to acetate buffer pH 4.5 and phosphate buffer pH 7.4. The release rate constants obtained from kinetic modeling and the flux values of the ex vivo permeation studies showed that the release of piroxicam-β-cyclodextrin increased with an increase in the concentration of PEG. The formulation F10 containing 75% PEG showed the highest swelling ratio (3.42±0.02) in HCl buffer pH 1.2, water content (47.89±1.53%) in HCl buffer pH 1.2, maximum cumulative drug permeation through rat skin (2405.15±10.97 μg/cm2) in phosphate buffer pH 7.4 and in vitro drug release (35.51±0.26%) in sequential pH-change media, and showed a significantly (p<0.0001) higher anti-inflammatory effect (0.4 cm). It can be concluded from the results that film composition had a particular impact on drug release properties. The different molecular weights of PEG have a
Determination of power system component parameters using nonlinear dead beat estimation method
Kolluru, Lakshmi
Power systems are considered the most complex man-made wonders in existence today. In order to effectively supply the ever increasing demands of the consumers, power systems are required to remain stable at all times. Stability and monitoring of these complex systems are achieved by strategically placed computerized control centers. State and parameter estimation is an integral part of these facilities, as they deal with identifying the unknown states and/or parameters of the systems. Advancements in measurement technologies and the introduction of phasor measurement units (PMU) provide detailed and dynamic information of all measurements. Accurate availability of dynamic measurements provides engineers the opportunity to expand and explore various possibilities in power system dynamic analysis/control. This thesis discusses the development of a parameter determination algorithm for nonlinear power systems, using dynamic data obtained from local measurements. The proposed algorithm was developed by observing the dead beat estimator used in state space estimation of linear systems. The dead beat estimator is considered to be very effective as it is capable of obtaining the required results in a fixed number of steps. The number of steps required is related to the order of the system and the number of parameters to be estimated. The proposed algorithm uses the idea of dead beat estimator and nonlinear finite difference methods to create an algorithm which is user friendly and can determine the parameters fairly accurately and effectively. The proposed algorithm is based on a deterministic approach, which uses dynamic data and mathematical models of power system components to determine the unknown parameters. The effectiveness of the algorithm is tested by implementing it to identify the unknown parameters of a synchronous machine. MATLAB environment is used to create three test cases for dynamic analysis of the system with assumed known parameters. Faults are
A pattern recognition approach to transistor array parameter variance
da F. Costa, Luciano; Silva, Filipi N.; Comin, Cesar H.
2018-06-01
The properties of semiconductor devices, including bipolar junction transistors (BJTs), are known to vary substantially in terms of their parameters. In this work, an experimental approach including pattern recognition concepts and methods such as principal component analysis (PCA) and linear discriminant analysis (LDA) was used to investigate the variation among BJTs belonging to integrated circuits known as transistor arrays. It was shown that a good deal of the devices' variance can be captured using only two PCA axes. It was also verified that, though substantially smaller variation of parameters is observed for BJTs from the same array, larger variation arises between BJTs from distinct arrays, suggesting the consideration of device characteristics in more critical analog designs. As a consequence of its supervised nature, LDA was able to provide a substantial separation of the BJTs into clusters corresponding to each transistor array. In addition, the LDA mapping into two dimensions revealed a clear relationship between the considered measurements. Interestingly, a specific mapping suggested by the PCA, involving the total harmonic distortion variation expressed in terms of the average voltage gain, yielded an even better separation between the transistor array clusters. All in all, this work yielded interesting results from both semiconductor engineering and pattern recognition perspectives.
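The claim that two PCA axes capture most of the device variance is easy to reproduce on synthetic data. The sketch below assumes hypothetical parameter measurements driven by two latent factors; nothing here comes from the paper's actual dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical measurements: rows are BJTs, columns are parameters
# (e.g. current gain, saturation current, THD), driven by two factors.
latent = rng.normal(size=(60, 2))                      # two underlying factors
mixing = rng.normal(size=(2, 6))
X = latent @ mixing + 0.05 * rng.normal(size=(60, 6))  # small measurement noise

# PCA via SVD of the mean-centred data matrix.
Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)   # fraction of variance per component

# With two dominant factors, the first two PCs capture most variance.
print(f"variance in first two PCs: {explained[:2].sum():.2%}")
```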
Optimized Kernel Entropy Components.
Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau
2017-06-01
This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both the methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both the methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
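The entropy ranking that distinguishes KECA from kernel PCA can be sketched directly: each eigenpair (lam_i, e_i) of the Gaussian kernel matrix contributes lam_i * (1^T e_i)^2 to the Rényi entropy (information potential) estimate, and components are sorted by that score rather than by eigenvalue. A minimal illustration (the data and length-scale are arbitrary; the OKECA rotation and gradient-ascent search are not reproduced here):

```python
import numpy as np

def keca_scores(X, sigma):
    """Rank Gaussian-kernel eigenpairs by their entropy contribution."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))          # Gaussian (RBF) kernel matrix
    lam, E = np.linalg.eigh(K)                # eigenvalues in ascending order
    scores = lam * (E.sum(axis=0) ** 2)       # lam_i * (1^T e_i)^2 per pair
    order = np.argsort(scores)[::-1]          # entropy ranking, largest first
    return order, scores

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
order, scores = keca_scores(X, sigma=1.0)
# The entropy ranking need not coincide with the eigenvalue (variance)
# ranking used by kernel PCA - that difference is the point of KECA.
```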
The mean and variance of phylogenetic diversity under rarefaction.
Nipperess, David A; Matsen, Frederick A
2013-06-01
Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time, but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing the exact solution for the mean and variance to that calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating the mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth are required.
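The comparison of exact rarefaction formulae against Monte Carlo subsampling that this abstract describes can be illustrated for the simpler case of species richness (not the authors' PD formulae): Hurlbert's exact expectation for richness in a subsample of size m is checked against repeated random draws. The stem counts below are hypothetical; this is a sketch of the methodology, not the paper's implementation.

```python
import math
import random

def exact_mean_richness(counts, m):
    # Hurlbert's exact expected species richness under rarefaction to m
    # individuals: E[S_m] = sum_i [1 - C(N - N_i, m) / C(N, m)]
    N = sum(counts)
    return sum(1 - math.comb(N - n, m) / math.comb(N, m) for n in counts)

def monte_carlo_richness(counts, m, draws=20000, seed=1):
    # Repeated random subsampling: draw m individuals without replacement,
    # record the richness of each draw, return its sample mean and variance.
    rng = random.Random(seed)
    pool = [sp for sp, n in enumerate(counts) for _ in range(n)]
    vals = []
    for _ in range(draws):
        vals.append(len(set(rng.sample(pool, m))))
    mean = sum(vals) / draws
    var = sum((v - mean) ** 2 for v in vals) / (draws - 1)
    return mean, var

counts = [10, 5, 2, 1]   # hypothetical stem counts for four species
m = 6                    # rarefaction depth
exact = exact_mean_richness(counts, m)
mc_mean, mc_var = monte_carlo_richness(counts, m)
```

As the abstract notes for PD, the Monte Carlo mean converges quickly to the exact value, while the variance estimate needs many draws; the exact formula avoids that cost entirely.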
Lotfy, Hayam M; Saleh, Sarah S; Hassan, Nagiba Y; Salem, Hesham
2015-01-01
Novel spectrophotometric methods were applied for the determination of the minor component tetryzoline HCl (TZH) in its ternary mixture with ofloxacin (OFX) and prednisolone acetate (PA) in the ratio of (1:5:7.5), and in its binary mixture with sodium cromoglicate (SCG) in the ratio of (1:80). The novel spectrophotometric methods determined the minor component (TZH) successfully in the two selected mixtures by computing the geometrical relationship of either standard addition or subtraction. The novel spectrophotometric methods are: geometrical amplitude modulation (GAM), geometrical induced amplitude modulation (GIAM), ratio H-point standard addition method (RHPSAM) and compensated area under the curve (CAUC). The proposed methods were successfully applied for the determination of the minor component TZH below its concentration range. The methods were validated as per ICH guidelines, where accuracy, repeatability, inter-day precision and robustness were found to be within the acceptable limits. The results obtained from the proposed methods were statistically compared with those of the official methods, and no significant difference was observed. No difference was observed when the results were compared to the reported HPLC method, which proved that the developed methods could be an alternative to HPLC techniques in quality control laboratories.
Regression to fuzziness method for estimation of remaining useful life in power plant components
Alamaniotis, Miltiadis; Grelle, Austin; Tsoukalas, Lefteri H.
2014-10-01
Mitigation of severe accidents in power plants requires the reliable operation of all systems and the on-time replacement of mechanical components. Therefore, the continuous surveillance of power systems is a crucial concern for the overall safety, cost control, and on-time maintenance of a power plant. In this paper a methodology called regression to fuzziness is presented that estimates the remaining useful life (RUL) of power plant components. The RUL is defined as the difference between the time that a measurement was taken and the estimated failure time of that component. The methodology aims to compensate for a potential lack of historical data by modeling an expert's operational experience and expertise applied to the system. It initially identifies critical degradation parameters and their associated value range. Once completed, the operator's experience is modeled through fuzzy sets which span the entire parameter range. This model is then synergistically used with linear regression and a component's failure point to estimate the RUL. The proposed methodology is tested on estimating the RUL of a turbine (the basic electrical generating component of a power plant) in three different cases. Results demonstrate the benefits of the methodology for components for which operational data is not readily available and emphasize the significance of the selection of fuzzy sets and the effect of knowledge representation on the predicted output. To verify the effectiveness of the methodology, it was benchmarked against the data-based simple linear regression model used for predictions, which was shown to perform equally to or worse than the presented methodology. Furthermore, the methodology comparison highlighted the improvement in estimation offered by the adoption of appropriate fuzzy sets for parameter representation.
Teaching Principal Components Using Correlations.
Westfall, Peter H; Arias, Andrea L; Fulton, Lawrence V
2017-01-01
Introducing principal components (PCs) to students is difficult. First, the matrix algebra and mathematical maximization lemmas are daunting, especially for students in the social and behavioral sciences. Second, the standard motivation involving variance maximization subject to unit length constraint does not directly connect to the "variance explained" interpretation. Third, the unit length and uncorrelatedness constraints of the standard motivation do not allow re-scaling or oblique rotations, which are common in practice. Instead, we propose to motivate the subject in terms of optimizing (weighted) average proportions of variance explained in the original variables; this approach may be more intuitive, and hence easier to understand because it links directly to the familiar "R-squared" statistic. It also removes the need for unit length and uncorrelatedness constraints, provides a direct interpretation of "variance explained," and provides a direct answer to the question of whether to use covariance-based or correlation-based PCs. Furthermore, the presentation can be made without matrix algebra or optimization proofs. Modern tools from data science, including heat maps and text mining, provide further help in the interpretation and application of PCs; examples are given. Together, these techniques may be used to revise currently used methods for teaching and learning PCs in the behavioral sciences.
Review on characterization methods applied to HTR-fuel element components
Koizlik, K.
1976-02-01
One of the difficulties which, while of no special scientific interest, poses a lot of technical problems for the development and production of HTR fuel elements is the proper characterization of the element and its components. Consequently, a lot of work has been done during the past years to develop characterization procedures for the fuel, the fuel kernel, the pyrocarbon for the coatings, the matrix and graphite, and their components binder and filler. This paper tries to give a status report on characterization procedures which are applied to HTR fuel in KFA and cooperating institutions.
Portfolio optimization using median-variance approach
Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli
2013-04-01
Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, and this is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve the portfolio optimization. This approach caters for both normal and non-normal data distributions. With this more faithful representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable of producing a lower risk for each return earned as compared to the mean-variance approach.
Variance in parametric images: direct estimation from parametric projections
Maguire, R.P.; Leenders, K.L.; Spyrou, N.M.
2000-01-01
Recent work has shown that it is possible to apply linear kinetic models to dynamic projection data in PET in order to calculate parameter projections. These can subsequently be back-projected to form parametric images - maps of parameters of physiological interest. Critical to the application of these maps, to test for significant changes between normal and pathophysiology, is an assessment of the statistical uncertainty. In this context, parametric images also include simple integral images from, e.g., [O-15]-water used to calculate statistical parametric maps (SPMs). This paper revisits the concept of parameter projections and presents a more general formulation of the parameter projection derivation as well as a method to estimate parameter variance in projection space, showing which analysis methods (models) can be used. Using simulated pharmacokinetic image data we show that a method based on an analysis in projection space inherently calculates the mathematically rigorous pixel variance. This results in an estimation which is as accurate as either estimating variance in image space during model fitting, or estimation by comparison across sets of parametric images - as might be done between individuals in a group pharmacokinetic PET study. The method based on projections has, however, a higher computational efficiency, and is also shown to be more precise, as reflected in smooth variance distribution images when compared to the other methods. (author)
One-component plasma dynamical structure factor and the plasma dispersion: Method of moments
Adamjan, S.V.; Tkachenko, I.M.; Meyer, T.
1989-01-01
The molecular dynamics data of Hansen, McDonald and Pollock on the dynamical properties of the classical one-component plasma (OCP) are compared with the results based on an approximation formula for the dielectric function satisfying all known sum rules and exact relations using HNC plasma static properties. (author)
Method for selecting recurrent controls of the tubes and components of the process systems
Oehlin, L.
1987-01-01
The existing rules and recommendations for the inspection of the components of nuclear power plants are considered inadequate. Therefore some new directions have been worked out for suitable distribution of controlling actions. The new concept will cover the probability of fractures and the consequences of accidents. Control procedures will stress safety aspects in particular. (G.B)
Aaron Weiskittel; Jereme Frank; David Walker; Phil Radtke; David Macfarlane; James Westfall
2015-01-01
Prediction of forest biomass and carbon is becoming an important issue in the United States. However, estimating forest biomass and carbon is difficult and relies on empirically-derived regression equations. Based on recent findings from a national gap analysis and comprehensive assessment of the USDA Forest Service Forest Inventory and Analysis (USFS-FIA) component...
Mulyana, Cukup; Muhammad, Fajar; Saad, Aswad H.; Mariah, Riveli, Nowo
2017-03-01
The storage tank is the most critical component in an LNG regasification terminal. It carries a risk of failure and accidents that impact human health and the environment. Risk assessment is conducted to detect and reduce the risk of failure in the storage tank. The aim of this research is to determine and calculate the probability of failure in a regasification unit of LNG. In this case, the failure is caused by Boiling Liquid Expanding Vapor Explosion (BLEVE) and jet fire in the LNG storage tank component. The failure probability can be determined by using Fault Tree Analysis (FTA). Besides that, the impact of the heat radiation generated is calculated. Fault trees for BLEVE and jet fire on the storage tank component have been determined, giving a failure probability of 5.63 × 10^-19 for BLEVE and 9.57 × 10^-3 for jet fire. The failure probability for jet fire is high enough that it needs to be reduced by customizing the PID scheme of the LNG regasification unit in pipeline number 1312 and unit 1. After customization, the failure probability was reduced to 4.22 × 10^-6.
Yunfeng Dong
2017-01-01
The weighted sum and genetic algorithm-based hybrid method (WSGA-based HM), which has been applied to multiobjective orbit optimizations, is negatively influenced by human factors, through the artificial choice of the weight coefficients in the weighted sum method, and by the slow convergence of GA. To address these two problems, a cluster and principal component analysis-based optimization method (CPC-based OM) is proposed, in which many candidate orbits are gradually randomly generated until the optimal orbit is obtained using a data mining method, that is, cluster analysis based on principal components. Then, a second cluster analysis of the orbital elements is introduced into CPC-based OM to improve the convergence, developing a novel double cluster and principal component analysis-based optimization method (DCPC-based OM). In DCPC-based OM, the cluster analysis based on principal components has the advantage of reducing the human influences, and the cluster analysis based on six orbital elements can reduce the search space to effectively accelerate convergence. The test results from a multiobjective numerical benchmark function and the orbit design results of an Earth observation satellite show that DCPC-based OM converges more efficiently than WSGA-based HM. And DCPC-based OM, to some degree, reduces the influence of human factors present in WSGA-based HM.
T.C.C. Bittencourt
2002-06-01
Data from the Genetic Improvement Program of the Nellore Breed of the Genetic Department-USP were used to estimate genetic parameters and breeding values for weights at 365 (P365) and 455 (P455) days of age. Four animal models were used to obtain REML estimates of genetic parameters, aiming to evaluate the effect of the inclusion of a random maternal genetic effect and a permanent environmental effect on variance component estimates. Model 1 included genetic and residual random effects; models 2 and 3 were based on model 1 but included permanent environmental (2) and maternal genetic (3) effects; model 4 included genetic, maternal and permanent environmental effects. The heritability estimates for P365 were 0.48, 0.32, 0.28 and 0.27 using models 1, 2, 3 and 4, respectively. For P455, the values were 0.48, 0.38, 0.35 and 0.34 with the same models. The comparison between the models indicated that maternal effects were not important for the variation of P455, but may have some importance for weight at 365 days of age.
Variance estimation for sensitivity analysis of poverty and inequality measures
Christian Dudel
2017-04-01
Estimates of poverty and inequality are often based on application of a single equivalence scale, despite the fact that a large number of different equivalence scales can be found in the literature. This paper describes a framework for sensitivity analysis which can be used to account for the variability of equivalence scales and makes it possible to derive variance estimates of the results of sensitivity analysis. Simulations show that this method yields reliable estimates. An empirical application reveals that accounting for both the variability of equivalence scales and sampling variance leads to confidence intervals which are wide.
Genung, Mark A; Fox, Jeremy; Williams, Neal M; Kremen, Claire; Ascher, John; Gibbs, Jason; Winfree, Rachael
2017-07-01
The relationship between biodiversity and the stability of ecosystem function is a fundamental question in community ecology, and hundreds of experiments have shown a positive relationship between species richness and the stability of ecosystem function. However, these experiments have rarely accounted for common ecological patterns, most notably skewed species abundance distributions and non-random extinction risks, making it difficult to know whether experimental results can be scaled up to larger, less manipulated systems. In contrast with the prolific body of experimental research, few studies have examined how species richness affects the stability of ecosystem services at more realistic, landscape scales. The paucity of these studies is due in part to a lack of analytical methods that are suitable for the correlative structure of ecological data. A recently developed method, based on the Price equation from evolutionary biology, helps resolve this knowledge gap by partitioning the effect of biodiversity into three components: richness, composition, and abundance. Here, we build on previous work and present the first derivation of the Price equation suitable for analyzing temporal variance of ecosystem services. We applied our new derivation to understand the temporal variance of crop pollination services in two study systems (watermelon and blueberry) in the mid-Atlantic United States. In both systems, but especially in the watermelon system, the stronger driver of temporal variance of ecosystem services was fluctuations in the abundance of common bee species, which were present at nearly all sites regardless of species richness. In contrast, temporal variance of ecosystem services was less affected by differences in species richness, because lost and gained species were rare. Thus, the findings from our more realistic landscapes differ qualitatively from the findings of biodiversity-stability experiments. © 2017 by the Ecological Society of America.
The VIX, the Variance Premium, and Expected Returns
Osterrieder, Daniela Maria; Ventosa-Santaulària, Daniel; Vera-Valdés, Eduardo
2018-01-01
. These problems are eliminated if risk is captured by the variance premium (VP) instead; it is unobservable, however. We propose a 2SLS estimator that produces consistent estimates without observing the VP. Using this method, we find a positive risk–return trade-off and long-run return predictability. Our...
Jinlu Sheng
2016-07-01
To effectively extract the typical features of a bearing, a new method was proposed that relates local mean decomposition, Shannon entropy and an improved kernel principal component analysis model. First, the features are extracted by a time–frequency domain method, local mean decomposition, and the Shannon entropy is used to process the separated product functions, so as to obtain the original features. However, the extracted features still contain superfluous information, so a nonlinear multi-feature fusion technique, kernel principal component analysis, is introduced to fuse the features. The kernel principal component analysis is improved by a weight factor. The extracted features were input into a Morlet wavelet kernel support vector machine to obtain a bearing running-state classification model, and the bearing running state was thereby identified. Both test and real-world cases were analyzed.
Principal components analysis in clinical studies.
Zhang, Zhongheng; Castelló, Adela
2017-09-01
In multivariate analysis, independent variables are usually correlated to each other, which can introduce multicollinearity in regression models. One approach to solving this problem is to apply principal components analysis (PCA) over these variables. This method uses an orthogonal transformation to represent sets of potentially correlated variables with principal components (PCs) that are linearly uncorrelated. PCs are ordered so that the first PC has the largest possible variance, and only some components are selected to represent the correlated variables. As a result, the dimension of the variable space is reduced. This tutorial illustrates how to perform PCA in the R environment, using as an example a simulated dataset in which two PCs are responsible for the majority of the variance in the data. Furthermore, the visualization of PCA is highlighted.
Monte Carlo variance reduction approaches for non-Boltzmann tallies
Booth, T.E.
1992-12-01
Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed
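The paper's three schemes are specific to non-Boltzmann tallies, but the underlying claim that a modified, still-unbiased estimator can drastically cut variance can be illustrated with a standard textbook technique, antithetic variates (not one of the paper's three approaches), here estimating E[e^U] for U ~ Uniform(0, 1):

```python
import math
import random

def sample_stats(vals):
    # Sample mean and unbiased sample variance.
    n = len(vals)
    m = sum(vals) / n
    return m, sum((v - m) ** 2 for v in vals) / (n - 1)

rng = random.Random(42)
f = math.exp
n = 20000

# Plain (analog) Monte Carlo: n independent evaluations of f(U).
plain = [f(rng.random()) for _ in range(n)]

# Antithetic variates: average f over the pair (U, 1 - U). For monotone f
# the two evaluations are negatively correlated, so the pair average has
# much smaller variance while remaining unbiased.
anti = []
for _ in range(n // 2):
    u = rng.random()
    anti.append(0.5 * (f(u) + f(1 - u)))

plain_mean, plain_var = sample_stats(plain)
anti_mean, anti_var = sample_stats(anti)
# True value is e - 1; both estimators target it, but the antithetic
# per-sample variance is smaller by roughly two orders of magnitude.
```

Both estimators converge to e − 1, mirroring the unbiasedness requirement stressed in the abstract; the practical gain is entirely in the variance per sample.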
Grammatical and lexical variance in English
Quirk, Randolph
2014-01-01
Written by one of Britain's most distinguished linguists, this book is concerned with the phenomenon of variance in English grammar and vocabulary across regional, social, stylistic and temporal space.
Surface roughness characterization of cast components using 3D optical methods
Nwaogu, Ugochukwu Chibuzoh; Tiedje, Niels Skat; Hansen, Hans Nørgaard
scanning probe image processor (SPIP) software, and the surface roughness parameters obtained were subjected to statistical analyses. The bearing area ratio was introduced and applied to the surface roughness analysis. From the results, the surface quality of the standard comparators is successfully characterised, and it was established that the areal parameters are more informative for sand cast components. The roughness values of the standard visual comparators can serve as a control for the cast components and for order specifications in the foundry industry. A series of iron castings were made in green sand moulds and the surface roughness parameter (Sa) values were compared with those of the standards. The Sa parameter suffices for the evaluation of casting surface texture. The S series comparators showed a better description of the surface of castings after shot blasting than the A series...
C.S. Chin
2006-01-01
It is important to identify the robustness of a product (or an embedded component inside the product) against shock due to free drop. With the increasingly mobile and fast-paced lifestyle of the average consumer, much is required of such products; for example, consumers expect mobile products to continue to operate after drop impact. Although the free drop test is commonly used to evaluate the robustness of a small component embedded in an MP3 player, it is difficult to produce a repeatable shock reading due to the highly uncontrolled orientation during the impact on the ground. Hence attention has focused on shock table testing, which produces more repeatable results. However, it fails to reproduce the actual shock, with its rotational movement, that occurs in a free drop, and it suffers from a similar limitation of repeatability. From drop to drop, shock tables can vary by about ±5% in velocity change, but they are suitable for consistently tracking product improvement.
Methods and Research for Multi-Component Cutting Force Sensing Devices and Approaches in Machining
Qiaokang Liang
2016-11-01
Multi-component cutting force sensing systems in manufacturing processes applied to cutting tools are gradually becoming the most significant monitoring indicator. Their signals have been extensively applied to evaluate the machinability of workpiece materials, predict cutter breakage, estimate cutting tool wear, control machine tool chatter, determine stable machining parameters, and improve surface finish. Robust and effective sensing systems with the capability of monitoring the cutting force in machine operations in real time are crucial for realizing the full potential of the cutting capabilities of computer numerically controlled (CNC) tools. The main objective of this paper is to present a brief review of the existing achievements in the field of multi-component cutting force sensing systems in modern manufacturing.
Structural Reliability Methods for Wind Power Converter System Component Reliability Assessment
Kostandyan, Erik; Sørensen, John Dalsgaard
2012-01-01
Wind power converter systems are essential subsystems in both off-shore and on-shore wind turbines. The converter is the main interface between the generator and the grid connection. This system is affected by numerous stresses, where the main contributors might be defined as vibration and temperature loadings. The temperature variations induce time-varying stresses and thereby fatigue loads. A probabilistic model is used to model fatigue failure for an electrical component in the power converter system. This model is based on linear damage accumulation and physics of failure approaches, where a failure criterion is defined by the threshold model. The attention is focused on crack propagation in solder joints of electrical components due to the temperature loadings. Structural Reliability approaches are used to incorporate model, physical and statistical uncertainties. Reliability estimation by means of structural...
Simplified analysis method for vibration of fusion reactor components with magnetic damping
Tanaka, Yoshikazu; Horie, Tomoyoshi; Niho, Tomoya
2000-01-01
This paper describes two simplified analysis methods for magnetically damped vibration. One is a method that modifies the result of a finite element uncoupled analysis using the coupling intensity parameter, and the other is a method using the solution and coupled eigenvalues of the single-degree-of-freedom coupled model. To verify these methods, numerical analyses of a plate and a thin cylinder are performed. The comparison between the results of the former method and the finite element tightly coupled analysis shows almost satisfactory agreement. The results of the latter method agree very well with the finite element tightly coupled results because of the coupled eigenvalues. Since vibration with magnetic damping can be evaluated using these methods without finite element coupled analysis, these approximate methods will be practical and useful for a wide range of design analyses taking account of the magnetic damping effect.
A novel method for the quantification of key components of manual dexterity after stroke
Térémetz, Maxime; Colle, Florence; Hamdoun, Sonia; Maier, Marc A.; Lindberg, Påvel G.
2015-01-01
Background A high degree of manual dexterity is a central feature of the human upper limb. A rich interplay of sensory and motor components in the hand and fingers allows for independent control of fingers in terms of timing, kinematics and force. Stroke often leads to impaired hand function and decreased manual dexterity, limiting activities of daily living and impacting quality of life. Clinically, there is a lack of quantitative multi-dimensional measures of manual dexterity. We therefore ...
THE STUDY OF THE CHARACTERIZATION INDICES OF FABRICS BY PRINCIPAL COMPONENT ANALYSIS METHOD
HRISTIAN Liliana; OSTAFE Maria Magdalena; BORDEIANU Demetra Lacramioara; APOSTOL Laura Liliana
2017-01-01
The paper seeks to prioritize worsted fabric types for the manufacture of outerwear products by the characterization indices of fabrics, using the mathematical model of Principal Component Analysis (PCA). There are a number of variables with a certain influence on the quality of fabrics, but some of these variables are more important than others, so it is useful to identify those variables for a better understanding of the factors which can lead to improving fabric quality. A s...
Tools and methods of the formation of the armed violence’ information component
A. V. Bader
2016-10-01
Thus, we can state that the informational component of armed violence is gradually approaching such a theoretically grounded phenomenon as «consistent war». This is a system of outreach and psychological tools aimed at shaping public awareness, conducted over a long time using mass media, culture, arts and other (psychotropic, psychotronic) tools, according to carefully developed scenarios.
Artem O. Donskikh
2017-10-01
The paper considers methods of classification of grain mixture components based on spectral analysis in the visible and near-infrared wavelength ranges using various measurement approaches: reflection, transmission and combined spectrum methods. It also describes the experimental measuring units used and presents a prototype of a multispectral grain mixture analyzer. The results of the spectral measurements were processed using neural network based classification algorithms. The probabilities of incorrect recognition for various numbers of spectral parts and combinations of spectral methods were estimated. The paper demonstrates that combined usage of two spectral analysis methods leads to higher classification accuracy and allows for reducing the number of the analyzed spectral parts. A detailed description of the proposed measurement device for high-performance real-time multispectral analysis of the components of grain mixtures is given.
Izabela Dutra Alvim
2013-02-01
Full Text Available The food industry has been developing products to meet the demands of a growing number of consumers who are concerned with their health and who seek food products that satisfy their needs. Therefore, the development of processed foods that contain functional components has become important for this industry. Microencapsulation can be used to reduce the effects of processing on functional components and preserve their bioactivity. The present study investigated the production of lipid microparticles containing phytosterols by spray chilling. The matrices comprised mixtures of stearic acid and hydrogenated vegetable fat, and the ratio of matrix components to phytosterols was defined by an experimental design using the mean diameter of the microparticles as the response variable. The melting points of the matrices ranged from 44.5 to 53.4 °C. The process yield was melting-point dependent: particles with lower melting points had greater losses than those with higher melting points. The mean diameters of the microparticles ranged from 13.8 to 32.2 µm and were influenced by the amounts of phytosterols and stearic acid. The microparticles exhibited spherical shape and the polydispersity typical of atomized products. From a technological and practical standpoint (handling, yield, and agglomeration), lipid microparticles with higher melting points proved promising as phytosterol carriers.
A new creep-strain-replica method for evaluating the remaining life time of components
Joas, H.D.
2001-01-01
To realise safe and economic operation of older power or chemical plants, a maintenance strategy is necessary that makes it possible to operate a component or the plant beyond 300,000 operating hours, even when the mode of operation has changed in the meantime. In Germany, a realistic evaluation of the remaining lifetime is performed by comparing the actual calculated test data of a component with the codes TRD 301 and TRD 508 and with additional non-destructive tests or other codes such as ASME Sec. II, BS 5500 and AFCEN (1985). Owing to the many boundary conditions, the calculated data are inaccurate, and measuring creep strain at temperatures of about 600 °C with capacitive strain gauges is very expensive. The approach is as follows: spot-welding two gauges to the surface of a component at a defined distance, forming a gap; producing replicas of the gap after certain operating hours at shut-down conditions by trained personnel; evaluating the replicas with a scanning electron microscope to obtain the amount of creep strain; and assessing the creep-strain data. (Author)
Cutting method for structural component into block like shape, and device used for cutting
Nakazawa, Koichi; Ito, Akira; Tateiwa, Masaaki.
1995-01-01
Two grooves, each of a predetermined depth, are formed along the surface of a structural component, and the portion between the two grooves is cut in the depth direction from the surface by using the cutting wire of a wire saw device. The cutting wire is then moved in the extending direction of the grooves, while optionally changing its position in the depth direction, to cut the back face. Further, the cutting wire is moved in the depth direction of the grooves toward the surface to cut the portion between the two grooves. The wire saw device comprises a wire saw main body movable along the surface of the structural component, a pair of wire guide portions extending in the depth direction, guide pulleys revolvably and rotatably disposed at the top ends to guide the cutting wire, and an endless annular cutting wire extending between the wire guide portions. Thus, blocks of any chosen size and thickness can be cut out continuously. In addition, remote cutting is possible, with no need for an operator to approach the vicinity of radioactivated portions. (N.H.)
Sampling methods to the statistical control of the production of blood components.
Pereira, Paulo; Seghatchian, Jerard; Caldeira, Beatriz; Santos, Paula; Castro, Rosa; Fernandes, Teresa; Xavier, Sandra; de Sousa, Gracinda; de Almeida E Sousa, João Paulo
2017-12-01
The control of blood component specifications is a requirement generalized in Europe by the European Commission directives and in the US by the AABB standards. The use of a statistical process control methodology is recommended in the related literature, including the EDQM guideline. The reliability of this control depends on the sampling. However, a correct sampling methodology does not appear to be systematically applied. Commonly, sampling is intended solely to comply with the 1% specification for the produced blood components. From a purely statistical viewpoint, however, this model is arguably not grounded in a consistent sampling technique. This can be a severe limitation in detecting abnormal patterns and in assuring that production has a non-significant probability of producing nonconforming components. This article discusses what is happening in blood establishments. Three statistical methodologies are proposed: simple random sampling, sampling based on the proportion of a finite population, and sampling based on the inspection level. The empirical results demonstrate that these models are practicable in blood establishments, contributing to the robustness of sampling and of the related statistical process control decisions for the purpose they are suggested for. Copyright © 2017 Elsevier Ltd. All rights reserved.
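The sampling-based-on-a-finite-population idea described above can be sketched with Cochran's classic sample-size formula plus a finite-population correction. The lot size, margin of error and confidence level below are assumed for illustration, not taken from the paper.

```python
import math

def finite_population_sample_size(N, p=0.5, e=0.05, z=1.96):
    """Sample size for estimating a proportion in a finite population of N
    units, with margin of error e and confidence given by z (1.96 ~ 95%)."""
    n0 = z**2 * p * (1 - p) / e**2               # Cochran's infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / N))    # finite-population correction

# e.g. a hypothetical month's production of 3000 red cell concentrates
print(finite_population_sample_size(3000))
```

Note how the correction matters: for a small monthly lot the required sample is far below the ~385 units an infinite-population calculation would demand.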
A Co-modeling Method Based on Component Features for Mechatronic Devices in Aero-engines
Wang, Bin; Zhao, Haocen; Ye, Zhifeng
2017-08-01
Data-fused and user-friendly design of aero-engine accessories is required because of their structural complexity and stringent reliability requirements. This paper gives an overview of a typical aero-engine control system and the development process of the key mechatronic devices used. Several essential aspects of modeling and simulation in the process are investigated. Considering the limitations of a single theoretic model, a feature-based co-modeling methodology is suggested to satisfy the design requirements and accommodate the diversity of component sub-models for these devices. As an example, a stepper-motor-controlled Fuel Metering Unit (FMU) is modeled in view of the component physical features using two different software tools. An interface is suggested to integrate the single-discipline models into a synthesized one. Performance simulation of this device using the co-model and parameter optimization for its key components are discussed. Comparison between delivery testing and the simulation shows that the co-model of the FMU has high accuracy and is clearly superior to a single model. Together with its compatible interface with the engine mathematical model, the feature-based co-modeling methodology proves to be an effective technical measure in the development process of the device.
A Mean variance analysis of arbitrage portfolios
Fang, Shuhong
2007-03-01
Based on the careful analysis of the definition of arbitrage portfolio and its return, the author presents a mean-variance analysis of the return of arbitrage portfolios, which implies that Korkie and Turtle's results ( B. Korkie, H.J. Turtle, A mean-variance analysis of self-financing portfolios, Manage. Sci. 48 (2002) 427-443) are misleading. A practical example is given to show the difference between the arbitrage portfolio frontier and the usual portfolio frontier.
Dynamic Mean-Variance Asset Allocation
Basak, Suleyman; Chabakauri, Georgy
2009-01-01
Mean-variance criteria remain prevalent in multi-period problems, and yet not much is known about their dynamically optimal policies. We provide a fully analytical characterization of the optimal dynamic mean-variance portfolios within a general incomplete-market economy, and recover a simple structure that also inherits several conventional properties of static models. We also identify a probability measure that incorporates intertemporal hedging demands and facilitates much tractability in ...
Zou, Wenli; Filatov, Michael; Cremer, Dieter
2011-01-01
The analytical energy gradient of the normalized elimination of the small component (NESC) method is derived for the first time and implemented for the routine calculation of NESC geometries and other first order molecular properties. Essential for the derivation is the correct calculation of the
Karmanov, V.I.
1986-01-01
A variant of the fundamental parameter method is suggested, based on an empirical relation between the corrections for absorption and additional excitation and the absorbing characteristics of the samples. The method is used for X-ray fluorescence analysis of multi-component samples of charges for welding electrodes. It is shown that application of the method is justified only for the determination of titanium, calcium and silicon content in charges, taking into account only the corrections for absorption. Iron and manganese content can be calculated by the simple external standard method.
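The external standard method mentioned for iron and manganese amounts to a simple proportionality between fluorescence line intensity and concentration. The sketch below uses hypothetical numbers; a real analysis of the titanium, calcium and silicon channels would also fold in the empirical absorption corrections the abstract describes.

```python
def external_standard(conc_std, intensity_std, intensity_sample, absorption_corr=1.0):
    """External-standard XRF quantification: the analyte concentration is taken
    proportional to its fluorescence line intensity, optionally scaled by an
    empirical absorption-correction factor (1.0 = no correction)."""
    return conc_std * (intensity_sample / intensity_std) * absorption_corr

# hypothetical iron determination: 10 wt% standard, measured line intensities
print(external_standard(conc_std=10.0, intensity_std=2500.0, intensity_sample=1900.0))
```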
Shibui, M.
1989-01-01
A new method for fatigue-life assessment of a component containing defects is presented such that a probabilistic approach is incorporated into the CEGB two-criteria method. The present method assumes that aspect ratio of initial defect, proportional coefficient of fatigue crack growth law and threshold stress intensity range are treated as random variables. Examples are given to illustrate application of the method to the reliability analysis of conduit for an internally cooled cabled superconductor (ICCS) subjected to cyclic quench pressure. The possible failure mode and mechanical properties contributing to the fatigue life of the thin conduit are discussed using analytical and experimental results. 9 refs., 9 figs
Patanarapeelert, K. [Faculty of Science, Department of Mathematics, Mahidol University, Rama VI Road, Bangkok 10400 (Thailand); Frank, T.D. [Institute for Theoretical Physics, University of Muenster, Wilhelm-Klemm-Str. 9, 48149 Muenster (Germany)]. E-mail: tdfrank@uni-muenster.de; Friedrich, R. [Institute for Theoretical Physics, University of Muenster, Wilhelm-Klemm-Str. 9, 48149 Muenster (Germany); Beek, P.J. [Faculty of Human Movement Sciences and Institute for Fundamental and Clinical Human Movement Sciences, Vrije Universiteit, Van der Boechorststraat 9, 1081 BT Amsterdam (Netherlands); Tang, I.M. [Faculty of Science, Department of Physics, Mahidol University, Rama VI Road, Bangkok 10400 (Thailand)
2006-12-18
A method is proposed to identify deterministic components of stable and unstable time-delayed systems subjected to noise sources with finite correlation times (colored noise). Both neutral and retarded delay systems are considered. For vanishing correlation times it is shown how to determine their noise amplitudes by minimizing appropriately defined Kullback measures. The method is illustrated by applying it to simulated data from stochastic time-delayed systems representing delay-induced bifurcations, postural sway and ship rolling.
Lo Frano, R.; Aquaro, D.; Fontani, E.; Pilo, F.
2014-01-01
Highlights: • Application of the PHADEC chemical off-line methodology. • Decontamination of radioactive steam piping components of the Caorso turbine building. • Experimental characterization of metallic components, e.g., by SEM analysis. • Measurement of treatment efficiency via the reduction of activity versus treatment time. • Minimization of secondary waste produced during the decontamination of the Caorso BWR plant. - Abstract: The dismantling of nuclear plants is a complex activity that often generates a large quantity of radioactive contaminated residue. In this paper attention is focused on the PHADEC (PHosphoric Acid DEContamination) plant adopted for the clearance of the Caorso NPP (Italy) metallic systems and components contaminated by Co-60 (produced by neutron capture in iron materials), such as the main steam lines and the moisture separators of the turbine building. The PHADEC process is an off-line chemical treatment: the crud deposited along the steam piping during plant life is removed by acid attacks in ponds coupled with high-pressure water washing. Because the removed contaminated layers, essentially iron oxides of various chemical compositions, depend on component geometry, type of contamination and treatment time in the PHADEC plant, it is important to provide a procedure capable of improving the control of the PHADEC process parameters. This study therefore aimed at predicting and optimizing the treatment time in order to improve the efficiency of the plant and, in turn, minimize the waste produced. For this purpose an experimental campaign was carried out by analysing several samples taken along the main steam piping line. Smear tests as well as metallographic analyses were carried out in order to determine, respectively, the radioactivity distribution and the crud composition on the inner surface of the
Genetic variants influencing phenotypic variance heterogeneity.
Ek, Weronica E; Rask-Andersen, Mathias; Karlsson, Torgny; Enroth, Stefan; Gyllensten, Ulf; Johansson, Åsa
2018-03-01
Most genetic studies identify genetic variants associated with disease risk or with the mean value of a quantitative trait. More rarely, genetic variants associated with variance heterogeneity are considered. In this study, we have identified such variance single-nucleotide polymorphisms (vSNPs) and examined if these represent biological gene × gene or gene × environment interactions or statistical artifacts caused by multiple linked genetic variants influencing the same phenotype. We have performed a genome-wide study, to identify vSNPs associated with variance heterogeneity in DNA methylation levels. Genotype data from over 10 million single-nucleotide polymorphisms (SNPs), and DNA methylation levels at over 430 000 CpG sites, were analyzed in 729 individuals. We identified vSNPs for 7195 CpG sites (P mean DNA methylation levels. We further showed that variance heterogeneity between genotypes mainly represents additional, often rare, SNPs in linkage disequilibrium (LD) with the respective vSNP and for some vSNPs, multiple low frequency variants co-segregating with one of the vSNP alleles. Therefore, our results suggest that variance heterogeneity of DNA methylation mainly represents phenotypic effects by multiple SNPs, rather than biological interactions. Such effects may also be important for interpreting variance heterogeneity of more complex clinical phenotypes.
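A standard way to test for variance heterogeneity across genotype groups (the paper's exact vSNP test statistic is not given here) is the Brown-Forsythe variant of Levene's test: a one-way ANOVA on absolute deviations from each group's median. The sketch below applies it to simulated methylation-like data with genotype-dependent spread; all parameters are illustrative.

```python
import numpy as np

def brown_forsythe(*groups):
    """Brown-Forsythe statistic: one-way ANOVA F computed on absolute
    deviations from each group's median; a large F flags unequal variances."""
    z = [np.abs(g - np.median(g)) for g in groups]
    k, n = len(z), sum(len(g) for g in z)
    grand = np.concatenate(z).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in z)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in z)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

rng = np.random.default_rng(0)
# methylation-like values for three genotype groups with unequal spread
aa = rng.normal(0.5, 0.05, 300)      # homozygous reference
ab = rng.normal(0.5, 0.10, 300)      # heterozygous
bb = rng.normal(0.5, 0.20, 300)      # homozygous alternative
print(brown_forsythe(aa, ab, bb))    # large F: clear variance heterogeneity
```

Median centering (rather than mean centering, as in the original Levene test) makes the statistic robust to the skewed distributions common in methylation data.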
Guerin, P.; Baudron, A. M.; Lautard, J. J. [Commissariat a l' Energie Atomique, DEN/DANS/DM2S/SERMA/LENR, CEA Saclay, 91191 Gif sur Yvette (France)
2006-07-01
This paper describes a new technique for determining the pin power in heterogeneous core calculations. It is based on a domain decomposition with overlapping sub-domains and a component mode synthesis technique for the global flux determination. Local basis functions are used to span a discrete space that allows fundamental global mode approximation through a Galerkin technique. Two approaches are given to obtain these local basis functions: in the first one (Component Mode Synthesis method), the first few spatial eigenfunctions are computed on each sub-domain, using periodic boundary conditions. In the second one (Factorized Component Mode Synthesis method), only the fundamental mode is computed, and we use a factorization principle for the flux in order to replace the higher order Eigenmodes. These different local spatial functions are extended to the global domain by defining them as zero outside the sub-domain. These methods are well-fitted for heterogeneous core calculations because the spatial interface modes are taken into account in the domain decomposition. Although these methods could be applied to higher order angular approximations - particularly easily to a SPN approximation - the numerical results we provide are obtained using a diffusion model. We show the methods' accuracy for reactor cores loaded with UOX and MOX assemblies, for which standard reconstruction techniques are known to perform poorly. Furthermore, we show that our methods are highly and easily parallelizable. (authors)
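The component mode synthesis idea, local eigenfunctions computed per sub-domain, extended by zero to the global domain and combined through a Galerkin projection, can be illustrated on a 1D Laplace eigenproblem. This is a toy sketch, not the authors' reactor diffusion solver; the domain split and mode counts are arbitrary choices.

```python
import numpy as np

n = 99                                   # interior grid points on (0, 1)
h = 1.0 / (n + 1)
# 3-point finite-difference Laplacian with Dirichlet boundary conditions
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

# two overlapping sub-domains; local modes are extended by zero outside them
subdomains = [list(range(0, 60)), list(range(40, 99))]
basis = []
for idx in subdomains:
    Ai = A[np.ix_(idx, idx)]             # local operator (Dirichlet on the cuts)
    _, v = np.linalg.eigh(Ai)
    for k in range(6):                   # first few local eigenmodes
        phi = np.zeros(n)
        phi[idx] = v[:, k]               # zero extension to the global domain
        basis.append(phi)
B = np.array(basis).T                    # 99 x 12 reduced basis

# Galerkin projection: reduced generalized eigenproblem B^T A B y = lam B^T B y
lam = np.linalg.eigvals(np.linalg.solve(B.T @ B, B.T @ A @ B)).real
print(lam.min())                         # fundamental mode, should approach pi**2
```

The overlap is what lets zero-extended local modes combine into a good global fundamental mode; with disjoint sub-domains the basis could not be smooth across the interface.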
Li Shu; Zhuo Jiashou; Ren Qingwen
2000-01-01
In this paper, an optimal criterion is presented for an adaptive Kalman filter in a control system with unknown variances of stochastic vibration, obtained by constructing a function of the noise variances and minimizing it. The model and measurement variances are solved using the DFP optimization method to guarantee that the Kalman filter results are optimal. Finally, vibration control can be implemented by the LQG method.
Estimating integrated variance in the presence of microstructure noise using linear regression
Holý, Vladimír
2017-07-01
Using financial high-frequency data to estimate the integrated variance of asset prices is beneficial, but as the number of observations increases, so-called microstructure noise appears. This noise can significantly bias the realized variance estimator. We propose a method for estimating the integrated variance that is robust to microstructure noise, as well as a test for the presence of the noise. Our method utilizes linear regression in which realized variances estimated from different data subsamples act as the dependent variable while the number of observations acts as the explanatory variable. We compare the proposed estimator with other methods on simulated data for several microstructure noise structures.
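Under the common i.i.d.-noise assumption, the expected realized variance computed from every k-th observation is IV + 2·m·ω², where m is the number of increments and ω² the noise variance, so regressing subsampled realized variances on m recovers IV as the intercept. A simulated sketch of this regression idea (all parameter values assumed, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 23400                               # one trading day of 1-second prices
dt = 1.0 / n
sigma = 0.2                             # so integrated variance IV = sigma**2 = 0.04
noise_sd = 1e-3                         # microstructure noise standard deviation

# efficient log-price plus i.i.d. microstructure noise
efficient = np.cumsum(sigma * np.sqrt(dt) * rng.standard_normal(n))
observed = efficient + noise_sd * rng.standard_normal(n)

# realized variances on subsamples of decreasing frequency
m_list, rv_list = [], []
for k in range(1, 31):
    r = np.diff(observed[::k])
    m_list.append(len(r))               # number of increments in the subsample
    rv_list.append(np.sum(r**2))

# E[RV] ~ IV + 2*omega^2*m  ->  OLS intercept estimates IV, slope/2 estimates omega^2
slope, intercept = np.polyfit(m_list, rv_list, 1)
print(intercept, slope / 2)             # close to 0.04 and 1e-6 respectively
```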
Application of PHADEC method for the decontamination of radioactive steam piping components
Lo Frano, R.; Pilo, F.; Aquaro, D.
2013-01-01
The dismantling of nuclear plants is a complex activity that often generates a large quantity of radioactive contaminated residue. In this paper attention is focused on the PHADEC (Phosphoric Acid Decontamination) plant adopted for the clearance of the Caorso NPP (Italy) metallic systems and components contaminated by Co-60 (produced by neutron capture in iron materials), such as the main steam lines and the moisture separators of the turbine buildings. The PHADEC process is an off-line chemical treatment: the crud deposited along the steam piping during plant life is removed by acid attacks in ponds coupled with high-pressure water washing. Because the removed contaminated layers, essentially iron oxides of various chemical compositions, depend on component geometry, type of contamination and treatment time in the PHADEC plant, it is important to provide a procedure capable of improving the control of the PHADEC process parameters. This study therefore aimed at predicting and optimizing the treatment time in order to improve the efficiency of the plant and, in turn, minimize the waste produced. For this purpose an experimental campaign was carried out by analysing several samples taken along the main steam piping line. Smear tests as well as metallographic analyses were carried out in order to determine, respectively, the radioactivity distribution and the crud composition on the inner surface of the components. Moreover, the radioactivity across the crud thickness was measured. These values finally made it possible to correlate the residence time in the acid attack ponds with the level of decontamination achieved. (authors)
Rationale, design and methods of the HEALTHY study nutrition intervention component.
Gillis, B; Mobley, C; Stadler, D D; Hartstein, J; Virus, A; Volpe, S L; El ghormli, L; Staten, M A; Bridgman, J; McCormick, S
2009-08-01
The HEALTHY study was a randomized, controlled, multicenter and middle school-based, multifaceted intervention designed to reduce risk factors for the development of type 2 diabetes. The study randomized 42 middle schools to intervention or control, and followed students from the sixth to the eighth grades. Here we describe the design of the HEALTHY nutrition intervention component that was developed to modify the total school food environment, defined to include the following: federal breakfast, lunch, after school snack and supper programs; a la carte venues, including snack bars and school stores; vending machines; fundraisers; and classroom parties and celebrations. Study staff implemented the intervention using core and toolbox strategies to achieve and maintain the following five intervention goals: (1) lower the average fat content of foods, (2) increase the availability and variety of fruits and vegetables, (3) limit the portion sizes and energy content of dessert and snack foods, (4) eliminate whole and 2% milk and all added sugar beverages, with the exception of low fat or nonfat flavored milk, and limit 100% fruit juice to breakfast in small portions and (5) increase the availability of higher fiber grain-based foods and legumes. Other nutrition intervention component elements were taste tests, cafeteria enhancements, cafeteria line messages and other messages about healthy eating, cafeteria learning laboratory (CLL) activities, twice-yearly training of food service staff, weekly meetings with food service managers, incentives for food service departments, and twice yearly local meetings and three national summits with district food service directors. Strengths of the intervention design were the integration of nutrition with the other HEALTHY intervention components (physical education, behavior change and communications), and the collaboration and rapport between the nutrition intervention study staff members and food service personnel at both school
Pursuing an ecological component for the Effect Factor in LCIA methods
Cosme, Nuno Miguel Dias; Bjørn, Anders; Rosenbaum, Ralph K.
have also been altered by past impacts. Model frameworks are usually built on stability, linearity of causality and expectation of a safe return to stable states if the stressor is minimised. However, the command-and-control paradigm has resulted in the erosion of natural resources and species … EC50-based) or 1 (assuming that continuous stress affects reproduction rate), but these are all based on biological/physiological responses and do not add a true ecological component to the impact. Such a factor simply changes the HC50 by 1 or 0.3 log units. A stressor with equal intensity in two …
Method and alloys for fabricating wrought components for high-temperature gas-cooled reactors
Thompson, L.D.; Johnson, W.R.
1983-01-01
Wrought, nickel-based alloys, suitable for components of a high-temperature gas-cooled reactor exhibit strength and excellent resistance to carburization at elevated temperatures and include aluminum and titanium in amounts and ratios to promote the growth of carburization resistant films while preserving the wrought character of the alloys. These alloys also include substantial amounts of molybdenum and/or tungsten as solid-solution strengtheners. Chromium may be included in concentrations less than 10% to assist in fabrication. Minor amounts of carbon and one or more carbide-forming metals also contribute to high-temperature strength. The range of compositions of these alloys is given. (author)
Revision of the South African flexible pavement design method; mechanistic-empirical components
Theyse, HL
2007-09-01
Full Text Available … and damage models or transfer functions. This method was implemented in a number of software packages since the late 1990s, which exposed it to a wide user group. The method therefore came under increasing scrutiny and criticism in the recent past … It produces counter-intuitive results in some cases, provides unrealistic structural capacity estimates for certain pavement types and does not assess all materials equally, based on their true performance potential. In addition to these problems the method also focuses largely …
Methods and equipment for diagnosis of components of Novovoronezh nuclear power plant
Prokop, K. [Energoinvest, Dukovany (Czechoslovakia). Zavod Jaderna Elektrarna]
1981-12-01
The results are reported that were obtained in applying diagnostic techniques and diagnostic equipment in the Novovoronezh nuclear power plant. Vibroacoustic, neutron and hydrodynamic noise of the installation was monitored. The test level method and the mean value comparison method were used for assessing the installation condition. Dispersion analysis methods are used for predicting the propagation of anomalies, while for determining the specific defects leading to the formation of anomalies a method based on the correlation analysis of vibroacoustic signals and other technological noise is used. Flow charts and descriptions are given of the systems for acoustic emission testing, reactor internals testing using neutron noise, and pump testing, and of the spectral analyzer.
Malec-Czechowska, K.; Stachowicz, W.
2003-01-01
The results of experiments on the detection of an irradiated component in commercial flavour blends composed of a mixture of non-irradiated spices, herbs and seasonings are presented. A method based on thermoluminescence measurements of silicate materials isolated from the blends has been adapted. It has been proved that, by applying this technique, it is possible to detect 0.95% by weight of paprika irradiated with a dose of 7 kGy as a minor component of non-irradiated flavour blends. (author)
Dewei Tang
2017-03-01
Full Text Available The main task of the third Chinese lunar exploration project is to obtain soil samples greater than two meters in length and to acquire bedding information from the surface of the moon. The driving component is the power output unit of the drilling system in the lander; it provides drilling power for the core drilling tools. High temperatures can cause the sensors, permanent magnets, gears, and bearings to suffer irreversible damage. In this paper, a thermal analysis model for this driving component, based on the thermal network method (TNM), was established and solved using the quasi-Newton method. A vacuum test platform was built and an experimental verification method (EVM) was applied to measure the surface temperature of the driving component. The TNM was then optimized based on the principle of heat distribution. Through comparative analyses, the reasonableness of the TNM is validated. Finally, the static temperature field of the driving component was predicted and the “safe working time” of every mode is given.
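A thermal network model reduces the component to nodes exchanging heat through conductive and radiative links, giving a nonlinear nodal balance that a Newton-type iteration can solve (plain Newton with a finite-difference Jacobian below, standing in for the paper's quasi-Newton scheme). The two-node network and all parameter values are invented for illustration.

```python
import numpy as np

# two-node sketch: motor windings (node 0) and housing (node 1), 300 K sink
Q = np.array([20.0, 0.0])            # dissipated heat per node [W] (assumed)
G = 0.8                              # windings-to-housing conductance [W/K]
sigma_sb, eps_area = 5.67e-8, 0.02   # Stefan-Boltzmann const., emissivity*area [m^2]
T_sink = 300.0

def residual(T):
    """Nodal heat balance: dissipation minus conduction minus radiation."""
    return np.array([
        Q[0] - G * (T[0] - T[1]),
        Q[1] + G * (T[0] - T[1]) - sigma_sb * eps_area * (T[1]**4 - T_sink**4),
    ])

# Newton iteration with a finite-difference Jacobian
T = np.array([350.0, 320.0])         # initial temperature guess [K]
for _ in range(50):
    J = np.empty((2, 2))
    for j in range(2):
        dT = np.zeros(2); dT[j] = 1e-3
        J[:, j] = (residual(T + dT) - residual(T)) / 1e-3
    T = T - np.linalg.solve(J, residual(T))
print(T)                             # steady-state node temperatures [K]
```

The T⁴ radiation term is what makes the system nonlinear and motivates the (quasi-)Newton solver rather than a single linear solve.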
Shibata, Taiju; Tada, Tatsuya; Sumita, Junya; Sawa, Kazuhiro
2008-01-01
To develop non-destructive evaluation methods for oxidation damage to graphite components in High Temperature Gas-cooled Reactors (HTGRs), the applicability of ultrasonic wave and micro-indentation methods was investigated. Candidate graphites for core components of the Very High Temperature Reactor (VHTR), IG-110 and IG-430, were used in this study. These graphites were oxidized uniformly in air at 500 °C. The following results were obtained. (1) Ultrasonic wave velocities at 1 MHz can be expressed empirically as exponential functions of burn-off (oxidation weight loss). (2) The porous condition of the oxidized graphite could be evaluated by wave propagation analysis with a wave-pore interaction model; it is important to consider the non-uniformity of the oxidized porous condition. (3) The micro-indentation method is expected to determine local oxidation damage; it is necessary to assess the variation of the test data. (author)
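An exponential velocity-versus-burn-off relation of the kind described, v = v0·exp(-b·x), can be fitted by linear least squares on log v. The data points below are hypothetical, not the paper's measurements.

```python
import numpy as np

# hypothetical ultrasonic velocities [km/s] at increasing burn-off [wt%]
burnoff  = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
velocity = np.array([2.60, 2.42, 2.26, 2.10, 1.96, 1.83])

# fit v = v0 * exp(-b * x) by linear least squares on log(v):
# log v = log v0 - b * x, so the slope gives -b and the intercept gives log v0
slope, log_v0 = np.polyfit(burnoff, np.log(velocity), 1)
v0, bcoef = np.exp(log_v0), -slope
print(v0, bcoef)
```

The log-linearization is the standard trick for calibrating such empirical formulas; the fitted bcoef then converts a field-measured velocity into an estimated burn-off.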
Inverse method for stress monitoring in pressure components of steam generators
Duda, P.
2003-01-01
The purpose of this work is to formulate a space marching method that can be used to solve inverse multidimensional heat conduction problems. The method is designed to reconstruct the transient temperature distribution in a whole construction element based on temperatures measured at selected points inside or on the outer surface of the element. Next, the Finite Element Method is used to calculate thermal stresses and stresses caused by other loads such as, for instance, internal pressure. The developed method for solving the temperature and total stress distribution will be tested using measured temperatures generated from a direct solution. The transient temperature and total stress distributions obtained from the method presented below will be compared with the values obtained from the direct solution. Finally, the presented method will be applied to monitor the temperature and stress distribution in an outlet header, using real temperature values measured at seven points on the header's outer surface during the power boiler's shut-down operation. The presented method makes it possible to optimize the power block's start-up and shut-down operations, contributes to the reduction of heat loss during these operations and extends the power block's life. The fatigue and creep usage factors can be computed in an on-line mode. The method presented herein can be applied to monitoring systems in conventional as well as nuclear power plants. (author)
Stinchcomb, T.G.; Kuchnir, F.T.; Skaggs, L.S.
1980-01-01
Microdosimetric measurements of event-size spectra, made with a proportional counter, are being used increasingly for separation of dose components in mixed n-γ fields. Measurements in fields produced by 8.3 MeV deuteron bombardment of thick beryllium and deuterium targets were made in air and at 6 and 12 cm depth in water with a spherical tissue-equivalent (TE) proportional counter and with a pair of calibrated ion chambers (TE-TE and Mg-Ar). The dose results obtained with the two methods agree well for the neutron components, but the gamma components do not demonstrate consistent agreement. An important source of error in the microdosimetric method is the matching of the spectra measured at different gain settings to cover the large range of event sizes. The effect of this and other sources of error is analysed. (author)
A comparative experimental evaluation of uncertainty estimation methods for two-component PIV
Boomsma, Aaron; Bhattacharya, Sayantan; Troolin, Dan; Pothos, Stamatios; Vlachos, Pavlos
2016-09-01
Uncertainty quantification in planar particle image velocimetry (PIV) measurement is critical for proper assessment of the quality and significance of reported results. New uncertainty estimation methods have been introduced recently, generating interest about their applicability and utility. The present study compares and contrasts current methods, across two separate experiments and three software packages, in order to provide a diversified assessment of the methods. We evaluated the performance of four uncertainty estimation methods: primary peak ratio (PPR), mutual information (MI), image matching (IM) and correlation statistics (CS). The PPR method was implemented and tested in two processing codes, using in-house open-source PIV processing software (PRANA, Purdue University) and Insight4G (TSI, Inc.). The MI method was evaluated in PRANA, as was the IM method. The CS method was evaluated using DaVis (LaVision, GmbH). Utilizing two PIV systems for high- and low-resolution measurements and a laser Doppler velocimetry (LDV) system, data were acquired in a total of three cases: a jet flow and a cylinder in cross flow at two Reynolds numbers. LDV measurements were used to establish a point validation against which the high-resolution PIV measurements were validated. Subsequently, the high-resolution PIV measurements were used as a reference against which the low-resolution PIV data were assessed for error and uncertainty. We compared error and uncertainty distributions, spatially varying RMS error and RMS uncertainty, and standard uncertainty coverages. We observed that, qualitatively, each method responded to spatially varying error (i.e. higher-error regions resulted in higher uncertainty predictions in those regions). However, the PPR and MI methods demonstrated reduced uncertainty dynamic range response. In contrast, the IM and CS methods showed better response, but under-predicted the uncertainty ranges. The standard coverages (68% confidence interval) ranged from
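The PPR metric underlying one of the evaluated methods is the ratio of the primary to the secondary peak of the PIV correlation plane. A minimal sketch using FFT-based cross-correlation of a synthetic pattern and its shifted copy (the 3×3 peak-masking choice is an assumption, not a detail from the paper):

```python
import numpy as np

def primary_peak_ratio(corr):
    """Primary-to-secondary peak ratio of a correlation plane; a higher
    PPR indicates a more trustworthy displacement estimate."""
    c = corr.copy()
    p1 = c.max()
    i, j = np.unravel_index(c.argmax(), c.shape)
    c[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2] = -np.inf   # mask the primary peak
    return p1 / c.max()

rng = np.random.default_rng(1)
a = rng.random((32, 32))
b = np.roll(a, (3, 5), axis=(0, 1))     # pattern displaced by (3, 5) pixels
# FFT-based circular cross-correlation of the mean-subtracted windows
fa = np.fft.fft2(a - a.mean())
fb = np.fft.fft2(b - b.mean())
corr = np.fft.ifft2(fa.conj() * fb).real
print(np.unravel_index(corr.argmax(), corr.shape))   # recovered shift
print(primary_peak_ratio(corr))
```

Uncertainty models built on PPR map this ratio to a velocity uncertainty; the sketch stops at the ratio itself, since the mapping is method-specific.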
A comparative experimental evaluation of uncertainty estimation methods for two-component PIV
Boomsma, Aaron; Troolin, Dan; Pothos, Stamatios; Bhattacharya, Sayantan; Vlachos, Pavlos
2016-01-01
Uncertainty quantification in planar particle image velocimetry (PIV) measurement is critical for proper assessment of the quality and significance of reported results. New uncertainty estimation methods have been recently introduced, generating interest about their applicability and utility. The present study compares and contrasts current methods across two separate experiments and three software packages in order to provide a diversified assessment of the methods. We evaluated the performance of four uncertainty estimation methods: primary peak ratio (PPR), mutual information (MI), image matching (IM) and correlation statistics (CS). The PPR method was implemented and tested in two processing codes, using the in-house open-source PIV processing software PRANA (Purdue University) and Insight4G (TSI, Inc.). The MI method was evaluated in PRANA, as was the IM method. The CS method was evaluated using DaVis (LaVision, GmbH). Utilizing two PIV systems for high- and low-resolution measurements and a laser Doppler velocimetry (LDV) system, data were acquired in a total of three cases: a jet flow and a cylinder in cross flow at two Reynolds numbers. LDV measurements were used to establish a point validation against which the high-resolution PIV measurements were validated. Subsequently, the high-resolution PIV measurements were used as a reference against which the low-resolution PIV data were assessed for error and uncertainty. We compared error and uncertainty distributions, spatially varying RMS error and RMS uncertainty, and standard uncertainty coverages. We observed that, qualitatively, each method responded to spatially varying error (i.e., higher-error regions resulted in higher uncertainty predictions in those regions). However, the PPR and MI methods demonstrated a reduced uncertainty dynamic-range response. In contrast, the IM and CS methods showed better response, but under-predicted the uncertainty ranges. The standard coverages (68% confidence interval) ranged from
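The core quantity behind the PPR method is the ratio of the tallest correlation peak to the second-tallest: an unambiguous (high-ratio) correlation plane implies low displacement uncertainty. The following is a minimal illustrative sketch of that ratio computation on a synthetic correlation plane; the exclusion radius and the Gaussian test peaks are assumptions for illustration, not the PRANA or Insight4G implementation.

```python
import numpy as np

def primary_peak_ratio(corr):
    """Ratio of the tallest correlation peak to the second-tallest.

    A high PPR indicates an unambiguous displacement estimate and hence
    low uncertainty.  The 3 px exclusion radius around the primary peak
    is an assumed value for this sketch.
    """
    corr = np.asarray(corr, dtype=float)
    i, j = np.unravel_index(np.argmax(corr), corr.shape)
    primary = corr[i, j]
    masked = corr.copy()
    r = 3  # exclusion radius around the primary peak (assumed)
    masked[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1] = -np.inf
    secondary = masked.max()
    return primary / secondary

# Synthetic correlation plane: a strong primary peak plus a weaker
# secondary peak, mimicking a clean PIV interrogation window.
y, x = np.mgrid[0:32, 0:32]
corr = (np.exp(-((x - 10) ** 2 + (y - 12) ** 2) / 4.0)
        + 0.4 * np.exp(-((x - 25) ** 2 + (y - 20) ** 2) / 4.0))
ppr = primary_peak_ratio(corr)
```

In practice, the ratio is mapped to a standard uncertainty through an empirically calibrated relation; this sketch stops at the ratio itself.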
Reduction of system matrices of planar beam in ANCF by component mode synthesis method
Kobayashi, Nobuyuki; Wago, Tsubasa; Sugawara, Yoshiki
2011-01-01
A method of reducing the system matrices of a planar flexible beam described by the absolute nodal coordinate formulation (ANCF) is presented. The method exploits the fact that the bending stiffness matrix obtained by applying a continuum mechanics approach to the ANCF beam element is constant when the axial strain is not very large. This feature allows the Craig–Bampton method to be applied to the equation of motion expressed in independent coordinates once the constraint forces are eliminated. Four numerical examples comparing the proposed method with the conventional ANCF are presented to verify the performance and accuracy of the proposed method. These examples verify that the proposed method can describe large-deformation effects, such as dynamic stiffening due to centrifugal force, as well as the conventional ANCF does. The method also reduces computing time while maintaining an acceptable degree of accuracy relative to the conventional ANCF when the modal truncation number is chosen adequately. The reduction in CPU time is particularly pronounced for a large element number and a small modal truncation number, and holds not only for small deformations but also for fairly large deformations.
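The Craig–Bampton reduction the abstract relies on keeps the boundary degrees of freedom intact and replaces the interior DOFs with a truncated set of fixed-interface normal modes plus static constraint modes. A minimal generic sketch on a toy spring–mass chain is given below; the 4-DOF system and the choice of one retained mode are assumptions for illustration, not the paper's ANCF beam model.

```python
import numpy as np

def craig_bampton(M, K, boundary, n_modes):
    """Craig-Bampton reduction: retain boundary DOFs plus a truncated set
    of fixed-interface normal modes of the interior partition."""
    n = M.shape[0]
    b = np.array(boundary)
    i = np.array([d for d in range(n) if d not in set(boundary)])
    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    Mii = M[np.ix_(i, i)]
    # Static (constraint) modes: interior response to unit boundary motion
    Psi = -np.linalg.solve(Kii, Kib)
    # Fixed-interface normal modes: lowest n_modes of the interior problem
    w2, Phi = np.linalg.eig(np.linalg.solve(Mii, Kii))
    order = np.argsort(w2.real)[:n_modes]
    Phi = Phi.real[:, order]
    # Transformation T maps (boundary, modal) coordinates to full DOFs
    T = np.zeros((n, len(b) + n_modes))
    T[np.ix_(b, range(len(b)))] = np.eye(len(b))
    T[np.ix_(i, range(len(b)))] = Psi
    T[np.ix_(i, range(len(b), len(b) + n_modes))] = Phi
    return T.T @ M @ T, T.T @ K @ T

# Toy 4-DOF spring-mass chain; reduce the two interior DOFs to one mode.
K = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
M = np.eye(4)
Mr, Kr = craig_bampton(M, K, boundary=[0, 3], n_modes=1)
```

The key point matching the abstract: the reduction is only valid here because K is constant, which is exactly what the constant bending stiffness of the ANCF element (at moderate axial strain) provides.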
Msika, D.; Lafon, A.
1978-01-01
In the technology of liquid sodium cooled fast reactors, the necessary processes for washing and decontamination have been demonstrated. For sodium removal, different solutions have been considered and tested in France. The studies have been progressively oriented toward defining a process using a fine dispersion of water in a gas (atomization). The results obtained by that method on non-radioactive components were satisfactory insofar as the efficiency and safety of the operation were concerned. The purpose of decontaminating components from the reactor primary circuits is to reduce the level of surface activity to a level compatible with personnel access without biological shielding. The treatment comprises two stages: (i) washing, to remove any residual sodium, and (ii) decontamination, which alternately applies alkaline and acid solutions to dissolve the deposited radionuclides without significant attack on the surface. The treatment, recently applied to components from in-service reactors, generally met the design objective. (author)
Zhang, S. Y.; Wang, G. F.; Wu, Y. T.; Baldwin, K. M. (Principal Investigator)
1993-01-01
On a partition chromatographic column in which the support is Kieselguhr and the stationary phase is sulfuric acid solution (2 mol/L), three components of compound theophylline tablets were simultaneously eluted by chloroform and three other components were simultaneously eluted by ammonia-saturated chloroform. The two mixtures were determined separately by a computer-aided convolution curve method. The average recoveries and relative standard deviations of the six components were as follows: 101.6% and 1.46% for caffeine; 99.7% and 0.10% for phenacetin; 100.9% and 1.31% for phenobarbitone; 100.2% and 0.81% for theophylline; 99.9% and 0.81% for theobromine; and 100.8% and 0.48% for aminopyrine.
Rajagopal, K. R.
1992-01-01
The technical effort and computer code development are summarized. Several formulations for Probabilistic Finite Element Analysis (PFEA) are described, with emphasis on the selected formulation. The strategies implemented in the first-version computer code to perform linear, elastic PFEA are described. The results of a series of select Space Shuttle Main Engine (SSME) component surveys are presented. These results identify the critical components and provide the information necessary for probabilistic structural analysis. Volume 2 is a summary of critical SSME components.
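The essence of probabilistic structural analysis is propagating random inputs (loads, geometry, material) through a structural model to estimate response distributions and exceedance probabilities. The sketch below uses plain Monte Carlo on a trivial linear elastic stress relation; all numbers (load, area, allowable stress) are invented for illustration and have no connection to SSME data, and the code is a generic sketch rather than the PFEA formulation selected in the report.

```python
import random

def mc_exceedance_probability(n=100_000, seed=42):
    """Monte Carlo sketch of probabilistic structural analysis: propagate a
    random axial load and cross-section area through stress = F / A and
    estimate the probability that stress exceeds an allowable value.
    All distribution parameters below are illustrative assumptions."""
    rng = random.Random(seed)
    limit = 400.0                       # allowable stress (MPa), assumed
    hits = 0
    for _ in range(n):
        F = rng.gauss(50e3, 5e3)        # axial load (N), assumed N(50 kN, 5 kN)
        A = rng.gauss(160.0, 8.0)       # cross-section area (mm^2), assumed
        if F / A > limit:               # does the realized stress exceed the limit?
            hits += 1
    return hits / n

p_fail = mc_exceedance_probability()
```

Dedicated PFEA codes replace this brute-force sampling with fast probability integration or response-surface methods, but the quantity being estimated is the same.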
Lu Wang
2016-01-01
Multichannel microwave components are widely used, and the active phased array antenna is a typical representative. The high power generated by T/R modules in an active phased array antenna (APAA) leads to the degradation of its electrical performance, which seriously restricts the development of high-performance APAAs. Therefore, to meet the thermal design demands of the APAA, a multiobjective optimization design model of the cold plate is proposed. Furthermore, in order to satisfy temperature uniformity and case temperature restrictions of the APAA simultaneously, an optimization model of the channel structure is developed. An airborne active phased array antenna was tested as an example to verify the validity of the optimization model. The results provide an important reference for engineers seeking to enhance the thermal design of antennas.
Mohammed Khair E. A. Al-Shwaiyat
2014-12-01
A new approach has been proposed for the simultaneous determination of two reducing agents, based on the pH dependence of their reaction rate with the 18-molybdo-2-phosphate heteropoly complex. The method was automated using a manifold typical of the sequential analysis method. Ascorbic acid and rutin were determined by successive injection of two samples acidified to different pH values. The linear range for rutin determination was 0.6-20 mg/L and the detection limit was 0.2 mg/L (l = 1 cm). The determination of rutin was possible in the presence of up to a 20-fold excess of ascorbic acid. The method was successfully applied to the determination of ascorbic acid and rutin in ascorutin tablets. The applicability of the proposed method to the determination of total polyphenol content in natural plant samples was also shown.
The Influence of Different Processing Methods on Component Content of Sophora japonica
Ji, Y. B.; Zhu, H. J.; Xin, G. S.; Wei, C.
2017-12-01
The purpose of this experiment was to understand the effect of different processing methods on the content of active ingredients in Sophora japonica, by determining the contents of rutin and quercetin under each processing method using UV spectrophotometry and comparing the results. The experiments show the rutin content ordered as: fried Sophora japonica > vinegar-processed Sophora japonica > raw Sophora japonica > charred sophora flower, with no obvious difference between the vinegar-processed and fried samples. The quercetin content was ordered as: charred sophora flower > fried Sophora japonica > vinegar-processed Sophora japonica > raw Sophora japonica. This demonstrates that the different processing methods produce measurable differences in the active ingredient content of Sophora japonica. The rutin content increased with processing temperature but decreased beyond a certain temperature, while the quercetin content increased gradually with processing time.
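UV-spectrophotometric content determination of this kind typically rests on a linear calibration (absorbance versus standard concentration) followed by interpolation of the unknown. The sketch below shows that step with ordinary least squares; the standard concentrations and absorbances are invented example values, not data from this study.

```python
def linear_calibration(conc, absorbance):
    """Ordinary least-squares calibration line A = slope * c + intercept,
    as used in UV-spectrophotometric content determination."""
    n = len(conc)
    mx = sum(conc) / n
    my = sum(absorbance) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(conc, absorbance))
             / sum((x - mx) ** 2 for x in conc))
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical rutin standards (mg/L) and their measured absorbances.
c = [2, 4, 6, 8, 10]
A = [0.105, 0.208, 0.304, 0.406, 0.501]
slope, intercept = linear_calibration(c, A)

# Concentration of a hypothetical unknown sample with absorbance 0.355.
unknown = (0.355 - intercept) / slope
```

Content per tablet or per gram of raw material then follows from the measured concentration, the dilution factor, and the sample mass.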
Method of and apparatus for use in joining tubular components and tube assemblies made thereby
Percival, S.R.
1979-01-01
A method of joining difficult to weld materials involves the forming of a rolled joint. A particular application is the joining of zirconium alloy calandria tubes to stainless steel tube-plates in a SGHWR. (UK)
Szczepanik, M.; Poteralski, A.
2016-11-01
The paper is devoted to the application of evolutionary methods and the finite element method to the optimization of shell structures. The optimization of the thickness of a car wheel (a shell) by minimization of a stress functional is considered. The car wheel geometry is built from three surfaces of revolution: the central surface with the holes for the fastening bolts, the surface of the wheel ring, and the surface connecting the two. The last of these is subjected to the optimization process. The structures are discretized with triangular finite elements and subjected to volume constraints. Using the proposed method, the material properties or thicknesses of finite elements are changed evolutionarily and some elements are eliminated. As a result, the optimal shape, topology, and material or thickness of the structures are obtained. The numerical examples demonstrate that the method based on evolutionary computation is an effective technique for computer-aided optimal design.
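The loop underlying such evolutionary thickness optimization is: mutate the per-element design variables, evaluate the stress functional with a penalty for violating the volume constraint, and keep improvements. A minimal (1+1) evolution-strategy sketch is given below; the closed-form "stress" and "volume" models are toy stand-ins for the paper's finite element evaluation, and all parameter values are assumptions.

```python
import random

def evolve_thickness(stress, volume, v_max, n_elem, iters=2000, seed=1):
    """Minimal (1+1) evolutionary sketch: mutate element thicknesses and
    keep the child whenever the penalized stress functional improves."""
    rng = random.Random(seed)
    t = [1.0] * n_elem                       # initial thickness per element
    def fitness(th):
        pen = max(0.0, volume(th) - v_max)   # volume-constraint penalty
        return stress(th) + 100.0 * pen      # penalty weight is assumed
    best = fitness(t)
    for _ in range(iters):
        child = [max(0.1, ti + rng.gauss(0.0, 0.05)) for ti in t]
        f = fitness(child)
        if f <= best:
            t, best = child, f
    return t, best

# Toy model: thin elements are highly stressed (stress ~ sum 1/t_i),
# material use is sum t_i, capped at 1.2 per element on average.
n = 8
t_opt, f_opt = evolve_thickness(lambda th: sum(1.0 / ti for ti in th),
                                lambda th: sum(th),
                                v_max=n * 1.2, n_elem=n)
```

In the paper's setting the fitness evaluation is a full FEM stress analysis per candidate, which is what makes evolutionary search expensive but derivative-free.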
Neveu, Pascaline; Priot, Anne-Emmanuelle; Philippe, Matthieu; Fuchs, Philippe; Roumes, Corinne
2015-09-01
Several tests are available to optometrists for investigating accommodation and vergence. This study sought to investigate the agreement between clinical and laboratory methods and to clarify which components are actually measured when tonic and cross-link of accommodation and vergence are assessed. Tonic vergence, tonic accommodation, accommodative vergence (AC/A) and vergence accommodation (CA/C) were measured using several tests. Clinical tests were compared to the laboratory assessment, the latter being regarded as an absolute reference. The repeatability of each test and the degree of agreement between the tests were quantified using Bland-Altman analysis. The values obtained for each test were found to be stable across repetitions; however, in most cases, significant differences were observed between tests supposed to measure the same oculomotor component. Tonic and cross-link components cannot be easily assessed because proximal and instrumental responses interfere with the assessment. Other components interfere with oculomotor assessment. Specifically, accommodative divergence interferes with tonic vergence estimation and the type of accommodation considered in the AC/A ratio affects its magnitude. Results on clinical tonic accommodation and clinical CA/C show that further investigation is needed to clarify the limitations associated with the use of difference of Gaussian as visual targets to open the accommodative loop. Although different optometric tests of accommodation and vergence rely on the same basic principles, the results of this study indicate that clinical and laboratory methods actually involve distinct components. These differences, which are induced by methodological choices, must be taken into account, when comparing studies or when selecting a test to investigate a particular oculomotor component. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.
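The Bland–Altman analysis used to quantify agreement between clinical and laboratory tests reduces to the mean difference (bias) between paired measurements and its 95% limits of agreement. A short sketch follows; the paired AC/A-like values are invented for illustration, not data from the study.

```python
def bland_altman(a, b):
    """Bland-Altman agreement statistics between two measurement methods:
    mean bias and 95% limits of agreement (bias +/- 1.96 sd of the
    pairwise differences)."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    bias = sum(d) / n
    sd = (sum((x - bias) ** 2 for x in d) / (n - 1)) ** 0.5
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired AC/A estimates from a clinical and a laboratory test.
clinical = [4.1, 3.8, 5.0, 4.4, 3.6, 4.9]
lab      = [3.9, 3.5, 4.6, 4.5, 3.2, 4.4]
bias, (lo, hi) = bland_altman(clinical, lab)
```

A systematic bias with narrow limits indicates the two tests measure the same component with a fixed offset; wide limits, as reported here for several test pairs, indicate they involve distinct oculomotor components.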
Components made of corrosion-resistant zirconium alloy and method for its production
Hanneman, R.E.; Urquhart, A.W.; Vermilyea, D.A.
1977-01-01
The invention deals with a method to increase the resistance of zirconium alloys to blister corrosion, which mainly occurs in boiling-water nuclear reactors. According to the method described, the surface of the alloy body is coated with a thin film of a suitable electronically conducting material. Gold, silver, platinum, nickel, chromium, iron and niobium are suitable as coating materials. The invention is explained in more detail by means of examples. (GSC)
Wavelets and principal component analysis method for vibration monitoring of rotating machinery
Bendjama, Hocine; S. Boucherit, Mohamad
2017-01-01
Fault diagnosis plays a crucial role in industrial systems today. To improve reliability, safety and efficiency, advanced monitoring methods have become increasingly important for many systems. The vibration analysis method is essential in improving condition monitoring and fault diagnosis of rotating machinery. Effective utilization of vibration signals depends upon the effectiveness of the applied signal processing techniques. In this paper, fault diagnosis is performed using a com...
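A common wavelets-plus-PCA monitoring pipeline extracts wavelet-band energies from vibration signals, fits principal components on healthy-condition data, and flags new samples with an outlying statistic such as Hotelling's T². The sketch below uses Haar detail energies and synthetic signals as stand-ins; the signal frequencies, noise level, and fault model are all assumptions for illustration, not this paper's data or exact feature set.

```python
import numpy as np

def haar_energies(x, levels=3):
    """Detail-coefficient energy per Haar wavelet level: a simple stand-in
    for wavelet feature extraction (signal length must be divisible by
    2**levels)."""
    e, a = [], np.asarray(x, float)
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # approximation coefficients
        e.append(float(np.sum(d * d)))
    return e

def hotelling_t2(train, sample, n_pc=2):
    """Score a feature vector by Hotelling's T^2 in the space of the leading
    principal components fitted on healthy-condition training features."""
    X = np.asarray(train, float)
    mu = X.mean(axis=0)
    w, V = np.linalg.eigh(np.cov((X - mu).T))
    idx = np.argsort(w)[::-1][:n_pc]
    P, lam = V[:, idx], w[idx]
    s = P.T @ (np.asarray(sample, float) - mu)
    return float(np.sum(s * s / lam))

rng = np.random.default_rng(0)
t = np.arange(256) / 256.0
healthy = [haar_energies(np.sin(2 * np.pi * 8 * t)
                         + 0.1 * rng.standard_normal(256))
           for _ in range(20)]
# A fault injects high-frequency vibration, inflating the detail energies.
faulty = haar_energies(np.sin(2 * np.pi * 8 * t)
                       + 0.8 * np.sin(2 * np.pi * 60 * t)
                       + 0.1 * rng.standard_normal(256))
t2_healthy = hotelling_t2(healthy, healthy[0])
t2_faulty = hotelling_t2(healthy, faulty)
```

A fault threshold on T² is normally set from the training distribution (e.g. an F-distribution quantile); here the faulty sample simply scores far above the healthy one.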
A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.
Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio
2017-11-01
Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
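The signal model above — zero-mean Gaussian samples whose variance is itself an inverse-gamma random variable — can be simulated and fitted in a few lines. The sketch below generates EMG-like data with a slowly varying variance and recovers the distribution parameters by moment matching on windowed sample variances; this moment-matching step is a cheap stand-in for the paper's marginal-likelihood estimator, and the window length and parameter values are assumptions.

```python
import random

def simulate_emg(alpha, beta, n, hold=50, seed=0):
    """EMG-like samples under the model: zero-mean Gaussian noise whose
    variance follows an inverse gamma distribution (shape alpha, scale
    beta), with each variance draw held for `hold` consecutive samples."""
    rng = random.Random(seed)
    out = []
    for _ in range(n // hold):
        v = beta / rng.gammavariate(alpha, 1.0)  # inverse-gamma variance draw
        sd = v ** 0.5
        out.extend(rng.gauss(0.0, sd) for _ in range(hold))
    return out

def fit_inverse_gamma(x, window=50):
    """Moment-matching fit of the variance distribution from windowed sample
    variances -- a stand-in for the paper's marginal-likelihood procedure,
    slightly biased by within-window sampling noise."""
    vs = []
    for i in range(0, len(x) - window + 1, window):
        w = x[i:i + window]
        m = sum(w) / window
        vs.append(sum((s - m) ** 2 for s in w) / (window - 1))
    mv = sum(vs) / len(vs)                       # sample mean of variances
    vv = sum((v - mv) ** 2 for v in vs) / (len(vs) - 1)
    alpha = mv * mv / vv + 2.0                   # from InvGamma moment formulas
    beta = mv * (alpha - 1.0)
    return alpha, beta

emg = simulate_emg(alpha=5.0, beta=4.0, n=20000)
a_hat, b_hat = fit_inverse_gamma(emg)
```

The fitted mean variance b_hat / (a_hat - 1) should recover the true value beta / (alpha - 1) = 1.0 closely, even though the shape estimate itself carries the bias noted in the comment.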