WorldWideScience

Sample records for residual variance predicted

  1. Genomic prediction and genomic variance partitioning of daily and residual feed intake in pigs using Bayesian Power Lasso models

    DEFF Research Database (Denmark)

    Do, Duy Ngoc; Janss, L. L. G.; Strathe, Anders Bjerring

    … of different power parameters had no effect on predictive ability. Partitioning of genomic variance showed that SNP groups either by position (intron, exon, downstream, upstream and 5’UTR) or by function (missense and protein-altering) had similar average explained variance per SNP, except that 3’UTR had … genomic variance for RFI and daily feed intake (DFI). A total of 1272 Duroc pigs had both genotypic and phenotypic records for these traits. Significant SNPs were detected on chromosome 1 (SSC 1) and SSC 14 for RFI and on SSC 1 for DFI. BPL models had similar accuracy and bias as the GBLUP method but use …

  2. Genomic prediction and genomic variance partitioning of daily and residual feed intake in pigs using Bayesian Power Lasso models

    DEFF Research Database (Denmark)

    Do, Duy Ngoc; Janss, L. L. G.; Strathe, Anders Bjerring

    Improvement of feed efficiency is essential in pig breeding, and selection for reduced residual feed intake (RFI) is an option. The study applied Bayesian Power LASSO (BPL) models with different power parameters to investigate genetic architecture, to predict genomic breeding values, and to partition …

  3. Variance component and breeding value estimation for genetic heterogeneity of residual variance in Swedish Holstein dairy cattle

    NARCIS (Netherlands)

    Rönnegård, L.; Felleki, M.; Fikse, W.F.; Mulder, H.A.; Strandberg, E.

    2013-01-01

    Trait uniformity, or micro-environmental sensitivity, may be studied through individual differences in residual variance. These differences appear to be heritable, and the need exists, therefore, to fit models to predict breeding values explaining differences in residual variance. The aim of this …

  4. Assessment of heterogeneity of residual variances using changepoint techniques

    Directory of Open Access Journals (Sweden)

    Toro Miguel A

    2000-07-01

    Several studies using test-day models show clear heterogeneity of residual variance along lactation. A changepoint technique to account for this heterogeneity is proposed. The data set included 100 744 test-day records of 10 869 Holstein-Friesian cows from northern Spain. A three-stage hierarchical model using the Wood lactation function was employed. Two unknown changepoints at times T1 and T2 (0 < T1 < T2 < tmax), with continuity of residual variance at these points, were assumed. Also, a nonlinear relationship between the residual variance and the number of days of milking t was postulated: the residual variance at time t in lactation phase i was modeled as a nonlinear function of t with a phase-specific parameter λi (i = 1, 2, 3). A Bayesian analysis using Gibbs sampling and the Metropolis-Hastings algorithm for marginalization was implemented. After a burn-in of 20 000 iterations, 40 000 samples were drawn to estimate posterior features. The posterior modes of T1, T2, λ1, λ2, λ3 and the remaining variance parameters were 53.2 and 248.2 days; 0.575, -0.406, 0.797 and 0.702; and 34.63 and 0.0455 kg², respectively. The residual variances predicted using these point estimates were 2.64, 6.88, 3.59 and 4.35 kg² at days of milking 10, 53, 248 and 305, respectively. This technique requires less restrictive assumptions, and the model has fewer parameters than other methods proposed to account for the heterogeneity of residual variance during lactation.
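
    The reported exponents and predicted variances are consistent with a phase-wise power function of days in milk that is continuous at the changepoints; the form below is an illustrative reconstruction rather than the paper's stated equation, and the scale constants a_i are introduced here only to express the continuity constraints.

    ```latex
    \[
    \sigma_e^2(t) \;=\; a_i\, t^{\lambda_i}, \qquad T_{i-1} < t \le T_i,\quad i = 1, 2, 3,
    \]
    \[
    T_0 = 0,\qquad T_3 = t_{\max},\qquad
    a_2 = a_1\, T_1^{\lambda_1 - \lambda_2},\qquad
    a_3 = a_2\, T_2^{\lambda_2 - \lambda_3}.
    \]
    ```

    As a rough check, 2.64 kg² at day 10 scaled by (53/10)^0.575 gives about 6.9 kg², close to the reported 6.88 kg² at day 53.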

  5. Is residual memory variance a valid method for quantifying cognitive reserve? A longitudinal application.

    Science.gov (United States)

    Zahodne, Laura B; Manly, Jennifer J; Brickman, Adam M; Narkhede, Atul; Griffith, Erica Y; Guzman, Vanessa A; Schupf, Nicole; Stern, Yaakov

    2015-10-01

    Cognitive reserve describes the mismatch between brain integrity and cognitive performance. Older adults with high cognitive reserve are more resilient to age-related brain pathology. Traditionally, cognitive reserve is indexed indirectly via static proxy variables (e.g., years of education). More recently, cross-sectional studies have suggested that reserve can be expressed as residual variance in episodic memory performance that remains after accounting for demographic factors and brain pathology (whole brain, hippocampal, and white matter hyperintensity volumes). The present study extends these methods to a longitudinal framework in a community-based cohort of 244 older adults who underwent two comprehensive neuropsychological and structural magnetic resonance imaging sessions over 4.6 years. On average, residual memory variance decreased over time, consistent with the idea that cognitive reserve is depleted over time. Individual differences in change in residual memory variance predicted incident dementia, independent of baseline residual memory variance. Multiple-group latent difference score models revealed tighter coupling between brain and language changes among individuals with decreasing residual memory variance. These results suggest that changes in residual memory variance may capture a dynamic aspect of cognitive reserve and could be a useful way to summarize individual cognitive responses to brain changes. Change in residual memory variance among initially non-demented older adults was a better predictor of incident dementia than residual memory variance measured at one time-point. Copyright © 2015. Published by Elsevier Ltd.
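
    As a rough illustration of the residual approach described above (not the authors' actual pipeline; the variable names and the use of ordinary least squares are assumptions made here), cognitive reserve can be operationalized as the residual from a regression of memory scores on demographic and brain-measure covariates:

    ```python
    import numpy as np

    def residual_reserve(memory, demographics, brain_volumes):
        """Illustrative sketch: index 'reserve' as the part of memory
        performance not explained by demographics and brain pathology.

        memory:        (n,) array of episodic memory scores
        demographics:  (n, p) array, e.g. age, education, sex
        brain_volumes: (n, q) array, e.g. whole-brain, hippocampal, WMH volumes
        """
        X = np.column_stack([np.ones(len(memory)), demographics, brain_volumes])
        beta, *_ = np.linalg.lstsq(X, memory, rcond=None)   # OLS fit
        residuals = memory - X @ beta                       # memory not explained by covariates
        return residuals  # higher residual -> better memory than brain/demographics predict

    # In a longitudinal design, the same calculation at two visits gives a change
    # score (reserve at visit 2 minus reserve at visit 1), analogous to the change
    # in residual memory variance examined in the study.
    ```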

  6. Normal linear models with genetically structured residual variance heterogeneity: a case study

    DEFF Research Database (Denmark)

    Sorensen, Daniel; Waagepetersen, Rasmus Plenge

    2003-01-01

    Normal mixed models with different levels of heterogeneity in the residual variance are fitted to pig litter size data. Exploratory analysis and model assessment are based on examination of various posterior predictive distributions. Comparisons based on Bayes factors and related criteria favour … models with a genetically structured residual variance heterogeneity. There is, moreover, strong evidence of a negative correlation between the additive genetic values affecting litter size and those affecting residual variance. The models are also compared according to the purposes for which they might … be used, such as prediction of 'future' data, inference about response to selection and ranking candidates for selection. A brief discussion is given of some implications for selection of the genetically structured residual variance model.

  7. Reexamining financial and economic predictability with new estimators of realized variance and variance risk premium

    DEFF Research Database (Denmark)

    Casas, Isabel; Mao, Xiuping; Veiga, Helena

    This study explores the predictive power of new estimators of the equity variance risk premium and conditional variance for future excess stock market returns, economic activity, and financial instability, both during and after the last global financial crisis. These estimators are obtained from … time-varying coefficient models are the ones showing considerably higher predictive power for stock market returns and financial instability during the financial crisis, suggesting that an extreme volatility period requires models that can adapt quickly to turmoil. … Moreover, a comparison of the overall results reveals that the conditional variance gains predictive power during the global financial crisis period. Furthermore, both the variance risk premium and conditional variance are determined to be predictors of future financial instability, whereas conditional …

  8. Longitudinal Analysis of Residual Feed Intake in Mink using Random Regression with Heterogeneous Residual Variance

    DEFF Research Database (Denmark)

    Shirali, Mahmoud; Nielsen, Vivi Hunnicke; Møller, Steen Henrik

    Heritability of residual feed intake (RFI) increased from low to high over the growing period in male and female mink. The lowest heritability for RFI (male: 0.04 ± 0.01 standard deviation (SD); female: 0.05 ± 0.01 SD) was found in early growth, and the highest heritability (male: 0.33 ± 0.02 SD; female: 0.34 ± 0.02 SD) was achieved at the late growth stages. The genetic correlation between different growth stages for RFI showed a high association (0.91 to 0.98) between early and late growing periods. However, phenotypic correlations were lower, ranging from 0.29 to 0.50. The residual variances were substantially higher …

  9. Genetic variance components for residual feed intake and feed ...

    African Journals Online (AJOL)

    Feeding costs of animals is a major determinant of profitability in livestock production enterprises. Genetic selection to improve feed efficiency aims to reduce feeding cost in beef cattle and thereby improve profitability. This study estimated genetic (co)variances between weaning weight and other production, reproduction ...

  10. Global Variance Risk Premium and Forex Return Predictability

    OpenAIRE

    Aloosh, Arash

    2014-01-01

    In a long-run risk model with stochastic volatility and frictionless markets, I express expected forex returns as a function of consumption growth variances and stock variance risk premiums (VRPs)—the difference between the risk-neutral and statistical expectations of market return variation. This provides a motivation for using the forward-looking information available in stock market volatility indices to predict forex returns. Empirically, I find that stock VRPs predict forex returns at a ...
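
    The variance risk premium referred to above is conventionally defined as the gap between the risk-neutral and the physical (statistical) expectation of future return variation; in generic notation (the symbols below are not taken from the paper):

    ```latex
    \[
    \mathrm{VRP}_t \;=\; \mathbb{E}^{\mathbb{Q}}_t\!\left[\sigma^2_{t,t+1}\right]
    \;-\; \mathbb{E}^{\mathbb{P}}_t\!\left[\sigma^2_{t,t+1}\right],
    \]
    ```

    where σ²_{t,t+1} is the market return variation over the coming period. In empirical work the risk-neutral term is typically proxied by a squared model-free implied-volatility index and the physical term by a forecast of realized variance.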

  11. An observation on the variance of a predicted response in ...

    African Journals Online (AJOL)

    ... these properties and computational simplicity. To avoid overfitting, along with the obvious advantage of having a simpler equation, it is shown that the addition of a variable to a regression equation does not reduce the variance of a predicted response. Key words: Linear regression; Partitioned matrix; Predicted response ...

  12. Selection for uniformity in livestock by exploiting genetic heterogeneity of residual variance

    NARCIS (Netherlands)

    Mulder, H.A.; Veerkamp, R.F.; Vereijken, A.; Bijma, P.; Hill, W.G.

    2008-01-01

    In some situations, it is worthwhile to change not only the mean, but also the variability of traits by selection. Genetic variation in residual variance may be utilised to improve uniformity in livestock populations by selection. The objective was to investigate the effects of genetic parameters, …

  13. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model both the mean and the variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but a significant step in advancing the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variation as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance makes it possible to evaluate model quality in terms of the data likelihood. This offers a solution to the problem of ground-truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta-parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
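
    The likelihood-based evaluation mentioned above can be made concrete with a Gaussian predictive density: held-out measurements are scored under the predicted mean and predictive variance at each location. This is a generic sketch, not the authors' implementation.

    ```python
    import numpy as np

    def gaussian_nll(y_obs, mu_pred, var_pred, eps=1e-12):
        """Average negative log likelihood of observations under N(mu_pred, var_pred).

        Lower values mean the predicted mean *and* the predictive variance together
        explain the held-out data better, which is the evaluation idea above.
        """
        var = np.maximum(var_pred, eps)                     # guard against zero variance
        nll = 0.5 * (np.log(2 * np.pi * var) + (y_obs - mu_pred) ** 2 / var)
        return float(np.mean(nll))

    # Example use: compare a model with a fixed global variance against one whose
    # predictive variance adapts to local concentration fluctuations.
    ```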

  14. Estimating the variance and integral scale of the transmissivity field using head residual increments

    Science.gov (United States)

    Zheng, Lingyun; Silliman, S.E.

    2000-01-01

    A modification of previously published solutions regarding the spatial variation of hydraulic heads is discussed, whereby the semivariogram of increments of head residuals (termed head residual increments, HRIs) is related to the variance and integral scale of the transmissivity field. A first-order solution is developed for the case of a transmissivity field which is isotropic and whose second-order behavior can be characterized by an exponential covariance structure. The estimates of the variance σ²Y and the integral scale λ of the log transmissivity field are then obtained via fitting a theoretical semivariogram for the HRI to its sample semivariogram. This approach is applied to head data sampled from a series of two-dimensional, simulated aquifers with isotropic, exponential covariance structures and varying degrees of heterogeneity (σ²Y = 0.25, 0.5, 1.0, 2.0, and 5.0). The results show that this method provided reliable estimates for both λ and σ²Y in aquifers with values of σ²Y up to 2.0, but the errors in those estimates were higher for σ²Y equal to 5.0. It is also demonstrated through numerical experiments and theoretical arguments that the head residual increments will provide a sample semivariogram with a lower variance than will the use of the head residuals without calculation of increments.

  15. Dynamics of Variance Risk Premia, Investors' Sentiment and Return Predictability

    DEFF Research Database (Denmark)

    Rombouts, Jerome V.K.; Stentoft, Lars; Violante, Francesco

    … and realized variances, our model allows us to infer the occurrence and size of extreme variance events, and to construct indicators signalling agents' sentiment towards future market conditions. Our results show that excess returns are to a large extent explained by fear or optimism towards future extreme variance …

  16. Evaluation of residue-residue contact predictions in CASP9

    KAUST Repository

    Monastyrskyy, Bohdan

    2011-01-01

    This work presents the results of the assessment of the intramolecular residue-residue contact predictions submitted to CASP9. The methodology for the assessment does not differ from that used in previous CASPs, with two basic evaluation measures being the precision in recognizing contacts and the difference between the distribution of distances in the subset of predicted contact pairs versus all pairs of residues in the structure. The emphasis is placed on the prediction of long-range contacts (i.e., contacts between residues separated by at least 24 residues along sequence) in target proteins that cannot be easily modeled by homology. Although there is considerable activity in the field, the current analysis reports no discernable progress since CASP8.

  17. Evaluation of residue-residue contact prediction in CASP10

    KAUST Repository

    Monastyrskyy, Bohdan

    2013-08-31

    We present the results of the assessment of the intramolecular residue-residue contact predictions from 26 prediction groups participating in the 10th round of the CASP experiment. The most recently developed direct coupling analysis methods did not take part in the experiment likely because they require a very deep sequence alignment not available for any of the 114 CASP10 targets. The performance of contact prediction methods was evaluated with the measures used in previous CASPs (i.e., prediction accuracy and the difference between the distribution of the predicted contacts and that of all pairs of residues in the target protein), as well as new measures, such as the Matthews correlation coefficient, the area under the precision-recall curve and the ranks of the first correctly and incorrectly predicted contact. We also evaluated the ability to detect interdomain contacts and tested whether the difficulty of predicting contacts depends upon the protein length and the depth of the family sequence alignment. The analyses were carried out on the target domains for which structural homologs did not exist or were difficult to identify. The evaluation was performed for all types of contacts (short, medium, and long-range), with emphasis placed on long-range contacts, i.e. those involving residues separated by at least 24 residues along the sequence. The assessment suggests that the best CASP10 contact prediction methods perform at approximately the same level, and comparably to those participating in CASP9.
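
    For reference, the headline measures named above (precision over predicted contacts and the Matthews correlation coefficient) reduce to confusion-matrix arithmetic once contacts are binarized. The sketch below assumes contacts are supplied as 0-based residue index pairs (i, j) with i < j and applies the long-range filter of at least 24 residues of sequence separation mentioned in the abstract.

    ```python
    from math import sqrt

    def long_range(pairs, min_sep=24):
        """Keep only residue pairs (i, j), i < j, separated by at least min_sep residues."""
        return {(i, j) for i, j in pairs if j - i >= min_sep}

    def precision_and_mcc(predicted, true, n_residues, min_sep=24):
        """Precision and Matthews correlation coefficient for long-range contacts.

        predicted, true: iterables of 0-based residue index pairs (i, j) with i < j.
        """
        pred = long_range(predicted, min_sep)
        ref = long_range(true, min_sep)
        # every possible long-range pair in a chain of n_residues residues
        universe = {(i, j) for i in range(n_residues)
                    for j in range(i + min_sep, n_residues)}
        tp = len(pred & ref)
        fp = len(pred - ref)
        fn = len(ref - pred)
        tn = len(universe) - tp - fp - fn
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
        mcc = (tp * tn - fp * fn) / denom if denom else 0.0
        return precision, mcc
    ```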

  18. Automatic prediction of catalytic residues by modeling residue structural neighborhood

    Directory of Open Access Journals (Sweden)

    Passerini Andrea

    2010-03-01

    Background: Prediction of catalytic residues is a major step in characterizing the function of enzymes. In its simpler formulation, the problem can be cast into a binary classification task at the residue level, by predicting whether the residue is directly involved in the catalytic process. The task is quite hard even when structural information is available, due to the rather wide range of roles a functional residue can play and to the large imbalance between the number of catalytic and non-catalytic residues. Results: We developed an effective representation of structural information by modeling spherical regions around candidate residues, and extracting statistics on the properties of their content, such as physico-chemical properties, atomic density, flexibility, and presence of water molecules. We trained an SVM classifier combining our features with sequence-based information and previously developed 3D features, and compared its performance with the most recent state-of-the-art approaches on different benchmark datasets. We further analyzed the discriminant power of the information provided by the presence of heterogens in the residue neighborhood. Conclusions: Our structure-based method achieves consistent improvements on all tested datasets over both sequence-based and structure-based state-of-the-art approaches. Structural neighborhood information is shown to be responsible for such results, and predicting the presence of nearby heterogens seems to be a promising direction for further improvements.

  19. FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL ...

    African Journals Online (AJOL)

    … direction (σx) had a maximum value of 375 MPa (tensile) and a minimum value of … These results show that the residual stresses obtained by prediction from the finite element method are in fair agreement with the experimental results.

  20. Prediction of machining induced residual stresses

    Science.gov (United States)

    Pramod, Monangi; Reddy, Yarkareddy Gopi; Prakash Marimuthu, K.

    2017-07-01

    Whenever a component is machined, residual stresses are induced in it. These residual stresses reduce its fatigue life, corrosion resistance and wear resistance. Thus it is important to predict and control machining-induced residual stress. A great deal of research has been carried out in this area over the past decade. This paper aims at the prediction of residual stresses during machining of Ti-6Al-4V. A model was developed and, under various combinations of cutting conditions such as speed, feed and depth of cut, the behavior of residual stresses was simulated using a finite element model. The present work deals with the development of a thermo-mechanical model to predict machining-induced residual stresses in the titanium alloy. The simulation results are compared with published results and are in good agreement with them. Future work involves optimization of the cutting parameters that affect the machining-induced residual stresses. The results obtained were validated against previous work.

  1. Protein structure based prediction of catalytic residues

    Science.gov (United States)

    2013-01-01

    Background: Worldwide structural genomics projects continue to release new protein structures at an unprecedented pace, so far nearly 6000, but only about 60% of these proteins have any sort of functional annotation. Results: We explored a range of features that can be used for the prediction of functional residues given a known three-dimensional structure. These features include various centrality measures of nodes in graphs of interacting residues: closeness, betweenness and page-rank centrality. We also analyzed the distance of functional amino acids to the general center of mass (GCM) of the structure, relative solvent accessibility (RSA), and the use of relative entropy as a measure of sequence conservation. From the selected features, neural networks were trained to identify catalytic residues. We found that using distance to the GCM together with amino acid type provides a good discriminant function when combined independently with sequence conservation. Using an independent test set of 29 annotated protein structures, the method returned 411 of the initial 9262 residues as the most likely to be involved in function. The output 411 residues contain 70 of the annotated 111 catalytic residues. This represents an approximately 14-fold enrichment of catalytic residues on the entire input set (corresponding to a sensitivity of 63% and a precision of 17%), a performance competitive with that of other state-of-the-art methods. Conclusions: We found that several of the graph-based measures utilize the same underlying feature of protein structures, which can be simply and more effectively captured with the distance-to-GCM definition. This also has the added advantage of simplicity and easy implementation. Meanwhile, sequence conservation remains by far the most influential feature in identifying functional residues. We also found that, due to the rapid changes in size and composition of sequence databases, conservation calculations must be recalibrated for specific …
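
    The distance-to-GCM feature singled out above is straightforward to compute once per-residue coordinates are available; the sketch below approximates the center of mass by the unweighted centroid of C-alpha atoms, which is a simplification made here for illustration rather than the paper's exact definition.

    ```python
    import numpy as np

    def distance_to_gcm(ca_coords):
        """Distance of each residue's C-alpha atom to the geometric center of the chain.

        ca_coords: (n_residues, 3) array of C-alpha coordinates (e.g. from any PDB parser).
        Returns an (n_residues,) feature vector; catalytic residues tend to sit closer
        to the center of mass than surface residues, which is why this simple feature
        carries signal.
        """
        gcm = ca_coords.mean(axis=0)                  # unweighted centroid as GCM proxy
        return np.linalg.norm(ca_coords - gcm, axis=1)
    ```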

  2. A Decomposition Algorithm for Mean-Variance Economic Model Predictive Control of Stochastic Linear Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik

    2014-01-01

    This paper presents a decomposition algorithm for solving the optimal control problem (OCP) that arises in Mean-Variance Economic Model Predictive Control of stochastic linear systems. The algorithm applies the alternating direction method of multipliers to a reformulation of the OCP that decomposes …

  3. Multilevel models for multiple-baseline data: modeling across-participant variation in autocorrelation and residual variance.

    Science.gov (United States)

    Baek, Eun Kyeng; Ferron, John M

    2013-03-01

    Multilevel models (MLM) have been used as a method for analyzing multiple-baseline single-case data. However, some concerns can be raised because the models that have been used assume that the Level-1 error covariance matrix is the same for all participants. The purpose of this study was to extend the application of MLM of single-case data in order to accommodate across-participant variation in the Level-1 residual variance and autocorrelation. This more general model was then used in the analysis of single-case data sets to illustrate the method, to estimate the degree to which the autocorrelation and residual variances differed across participants, and to examine whether inferences about treatment effects were sensitive to whether or not the Level-1 error covariance matrix was allowed to vary across participants. The results from the analyses of five published studies showed that when the Level-1 error covariance matrix was allowed to vary across participants, some relatively large differences in autocorrelation estimates and error variance estimates emerged. The changes in modeling the variance structure did not change the conclusions about which fixed effects were statistically significant in most of the studies, but there was one exception. The fit indices did not consistently support selecting either the more complex covariance structure, which allowed the covariance parameters to vary across participants, or the simpler covariance structure. Given the uncertainty in model specification that may arise when modeling single-case data, researchers should consider conducting sensitivity analyses to examine the degree to which their conclusions are sensitive to modeling choices.

  4. Examining heterogeneity in residual variance to detect differential response to treatments.

    Science.gov (United States)

    Kim, Jinok; Seltzer, Michael

    2011-06-01

    Individual differences in response to treatments have been a long-standing interest in education, psychology, and related fields. This article presents a conceptual framework and hierarchical modeling strategies that may help identify the subgroups for whom, or the conditions under which, a particular treatment is associated with better outcomes. The framework discussed in this article shows how differences in residual dispersion between treatment and control group members can signal omitted individual characteristics that may interact with treatments (Bryk & Raudenbush, 1988) and sensitizes us to individual- and cluster-level confounders of inferences concerning dispersion in quasi-experimental studies in multilevel settings. Based on the framework, hierarchical modeling strategies are developed to uncover interactions between treatments and individual characteristics, which are readily applicable to various settings in multisite evaluation studies. These strategies entail jointly modeling the mean and dispersion structures in hierarchical models. We illustrate the implementation of this framework through fully Bayesian analyses of the data from a study of the effectiveness of a reform-minded mathematics curriculum. © 2011 American Psychological Association

  5. Seismic attenuation relationship with homogeneous and heterogeneous prediction-error variance models

    Science.gov (United States)

    Mu, He-Qing; Xu, Rong-Rong; Yuen, Ka-Veng

    2014-03-01

    Peak ground acceleration (PGA) estimation is an important task in earthquake engineering practice. One of the most well-known models is the Boore-Joyner-Fumal formula, which estimates the PGA using the moment magnitude, the site-to-fault distance and the site foundation properties. In the present study, the complexity of this formula and the homogeneity assumption for the prediction-error variance are investigated, and an efficiency-robustness balanced formula is proposed. For this purpose, a reduced-order Monte Carlo simulation algorithm for Bayesian model class selection is presented to obtain the most suitable predictive formula and prediction-error model for the seismic attenuation relationship. In this approach, each model class (a predictive formula with a prediction-error model) is evaluated according to its plausibility given the data. The one with the highest plausibility is robust since it possesses the optimal balance between the data-fitting capability and the sensitivity to noise. A database of strong ground motion records in the Tangshan region of China is obtained from the China Earthquake Data Center for the analysis. The optimal predictive formula is proposed based on this database. It is shown that the proposed formula with heterogeneous prediction-error variance is much simpler than the attenuation model suggested by Boore, Joyner and Fumal (1993).

  6. Correction for Measurement Error from Genotyping-by-Sequencing in Genomic Variance and Genomic Prediction Models

    DEFF Research Database (Denmark)

    Ashraf, Bilal; Janss, Luc; Jensen, Just

    … sample). The GBSeq data can be used directly in genomic models in the form of individual SNP allele-frequency estimates (e.g., reference reads/total reads per polymorphic site per individual), but is subject to measurement error due to the low sequencing depth per individual. Due to technical reasons … In the current work we show how the correction for measurement error in GBSeq can also be applied in whole-genome genomic variance and genomic prediction models. Bayesian whole-genome random regression models are proposed to allow implementation of large-scale SNP-based models with a per-SNP correction … for measurement error. We show correct retrieval of genomic explained variance, and improved genomic prediction when accounting for the measurement error in GBSeq data …

  7. Quantitative milk genomics: estimation of variance components and prediction of fatty acids in bovine milk

    DEFF Research Database (Denmark)

    Krag, Kristian

    The composition of bovine milk fat used for human consumption is far from the recommendations for human fat nutrition. The aim of this PhD was to describe the variance components and prediction probabilities of individual fatty acids (FA) in bovine milk, and to evaluate the possibilities … be estimated from SNP markers, with a performance similar to traditional pedigree approaches. The heritability and correlation estimates indicate that the composition of saturated FA and unsaturated FA can be altered independently, through selection and regulation of feeding regimes. For the prediction of FA …

  8. Genetic Gain Increases by Applying the Usefulness Criterion with Improved Variance Prediction in Selection of Crosses.

    Science.gov (United States)

    Lehermeier, Christina; Teyssèdre, Simon; Schön, Chris-Carolin

    2017-12-01

    A crucial step in plant breeding is the selection and combination of parents to form new crosses. Genome-based prediction guides the selection of high-performing parental lines in many crop breeding programs, which ensures a high mean performance of progeny. To warrant maximum selection progress, a new cross should also provide a large progeny variance. The usefulness concept, as a measure of the gain that can be obtained from a specific cross, accounts for variation in progeny variance. Here, it is shown that genetic gain can be considerably increased when crosses are selected based on their genomic usefulness criterion compared to selection based on mean genomic estimated breeding values. An efficient and improved method to predict the genetic variance of a cross based on Markov chain Monte Carlo samples of marker effects from a whole-genome regression model is suggested. In simulations representing selection procedures in crop breeding programs, the performance of this novel approach is compared with existing methods, such as selection based on mean genomic estimated breeding values and optimal haploid values. In all cases, higher genetic gain was obtained compared with previously suggested methods. When 1% of progenies per cross were selected, the genetic gain based on the estimated usefulness criterion increased by 0.14 genetic standard deviations compared to a selection based on mean genomic estimated breeding values. Analytical derivations of the progeny genotypic variance-covariance matrix based on parental genotypes and genetic map information make simulations of progeny dispensable, and allow fast implementation in large-scale breeding programs. Copyright © 2017 by the Genetics Society of America.
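
    For context, the usefulness criterion referred to above combines the expected mean of a cross with its predicted within-cross variation; in its classical form (notation here is generic, not taken from the paper):

    ```latex
    \[
    U_c \;=\; \mu_c \;+\; i\, h\, \sigma_{g,c},
    \]
    ```

    where μ_c is the expected mean of cross c (e.g., the mean genomic estimated breeding value), σ_{g,c} the predicted genetic standard deviation among its progeny, i the selection intensity applied within the cross, and h the selection accuracy. The contribution highlighted in the abstract is an improved marker-based prediction of σ_{g,c} from MCMC samples of marker effects rather than treating progeny variance as constant across crosses.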

  9. Predicting the residual life of plant equipment - Why worry

    International Nuclear Information System (INIS)

    Jaske, C.E.

    1985-01-01

    Predicting the residual life of plant equipment that has been in service for 20 to 30 years or more is a major concern of many industries. This paper reviews the reasons for increased concern for residual-life assessment and the general procedures used in performing such assessments. Some examples and case histories illustrating procedures for assessing remaining service life are discussed. Areas where developments are needed to improve the technology for remaining-life estimation are pointed out. Then, some of the critical issues involved in residual-life assessment are identified. Finally, the future role of residual-life prediction is addressed

  10. Predicting coiled coils by use of pairwise residue correlations.

    OpenAIRE

    Berger, B; Wilson, D B; Wolf, E; Tonchev, T; Milla, M; Kim, P S

    1995-01-01

    A method is presented that predicts coiled-coil domains in protein sequences by using pairwise residue correlations obtained from a (two-stranded) coiled-coil database of 58,217 amino acid residues. A program called PAIRCOIL implements this method and is significantly better than existing methods at distinguishing coiled coils from alpha-helices that are not coiled coils. The database of pairwise residue correlations suggests structural features that stabilize or destabilize coiled coils.

  11. Efficient Coding of the Prediction Residual.

    Science.gov (United States)

    1979-12-27

    … DPCM) with an adaptive quantizer and an adaptive predictor using Kalman filtering. This work was improved upon by Cohn and Melsa [15] using adaptive … with an encoding method of Adaptive Delta Modulation (ADM) and an experimental method of encoding the residual by DPCM. This is referred to as a … S. "Digital Coding of Speech Waveforms - PCM, DPCM and DM Quantizers." Proceedings of the IEEE, Vol. 62, No. 5 (1974), 611-632. (110) Tribolet, J. M …

  12. Identification of residue pairing in interacting β-strands from a predicted residue contact map.

    Science.gov (United States)

    Mao, Wenzhi; Wang, Tong; Zhang, Wenxuan; Gong, Haipeng

    2018-04-19

    Despite the rapid progress of protein residue contact prediction, predicted residue contact maps frequently contain many errors. However, information of residue pairing in β strands could be extracted from a noisy contact map, due to the presence of characteristic contact patterns in β-β interactions. This information may benefit the tertiary structure prediction of mainly β proteins. In this work, we propose a novel ridge-detection-based β-β contact predictor to identify residue pairing in β strands from any predicted residue contact map. Our algorithm RDb2C adopts ridge detection, a well-developed technique in computer image processing, to capture consecutive residue contacts, and then utilizes a novel multi-stage random forest framework to integrate the ridge information and additional features for prediction. Starting from the predicted contact map of CCMpred, RDb2C remarkably outperforms all state-of-the-art methods on two conventional test sets of β proteins (BetaSheet916 and BetaSheet1452), and achieves F1-scores of ~62% and ~76% at the residue level and strand level, respectively. Taking the prediction of the more advanced RaptorX-Contact as input, RDb2C achieves impressively higher performance, with F1-scores reaching ~76% and ~86% at the residue level and strand level, respectively. In a test of structural modeling using the top 1L predicted contacts as constraints, for 61 mainly β proteins, the average TM-score achieves 0.442 when using the raw RaptorX-Contact prediction, but increases to 0.506 when using the improved prediction by RDb2C. Our method can significantly improve the prediction of β-β contacts from any predicted residue contact maps. Prediction results of our algorithm could be directly applied to effectively facilitate the practical structure prediction of mainly β proteins. All source data and codes are available at http://166.111.152.91/Downloads.html or the GitHub address of https://github.com/wzmao/RDb2C .

  13. Genomic Prediction Within and Across Biparental Families: Means and Variances of Prediction Accuracy and Usefulness of Deterministic Equations

    Directory of Open Access Journals (Sweden)

    Pascal Schopp

    2017-11-01

    A major application of genomic prediction (GP) in plant breeding is the identification of superior inbred lines within families derived from biparental crosses. When models for various traits were trained within related or unrelated biparental families (BPFs), experimental studies found substantial variation in prediction accuracy (PA), but little is known about the underlying factors. We used SNP marker genotypes of inbred lines from either elite germplasm or landraces of maize (Zea mays L.) as parents to generate in silico 300 BPFs of doubled-haploid lines. We analyzed PA within each BPF for 50 simulated polygenic traits, using genomic best linear unbiased prediction (GBLUP) models trained with individuals from either full-sib (FSF), half-sib (HSF), or unrelated families (URF) for various sizes (Ntrain) of the training set and different heritabilities (h²). In addition, we modified two deterministic equations for forecasting PA to account for inbreeding and genetic variance unexplained by the training set. Averaged across traits, PA was high within FSF (0.41–0.97), with large variation only for Ntrain < 50 and h² < 0.6. For HSF and URF, PA was on average ∼40–60% lower and varied substantially among different combinations of BPFs used for model training and prediction as well as different traits. As exemplified by HSF results, PA of across-family GP can be very low if causal variants not segregating in the training set account for a sizeable proportion of the genetic variance among predicted individuals. Deterministic equations accurately forecast the PA expected over many traits, yet cannot capture trait-specific deviations. We conclude that model training within BPFs generally yields stable PA, whereas a high level of uncertainty is encountered in across-family GP. Our study shows the extent of variation in PA that must be at least reckoned with in practice and offers a starting point for the design of training sets composed of multiple BPFs.

  14. Estimating parameter and predictive uncertainty when model residuals are correlated, heteroscedastic, and non-Gaussian

    Science.gov (United States)

    Schoups, Gerrit; Vrugt, Jasper A.

    2010-05-01

    Estimation of parameter and predictive uncertainty of hydrologic models usually relies on the assumption of additive residual errors that are independent and identically distributed according to a normal distribution with a mean of zero and a constant variance. Here, we investigate to what extent estimates of parameter and predictive uncertainty are affected when these assumptions are relaxed. Parameter and predictive uncertainty are estimated by Markov chain Monte Carlo sampling from a generalized likelihood function that accounts for correlation, heteroscedasticity, and non-normality of residual errors. Application to rainfall-runoff modeling using daily data from a humid basin reveals that: (i) residual errors are much better described by a heteroscedastic, first-order auto-correlated error model with a Laplacian density characterized by heavier tails than a Gaussian density, and (ii) proper representation of the statistical distribution of residual errors yields tighter predictive uncertainty bands and more physically realistic parameter estimates that are less sensitive to the particular time period used for inference. The latter is especially useful for regionalization and extrapolation of parameter values to ungauged basins. Application to daily rainfall-runoff data from a semi-arid basin shows that allowing skew in the error distribution yields improved estimates of predictive uncertainty when flows are close to zero.
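
    The flexible error model described above can be sketched as a log likelihood over residuals. The version below is a simplified stand-in for the paper's generalized likelihood: the linear heteroscedasticity rule, the AR(1) filter and the pure Laplace density are assumptions made here, whereas the actual formulation also accommodates skew and intermediate kurtosis.

    ```python
    import numpy as np

    def log_likelihood(y_obs, y_sim, phi, sigma0, sigma1):
        """Simplified residual-error log likelihood with:
           - first-order autocorrelation (AR(1) coefficient phi),
           - a heteroscedastic scale growing with simulated flow (sigma0 + sigma1*y_sim),
           - heavier-than-Gaussian (Laplace) innovations.
        """
        e = y_obs - y_sim                          # raw residuals
        a = e[1:] - phi * e[:-1]                   # decorrelated innovations
        scale = sigma0 + sigma1 * y_sim[1:]        # flow-dependent scale
        scale = np.maximum(scale, 1e-12)
        # Laplace log density with location 0 and scale `scale`
        return float(np.sum(-np.log(2.0 * scale) - np.abs(a) / scale))

    # Plugged into an MCMC sampler, a likelihood of this kind replaces the usual
    # Gaussian iid assumption and yields the tighter, more realistic uncertainty
    # bands discussed above.
    ```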

  15. Modeling heterogeneous (co)variances from adjacent-SNP groups improves genomic prediction for milk protein composition traits

    DEFF Research Database (Denmark)

    Gebreyesus, Grum; Lund, Mogens Sandø; Buitenhuis, Albert Johannes

    2017-01-01

    Accurate genomic prediction requires a large reference population, which is problematic for traits that are expensive to measure. Traits related to milk protein composition are not routinely recorded due to costly procedures and are considered to be controlled by a few quantitative trait loci … of large effect. The amount of variation explained may vary between regions, leading to heterogeneous (co)variance patterns across the genome. Genomic prediction models that can efficiently take such heterogeneity of (co)variances into account can result in improved prediction reliability. In this study, we … developed and implemented novel univariate and bivariate Bayesian prediction models, based on estimates of heterogeneous (co)variances for genome segments (BayesAS). Available data consisted of milk protein composition traits measured on cows and de-regressed proofs of total protein yield derived for bulls …

  16. Allometric scaling of population variance with mean body size is predicted from Taylor's law and density-mass allometry.

    Science.gov (United States)

    Cohen, Joel E; Xu, Meng; Schuster, William S F

    2012-09-25

    Two widely tested empirical patterns in ecology are combined here to predict how the variation of population density relates to the average body size of organisms. Taylor's law (TL) asserts that the variance of the population density of a set of populations is a power-law function of the mean population density. Density-mass allometry (DMA) asserts that the mean population density of a set of populations is a power-law function of the mean individual body mass. Combined, DMA and TL predict that the variance of the population density is a power-law function of mean individual body mass. We call this relationship "variance-mass allometry" (VMA). We confirmed the theoretically predicted power-law form and the theoretically predicted parameters of VMA, using detailed data on individual oak trees (Quercus spp.) of Black Rock Forest, Cornwall, New York. These results connect the variability of population density to the mean body mass of individuals.
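
    The combination of the two power laws is a one-line substitution; with generic parameters a, b (Taylor's law) and c, d (density-mass allometry), the predicted variance-mass allometry follows as:

    ```latex
    \[
    \operatorname{Var}(N) = a\,\bigl[\mathbb{E}(N)\bigr]^{b},
    \qquad
    \mathbb{E}(N) = c\,M^{d}
    \;\;\Longrightarrow\;\;
    \operatorname{Var}(N) = a\,c^{\,b}\,M^{\,b d},
    \]
    ```

    so VMA is itself a power law in mean individual body mass M, with exponent bd and coefficient a·c^b.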

  17. Application of the contour method to validate residual stress predictions

    International Nuclear Information System (INIS)

    Welding is the most widespread method employed to join metallic components in nuclear power plants. This is an aggressive process that introduces complex three-dimensional residual stresses of substantial magnitude into engineering components. For safety-critical applications it can be of crucial importance to have an accurate characterisation of the residual stress field present in order to assess plant lifetime and risk of failure. Finite element modelling approaches are being increasingly employed by engineers to predict welding residual stresses. However, such predictions are challenging owing to the innate complexity of the welding process and can give highly variable results. Therefore, it is always desirable to validate residual stress predictions by experimental data. This paper illustrates how the contour method of measuring residual stress can be applied to various weldments in order to provide high quality experimental data. The contour method results are compared with data obtained by other well-established residual stress measurement techniques such as neutron diffraction and slitting methods and show a very satisfactory correlation. (author)

  18. Residual Strength Prediction of Debond Damaged Sandwich Panels

    DEFF Research Database (Denmark)

    Berggreen, Carl Christian

    … propagation and initiation, as these mechanisms govern the overall failure load of the structure. Thus, this presentation will describe the development, validation and application of a FEM-based numerical model for prediction of the residual strength of damaged sandwich panels. The core … This presentation concerns theoretical and experimental prediction of crack propagation and residual strength of debond-damaged sandwich panels. It is evident that in order to achieve highly optimised structures which are able to operate in a stochastic loading environment, damage tolerance … evaluation based on residual strength prediction is needed. Is a given damage critical for the structural integrity, needing imminent repair, or is the damage negligible, so that repair can be postponed to the next inspection? These questions are generally interesting for all types of structures …

  19. Prediction of breeding values and selection responses with genetic heterogeneity of environmental variance

    NARCIS (Netherlands)

    Mulder, H.A.; Bijma, P.; Hill, W.G.

    2007-01-01

    There is empirical evidence that genotypes differ not only in mean, but also in environmental variance of the traits they affect. Genetic heterogeneity of environmental variance may indicate genetic differences in environmental sensitivity. The aim of this study was to develop a general framework

  20. A Mean-Variance Criterion for Economic Model Predictive Control of Stochastic Linear Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik

    2014-01-01

    … , which results in a high operating cost. For this case, a two-stage extension of the mean-variance approach provides the best trade-off between the expected cost and its variance. It is demonstrated that by using a constraint back-off technique in the specific case study, certainty-equivalence EMPC can …

  1. Efeitos da Heterogeneidade de Variância Residual entre Grupos de Contemporâneos na Avaliação Genética de Bovinos de Corte / Effects of Heterogeneity of Residual Variance among Contemporary Groups on Genetic Evaluation of Beef Cattle

    Directory of Open Access Journals (Sweden)

    Roberto Carvalheiro

    2002-07-01

    … variances (R = Iσ²e). Different data sets of postweaning weight gain, adjusted to 345 days, were simulated with and without heterogeneity of residual variance, using a phenotypic variance of 300 kg² and a true heritability of 0.4. A real data set was used to provide the CG and parents related to each animal observation. Results showed that, when high levels of heterogeneity of residual variance were considered, animals were selected from CG with higher variability, especially under intense selection. With respect to prediction consistency, non-parent animals and cows had their predicted breeding values more affected by heterogeneity of residual variance than sires. The weighting factor used reduced, but did not eliminate, the effect of heterogeneity of residual variance. The results of weighted genetic evaluations were similar or superior to those from evaluations that assumed homogeneity of variances. Even when the variances were homogeneous, the weighted genetic evaluations yielded results that were not inferior to those from the usual evaluations, which assumed homogeneity of variances.

  2. Combining specificity determining and conserved residues improves functional site prediction

    Directory of Open Access Journals (Sweden)

    Gelfand Mikhail S

    2009-06-01

    Background: Predicting the location of functionally important sites from protein sequence and/or structure is a long-standing problem in computational biology. Most current approaches make use of sequence conservation, assuming that amino acid residues conserved within a protein family are most likely to be functionally important. Most often, these approaches do not consider many residues that act to define specific sub-functions within a family, or they make no distinction between residues important for function and those more relevant for maintaining structure (e.g. in the hydrophobic core). Many protein families bind and/or act on a variety of ligands, meaning that conserved residues often only bind a common ligand sub-structure or perform general catalytic activities. Results: Here we present a novel method for functional site prediction based on identification of conserved positions, as well as those responsible for determining ligand specificity. We define Specificity-Determining Positions (SDPs) as those occupied by conserved residues within sub-groups of proteins in a family having a common specificity, but differing between groups, and thus likely to account for specific recognition events. We benchmark the approach on enzyme families of known 3D structure with bound substrates, and find that in nearly all families residues predicted by SDPsite are in contact with the bound substrate, and that the addition of SDPs significantly improves functional site prediction accuracy. We apply SDPsite to various families of proteins containing known three-dimensional structures, but lacking clear functional annotations, and discuss several illustrative examples. Conclusions: The results suggest a better means to predict functional details for the thousands of protein structures determined prior to a clear understanding of molecular function.

  3. Development of residual stress prediction model in pipe weldment

    Energy Technology Data Exchange (ETDEWEB)

    Eom, Yun Yong; Lim, Se Young; Choi, Kang Hyeuk; Cho, Young Sam; Lim, Jae Hyuk [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    2002-03-15

    When the Leak Before Break (LBB) concept is applied to high-energy piping of nuclear power plants, residual weld stress is an important variable. The main purpose of this research is to develop a numerical model which can predict residual weld stresses. Firstly, the basic theories needed for numerical analysis of welded parts are described. Before the analysis of the pipe, welding of a flat plate was analyzed and compared. Applying the data of the pipes used, thermal and mechanical analyses were performed to compute the temperature gradient and residual stress distribution. For the thermal analysis, an appropriate heat flux was regarded as the heat source, and convection/radiation heat transfer was considered at the surfaces. The residual stresses were computed from the calculated temperature gradient, and they were compared and verified against results from other research.

  4. A New Approach for Predicting the Variance of Random Decrement Functions

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

    … mean Gaussian distributed processes, the RD functions are proportional to the correlation functions of the processes. If a linear structure is loaded by Gaussian white noise, the modal parameters can be extracted from the correlation functions of the response only. One of the weaknesses of the RD technique is that no consistent approach to estimating the variance of the RD functions is known. Only approximate relations are available, which can only be used under special conditions. The variance of the RD functions contains valuable information about the accuracy of the estimates. Furthermore, the variance … can be used as a basis for a decision about how many time lags from the RD functions should be used in the modal parameter extraction procedure. This paper suggests a new method for estimating the variance of the RD functions. The method is consistent in the sense that the accuracy of the approach …

  5. A New Approach for Predicting the Variance of Random Decrement Functions

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

    1998-01-01

    … mean Gaussian distributed processes, the RD functions are proportional to the correlation functions of the processes. If a linear structure is loaded by Gaussian white noise, the modal parameters can be extracted from the correlation functions of the response only. One of the weaknesses of the RD technique is that no consistent approach to estimating the variance of the RD functions is known. Only approximate relations are available, which can only be used under special conditions. The variance of the RD functions contains valuable information about the accuracy of the estimates. Furthermore, the variance … can be used as a basis for a decision about how many time lags from the RD functions should be used in the modal parameter extraction procedure. This paper suggests a new method for estimating the variance of the RD functions. The method is consistent in the sense that the accuracy of the approach …

  6. Predicting logging residues: an interim equation for Appalachian oak sawtimber

    Science.gov (United States)

    A. Jeff Martin

    1975-01-01

    An equation, using dbh, dbh², bole length, and sawlog height to predict the cubic-foot volume of logging residue per tree, was developed from data collected on 36 mixed oaks in southwestern Virginia. The equation produced reliable results for small sawtimber trees, but additional research is needed for other species, sites, and utilization practices.

  7. Structural changes and out-of-sample prediction of realized range-based variance in the stock market

    Science.gov (United States)

    Gong, Xu; Lin, Boqiang

    2018-03-01

    This paper aims to examine the effects of structural changes on forecasting the realized range-based variance in the stock market. Considering structural changes in variance in the stock market, we develop the HAR-RRV-SC model on the basis of the HAR-RRV model. Subsequently, the HAR-RRV and HAR-RRV-SC models are used to forecast the realized range-based variance of S&P 500 Index. We find that there are many structural changes in variance in the U.S. stock market, and the period after the financial crisis contains more structural change points than the period before the financial crisis. The out-of-sample results show that the HAR-RRV-SC model significantly outperforms the HAR-BV model when they are employed to forecast the 1-day, 1-week, and 1-month realized range-based variances, which means that structural changes can improve out-of-sample prediction of realized range-based variance. The out-of-sample results remain robust across the alternative rolling fixed-window, the alternative threshold value in ICSS algorithm, and the alternative benchmark models. More importantly, we believe that considering structural changes can help improve the out-of-sample performances of most of other existing HAR-RRV-type models in addition to the models used in this paper.
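
    As a sketch of how structural-change information can enter a HAR-type forecasting regression for realized range-based variance, the design below uses the usual daily, weekly and monthly lags and adds one dummy per detected break. The exact HAR-RRV-SC specification is not given in the record, so treating the breaks as intercept (regime-mean) shifts is an assumption made here for illustration.

    ```python
    import numpy as np

    def har_rrv_sc_design(rrv, breakpoints):
        """Build the regressor matrix for a HAR-style model of realized range-based
        variance with regime-shift (structural change) dummies.

        rrv:         (T,) daily realized range-based variance series
        breakpoints: indices of detected structural changes (e.g. from an ICSS scan)
        Returns X and y, aligned so that rrv[t+1] is predicted from information at t.
        """
        rrv = np.asarray(rrv, dtype=float)
        T = len(rrv)
        t_idx = np.arange(21, T - 1)
        daily = rrv[21:T - 1]                                        # lagged daily value
        weekly = np.array([rrv[t - 4:t + 1].mean() for t in t_idx])  # 5-day average
        monthly = np.array([rrv[t - 21:t + 1].mean() for t in t_idx])  # ~22-day average
        X = [np.ones_like(daily), daily, weekly, monthly]
        for bp in breakpoints:                    # one dummy per regime shift
            X.append((t_idx >= bp).astype(float))
        X = np.column_stack(X)
        y = rrv[22:T]                             # one-step-ahead target
        return X, y

    # Coefficients can then be estimated by OLS, e.g. np.linalg.lstsq(X, y, rcond=None).
    ```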

  8. ResBoost: characterizing and predicting catalytic residues in enzymes

    Directory of Open Access Journals (Sweden)

    Freund Yoav

    2009-06-01

    Background: Identifying the catalytic residues in enzymes can aid in understanding the molecular basis of an enzyme's function and has significant implications for designing new drugs, identifying genetic disorders, and engineering proteins with novel functions. Since experimentally determining catalytic sites is expensive, better computational methods for identifying catalytic residues are needed. Results: We propose ResBoost, a new computational method to learn characteristics of catalytic residues. The method effectively selects and combines rules of thumb into a simple, easily interpretable logical expression that can be used for prediction. We formally define the rules of thumb that are often used to narrow the list of candidate residues, including residue evolutionary conservation, 3D clustering, solvent accessibility, and hydrophilicity. ResBoost builds on two methods from machine learning, the AdaBoost algorithm and Alternating Decision Trees, and provides precise control over the inherent trade-off between sensitivity and specificity. We evaluated ResBoost using cross-validation on a dataset of 100 enzymes from the hand-curated Catalytic Site Atlas (CSA). Conclusion: ResBoost achieved 85% sensitivity for a 9.8% false positive rate and 73% sensitivity for a 5.7% false positive rate. ResBoost reduces the number of false positives by up to 56% compared to the use of evolutionary conservation scoring alone. We also illustrate the ability of ResBoost to identify recently validated catalytic residues not listed in the CSA.

  9. Predicting the residual aluminum level in water treatment process

    OpenAIRE

    J. Tomperi; M. Pelo; K. Leiviskä

    2013-01-01

    In water treatment processes, aluminum salts are widely used as a coagulation chemical. A high dose of aluminum has been proved to be at least a minor health risk, and some evidence points out that aluminum could increase the risk of Alzheimer's disease. Thus it is important to minimize the amount of residual aluminum in drinking water and in water used in the food industry. In this study, the data of a water treatment plant (WTP) were analyzed and the residual aluminum in drinking water was predicted using …

  10. Predicting the residual aluminum level in water treatment process

    OpenAIRE

    J. Tomperi; M. Pelo; K. Leiviskä

    2012-01-01

    In water treatment processes, aluminum salts are widely used as a coagulation chemical. A high dose of aluminum has been proved to be at least a minor health risk, and some evidence points out that aluminum could increase the risk of Alzheimer's disease; thus it is important to minimize the amount of residual aluminum in drinking water and in water used in the food industry. In this study, the data of a water treatment plant (WTP) were analyzed and the residual aluminum in drinking water was predicted using …

  11. Prediction of residual stress using explicit finite element method

    Directory of Open Access Journals (Sweden)

    W.A. Siswanto

    2015-12-01

    Full Text Available This paper presents the residual stress behaviour under various friction coefficients and scratching displacement amplitudes. The investigation is based on a numerical solution using the explicit finite element method under quasi-static conditions. Two different aeroengine materials, i.e. Super CMV (Cr-Mo-V) and the titanium alloy Ti-6Al-4V, are examined. The use of FEM analysis of a plate under normal contact is validated against the Hertzian theoretical solution in terms of contact pressure distributions. The residual stress distributions, along with the normal and shear stresses in the elastic and plastic regimes of the materials, are studied for a simple cylinder-on-flat contact configuration subjected to normal loading, scratching and subsequent unloading. The investigated friction coefficients are 0.3, 0.6 and 0.9, while the scratching displacement amplitudes are 0.05 mm, 0.10 mm and 0.20 mm. It is found that a friction coefficient of 0.6 results in higher residual stress for both materials. Meanwhile, the predicted residual stress is proportional to the scratching displacement amplitude: a higher displacement amplitude results in higher residual stress. Less residual stress is predicted for the Super CMV material than for Ti-6Al-4V because of its higher yield stress and ultimate strength. Super CMV with a friction coefficient of 0.3 and a scratching displacement amplitude of 0.10 mm is recommended for contact engineering applications due to its minimum likelihood of fatigue.

  12. Statistical tests against systematic errors in data sets based on the equality of residual means and variances from control samples: theory and applications.

    Science.gov (United States)

    Henn, Julian; Meindl, Kathrin

    2015-03-01

    Statistical tests are applied for the detection of systematic errors in data sets from least-squares refinements or other residual-based reconstruction processes. Samples of the residuals of the data are tested against the hypothesis that they belong to the same distribution. For this it is necessary that they show the same mean values and variances within the limits given by statistical fluctuations. When the samples differ significantly from each other, they are not from the same distribution within the limits set by the significance level. Therefore they cannot originate from a single Gaussian function in this case. It is shown that a significance cutoff results in exactly this case. Significance cutoffs are still frequently used in charge-density studies. The tests are applied to artificial data with and without systematic errors and to experimental data from the literature.
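
    The procedure described above compares samples of residuals for equality of means and variances. A minimal sketch of that kind of check is shown below, using a Welch t-test for the means and Levene's test for the variances; these particular test statistics are generic choices for illustration and may differ from the ones used in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
residuals_control = rng.normal(0.0, 1.0, size=500)   # control sample of residuals
residuals_sample = rng.normal(0.3, 1.4, size=500)    # sample suspected of systematic error

_, p_mean = stats.ttest_ind(residuals_control, residuals_sample, equal_var=False)
_, p_var = stats.levene(residuals_control, residuals_sample)

alpha = 0.05
same_distribution = (p_mean > alpha) and (p_var > alpha)
print(f"equal means p={p_mean:.3g}, equal variances p={p_var:.3g}, "
      f"consistent with a single distribution: {same_distribution}")
```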

  13. Bioinformatic prediction and in vivo validation of residue-residue interactions in human proteins

    Science.gov (United States)

    Jordan, Daniel; Davis, Erica; Katsanis, Nicholas; Sunyaev, Shamil

    2014-03-01

    Identifying residue-residue interactions in protein molecules is important for understanding both protein structure and function in the context of evolutionary dynamics and medical genetics. Such interactions can be difficult to predict using existing empirical or physical potentials, especially when residues are far from each other in sequence space. Using a multiple sequence alignment of 46 diverse vertebrate species we explore the space of allowed sequences for orthologous protein families. Amino acid changes that are known to damage protein function allow us to identify specific changes that are likely to have interacting partners. We fit the parameters of the continuous-time Markov process used in the alignment to conclude that these interactions are primarily pairwise, rather than higher order. Candidates for sites under pairwise epistasis are predicted, which can then be tested by experiment. We report the results of an initial round of in vivo experiments in a zebrafish model that verify the presence of multiple pairwise interactions predicted by our model. These experimentally validated interactions are novel, distant in sequence, and are not readily explained by known biochemical or biophysical features.

  14. Improving residue-residue contact prediction via low-rank and sparse decomposition of residue correlation matrix.

    Science.gov (United States)

    Zhang, Haicang; Gao, Yujuan; Deng, Minghua; Wang, Chao; Zhu, Jianwei; Li, Shuai Cheng; Zheng, Wei-Mou; Bu, Dongbo

    2016-03-25

    Strategies for correlation analysis in protein contact prediction often encounter two challenges, namely, the indirect coupling among residues, and the background correlations mainly caused by phylogenetic biases. While various studies have been conducted on how to disentangle indirect coupling, the removal of background correlations still remains unresolved. Here, we present an approach for removing background correlations via low-rank and sparse decomposition (LRS) of a residue correlation matrix. The correlation matrix can be constructed using either local inference strategies (e.g., mutual information, or MI) or global inference strategies (e.g., direct coupling analysis, or DCA). In our approach, a correlation matrix was decomposed into two components, i.e., a low-rank component representing background correlations, and a sparse component representing true correlations. Finally, the residue contacts were inferred from the sparse component of the correlation matrix. We trained our LRS-based method on the PSICOV dataset, and tested it on both GREMLIN and CASP11 datasets. Our experimental results suggested that LRS significantly improves the contact prediction precision. For example, when equipped with the LRS technique, the prediction precision of MI and mfDCA increased from 0.25 to 0.67 and from 0.58 to 0.70, respectively (Top L/10 predicted contacts, sequence separation: 5 AA, dataset: GREMLIN). In addition, our LRS technique also consistently outperforms the popular denoising technique APC (average product correction), on both local (MI_LRS: 0.67 vs MI_APC: 0.34) and global measures (mfDCA_LRS: 0.70 vs mfDCA_APC: 0.67). Interestingly, we found that when equipped with our LRS technique, local inference strategies performed in a manner comparable to global inference strategies, implying that the application of the LRS technique narrowed the performance gap between local and global inference strategies. Overall, our LRS technique greatly facilitates
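
    The central operation in the record above is splitting a residue correlation matrix into a low-rank background part and a sparse part holding candidate contacts. The sketch below shows one simple way to realize such a split (truncated SVD for the low-rank part, soft-thresholding of the remainder for the sparse part); the paper's actual LRS optimization and parameter choices are not reproduced here.

```python
import numpy as np

def low_rank_sparse_split(C, rank=5, tau=0.1):
    """Split a symmetric correlation matrix C into a low-rank (background) and a
    sparse (candidate coupling) component. `rank` and `tau` are illustrative."""
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]          # low-rank background
    R = C - L
    S = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)    # soft-thresholded sparse part
    return L, S

# Contacts would then be ranked by |S[i, j]| for residue pairs with |i - j| >= 5.
```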

  15. CCMpred--fast and precise prediction of protein residue-residue contacts from correlated mutations.

    Science.gov (United States)

    Seemayer, Stefan; Gruber, Markus; Söding, Johannes

    2014-11-01

    Recent breakthroughs in protein residue-residue contact prediction have made reliable de novo prediction of protein structures possible. The key was to apply statistical methods that can distinguish direct couplings between pairs of columns in a multiple sequence alignment from merely correlated pairs, i.e. to separate direct from indirect effects. Two classes of such methods exist, either relying on regularized inversion of the covariance matrix or on pseudo-likelihood maximization (PLM). Although PLM-based methods offer clearly higher precision, available tools are not sufficiently optimized and are written in interpreted languages that introduce additional overheads. This impedes the runtime and large-scale contact prediction for larger protein families, multi-domain proteins and protein-protein interactions. Here we introduce CCMpred, our performance-optimized PLM implementation in C and CUDA C. Using graphics cards in the price range of current six-core processors, CCMpred can predict contacts for typical alignments 35-113 times faster and with the same precision as the most accurate published methods. For users without a CUDA-capable graphics card, CCMpred can also run in a CPU mode that is still 4-14 times faster. Thanks to our speed-ups, contacts for typical protein families can be predicted in 15-60 s on a consumer-grade GPU and 1-6 min on a six-core CPU. CCMpred is free and open-source software under the GNU Affero General Public License v3 (or later) available at https://bitbucket.org/soedinglab/ccmpred. © The Author 2014. Published by Oxford University Press.

  16. VR-BFDT: A variance reduction based binary fuzzy decision tree induction method for protein function prediction.

    Science.gov (United States)

    Golzari, Fahimeh; Jalili, Saeed

    2015-07-21

    In the protein function prediction (PFP) problem, the goal is to predict the functions of numerous well-sequenced proteins whose functions are still not known precisely. PFP is one of the special and complex problems in the machine learning domain, in which a protein (regarded as an instance) may have more than one function simultaneously. Furthermore, the functions (regarded as classes) are dependent and are organized in a hierarchical structure in the form of a tree or a directed acyclic graph. One of the common learning methods proposed for solving this problem is decision trees, in which, by partitioning data into sets with sharp boundaries, small changes in the attribute values of a new instance may cause an incorrect change in the predicted label of the instance and finally misclassification. In this paper, a Variance Reduction based Binary Fuzzy Decision Tree (VR-BFDT) algorithm is proposed to predict the functions of proteins. The algorithm fuzzifies only the decision boundaries instead of converting the numeric attributes into fuzzy linguistic terms. It can assign multiple functions to each protein simultaneously and preserves the hierarchy consistency between functional classes. It uses label variance reduction as the splitting criterion to select the best "attribute-value" at each node of the decision tree. The experimental results show that the overall performance of the proposed algorithm is promising. Copyright © 2015 Elsevier Ltd. All rights reserved.
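
    The splitting criterion named above, label variance reduction, can be written down compactly. The sketch below shows the crisp, single-label version for one candidate split; the actual VR-BFDT additionally fuzzifies the decision boundary and handles hierarchical multi-label targets.

```python
import numpy as np

def variance_reduction(y, mask):
    """Reduction in label variance achieved by splitting labels y with a boolean mask."""
    left, right = y[mask], y[~mask]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    n = len(y)
    return np.var(y) - len(left) / n * np.var(left) - len(right) / n * np.var(right)

y = np.array([0, 0, 0, 1, 1, 1, 1, 0])
split = np.array([True, True, True, False, False, False, False, True])
print(variance_reduction(y, split))   # the best "attribute-value" maximizes this quantity
```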

  17. Predictive hydrogeochemical modelling of bauxite residue sand in field conditions.

    Science.gov (United States)

    Wissmeier, Laurin; Barry, David A; Phillips, Ian R

    2011-07-15

    The suitability of residue sand (the coarse fraction remaining from Bayer's process of bauxite refining) for constructing the surface cover of closed bauxite residue storage areas was investigated. Specifically, its properties as a medium for plant growth are of interest to ensure residue sand can support a sustainable ecosystem following site closure. The geochemical evolution of the residue sand under field conditions, its plant nutrient status and soil moisture retention were studied by integrated modelling of geochemical and hydrological processes. For the parameterization of mineral reactions, amounts and reaction kinetics of the mineral phases natron, calcite, tricalcium aluminate, sodalite, muscovite and analcime were derived from measured acid neutralization curves. The effective exchange capacity for ion adsorption was measured using three independent exchange methods. The geochemical model, which accounts for mineral reactions, cation exchange and activity corrected solution speciation, was formulated in the geochemical modelling framework PHREEQC, and partially validated in a saturated-flow column experiment. For the integration of variably saturated flow with multi-component solute transport in heterogeneous 2D domains, a coupling of PHREEQC with the multi-purpose finite-element solver COMSOL was established. The integrated hydrogeochemical model was applied to predict water availability and quality in a vertical flow lysimeter and a cover design for a storage facility using measured time series of rainfall and evaporation from southwest Western Australia. In both scenarios the sand was fertigated and gypsum-amended. Results show poor long-term retention of fertilizer ions and buffering of the pH around 10 for more than 5 y of leaching. It was concluded that fertigation, gypsum amendment and rainfall leaching alone were insufficient to render the geochemical conditions of residue sand suitable for optimal plant growth within the given timeframe. The

  18. Demographic Factors and Hospital Size Predict Patient Satisfaction Variance: Implications for Hospital Value-Based Purchasing

    Science.gov (United States)

    McFarland, Daniel C.; Ornstein, Katherine; Holcombe, Randall F.

    2016-01-01

    Background: Hospital Value-Based Purchasing (HVBP) incentivizes quality-performance-based healthcare by linking payments directly to patient satisfaction scores obtained from Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) surveys. Lower HCAHPS scores appear to cluster in heterogeneous, densely populated areas and could bias CMS reimbursement. Objective: Assess nonrandom variation in patient satisfaction as determined by HCAHPS. Design: Multivariate regression modeling was performed for individual dimensions of HCAHPS and aggregate scores. Standardized partial regression coefficients assessed the strengths of predictors. A Weighted Individual (hospital) Patient Satisfaction Adjusted Score (WIPSAS) utilized four highly predictive variables, and hospitals were re-ranked accordingly. Setting: 3,907 HVBP-participating hospitals. Patients: 934,800 patient surveys, by the most conservative estimate. Measurements: 3,144 county demographics (U.S. Census) and HCAHPS. Results: Hospital size and primary language ('non-English speaking') most strongly predicted unfavorable HCAHPS scores, while education and white ethnicity most strongly predicted favorable HCAHPS scores. The average adjusted patient satisfaction scores calculated by WIPSAS approximated the national average of HCAHPS scores. However, WIPSAS changed hospital rankings by variable amounts depending on the strength of the predictive variables in the hospitals' locations. Structural and demographic characteristics that predict lower scores were accounted for by WIPSAS, which also improved the rankings of many safety-net hospitals and academic medical centers in diverse areas. Conclusions: Demographic and structural factors (e.g., hospital beds) predict patient satisfaction scores even after CMS adjustments. CMS should consider WIPSAS or a similar adjustment to account for the severity of patient satisfaction inequities that hospitals could strive to correct. PMID:25940305

  19. Importance of the macroeconomic variables for variance prediction: A GARCH-MIDAS approach

    DEFF Research Database (Denmark)

    Asgharian, Hossein; Hou, Ai Jun; Javed, Farrukh

    2013-01-01

    This paper aims to examine the role of macroeconomic variables in forecasting the return volatility of the US stock market. We apply the GARCH-MIDAS (Mixed Data Sampling) model to examine whether information contained in macroeconomic variables can help to predict short-term and long-term components...
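
    GARCH-MIDAS-type models combine a short-term GARCH component with a long-term component driven by a MIDAS-weighted sum of low-frequency (e.g. monthly macroeconomic) observations. The sketch below illustrates beta-polynomial lag weights and a long-run component of that form; the exact parameterization used in the paper is an assumption here, not quoted from it.

```python
import numpy as np

def beta_weights(K, omega1=1.0, omega2=5.0):
    """Normalized beta-polynomial MIDAS weights over K low-frequency lags."""
    k = np.arange(1, K + 1) / (K + 1)          # k/(K+1) keeps endpoint weights nonzero
    w = k ** (omega1 - 1) * (1 - k) ** (omega2 - 1)
    return w / w.sum()

def long_run_component(macro_lags, m=0.0, theta=0.5, omega1=1.0, omega2=5.0):
    """Long-term variance component from K lagged macro observations (illustrative)."""
    w = beta_weights(len(macro_lags), omega1, omega2)
    return np.exp(m + theta * np.sum(w * macro_lags))   # exp() keeps the component positive

tau = long_run_component(np.random.default_rng(0).normal(size=36))
```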

  20. Random regression models with different residual variance structures for describing litter size in swine

    Directory of Open Access Journals (Sweden)

    Aderbal Cavalcante-Neto

    2011-12-01

    Full Text Available The objective of this work was to compare random regression models with different residual variance structures, in order to find the best modeling for the trait litter size at birth (LSB) in swine. A total of 1,701 LSB records were analyzed by means of a single-trait random regression animal model. The fixed and random regressions were represented by continuous functions of parity order, fitted with third-order Legendre orthogonal polynomials. To determine the best modeling of the residual variance, heterogeneity of variance was considered through 1 to 7 classes of residual variance. The general model of analysis included contemporary group as a fixed effect; fixed regression coefficients to model the mean trajectory of the population; random regression coefficients for the direct additive genetic, common litter and animal permanent environmental effects; and the random residual effect. The likelihood ratio test, the Akaike information criterion and the Schwarz Bayesian information criterion indicated the model assuming homogeneity of residual variance as the one providing the best fit to the data. The heritability estimates were close to zero (0.002 to 0.006). The permanent environmental effect increased from the 1st (0.06) to the 5th (0.28) parity, but decreased from that point to the 7th parity (0.18). The common litter effect showed low values (0.01 to 0.02). Assuming homogeneity of residual variance was more adequate for modeling the variances associated with the trait litter size at birth in this data set.

  1. A longitudinal study on dual-tasking effects on gait: cognitive change predicts gait variance in the elderly.

    Science.gov (United States)

    MacAulay, Rebecca K; Brouillette, Robert M; Foil, Heather C; Bruce-Keller, Annadora J; Keller, Jeffrey N

    2014-01-01

    Neuropsychological abilities have been found to explain a large proportion of the variance in objective measures of walking gait that predict both dementia and falls in the elderly. However, to date there has been little research on the interplay between changes in these neuropsychological processes and walking gait over time. To our knowledge, the present study is the first to investigate intra-individual changes in neurocognitive test performance and gait step time at two time points across a one-year span. Neuropsychological test scores from 440 elderly individuals deemed cognitively normal at Year One were analyzed via repeated-measures t-tests to assess for decline in cognitive performance at Year Two. The neuropsychological test performance of 34 of these 440 individuals significantly declined at Year Two, whereas the "non-decliners" displayed improved memory, working memory, and attention/processing speed test performance. Neuropsychological test scores were also submitted to factor analysis at both time points for data reduction purposes and to assess factor stability over time. Results at Year One yielded a three-factor solution: Language/Memory, Executive Attention/Processing Speed, and Working Memory. Year Two's test scores also generated a three-factor solution (Working Memory, Language/Executive Attention/Processing Speed, and Memory). Notably, language measures loaded on Executive Attention/Processing Speed rather than on the Memory factor at Year Two. Hierarchical multiple regression revealed that both Executive Attention/Processing Speed and sex significantly predicted variance in dual-task step time at both time points. Remarkably, in the "decliners", the magnitude of the contribution of the neuropsychological characteristics to gait variance significantly increased at Year Two. In summary, this study provides longitudinal evidence of the dynamic relationship between intra-individual cognitive change and its influence on dual-task gait step time. These

  2. Leptonic Dirac CP violation predictions from residual discrete symmetries

    Directory of Open Access Journals (Sweden)

    I. Girardi

    2016-01-01

    Full Text Available Assuming that the observed pattern of 3-neutrino mixing is related to the existence of a (lepton) flavour symmetry, corresponding to a non-Abelian discrete symmetry group Gf, and that Gf is broken to specific residual symmetries Ge and Gν of the charged lepton and neutrino mass terms, we derive sum rules for the cosine of the Dirac phase δ of the neutrino mixing matrix U. The residual symmetries considered are: (i) Ge = Z2 and Gν = Zn, n > 2, or Zn × Zm, n, m ≥ 2; (ii) Ge = Zn, n > 2, or Zn × Zm, n, m ≥ 2, and Gν = Z2; (iii) Ge = Z2 and Gν = Z2; (iv) Ge fully broken and Gν = Zn, n > 2, or Zn × Zm, n, m ≥ 2; and (v) Ge = Zn, n > 2, or Zn × Zm, n, m ≥ 2, and Gν fully broken. For given Ge and Gν, the sum rules for cos δ thus derived are exact, within the approach employed, and are valid, in particular, for any Gf containing Ge and Gν as subgroups. We identify the cases when the value of cos δ cannot be determined, or cannot be uniquely determined, without making additional assumptions on unconstrained parameters. In a large class of the cases considered the value of cos δ can be unambiguously predicted once the flavour symmetry Gf is fixed. We present predictions for cos δ in these cases for the flavour symmetry groups Gf = S4, A4, T′ and A5, requiring that the measured values of the 3-neutrino mixing parameters sin²θ12, sin²θ13 and sin²θ23, taking into account their respective 3σ uncertainties, are successfully reproduced.

  3. Prediction of residual metabolic activity after treatment in NSCLC patients

    International Nuclear Information System (INIS)

    Rios Velazquez, Emmanuel; Aerts, Hugo J.W.L.; Oberije, Cary; Ruysscher, Dirk De; Lambin, Philippe

    2010-01-01

    Purpose. Metabolic response assessment is often used as a surrogate of local failure and survival. Early identification of patients with residual metabolic activity is essential as this enables selection of patients who could potentially benefit from additional therapy. We report on the development of a pre-treatment prediction model for metabolic response using patient, tumor and treatment factors. Methods. One hundred and one patients with inoperable NSCLC (stage I-IV), treated with 3D conformal radical (chemo)-radiotherapy, were retrospectively included in this study. All patients received pre- and post-radiotherapy fluorodeoxyglucose positron emission tomography-computed tomography (FDG-PET-CT) scans. The electronic medical record system and the medical patient charts were reviewed to obtain demographic, clinical, tumor and treatment data. The primary outcome measure was examined using a metabolic response assessment on the post-radiotherapy FDG-PET-CT scan. Radiotherapy was delivered in fractions of 1.8 Gy, twice a day, with a median prescribed dose of 60 Gy. Results. Overall survival was worse in patients with residual metabolically active areas compared with the patients with a complete metabolic response (p=0.0001). In univariate analysis, three variables were significantly associated with residual disease: larger primary gross tumor volume (GTVprimary, p=0.002), higher pre-treatment maximum standardized uptake value (SUVmax, p=0.0005) in the primary tumor and shorter overall treatment time (OTT, p=0.046). A multivariate model including GTVprimary, SUVmax, equivalent radiation dose at 2 Gy corrected for time (EQD2,T) and OTT yielded an area under the curve, assessed by leave-one-out cross validation, of 0.71 (95% CI, 0.65-0.76). Conclusion. Our results confirmed the validity of metabolic response assessment as a surrogate of survival. We developed a multivariate model that is able to identify patients at risk of residual disease. These patients may benefit from

  4. Cortisol and politics: variance in voting behavior is predicted by baseline cortisol levels.

    Science.gov (United States)

    French, Jeffrey A; Smith, Kevin B; Alford, John R; Guck, Adam; Birnie, Andrew K; Hibbing, John R

    2014-06-22

    Participation in electoral politics is affected by a host of social and demographic variables, but there is growing evidence that biological predispositions may also play a role in behavior related to political involvement. We examined the role of individual variation in hypothalamic-pituitary-adrenal (HPA) stress axis parameters in explaining differences in self-reported and actual participation in political activities. Self-reported political activity, religious participation, and verified voting activity in U.S. national elections were collected from 105 participants, who were subsequently exposed to a standardized (nonpolitical) psychosocial stressor. We demonstrated that lower baseline salivary cortisol in the late afternoon was significantly associated with increased actual voting frequency in six national elections, but not with self-reported non-voting political activity. Baseline cortisol predicted significant variation in voting behavior above and beyond variation accounted for by traditional demographic variables (particularly age of participant in our sample). Participation in religious activity was weakly (and negatively) associated with baseline cortisol. Our results suggest that HPA-mediated characteristics of social, cognitive, and emotional processes may exert an influence on a trait as complex as voting behavior, and that cortisol is a better predictor of actual voting behavior, as opposed to self-reported political activity. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. finite element model for predicting residual stresses in shielded

    African Journals Online (AJOL)

    eobe

    Diffractometer (XRD 6000). From the Finite Element Model Simulation, the transverse residual stress in the x ... Keywords: Residual stress, 3D FEM, Shielded manual metal arc welding, Low Carbon Steel (ASTM A36), X-Ray diffraction, degree of ...

  6. Residual pulmonary vasodilative reserve predicts outcome in idiopathic pulmonary hypertension.

    Science.gov (United States)

    Leuchte, Hanno H; Baezner, Carlos; Baumgartner, Rainer A; Muehling, Olaf; Neurohr, Claus; Behr, Juergen

    2015-06-01

    Idiopathic pulmonary arterial hypertension (IPAH) remains a devastating and incurable, albeit treatable, condition. Treatment response is not uniform, and parameters that help to anticipate a rather benign or a malignant course of the disease are warranted. Acute pulmonary vasoreactivity testing during right heart catheterisation is recommended to identify the minority of patients with IPAH with a sustained response to calcium channel blocker therapy. This study aimed to evaluate the prognostic significance of a residual pulmonary vasodilative reserve in patients with IPAH not meeting current vasoresponder criteria. Observational right heart catheter study in 66 patients with IPAH not meeting current vasoresponse criteria. Pulmonary vasodilative reserve was assessed by inhalation of 5 µg iloprost-aerosol. Sixty-six of 72 patients with IPAH did not meet the current definition criteria during vasodilator testing to assess pulmonary vasodilatory reserve. In these patients, iloprost-aerosol caused a reduction of mean pulmonary artery pressure (Δ pulmonary artery pressure −11.4%; p<0.001) and increased cardiac output (Δ cardiac output +16.7%; p<0.001), resulting in a reduction of pulmonary vascular resistance (Δ pulmonary vascular resistance −25%; p<0.001). The magnitude of this response was more pronounced in surviving patients. A pulmonary vascular resistance reduction of ≥30% was found to predict outcome in patients with IPAH. Residual pulmonary vasodilative reserve during acute vasodilator testing is of prognostic relevance in patients with IPAH not meeting current definitions of acute vasoreactivity; vasoreactivity testing therefore holds more information than is currently used. Published by the BMJ Publishing Group Limited.

  7. Confidence Interval Approximation For Treatment Variance In ...

    African Journals Online (AJOL)

    In a random effects model with a single factor, variation is partitioned into two components: the residual error variance and the treatment variance. While a confidence interval can be constructed for the residual error variance, it is not possible to construct an exact confidence interval for the treatment variance. This is because the treatment ...

  8. CSmetaPred: a consensus method for prediction of catalytic residues.

    Science.gov (United States)

    Choudhary, Preeti; Kumar, Shailesh; Bachhawat, Anand Kumar; Pandit, Shashi Bhushan

    2017-12-22

    Knowledge of catalytic residues can play an essential role in elucidating the mechanistic details of an enzyme. However, experimental identification of catalytic residues is a tedious and time-consuming task, which can be expedited by computational predictions. Despite significant developments in active-site prediction methods, one remaining issue is the ranking of putative catalytic residues among all ranked residues. In order to improve the ranking of catalytic residues and their prediction accuracy, we have developed a meta-approach based method, CSmetaPred. In this approach, residues are ranked based on the mean of normalized residue scores derived from four well-known catalytic residue predictors. The mean residue score of CSmetaPred is combined with predicted pocket information to improve prediction performance in the meta-predictor CSmetaPred_poc. Both meta-predictors are evaluated on two comprehensive benchmark datasets and three legacy datasets using Receiver Operating Characteristic (ROC) and Precision Recall (PR) curves. The visual and quantitative analysis of ROC and PR curves shows that the meta-predictors outperform their constituent methods and that CSmetaPred_poc is the best of the evaluated methods. For instance, on the CSAMAC dataset CSmetaPred_poc (CSmetaPred) achieves the highest Mean Average Specificity (MAS), a scalar measure for the ROC curve, of 0.97 (0.96). Importantly, the median predicted rank of catalytic residues is the lowest (best) for CSmetaPred_poc. Considering residues ranked ≤20 as classified true positives in binary classification, CSmetaPred_poc achieves a prediction accuracy of 0.94 on the CSAMAC dataset. Moreover, on the same dataset CSmetaPred_poc predicts all catalytic residues within the top 20 ranks for ~73% of enzymes. Furthermore, benchmarking of predictions on comparative modelled structures showed that models result in better predictions than sequence-based predictions alone. These analyses suggest that CSmetaPred_poc is able to rank putative catalytic

  9. Predictive modeling and multi-objective optimization of machining-induced residual stresses: Investigation of machining parameter effects

    Science.gov (United States)

    Ulutan, Durul

    2013-01-01

    In the aerospace industry, titanium and nickel-based alloys are frequently used for critical structural components, especially because of their higher strength at both low and high temperatures and their higher resistance to wear and chemical degradation. However, because of their unfavorable thermal properties, deformation and friction-induced microstructural changes prevent the end products from having good surface integrity. In addition to surface roughness, microhardness changes, and microstructural alterations, the machining-induced residual stress profiles of titanium and nickel-based alloys contribute to the surface integrity of these products. Therefore, it is essential to create a comprehensive method that predicts the residual stress outcomes of machining processes, and to understand how machining parameters (cutting speed, uncut chip thickness, depth of cut, etc.) or tool parameters (tool rake angle, cutting edge radius, tool material/coating, etc.) affect machining-induced residual stresses. Since experiments involve a certain amount of measurement error, physics-based simulation experiments should also involve an uncertainty in the predicted values, and a rich set of simulation experiments is utilized to create expected values and variances for the predictions. As the first part of this research, a method to determine the friction coefficients during machining from practical experiments was introduced. Using these friction coefficients, finite element-based simulation experiments were utilized to determine the flow stress characteristics of the materials and then to predict the machining-induced forces and residual stresses, and the results were validated using the experimental findings. A sensitivity analysis on the numerical parameters was conducted to understand the effect of changing physical and numerical parameters, increasing the confidence in the selected parameters, and the effect of machining parameters on machining-induced forces and residual

  10. Logging utilization research in the Pacific Northwest: residue prediction and unique research challenges

    Science.gov (United States)

    Erik C. Berg; Todd A. Morgan; Eric A. Simmons; Stanley J. Zarnoch

    2015-01-01

    Logging utilization research results have informed land managers of changes in the utilization of forest growing stock for more than 40 years. The logging utilization residue ratio (growing-stock residue volume divided by mill-delivered volume) can be applied to historic or projected timber harvest volumes to predict woody residue volumes at varied spatial scales. Researchers at the...

  11. Prediction of amino acid residues protected from hydrogen-deuterium exchange in a protein chain.

    Science.gov (United States)

    Dovidchenko, N V; Lobanov, M Yu; Garbuzynskiy, S O; Galzitskaya, O V

    2009-08-01

    We have investigated the possibility to predict protection of amino acid residues from hydrogen-deuterium exchange. A database containing experimental hydrogen-deuterium exchange data for 14 proteins for which these data are known has been compiled. Different structural parameters related to flexibility of amino acid residues and their amide groups have been analyzed to answer the question whether these parameters can be used for predicting the protection of amino acid residues from hydrogen-deuterium exchange. A method for prediction of protection of amino acid residues, which uses only the amino acid sequence of a protein, has been elaborated.

  12. Prediction of residue-residue contact matrix for protein-protein interaction with Fisher score features and deep learning.

    Science.gov (United States)

    Du, Tianchuan; Liao, Li; Wu, Cathy H; Sun, Bilin

    2016-11-01

    Protein-protein interactions play essential roles in many biological processes. Acquiring knowledge of the residue-residue contact information of two interacting proteins is not only helpful in annotating functions for proteins, but also critical for structure-based drug design. Prediction of the protein residue-residue contact matrix of the interfacial regions is challenging. In this work, we introduced deep learning techniques (specifically, stacked autoencoders) to build deep neural network models to tackle the residue-residue contact prediction problem. In tandem with interaction profile Hidden Markov Models, which were used first to extract Fisher score features from protein sequences, stacked autoencoders were deployed to extract and learn hidden abstract features. The deep learning model showed significant improvement over the traditional machine learning model, Support Vector Machines (SVM), with the overall accuracy increased by 15% from 65.40% to 80.82%. We showed that the stacked autoencoders could extract novel features out of the Fisher score features, which can be utilized by deep neural networks and other classifiers to enhance learning. It is further shown that deep neural networks have significant advantages over SVM in making use of the newly extracted features. Copyright © 2016. Published by Elsevier Inc.

  13. Estimation of variance components and prediction of breeding values in rubber tree breeding using the REML/BLUP procedure

    Directory of Open Access Journals (Sweden)

    Renata Capistrano Moreira Furlani

    2005-01-01

    Full Text Available The present paper deals with estimation of variance components, prediction of breeding values and selection in a population of rubber tree [Hevea brasiliensis (Willd. ex Adr. de Juss.) Müell.-Arg.] from Rio Branco, State of Acre, Brazil. The REML/BLUP (restricted maximum likelihood/best linear unbiased prediction) procedure was applied. For this purpose, 37 rubber tree families were obtained and assessed in a randomized complete block design, with three unbalanced replications. The field trial was carried out at the Experimental Station of UNESP, located in Selvíria, State of Mato Grosso do Sul, Brazil. The quantitative traits evaluated were: girth (G), bark thickness (BT), number of latex vessel rings (NR), and plant height (PH). Given the unbalanced condition of the progeny test, the REML/BLUP procedure was used for estimation. The narrow-sense individual heritability estimates were 0.43 for G, 0.18 for BT, 0.01 for NR, and 0.51 for PH. Two selection strategies were adopted: one short-term (ST, selection intensity of 8.85%) and the other long-term (LT, selection intensity of 26.56%). For G, the estimated genetic gains in relation to the population average were 26.80% and 17.94%, respectively, according to the ST and LT strategies. The effective population sizes were 22.35 and 46.03, respectively. The LT and ST strategies maintained 45.80% and 28.24%, respectively, of the original genetic diversity represented in the progeny test. So, it can be inferred that this population has potential for both breeding and ex situ genetic conservation as a supplier of genetic material for advanced rubber tree breeding programs.
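
    BLUP breeding values such as those discussed above are obtained by solving Henderson's mixed model equations once the variance components have been estimated (by REML in this record). The sketch below shows the standard single-trait equations with known variance components; it is a generic illustration rather than the unbalanced multi-effect model of the study.

```python
import numpy as np

def blup(y, X, Z, A, sigma_a2, sigma_e2):
    """Solve Henderson's mixed model equations for fixed effects b and breeding values u.

    y: phenotypes; X: fixed-effect design; Z: random-effect design;
    A: additive (numerator) relationship matrix; variance components assumed known.
    """
    lam = sigma_e2 / sigma_a2
    A_inv = np.linalg.inv(A)
    lhs = np.block([[X.T @ X, X.T @ Z],
                    [Z.T @ X, Z.T @ Z + lam * A_inv]])
    rhs = np.concatenate([X.T @ y, Z.T @ y])
    sol = np.linalg.solve(lhs, rhs)
    return sol[:X.shape[1]], sol[X.shape[1]:]   # (fixed effects, breeding values)
```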

  14. Effect of sequence variants on variance in glucose levels predicts type 2 diabetes risk and accounts for heritability.

    Science.gov (United States)

    Ivarsdottir, Erna V; Steinthorsdottir, Valgerdur; Daneshpour, Maryam S; Thorleifsson, Gudmar; Sulem, Patrick; Holm, Hilma; Sigurdsson, Snaevar; Hreidarsson, Astradur B; Sigurdsson, Gunnar; Bjarnason, Ragnar; Thorsson, Arni V; Benediktsson, Rafn; Eyjolfsson, Gudmundur; Sigurdardottir, Olof; Olafsson, Isleifur; Zeinali, Sirous; Azizi, Fereidoun; Thorsteinsdottir, Unnur; Gudbjartsson, Daniel F; Stefansson, Kari

    2017-09-01

    Sequence variants that affect mean fasting glucose levels do not necessarily affect risk for type 2 diabetes (T2D). We assessed the effects of 36 reported glucose-associated sequence variants on between- and within-subject variance in fasting glucose levels in 69,142 Icelanders. The variant in TCF7L2 that increases fasting glucose levels increases between-subject variance (5.7% per allele, P = 4.2 × 10⁻¹⁰), whereas variants in GCK and G6PC2 that increase fasting glucose levels decrease between-subject variance (7.5% per allele, P = 4.9 × 10⁻¹¹ and 7.3% per allele, P = 7.5 × 10⁻¹⁸, respectively). Variants that increase mean and between-subject variance in fasting glucose levels tend to increase T2D risk, whereas those that increase the mean but reduce variance do not (r² = 0.61). The variants that increase between-subject variance increase fasting glucose heritability estimates. Intuitively, our results show that increasing the mean and variance of glucose levels is more likely to cause pathologically high glucose levels than an increase in the mean offset by a decrease in variance.

  15. Prediction of Active Site and Distal Residues in E. coli DNA Polymerase III alpha Polymerase Activity.

    Science.gov (United States)

    Parasuram, Ramya; Coulther, Timothy A; Hollander, Judith M; Keston-Smith, Elise; Ondrechen, Mary Jo; Beuning, Penny J

    2018-02-20

    The process of DNA replication is carried out with high efficiency and accuracy by DNA polymerases. The replicative polymerase in E. coli is DNA Pol III, which is a complex of 10 different subunits that coordinates simultaneous replication on the leading and lagging strands. The 1160-residue Pol III alpha subunit is responsible for the polymerase activity and copies DNA accurately, making one error per 10⁵ nucleotide incorporations. The goal of this research is to determine the residues that contribute to the activity of the polymerase subunit. Homology modeling and the computational methods of THEMATICS and POOL were used to predict functionally important amino acid residues through their computed chemical properties. Site-directed mutagenesis and biochemical assays were used to validate these predictions. Primer extension, steady-state single-nucleotide incorporation kinetics, and thermal denaturation assays were performed to understand the contribution of these residues to the function of the polymerase. This work shows that the top 15 residues predicted by POOL, a set that includes the three previously known catalytic aspartate residues, seven remote residues, plus five previously unexplored first-layer residues, are important for function. Six previously unidentified residues, R362, D405, K553, Y686, E688, and H760, are each essential to Pol III activity; three additional residues, Y340, R390, and K758, play important roles in activity.

  16. Predictive models of forest logging residues of Triplochiton ...

    African Journals Online (AJOL)

    In this study, biomass yield residue was quantified and equations developed for Triplochiton scleroxylon, in secondary forests, Ondo State, Nigeria. Plotless sampling technique was used for the study. A total of 31 Triplochiton scleroxylon were randomly selected. Tree identification and detailed growing stock of outside bark ...

  17. Factors that predict residual tumors in re-TUR patients

    African Journals Online (AJOL)

    H. Türk

    2015-11-30


  18. Prediction of protein-protein binding site by using core interface residue and support vector machine

    Directory of Open Access Journals (Sweden)

    Sun Zhonghua

    2008-12-01

    Full Text Available Abstract Background The prediction of protein-protein binding site can provide structural annotation to the protein interaction data from proteomics studies. This is very important for the biological application of the protein interaction data that is increasing rapidly. Moreover, methods for predicting protein interaction sites can also provide crucial information for improving the speed and accuracy of protein docking methods. Results In this work, we describe a binding site prediction method by designing a new residue neighbour profile and by selecting only the core-interface residues for SVM training. The residue neighbour profile includes both the sequential and the spatial neighbour residues of an interface residue, which is a more complete description of the physical and chemical characteristics surrounding the interface residue. The concept of core interface is applied in selecting the interface residues for training the SVM models, which is shown to result in better discrimination between the core interface and other residues. The best SVM model trained was tested on a test set of 50 randomly selected proteins. The sensitivity, specificity, and MCC for the prediction of the core interface residues were 60.6%, 53.4%, and 0.243, respectively. Our prediction results on this test set were compared with other three binding site prediction methods and found to perform better. Furthermore, our method was tested on the 101 unbound proteins from the protein-protein interaction benchmark v2.0. The sensitivity, specificity, and MCC of this test were 57.5%, 32.5%, and 0.168, respectively. Conclusion By improving both the descriptions of the interface residues and their surrounding environment and the training strategy, better SVM models were obtained and shown to outperform previous methods. Our tests on the unbound protein structures suggest further improvement is possible.
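
    The performance figures quoted above (sensitivity, specificity and MCC) are computed from the binary confusion matrix of predicted versus true interface residues. A small helper for these measures is sketched below; it reflects the standard definitions rather than any code from the paper.

```python
import numpy as np

def interface_metrics(y_true, y_pred):
    """Sensitivity, specificity and Matthews correlation coefficient for binary labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    mcc = ((tp * tn) - (fp * fn)) / denom if denom > 0 else 0.0
    return sensitivity, specificity, mcc
```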

  19. Non-"g" Residuals of the SAT and ACT Predict Specific Abilities

    Science.gov (United States)

    Coyle, Thomas R.; Purcell, Jason M.; Snyder, Anissa C.; Kochunov, Peter

    2013-01-01

    This research examined whether non-"g" residuals of the SAT and ACT subtests, obtained after removing g, predicted specific abilities. Non-"g" residuals of the verbal and math subtests of the SAT and ACT were correlated with academic (verbal and math) and non-academic abilities (speed and shop), both based on the Armed Services…

  20. Predicting protein-protein interface residues using local surface structural similarity

    Directory of Open Access Journals (Sweden)

    Jordan Rafael A

    2012-03-01

    Full Text Available Abstract Background Identification of the residues in protein-protein interaction sites has a significant impact in problems such as drug discovery. Motivated by the observation that the set of interface residues of a protein tend to be conserved even among remote structural homologs, we introduce PrISE, a family of local structural similarity-based computational methods for predicting protein-protein interface residues. Results We present a novel representation of the surface residues of a protein in the form of structural elements. Each structural element consists of a central residue and its surface neighbors. The PrISE family of interface prediction methods uses a representation of structural elements that captures the atomic composition and accessible surface area of the residues that make up each structural element. Each of the members of the PrISE methods identifies for each structural element in the query protein, a collection of similar structural elements in its repository of structural elements and weights them according to their similarity with the structural element of the query protein. PrISEL relies on the similarity between structural elements (i.e. local structural similarity). PrISEG relies on the similarity between protein surfaces (i.e. general structural similarity). PrISEC combines local structural similarity and general structural similarity to predict interface residues. These predictors label the central residue of a structural element in a query protein as an interface residue if a weighted majority of the structural elements that are similar to it are interface residues, and as a non-interface residue otherwise. The results of our experiments using three representative benchmark datasets show that the PrISEC outperforms PrISEL and PrISEG; and that PrISEC is highly competitive with state-of-the-art structure-based methods for predicting protein-protein interface residues. Our comparison of PrISEC with PredUs, a recently

  1. Variance Components

    CERN Document Server

    Searle, Shayle R; McCulloch, Charles E

    1992-01-01

    WILEY-INTERSCIENCE PAPERBACK SERIES. The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. ". . .Variance Components is an excellent book. It is organized and well written, and provides many references to a variety of topics. I recommend it to anyone with interest in linear models.".

  2. Weld residual stress predictions in reactor vessel head penetrations

    Energy Technology Data Exchange (ETDEWEB)

    Suanno, Rodolfo L.M.; Ferrari, Lucio D.B. [ELETROBRAS Termonuclear S.A. - ELETRONUCLEAR, Rio de Janeiro, RJ (Brazil). Stress Analysis Department], e-mail: rsuanno@eletronuclear.gov.br; Teughels, Anne; Malekian, Christian [Tractebel Engineering, Brussels (Belgium). Reliability Nuclear Department], e-mail: anne.teughels@tractebel.com

    2009-07-01

    The penetrations in early Pressurized Water Reactor vessels are characterized by Alloy 600 tubes, welded with Alloy 182/82. The Alloy 600 tubes have been shown to be susceptible to PWSCC (Primary Water Stress Corrosion Cracking), which may lead to crack formation. The cracking mechanism is driven mainly by the welding residual stress and, secondarily, by the operational stress in the weld region. It is therefore of great interest to quantify the weld residual stress field correctly. In this paper the weld residual stress field is calculated by finite elements using a sequentially coupled approach, which is well known in the literature. It includes a transient thermal analysis simulating the heating during the multipass welding, followed by a transient thermo-mechanical analysis for the determination of the stresses involved. The welding consists of a sequence of weld beads, each of which is deposited in its entirety, at once, instead of gradually. Central as well as eccentric sidehill nozzles on the vessel head are analyzed in the paper. For the former a 2-dimensional axisymmetric finite element model is used, whereas for the latter a 3-dimensional model is set up. Different positions on the vessel head are compared and the influence of the sidehill effect is illustrated. In the framework of a common project for Angra 1, Tractebel Engineering (Belgium) and ELETRONUCLEAR (Nuclear Utility, Brazil) had the opportunity to compare their analysis methods, which they applied to the Belgian and the Brazilian nuclear reactors, respectively. The global approach in both cases is very similar but is applied to different configurations, specific to each plant. In the article the results of both cases are compared. (author)

  3. Estimating Additive and Non-Additive Genetic Variances and Predicting Genetic Merits Using Genome-Wide Dense Single Nucleotide Polymorphism Markers

    DEFF Research Database (Denmark)

    Su, Guosheng; Christensen, Ole Fredslund; Ostersen, Tage

    2012-01-01

    genetic variation of complex traits. This study presented a genomic BLUP model including additive and non-additive genetic effects, in which additive and non-additive genetic relation matrices were constructed from information on genome-wide dense single nucleotide polymorphism (SNP) markers. In addition... (MAD), and 4) a full model including all three genetic components (MAED). Estimates of narrow-sense heritability were 0.397, 0.373, 0.379 and 0.357 for models MA, MAE, MAD and MAED, respectively. Estimated dominance variance and additive-by-additive epistatic variance accounted for 5.6% and 9.5% of the total phenotypic variance, respectively. Based on model MAED, the estimate of broad-sense heritability was 0.506. Reliabilities of genomic predicted breeding values for the animals without performance records were 28.5%, 28.8%, 29.2% and 29.5% for models MA, MAE, MAD and MAED, respectively. In addition...
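
    The additive genomic relationship matrix used in genomic BLUP models of this kind is commonly built from centred SNP genotypes (a VanRaden-style construction). The sketch below shows that construction; the study additionally builds dominance and additive-by-additive epistatic relationship matrices, which are not shown here and may be constructed differently.

```python
import numpy as np

def additive_g_matrix(M):
    """Additive genomic relationship matrix from an (animals x SNPs) genotype
    matrix coded 0/1/2 (VanRaden-style construction, shown for illustration)."""
    p = M.mean(axis=0) / 2.0                    # allele frequencies
    Z = M - 2.0 * p                             # centre the genotype codes
    return (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))

M = np.random.default_rng(0).integers(0, 3, size=(10, 200))   # toy genotypes
G = additive_g_matrix(M)
```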

  4. Pharmacokinetics, efficacy prediction indexes and residue depletion of antibacterial drugs.

    Directory of Open Access Journals (Sweden)

    Arturo Anadón

    2016-06-01

    Full Text Available The pharmacokinetic behaviour of antibacterials in food-producing animals provides information on the rates of absorption and elimination, the half-life in plasma and tissue, the elimination pathways and the metabolism. The dose and the dosing interval of an antimicrobial can be justified by considering the pharmacokinetic/pharmacodynamic (PK/PD) relationship, if established, as well as the severity of the disease, whereas the number of administrations should be in line with the nature of the disease. The target population for therapy should be well defined and possible to identify under field conditions. Based on in vitro susceptibility data and target animal PK data, an analysis of the PK/PD relationship may be used to support dose regimen selection and interpretation criteria for a clinical breakpoint. Therefore, for all antibacterials with systemic activity, the MIC data collected should be compared with the concentration of the compound at the relevant biophase following administration at the assumed therapeutic dose, as recorded in the pharmacokinetic studies. Currently, the most frequently used parameters to express the PK/PD relationship are Cmax/MIC (maximum serum concentration/MIC), %T > MIC (fraction of time during which the concentration exceeds the MIC) and AUC/MIC (area under the inhibitory concentration-time curve/MIC). Furthermore, the pharmacokinetic parameters provide the first indication of the potential for persistent residues and the tissues in which they may occur. The information on residue depletion in food-producing animals provides the data on which MRL recommendations will be based. A critical factor in the antibacterial medication of all food-producing animals is the mandatory withdrawal period, defined as the time during which the drug must not be administered prior to the slaughter of the animal for consumption. The withdrawal period is an integral part of the regulatory authorities' approval process and is designed to ensure that no
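
    The three PK/PD indices named above can be computed directly from a sampled concentration-time profile and the pathogen's MIC. The sketch below uses trapezoidal integration for the AUC and an interval-based approximation for %T > MIC; the units and sampling scheme are illustrative assumptions.

```python
import numpy as np

def pkpd_indices(t, conc, mic):
    """Cmax/MIC, AUC/MIC and %T > MIC from times t (h) and concentrations conc (e.g. µg/mL)."""
    t, conc = np.asarray(t, float), np.asarray(conc, float)
    cmax_over_mic = conc.max() / mic
    auc = np.sum(0.5 * (conc[1:] + conc[:-1]) * np.diff(t))       # trapezoidal AUC
    midpoints_above = 0.5 * (conc[1:] + conc[:-1]) > mic          # intervals above the MIC
    pct_t_above_mic = 100.0 * np.sum(np.diff(t)[midpoints_above]) / (t[-1] - t[0])
    return cmax_over_mic, auc / mic, pct_t_above_mic

t = np.array([0, 1, 2, 4, 8, 12, 24], dtype=float)
conc = np.array([0.0, 8.0, 6.0, 4.0, 2.0, 1.0, 0.2])
print(pkpd_indices(t, conc, mic=0.5))
```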

  5. SDR: a database of predicted specificity-determining residues in proteins.

    Science.gov (United States)

    Donald, Jason E; Shakhnovich, Eugene I

    2009-01-01

    The specificity-determining residue database (SDR database) presents residue positions where mutations are predicted to have changed protein function in large protein families. Because the database pre-calculates predictions on existing protein sequence alignments, users can quickly find the predictions by selecting the appropriate protein family or searching by protein sequence. Predictions can be used to guide mutagenesis or to gain a better understanding of specificity changes in a protein family. The database is available on the web at http://paradox.harvard.edu/sdr.

  6. RSARF: Prediction of residue solvent accessibility from protein sequence using random forest method

    KAUST Repository

    Ganesan, Pugalenthi

    2012-01-01

    Prediction of protein structure from its amino acid sequence is still a challenging problem. The complete physicochemical understanding of protein folding is essential for the accurate structure prediction. Knowledge of residue solvent accessibility gives useful insights into protein structure prediction and function prediction. In this work, we propose a random forest method, RSARF, to predict residue accessible surface area from protein sequence information. The training and testing was performed using 120 proteins containing 22006 residues. For each residue, buried and exposed state was computed using five thresholds (0%, 5%, 10%, 25%, and 50%). The prediction accuracy for 0%, 5%, 10%, 25%, and 50% thresholds are 72.9%, 78.25%, 78.12%, 77.57% and 72.07% respectively. Further, comparison of RSARF with other methods using a benchmark dataset containing 20 proteins shows that our approach is useful for prediction of residue solvent accessibility from protein sequence without using structural information. The RSARF program, datasets and supplementary data are available at http://caps.ncbs.res.in/download/pugal/RSARF/.
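
    The record above trains a random forest to label residues as buried or exposed at several relative solvent accessibility thresholds. The sketch below reproduces that setup on synthetic features with scikit-learn; the feature encoding, the toy labels and the 25% threshold rule are illustrative assumptions, not the RSARF implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 2000
X = rng.normal(size=(n, 40))                         # hypothetical sequence-window features
rsa = 1.0 / (1.0 + np.exp(-X[:, :5].sum(axis=1)))    # toy relative solvent accessibility
y = (rsa > 0.25).astype(int)                         # "exposed" at the 25% threshold

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```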

  7. Anatomical Variance in Acetabular Anteversion Does Not Predict Hip Fracture Patterns in the Elderly: A Retrospective Study in 135 Patients

    OpenAIRE

    Kamath, Megan Y.; Coleman, Nathan W.; Belkoff, Stephen M.; Mears, Simon C.

    2011-01-01

    It has been suggested that variances in the anatomy of the acetabulum determine the type of hip fracture in elderly patients. Based on this concept, an overly anteverted acetabulum would lead to impingement of the femoral neck against the posterior rim of the acetabulum, causing a femoral neck fracture, whereas with a retroverted acetabulum, external rotation of the hip would be limited by the capsular tissues attached to the trochanteric region, causing a trochanteric fracture. To test the h...

  8. A COSMIC VARIANCE COOKBOOK

    International Nuclear Information System (INIS)

    Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A.

    2011-01-01

    Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance'. This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10¹¹ M_sun is ∼38%, while it is ∼27% for GEMS and ∼12% for COSMOS. For galaxies of m* ∼ 10¹⁰ M_sun, the relative cosmic variance is ∼19% for GOODS, ∼13% for GEMS, and ∼6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic
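
    The key relation quoted above is that, in the linear regime, the relative cosmic variance of a galaxy sample is the galaxy bias times the dark matter cosmic variance. The tiny sketch below encodes that relation, together with one common way of combining it with Poisson noise; the combination step is a generic assumption, not a formula quoted from the paper.

```python
import numpy as np

def galaxy_cosmic_variance(sigma_dm, bias):
    """Relative cosmic variance of a galaxy sample in the linear regime."""
    return bias * sigma_dm

def fractional_count_uncertainty(n_gal, sigma_gal):
    """Poisson noise and cosmic variance added in quadrature (generic assumption)."""
    return np.sqrt(1.0 / n_gal + sigma_gal**2)

sigma_gal = galaxy_cosmic_variance(sigma_dm=0.12, bias=3.0)
print(sigma_gal, fractional_count_uncertainty(n_gal=200, sigma_gal=sigma_gal))
```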

  9. A Multifeatures Fusion and Discrete Firefly Optimization Method for Prediction of Protein Tyrosine Sulfation Residues.

    Science.gov (United States)

    Guo, Song; Liu, Chunhua; Zhou, Peng; Li, Yanling

    2016-01-01

    Tyrosine sulfation is one of the ubiquitous protein posttranslational modifications, in which sulfate groups are added to tyrosine residues. It plays significant roles in various physiological processes in eukaryotic cells. To explore the molecular mechanism of tyrosine sulfation, one of the prerequisites is to correctly identify possible protein tyrosine sulfation residues. In this paper, a novel method is presented to predict protein tyrosine sulfation residues from primary sequences. By means of informative feature construction and an elaborate feature selection and parameter optimization scheme, the proposed predictor achieved promising results and outperformed many other state-of-the-art predictors. Using the optimal feature subset, the proposed method achieved a mean MCC of 94.41% on the benchmark dataset and an MCC of 90.09% on the independent dataset. The experimental performance indicates that the new method could be effective in identifying important protein posttranslational modifications and that the feature selection scheme could be powerful for protein functional residue prediction.

  10. Prediction of detailed enzyme functions and identification of specificity determining residues by random forests.

    Directory of Open Access Journals (Sweden)

    Chioko Nagao

    Full Text Available Determining enzyme functions is essential for a thorough understanding of cellular processes. Although many prediction methods have been developed, it remains a significant challenge to predict enzyme functions at the fourth-digit level of the Enzyme Commission numbers. Functional specificity of enzymes often changes drastically by mutations of a small number of residues and therefore, information about these critical residues can potentially help discriminate detailed functions. However, because these residues must be identified by mutagenesis experiments, the available information is limited, and the lack of experimentally verified specificity determining residues (SDRs has hindered the development of detailed function prediction methods and computational identification of SDRs. Here we present a novel method for predicting enzyme functions by random forests, EFPrf, along with a set of putative SDRs, the random forests derived SDRs (rf-SDRs. EFPrf consists of a set of binary predictors for enzymes in each CATH superfamily and the rf-SDRs are the residue positions corresponding to the most highly contributing attributes obtained from each predictor. EFPrf showed a precision of 0.98 and a recall of 0.89 in a cross-validated benchmark assessment. The rf-SDRs included many residues, whose importance for specificity had been validated experimentally. The analysis of the rf-SDRs revealed both a general tendency that functionally diverged superfamilies tend to include more active site residues in their rf-SDRs than in less diverged superfamilies, and superfamily-specific conservation patterns of each functional residue. EFPrf and the rf-SDRs will be an effective tool for annotating enzyme functions and for understanding how enzyme functions have diverged within each superfamily.
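
    The core idea of deriving rf-SDRs from the most highly contributing attributes can be illustrated with a small sketch: train a random forest to separate two functional classes from one-hot-encoded alignment columns, then rank positions by summed feature importance. The simulated alignment and the encoding below are assumptions, not the published EFPrf feature set.

```python
# Hedged sketch of the rf-SDR idea on simulated data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
AA = list("ACDEFGHIKLMNPQRSTVWY")
n_seq, n_pos = 200, 50
labels = rng.integers(0, 2, n_seq)            # two hypothetical EC sub-classes
aln = rng.choice(AA, size=(n_seq, n_pos))     # random background alignment
aln[labels == 1, 10] = "H"                    # position 10 is class-specific (a planted "SDR")

# one-hot encode each alignment column
X = np.zeros((n_seq, n_pos * len(AA)))
for i in range(n_seq):
    for j in range(n_pos):
        X[i, j * len(AA) + AA.index(aln[i, j])] = 1.0

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, labels)
per_pos = rf.feature_importances_.reshape(n_pos, len(AA)).sum(axis=1)   # importance per position
print("top putative SDR positions:", np.argsort(per_pos)[::-1][:5])
```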

  11. Prediction of Detailed Enzyme Functions and Identification of Specificity Determining Residues by Random Forests

    Science.gov (United States)

    Nagao, Chioko; Nagano, Nozomi; Mizuguchi, Kenji

    2014-01-01

    Determining enzyme functions is essential for a thorough understanding of cellular processes. Although many prediction methods have been developed, it remains a significant challenge to predict enzyme functions at the fourth-digit level of the Enzyme Commission numbers. Functional specificity of enzymes often changes drastically by mutations of a small number of residues and therefore, information about these critical residues can potentially help discriminate detailed functions. However, because these residues must be identified by mutagenesis experiments, the available information is limited, and the lack of experimentally verified specificity determining residues (SDRs) has hindered the development of detailed function prediction methods and computational identification of SDRs. Here we present a novel method for predicting enzyme functions by random forests, EFPrf, along with a set of putative SDRs, the random forests derived SDRs (rf-SDRs). EFPrf consists of a set of binary predictors for enzymes in each CATH superfamily and the rf-SDRs are the residue positions corresponding to the most highly contributing attributes obtained from each predictor. EFPrf showed a precision of 0.98 and a recall of 0.89 in a cross-validated benchmark assessment. The rf-SDRs included many residues, whose importance for specificity had been validated experimentally. The analysis of the rf-SDRs revealed both a general tendency that functionally diverged superfamilies tend to include more active site residues in their rf-SDRs than in less diverged superfamilies, and superfamily-specific conservation patterns of each functional residue. EFPrf and the rf-SDRs will be an effective tool for annotating enzyme functions and for understanding how enzyme functions have diverged within each superfamily. PMID:24416252

  12. Prediction and Optimization of Residual Stresses on Machined Surface and Sub-Surface in MQL Turning

    Science.gov (United States)

    Ji, Xia; Zou, Pan; Li, Beizhi; Rajora, Manik; Shao, Yamin; Liang, Steven Y.

    Residual stress in the machined surface and subsurface is affected by material, machining conditions, and tool geometry, and can significantly affect component life and service quality. Empirical or numerical experiments are commonly used to determine residual stresses, but they are very expensive. The use of minimum quantity lubrication (MQL) has increased in recent years to reduce cost and tool/part handling effort, but its effect on machined-part residual stress, although important, has not been explored. This paper presents a hybrid neural network, trained using Simulated Annealing (SA) and the Levenberg-Marquardt algorithm (LM), to predict the residual stresses in the cutting and radial directions on the surface and within the workpiece after the MQL face turning process. Once the ANN has been trained, an optimization procedure using a Genetic Algorithm (GA) is applied to find the cutting conditions that minimize the surface tensile residual stresses and maximize the compressive residual stresses within the workpiece. The optimization results show that the use of MQL decreases the surface tensile residual stresses and increases the compressive residual stresses within the workpiece.

  13. The prediction of the residual life of electromechanical equipment based on the artificial neural network

    Science.gov (United States)

    Zhukovskiy, Yu L.; Korolev, N. A.; Babanova, I. S.; Boikov, A. V.

    2017-10-01

    This article is devoted to predicting residual life based on an estimate of the technical state of an induction motor. The proposed system increases the accuracy and completeness of diagnostics by using an artificial neural network (ANN), and also identifies and predicts faulty states of electrical equipment in dynamics. The outputs of the proposed system for estimating the technical condition are probability diagrams of the technical state and a quantitative evaluation of the residual life, taking into account electrical, vibrational and indirect parameters as well as detected defects. Based on the evaluation of the technical condition and the prediction of the residual life, a decision is made on the control of the operating and maintenance modes of the electric motors.

  14. Prediction of Welding Deformation and Residual Stress of Stiffened Plates Based on Experiments

    Science.gov (United States)

    Bai, R. X.; Guo, Z. F.; Lei, Z. K.

    2017-12-01

    The thermo-elastic-plastic (TEP) method can accurately predict welding deformation and residual stresses, provided that appropriate heat source parameters are selected. For the two welded joints in the stiffened plate studied in this paper, welding experiments on simple components were carried out and the corresponding welding deformation and residual stresses were measured. Based on these experiments, the corresponding TEP models were established and the heat source parameters were obtained from the experimental data. The comparison between the experimental and numerical results shows that the obtained heat source parameters predict the welding deformation and residual stress of the welded structure well. The heat source parameters were then applied to the TEP model of the stiffened plate. The prediction results show that the T-type fillet welds of the stiffened plate can reduce, to a certain extent, the angular deformation caused by the butt welds. In addition, the heat of the subsequent welds can reduce the residual stresses at the completed welds. This approach not only saves considerable experimental cost and time, but also accurately predicts the welding deformation and residual stresses.

  15. InterMap3D: predicting and visualizing co-evolving protein residues

    DEFF Research Database (Denmark)

    Oliveira, Rodrigo Gouveia; Roque, francisco jose sousa simôes almeida; Wernersson, Rasmus

    2009-01-01

    InterMap3D predicts co-evolving protein residues and plots them on the 3D protein structure. Starting with a single protein sequence, InterMap3D automatically finds a set of homologous sequences, generates an alignment and fetches the most similar 3D structure from the Protein Data Bank (PDB). It can also accept a user-generated alignment. Based on the alignment, co-evolving residues are then predicted using three different methods: Row and Column Weighing of Mutual Information, Mutual Information/Entropy and Dependency. Finally, InterMap3D generates high-quality images of the protein...
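
    One of the co-evolution signals mentioned above, mutual information between alignment columns, is simple to compute. The sketch below uses a toy four-sequence alignment; InterMap3D additionally applies row/column weighting and other corrections that are not reproduced here.

```python
# Minimal mutual-information score between two alignment columns (a toy example).
import math
from collections import Counter

def column_mi(col_a, col_b):
    """Mutual information (in nats) between two aligned columns of equal length."""
    n = len(col_a)
    pa, pb = Counter(col_a), Counter(col_b)
    pab = Counter(zip(col_a, col_b))
    mi = 0.0
    for (a, b), c in pab.items():
        p_ab = c / n
        mi += p_ab * math.log(p_ab / ((pa[a] / n) * (pb[b] / n)))
    return mi

aln = ["AKLD", "AKMD", "GRLE", "GRME"]      # four toy sequences
col = lambda j: [s[j] for s in aln]
print(column_mi(col(0), col(1)))             # columns 0 and 1 co-vary (MI = ln 2)
print(column_mi(col(0), col(2)))             # columns 0 and 2 are independent (MI = 0)
```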

  16. Allometric scaling of population variance with mean body size is predicted from Taylor’s law and density-mass allometry

    Science.gov (United States)

    Cohen, Joel E.; Xu, Meng; Schuster, William S. F.

    2012-01-01

    Two widely tested empirical patterns in ecology are combined here to predict how the variation of population density relates to the average body size of organisms. Taylor’s law (TL) asserts that the variance of the population density of a set of populations is a power-law function of the mean population density. Density–mass allometry (DMA) asserts that the mean population density of a set of populations is a power-law function of the mean individual body mass. Combined, DMA and TL predict that the variance of the population density is a power-law function of mean individual body mass. We call this relationship “variance–mass allometry” (VMA). We confirmed the theoretically predicted power-law form and the theoretically predicted parameters of VMA, using detailed data on individual oak trees (Quercus spp.) of Black Rock Forest, Cornwall, New York. These results connect the variability of population density to the mean body mass of individuals. PMID:23019367
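
    The combination of the two power laws can be written out explicitly; the symbols a, b, c, d below are generic intercepts and exponents, not the fitted Black Rock Forest values.

```latex
% Combining Taylor's law (TL) and density--mass allometry (DMA):
\[
\text{TL:}\quad \operatorname{Var}(N) = a\,[\operatorname{E}(N)]^{b},
\qquad
\text{DMA:}\quad \operatorname{E}(N) = c\,M^{d}
\]
\[
\Rightarrow\quad \operatorname{Var}(N) = a\,(c\,M^{d})^{b} = a\,c^{\,b}\,M^{\,bd}
\quad\text{(variance--mass allometry: a power law in mean body mass } M\text{)}
\]
```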

  17. A simulation methodology of spacer grid residual spring deflection for predictive and interpretative purposes

    International Nuclear Information System (INIS)

    Kim, K. T.; Kim, H. K.; Yoon, K. H.

    1994-01-01

    The in-reactor fuel rod support conditions against fretting wear-induced damage can be evaluated by the spacer grid residual spring deflection. In order to predict the spacer grid residual spring deflection as a function of burnup for various spring designs, a simulation methodology has been developed and implemented in the GRIDFORCE program. The simulation methodology takes into account cladding creep rate, initial spring deflection, initial spring force, and spring force relaxation rate as the key parameters affecting the residual spring deflection. The simulation methodology developed in this study can be utilized as an effective tool for evaluating the capability of a newly designed spacer grid spring to prevent fretting wear-induced damage.

  18. PiRaNhA: A server for the computational prediction of RNA-binding residues in protein sequences

    OpenAIRE

    Murakami, Yoichi; Spriggs, Ruth V; Nakamura, Haruki; Jones, Susan

    2010-01-01

    The PiRaNhA web server is a publicly available online resource that automatically predicts the location of RNA-binding residues (RBRs) in protein sequences. The goal of functional annotation of sequences in the field of RNA binding is to provide predictions of high accuracy that require only small numbers of targeted mutations for verification. The PiRaNhA server uses a support vector machine (SVM), with position-specific scoring matrices, residue interface propensity, predicted residue acces...

  19. Uncertainty Quantification and Comparison of Weld Residual Stress Measurements and Predictions.

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, John R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brooks, Dusty Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-10-01

    In pressurized water reactors, the prevention, detection, and repair of cracks within dissimilar metal welds is essential to ensure proper plant functionality and safety. Weld residual stresses, which are difficult to model and cannot be directly measured, contribute to the formation and growth of cracks due to primary water stress corrosion cracking. Additionally, the uncertainty in weld residual stress measurements and modeling predictions is not well understood, further complicating the prediction of crack evolution. The purpose of this document is to develop methodology to quantify the uncertainty associated with weld residual stress that can be applied to modeling predictions and experimental measurements. Ultimately, the results can be used to assess the current state of uncertainty and to build confidence in both modeling and experimental procedures. The methodology consists of statistically modeling the variation in the weld residual stress profiles using functional data analysis techniques. Uncertainty is quantified using statistical bounds (e.g. confidence and tolerance bounds) constructed with a semi-parametric bootstrap procedure. Such bounds describe the range in which quantities of interest, such as means, are expected to lie as evidenced by the data. The methodology is extended to provide direct comparisons between experimental measurements and modeling predictions by constructing statistical confidence bounds for the average difference between the two quantities. The statistical bounds on the average difference can be used to assess the level of agreement between measurements and predictions. The methodology is applied to experimental measurements of residual stress obtained using two strain relief measurement methods and predictions from seven finite element models developed by different organizations during a round robin study.
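
    A stripped-down version of the bootstrap banding idea is sketched below for the average measurement-minus-prediction difference along a residual stress profile. The data are simulated and the resampling is plain nonparametric; the report's semi-parametric bootstrap on functional-data representations is more involved.

```python
# Hedged sketch: percentile confidence band for the mean measurement-minus-prediction
# difference along a weld residual stress profile, from resampled profiles.
import numpy as np

rng = np.random.default_rng(0)
depth = np.linspace(0.0, 1.0, 25)                       # normalized through-wall depth
true_diff = 30 * np.sin(2 * np.pi * depth)              # MPa, a pretend systematic offset
diffs = true_diff + rng.normal(0, 40, size=(12, 25))    # 12 simulated measurement/prediction pairs

n_boot = 2000
boot_means = np.empty((n_boot, depth.size))
for b in range(n_boot):
    idx = rng.integers(0, diffs.shape[0], diffs.shape[0])   # resample whole profiles with replacement
    boot_means[b] = diffs[idx].mean(axis=0)

lower, upper = np.percentile(boot_means, [2.5, 97.5], axis=0)
# Where the band excludes zero, measurements and predictions disagree on average.
print((lower > 0) | (upper < 0))
```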

  20. Prediction method of seismic residual deformation of caisson quay wall in liquefied foundation

    Science.gov (United States)

    Wang, Li-Yan; Liu, Han-Long; Jiang, Peng-Ming; Chen, Xiang-Xiang

    2011-03-01

    The multi-spring shear mechanism plastic model used in this paper is defined in strain space to simulate pore pressure generation and development in sands under cyclic loading and undrained conditions; the model can also simulate the rotation of principal stresses and the cyclic behavior of anisotropically consolidated sands. Seismic residual deformations of typical caisson quay walls under different engineering situations are analyzed in detail with the plastic model, and an index of liquefaction extent is then applied to describe the regularity of the seismic residual deformation of the caisson quay wall top under different engineering situations. Prediction formulas are derived from regression analysis between the seismic residual deformation of the quay wall top and the extent of liquefaction in the relatively safe backfill sand site. Finally, the rationality and reliability of the prediction methods are validated against the results of 120 g centrifuge shaking table tests, and the comparisons show that reliable seismic residual deformations of a caisson quay can be predicted using appropriate prediction formulas and an appropriate index of liquefaction extent.

  1. Demographic factors and hospital size predict patient satisfaction variance--implications for hospital value-based purchasing.

    Science.gov (United States)

    McFarland, Daniel C; Ornstein, Katherine A; Holcombe, Randall F

    2015-08-01

    Hospital Value-Based Purchasing (HVBP) incentivizes quality performance-based healthcare by linking payments directly to patient satisfaction scores obtained from Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) surveys. Lower HCAHPS scores appear to cluster in heterogeneous population-dense areas and could bias Centers for Medicare & Medicaid Services (CMS) reimbursement. The objective was to assess nonrandom variation in patient satisfaction as determined by HCAHPS. Multivariate regression modeling was performed for individual dimensions of HCAHPS and aggregate scores. Standardized partial regression coefficients assessed the strengths of predictors. A Weighted Individual (hospital) Patient Satisfaction Adjusted Score (WIPSAS) utilized 4 highly predictive variables, and hospitals were reranked accordingly. The study covered 3907 HVBP-participating hospitals, representing at least 934,800 patient surveys by the most conservative estimate, together with demographic data for 3144 counties (US Census) linked to the HCAHPS surveys. Hospital size and primary language (non-English speaking) most strongly predicted unfavorable HCAHPS scores, whereas education and white ethnicity most strongly predicted favorable HCAHPS scores. The average adjusted patient satisfaction scores calculated by WIPSAS approximated the national average of HCAHPS scores. However, WIPSAS changed hospital rankings by variable amounts depending on the strength of the predictive variables in the hospitals' locations. Structural and demographic characteristics that predict lower scores were accounted for by WIPSAS, which also improved the rankings of many safety-net hospitals and academic medical centers in diverse areas. Demographic and structural factors (e.g., hospital beds) predict patient satisfaction scores even after CMS adjustments. CMS should consider WIPSAS or a similar adjustment to account for the severity of patient satisfaction inequities that hospitals could strive to correct. © 2015 Society of Hospital Medicine.

  2. PAIRpred: partner-specific prediction of interacting residues from sequence and structure.

    Science.gov (United States)

    Minhas, Fayyaz ul Amir Afsar; Geiss, Brian J; Ben-Hur, Asa

    2014-07-01

    We present a novel partner-specific protein-protein interaction site prediction method called PAIRpred. Unlike most existing machine learning binding site prediction methods, PAIRpred uses information from both proteins in a protein complex to predict pairs of interacting residues from the two proteins. PAIRpred captures sequence and structure information about residue pairs through pairwise kernels that are used for training a support vector machine classifier. As a result, PAIRpred presents a more detailed model of protein binding, and offers state of the art accuracy in predicting binding sites at the protein level as well as inter-protein residue contacts at the complex level. We demonstrate PAIRpred's performance on Docking Benchmark 4.0 and recent CAPRI targets. We present a detailed performance analysis outlining the contribution of different sequence and structure features, together with a comparison to a variety of existing interface prediction techniques. We have also studied the impact of binding-associated conformational change on prediction accuracy and found PAIRpred to be more robust to such structural changes than existing schemes. As an illustration of the potential applications of PAIRpred, we provide a case study in which PAIRpred is used to analyze the nature and specificity of the interface in the interaction of human ISG15 protein with NS1 protein from influenza A virus. Python code for PAIRpred is available at http://combi.cs.colostate.edu/supplements/pairpred/. © 2013 Wiley Periodicals, Inc.
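
    The notion of a pairwise kernel, a similarity between residue pairs built from a similarity between single residues, can be sketched in a few lines. The symmetrized product form and the RBF base kernel below are common illustrative choices, not PAIRpred's exact kernels or features.

```python
# Hedged sketch of a pairwise kernel over residue pairs.
import numpy as np

def rbf(x, y, gamma=0.5):
    """Base similarity between two single-residue feature vectors."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def pair_kernel(a1, b1, a2, b2):
    """Similarity of residue pair (a1, b1) to pair (a2, b2), symmetric in pair order."""
    return rbf(a1, a2) * rbf(b1, b2) + rbf(a1, b2) * rbf(b1, a2)

rng = np.random.default_rng(0)
res = rng.normal(size=(4, 8))   # four residues described by 8 hypothetical features each
print(pair_kernel(res[0], res[1], res[0], res[1]))   # a pair is most similar to itself
print(pair_kernel(res[0], res[1], res[2], res[3]))   # unrelated pairs score lower
```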

  3. Residual variance structures to estimate covariance functions for weight of Canchim beef cattle

    Directory of Open Access Journals (Sweden)

    Fábio Luiz Buranelo Toral

    2009-11-01

    Full Text Available This study was carried out to evaluate the use of different residual variance structures to estimate covariance functions for the weight of Canchim beef cattle. The covariance functions were estimated by restricted maximum likelihood in an animal model with fixed effects of contemporary group (year and month of birth, and sex), age of dam at calving as a covariate (linear and quadratic effects) and the mean growth trajectory, while the random effects were the direct and maternal additive genetic effects, the individual and maternal permanent environmental effects, and the residual effect. Several structures for the residual variance were evaluated: variance functions of linear up to quintic order, and 1, 5, 10, 15 or 20 age classes. A homogeneous residual variance was not adequate. The quartic residual variance function and the division of the residual variance into 20 classes provided the best fits, and the division into classes was more efficient than the use of functions. Estimates of direct heritability ranged from 0.16 to 0.25 at most of the ages considered, with the highest estimates obtained near 360 days of age and at the end of the studied period. In general, the estimates of direct heritability were similar for the models with homogeneous residual variance, a quartic residual variance function, or 20 age classes. The best description of the residual variances for weight at several ages of Canchim cattle was the one that considered 20 heterogeneous classes. However, since some classes have similar variances, it is possible to group some of them and reduce the number of estimated parameters.

  4. The finite element analysis for prediction of residual stresses induced by shot peening

    International Nuclear Information System (INIS)

    Kim, Cheol; Yang, Won Ho; Sung, Ki Deug; Cho, Myoung Rae; Ko, Myung Hoon

    2000-01-01

    Shot peening is widely used as a surface treatment in which small spherical parts called shots are blasted onto the surface of metallic components with velocities up to 100 m/s. This treatment leads to an improvement of fatigue behavior due to the compressive residual stresses developed, and so it has gained widespread acceptance in the automobile and aerospace industries. The residual stress profile in the surface layer depends on the shot peening parameters, such as shot velocity, shot diameter, coverage, impact angle and material properties, and the only way to confirm this profile is measurement by X-ray diffractometry. Despite its importance to the automobile and aerospace industries, little attention has been devoted to accurate modeling of the process. In this paper, a simulation technique is applied to predict the magnitude and distribution of the residual stress and plastic deformation caused by shot peening with the help of finite element analysis.

  5. Sparse/DCT (S/DCT) two-layered representation of prediction residuals for video coding.

    Science.gov (United States)

    Kang, Je-Won; Gabbouj, Moncef; Kuo, C-C Jay

    2013-07-01

    In this paper, we propose a cascaded sparse/DCT (S/DCT) two-layer representation of prediction residuals, and implement this idea on top of the state-of-the-art high efficiency video coding (HEVC) standard. First, a dictionary is adaptively trained to contain featured patterns of residual signals so that a high portion of energy in a structured residual can be efficiently coded via sparse coding. It is observed that the sparse representation alone is less effective in the R-D performance due to the side information overhead at higher bit rates. To overcome this problem, the DCT representation is cascaded at the second stage. It is applied to the remaining signal to improve coding efficiency. The two representations successfully complement each other. It is demonstrated by experimental results that the proposed algorithm outperforms the HEVC reference codec HM5.0 in the Common Test Condition.
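
    A minimal sketch of the two-layer idea is given below: a sparse approximation over a dictionary first, then a DCT of whatever the sparse layer leaves behind. The random dictionary, the 8x8 block size and the absence of quantization and rate-distortion decisions are simplifying assumptions, not the HEVC-integrated implementation.

```python
# Hedged sketch of a cascaded sparse/DCT residual representation.
import numpy as np
from scipy.fft import dctn, idctn
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
block = rng.normal(size=(8, 8))                   # a prediction-residual block
x = block.ravel()

D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)                    # dictionary with unit-norm atoms (would be trained)

# Layer 1: sparse approximation with a few atoms (side information = atom indices + coefficients).
code = orthogonal_mp(D, x, n_nonzero_coefs=6)
layer1 = D @ code

# Layer 2: DCT of the remaining signal, which would then be quantized and entropy coded.
remainder = (x - layer1).reshape(8, 8)
coeffs = dctn(remainder, norm="ortho")

recon = layer1.reshape(8, 8) + idctn(coeffs, norm="ortho")
print("reconstruction error:", np.max(np.abs(recon - block)))  # ~0 before any quantization
```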

  6. Finite Element Simulation of Shot Peening: Prediction of Residual Stresses and Surface Roughness

    Science.gov (United States)

    Gariépy, Alexandre; Perron, Claude; Bocher, Philippe; Lévesque, Martin

    Shot peening is a surface treatment that consists of bombarding a ductile surface with numerous small and hard particles. Each impact creates localized plastic strains that permanently stretch the surface. Since the underlying material constrains this stretching, compressive residual stresses are generated near the surface. This process is commonly used in the automotive and aerospace industries to improve fatigue life. Finite element analyses can be used to predict residual stress profiles and surface roughness created by shot peening. This study investigates further the parameters and capabilities of a random impact model by evaluating the representative volume element and the calculated stress distribution. Using an isotropic-kinematic hardening constitutive law to describe the behaviour of AA2024-T351 aluminium alloy, promising results were achieved in terms of residual stresses.

  7. RANDOM FUNCTIONS AND INTERVAL METHOD FOR PREDICTING THE RESIDUAL RESOURCE OF BUILDING STRUCTURES

    Directory of Open Access Journals (Sweden)

    Shmelev Gennadiy Dmitrievich

    2017-11-01

    Full Text Available Subject: the possibility of using random functions and the interval prediction method for estimating the residual life of building structures in buildings currently in use. Research objectives: coordination of the ranges of values used to develop predictions with the random functions that characterize the processes being predicted. Materials and methods: the method of random functions and the method of interval prediction were used. Results: in the course of this work, the basic properties of random functions, including the properties of families of random functions, were studied. The coordination of time-varying impacts and loads on building structures is considered from the viewpoint of their influence on structures and the representation of the structures' behavior in the form of random functions. Several models of random functions are proposed for predicting individual parameters of structures, and the scope of application is defined for each of the proposed models. The article notes that the considered forecasting approach has been applied many times at various sites. In addition, the available results allowed the authors to develop a methodology for assessing the technical condition and residual life of building structures in facilities currently in use. Conclusions: the possibility of using random functions and processes for forecasting the residual service life of structures in buildings and engineering constructions was studied, and the possibility of using an interval forecasting approach to estimate changes in the defining parameters of building structures and their technical condition was considered. A comprehensive technique for forecasting the residual life of building structures using the interval approach is proposed.

  8. Prediction of welding residual distortions of large structures using a local/global approach

    International Nuclear Information System (INIS)

    Duan, Y. G.; Bergheau, J. M.; Vincent, Y.; Boitour, F.; Leblond, J. B.

    2007-01-01

    Prediction of welding residual distortions is more difficult than prediction of the microstructure and residual stresses. On the one hand, a fine (often 3D) mesh has to be used in the heat affected zone because of the sharp variations of the thermal, metallurgical and mechanical fields in this region. On the other hand, the whole structure has to be meshed to calculate the residual distortions, and for large structures a fine 3D mesh is impractical because of the computational cost. Numerous methods have been developed to reduce the size of the models. A local/global approach has been proposed to determine the welding residual distortions of large structures. The plastic strains and the microstructure due to welding are assumed to be determinable from a local 3D model that covers only the weld and its vicinity. They are projected as initial strains into a global 3D model that consists of the whole structure and is much coarser in the welded zone than the local model. The residual distortions are then calculated using a simple elastic analysis, which makes this method particularly effective in an industrial context. The aim of this article is to present the principle of the local/global approach, to show the capability of this method in an industrial context, and finally to study the definition of the local model.

  9. An analytical model to predict and minimize the residual stress of laser cladding process

    Science.gov (United States)

    Tamanna, N.; Crouch, R.; Kabir, I. R.; Naher, S.

    2018-02-01

    Laser cladding is one of the advanced thermal techniques used to repair or modify the surface properties of high-value components such as tools, military and aerospace parts. Unfortunately, tensile residual stresses are generated in the thermally treated area during this process. This work investigates the key factors behind the formation of tensile residual stress and how to minimize it in the clad when dissimilar substrate and clad materials are used. To predict the tensile residual stress, a one-dimensional analytical model has been adopted. Four cladding materials (Al2O3, TiC, TiO2, ZrO2) on an H13 tool steel substrate and a range of substrate preheating temperatures, from 300 to 1200 K, have been investigated. Thermal strain and Young's modulus are found to be the key factors in the formation of tensile residual stresses. Additionally, it is found that preheating the substrate immediately before laser cladding reduces the residual stress.
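
    A generic one-dimensional thermal-mismatch estimate makes the reported roles of Young's modulus, thermal strain and preheating explicit; it is shown here only as an illustrative sketch and need not coincide with the paper's analytical model.

```latex
% A standard one-dimensional thermal-mismatch estimate for a thin clad on a thick substrate:
\[
\sigma_{\mathrm{clad}} \;\approx\; \frac{E_{\mathrm{clad}}}{1-\nu_{\mathrm{clad}}}
\,\bigl(\alpha_{\mathrm{clad}}-\alpha_{\mathrm{sub}}\bigr)\,\Delta T ,
\]
% where E is Young's modulus, \nu Poisson's ratio, \alpha the thermal expansion coefficients,
% and \Delta T the temperature drop from the stress-free temperature. The form makes the two
% reported key factors explicit (Young's modulus and the thermal strain \alpha\,\Delta T) and
% shows why substrate preheating, which reduces \Delta T, lowers the predicted tensile stress.
```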

  10. Catalytic residues and a predicted structure of tetrahydrobiopterin-dependent alkylglycerol mono-oxygenase

    Science.gov (United States)

    Watschinger, Katrin; Fuchs, Julian E.; Yarov-Yarovoy, Vladimir; Keller, Markus A.; Golderer, Georg; Hermetter, Albin; Werner-Felmayer, Gabriele; Hulo, Nicolas; Werner, Ernst R.

    2012-01-01

    Alkylglycerol mono-oxygenase (EC 1.14.16.5) forms a third, distinct, class among tetrahydrobiopterin-dependent enzymes in addition to aromatic amino acid hydroxylases and nitric oxide synthases. Its protein sequence contains the fatty acid hydroxylase motif, a signature indicative of a di-iron centre, which contains eight conserved histidine residues. Membrane enzymes containing this motif, including alkylglycerol mono-oxygenase, are especially labile and so far have not been purified to homogeneity in active form. To obtain a first insight into structure–function relationships of this enzyme, we performed site-directed mutagenesis of 26 selected amino acid residues and expressed wild-type and mutant proteins containing a C-terminal Myc tag together with fatty aldehyde dehydrogenase in Chinese-hamster ovary cells. Among all of the acidic residues within the eight-histidine motif, only mutation of Glu137 to alanine led to an 18-fold increase in the Michaelis–Menten constant for tetrahydrobiopterin, suggesting a role in tetrahydrobiopterin interaction. A ninth additional histidine residue essential for activity was also identified. Nine membrane domains were predicted by four programs: ESKW, TMHMM, MEMSAT and Phobius. Prediction of a part of the structure using the Rosetta membrane ab initio method led to a plausible suggestion for a structure of the catalytic site of alkylglycerol mono-oxygenase. PMID:22220568

  11. Assessment of Protein Side-Chain Conformation Prediction Methods in Different Residue Environments

    Science.gov (United States)

    Peterson, Lenna X.; Kang, Xuejiao; Kihara, Daisuke

    2016-01-01

    Computational prediction of side-chain conformation is an important component of protein structure prediction. Accurate side-chain prediction is crucial for practical applications of protein structure models that need atomic detailed resolution such as protein and ligand design. We evaluated the accuracy of eight side-chain prediction methods in reproducing the side-chain conformations of experimentally solved structures deposited to the Protein Data Bank. Prediction accuracy was evaluated for a total of four different structural environments (buried, surface, interface, and membrane-spanning) in three different protein types (monomeric, multimeric, and membrane). Overall, the highest accuracy was observed for buried residues in monomeric and multimeric proteins. Notably, side-chains at protein interfaces and membrane-spanning regions were better predicted than surface residues even though the methods did not all use multimeric and membrane proteins for training. Thus, we conclude that the current methods are as practically useful for modeling protein docking interfaces and membrane-spanning regions as for modeling monomers. PMID:24619909

  12. Predicting DNA-binding proteins and binding residues by complex structure prediction and application to human proteome.

    Directory of Open Access Journals (Sweden)

    Huiying Zhao

    Full Text Available As more and more protein sequences are uncovered from increasingly inexpensive sequencing techniques, an urgent task is to find their functions. This work presents a highly reliable computational technique for predicting DNA-binding function at the level of protein-DNA complex structures, rather than low-resolution two-state prediction of DNA-binding as most existing techniques do. The method first predicts protein-DNA complex structure by utilizing the template-based structure prediction technique HHblits, followed by binding affinity prediction based on a knowledge-based energy function (Distance-scaled finite ideal-gas reference state for protein-DNA interactions). A leave-one-out cross validation of the method based on 179 DNA-binding and 3797 non-binding protein domains achieves a Matthews correlation coefficient (MCC) of 0.77 with high precision (94%) and high sensitivity (65%). We further found 51% sensitivity for 82 newly determined structures of DNA-binding proteins and 56% sensitivity for the human proteome. In addition, the method provides a reasonably accurate prediction of DNA-binding residues in proteins based on predicted DNA-binding complex structures. Its application to human proteome leads to more than 300 novel DNA-binding proteins; some of these predicted structures were validated by known structures of homologous proteins in APO forms. The method [SPOT-Seq (DNA)] is available as an on-line server at http://sparks-lab.org.

  13. Reliability residual-life prediction method for thermal aging based on performance degradation

    International Nuclear Information System (INIS)

    Ren Shuhong; Xue Fei; Yu Weiwei; Ti Wenxin; Liu Xiaotian

    2013-01-01

    This paper studies the main pipeline of a nuclear power plant. The residual life of a main pipeline that fails due to thermal aging has been studied using performance degradation theory and Bayesian updating methods. First, the thermal-aging degradation of the impact properties of the main pipeline's austenitic stainless steel was analyzed using accelerated thermal aging test data. Then, a thermal-aging residual-life prediction model based on the impact property degradation data was built using Bayesian updating methods. Finally, these models were applied to practical situations. It is shown that the proposed methods are feasible and that the prediction accuracy meets the needs of the project. The work also provides a foundation for the scientific aging management of the main pipeline. (authors)

  14. Disentangling evolutionary signals: conservation, specificity determining positions and coevolution. Implication for catalytic residue prediction

    DEFF Research Database (Denmark)

    Teppa, Elin; Wilkins, Angela D.; Nielsen, Morten

    2012-01-01

    within a multiple sequence alignment to investigate their predictive potential and degree of overlap. Results: Our results demonstrate that the different methods included in the benchmark in general can be divided into three groups with a limited mutual overlap. One group containing real-value...... Evolutionary Trace (rvET) methods and conservation, another containing mutual information (MI) methods, and the last containing methods designed explicitly for the identification of specificity determining positions (SDPs): integer-value Evolutionary Trace (ivET), SDPfox, and XDET. In terms of prediction of CR......, we find using a proximity score integrating structural information (as the sum of the scores of residues located within a given distance of the residue in question) that only the methods from the first two groups displayed a reliable performance. Next, we investigated to what degree proximity scores...

  15. Variance-corrected Michaelis-Menten equation predicts transient rates of single-enzyme reactions and response times in bacterial gene-regulation

    Science.gov (United States)

    Pulkkinen, Otto; Metzler, Ralf

    2015-12-01

    Many chemical reactions in biological cells occur at very low concentrations of the constituent molecules. Thus, transcriptional gene regulation is often controlled by poorly expressed transcription factors, such as the E. coli lac repressor, present in only a few tens of copies. Here we study the effects of inherent concentration fluctuations of substrate molecules on the seminal Michaelis-Menten scheme of biochemical reactions. We present a universal correction to the Michaelis-Menten equation for the reaction rates. The relevance and validity of this correction for enzymatic reactions and intracellular gene regulation are demonstrated. Our analytical theory and simulation results confirm that the proposed variance-corrected Michaelis-Menten equation predicts the rate of reactions with remarkable accuracy even in the presence of large non-equilibrium concentration fluctuations. The major advantage of our approach is that it involves only the mean and variance of the substrate-molecule concentration. Our theory is therefore accessible to experiments and not specific to the exact source of the concentration fluctuations.
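
    One way a correction of this kind can arise is from a second-order expansion of the Michaelis-Menten rate around the mean substrate concentration; the sketch below is that generic expansion and need not coincide term for term with the published result.

```latex
% Second-order (delta-method) expansion of the Michaelis--Menten rate in the substrate
% fluctuations; only the mean and variance of S enter, as stated above.
\[
v(S)=\frac{v_{\max}\,S}{K_M+S},\qquad
\mathbb{E}[v(S)] \;\approx\; v\!\left(\langle S\rangle\right)
+\tfrac{1}{2}\,v''\!\left(\langle S\rangle\right)\operatorname{Var}(S)
= \frac{v_{\max}\langle S\rangle}{K_M+\langle S\rangle}
- \frac{v_{\max}\,K_M\,\operatorname{Var}(S)}{\left(K_M+\langle S\rangle\right)^{3}} .
\]
% Large fluctuations therefore lower the effective rate relative to the naive
% Michaelis--Menten prediction evaluated at the mean concentration.
```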

  16. Predicting the concentration of residual methanol in industrial formalin using machine learning

    OpenAIRE

    Heidkamp, William

    2016-01-01

    In this thesis, a machine learning approach was used to develop a predictive model for residual methanol concentration in industrial formalin produced at the Akzo Nobel factory in Kristinehamn, Sweden. The MATLABTM computational environment supplemented with the Statistics and Machine LearningTM toolbox from the MathWorks were used to test various machine learning algorithms on the formalin production data from Akzo Nobel. As a result, the Gaussian Process Regression algorithm was found to pr...

  17. High variation subarctic topsoil pollutant concentration prediction using neural network residual kriging

    Science.gov (United States)

    Sergeev, A. P.; Tarasov, D. A.; Buevich, A. G.; Subbotina, I. E.; Shichkin, A. V.; Sergeeva, M. V.; Lvova, O. A.

    2017-06-01

    The work deals with the application of neural network residual kriging (NNRK) to the spatial prediction of an abnormally distributed soil pollutant (Cr). It is known that the combination of geostatistical interpolation approaches (kriging) and neural networks leads to significantly better prediction accuracy and productivity. Generalized regression neural networks and multilayer perceptrons are classes of neural networks widely used for continuous function mapping. Each network has its own pros and cons; however, both demonstrate fast training and good mapping capabilities. In this work, we examined and compared two combined techniques: generalized regression neural network residual kriging (GRNNRK) and multilayer perceptron residual kriging (MLPRK). The case study is based on real data sets on surface contamination by chromium at a particular location in subarctic Novy Urengoy, Russia, obtained during a previously conducted screening. The proposed models have been built, implemented and validated using the ArcGIS and MATLAB environments. The network structures were chosen by computer simulation based on the minimization of the RMSE. MLPRK showed the best predictive accuracy compared to the geostatistical approach (kriging) and even to GRNNRK.
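
    The residual kriging idea splits the prediction into a broad trend learned by a network and a spatially correlated remainder interpolated geostatistically. The sketch below uses an MLP for the trend and a Gaussian process (equivalent to simple kriging) for the residuals, on simulated data; the study's GRNN/MLP architectures and variogram settings are not reproduced.

```python
# Hedged sketch of neural-network residual kriging (NNRK) on simulated data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
XY = rng.uniform(0, 1000, size=(300, 2))                      # sampling coordinates, m
trend = 50 + 0.05 * XY[:, 0]                                  # large-scale trend
local = 20 * np.sin(XY[:, 0] / 120) * np.cos(XY[:, 1] / 150)  # spatially correlated local part
cr = trend + local + rng.normal(0, 5, 300)                    # simulated "Cr concentration"

# Step 1: a neural network learns the trend from coordinates.
net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000,
                   random_state=0).fit(XY, cr)
resid = cr - net.predict(XY)

# Step 2: a Gaussian process interpolates the residuals (kriging-style).
gp = GaussianProcessRegressor(kernel=RBF(200.0) + WhiteKernel(1.0),
                              normalize_y=True).fit(XY, resid)

XY_new = rng.uniform(0, 1000, size=(5, 2))
prediction = net.predict(XY_new) + gp.predict(XY_new)          # NNRK-style estimate
print(prediction)
```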

  18. Tuning multiple imputation by predictive mean matching and local residual draws.

    Science.gov (United States)

    Morris, Tim P; White, Ian R; Royston, Patrick

    2014-06-05

    Multiple imputation is a commonly used method for handling incomplete covariates as it can provide valid inference when data are missing at random. This depends on being able to correctly specify the parametric model used to impute missing values, which may be difficult in many realistic settings. Imputation by predictive mean matching (PMM) borrows an observed value from a donor with a similar predictive mean; imputation by local residual draws (LRD) instead borrows the donor's residual. Both methods relax some assumptions of parametric imputation, promising greater robustness when the imputation model is misspecified. We review development of PMM and LRD and outline the various forms available, and aim to clarify some choices about how and when they should be used. We compare performance to fully parametric imputation in simulation studies, first when the imputation model is correctly specified and then when it is misspecified. In using PMM or LRD we strongly caution against using a single donor, the default value in some implementations, and instead advocate sampling from a pool of around 10 donors. We also clarify which matching metric is best. Among the current MI software there are several poor implementations. PMM and LRD may have a role for imputing covariates (i) which are not strongly associated with outcome, and (ii) when the imputation model is thought to be slightly but not grossly misspecified. Researchers should spend efforts on specifying the imputation model correctly, rather than expecting predictive mean matching or local residual draws to do the work.
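
    A bare-bones version of predictive mean matching for a single incomplete covariate is sketched below, using a donor pool of 10 as advocated above. A full multiple-imputation implementation would also redraw the regression coefficients for each imputed data set; that step is omitted here.

```python
# Hedged sketch of predictive mean matching (PMM) for one incomplete covariate.
import numpy as np

def pmm_impute(x, z, k=10, rng=np.random.default_rng(0)):
    """x: covariate with NaNs; z: fully observed predictor, shape (n,)."""
    obs = ~np.isnan(x)
    Z = np.column_stack([np.ones(len(x)), z])
    beta, *_ = np.linalg.lstsq(Z[obs], x[obs], rcond=None)   # complete-case regression fit
    mu = Z @ beta                                            # predictive means for all cases
    x_imp = x.copy()
    for i in np.flatnonzero(~obs):
        # donors: the k observed cases with the closest predictive mean
        donors = np.flatnonzero(obs)[np.argsort(np.abs(mu[obs] - mu[i]))[:k]]
        x_imp[i] = x[rng.choice(donors)]                     # borrow an observed value
    return x_imp

rng = np.random.default_rng(1)
z = rng.normal(size=200)
x = 2 + 0.8 * z + rng.normal(0, 0.5, 200)
x[rng.choice(200, 40, replace=False)] = np.nan               # 20% missing at random
print(np.nanmean(x), pmm_impute(x, z).mean())
```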

  19. Protein structure prediction using residue- and fragment-environment potentials in CASP11.

    Science.gov (United States)

    Kim, Hyungrae; Kihara, Daisuke

    2016-09-01

    An accurate scoring function that can select near-native structure models from a pool of alternative models is key for successful protein structure prediction. For the critical assessment of techniques for protein structure prediction (CASP) 11, we have built a protocol of protein structure prediction that has novel coarse-grained scoring functions for selecting decoys as the heart of its pipeline. The score named PRESCO (Protein Residue Environment SCOre) developed recently by our group evaluates the native-likeness of local structural environment of residues in a structure decoy considering positions and the depth of side-chains of spatially neighboring residues. We also introduced a helix interaction potential as an additional scoring function for selecting decoys. The best models selected by PRESCO and the helix interaction potential underwent structure refinement, which includes side-chain modeling and relaxation with a short molecular dynamics simulation. Our protocol was successful, achieving the top rank in the free modeling category with a significant margin of the accumulated Z-score to the subsequent groups when the top 1 models were considered. Proteins 2016; 84(Suppl 1):105-117. © 2015 Wiley Periodicals, Inc.

  20. Prediction of microstructure, residual stress, and deformation in laser powder bed fusion process

    Science.gov (United States)

    Yang, Y. P.; Jamshidinia, M.; Boulware, P.; Kelly, S. M.

    2017-12-01

    The laser powder bed fusion (L-PBF) process has been investigated extensively for building production parts with complex shapes. Modeling tools that can be used at the part level are essential to allow engineers to fine-tune the shape design and process parameters for additive manufacturing. This study focuses on developing modeling methods to predict microstructure, hardness, residual stress, and deformation in large L-PBF built parts. A transient, sequentially coupled thermal and metallurgical analysis method was developed to predict microstructure and hardness in L-PBF built high-strength, low-alloy steel parts. A moving heat-source model was used in this analysis to accurately predict the temperature history. A kinetics-based model, originally developed to predict microstructure in the heat-affected zone of a welded joint, was extended to predict the microstructure and hardness in an L-PBF build by inputting the predicted temperature history. The tempering effect of the subsequently built layers on the microstructural phases of the current layer was modeled, which is the key to predicting the final hardness correctly. It was also found that the top layers of a built part have higher hardness because of the lack of this tempering effect. A sequentially coupled thermal and mechanical analysis method was developed to predict residual stress and deformation in an L-PBF built part. It was found that a line-heating model is not suitable for analyzing a large L-PBF built part, whereas a layer-heating method is a potential approach. Experiments were conducted to validate the model predictions.

  1. Adaptive Kalman filter based on variance component estimation for the prediction of ionospheric delay in aiding the cycle slip repair of GNSS triple-frequency signals

    Science.gov (United States)

    Chang, Guobin; Xu, Tianhe; Yao, Yifei; Wang, Qianxin

    2018-01-01

    In order to exploit the time smoothness of the ionospheric delay to aid cycle slip detection, an adaptive Kalman filter is developed based on variance component estimation. The correlations between measurements at neighboring epochs are fully considered in developing a filtering algorithm for colored measurement noise. Within this filtering framework, epoch-differenced ionospheric delays are predicted. Using this prediction, potential cycle slips are repaired for triple-frequency signals of global navigation satellite systems. Cycle slips are repaired in a stepwise manner, i.e., first for two extra-wide-lane combinations and then for the third frequency. In the estimation for the third frequency, a stochastic model is adopted in which the correlations between the ionospheric delay prediction errors and the errors in the epoch-differenced phase measurements are considered. The implementation details of the proposed method are tabulated. A real BeiDou Navigation Satellite System data set is used to check the performance of the proposed method. Most cycle slips, whether trivial or nontrivial, can be estimated as float values with satisfactorily high accuracy, and their integer values can hence be obtained correctly by simple rounding. To be more specific, all manually introduced nontrivial cycle slips are correctly repaired.
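
    The prediction step being exploited can be illustrated with a minimal Kalman filter tracking a slant ionospheric delay and its rate; the dynamics, noise levels and simulated measurements below are placeholders, and the paper's adaptive variance-component estimation and colored-noise handling are not shown.

```python
# Minimal Kalman filter sketch: predict the epoch-differenced ionospheric delay.
import numpy as np

dt = 30.0                                   # epoch interval, s (assumption)
F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-rate dynamics for [delay, rate]
H = np.array([[1.0, 0.0]])                  # the delay itself is observed
Q = np.diag([1e-4, 1e-6])                   # process noise (assumed)
R = np.array([[0.01]])                      # measurement noise (assumed), m^2

x = np.array([2.0, 0.0])                    # initial delay (m) and rate (m/s)
P = np.eye(2)

def step(x, P, z):
    # predict
    x_pred, P_pred = F @ x, F @ P @ F.T + Q
    predicted_epoch_diff = x_pred[0] - x[0]          # the quantity used to aid cycle-slip repair
    # update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + (K @ (z - H @ x_pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new, predicted_epoch_diff

rng = np.random.default_rng(0)
for t in range(5):
    z = 2.0 + 0.001 * dt * t + rng.normal(0, 0.1)    # simulated delay measurement
    x, P, d = step(x, P, np.array([z]))
    print(f"epoch {t}: predicted epoch-differenced delay = {d:+.4f} m")
```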

  2. PINGU: PredIction of eNzyme catalytic residues usinG seqUence information.

    Directory of Open Access Journals (Sweden)

    Priyadarshini P Pai

    Full Text Available Identification of catalytic residues can help unveil interesting attributes of enzyme function for various therapeutic and industrial applications. Based on their biochemical roles, the number of catalytic residues and the sequence lengths of enzymes vary. This article describes a prediction approach (PINGU) for such a scenario. It uses models trained using physicochemical properties and evolutionary information of 650 non-redundant enzymes (2136 catalytic residues) in a support vector machine architecture. Independent testing on 200 non-redundant enzymes (683 catalytic residues) in predefined prediction settings, i.e., with the number of non-catalytic residues per catalytic residue ranging from 1 to 30, suggested that the prediction approach was highly sensitive and specific, i.e., 80% or above, over the incremental challenges. To learn more about the discriminatory power of PINGU in real scenarios, where the prediction challenge is variable and susceptible to high false positives, the best model from independent testing was used on 60 diverse enzymes. Results suggested that PINGU was able to identify most catalytic residues and non-catalytic residues properly with 80% or above accuracy, sensitivity and specificity. The effect of false positives on precision was addressed in this study by applying predicted ligand-binding residue information as a post-processing filter. An overall improvement of 20% in F-measure and 0.138 in correlation coefficient, with 16% enhanced precision, could be achieved. On account of its encouraging performance, PINGU is hoped to have eventual applications in boosting enzyme engineering and novel drug discovery.

  3. Residue propensities, discrimination and binding site prediction of adenine and guanine phosphates

    Directory of Open Access Journals (Sweden)

    Ahmad Zulfiqar

    2011-05-01

    Full Text Available Background: Adenine and guanine phosphates are involved in a number of biological processes such as cell signaling, metabolism and enzymatic cofactor functions. Binding sites in proteins for these ligands are often detected by looking for a previously known motif by alignment-based search. This is likely to miss those cases where a similar binding site has not been previously characterized and where the binding site does not follow the rule described by a predefined motif. It is also intriguing how proteins select between adenine and guanine derivatives with high specificity. Results: Residue preferences for AMP, GMP, ADP, GDP, ATP and GTP have been investigated in detail, with an additional comparison with the cyclic variants cAMP and cGMP. We also attempt to predict residues interacting with these nucleotides using information derived from local sequence and evolutionary profiles. Results indicate that subtle differences exist between single-residue preferences for specific nucleotides and that, by taking the neighbor environment and evolutionary context into account, successful models of their binding site prediction can be developed. Conclusion: In this work, we explore how single amino acid propensities for these nucleotides play a role in the affinity and specificity of this set of nucleotides. This is expected to be helpful in identifying novel binding sites for adenine and guanine phosphates, especially when a known binding motif is not detectable.

  4. Prediction of residual stress distributions due to surface machining and welding and crack growth simulation under residual stress distribution

    International Nuclear Information System (INIS)

    Ihara, Ryohei; Katsuyama, JInya; Onizawa, Kunio; Hashimoto, Tadafumi; Mikami, Yoshiki; Mochizuki, Masahito

    2011-01-01

    Research highlights: → Residual stress distributions due to welding and machining are evaluated by XRD and FEM. → Residual stress due to machining shows higher tensile stress than welding near the surface. → Crack growth analysis is performed using calculated residual stress. → Crack growth result is affected machining rather than welding. → Machining is an important factor for crack growth. - Abstract: In nuclear power plants, stress corrosion cracking (SCC) has been observed near the weld zone of the core shroud and primary loop recirculation (PLR) pipes made of low-carbon austenitic stainless steel Type 316L. The joining process of pipes usually includes surface machining and welding. Both processes induce residual stresses, and residual stresses are thus important factors in the occurrence and propagation of SCC. In this study, the finite element method (FEM) was used to estimate residual stress distributions generated by butt welding and surface machining. The thermoelastic-plastic analysis was performed for the welding simulation, and the thermo-mechanical coupled analysis based on the Johnson-Cook material model was performed for the surface machining simulation. In addition, a crack growth analysis based on the stress intensity factor (SIF) calculation was performed using the calculated residual stress distributions that are generated by welding and surface machining. The surface machining analysis showed that tensile residual stress due to surface machining only exists approximately 0.2 mm from the machined surface, and the surface residual stress increases with cutting speed. The crack growth analysis showed that the crack depth is affected by both surface machining and welding, and the crack length is more affected by surface machining than by welding.

  5. Computational prediction of methylation types of covalently modified lysine and arginine residues in proteins.

    Science.gov (United States)

    Deng, Wankun; Wang, Yongbo; Ma, Lili; Zhang, Ying; Ullah, Shahid; Xue, Yu

    2017-07-01

    Protein methylation is an essential posttranslational modification (PTM) that mostly occurs at lysine and arginine residues and regulates a variety of cellular processes. Owing to the rapid progress in large-scale identification of methylation sites, the available data set has expanded dramatically, and more attention has been paid to the identification of the specific methylation types of modified residues. Here, we briefly summarize the current progress in computational prediction of methylation sites, which provides an accurate, rapid and efficient approach in contrast with labor-intensive experiments. We collected 5421 methyllysines and methylarginines in 2592 proteins from the literature and classified most of the sites into different types. Data analyses demonstrated that different types of methylated proteins are preferentially involved in different biological processes and pathways, whereas a unique sequence preference was observed for each type of methylation site. Thus, we developed a predictor, GPS-MSP, which can predict mono-, di- and tri-methylation types for specific lysines, and mono-, symmetric di- and asymmetric di-methylation types for specific arginines. We critically evaluated the performance of GPS-MSP and compared it with other existing tools. The satisfactory results show that classifying methylation sites into different types for training can considerably improve the prediction accuracy. Taken together, we anticipate that our study provides a new lead for future computational analysis of protein methylation, and that the prediction of methylation types of covalently modified lysine and arginine residues can generate more useful information for further experimental manipulation. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. SNBRFinder: A Sequence-Based Hybrid Algorithm for Enhanced Prediction of Nucleic Acid-Binding Residues.

    Directory of Open Access Journals (Sweden)

    Xiaoxia Yang

    Full Text Available Protein-nucleic acid interactions are central to various fundamental biological processes. Automated methods capable of reliably identifying DNA- and RNA-binding residues in protein sequence are assuming ever-increasing importance. The majority of current algorithms rely on feature-based prediction, but their accuracy remains to be further improved. Here we propose a sequence-based hybrid algorithm SNBRFinder (Sequence-based Nucleic acid-Binding Residue Finder) by merging a feature predictor SNBRFinderF and a template predictor SNBRFinderT. SNBRFinderF was established using the support vector machine whose inputs include sequence profile and other complementary sequence descriptors, while SNBRFinderT was implemented with the sequence alignment algorithm based on profile hidden Markov models to capture the weakly homologous template of query sequence. Experimental results show that SNBRFinderF was clearly superior to the commonly used sequence profile-based predictor and SNBRFinderT can achieve comparable performance to the structure-based template methods. Leveraging the complementary relationship between these two predictors, SNBRFinder reasonably improved the performance of both DNA- and RNA-binding residue predictions. More importantly, the sequence-based hybrid prediction reached competitive performance relative to our previous structure-based counterpart. Our extensive and stringent comparisons show that SNBRFinder has obvious advantages over the existing sequence-based prediction algorithms. The value of our algorithm is highlighted by establishing an easy-to-use web server that is freely accessible at http://ibi.hzau.edu.cn/SNBRFinder.

  7. FastRNABindR: Fast and Accurate Prediction of Protein-RNA Interface Residues.

    Directory of Open Access Journals (Sweden)

    Yasser El-Manzalawy

    Full Text Available A wide range of biological processes, including regulation of gene expression, protein synthesis, and replication and assembly of many viruses, are mediated by RNA-protein interactions. However, experimental determination of the structures of protein-RNA complexes is expensive and technically challenging. Hence, a number of computational tools have been developed for predicting protein-RNA interfaces. Some of the state-of-the-art protein-RNA interface predictors rely on position-specific scoring matrix (PSSM)-based encoding of the protein sequences. The computational effort needed for generating PSSMs severely limits the practical utility of protein-RNA interface prediction servers. In this work, we experiment with two approaches, random sampling and sequence similarity reduction, for extracting a representative reference database of protein sequences from the more than 50 million protein sequences in UniRef100. Our results suggest that randomly sampled databases produce better PSSM profiles (in terms of the number of hits used to generate the profile, the distance of the generated profile to the corresponding profile generated using the entire UniRef100 data, and the accuracy of the machine learning classifier trained using these profiles). Based on our results, we developed FastRNABindR, an improved version of RNABindR for predicting protein-RNA interface residues using PSSM profiles generated with 1% of the UniRef100 sequences sampled uniformly at random. To the best of our knowledge, FastRNABindR is the only protein-RNA interface residue prediction online server that requires generation of PSSM profiles for query sequences and accepts hundreds of protein sequences per submission. Our approach for determining the optimal BLAST database for a protein-RNA interface residue classification task has the potential of substantially speeding up, and hence increasing the practical utility of, other amino acid sequence based predictors of protein
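
    The uniform random sampling step described above is conceptually simple; the sketch below (an illustration, not the FastRNABindR code) draws an approximately 1% sample of sequences from a FASTA-formatted reference database using Biopython. The file names are hypothetical.

        import random
        from Bio import SeqIO  # Biopython

        def sample_fasta(in_path, out_path, fraction=0.01, seed=42):
            """Keep each sequence independently with probability `fraction`."""
            rng = random.Random(seed)
            with open(out_path, "w") as out:
                for record in SeqIO.parse(in_path, "fasta"):
                    if rng.random() < fraction:
                        SeqIO.write(record, out, "fasta")

        # Hypothetical usage:
        # sample_fasta("uniref100.fasta", "uniref100_1pct.fasta")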

  8. Impacts of both reference population size and inclusion of a residual polygenic effect on the accuracy of genomic prediction

    Directory of Open Access Journals (Sweden)

    Rensing Stephan

    2011-05-01

    Full Text Available Abstract Background The purpose of this work was to study the impact of both the size of genomic reference populations and the inclusion of a residual polygenic effect on dairy cattle genetic evaluations enhanced with genomic information. Methods Direct genomic values were estimated for German Holstein cattle with a genomic BLUP model including a residual polygenic effect. A total of 17,429 genotyped Holstein bulls were evaluated using the phenotypes of 44 traits. The Interbull genomic validation test was implemented to investigate how the inclusion of a residual polygenic effect impacted genomic estimated breeding values. Results As the number of reference bulls increased, both the variance of the estimates of single nucleotide polymorphism effects and the reliability of the direct genomic values of selection candidates increased. Fitting a residual polygenic effect in the model resulted in less biased genome-enhanced breeding values and decreased the correlation between direct genomic values and estimated breeding values of sires in the reference population. Conclusions Genetic evaluation of dairy cattle enhanced with genomic information is highly effective in increasing reliability, as is the use of large genomic reference populations. We found that fitting a residual polygenic effect reduced the bias in genome-enhanced breeding values, decreased the correlation between direct genomic values and sires' estimated breeding values, and made genome-enhanced breeding values more consistent in mean and variance with pedigree-based estimated breeding values.
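
    A minimal numerical sketch of the modelling idea (a genomic relationship matrix blended with a pedigree-based matrix to carry the residual polygenic effect, followed by a BLUP solve) is given below. The VanRaden-style construction of G, the blending weight w and the heritability are standard illustrative choices, not values or code from the study.

        import numpy as np

        def vanraden_G(M, p):
            """Genomic relationship matrix from an (animals x SNPs) 0/1/2 genotype
            matrix M and allele frequencies p (VanRaden method 1)."""
            Z = M - 2.0 * p                                  # centre genotypes
            return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

        def gblup_with_polygenic(y, G, A, w=0.1, h2=0.3):
            """Direct genomic values when a residual polygenic effect is modelled by
            blending G with the pedigree relationship matrix A using weight w."""
            Gw = (1.0 - w) * G + w * A                       # blended relationship matrix
            lam = (1.0 - h2) / h2                            # residual-to-genetic variance ratio
            return Gw @ np.linalg.solve(Gw + lam * np.eye(len(y)), y - y.mean())

        # Toy example with simulated genotypes and phenotypes
        rng = np.random.default_rng(0)
        M = rng.integers(0, 3, size=(50, 200)).astype(float)
        p = M.mean(axis=0) / 2.0
        G = vanraden_G(M, p)
        A = np.eye(50)                                       # placeholder pedigree matrix
        y = rng.normal(size=50)
        dgv = gblup_with_polygenic(y, G, A)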

  9. Current challenges in glioblastoma: intratumour heterogeneity, residual disease and models to predict disease recurrence

    Directory of Open Access Journals (Sweden)

    Hayley Patricia Ellis

    2015-11-01

    Full Text Available Glioblastoma (GB) is the most common malignant primary brain tumour, and despite the availability of chemotherapy and radiotherapy to combat the disease, overall survival remains low with a high incidence of tumour recurrence. Technological advances are continually improving our understanding of the disease, and in particular our knowledge of clonal evolution, intratumour heterogeneity and possible reservoirs of residual disease. These may inform how we approach clinical treatment and recurrence in GB. Mathematical modelling (including neural networks) and strategies such as multiple sampling during tumour resection and genetic analysis of circulating cancer cells may be of great future benefit in helping to predict the nature of residual disease and resistance to standard and molecular therapies in GB.

  10. Predicting dihedral angle probability distributions for protein coil residues from primary sequence using neural networks

    DEFF Research Database (Denmark)

    Helles, Glennie; Fonseca, Rasmus

    2009-01-01

    Predicting the three-dimensional structure of a protein from its amino acid sequence is currently one of the most challenging problems in bioinformatics. The internal structure of helices and sheets is highly recurrent and helps reduce the search space significantly. However, random coil segments ... done previously, none have, to our knowledge, presented comparable results for the probability distribution of dihedral angles. Results: In this paper we develop an artificial neural network that uses an input-window of amino acids to predict a dihedral angle probability distribution for the middle residue in the input-window. The trained neural network shows a significant improvement (4-68%) in predicting the most probable bin (covering a 30°×30° area of the dihedral angle space) for all amino acids in the data set compared to first-order statistics. An accuracy comparable to that of secondary ...

  11. A two-stage approach for improved prediction of residue contact maps

    Directory of Open Access Journals (Sweden)

    Pollastri Gianluca

    2006-03-01

    Full Text Available Abstract Background Protein topology representations such as residue contact maps are an important intermediate step towards ab initio prediction of protein structure. Although improvements have occurred over the last years, the problem of accurately predicting residue contact maps from primary sequences is still largely unsolved. Among the reasons for this are the unbalanced nature of the problem (with far fewer examples of contacts than non-contacts), the formidable challenge of capturing long-range interactions in the maps, and the intrinsic difficulty of mapping one-dimensional input sequences into two-dimensional output maps. In order to alleviate these problems and achieve improved contact map predictions, in this paper we split the task into two stages: the prediction of a map's principal eigenvector (PE) from the primary sequence, and the reconstruction of the contact map from the PE and the primary sequence. Predicting the PE from the primary sequence consists of mapping a vector into a vector. This task is less complex than mapping vectors directly into two-dimensional matrices, since the size of the problem is drastically reduced and so is the scale length of interactions that need to be learned. Results We develop architectures composed of ensembles of two-layered bidirectional recurrent neural networks to classify the components of the PE in 2, 3 and 4 classes from the protein primary sequence, predicted secondary structure, and hydrophobicity interaction scales. Our predictor, tested on a non-redundant set of 2171 proteins, achieves classification performances of up to 72.6%, 16% above a baseline statistical predictor. We design a system for the prediction of contact maps from the predicted PE. Our results show that predicting maps through the PE yields sizeable gains, especially for long-range contacts, which are particularly critical for accurate protein 3D reconstruction. The final predictor's accuracy on a non-redundant set of 327 targets is 35
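
    The principal eigenvector (PE) at the heart of the two-stage scheme is just the leading eigenvector of the symmetric contact map; the toy sketch below computes it and shows a crude rank-one "reconstruction" in place of the paper's learned second stage.

        import numpy as np

        def principal_eigenvector(contact_map):
            """Return the largest eigenvalue and its eigenvector for a symmetric contact map."""
            vals, vecs = np.linalg.eigh(contact_map)   # eigh: eigenvalues sorted ascending
            return vals[-1], vecs[:, -1]

        def rank_one_reconstruction(eigval, pe, threshold=0.5):
            """Toy reconstruction: threshold the rank-one approximation lambda * v v^T."""
            approx = eigval * np.outer(pe, pe)
            return (approx > threshold).astype(int)

        # Toy symmetric 0/1 contact map
        rng = np.random.default_rng(1)
        C = (rng.random((30, 30)) > 0.8).astype(int)
        C = np.triu(C, 1); C = C + C.T
        lam, pe = principal_eigenvector(C)
        C_hat = rank_one_reconstruction(lam, pe)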

  12. Prediction of residual valvular lesions in rheumatic heart disease: role of adhesion molecules.

    Science.gov (United States)

    Hafez, Mona; Yahia, Sohier; Eldars, Waleed; Eldegla, Heba; Matter, Mohamed; Attia, Gehan; Hawas, Samia

    2013-03-01

    Rheumatic heart disease (RHD) is a chronic condition characterized by fibrosis and scarring of the cardiac valves and damage to the heart muscle, leading to congestive heart failure and death. This prospective cohort study was conducted to investigate the possible relation between the levels of serum adhesion molecules and acute rheumatic fever (ARF) carditis, valvular insult severity, and residual valvular lesion after improvement of rheumatic activity. Serum levels of intercellular adhesion molecule 1 (ICAM-1), vascular cell adhesion molecule 1 (VCAM-1), and E-selectin were assayed by enzyme-linked immunoassay (ELISA) for 50 children with ARF carditis during activity and after improvement and for 50 healthy children as control subjects. After the acute attack, patients were followed up regularly to detect residual valvular lesions. The serum levels of these adhesion molecules were significantly higher in the patients than in the control group, and cutoff values were identified for the prediction of a residual valvular lesion (ICAM-1, >1,032.3 μg/ml; VCAM-1, >3,662.3 μg/ml; E-selectin, >104.8 μg/ml). Finally, by combining the three adhesion molecules in a single prediction model, the highest area under the curve (AUC) ± standard error (SE) was obtained (0.869 ± 0.052), and the positive likelihood ratio for having a residual valvular lesion was increased (17.33). Levels of serum adhesion molecules could predict residual valvular lesions in RHD patients. The authors recommend that the serum level of adhesion molecules be measured in all cases of ARF carditis.

  13. The prediction of reliability and residual life of reactor pressure components

    International Nuclear Information System (INIS)

    Nemec, J.; Antalovsky, S.

    1978-01-01

    The paper deals with the problem of evaluating and predicting the reliability and residual life of PWR pressure components. A physical model of damage accumulation, which serves as the theoretical basis for all considerations, has two major aspects. The first describes the dependence of the degree of damage at the crack leading edge in pressure components on the reactor system load-time history, i.e. on the number of transient loads. Both stages, fatigue crack initiation and growth through the wall until the critical length is reached, are investigated. The crack is assumed to initiate at flaws in a strength weld joint or in the bimetallic weld between the base ferritic steel and the austenitic stainless overlay cladding. The growth rates of developed cracks are analysed with respect to different load-time histories. Important cyclic properties of some steels are derived from low-cycle fatigue theory. The second aspect is the load-time history-dependent process of precipitation, deformation and radiation aging, characterized entirely by the critical crack-length value mentioned above. The fracture point, defined by the equation ''crack length = critical value'', and hence the residual life, can be evaluated using this model and verified by in-service inspection. The physical model described is randomized by considering all of its parameters as random. Monte Carlo methods are applied and fatigue crack initiation and growth are simulated. This permits evaluation of the reliability and residual life of the component. The distributions of material and load-time history parameters are needed for such a simulation. Both the deterministic and the computer-simulated probabilistic predictions of reliability and residual life are verified by prior-to-failure sequential testing of data coming from periodic in-service NDT inspections. (author)
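
    The Monte Carlo simulation of crack initiation and growth described above can be illustrated, in heavily simplified form, with a generic Paris-law propagation model and a random initial flaw size; all constants, distributions and the stress range below are illustrative placeholders and are not taken from the paper.

        import numpy as np

        def cycles_to_critical(a0, a_crit, C, m, delta_sigma, Y=1.0, da=1e-5):
            """Integrate the Paris law da/dN = C * (Y*delta_sigma*sqrt(pi*a))**m
            from the initial crack size a0 to the critical length a_crit."""
            a = np.arange(a0, a_crit, da)
            dK = Y * delta_sigma * np.sqrt(np.pi * a)        # stress intensity factor range
            return np.sum(da / (C * dK ** m))                # cycles accumulated over growth steps

        rng = np.random.default_rng(0)
        n_sim = 2000
        a0 = rng.lognormal(np.log(1e-3), 0.3, n_sim)         # random initial flaw size [m]
        C = rng.lognormal(np.log(3e-12), 0.2, n_sim)         # random Paris constant (illustrative)
        life = np.array([cycles_to_critical(a, 0.02, c, 3.0, 120.0) for a, c in zip(a0, C)])
        print("median residual life:", int(np.median(life)),
              "cycles; 5th percentile:", int(np.percentile(life, 5)))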

  14. FreeContact: fast and free software for protein contact prediction from residue co-evolution.

    Science.gov (United States)

    Kaján, László; Hopf, Thomas A; Kalaš, Matúš; Marks, Debora S; Rost, Burkhard

    2014-03-26

    Twenty years of improved technology and growing sequence databases now render residue-residue contact constraints derived from correlated mutations in large protein families accurate enough to drive de novo predictions of protein three-dimensional structure. The method EVfold broke new ground using mean-field Direct Coupling Analysis (EVfold-mfDCA); the method PSICOV applied a related concept by estimating a sparse inverse covariance matrix. Both methods (EVfold-mfDCA and PSICOV) are publicly available, but both require too much CPU time for interactive applications. Moreover, EVfold-mfDCA depends on proprietary software. Here, we present FreeContact, a fast, open source implementation of EVfold-mfDCA and PSICOV. On a test set of 140 proteins, FreeContact was almost eight times faster than PSICOV without decreasing prediction performance. The EVfold-mfDCA implementation of FreeContact was over 220 times faster than PSICOV with negligible performance decrease. The original EVfold-mfDCA was unavailable for direct comparison owing to its dependency on proprietary software. FreeContact is implemented as the free C++ library "libfreecontact", complete with the command line tool "freecontact", as well as Perl and Python modules. All components are available as Debian packages. FreeContact supports the BioXSD format for interoperability. FreeContact provides the opportunity to compute reliable contact predictions in any environment (desktop or cloud).
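
    For readers unfamiliar with the sparse-inverse-covariance idea underlying PSICOV (and reimplemented in FreeContact), the toy sketch below estimates a sparse precision matrix over one-hot-encoded alignment columns with scikit-learn's GraphicalLasso and scores residue pairs by the norm of the corresponding off-diagonal block. This is a conceptual illustration only, practical just for very small toy alignments; it is not the FreeContact implementation, and the regularisation strength is arbitrary.

        import numpy as np
        from sklearn.covariance import GraphicalLasso

        AA = "ACDEFGHIKLMNPQRSTVWY-"

        def one_hot_msa(msa):
            """Encode an MSA (list of equal-length strings) as an (n_seqs, L*21) binary matrix."""
            idx = {a: i for i, a in enumerate(AA)}
            L = len(msa[0])
            X = np.zeros((len(msa), L * len(AA)))
            for s, seq in enumerate(msa):
                for j, a in enumerate(seq):
                    X[s, j * len(AA) + idx.get(a, len(AA) - 1)] = 1.0
            return X, L

        def pair_scores(msa, alpha=0.05):
            """PSICOV-like toy: Frobenius norm of off-diagonal blocks of a sparse precision matrix."""
            X, L = one_hot_msa(msa)
            X += 1e-3 * np.random.default_rng(0).normal(size=X.shape)  # break exact collinearity
            prec = GraphicalLasso(alpha=alpha, max_iter=200).fit(X).precision_
            k = len(AA)
            S = np.zeros((L, L))
            for i in range(L):
                for j in range(i + 1, L):
                    block = prec[i * k:(i + 1) * k, j * k:(j + 1) * k]
                    S[i, j] = S[j, i] = np.linalg.norm(block)
            return S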

  15. Computational models for residual creep life prediction of power plant components

    International Nuclear Information System (INIS)

    Grewal, G.S.; Singh, A.K.; Ramamoortry, M.

    2006-01-01

    All high temperature - high pressure power plant components are prone to irreversible visco-plastic deformation by the phenomenon of creep. The steady state creep response as well as the total creep life of a material is related to the operational component temperature through, respectively, exponential and inverse exponential relationships. Minor increases in the component temperature can thus have serious consequences as far as the creep life and dimensional stability of a plant component are concerned. In high temperature steam tubing in power plants, one mechanism by which a significant temperature rise can occur is the growth of a thermally insulating oxide film on the steam side surface. In the present paper, an elegantly simple and computationally efficient technique is presented for predicting the residual creep life of steel components subjected to continual steam side oxide film growth. In addition, fabrication of high temperature power plant components involves extensive use of welding as the fabrication process of choice. Naturally, issues related to the creep life of weldments have to be seriously addressed for safe and continual operation of the welded plant component. Unfortunately, a typical weldment in an engineering structure is a zone of complex microstructural gradation comprising a number of distinct sub-zones with distinct meso-scale and micro-scale morphology of the phases and (even) chemistry, and its creep life prediction presents considerable challenges. The present paper presents a stochastic algorithm which can be used for developing experimental creep-cavitation intensity versus residual life correlations for welded structures. Apart from estimates of the residual life in a mean field sense, the model can be used for predicting the reliability of the plant component in a rigorous probabilistic setting. (author)

  16. Residual Stress Estimation and Fatigue Life Prediction of an Autofrettaged Pressure Vessel

    Energy Technology Data Exchange (ETDEWEB)

    Song, Kyung Jin; Kim, Eun Kyum; Koh, Seung Kee [Kunsan Nat’l Univ., Kunsan (Korea, Republic of)

    2017-09-15

    Fatigue failure of an autofrettaged pressure vessel with a groove at the outside surface occurs owing to fatigue crack initiation and propagation at the groove root. In order to predict the fatigue life of the autofrettaged pressure vessel, the residual stresses in the autofrettaged pressure vessel were evaluated using the finite element method, and the fatigue properties of the pressure vessel steel were obtained from fatigue tests. The fatigue life of the pressure vessel, obtained by summing the crack initiation and propagation lives, was calculated to be 2,598 cycles for an 80% autofrettaged pressure vessel subjected to a pulsating internal pressure of 424 MPa.

  17. EL_PSSM-RT: DNA-binding residue prediction by integrating ensemble learning with PSSM Relation Transformation.

    Science.gov (United States)

    Zhou, Jiyun; Lu, Qin; Xu, Ruifeng; He, Yulan; Wang, Hongpeng

    2017-08-29

    Prediction of DNA-binding residues is important for understanding the protein-DNA recognition mechanism. Many computational methods have been proposed for this prediction task, but most of them do not consider the relationships of evolutionary information between residues. In this paper, we first propose a novel residue encoding method, referred to as the Position Specific Score Matrix (PSSM) Relation Transformation (PSSM-RT), to encode residues by utilizing the relationships of evolutionary information between residues. PDNA-62 and PDNA-224 are used to evaluate PSSM-RT and two existing PSSM encoding methods by five-fold cross-validation. Performance evaluations indicate that PSSM-RT is more effective than previous methods. This validates the point that the relationship of evolutionary information between residues is indeed useful in DNA-binding residue prediction. An ensemble learning classifier (EL_PSSM-RT) is also proposed by combining an ensemble learning model with PSSM-RT to better handle the imbalance between binding and non-binding residues in the datasets. EL_PSSM-RT is evaluated by five-fold cross-validation using PDNA-62 and PDNA-224 as well as two independent datasets, TS-72 and TS-61. Performance comparisons with existing predictors on the four datasets demonstrate that EL_PSSM-RT is the best-performing method, with improvements of 0.02-0.07 in MCC, 4.18-21.47% in ST and 0.013-0.131 in AUC. Furthermore, we analyze the importance of the pair-relationships extracted by PSSM-RT, and the results validate the usefulness of PSSM-RT for encoding DNA-binding residues. We propose a novel method for the prediction of DNA-binding residues with the inclusion of relationships of evolutionary information and ensemble learning. Performance evaluation shows that the relationship of evolutionary information between residues is indeed useful in DNA-binding residue prediction and that ensemble learning can be used to address the data imbalance

  18. Heterogeneity of variance and its implications on dairy cattle breeding

    African Journals Online (AJOL)

    ... and evaluated for within herd variation using univariate animal model procedures. Variance components were estimated by derivative free REML algorithm, and significance tests done using the Fmax procedure. Phenotypic, additive genetic and residual variances were heterogeneous across production environments.

  19. FunFOLDQA: a quality assessment tool for protein-ligand binding site residue predictions.

    Directory of Open Access Journals (Sweden)

    Daniel B Roche

    Full Text Available The estimation of prediction quality is important because without quality measures, it is difficult to determine the usefulness of a prediction. Currently, methods for ligand binding site residue predictions are assessed in the function prediction category of the biennial Critical Assessment of Techniques for Protein Structure Prediction (CASP) experiment, utilizing the Matthews Correlation Coefficient (MCC) and Binding-site Distance Test (BDT) metrics. However, the assessment of ligand binding site predictions using such metrics requires the availability of solved structures with bound ligands. Thus, we have developed a ligand binding site quality assessment tool, FunFOLDQA, which utilizes protein feature analysis to predict ligand binding site quality prior to the experimental solution of the protein structures and their ligand interactions. The FunFOLDQA feature scores were combined using: simple linear combinations, multiple linear regression and a neural network. The neural network produced significantly better results for correlations to both the MCC and BDT scores, according to Kendall's τ, Spearman's ρ and Pearson's r correlation coefficients, when tested on both the CASP8 and CASP9 datasets. The neural network also produced the largest Area Under the Curve (AUC) score when Receiver Operating Characteristic (ROC) analysis was undertaken for the CASP8 dataset. Furthermore, the FunFOLDQA algorithm incorporating the neural network is shown to add value to FunFOLD when both methods are employed in combination. This results in a statistically significant improvement over all of the best server methods, the FunFOLD method (6.43%), and one of the top manual groups (FN293) tested on the CASP8 dataset. The FunFOLDQA method was also found to be competitive with the top server methods when tested on the CASP9 dataset. To the best of our knowledge, FunFOLDQA is the first attempt to develop a method that can be used to assess ligand binding site

  20. Influences of model structure and calibration data size on predicting chlorine residuals in water storage tanks.

    Science.gov (United States)

    Hua, Pei; de Oliveira, Keila Roberta Ferreira; Cheung, Peter; Gonçalves, Fábio Veríssimo; Zhang, Jin

    2018-04-09

    This study evaluated the influences of model structure and calibration data size on the modelling performance for the prediction of chlorine residuals in household drinking water storage tanks. The tank models, which consisted of two modules, i.e., hydraulic mixing and water quality modelling processes, were evaluated under identical calibration conditions. The hydraulic mixing modelling processes investigated included the continuously stirred tank reactor (CSTR) and multi-compartment (MC) methods, and the water quality modelling processes included first order (FO), single-reactant second order (SRSO), and variable reaction rate coefficients (VRRC) second order chlorine decay kinetics. Different combinations of these hydraulic mixing and water quality methods formed six tank models. Results show that by applying the same calibration datasets, the tank models that included the MC method for modelling the hydraulic mixing provided better predictions compared to the CSTR method. In terms of water quality modelling, VRRC kinetics showed better predictive abilities compared to FO and SRSO kinetics. It was also found that the overall tank model performance could be substantially improved when a proper method was chosen for the simulation of hydraulic mixing, i.e., the accuracy of the hydraulic mixing modelling plays a critical role in the accuracy of the tank model. Advances in water quality modelling improve the calibration process, i.e., the size of the datasets used for calibration could be reduced when a suitable kinetics method was applied. Although the accuracies of all six models increased with increasing calibration dataset size, the tank model that consisted of the MC and VRRC methods was the most suitable of the tank models as it could satisfactorily predict chlorine residuals in household tanks by using invariant parameters calibrated against the minimum dataset size. Copyright © 2018 Elsevier B.V. All rights reserved.
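
    As a point of reference for the model structures compared above, the sketch below simulates the chlorine residual in a single continuously stirred tank reactor (CSTR) with first-order (FO) bulk decay; the multi-compartment (MC) and variable-rate-coefficient (VRRC) variants discussed in the paper extend this same mass balance. The tank volume, flow rates and decay constant are illustrative values only, not calibrated parameters from the study.

        import numpy as np

        def cstr_chlorine(c0, c_in, V, Q_in, Q_out, k, t_end=48.0, dt=0.01):
            """First-order chlorine decay in a CSTR (constant volume assumed):
               dC/dt = (Q_in*C_in - Q_out*C)/V - k*C."""
            n = int(t_end / dt)
            t = np.arange(n) * dt
            C = np.empty(n); C[0] = c0
            for i in range(1, n):
                dCdt = (Q_in * c_in - Q_out * C[i - 1]) / V - k * C[i - 1]
                C[i] = C[i - 1] + dt * dCdt          # explicit Euler step
            return t, C

        # Illustrative household tank: 1 m3, small turnover, k = 0.05 1/h
        t, C = cstr_chlorine(c0=0.8, c_in=0.8, V=1.0, Q_in=0.02, Q_out=0.02, k=0.05)
        print("residual after 24 h: %.2f mg/L" % C[int(round(24 / 0.01))])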

  1. Resilient modulus prediction of soft low-plasticity Piedmont residual soil using dynamic cone penetrometer

    Directory of Open Access Journals (Sweden)

    S. Hamed Mousavi

    2018-04-01

    Full Text Available Dynamic cone penetrometer (DCP) testing has been used for decades to estimate the shear strength and stiffness properties of subgrade soils. There are several empirical correlations in the literature for predicting resilient modulus values from DCP data at only a specific stress state, corresponding to predefined thicknesses of pavement layers (a 50 mm asphalt wearing course, a 100 mm asphalt binder course and a 200 mm aggregate base course). In this study, field-measured DCP data were utilized to estimate the resilient modulus of low-plasticity subgrade Piedmont residual soil. Piedmont residual soils are in-place weathered soils from igneous and metamorphic rocks, as opposed to transported or compacted soils, hence the existing empirical correlations might not be applicable for these soils. An experimental program was conducted incorporating field DCP tests and laboratory resilient modulus tests on “undisturbed” soil specimens. The DCP tests were carried out at various locations in four test sections to evaluate subgrade stiffness variation laterally and with depth. Laboratory resilient modulus test results were analyzed in the context of the universal constitutive model recommended by the mechanistic-empirical pavement design guide (MEPDG). A new approach for predicting the resilient modulus from DCP data, by estimating the MEPDG constitutive model coefficients (k1, k2 and k3), was developed through statistical analyses. The new model is capable not only of taking into account the in situ soil condition on the basis of field measurements, but also of representing the resilient modulus at any stress state, which addresses a limitation of existing empirical DCP models and their applicability to a specific case. Validation of the model is demonstrated by using data that were not used for model development, as well as data reported in the literature. Keywords: Dynamic cone penetrometer (DCP), Resilient modulus, Mechanistic-empirical pavement design guide (MEPDG), Residual
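
    For reference, the MEPDG universal constitutive model mentioned above is commonly written as Mr = k1 * pa * (theta/pa)^k2 * (tau_oct/pa + 1)^k3, where theta is the bulk stress, tau_oct the octahedral shear stress and pa atmospheric pressure. The sketch below simply evaluates that expression; the k-values shown are placeholders, not the coefficients regressed in the study.

        def resilient_modulus(theta, tau_oct, k1, k2, k3, pa=101.325):
            """MEPDG universal model: Mr = k1*pa*(theta/pa)**k2*(tau_oct/pa + 1)**k3 (kPa)."""
            return k1 * pa * (theta / pa) ** k2 * (tau_oct / pa + 1.0) ** k3

        # Placeholder coefficients for a soft residual subgrade soil (illustrative only)
        print(resilient_modulus(theta=83.0, tau_oct=19.3, k1=600.0, k2=0.4, k3=-2.5))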

  2. Novel in vitro systems for prediction of veterinary drug residues in ovine milk and dairy products.

    Science.gov (United States)

    González-Lobato, L; Real, R; Herrero, D; de la Fuente, A; Prieto, J G; Marqués, M M; Alvarez, A I; Merino, G

    2014-01-01

    A new in vitro tool was developed for the identification of veterinary substrates of the main drug transporter in the mammary gland. These drugs have a much higher chance of being concentrated into ovine milk and thus should be detectable in dairy products. Complementarily, a cell model for the identification of compounds that can inhibit the secretion of drugs into ovine milk, and thus reduce milk residues, was also generated. The ATP-binding cassette transporter G2 (ABCG2) is responsible for the concentration of its substrates into milk. The need to predict potential drug residues in ruminant milk has prompted the development of in vitro cell models over-expressing ABCG2 for these species to detect veterinary drugs that interact with this transporter. Using these models, several substrates for bovine and caprine ABCG2 have been found, and differences in activity between species have been reported. However, despite being of great toxicological relevance, no suitable in vitro model to predict substrates of ovine ABCG2 was available. New MDCKII and MEF3.8 cell models over-expressing ovine ABCG2 were generated for the identification of substrates and inhibitors of ovine ABCG2. Five widely used veterinary antibiotics (marbofloxacin, orbifloxacin, sarafloxacin, danofloxacin and difloxacin) were discovered as new substrates of ovine ABCG2. These results were confirmed for the bovine transporter and its Y581S variant using previously generated cell models. In addition, the avermectin doramectin was described as a new inhibitor of ruminant ABCG2. This new rapid assay to identify veterinary drugs that can be concentrated into ovine milk will potentially improve detection and monitoring of veterinary drug residues in ovine milk and dairy products.

  3. A more realistic estimate of the variances and systematic errors in spherical harmonic geomagnetic field models

    DEFF Research Database (Denmark)

    Lowes, F.J.; Olsen, Nils

    2004-01-01

    Most modern spherical harmonic geomagnetic models based on satellite data include estimates of the variances of the spherical harmonic coefficients of the model; these estimates are based on the geometry of the data and the fitting functions, and on the magnitude of the residuals. However, ... led to quite inaccurate variance estimates. We estimate correction factors which range from 1/4 to 20, with the largest increases being for the zonal, m = 0, and sectorial, m = n, terms. With no correction, the OSVM variances give a mean-square vector field error of prediction over the Earth's surface ...

  4. Accurate prediction of hot spot residues through physicochemical characteristics of amino acid sequences

    KAUST Repository

    Chen, Peng

    2013-07-23

    Hot spot residues of proteins are fundamental interface residues that help proteins perform their functions. Detecting hot spots by experimental methods is costly and time-consuming. Sequential and structural information has been widely used in the computational prediction of hot spots. However, structural information is not always available. In this article, we investigated the problem of identifying hot spots using only physicochemical characteristics extracted from amino acid sequences. We first extracted 132 relatively independent physicochemical features from a set of 544 properties in AAindex1, an amino acid index database. Each feature was utilized to train a classification model with a novel encoding schema for hot spot prediction by the IBk algorithm, an extension of the K-nearest neighbor algorithm. Combinations of the individual classifiers were explored, and the classifiers that appeared frequently in the top-performing combinations were selected. The hot spot predictor was built from an ensemble of these classifiers and works in a voting manner. Experimental results demonstrated that our method effectively exploited the feature space and allowed flexible weights of features for different queries. On the commonly used hot spot benchmark sets, our method significantly outperformed other machine learning algorithms and state-of-the-art hot spot predictors. The program is available at http://sfb.kaust.edu.sa/pages/software.aspx. © 2013 Wiley Periodicals, Inc.
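
    The ensemble-of-single-feature-classifiers idea described above can be sketched with scikit-learn's k-nearest-neighbour classifier standing in for IBk (its Weka counterpart); the feature selection step and encoding schema of the paper are omitted, and the data below are simulated.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        def fit_per_feature_knn(X, y, k=5):
            """Train one k-NN classifier per physicochemical feature (column of X)."""
            return [KNeighborsClassifier(n_neighbors=k).fit(X[:, [j]], y) for j in range(X.shape[1])]

        def vote(classifiers, X):
            """Majority vote over the per-feature classifiers."""
            votes = np.stack([clf.predict(X[:, [j]]) for j, clf in enumerate(classifiers)])
            return (votes.mean(axis=0) >= 0.5).astype(int)

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 10))                   # 10 simulated physicochemical features
        y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)
        ensemble = fit_per_feature_knn(X[:150], y[:150])
        print("toy accuracy:", (vote(ensemble, X[150:]) == y[150:]).mean())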

  5. Mean Variance Vulnerability

    OpenAIRE

    Thomas Eichner

    2008-01-01

    This paper transfers the concept of Gollier and Pratt's (Gollier, C., J. W. Pratt. 1996. Risk vulnerability and the tempering effect of background risk. Econometrica 64 1109-1123) risk vulnerability into mean variance preferences. Risk vulnerability is shown to be equivalent to the slope of the mean variance indifference curve being decreasing in mean and increasing in variance. Next, we introduce the notion of mean variance vulnerability to link the concepts of decreasing absolute risk avers...

  6. Residual lifetime prediction for lithium-ion battery based on functional principal component analysis and Bayesian approach

    International Nuclear Information System (INIS)

    Cheng, Yujie; Lu, Chen; Li, Tieying; Tao, Laifa

    2015-01-01

    Existing methods for predicting the residual lifetime of lithium-ion (Li-ion) batteries mostly depend on a priori knowledge of the aging mechanism, the use of chemical or physical formulations, and analytical battery models. Such prior knowledge is usually difficult to obtain in practice, which restricts the application of these methods. In this study, we propose a new prediction method for Li-ion battery residual lifetime evaluation based on FPCA (functional principal component analysis) and a Bayesian approach. The proposed method utilizes FPCA to construct a nonparametric degradation model for the Li-ion battery, based on which the residual lifetime and the corresponding confidence interval can be evaluated. Furthermore, an empirical Bayes approach is utilized to achieve real-time updating of the degradation model and concurrently determine the residual lifetime distribution. Based on Bayesian updating, a more accurate prediction result and a more precise confidence interval are obtained. Experiments are implemented based on data provided by the NASA Ames Prognostics Center of Excellence. Results confirm that the proposed prediction method performs well in real-time battery residual lifetime prediction. - Highlights: • Capacity is considered functional and FPCA is utilized to extract more information. • No features required which avoids drawbacks induced by feature extraction. • A good combination of both population and individual information. • Avoiding complex aging mechanism and accurate analytical models of batteries. • Easily applicable to different batteries for life prediction and RLD calculation.
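
    A highly simplified sketch of the FPCA part of this approach is given below: historical capacity-fade curves are decomposed with a PCA (a discrete surrogate for FPCA), a new cell's partially observed curve is projected onto the principal components by least squares, and the completed curve is read off against a failure threshold. The Bayesian updating step of the paper is not reproduced, and all data are simulated.

        import numpy as np

        rng = np.random.default_rng(0)
        cycles = np.arange(200)
        # Simulated library of capacity-fade curves (fraction of nominal capacity)
        train = np.array([1.0 - (0.25 + 0.1 * rng.random()) * (cycles / 200.0) ** (1.2 + 0.3 * rng.random())
                          for _ in range(40)])

        mean_curve = train.mean(axis=0)
        U, s, Vt = np.linalg.svd(train - mean_curve, full_matrices=False)
        phi = Vt[:2]                                     # first two principal component functions

        def complete_curve(partial, n_obs):
            """Least-squares fit of PC scores to the observed prefix, then rebuild the full curve."""
            A = phi[:, :n_obs].T                         # design matrix restricted to observed cycles
            scores, *_ = np.linalg.lstsq(A, partial - mean_curve[:n_obs], rcond=None)
            return mean_curve + scores @ phi

        new_partial = train[0, :80] + rng.normal(scale=0.002, size=80)   # 80 observed cycles
        pred = complete_curve(new_partial, 80)
        eol = np.argmax(pred < 0.8)                      # first cycle below 80% (0 if never reached)
        print("predicted end of life at cycle", eol, "-> residual life", eol - 80, "cycles")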

  7. MCNP variance reduction overview

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Booth, T.E.

    1985-01-01

    The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code

  8. Prediction of Gas Chromatography-Mass Spectrometry Retention Times of Pesticide Residues by Chemometrics Methods

    Directory of Open Access Journals (Sweden)

    Elaheh Konoz

    2013-01-01

    Full Text Available A quantitative structure-retention relationship (QSRR) method is employed to predict the retention times of 300 pesticide residues in animal tissues separated by gas chromatography-mass spectrometry (GC-MS). Firstly, a six-parameter QSRR model was developed by means of multiple linear regression. The six molecular descriptors considered to account for the effect of molecular structure on retention time are: number of nitrogen, solvation connectivity index-chi 1, Balaban Y index, Moran autocorrelation-lag 2/weighted by atomic Sanderson electronegativity, total absolute charge, and radial distribution function-6.0/unweighted. A 6-7-1 back-propagation artificial neural network (ANN) was used to improve the accuracy of the constructed model. The standard error values of the ANN model for the training, test, and validation sets are 1.559, 1.517, and 1.249, respectively, which are lower than those obtained with the multiple linear regression model (2.402, 1.858, and 2.036, respectively). The results demonstrate the reliability and good predictive ability of the nonlinear QSRR model for the retention times of pesticides.
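
    The two-step modelling strategy described (a multiple linear regression baseline followed by a back-propagation network on the same six descriptors) can be sketched as follows with scikit-learn; the 6-7-1 architecture is mirrored by a single hidden layer of seven units, and the descriptor values below are simulated rather than computed from pesticide structures.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 6))                        # six molecular descriptors (simulated)
        t_r = 10 + X @ rng.normal(size=6) + 0.5 * X[:, 0] ** 2 + rng.normal(scale=0.5, size=300)

        X_tr, X_te, y_tr, y_te = train_test_split(X, t_r, test_size=0.25, random_state=1)

        mlr = LinearRegression().fit(X_tr, y_tr)             # linear QSRR baseline
        ann = MLPRegressor(hidden_layer_sizes=(7,), max_iter=5000, random_state=1).fit(X_tr, y_tr)

        for name, model in [("MLR", mlr), ("ANN", ann)]:
            rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
            print(f"{name} test RMSE: {rmse:.3f}")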

  9. A Finite Element Analysis for Predicting the Residual Compression Strength of Impact-Damaged Sandwich Panels

    Science.gov (United States)

    Ratcliffe, James G.; Jackson, Wade C.

    2008-01-01

    A simple analysis method has been developed for predicting the residual compression strength of impact-damaged sandwich panels. The method is tailored for honeycomb core-based sandwich specimens that exhibit an indentation growth failure mode under axial compression loading, which is driven largely by the crushing behavior of the core material. The analysis method is in the form of a finite element model, where the impact-damaged facesheet is represented using shell elements and the core material is represented using spring elements, aligned in the thickness direction of the core. The nonlinear crush response of the core material used in the analysis is based on data from flatwise compression tests. A comparison with a previous analysis method and some experimental data shows good agreement with results from this new approach.

  11. Development of Analytical Method for Predicting Residual Mechanical Properties of Corroded Steel Plates

    Directory of Open Access Journals (Sweden)

    J. M. R. S. Appuhamy

    2011-01-01

    Full Text Available Bridge infrastructure maintenance and the assurance of adequate safety are of paramount importance in transportation engineering and the maintenance management industry. Corrosion causes strength deterioration, leading to impairment of operation and progressive weakening of the structure. Since actual corroded surfaces differ from one another, an experimental approach alone is not sufficient to estimate the remaining strength of corroded members. In modern practice, numerical simulation is therefore used to replace time-consuming and expensive experimental work and to address the lack of knowledge of mechanical behavior, stress distribution, ultimate behavior, and so on. This paper presents nonlinear FEM analysis results for many corroded steel plates and compares them with their respective tensile coupon tests. Further, the feasibility of establishing an accurate analytical methodology to predict the residual strength capacity of a corroded steel member with a smaller number of measuring points is also discussed.

  12. Prediction and measurement of relieved residual stress by the cryogenic heat treatment for Al6061 alloy: mechanical properties and microstructure

    International Nuclear Information System (INIS)

    Ko, Dae Hoon; Ko, Dae Cheol; Kim, Byung Min; Lim, Hak Jin; Lee, Jung Min

    2013-01-01

    The purpose of this study is to predict the residual stress resulting from the cryogenic heat treatment (CHT), which affects the mechanical properties and microstructure of Al6061 alloy. CHT is a very effective method for reducing residual stress by means of quenching media such as liquid nitrogen, boiling water and steam. In this study, experimental T6 and CHT are carried out to measure the temperature of the Al parts and to determine the convective heat transfer coefficient. This coefficient is used to predict the residual stress in the FE-simulation. In order to consider the relaxation of residual stress during artificial ageing, the Zener-Wert-Avrami function with elasto-plastic nonlinear analysis is used. The residual stress predicted by the FE-simulation is compared with that measured by X-ray diffraction (XRD), and good agreement is found. Further, after T6 and CHT, the electrical conductivity and hardness of the Al6061 alloy are measured to estimate its mechanical properties, and its microstructure, such as precipitates, is observed by transmission electron microscopy (TEM). The creation of precipitates during T6 and CHT is also verified by XRD with component analysis. It is found that CHT affects the residual stress, mechanical properties and precipitation of the Al6061 alloy.

  13. Through-Thickness Residual Stress Profiles in Austenitic Stainless Steel Welds: A Combined Experimental and Prediction Study

    Science.gov (United States)

    Mathew, J.; Moat, R. J.; Paddea, S.; Francis, J. A.; Fitzpatrick, M. E.; Bouchard, P. J.

    2017-12-01

    Economic and safe management of nuclear plant components relies on accurate prediction of welding-induced residual stresses. In this study, the distribution of residual stress through the thickness of austenitic stainless steel welds has been measured using neutron diffraction and the contour method. The measured data are used to validate residual stress profiles predicted by an artificial neural network (ANN) approach as a function of welding heat input and geometry. Maximum tensile stresses with magnitudes close to the yield strength of the material were observed near the weld cap in both the axial and hoop directions of the welds. Significant scatter of more than 200 MPa was found within the residual stress measurements at the weld center line and is associated with the geometry and welding conditions of individual weld passes. The ANN prediction is developed in an attempt to quantify this phenomenon of 'innate scatter' and to learn the non-linear patterns in the weld residual stress profiles. Furthermore, the efficacy of the ANN method for defining through-thickness residual stress profiles in welds for application in structural integrity assessments is evaluated.

  14. Quantifying Cognitive Reserve in Older Adults by Decomposing Episodic Memory Variance: Replication and Extension

    Science.gov (United States)

    Zahodne, Laura B.; Manly, Jennifer J.; Brickman, Adam M.; Siedlecki, Karen L.; DeCarli, Charles; Stern, Yaakov

    2013-01-01

    The theory of cognitive reserve attempts to explain why some individuals are more resilient to age-related brain pathology. Efforts to explore reserve have been hindered by measurement difficulties. Reed et al. (2010) proposed quantifying reserve as residual variance in episodic memory performance that remains after accounting for demographic factors and brain pathology (whole brain, hippocampal, and white matter hyperintensity volumes). This residual variance represents the discrepancy between an individual’s predicted and actual memory performance. The goals of the present study were to extend these methods to a larger, community-based sample and to investigate whether the residual reserve variable is explained by age, predicts longitudinal changes in language, and predicts dementia conversion independent of age. Results support this operational measure of reserve. The residual reserve variable was associated with higher reading ability, lower likelihood of meeting criteria for mild cognitive impairment, lower odds of dementia conversion independent of age, and less decline in language abilities over 3 years. Finally, the residual reserve variable moderated the negative impact of memory variance explained by brain pathology on language decline. This method has the potential to facilitate research on the mechanisms of cognitive reserve and the efficacy of interventions designed to impart reserve. PMID:23866160
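
    The residual-based operationalisation of reserve described above amounts to regressing memory scores on demographic factors and brain measures and keeping the residual; a minimal sketch with simulated data follows (variable names and effect sizes are illustrative only).

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        n = 500
        demographics = rng.normal(size=(n, 2))            # e.g. age, years of education (simulated)
        brain = rng.normal(size=(n, 3))                   # whole-brain, hippocampal, WMH volumes (simulated)
        memory = (demographics @ np.array([0.3, 0.4])
                  + brain @ np.array([0.5, 0.6, -0.4])
                  + rng.normal(scale=1.0, size=n))

        X = np.hstack([demographics, brain])
        predicted = LinearRegression().fit(X, memory).predict(X)
        reserve = memory - predicted                      # residual = operational cognitive reserve
        print("correlation of reserve with the first predictor (near zero by construction):",
              np.round(np.corrcoef(reserve, X[:, 0])[0, 1], 3))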

  15. Influence of Finite Element Size in Residual Strength Prediction of Composite Structures

    Science.gov (United States)

    Satyanarayana, Arunkumar; Bogert, Philip B.; Karayev, Kazbek Z.; Nordman, Paul S.; Razi, Hamid

    2012-01-01

    The sensitivity of the failure load to the element size used in a progressive failure analysis (PFA) of carbon composite center-notched laminates is evaluated. The sensitivity study employs a PFA methodology previously developed by the authors, consisting of Hashin-Rotem intra-laminar fiber and matrix failure criteria and a complete stress degradation scheme for damage simulation. The approach is implemented with a user-defined subroutine in the ABAQUS/Explicit finite element package. The effect of element size near the notch tips on residual strength predictions was assessed for a brittle failure mode with a parametric study that included three laminates of varying material system, thickness and stacking sequence. The study resulted in the selection of an element size of 0.09 in. x 0.09 in., which was later used for predicting crack paths and failure loads in sandwich panels and monolithic laminated panels. The predicted crack paths and failure loads for these panels agreed well with experimental observations. Additionally, the element size vs. normalized failure load relationship determined in the parametric study was used to evaluate strength-scaling factors for three different element sizes. The failure loads predicted with all three element sizes converged to the value obtained with the 0.09 in. x 0.09 in. element size. Though preliminary in nature, the strength-scaling concept has the potential to greatly reduce the computational time required for PFA and can enable the analysis of large-scale structural components where failure is dominated by fiber failure in tension.

  16. SucStruct: Prediction of succinylated lysine residues by using structural properties of amino acids.

    Science.gov (United States)

    López, Yosvany; Dehzangi, Abdollah; Lal, Sunil Pranit; Taherzadeh, Ghazaleh; Michaelson, Jacob; Sattar, Abdul; Tsunoda, Tatsuhiko; Sharma, Alok

    2017-06-15

    Post-Translational Modification (PTM) is a biological reaction which contributes to diversifying the proteome. Among the many modifications with important roles in cellular activity, lysine succinylation has recently emerged as an important PTM mark. It alters the chemical structure of lysines, leading to remarkable changes in the structure and function of proteins. In contrast to the huge number of proteins being sequenced in the post-genome era, the experimental detection of succinylated residues remains expensive, inefficient and time-consuming. Therefore, the development of computational tools for accurately predicting succinylated lysines is an urgent necessity. To date, several approaches have been proposed, but their sensitivity has been reportedly poor. In this paper, we propose an approach that utilizes structural features of amino acids to improve lysine succinylation prediction. Succinylated and non-succinylated lysines were first retrieved from 670 proteins, and characteristics such as accessible surface area, backbone torsion angles and local structure conformations were incorporated. We used the k-nearest neighbors cleaning treatment to deal with class imbalance and designed a pruned decision tree for classification. Our predictor, referred to as SucStruct (Succinylation using Structural features), proved to significantly improve performance when compared to previous predictors, with sensitivity, accuracy and Matthews correlation coefficient equal to 0.7334-0.7946, 0.7444-0.7608 and 0.4884-0.5240, respectively. Copyright © 2017 Elsevier Inc. All rights reserved.
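
    The imbalance handling and pruned-tree classification described above can be sketched with the imbalanced-learn and scikit-learn libraries: an edited-nearest-neighbours step (used here as a stand-in for the paper's k-NN cleaning treatment) removes noisy majority-class examples before a cost-complexity-pruned decision tree is fitted. The feature values are simulated, not the structural features of the paper.

        import numpy as np
        from imblearn.under_sampling import EditedNearestNeighbours
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        # Simulated structural features (ASA, torsion angles, ...) with roughly 1:10 class imbalance
        X = rng.normal(size=(2000, 5))
        y = (X[:, 0] + 0.8 * X[:, 2] + rng.normal(scale=1.5, size=2000) > 2.6).astype(int)

        X_clean, y_clean = EditedNearestNeighbours(n_neighbors=3).fit_resample(X, y)
        tree = DecisionTreeClassifier(ccp_alpha=0.002, random_state=1).fit(X_clean, y_clean)  # pruned tree
        print("samples after cleaning:", len(y_clean), "of", len(y))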

  17. A New Stochastic Approach to Predict Peak and Residual Shear Strength of Natural Rock Discontinuities

    Science.gov (United States)

    Casagrande, D.; Buzzi, O.; Giacomini, A.; Lambert, C.; Fenton, G.

    2018-01-01

    Natural discontinuities are known to play a key role in the stability of rock masses, yet estimating the shear strength of large discontinuities is a non-trivial task. Because of the inherent difficulty of accessing the full surface of large in situ discontinuities, researchers and engineers tend to work on small-scale specimens, and as a consequence the results are often plagued by the well-known scale effect. A new approach is proposed here to predict the shear strength of discontinuities, which has the potential to avoid the scale effect. The rationale of the approach is as follows: a major parameter governing the shear strength of a discontinuity within a rock mass is roughness, which can be accounted for by surveying the discontinuity surface. However, this is typically not possible for discontinuities contained within the rock mass, where only traces are visible. For natural surfaces, it can be assumed that traces are, to some extent, representative of the surface. It is proposed here to use the available 2D information (from a visible trace, referred to as a seed trace) and a random field model to create a large number of synthetic surfaces (3D data sets). The shear strength of each synthetic surface can then be estimated using a semi-analytical model. By using a large number of synthetic surfaces and a Monte Carlo strategy, a meaningful shear strength distribution can be obtained. This paper presents the validation of the semi-analytical mechanistic model required to support the new approach for the prediction of discontinuity shear strength. The model can predict both peak and residual shear strength. The second part of the paper lays the foundation of a random field model to support the creation of synthetic surfaces having statistical properties in line with those of the data of the seed trace. The paper concludes that it is possible to obtain a reasonable estimate of the peak and residual shear strength of the discontinuities tested from the

  18. A simulation model for the prediction of tissue:plasma partition coefficients for drug residues in natural casings.

    Science.gov (United States)

    Haritova, Aneliya Milanova; Fink-Gremmels, Johanna

    2010-09-01

    Tissue residues arise from the exposure of animals to undesirable substances in animal feed materials and drinking water and from the therapeutic or zootechnical use of veterinary medicinal products. In the framework of this study, an advanced toxicokinetic model was developed to predict the likelihood of residue disposition of licensed veterinary products in natural casings used as the envelope for a variety of meat products, such as sausages. The model proved suitable for calculating drug concentrations in the muscles of pigs, cattle and sheep, the major species whose intestines are used. On the basis of drug concentrations in muscle tissue, the model allowed prediction of intestinal concentrations; the predicted residues in the intestines remained equal to or below the concentrations in muscle tissue, the major consumable product of slaughter animals. Consequently, residues in intestines were found to be below the maximum residue limit value for muscle tissue when drugs were used according to prescribed procedures, including the application of appropriate withdrawal times. Considering the low consumption of natural casings (which represent only about 1-2% of the weight of a normal sausage), it was concluded that exposure to drug residues from casings is negligible. Copyright 2009 Elsevier Ltd. All rights reserved.

  19. Reporting explained variance

    Science.gov (United States)

    Good, Ron; Fletcher, Harold J.

    The importance of reporting explained variance (sometimes referred to as magnitude of effects) in ANOVA designs is discussed in this paper. Explained variance is an estimate of the strength of the relationship between treatment (or other factors such as sex, grade level, etc.) and the dependent variables of interest to the researcher(s). Three methods that can be used to obtain estimates of explained variance in ANOVA designs are described and applied to 16 studies that were reported in recent volumes of this journal. The results show that, while in most studies the treatment accounts for a relatively small proportion of the variance in dependent variable scores, in some studies the magnitude of the treatment effect is respectable. The authors recommend that researchers in science education report explained variance in addition to the commonly reported tests of significance, since the latter are inadequate as the sole basis for making decisions about the practical importance of factors of interest to science education researchers.
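
    One of the simplest estimates of explained variance in a one-way ANOVA design is eta squared, the ratio of the between-groups sum of squares to the total sum of squares; a short sketch follows (the group data are simulated).

        import numpy as np

        def eta_squared(groups):
            """Explained variance (eta^2) = SS_between / SS_total for a one-way design."""
            all_scores = np.concatenate(groups)
            grand_mean = all_scores.mean()
            ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
            ss_total = ((all_scores - grand_mean) ** 2).sum()
            return ss_between / ss_total

        rng = np.random.default_rng(0)
        treatment = rng.normal(loc=52.0, scale=10.0, size=30)
        control = rng.normal(loc=50.0, scale=10.0, size=30)
        print("eta^2 =", round(eta_squared([treatment, control]), 3))   # small effect expected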

  20. A simulation model for the prediction of tissue:plasma partition coefficients for drug residues in natural casings.

    NARCIS (Netherlands)

    Haritova, A.M.; Fink-Gremmels, J.

    2010-01-01

    Tissue residues arise from the exposure of animals to undesirable substances in animal feed materials and drinking water and to the therapeutic or zootechnical use of veterinary medicinal products. In the framework of this study, an advanced toxicokinetic model was developed to predict the

  1. European Network of Excellence on NPP residual lifetime prediction methodologies (NULIFE)

    International Nuclear Information System (INIS)

    Badea, M.; Vidican, D.

    2006-01-01

    Within Europe, massive investments in nuclear power have been made to meet present and future energy needs. The majority of nuclear reactors have been operating for longer than 20 years, and their continuing safe operation depends crucially on effective lifetime management. Furthermore, to extend the economic return on investment and the environmental benefits, it is necessary to ensure in advance the safe operation of nuclear reactors for 60 years, a period which is typically 20 years in excess of the nominal design life. This depends on a clear understanding of, and predictive capability for, how safety margins may be maintained as components degrade under operational conditions. Ageing mechanisms, environmental effects and complex loadings increase the likelihood of damage to safety-relevant systems, structures and components. The ability to claim increased benefits from reduced conservatism via improved assessments is therefore of great value. Harmonisation and qualification are essential for industrial exploitation of approaches developed for life prediction methodology. Several European organisations and networks have been at the forefront of the development of advanced methodologies in this area. However, these efforts have largely been made at national level, and their overall impact and benefit (in comparison with the situation in the USA) has been reduced by fragmentation. There is a need to restructure the networking approach in order to create a single organisational entity capable of working at European level to produce and exploit R and D in support of the safe and competitive operation of nuclear power plants. It is also critical to ensure the competitiveness of European plant life management (PLIM) services at international level, in particular with respect to the USA and Asian countries. In response to the above challenges, the European Network of Excellence on residual lifetime prediction methodologies (NULIFE) will: - Create a Europe-wide body in order to achieve scientific and

  2. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Jaech, J.L.

    1984-01-01

    The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variances to characterize the measurement error structure with biases varying over time is presented

  3. FEA predictions of residual stress in stainless steel compared to neutron and x-ray diffraction measurements. [Finite element analysis

    Energy Technology Data Exchange (ETDEWEB)

    Flower, E.C.; MacEwen, S.R.; Holden, T.M.

    1987-05-01

    Residual stresses in a body arise from nonuniform plastic deformation and continue to be an important consideration in the design and the fabrication of metal components. The finite element method offers a potentially powerful tool for predicting these stresses. However, it is important to first verify this method through careful analysis and experimentation. This paper describes experiments using neutron and x-ray diffraction to provide quantitative data to compare to finite element analysis predictions of deformation induced residual stress in a plane stress austenitic stainless steel ring. Good agreement was found between the experimental results and the numerical predictions. Effects of the formulation of the finite element model on the analysis, constitutive parameters and effects of machining damage in the experiments are addressed.

  4. FEA predictions of residual stress in stainless steel compared to neutron and x-ray diffraction measurements

    International Nuclear Information System (INIS)

    Flower, E.C.; MacEwen, S.R.; Holden, T.M.

    1987-05-01

    Residual stresses in a body arise from nonuniform plastic deformation and continue to be an important consideration in the design and the fabrication of metal components. The finite element method offers a potentially powerful tool for predicting these stresses. However, it is important to first verify this method through careful analysis and experimentation. This paper describes experiments using neutron and x-ray diffraction to provide quantitative data to compare to finite element analysis predictions of deformation induced residual stress in a plane stress austenitic stainless steel ring. Good agreement was found between the experimental results and the numerical predictions. Effects of the formulation of the finite element model on the analysis, constitutive parameters and effects of machining damage in the experiments are addressed

  5. HemeBIND: a novel method for heme binding residue prediction by combining structural and sequence information

    Directory of Open Access Journals (Sweden)

    Hu Jianjun

    2011-05-01

    Full Text Available Abstract Background Accurate prediction of binding residues involved in the interactions between proteins and small ligands is one of the major challenges in structural bioinformatics. Heme is an essential and commonly used ligand that plays critical roles in electron transfer, catalysis, signal transduction and gene expression. Although much effort has been devoted to the development of various generic algorithms for ligand binding site prediction over the last decade, no algorithm has been specifically designed to complement experimental techniques for identification of heme binding residues. Consequently, there is an urgent need to develop a computational method for recognizing these important residues. Results Here we introduced an efficient algorithm HemeBIND for predicting heme binding residues by integrating structural and sequence information. We systematically investigated the characteristics of binding interfaces based on a non-redundant dataset of heme-protein complexes. It was found that several sequence and structural attributes such as evolutionary conservation, solvent accessibility, depth and protrusion clearly illustrate the differences between heme binding and non-binding residues. These features can then be separately used or combined to build the structure-based classifiers using support vector machine (SVM). The results showed that the information contained in these features is largely complementary and their combination achieved the best performance. To further improve the performance, an attempt has been made to develop a post-processing procedure to reduce the number of false positives. In addition, we built a sequence-based classifier based on SVM and sequence profile as an alternative when only sequence information can be used. Finally, we employed a voting method to combine the outputs of structure-based and sequence-based classifiers, which demonstrated remarkably better performance than the individual classifier alone
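
    The sketch below illustrates the kind of classifier combination the abstract describes: one SVM trained on structural attributes, one on sequence-profile features, merged by soft voting. It is not the HemeBIND implementation; the feature names and random data are assumptions for illustration only.

        # Illustrative sketch (not the authors' code): combining a structure-based and a
        # sequence-based classifier for heme-binding residues by simple soft voting.
        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        n = 500
        X_struct = rng.normal(size=(n, 4))   # e.g. solvent accessibility, depth, protrusion, conservation
        X_seq = rng.normal(size=(n, 20))     # e.g. a per-residue sequence profile (PSSM row)
        y = rng.integers(0, 2, size=n)       # 1 = heme-binding residue, 0 = non-binding

        clf_struct = SVC(probability=True).fit(X_struct, y)   # structure-based classifier
        clf_seq = SVC(probability=True).fit(X_seq, y)         # sequence-based classifier

        # Soft voting: average the predicted binding probabilities of the two classifiers.
        p = 0.5 * (clf_struct.predict_proba(X_struct)[:, 1] + clf_seq.predict_proba(X_seq)[:, 1])
        y_pred = (p >= 0.5).astype(int)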

  6. Accurate prediction of interfacial residues in two-domain proteins using evolutionary information: implications for three-dimensional modeling.

    Science.gov (United States)

    Bhaskara, Ramachandra M; Padhi, Amrita; Srinivasan, Narayanaswamy

    2014-07-01

    With the preponderance of multidomain proteins in eukaryotic genomes, it is essential to recognize the constituent domains and their functions. Often function involves communications across the domain interfaces, and the knowledge of the interacting sites is essential to our understanding of the structure-function relationship. Using evolutionary information extracted from homologous domains in at least two diverse domain architectures (single and multidomain), we predict the interface residues corresponding to domains from the two-domain proteins. We also use information from the three-dimensional structures of individual domains of two-domain proteins to train a naïve Bayes classifier model to predict the interfacial residues. Our predictions are highly accurate (∼85%) and specific (∼95%) to the domain-domain interfaces. This method is specific to multidomain proteins which contain domains in more than one protein architectural context. Using predicted residues to constrain domain-domain interaction, rigid-body docking was able to provide us with accurate full-length protein structures with correct orientation of domains. We believe that these results can be of considerable interest toward rational protein and interaction design, apart from providing us with valuable information on the nature of interactions. © 2013 Wiley Periodicals, Inc.

  7. Genetic heterogeneity of within-family variance of body weight in Atlantic salmon (Salmo salar).

    Science.gov (United States)

    Sonesson, Anna K; Odegård, Jørgen; Rönnegård, Lars

    2013-10-17

    Canalization is defined as the stability of a genotype against minor variations in both environment and genetics. Genetic variation in degree of canalization causes heterogeneity of within-family variance. The aims of this study are twofold: (1) quantify genetic heterogeneity of (within-family) residual variance in Atlantic salmon and (2) test whether the observed heterogeneity of (within-family) residual variance can be explained by simple scaling effects. Analysis of body weight in Atlantic salmon using a double hierarchical generalized linear model (DHGLM) revealed substantial heterogeneity of within-family variance. The 95% prediction interval for within-family variance ranged from ~0.4 to 1.2 kg2, implying that the within-family variance of the most extreme high families is expected to be approximately three times larger than the extreme low families. For cross-sectional data, DHGLM with an animal mean sub-model resulted in severe bias, while a corresponding sire-dam model was appropriate. Heterogeneity of variance was not sensitive to Box-Cox transformations of phenotypes, which implies that heterogeneity of variance exists beyond what would be expected from simple scaling effects. Substantial heterogeneity of within-family variance was found for body weight in Atlantic salmon. A tendency towards higher variance with higher means (scaling effects) was observed, but heterogeneity of within-family variance existed beyond what could be explained by simple scaling effects. For cross-sectional data, using the animal mean sub-model in the DHGLM resulted in biased estimates of variance components, which differed substantially both from a standard linear mean animal model and a sire-dam DHGLM model. Although genetic differences in canalization were observed, selection for increased canalization is difficult, because there is limited individual information for the variance sub-model, especially when based on cross-sectional data. Furthermore, potential macro

  8. The Effect of Welding-Pass Grouping on the Prediction Accuracy of Residual Stress in Multipass Butt Welding

    Directory of Open Access Journals (Sweden)

    Jeongung Park

    2017-01-01

    Full Text Available The residual stress analysis of a thick welded structure requires far more analysis time and computer memory than that of a thin welded structure. This study investigated the effect on residual stress of welding-pass grouping as a way to reduce the analysis time for a multipass thick butt welding joint. For this purpose, a parametric analysis which changes the number of grouped passes was conducted for multipass butt welds of structures with thicknesses of 25 mm and 70 mm. In addition, the residual stress from thermal elastoplastic FE analysis is compared with the results obtained by the neutron diffraction method to verify the reliability of the FE analysis. The welding sequence is considered in order to predict the residual stress more accurately when using the welding-pass grouping method. The results of the welding-pass grouping model and of the half model fell between the left and right results of the full model. If the number of grouped passes is less than half the actual number of welding passes, a large difference from the real residual stress is found. Therefore, the number of grouped passes should not be reduced below half the actual number of welding passes.

  9. Prediction of residual stress in the welding zone of dissimilar metals using data-based models and uncertainty analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Dong Hyuk; Bae, In Ho [Department of Nuclear Engineering, Chosun University, 375 Seosuk-dong, Dong-gu, Gwangju 501-759 (Korea, Republic of); Na, Man Gyun, E-mail: magyna@chosun.ac.k [Department of Nuclear Engineering, Chosun University, 375 Seosuk-dong, Dong-gu, Gwangju 501-759 (Korea, Republic of); Kim, Jin Weon [Department of Nuclear Engineering, Chosun University, 375 Seosuk-dong, Dong-gu, Gwangju 501-759 (Korea, Republic of)

    2010-10-15

    Since welding residual stress is one of the major factors in the generation of primary water stress-corrosion cracking (PWSCC), it is essential to examine the welding residual stress to prevent PWSCC. Therefore, several artificial intelligence methods have been developed and studied to predict these residual stresses. In this study, three data-based models, support vector regression (SVR), fuzzy neural network (FNN), and their combined (FNN + SVR) models were used to predict the residual stress for dissimilar metal welding under a variety of welding conditions. By using a subtractive clustering (SC) method, informative data that demonstrate the characteristic behavior of the system were selected to train the models from the numerical data obtained from finite element analysis under a range of welding conditions. The FNN model was optimized using a genetic algorithm. The statistical and analytical uncertainty analysis methods of the models were applied, and their uncertainties were evaluated using 60 sampled training and optimization data sets, as well as a fixed test data set.
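
    As a rough illustration of the data-based approach described above, the sketch below fits a support vector regression model to synthetic welding-condition inputs standing in for finite element results. The feature names, ranges and surrogate response are assumptions, not the paper's data or model settings.

        # Minimal sketch (assumed feature names, synthetic data): support vector regression
        # of welding residual stress on a few welding-condition inputs, as in data-based
        # models trained on finite element results.
        import numpy as np
        from sklearn.svm import SVR
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        # Columns (assumed): heat input, interpass temperature, wall thickness, constraint level
        X = rng.uniform([10, 100, 10, 0], [30, 300, 80, 1], size=(200, 4))
        stress = 50 + 3.0 * X[:, 0] - 0.1 * X[:, 1] + rng.normal(0, 5, 200)  # surrogate for FE output (MPa)

        model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=1.0))
        model.fit(X, stress)
        print(model.predict(X[:3]))  # predicted residual stress for three welding conditions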

  10. Development of computer program ENMASK for prediction of residual environmental masking-noise spectra, from any three independent environmental parameters

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Y.-S.; Liebich, R. E.; Chun, K. C.

    2000-03-31

    Residual environmental sound can mask intrusive (unwanted) sound. It is a factor that can affect noise impacts and must be considered both in noise-impact studies and in noise-mitigation designs. Models for quantitative prediction of sensation level (audibility) and psychological effects of intrusive noise require an input with 1/3 octave-band spectral resolution of environmental masking noise. However, the majority of published residual environmental masking-noise data are given with either octave-band frequency resolution or only single A-weighted decibel values. A model has been developed that enables estimation of 1/3 octave-band residual environmental masking-noise spectra and relates certain environmental parameters to A-weighted sound level. This model provides a correlation among three environmental conditions: measured residual A-weighted sound-pressure level, proximity to a major roadway, and population density. Cited field-study data were used to compute the most probable 1/3 octave-band sound-pressure spectrum corresponding to any selected one of these three inputs. In turn, such spectra can be used as an input to models for prediction of noise impacts. This paper discusses specific algorithms included in the newly developed computer program ENMASK. In addition, the relative audibility of the environmental masking-noise spectra at different A-weighted sound levels is discussed, which is determined by using the methodology of program ENAUDIBL.

  11. DNABP: Identification of DNA-Binding Proteins Based on Feature Selection Using a Random Forest and Predicting Binding Residues.

    Science.gov (United States)

    Ma, Xin; Guo, Jing; Sun, Xiao

    2016-01-01

    DNA-binding proteins are fundamentally important in cellular processes. Several computational-based methods have been developed to improve the prediction of DNA-binding proteins in previous years. However, insufficient work has been done on the prediction of DNA-binding proteins from protein sequence information. In this paper, a novel predictor, DNABP (DNA-binding proteins), was designed to predict DNA-binding proteins using the random forest (RF) classifier with a hybrid feature. The hybrid feature contains two types of novel sequence features, which reflect information about the conservation of physicochemical properties of the amino acids, and the binding propensity of DNA-binding residues and non-binding propensities of non-binding residues. The comparisons with each feature demonstrated that these two novel features contributed most to the improvement in predictive ability. Furthermore, to improve the prediction performance of the DNABP model, feature selection using the minimum redundancy maximum relevance (mRMR) method combined with incremental feature selection (IFS) was carried out during the model construction. The results showed that the DNABP model could achieve 86.90% accuracy, 83.76% sensitivity, 90.03% specificity and a Matthews correlation coefficient of 0.727. High prediction accuracy and performance comparisons with previous research suggested that DNABP could be a useful approach to identify DNA-binding proteins from sequence information. The DNABP web server system is freely available at http://www.cbi.seu.edu.cn/DNABP/.
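
    The sketch below shows the general shape of the feature-selection loop the abstract describes: rank features, then grow the feature set incrementally and keep the subset with the best cross-validated random forest accuracy. Mutual information is used here as a simple stand-in for the mRMR criterion, and the data are synthetic; this is not the DNABP pipeline.

        # Illustrative sketch: incremental feature selection (IFS) over a relevance ranking,
        # with a random forest classifier. Mutual information stands in for mRMR.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_selection import mutual_info_classif
        from sklearn.model_selection import cross_val_score

        X, y = make_classification(n_samples=300, n_features=40, n_informative=8, random_state=0)
        order = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]  # rank features by relevance

        best_k, best_acc = 1, 0.0
        for k in range(1, len(order) + 1, 5):          # grow the feature set incrementally
            acc = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                                  X[:, order[:k]], y, cv=5).mean()
            if acc > best_acc:
                best_k, best_acc = k, acc
        print(best_k, round(best_acc, 3))              # size of the best feature subset and its CV accuracy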

  12. Toward an accurate prediction of inter-residue distances in proteins using 2D recursive neural networks.

    Science.gov (United States)

    Kukic, Predrag; Mirabello, Claudio; Tradigo, Giuseppe; Walsh, Ian; Veltri, Pierangelo; Pollastri, Gianluca

    2014-01-10

    Protein inter-residue contact maps provide a translation and rotation invariant topological representation of a protein. They can be used as an intermediary step in protein structure predictions. However, the prediction of contact maps represents an unbalanced problem as far fewer examples of contacts than non-contacts exist in a protein structure. In this study we explore the possibility of completely eliminating the unbalanced nature of the contact map prediction problem by predicting real-value distances between residues. Predicting full inter-residue distance maps and applying them in protein structure predictions has been relatively unexplored in the past. We initially demonstrate that the use of native-like distance maps is able to reproduce 3D structures almost identical to the targets, giving an average RMSD of 0.5Å. In addition, the corrupted physical maps with an introduced random error of ±6Å are able to reconstruct the targets within an average RMSD of 2Å. After demonstrating the reconstruction potential of distance maps, we develop two classes of predictors using two-dimensional recursive neural networks: an ab initio predictor that relies only on the protein sequence and evolutionary information, and a template-based predictor in which additional structural homology information is provided. We find that the ab initio predictor is able to reproduce distances with an RMSD of 6Å, regardless of the evolutionary content provided. Furthermore, we show that the template-based predictor exploits both sequence and structure information even in cases of dubious homology and outperforms the best template hit with a clear margin of up to 3.7Å. Lastly, we demonstrate the ability of the two predictors to reconstruct the CASP9 targets shorter than 200 residues, producing results similar to the state-of-the-art machine learning approach implemented in the Distill server. The methodology presented here, if complemented by more complex reconstruction protocols

  13. Fractal structure and predictive strategy of the daily extreme temperature residuals at Fabra Observatory (NE Spain, years 1917-2005)

    Science.gov (United States)

    Lana, X.; Burgueño, A.; Serra, C.; Martínez, M. D.

    2015-07-01

    A compilation of daily extreme temperatures recorded at the Fabra Observatory (Catalonia, NE Spain) from 1917 up to 2005 has permitted an exhaustive analysis of the fractal behaviour of the daily extreme temperature residuals, DTR, defined as the difference between the observed daily extreme temperature and the daily average value. The lacunarity characterises the lag distribution on the residual series for several thresholds. Hurst, H, and Hausdorff, Ha, exponents, together with the exponent β of the decaying power law describing the evolution of power spectral density with frequency, permit characterisation of the persistence, antipersistence or randomness of the residual series. The self-affine character of the DTR series is verified, and additionally, they are simulated by means of fractional Gaussian noise, fGn. The reconstruction theorem leads to the quantification of the complexity (correlation dimension, μ*, and Kolmogorov entropy, κ) and predictive instability (Lyapunov exponents, λ, and Kaplan-Yorke dimension, D_KY) of the residual series. All fractal parameters are computed for consecutive and independent segments of 5-year length. This strategy permits obtaining a high enough number of fractal parameter samples to estimate time trends, including their statistical significance. Comparisons are made between results of predictive algorithms based on fGn models and an autoregressive integrated moving average (ARIMA) process, with the latter leading to slightly better results than the former. Several dynamic atmospheric mechanisms and local effects, such as local topography and vicinity to the Mediterranean coast, are proposed to explain the complex and unstable predictability of the DTR series. The memory of the physical system (Kolmogorov entropy) would be attributed to the interaction with the Mediterranean Sea.
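
    For readers unfamiliar with the Hurst exponent mentioned above, the sketch below shows one common way to estimate it, the rescaled-range (R/S) method, applied to a synthetic residual series. The paper's actual estimators and window choices are not reproduced here; everything below is an illustrative assumption.

        # Minimal sketch: a rescaled-range (R/S) estimate of the Hurst exponent for a
        # temperature-residual series; synthetic white noise stands in for the DTR data.
        import numpy as np

        def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
            """Estimate the Hurst exponent from the slope of log(R/S) versus log(window size)."""
            rs = []
            for w in window_sizes:
                vals = []
                for start in range(0, len(x) - w + 1, w):
                    seg = x[start:start + w]
                    dev = np.cumsum(seg - seg.mean())          # cumulative deviation from the mean
                    r = dev.max() - dev.min()                  # range
                    s = seg.std(ddof=1)                        # standard deviation
                    if s > 0:
                        vals.append(r / s)
                rs.append(np.mean(vals))
            slope, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
            return slope

        rng = np.random.default_rng(2)
        residuals = rng.normal(size=2000)                      # white noise: H should be near 0.5
        print(round(hurst_rs(residuals), 2))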

  14. Prediction error variance and expected response to selection, when selection is based on the best predictor – for Gaussian and threshold characters, traits following a Poisson mixed model and survival traits

    Directory of Open Access Journals (Sweden)

    Jensen Just

    2002-05-01

    Full Text Available Abstract In this paper, we consider selection based on the best predictor of animal additive genetic values in Gaussian linear mixed models, threshold models, Poisson mixed models, and log normal frailty models for survival data (including models with time-dependent covariates) with associated fixed or random effects. In the different models, expressions are given (when these can be found – otherwise unbiased estimates are given) for prediction error variance, accuracy of selection and expected response to selection on the additive genetic scale and on the observed scale. The expressions given for non Gaussian traits are generalisations of the well-known formulas for Gaussian traits – and reflect, for Poisson mixed models and frailty models for survival data, the hierarchical structure of the models. In general the ratio of the additive genetic variance to the total variance in the Gaussian part of the model (heritability on the normally distributed level of the model) or a generalised version of heritability plays a central role in these formulas.
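
    For the Gaussian case referred to above, the standard relations connect prediction error variance to accuracy and accuracy to expected response. The numbers below are purely illustrative and the snippet does not reproduce the paper's generalised expressions for non-Gaussian traits.

        # Small numerical sketch of the well-known Gaussian-case relations: accuracy of
        # selection from prediction error variance (PEV), and expected response to selection
        # from selection intensity and accuracy.
        import math

        sigma2_A = 4.0                       # additive genetic variance (illustrative)
        PEV = 1.0                            # prediction error variance of the best predictor
        accuracy = math.sqrt(1.0 - PEV / sigma2_A)

        i = 1.755                            # selection intensity (e.g. selecting roughly the top 10%)
        response = i * accuracy * math.sqrt(sigma2_A)   # expected response on the additive genetic scale
        print(round(accuracy, 3), round(response, 3))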

  15. Incorporating residual temperature and specific humidity in predicting weather-dependent warm-season electricity consumption

    Science.gov (United States)

    Guan, Huade; Beecham, Simon; Xu, Hanqiu; Ingleton, Greg

    2017-02-01

    Climate warming and increasing variability challenge the electricity supply in warm seasons. A good quantitative representation of the relationship between warm-season electricity consumption and weather conditions provides necessary information for long-term electricity planning and short-term electricity management. In this study, an extended version of cooling degree days (ECDD) is proposed for better characterisation of this relationship. The ECDD includes temperature, residual temperature and specific humidity effects. The residual temperature is introduced for the first time to reflect the building thermal inertia effect on electricity consumption. The study is based on the electricity consumption data of four multiple-street city blocks and three office buildings. It is found that the residual temperature effect is about 20% of the current-day temperature effect at the block scale, and increases with a large variation at the building scale. Investigation of this residual temperature effect provides insight into the influence of building designs and structures on electricity consumption. The specific humidity effect appears to be more important at the building scale than at the block scale. A building with high energy performance does not necessarily have low specific humidity dependence. The new ECDD better reflects the weather dependence of electricity consumption than the conventional CDD method.
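
    The abstract does not give the ECDD formula, so the sketch below only illustrates the idea: a conventional cooling-degree-day term plus a lagged (residual) temperature term and a specific-humidity term. The functional form, base values and coefficients are assumptions for illustration, not the definition used in the paper.

        # Sketch of the idea only: an "extended" cooling degree day with assumed residual
        # temperature and specific humidity terms added to the conventional CDD.
        import numpy as np

        def ecdd(temp, humidity, base_temp=22.0, alpha=0.2, beta=5.0, base_q=0.010):
            """temp: daily mean temperature (deg C); humidity: daily specific humidity (kg/kg)."""
            temp = np.asarray(temp, dtype=float)
            humidity = np.asarray(humidity, dtype=float)
            cdd = np.maximum(temp - base_temp, 0.0)                    # conventional cooling degree days
            residual = np.roll(cdd, 1)                                 # previous-day term (thermal inertia)
            residual[0] = 0.0
            q_term = beta * np.maximum(humidity - base_q, 0.0)         # latent (humidity) load
            return cdd + alpha * residual + q_term

        days_temp = [24.0, 30.0, 33.0, 27.0]
        days_q = [0.009, 0.012, 0.015, 0.011]
        print(ecdd(days_temp, days_q))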

  16. Prediction of residual stress for dissimilar metals welding at nuclear power plants using fuzzy neural network models

    International Nuclear Information System (INIS)

    Na, Man Gyun; Kim, Jin Weon; Lim, Dong Hyuk

    2007-01-01

    A fuzzy neural network model is presented to predict residual stress for dissimilar metal welding under various welding conditions. The fuzzy neural network model, which consists of a fuzzy inference system and a neuronal training system, is optimized by a hybrid learning method that combines a genetic algorithm to optimize the membership function parameters and a least squares method to solve the consequent parameters. The data of finite element analysis are divided into four data groups, which are split according to two end-section constraints and two prediction paths. Four fuzzy neural network models were therefore applied to the numerical data obtained from the finite element analysis for the two end-section constraints and the two prediction paths. The fuzzy neural network models were trained with the aid of a data set prepared for training (training data), optimized by means of an optimization data set and verified by means of a test data set that was different (independent) from the training data and the optimization data. The fuzzy neural network models are sufficiently accurate for use in an integrity evaluation by predicting the residual stress of dissimilar metal welding zones.

  17. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.Th; Verburg, T.G.

    2001-01-01

    The present study was undertaken to explore possibilities to judge survey quality on the basis of a limited and restricted number of a-priori observations. Here, quality is defined as the ratio between survey and local variance (signal-to-noise ratio). The results indicate that the presented surveys do not permit such judgement; the discussion also suggests that the 5-fold local sampling strategies do not merit any sound judgement. As it stands, uncertainties in local determinations may largely obscure possibilities to judge survey quality. The results further imply that surveys will benefit from procedures, controls and approaches in sampling and sample handling, to assess the average, the variance and the nature of the distribution of elemental concentrations in local sites. This reasoning is compatible with the idea of the site as a basic homogeneous survey unit, which is implicitly and conceptually underlying any survey performed. (author)

  18. Normalised Degree Variance

    OpenAIRE

    Smith, Keith; Escudero, Javier

    2018-01-01

    Finding graph indices which are unbiased to network size is of high importance both within a given field and across fields for enhancing the comparability over the cornucopia of modern network science studies as well as in subnetwork comparisons of the same network. The degree variance is an important metric for characterising graph heterogeneity and hub dominance; however, this clearly depends on the largest and smallest degrees of the graph, which in turn depend on network size. Here, we provide an ...

  19. A new definition of nonlinear statistics mean and variance

    OpenAIRE

    Chen, W.

    1999-01-01

    This note presents a new definition of nonlinear statistics mean and variance to simplify the nonlinear statistics computations. These concepts aim to provide a theoretical explanation of a novel nonlinear weighted residual methodology presented recently by the present author.

  20. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.T.

    1999-01-01

    The present study deals with the (larger-scaled) biomonitoring survey and specifically focuses on the sampling site. In most surveys, the sampling site is simply selected or defined as a spot of (geographical) dimensions which is small relative to the dimensions of the total survey area. Implicitly it is assumed that the sampling site is essentially homogeneous with respect to the investigated variation in survey parameters. As such, the sampling site is mostly regarded as 'the basic unit' of the survey. As a logical consequence, the local (sampling site) variance should also be seen as a basic and important characteristic of the survey. During the study, work is carried out to gain more knowledge of the local variance. Multiple sampling is carried out at a specific site (tree bark, mosses, soils), multi-elemental analyses are carried out by NAA, and local variances are investigated by conventional statistics, factor analytical techniques, and bootstrapping. Consequences of the outcomes are discussed in the context of sampling, sample handling and survey quality. (author)

  1. Predicting Performance in Art College: How Useful Are the Entry Portfolio and Other Variables in Explaining Variance in First Year Marks?

    Science.gov (United States)

    O'Donoghue, Donal

    2009-01-01

    This article examines if and to what extent a set of pre-enrolment variables and background characteristics predict first year performance in art college. The article comes from a four-year longitudinal study that followed a cohort of tertiary art entrants in Ireland from their time of entry in 2002 to their time of exit in 2006 (or before, for…

  2. Validation of consistency of Mendelian sampling variance.

    Science.gov (United States)

    Tyrisevä, A-M; Fikse, W F; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Lidauer, M H

    2018-03-01

    Experiences from international sire evaluation indicate that the multiple-trait across-country evaluation method is sensitive to changes in genetic variance over time. Top bulls from birth year classes with inflated genetic variance will benefit, hampering reliable ranking of bulls. However, none of the methods available today enable countries to validate their national evaluation models for heterogeneity of genetic variance. We describe a new validation method to fill this gap comprising the following steps: estimating within-year genetic variances using Mendelian sampling and its prediction error variance, fitting a weighted linear regression between the estimates and the years under study, identifying possible outliers, and defining a 95% empirical confidence interval for a possible trend in the estimates. We tested the specificity and sensitivity of the proposed validation method with simulated data using a real data structure. Moderate (M) and small (S) size populations were simulated under 3 scenarios: a control with homogeneous variance and 2 scenarios with yearly increases in phenotypic variance of 2 and 10%, respectively. Results showed that the new method was able to estimate genetic variance accurately enough to detect bias in genetic variance. Under the control scenario, the trend in genetic variance was practically zero in setting M. Testing cows with an average birth year class size of more than 43,000 in setting M showed that tolerance values are needed for both the trend and the outlier tests to detect only cases with a practical effect in larger data sets. Regardless of the magnitude (yearly increases in phenotypic variance of 2 or 10%) of the generated trend, it deviated statistically significantly from zero in all data replicates for both cows and bulls in setting M. In setting S with a mean of 27 bulls in a year class, the sampling error and thus the probability of a false-positive result clearly increased. Still, overall estimated genetic
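
    The core of the validation procedure described above is a weighted regression of within-year variance estimates on birth year, with a confidence interval for the trend. The sketch below illustrates that step on synthetic numbers; it uses an analytic confidence interval from weighted least squares rather than the empirical interval described in the abstract, and none of the values correspond to the paper's data.

        # Minimal sketch of the trend test: regress within-year genetic variance estimates on
        # birth year, weighting each estimate by the inverse of its prediction error variance,
        # and inspect the confidence interval of the slope. Data are synthetic.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        years = np.arange(2000, 2015)
        var_est = 1.0 + 0.01 * (years - 2000) + rng.normal(0, 0.03, len(years))  # yearly variance estimates
        pev = np.full(len(years), 0.03**2)                                       # their prediction error variances

        X = sm.add_constant(years - years.mean())
        fit = sm.WLS(var_est, X, weights=1.0 / pev).fit()
        print(fit.params[1])                    # estimated trend in genetic variance per year
        print(fit.conf_int(alpha=0.05)[1])      # 95% confidence interval for the trend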

  3. Temperature Field Prediction for Determining the Residual Stresses Under Heat Treatment of Aluminum Alloys

    Directory of Open Access Journals (Sweden)

    A. V. Livshits

    2014-01-01

    Full Text Available The article is devoted to the non-stationary temperature field of blanks made from aluminum alloys during heat treatment. It consists of the introduction and two smaller paragraphs. In the introduction the author considers the influence of residual stresses arising in the manufacturing process of parts, on the strength of the whole aircraft structure and, consequently, on its technical and economic parameters, such as weight, reliability, efficiency, and cost. He also notes that the residual stresses that appear during the production of parts change their location, size and direction under the influence of the elastic deformations that occur during the operation of aircraft. Redistributed residual stresses may have a chaotic distribution that may cause these stresses to superimpose on the stresses caused by the operating loads of the structure, leading to destruction or damage of aircraft components. The first paragraph is devoted to the existing methods and techniques for determining the residual stresses. The presented methods and techniques are analyzed to show the advantages and disadvantages of each of them. The conclusion is drawn that a method to determine the residual stresses is needed whose cost is lower than that of existing methods and whose error does not exceed 10%. In the second section, the author divides the problem of determining the residual stresses into two parts, and describes the solution methods of the first one. The first problem is to define the temperature field of the work piece. The author uses a Fourier equation with the definition of initial and boundary conditions to describe a mathematical model of the heat cycle of work piece cooling. He draws special attention here to the fact that it is complicated to determine the heat transfer coefficient, which characterizes the process of cooling the work piece during hardening because of its dependence on a number of factors, such as changing temperature-dependent material properties of
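
    As an illustration of the first sub-problem mentioned above, the sketch below solves a 1D transient Fourier heat equation with a convective (Robin) boundary governed by a heat transfer coefficient, using an explicit finite-difference scheme. The material values, geometry and heat transfer coefficient are rough illustrative assumptions, not the paper's data.

        # Sketch: transient cooling of a work piece reduced to a 1D explicit finite-difference
        # solution of the heat equation with a convective boundary (heat transfer coefficient h).
        import numpy as np

        L, nx = 0.05, 51                  # half-thickness (m) and grid points (illustrative)
        alpha = 6.9e-5                    # thermal diffusivity of aluminium (m^2/s), approximate
        k, h = 170.0, 800.0               # conductivity (W/m/K) and quench heat transfer coefficient (W/m^2/K)
        T_inf, T0 = 60.0, 480.0           # quenchant and initial temperatures (deg C)

        dx = L / (nx - 1)
        dt = 0.4 * dx**2 / alpha          # stable explicit time step
        T = np.full(nx, T0)

        for _ in range(2000):
            Tn = T.copy()
            T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
            T[0] = T[1]                                            # symmetry plane: zero flux
            T[-1] = (k / dx * Tn[-2] + h * T_inf) / (k / dx + h)   # convective surface boundary
        print(round(T[0], 1), round(T[-1], 1))                     # centre and surface temperatures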

  4. Intraoperative Arthrogram Predicts Residual Dysplasia after Successful Closed Reduction of DDH.

    Science.gov (United States)

    Zhang, Zhong-Li; Fu, Zhe; Yang, Jian-Ping; Wang, Kan; Xie, Li-Wei; Deng, Shu-Zhen; Chen, Zhao-Qiang

    2016-08-01

    To determine the incidence of residual dysplasia after closed reduction (CR) of developmental dysplasia of the hip (DDH) and assess correlations between quality of arthrogram-guided CR and residual dysplasia using a new intraoperative radiographic criterion. Data of a consecutive series of 126 patients with DDH in 139 hips treated at our institution by arthrogram-guided CR from March 2006 to June 2013 were reviewed in this retrospective study. There were 23 boys and 103 girls with 88 affected left hips and 51 right hips. The average age at closed reduction was 14 months (range, 7-19 months) and average duration of follow-up 36 months (range, 24-100 months). Femoral head coverage (FHC) and arthrography type (A/B/C) on best reduced arthrographic images, acetabular index (AI) and Wiberg Center-Edge (CE) angle on anteroposterior (AP) pelvis radiograph at latest follow-up were measured. Residual hip dysplasia was determined according to the Harcke acetabular dysplasia radiographic standard. Patients were divided into non-late acetabular dysplasia (non-LACD) and late acetabular dysplasia (LACD) groups according to final results and age at reduction, sex and side compared between these two groups. Correlations between FHC and arthrography type and residual hip dysplasia were analyzed. Multiple logistic regression analysis was used to analyze sex, AI at CR, arthrography type and FHC with LACD. Receiver operating characteristic (ROC) curve analysis was used to determine the cutoff value for FHC. Forty-five of 139 hips (32.4%) had residual hip dysplasia. Avascular necrosis of the femoral head occurred in 11 hips (7.9%), nine of which had acetabular dysplasia. There were no significant differences between the two groups in age at reduction, sex or side. FHC differed significantly between the two groups (51.2% ± 15.3% vs . 28.5% ± 15.9%, t = 4.718, P = 0.000). A significantly greater percentage of the arthrography Type C group than Type A and B groups had LACD (χ(2) = 17

  5. COMSAT: Residue contact prediction of transmembrane proteins based on support vector machines and mixed integer linear programming.

    Science.gov (United States)

    Zhang, Huiling; Huang, Qingsheng; Bei, Zhendong; Wei, Yanjie; Floudas, Christodoulos A

    2016-03-01

    In this article, we present COMSAT, a hybrid framework for residue contact prediction of transmembrane (TM) proteins, integrating a support vector machine (SVM) method and a mixed integer linear programming (MILP) method. COMSAT consists of two modules: COMSAT_SVM which is trained mainly on position-specific scoring matrix features, and COMSAT_MILP which is an ab initio method based on optimization models. Contacts predicted by the SVM model are ranked by SVM confidence scores, and a threshold is trained to improve the reliability of the predicted contacts. For TM proteins with no contacts above the threshold, COMSAT_MILP is used. The proposed hybrid contact prediction scheme was tested on two independent TM protein sets based on the contact definition of 14 Å between Cα-Cα atoms. First, using a rigorous leave-one-protein-out cross validation on the training set of 90 TM proteins, an accuracy of 66.8%, a coverage of 12.3%, a specificity of 99.3% and a Matthews' correlation coefficient (MCC) of 0.184 were obtained for residue pairs that are at least six amino acids apart. Second, when tested on a test set of 87 TM proteins, the proposed method showed a prediction accuracy of 64.5%, a coverage of 5.3%, a specificity of 99.4% and a MCC of 0.106. COMSAT shows satisfactory results when compared with 12 other state-of-the-art predictors, and is more robust in terms of prediction accuracy as the length and complexity of TM protein increase. COMSAT is freely accessible at http://hpcc.siat.ac.cn/COMSAT/. © 2016 Wiley Periodicals, Inc.

  6. Prediction of process induced shape distortions and residual stresses in large fibre reinforced composite laminates

    DEFF Research Database (Denmark)

    Nielsen, Michael Wenani

    The present thesis is devoted to numerical modelling of thermomechanical phenomena occurring during curing in the manufacture of large fibre reinforced polymer matrix composites with thick laminate sections using vacuum assisted resin transfer moulding (VARTM). The main application of interest...... in this work is modelling manufacturing induced shape distortions and residual stresses in commercial wind turbine composite blades. Key mechanisms known to contribute to shape distortions and residual stress build-up are reviewed and the underlying theories used to model these mechanisms are presented....... The main mechanisms of thermal-, chemical- and mechanical origin are; (i) the thermal expansion mismatch of the constitutive composite materials, layer and tooling, (ii) chemical cure shrinkage of the composite matrix material and (iii) the tooling (i.e. the mould, inserts etc.) influence on the composite...

  7. Spectral Ambiguity of Allan Variance

    Science.gov (United States)

    Greenhall, C. A.

    1996-01-01

    We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.

  8. Fast and accurate multivariate Gaussian modeling of protein families: predicting residue contacts and protein-interaction partners.

    Directory of Open Access Journals (Sweden)

    Carlo Baldassi

    Full Text Available In the course of evolution, proteins show a remarkable conservation of their three-dimensional structure and their biological function, leading to strong evolutionary constraints on the sequence variability between homologous proteins. Our method aims at extracting such constraints from rapidly accumulating sequence data, and thereby at inferring protein structure and function from sequence information alone. Recently, global statistical inference methods (e.g. direct-coupling analysis, sparse inverse covariance estimation) have achieved a breakthrough towards this aim, and their predictions have been successfully implemented into tertiary and quaternary protein structure prediction methods. However, due to the discrete nature of the underlying variable (amino-acids), exact inference requires exponential time in the protein length, and efficient approximations are needed for practical applicability. Here we propose a very efficient multivariate Gaussian modeling approach as a variant of direct-coupling analysis: the discrete amino-acid variables are replaced by continuous Gaussian random variables. The resulting statistical inference problem is efficiently and exactly solvable. We show that the quality of inference is comparable or superior to the one achieved by mean-field approximations to inference with discrete variables, as done by direct-coupling analysis. This is true for (i) the prediction of residue-residue contacts in proteins, and (ii) the identification of protein-protein interaction partner in bacterial signal transduction. An implementation of our multivariate Gaussian approach is available at the website http://areeweb.polito.it/ricerca/cmp/code.

  9. Fast and accurate multivariate Gaussian modeling of protein families: predicting residue contacts and protein-interaction partners.

    Science.gov (United States)

    Baldassi, Carlo; Zamparo, Marco; Feinauer, Christoph; Procaccini, Andrea; Zecchina, Riccardo; Weigt, Martin; Pagnani, Andrea

    2014-01-01

    In the course of evolution, proteins show a remarkable conservation of their three-dimensional structure and their biological function, leading to strong evolutionary constraints on the sequence variability between homologous proteins. Our method aims at extracting such constraints from rapidly accumulating sequence data, and thereby at inferring protein structure and function from sequence information alone. Recently, global statistical inference methods (e.g. direct-coupling analysis, sparse inverse covariance estimation) have achieved a breakthrough towards this aim, and their predictions have been successfully implemented into tertiary and quaternary protein structure prediction methods. However, due to the discrete nature of the underlying variable (amino-acids), exact inference requires exponential time in the protein length, and efficient approximations are needed for practical applicability. Here we propose a very efficient multivariate Gaussian modeling approach as a variant of direct-coupling analysis: the discrete amino-acid variables are replaced by continuous Gaussian random variables. The resulting statistical inference problem is efficiently and exactly solvable. We show that the quality of inference is comparable or superior to the one achieved by mean-field approximations to inference with discrete variables, as done by direct-coupling analysis. This is true for (i) the prediction of residue-residue contacts in proteins, and (ii) the identification of protein-protein interaction partner in bacterial signal transduction. An implementation of our multivariate Gaussian approach is available at the website http://areeweb.polito.it/ricerca/cmp/code.
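
    The sketch below gives a rough flavour of the Gaussian modeling idea described above: encode an alignment as continuous variables, invert a regularized covariance matrix, and score residue pairs by the norm of the corresponding precision-matrix block with an average-product correction. It is a toy on random data, not the authors' implementation, and the regularization and encoding choices are assumptions.

        # Rough sketch of a Gaussian direct-coupling-style contact score on a random "MSA".
        import numpy as np

        rng = np.random.default_rng(4)
        n_seq, n_pos, q = 400, 30, 21                     # sequences, positions, alphabet size (incl. gap)
        msa = rng.integers(0, q, size=(n_seq, n_pos))

        # One-hot encoding, dropping one state per position to avoid exact collinearity.
        X = np.zeros((n_seq, n_pos * (q - 1)))
        for i in range(n_pos):
            for a in range(q - 1):
                X[:, i * (q - 1) + a] = (msa[:, i] == a)

        C = np.cov(X, rowvar=False) + 0.1 * np.eye(X.shape[1])   # regularized covariance
        J = np.linalg.inv(C)                                      # precision (coupling) matrix

        # Frobenius norm of each inter-position block gives a raw contact score.
        F = np.zeros((n_pos, n_pos))
        for i in range(n_pos):
            for j in range(n_pos):
                block = J[i*(q-1):(i+1)*(q-1), j*(q-1):(j+1)*(q-1)]
                F[i, j] = np.linalg.norm(block)

        apc = np.outer(F.mean(axis=1), F.mean(axis=0)) / F.mean() # average-product correction
        scores = F - apc
        np.fill_diagonal(scores, 0.0)
        print(np.unravel_index(np.argmax(scores), scores.shape))  # top-scoring residue pair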

  10. Prediction of hydrodynamic and other solution properties of rigid proteins from atomic- and residue-level models.

    Science.gov (United States)

    Ortega, A; Amorós, D; García de la Torre, J

    2011-08-17

    Here we extend the ability to predict hydrodynamic coefficients and other solution properties of rigid macromolecular structures from atomic-level structures, implemented in the computer program HYDROPRO, to models with lower, residue-level resolution. Whereas in the former case there is one bead per nonhydrogen atom, the latter contains one bead per amino acid (or nucleotide) residue, thus allowing calculations when atomic resolution is not available or coarse-grained models are preferred. We parameterized the effective hydrodynamic radius of the elements in the atomic- and residue-level models using a very large set of experimental data for translational and rotational coefficients (intrinsic viscosity and radius of gyration) for >50 proteins. We also extended the calculations to very large proteins and macromolecular complexes, such as the whole 70S ribosome. We show that with proper parameterization, the two levels of resolution yield similar and rather good agreement with experimental data. The new version of HYDROPRO, in addition to considering various computational and modeling schemes, is far more efficient computationally and can be handled with the use of a graphical interface. Copyright © 2011 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  11. Circulating microRNA levels predict residual beta cell function and glycaemic control in children with type 1 diabetes mellitus

    DEFF Research Database (Denmark)

    Samandari, Nasim; Mirza, Aashiq H.; Nielsen, Lotte B.

    2017-01-01

    from the Danish Remission Phase Cohort, and profiled for miRNAs. At the same time points, meal-stimulated C-peptide and HbA1c levels were measured and insulin-dose adjusted HbA1c (IDAA1c) calculated. miRNAs that at 3 months after diagnosis predicted residual beta cell function and glycaemic control...... in this subgroup were further validated in the remaining cohort (n = 83). Statistical analysis of miRNA prediction for disease progression was performed by multiple linear regression analysis adjusted for age and sex. Results: In the discovery analysis, six miRNAs (hsa-miR-24-3p, hsa-miR-146a-5p, hsa-miR-194-5p...

  12. On the Rule of Mixtures for Predicting Stress-Softening and Residual Strain Effects in Biological Tissues and Biocompatible Materials

    Directory of Open Access Journals (Sweden)

    Alex Elías-Zúñiga

    2014-01-01

    Full Text Available In this work, we use the rule of mixtures to develop an equivalent material model in which the total strain energy density is split into the isotropic part related to the matrix component and the anisotropic energy contribution related to the fiber effects. For the isotropic energy part, we select the amended non-Gaussian strain energy density model, while the energy fiber effects are added by considering the equivalent anisotropic volumetric fraction contribution, as well as the isotropized representation form of the eight-chain energy model that accounts for the material anisotropic effects. Furthermore, our proposed material model uses a phenomenological non-monotonous softening function that predicts stress softening effects and has an energy term, derived from the pseudo-elasticity theory, that accounts for residual strain deformations. The model’s theoretical predictions are compared with experimental data collected from human vaginal tissues, mice skin, poly(glycolide-co-caprolactone) (PGC25 3-0) and polypropylene suture materials and tracheal and brain human tissues. In all cases examined here, our equivalent material model closely follows stress-softening and residual strain effects exhibited by experimental data.

  13. Introduction to variance estimation

    CERN Document Server

    Wolter, Kirk M

    2007-01-01

    We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who...

  14. A noise variance estimation approach for CT

    Science.gov (United States)

    Shen, Le; Jin, Xin; Xing, Yuxiang

    2012-10-01

    The Poisson-like noise model has been widely used for noise suppression and image reconstruction in low dose computed tomography. Various noise estimation and suppression approaches have been developed and studied to enhance the image quality. Among them, the recently proposed generalized Anscombe transform (GAT) has been utilized to stabilize the variance of Poisson-Gaussian noise. In this paper, we present a variance estimation approach using GAT. After the transform, the projection data is denoised conventionally with an assumption that the noise variance is uniformly equal to 1. The difference between the original and the denoised projection is treated as pure noise and the global variance σ2 can be estimated from the residual difference. Thus, the final denoising step with the estimated σ2 is performed. The proposed approach is verified on a cone-beam CT system and demonstrated to obtain a more accurate estimation of the actual parameter. We also examine the FBP algorithm with the two-step noise suppression in the projection domain using the estimated noise variance. Reconstruction results with simulated and practical projection data suggest that the presented approach could be effective in practical imaging applications.
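
    A simplified sketch of the residual-based estimation step is given below: stabilize Poisson-like noise with an Anscombe-type transform, smooth the result, and estimate the global noise level from the residual. The Gaussian filter and the robust MAD scale estimator are stand-ins chosen for illustration, not necessarily the denoiser or estimator used in the paper.

        # Simplified sketch: variance stabilization, conventional smoothing, and a global
        # variance estimate from the residual between the data and its smoothed version.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(5)
        clean = 200.0 * np.exp(-np.linspace(0, 3, 256))[None, :] * np.ones((64, 1))  # toy projection data
        noisy = rng.poisson(clean).astype(float)

        stabilized = 2.0 * np.sqrt(noisy + 3.0 / 8.0)        # Anscombe transform (variance roughly 1)
        denoised = gaussian_filter(stabilized, sigma=2.0)    # conventional denoising step
        residual = stabilized - denoised

        sigma_hat = 1.4826 * np.median(np.abs(residual - np.median(residual)))  # robust global std estimate
        print(round(float(sigma_hat**2), 3))                 # estimated noise variance (should be near 1)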

  15. Residual spinothalamic tract pathways predict development of central pain after spinal cord injury.

    Science.gov (United States)

    Wasner, Gunnar; Lee, Bonsan Bonne; Engel, Stella; McLachlan, Elspeth

    2008-09-01

    Central neuropathic pain following lesions within the CNS, such as spinal cord injury, is one of the most excruciating types of chronic pain and one of the most difficult to treat. The role of spinothalamic pathways in this type of pain is not clear. Previous studies suggested that spinothalamic tract lesions are necessary but not sufficient for development of central pain, since deficits of spinothalamic function were equally severe in spinal cord injured people with and without pain. The aim of the present study was to examine spinothalamic tract function by quantitative sensory testing before and after activation and sensitization of small diameter afferents by applying menthol, histamine or capsaicin to the distal skin areas where spontaneous pain was localized. Investigations were performed in matched groups each of 12 patients with and without central pain below the level of a clinically complete spinal cord injury, and in 12 able-bodied controls. To test peripheral C fibre function, axon reflex vasodilations induced by histamine and capsaicin applications were quantified. In eight patients with pain, sensations of the same quality as one of their major individual pain sensations were rekindled by heat stimuli in combination with topical capsaicin (n = 7) or by cold stimuli (n = 1). No sensations were evoked in pain-free patients, a response that distinguished patients with central pain from those without. The ability to mimic chronic pain sensations by activation of thermosensory nociceptive neurons implies that ongoing activity in these residual spinothalamic pathways plays a crucial role in maintaining central pain. We propose that processes associated with degeneration of neighbouring axons within the tract, such as inflammation, may trigger spontaneous activity in residual intact neurons that act as a 'central pain generator' after spinal cord injury.

  16. Multi-level learning: improving the prediction of protein, domain and residue interactions by allowing information flow between levels.

    Science.gov (United States)

    Yip, Kevin Y; Kim, Philip M; McDermott, Drew; Gerstein, Mark

    2009-08-05

    Proteins interact through specific binding interfaces that contain many residues in domains. Protein interactions thus occur on three different levels of a concept hierarchy: whole-proteins, domains, and residues. Each level offers a distinct and complementary set of features for computationally predicting interactions, including functional genomic features of whole proteins, evolutionary features of domain families and physical-chemical features of individual residues. The predictions at each level could benefit from using the features at all three levels. However, it is not trivial as the features are provided at different granularity. To link up the predictions at the three levels, we propose a multi-level machine-learning framework that allows for explicit information flow between the levels. We demonstrate, using representative yeast interaction networks, that our algorithm is able to utilize complementary feature sets to make more accurate predictions at the three levels than when the three problems are approached independently. To facilitate application of our multi-level learning framework, we discuss three key aspects of multi-level learning and the corresponding design choices that we have made in the implementation of a concrete learning algorithm. 1) Architecture of information flow: we show the greater flexibility of bidirectional flow over independent levels and unidirectional flow; 2) Coupling mechanism of the different levels: We show how this can be accomplished via augmenting the training sets at each level, and discuss the prevention of error propagation between different levels by means of soft coupling; 3) Sparseness of data: We show that the multi-level framework compounds data sparsity issues, and discuss how this can be dealt with by building local models in information-rich parts of the data. Our proof-of-concept learning algorithm demonstrates the advantage of combining levels, and opens up opportunities for further research. The software

  17. Multi-level learning: improving the prediction of protein, domain and residue interactions by allowing information flow between levels

    Directory of Open Access Journals (Sweden)

    McDermott Drew

    2009-08-01

    Full Text Available Abstract Background Proteins interact through specific binding interfaces that contain many residues in domains. Protein interactions thus occur on three different levels of a concept hierarchy: whole-proteins, domains, and residues. Each level offers a distinct and complementary set of features for computationally predicting interactions, including functional genomic features of whole proteins, evolutionary features of domain families and physical-chemical features of individual residues. The predictions at each level could benefit from using the features at all three levels. However, it is not trivial as the features are provided at different granularity. Results To link up the predictions at the three levels, we propose a multi-level machine-learning framework that allows for explicit information flow between the levels. We demonstrate, using representative yeast interaction networks, that our algorithm is able to utilize complementary feature sets to make more accurate predictions at the three levels than when the three problems are approached independently. To facilitate application of our multi-level learning framework, we discuss three key aspects of multi-level learning and the corresponding design choices that we have made in the implementation of a concrete learning algorithm. 1) Architecture of information flow: we show the greater flexibility of bidirectional flow over independent levels and unidirectional flow; 2) Coupling mechanism of the different levels: We show how this can be accomplished via augmenting the training sets at each level, and discuss the prevention of error propagation between different levels by means of soft coupling; 3) Sparseness of data: We show that the multi-level framework compounds data sparsity issues, and discuss how this can be dealt with by building local models in information-rich parts of the data. Our proof-of-concept learning algorithm demonstrates the advantage of combining levels, and opens up

  18. COPRED: prediction of fold, GO molecular function and functional residues at the domain level.

    Science.gov (United States)

    López, Daniel; Pazos, Florencio

    2013-07-15

    Only recently the first resources devoted to the functional annotation of proteins at the domain level started to appear. The next step is to develop specific methodologies for predicting function at the domain level based on these resources, and to implement them in web servers to be used by the community. In this work, we present COPRED, a web server for the concomitant prediction of fold, molecular function and functional sites at the domain level, based on a methodology for domain molecular function prediction and a resource of domain functional annotations previously developed and benchmarked. COPRED can be freely accessed at http://csbg.cnb.csic.es/copred. The interface works in all standard web browsers. WebGL (natively supported by most browsers) is required for the in-line preview and manipulation of protein 3D structures. The website includes a detailed help section and usage examples. pazos@cnb.csic.es.

  19. Calculating Path-Dependent Travel Time Prediction Variance and Covariance for the SALSA3D Global Tomographic P-Velocity Model with a Distributed Parallel Multi-Core Computer

    Science.gov (United States)

    Hipp, J. R.; Encarnacao, A.; Ballard, S.; Young, C. J.; Phillips, W. S.; Begnaud, M. L.

    2011-12-01

    Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P-velocity model (SALSA3D) that provides superior first P travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we show a methodology for accomplishing this by exploiting the full model covariance matrix. Our model has on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for 1/2 of a symmetric matrix, necessitating an Out-Of-Core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes Tikhonov regularization terms) is multiplied by its transpose (GᵀG) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (GᵀG)⁻¹ by assigning blocks to individual processing nodes for matrix decomposition update and scaling operations. We first find the Cholesky decomposition of GᵀG, which is subsequently inverted. Next, we employ OOC matrix multiply methods to calculate the model covariance matrix from (GᵀG)⁻¹ and an assumed data covariance matrix. Given the model covariance matrix we solve for the travel-time covariance associated with arbitrary ray-paths by integrating the model covariance along both ray paths. Setting the paths equal gives variance for that path. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
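
    The final step described above reduces, for a discretized model, to quadratic forms in the model covariance matrix: the travel-time variance of a ray is gᵀCg and the covariance of two rays is g1ᵀCg2, where g holds the ray's sensitivities to the model nodes. The toy sketch below uses tiny, made-up dimensions and sensitivities purely to illustrate that computation; the real matrix has on the order of half a million rows.

        # Toy sketch: path-dependent travel-time variance and covariance from a model
        # covariance matrix and ray sensitivity vectors (dimensions are illustrative).
        import numpy as np

        rng = np.random.default_rng(6)
        n = 6                                        # number of velocity-model nodes (toy)
        A = rng.normal(size=(n, n))
        C_model = A @ A.T / n + 0.01 * np.eye(n)     # a valid (positive definite) model covariance

        g1 = np.array([1.2, 0.0, 0.8, 0.0, 0.3, 0.0])  # ray 1 sensitivities (path lengths per node)
        g2 = np.array([0.9, 0.4, 0.0, 0.0, 0.3, 0.1])  # ray 2 sensitivities

        var_tt1 = g1 @ C_model @ g1                  # travel-time prediction variance for ray 1
        cov_tt = g1 @ C_model @ g2                   # travel-time covariance between rays 1 and 2
        print(round(float(var_tt1), 4), round(float(cov_tt), 4))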

  20. Prediction of Collision Cross-Section Values for Small Molecules: Application to Pesticide Residue Analysis.

    Science.gov (United States)

    Bijlsma, Lubertus; Bade, Richard; Celma, Alberto; Mullin, Lauren; Cleland, Gareth; Stead, Sara; Hernandez, Felix; Sancho, Juan V

    2017-06-20

    The use of collision cross-section (CCS) values obtained by ion mobility high-resolution mass spectrometry has added a third dimension (alongside retention time and exact mass) to aid in the identification of compounds. However, its utility is limited by the number of experimental CCS values currently available. This work demonstrates the potential of artificial neural networks (ANNs) for the prediction of CCS values of pesticides. The predictor, based on eight software-chosen molecular descriptors, was optimized using CCS values of 205 small molecules and validated using a set of 131 pesticides. The relative error was within 6% for 95% of all CCS values for protonated molecules, resulting in a median relative error less than 2%. In order to demonstrate the potential of CCS prediction, the strategy was applied to spinach samples. It notably improved the confidence in the tentative identification of suspect and nontarget pesticides.

  1. Thermo-mechanical characterization of a thermoplastic composite and prediction of the residual stresses and lamina curvature during cooling

    Science.gov (United States)

    Péron, Mael; Jacquemin, Frédéric; Casari, Pascal; Orange, Gilles; Bailleul, Jean-Luc; Boyard, Nicolas

    2017-10-01

    The prediction of process-induced stresses during the cooling of thermoplastic composites still represents a challenge for the scientific community. However, a precise determination of these stresses is necessary in order to optimize the process conditions and thus reduce the effects of these stresses on the health of the final part. A model is presented here that permits the estimation of residual stresses during cooling. It relies on the nonlinear laminate theory, which has been adapted to arbitrary layup sequences. The developed model takes into account the heat transfers through the thickness of the laminate, together with the crystallization kinetics. The development of the composite mechanical properties during cooling is addressed by an incremental linear elastic constitutive law, which also considers thermal and crystallization strains. In order to feed the aforementioned model, a glass fiber and PA6.6 matrix unidirectional (UD) composite has been characterized. This work finally focuses on the identification of the material and process related parameters that lower the residual stress level, including the ply sequence, the fiber volume fraction and the cooling rate.

  2. Measuring the Allan variance by sinusoidal fitting

    Science.gov (United States)

    DeVoe, Ralph G.

    2018-02-01

    The Allan variance of signal and reference frequencies is measured by a least-squares fit of the output of two analog-to-digital converters to ideal sine waves. The difference in the fit phase of the two channels generates the timing data needed for the Allan variance. The fits are performed at the signal frequency (≈10 MHz) without the use of heterodyning. Experimental data from a modified digital oscilloscope yield a residual Allan deviation of 3 × 10^-13/τ, where τ is the observation time in s. The corresponding standard deviation in time can be compared with statistical theory and Monte Carlo simulations, which suggest that optimized devices may have one or two orders of magnitude better performance.
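
    A toy version of the measurement chain can be sketched as follows: each block of ADC samples is least-squares fitted to a sine at the known signal frequency, the phase difference between the two channels is converted to a time error, and the Allan deviation is computed from the resulting time series. The sampling rate, block length, and noise levels below are illustrative, not those of the instrument.

```python
import numpy as np

def fit_phase(samples, t, f0):
    """Least-squares fit of `samples` to a sine of known frequency f0; returns the phase (rad)."""
    w = 2.0 * np.pi * f0
    basis = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
    a, b, _ = np.linalg.lstsq(basis, samples, rcond=None)[0]
    return np.arctan2(b, a)

def allan_deviation(x, tau):
    """Non-overlapping Allan deviation from time-error samples x spaced tau apart."""
    d2 = x[2:] - 2.0 * x[1:-1] + x[:-2]
    return np.sqrt(np.mean(d2 ** 2) / (2.0 * tau ** 2))

f0, fs, tau = 10e6, 100e6, 1e-4            # 10 MHz signal, 100 MS/s ADC, 0.1 ms fit blocks
n_blocks = 200
t = np.arange(int(fs * tau)) / fs
rng = np.random.default_rng(2)

# Blocks are generated independently here, so the result reflects only the fit noise floor.
phase_err = []
for _ in range(n_blocks):
    sig = np.sin(2.0 * np.pi * f0 * t) + 0.01 * rng.normal(size=t.size)
    ref = np.sin(2.0 * np.pi * f0 * t) + 0.01 * rng.normal(size=t.size)
    phase_err.append(fit_phase(sig, t, f0) - fit_phase(ref, t, f0))

x = np.unwrap(phase_err) / (2.0 * np.pi * f0)    # phase difference -> time error in seconds
print(f"residual Allan deviation at tau = {tau:g} s: {allan_deviation(x, tau):.3e}")
```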

  3. Fracture Toughness Prediction under Compressive Residual Stress by Using a Stress-Distribution T-Scaling Method

    Directory of Open Access Journals (Sweden)

    Toshiyuki Meshii

    2017-12-01

    Full Text Available The improvement in the fracture toughness Jc of a material in the ductile-to-brittle transition temperature region due to compressive residual stress (CRS) was considered in this study. A straightforward fracture prediction was performed for a specimen with mechanical CRS by using the T-scaling method, which was originally proposed to scale the fracture stress distributions between different temperatures. The method was validated for a 780-MPa-class high-strength steel and 0.45% carbon steel. The results showed that the scaled stress distributions at fracture loads without and with CRS are the same, and that Jc improvement was caused by the loss in the one-to-one correspondence between J and the crack-tip stress distribution. The proposed method is advantageous in possibly predicting fracture loads for specimens with CRS by using only the stress–strain relationship, and by performing elastic-plastic finite element analysis, i.e., without performing fracture toughness testing on specimens without CRS.

  4. Prediction of hot spot residues at protein-protein interfaces by combining machine learning and energy-based methods

    Directory of Open Access Journals (Sweden)

    Pontil Massimiliano

    2009-10-01

    Full Text Available Abstract Background Alanine scanning mutagenesis is a powerful experimental methodology for investigating the structural and energetic characteristics of protein complexes. Individual amino-acids are systematically mutated to alanine and changes in free energy of binding (ΔΔG) measured. Several experiments have shown that protein-protein interactions are critically dependent on just a few residues ("hot spots") at the interface. Hot spots make a dominant contribution to the free energy of binding and if mutated they can disrupt the interaction. As mutagenesis studies require significant experimental efforts, there is a need for accurate and reliable computational methods. Such methods would also add to our understanding of the determinants of affinity and specificity in protein-protein recognition. Results We present a novel computational strategy to identify hot spot residues, given the structure of a complex. We consider the basic energetic terms that contribute to hot spot interactions, i.e. van der Waals potentials, solvation energy, hydrogen bonds and Coulomb electrostatics. We treat them as input features and use machine learning algorithms such as Support Vector Machines and Gaussian Processes to optimally combine and integrate them, based on a set of training examples of alanine mutations. We show that our approach is effective in predicting hot spots and it compares favourably to other available methods. In particular we find the best performances using Transductive Support Vector Machines, a semi-supervised learning scheme. When hot spots are defined as those residues for which ΔΔG ≥ 2 kcal/mol, our method achieves a precision and a recall respectively of 56% and 65%. Conclusion We have developed a hybrid scheme in which energy terms are used as input features of machine learning models. This strategy combines the strengths of machine learning and energy-based methods. Although so far these two types of approaches have mainly been

  5. Prospective study on ultrasonography plus plain radiography in predicting residual obstruction after extracorporeal shock wave lithotripsy for ureteral stones.

    Science.gov (United States)

    Cheung, M C; Leung, Y L; Wong, B B W; Chu, S M; Lee, F; Tam, P C

    2002-03-01

    To compare ultrasonography (US) and plain radiography with intravenous urography (IVU) in predicting ureteral obstruction after in situ extracorporeal shock wave lithotripsy (ESWL) for ureteral stones. From April 1998 to September 2000, 100 consecutive patients with solitary ureteral stones were treated by primary in situ ESWL. ESWL failures were salvaged by ureteroscopic lithotripsy. Ninety-three patients completed the follow-up assessment. US and IVU were performed when plain radiography showed no residual stone. The occurrence of hydronephrosis on US was compared with IVU, the reference standard for ureteral obstruction. Of the 93 patients, 72 were men and 21 women (mean age 52 years; mean stone size 11.2 mm). ESWL successfully treated 70 ureteral stones (75%), and the 23 failures were treated by ureteroscopic lithotripsy. Sixty-nine patients without hydronephrosis on US had no ureteral obstruction on IVU. Of the 24 patients who had hydronephrosis on US, 8 had ureteral obstruction on IVU. Of the 85 patients who had no ureteral obstruction on IVU, 69 patients showed no evidence of hydronephrosis on US. However, all patients with ureteral obstruction on IVU demonstrated hydronephrosis on US. The sensitivity, specificity, and positive and negative predictive values concerning sonographic hydronephrosis in the prediction of ureteral obstruction were 100%, 81%, 33%, and 100%, respectively. US alone could not define the cause of ureteral obstruction. Plain abdominal radiography plus US is highly sensitive for screening ureteral obstruction after primary in situ ESWL for ureteral calculi. It can save up to 74% of patients from the potential risk of IVU. The detection of the cause of obstruction by IVU is only necessary when sonographic evidence of hydronephrosis is present.
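
    The reported diagnostic metrics follow directly from the counts given in the abstract (8 true positives, 16 false positives, 69 true negatives, 0 false negatives), as the short calculation below shows.

```python
# Counts taken from the abstract (IVU as the reference standard).
tp = 8        # obstruction on IVU, hydronephrosis on US
fn = 0        # obstruction on IVU, no hydronephrosis on US
fp = 24 - 8   # hydronephrosis on US but no obstruction on IVU
tn = 69       # neither hydronephrosis nor obstruction

sensitivity = tp / (tp + fn)   # 8/8   = 100%
specificity = tn / (tn + fp)   # 69/85 = 81%
ppv = tp / (tp + fp)           # 8/24  = 33%
npv = tn / (tn + fn)           # 69/69 = 100%
print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}, PPV {ppv:.0%}, NPV {npv:.0%}")
```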

  6. Statistical inference on variance components

    NARCIS (Netherlands)

    Verdooren, L.R.

    1988-01-01

    In several sciences but especially in animal and plant breeding, the general mixed model with fixed and random effects plays a great role. Statistical inference on variance components means tests of hypotheses about variance components, constructing confidence intervals for them, estimating them,

  7. Residue contacts predicted by evolutionary covariance extend the application of ab initio molecular replacement to larger and more challenging protein folds

    OpenAIRE

    Simkovic, Felix; Thomas, Jens M. H.; Keegan, Ronan M.; Winn, Martyn D.; Mayans, Olga; Rigden, Daniel J.

    2016-01-01

    For many protein families, the deluge of new sequence information together with new statistical protocols now allow the accurate prediction of contacting residues from sequence information alone. This offers the possibility of more accurate ab initio (non-homology-based) structure prediction. Such models can be used in structure solution by molecular replacement (MR) where the target fold is novel or is only distantly related to known structures. Here, AMPLE, an MR pipeline that assembles sea...

  8. Angiographically demonstrated coronary collaterals predict residual viable myocardium in patients with chronic myocardial infarction. A regional metabolic study

    International Nuclear Information System (INIS)

    Fukai, Masumi; Ii, Masaaki; Nakakoji, Takahiro

    2000-01-01

    Angiographical demonstration of coronary collateral circulation may suggest the presence of residual viable myocardium. The development of coronary collaterals was judged according to Rentrop's classification in 37 patients with old anteroseptal myocardial infarction and 13 control patients with chest pain syndrome. The subjects with myocardial infarction were divided into 2 groups: 17 patients with the main branch of the left coronary artery clearly identified by collateral blood flow from the contralateral coronary artery [Coll (+) group, male/female 10/7, mean age 56.6 years] and 20 patients with obscure coronary trunk [Coll (-) group, male/female 16/4, mean age 54.9 years]. Thallium-201 myocardial scintigraphy and examination of local myocardial metabolism were carried out by measuring the flux of lactic acid under dipyridamole infusion load. Coronary stenosis of 99% or total occlusion was found in only 5 of 20 patients (25%) in the Coll (-) group but in 16 of 17 patients (94%) in the Coll (+) group (p<0.001). Redistribution of myocardial scintigraphy was found in 11 of 15 patients (73%) in the Coll (+) group, but only 3 of 18 patients (17%) in the Coll (-) group (p<0.01). The myocardial lactic acid extraction rate was -13.2±17.0% in the Coll (+) group, but 9.1±13.2% in the Coll (-) group (p<0.001). These results suggest that coronary collateral may contribute to minimizing the infarct area and to prediction of the presence of viable myocardium. (author)

  9. Genetic variance components for residual feed intake and feed ...

    African Journals Online (AJOL)

    admin

    Feeding costs of animals are a major determinant of profitability in livestock production enterprises. Genetic selection to improve feed ... The following assumptions were made to simulate the profit value and to create a comparable basis for statistical analyses: ...

  10. Means and Variances without Calculus

    Science.gov (United States)

    Kinney, John J.

    2005-01-01

    This article gives a method of finding discrete approximations to continuous probability density functions and shows examples of its use, allowing students without calculus access to the calculation of means and variances.
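
    The idea can be sketched as follows (a generic discretization, not necessarily the article's exact construction): evaluate the density on a fine grid, normalize the values into weights, and compute the mean and variance as weighted sums.

```python
import numpy as np

def discrete_mean_var(pdf, lo, hi, n=1000):
    """Approximate the mean and variance of a continuous density by a discrete distribution
    on an evenly spaced grid, with weights proportional to the density values."""
    x = np.linspace(lo, hi, n)
    w = pdf(x)
    w = w / w.sum()                       # normalize so the weights sum to 1
    mean = np.sum(w * x)
    var = np.sum(w * (x - mean) ** 2)
    return mean, var

# Example: exponential density with rate 2 (true mean 0.5, true variance 0.25).
mean, var = discrete_mean_var(lambda x: 2.0 * np.exp(-2.0 * x), 0.0, 10.0)
print(f"approximate mean {mean:.4f}, approximate variance {var:.4f}")
```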

  11. A Theoretical Study on Quantitative Prediction and Evaluation of Thermal Residual Stresses in Metal Matrix Composite (Case 1 : Two-Dimensional In-Plane Fiber Distribution)

    International Nuclear Information System (INIS)

    Lee, Joon Hyun; Son, Bong Jin

    1997-01-01

    Although discontinuously reinforced metal matrix composite (MMC) is one of the most promising materials for applications in the aerospace and automotive industries, the thermal residual stresses developed in the MMC due to the mismatch in coefficients of thermal expansion between the matrix and the fiber under a temperature change have been pointed out as one of the serious problems in practical applications. There are very limited nondestructive techniques to measure the residual stress of composite materials. However, many difficulties have been reported in their applications. Therefore, it is important to establish an analytical model to evaluate the thermal residual stress of MMC for practical engineering application. In this study, an elastic model is developed to predict the average thermal residual stresses in the matrix and fiber of a misoriented short fiber composite. The thermal residual stresses are induced by the mismatch in the coefficient of the thermal expansion of the matrix and fiber when the composite is subjected to a uniform temperature change. The model considers two-dimensional in-plane fiber misorientation. The analytical formulation of the model is based on Eshelby's equivalent inclusion method and is unique in that it is able to account for interactions among fibers. This model is more general than past models and can be used to investigate the effect of parameters that might influence thermal residual stress in composites. The present model is used to investigate the effects of fiber volume fraction, distribution type, distribution cut-off angle, and aspect ratio on thermal residual stress for in-plane fiber misorientation. Fiber volume fraction, aspect ratio, and distribution cut-off angle are shown to have more significant effects on the magnitude of the thermal residual stresses than fiber distribution type for in-plane misorientation.

  12. Residue contacts predicted by evolutionary covariance extend the application of ab initio molecular replacement to larger and more challenging protein folds.

    Science.gov (United States)

    Simkovic, Felix; Thomas, Jens M H; Keegan, Ronan M; Winn, Martyn D; Mayans, Olga; Rigden, Daniel J

    2016-07-01

    For many protein families, the deluge of new sequence information together with new statistical protocols now allow the accurate prediction of contacting residues from sequence information alone. This offers the possibility of more accurate ab initio (non-homology-based) structure prediction. Such models can be used in structure solution by molecular replacement (MR) where the target fold is novel or is only distantly related to known structures. Here, AMPLE, an MR pipeline that assembles search-model ensembles from ab initio structure predictions ('decoys'), is employed to assess the value of contact-assisted ab initio models to the crystallographer. It is demonstrated that evolutionary covariance-derived residue-residue contact predictions improve the quality of ab initio models and, consequently, the success rate of MR using search models derived from them. For targets containing β-structure, decoy quality and MR performance were further improved by the use of a β-strand contact-filtering protocol. Such contact-guided decoys achieved 14 structure solutions from 21 attempted protein targets, compared with nine for simple Rosetta decoys. Previously encountered limitations were superseded in two key respects. Firstly, much larger targets of up to 221 residues in length were solved, which is far larger than the previously benchmarked threshold of 120 residues. Secondly, contact-guided decoys significantly improved success with β-sheet-rich proteins. Overall, the improved performance of contact-guided decoys suggests that MR is now applicable to a significantly wider range of protein targets than were previously tractable, and points to a direct benefit to structural biology from the recent remarkable advances in sequencing.

  13. The Impact of Truth Surrogate Variance on Quality Assessment/Assurance in Wind Tunnel Testing

    Science.gov (United States)

    DeLoach, Richard

    2016-01-01

    Minimum data volume requirements for wind tunnel testing are reviewed and shown to depend on error tolerance, response model complexity, random error variance in the measurement environment, and maximum acceptable levels of inference error risk. Distinctions are made between such related concepts as quality assurance and quality assessment in response surface modeling, as well as between precision and accuracy. Earlier research on the scaling of wind tunnel tests is extended to account for variance in the truth surrogates used at confirmation sites in the design space to validate proposed response models. A model adequacy metric is presented that represents the fraction of the design space within which model predictions can be expected to satisfy prescribed quality specifications. The impact of inference error on the assessment of response model residuals is reviewed. The number of sites where reasonably well-fitted response models actually predict inadequately is shown to be considerably less than the number of sites where residuals are out of tolerance. The significance of such inference error effects on common response model assessment strategies is examined.

  14. Seeing the signs: Using the course of residual depressive symptomatology to predict patterns of relapse and recurrence of major depressive disorder.

    Science.gov (United States)

    Verhoeven, Floor E A; Wardenaar, Klaas J; Ruhé, Henricus G Eric; Conradi, Henk Jan; de Jonge, Peter

    2018-02-01

    Major depressive disorder (MDD) is characterized by high relapse/recurrence rates. Predicting individual patients' relapse/recurrence risk has proven hard, possibly due to course heterogeneity among patients. This study aimed to (1) identify homogeneous data-driven subgroups with different patterns of relapse/recurrence and (2) identify associated predictors. For a year, we collected weekly depressive symptom ratings in 213 primary care MDD patients. Latent class growth analyses (LCGA), based on symptom-severity during the 24 weeks after no longer fulfilling criteria for the initial major depressive episode (MDE), were used to identify groups with different patterns of relapse/recurrence. Associations of baseline predictors with these groups were investigated, as were the groups' associations with 3- and 11-year follow-up depression outcomes. LCGA showed that heterogeneity in relapse/recurrence after no longer fulfilling criteria for the initial MDE was best described by four classes: "quick symptom decline" (14.0%), "slow symptom decline" (23.3%), "steady residual symptoms" (38.7%), and "high residual symptoms" (24.1%). The latter two classes showed lower self-esteem at baseline, and more recurrences and higher severity at 3-year follow-up than the first two classes. Moreover, the high residual symptom class scored higher on neuroticism and lower on extraversion and self-esteem at baseline. Interestingly, the steady residual symptoms and high residual symptoms classes still showed higher severity of depressive symptoms after 11 years. Some measures were associated with specific patterns of relapse/recurrence. Moreover, the data-driven relapse/recurrence groups were predictive of long-term outcomes, suggesting that patterns of residual symptoms could be of prognostic value in clinical practice. © 2017 Wiley Periodicals, Inc.

  15. Modelling volatility by variance decomposition

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the variance of the model to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterisations describe both nonlinearity and structural change in the condit...... illustrate the functioning and properties of our modelling strategy in practice. The results show that the long memory type behaviour of the sample autocorrelation functions of the absolute returns can also be explained by deterministic changes in the unconditional variance....

  16. Revision: Variance Inflation in Regression

    Directory of Open Access Journals (Sweden)

    D. R. Jensen

    2013-01-01

    the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reaccess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.
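
    For reference, the conventional VIFs discussed above can be computed as the diagonal of the inverse correlation matrix of the regressors; the sketch below shows this standard calculation (not the authors' extended indices) on a deliberately ill-conditioned example.

```python
import numpy as np

def variance_inflation_factors(X):
    """Conventional VIFs for the columns of a design matrix X (excluding the intercept):
    the diagonal of the inverse of the regressor correlation matrix."""
    R = np.corrcoef(X, rowvar=False)
    return np.diag(np.linalg.inv(R))

# Two nearly collinear regressors plus one orthogonal regressor.
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.05 * rng.normal(size=200)      # ill-conditioned pair -> large VIFs
x3 = rng.normal(size=200)
print(variance_inflation_factors(np.column_stack([x1, x2, x3])))
```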

  17. Residue contacts predicted by evolutionary covariance extend the application of ab initio molecular replacement to larger and more challenging protein folds

    Directory of Open Access Journals (Sweden)

    Felix Simkovic

    2016-07-01

    Full Text Available For many protein families, the deluge of new sequence information together with new statistical protocols now allow the accurate prediction of contacting residues from sequence information alone. This offers the possibility of more accurate ab initio (non-homology-based) structure prediction. Such models can be used in structure solution by molecular replacement (MR) where the target fold is novel or is only distantly related to known structures. Here, AMPLE, an MR pipeline that assembles search-model ensembles from ab initio structure predictions ('decoys'), is employed to assess the value of contact-assisted ab initio models to the crystallographer. It is demonstrated that evolutionary covariance-derived residue–residue contact predictions improve the quality of ab initio models and, consequently, the success rate of MR using search models derived from them. For targets containing β-structure, decoy quality and MR performance were further improved by the use of a β-strand contact-filtering protocol. Such contact-guided decoys achieved 14 structure solutions from 21 attempted protein targets, compared with nine for simple Rosetta decoys. Previously encountered limitations were superseded in two key respects. Firstly, much larger targets of up to 221 residues in length were solved, which is far larger than the previously benchmarked threshold of 120 residues. Secondly, contact-guided decoys significantly improved success with β-sheet-rich proteins. Overall, the improved performance of contact-guided decoys suggests that MR is now applicable to a significantly wider range of protein targets than were previously tractable, and points to a direct benefit to structural biology from the recent remarkable advances in sequencing.

  18. Variance Risk Premia on Stocks and Bonds

    DEFF Research Database (Denmark)

    Mueller, Philippe; Sabtchevsky, Petar; Vedolin, Andrea

    We study equity (EVRP) and Treasury variance risk premia (TVRP) jointly and document a number of findings: First, relative to their volatility, TVRP are comparable in magnitude to EVRP. Second, while there is mild positive co-movement between EVRP and TVRP unconditionally, time series estimates...... of correlation display distinct spikes in both directions and have been notably volatile since the financial crisis. Third $(i)$ short maturity TVRP predict excess returns on short maturity bonds; $(ii)$ long maturity TVRP and EVRP predict excess returns on long maturity bonds; and $(iii)$ while EVRP predict...... equity returns for horizons up to 6-months, long maturity TVRP contain robust information for long run equity returns. Finally, exploiting the dynamics of real and nominal Treasuries we document that short maturity break-even rates are a power determinant of the joint dynamics of EVRP, TVRP and their co...

  19. Residual deposits (residual soil)

    International Nuclear Information System (INIS)

    Khasanov, A.Kh.

    1988-01-01

    Residual soil deposits are accumulations of newly formed ore minerals at the earth's surface that arise as a result of the chemical decomposition of rocks. As is well known, in the hypergene zone chemical weathering of rocks takes place under the influence of different factors (water, carbonic acid, organic acids, oxygen, microorganism activity). The formation of residual soil deposits depends on a complex of geologic and climatic factors, as well as on the composition and the physical and chemical properties of the initial rocks.

  20. Genetic variance in micro-environmental sensitivity for milk and milk quality in Walloon Holstein cattle.

    Science.gov (United States)

    Vandenplas, J; Bastin, C; Gengler, N; Mulder, H A

    2013-09-01

    Animals that are robust to environmental changes are desirable in the current dairy industry. Genetic differences in micro-environmental sensitivity can be studied through heterogeneity of residual variance between animals. However, residual variance between animals is usually assumed to be homogeneous in traditional genetic evaluations. The aim of this study was to investigate genetic heterogeneity of residual variance by estimating variance components in residual variance for milk yield, somatic cell score, contents in milk (g/dL) of 2 groups of milk fatty acids (i.e., saturated and unsaturated fatty acids), and the content in milk of one individual fatty acid (i.e., oleic acid, C18:1 cis-9), for first-parity Holstein cows in the Walloon Region of Belgium. A total of 146,027 test-day records from 26,887 cows in 747 herds were available. All cows had at least 3 records and a known sire. These sires had at least 10 cows with records and each herd × test-day had at least 5 cows. The 5 traits were analyzed separately based on fixed lactation curve and random regression test-day models for the mean. Estimation of variance components was performed by iteratively running the expectation maximization-REML algorithm through the implementation of double hierarchical generalized linear models. Based on fixed lactation curve test-day mean models, heritability for residual variances ranged between 1.01×10^-3 and 4.17×10^-3 for all traits. The genetic standard deviation in residual variance (i.e., approximately the genetic coefficient of variation of residual variance) ranged between 0.12 and 0.17. Therefore, some genetic variance in micro-environmental sensitivity existed in the Walloon Holstein dairy cattle for the 5 studied traits. The standard deviations due to herd × test-day and permanent environment in residual variance ranged between 0.36 and 0.45 for herd × test-day effect and between 0.55 and 0.97 for permanent environmental effect. Therefore, nongenetic effects also

  1. Analysis of Margin Index as a Method for Predicting Residual Disease After Breast-Conserving Surgery in a European Cancer Center.

    LENUS (Irish Health Repository)

    Bolger, Jarlath C

    2011-06-03

    INTRODUCTION: Breast-conserving surgery (BCS), followed by appropriate adjuvant therapies is established as a standard treatment option for women with early-stage invasive breast cancers. A number of factors have been shown to correlate with local and regional disease recurrence. Although margin status is a strong predictor of disease recurrence, consensus is yet to be established on the optimum margin necessary. Margenthaler et al. recently proposed the use of a "margin index," combining tumor size and margin status as a predictor of residual disease after BCS. We applied this new predictive tool to a population of patients with primary breast cancer who presented to a symptomatic breast unit to determine its suitability in predicting those who require reexcision surgery. METHODS: Retrospective analysis of our breast cancer database from January 1, 2000 to June 30, 2010 was performed, including all patients who underwent BCS. Of 531 patients who underwent BCS, 27.1% (144/531) required further reexcision procedures, and 55 were eligible for inclusion in the study. Margin index was calculated as: margin index = closest margin (mm)/tumor size (mm) × 100, with index >5 considered optimum. RESULTS: Of the 55 patients included, 31% (17/55) had residual disease. Fisher's exact test showed margin index not to be a significant predictor of residual disease on reexcision specimen (P = 0.57). Of note, a significantly higher proportion of our patients presented with T2/3 tumors (60% vs. 38%). CONCLUSIONS: Although an apparently elegant tool for predicting residual disease after BCS, we have shown that it is not applicable to a symptomatic breast unit in Ireland.
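
    The margin index formula quoted above is easy to evaluate; the snippet below applies it to a hypothetical case (the margin and tumor size are made up for illustration).

```python
def margin_index(closest_margin_mm, tumor_size_mm):
    """Margin index as defined in the abstract: closest margin (mm) / tumor size (mm) x 100."""
    return closest_margin_mm / tumor_size_mm * 100.0

# Hypothetical case: a 2 mm closest margin around a 25 mm tumor.
idx = margin_index(2.0, 25.0)
print(f"margin index = {idx:.0f} -> {'optimum (>5)' if idx > 5 else 'not optimum'}")
```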

  2. Analysis of margin index as a method for predicting residual disease after breast-conserving surgery in a European cancer center.

    LENUS (Irish Health Repository)

    Bolger, Jarlath C

    2012-02-01

    INTRODUCTION: Breast-conserving surgery (BCS), followed by appropriate adjuvant therapies is established as a standard treatment option for women with early-stage invasive breast cancers. A number of factors have been shown to correlate with local and regional disease recurrence. Although margin status is a strong predictor of disease recurrence, consensus is yet to be established on the optimum margin necessary. Margenthaler et al. recently proposed the use of a "margin index," combining tumor size and margin status as a predictor of residual disease after BCS. We applied this new predictive tool to a population of patients with primary breast cancer who presented to a symptomatic breast unit to determine its suitability in predicting those who require reexcision surgery. METHODS: Retrospective analysis of our breast cancer database from January 1, 2000 to June 30, 2010 was performed, including all patients who underwent BCS. Of 531 patients who underwent BCS, 27.1% (144/531) required further reexcision procedures, and 55 were eligible for inclusion in the study. Margin index was calculated as: margin index = closest margin (mm)/tumor size (mm) x 100, with index >5 considered optimum. RESULTS: Of the 55 patients included, 31% (17/55) had residual disease. Fisher's exact test showed margin index not to be a significant predictor of residual disease on reexcision specimen (P = 0.57). Of note, a significantly higher proportion of our patients presented with T2/3 tumors (60% vs. 38%). CONCLUSIONS: Although an apparently elegant tool for predicting residual disease after BCS, we have shown that it is not applicable to a symptomatic breast unit in Ireland.

  3. Variance based OFDM frame synchronization

    Directory of Open Access Journals (Sweden)

    Z. Fedra

    2012-04-01

    Full Text Available The paper deals with a new frame synchronization scheme for OFDM systems and calculates the complexity of this scheme. The scheme is based on the computing of the detection window variance. The variance is computed in two delayed times, so a modified Early-Late loop is used for the frame position detection. The proposed algorithm deals with different variants of OFDM parameters including guard interval, cyclic prefix, and has good properties regarding the choice of the algorithm's parameters since the parameters may be chosen within a wide range without having a high influence on system performance. The verification of the proposed algorithm functionality has been performed on a development environment using universal software radio peripheral (USRP) hardware.
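
    One plausible variance-based detector, not necessarily the exact scheme of the paper, exploits the cyclic prefix: the variance of the difference between the received signal and its copy delayed by the useful symbol length dips at the symbol boundary. A sketch with synthetic OFDM symbols:

```python
import numpy as np

def detect_frame_start(rx, n_fft, n_cp):
    """Sliding variance of rx[n] - rx[n + n_fft] over a CP-length window; inside the cyclic
    prefix the two windows carry the same samples, so the metric dips at the symbol boundary."""
    power = np.abs(rx[:-n_fft] - rx[n_fft:]) ** 2
    metric = np.convolve(power, np.ones(n_cp) / n_cp, mode="valid")
    # The boundary repeats every n_fft + n_cp samples, so search one symbol period.
    return int(np.argmin(metric[: n_fft + n_cp]))

rng = np.random.default_rng(3)
n_fft, n_cp, n_sym, lead_in = 64, 16, 20, 37

def ofdm_symbol():
    x = np.fft.ifft(rng.choice([-1.0, 1.0], n_fft) + 1j * rng.choice([-1.0, 1.0], n_fft))
    return np.concatenate([x[-n_cp:], x])          # prepend the cyclic prefix

rx = np.concatenate([0.1 * rng.normal(size=lead_in)]
                    + [ofdm_symbol() for _ in range(n_sym)])
rx = rx + 0.01 * (rng.normal(size=rx.size) + 1j * rng.normal(size=rx.size))
print("estimated frame start:", detect_frame_start(rx, n_fft, n_cp), "| true start:", lead_in)
```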

  4. Variance decomposition in stochastic simulators

    KAUST Repository

    Le Maître, O. P.

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  5. Estimating the Modified Allan Variance

    Science.gov (United States)

    Greenhall, Charles

    1995-01-01

    The third-difference approach to modified Allan variance (MVAR) leads to a tractable formula for a measure of MVAR estimator confidence, the equivalent degrees of freedom (edf), in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. A simple approximation for edf is given, and its errors are tabulated. A theorem allowing conservative estimates of edf in the presence of compound noise processes is given.

  6. Identification of family-specific residue packing motifs and their use for structure-based protein function prediction: I. Method development

    Science.gov (United States)

    Bandyopadhyay, Deepak; Huan, Jun; Prins, Jan; Snoeyink, Jack; Wang, Wei; Tropsha, Alexander

    2009-11-01

    Protein function prediction is one of the central problems in computational biology. We present a novel automated protein structure-based function prediction method using libraries of local residue packing patterns that are common to most proteins in a known functional family. Critical to this approach is the representation of a protein structure as a graph where residue vertices (residue name used as a vertex label) are connected by geometrical proximity edges. The approach employs two steps. First, it uses a fast subgraph mining algorithm to find all occurrences of family-specific labeled subgraphs for all well characterized protein structural and functional families. Second, it queries a new structure for occurrences of a set of motifs characteristic of a known family, using a graph index to speed up Ullman's subgraph isomorphism algorithm. The confidence of function inference from structure depends on the number of family-specific motifs found in the query structure compared with their distribution in a large non-redundant database of proteins. This method can assign a new structure to a specific functional family in cases where sequence alignments, sequence patterns, structural superposition and active site templates fail to provide accurate annotation.
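
    The graph representation described above can be sketched as follows: each residue becomes a labelled vertex and an edge is added whenever two residues lie within a distance cutoff. The cutoff, the coordinates, and the use of C-alpha atoms here are illustrative assumptions; the subgraph mining and graph-index steps are not shown.

```python
from itertools import combinations
import math

def residue_graph(residues, cutoff=8.0):
    """Labelled proximity graph: one vertex per residue (label = residue name) and an edge
    whenever the representative atoms of two residues lie within `cutoff` angstroms."""
    vertices = {i: name for i, (name, _) in enumerate(residues)}
    edges = {(i, j)
             for (i, (_, a)), (j, (_, b)) in combinations(enumerate(residues), 2)
             if math.dist(a, b) <= cutoff}
    return vertices, edges

# Toy input of (residue name, C-alpha coordinate) pairs; real input would come from a PDB parser.
residues = [("HIS", (0.0, 0.0, 0.0)),
            ("ASP", (3.8, 0.5, 0.2)),
            ("SER", (7.1, 1.0, 0.4)),
            ("GLY", (20.0, 5.0, 1.0))]
vertices, edges = residue_graph(residues)
print(vertices)
print(sorted(edges))
```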

  7. Estimation of analysis and forecast error variances

    Directory of Open Access Journals (Sweden)

    Malaquias Peña

    2014-11-01

    Full Text Available Accurate estimates of error variances in numerical analyses and forecasts (i.e. difference between analysis or forecast fields and nature on the resolved scales) are critical for the evaluation of forecasting systems, the tuning of data assimilation (DA) systems and the proper initialisation of ensemble forecasts. Errors in observations and the difficulty in their estimation, the fact that estimates of analysis errors derived via DA schemes are influenced by the same assumptions as those used to create the analysis fields themselves, and the presumed but unknown correlation between analysis and forecast errors make the problem difficult. In this paper, an approach is introduced for the unbiased estimation of analysis and forecast errors. The method is independent of any assumption or tuning parameter used in DA schemes. The method combines information from differences between forecast and analysis fields (‘perceived forecast errors’) with prior knowledge regarding the time evolution of (1) forecast error variance and (2) correlation between errors in analyses and forecasts. The quality of the error estimates, given the validity of the prior relationships, depends on the sample size of independent measurements of perceived errors. In a simulated forecast environment, the method is demonstrated to reproduce the true analysis and forecast error within predicted error bounds. The method is then applied to forecasts from four leading numerical weather prediction centres to assess the performance of their corresponding DA and modelling systems. Error variance estimates are qualitatively consistent with earlier studies regarding the performance of the forecast systems compared. The estimated correlation between forecast and analysis errors is found to be a useful diagnostic of the performance of observing and DA systems. In case of significant model-related errors, a methodology to decompose initial value and model-related forecast errors is also

  8. Impact of Damping Uncertainty on SEA Model Response Variance

    Science.gov (United States)

    Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand

    2010-01-01

    Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However these techniques do not account for uncertainties in the system properties. In the present paper uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.

  9. Beyond the Mean: Sensitivities of the Variance of Population Growth.

    Science.gov (United States)

    Trotter, Meredith V; Krishna-Kumar, Siddharth; Tuljapurkar, Shripad

    2013-03-01

    Populations in variable environments are described by both a mean growth rate and a variance of stochastic population growth. Increasing variance will increase the width of confidence bounds around estimates of population size, growth, probability of and time to quasi-extinction. However, traditional sensitivity analyses of stochastic matrix models only consider the sensitivity of the mean growth rate. We derive an exact method for calculating the sensitivity of the variance in population growth to changes in demographic parameters. Sensitivities of the variance also allow a new sensitivity calculation for the cumulative probability of quasi-extinction. We apply this new analysis tool to an empirical dataset on at-risk polar bears to demonstrate its utility in conservation biology. We find that in many cases a change in life history parameters will increase both the mean and variance of population growth of polar bears. This counterintuitive behaviour of the variance complicates predictions about overall population impacts of management interventions. Sensitivity calculations for cumulative extinction risk factor in changes to both mean and variance, providing a highly useful quantitative tool for conservation management. The mean stochastic growth rate and its sensitivities do not fully describe the dynamics of population growth. The use of variance sensitivities gives a more complete understanding of population dynamics and facilitates the calculation of new sensitivities for extinction processes.

  10. Construction of a predictive model for concentration of nickel and vanadium in vacuum residues of crude oils using artificial neural networks and LIBS.

    Science.gov (United States)

    Tarazona, José L; Guerrero, Jáder; Cabanzo, Rafael; Mejía-Ospino, E

    2012-03-01

    A predictive model to determine the concentration of nickel and vanadium in vacuum residues of Colombian crude oils using laser-induced breakdown spectroscopy (LIBS) and artificial neural networks (ANNs) with nodes distributed in multiple layers (multilayer perceptron) is presented. ANN inputs are intensity values in the vicinity of the emission lines 300.248, 301.200 and 305.081 nm of the Ni(I), and 309.310, 310.229, and 311.070 nm of the V(II). The effects of varying number of nodes and the initial weights and biases in the ANNs were systematically explored. Average relative error of calibration/prediction (REC/REP) and average relative standard deviation (RSD) metrics were used to evaluate the performance of the ANN in the prediction of concentrations of two elements studied here. © 2012 Optical Society of America

  11. A multicenter assessment of the ability of preoperative computed tomography scan and CA-125 to predict gross residual disease at primary debulking for advanced epithelial ovarian cancer.

    Science.gov (United States)

    Suidan, Rudy S; Ramirez, Pedro T; Sarasohn, Debra M; Teitcher, Jerrold B; Iyer, Revathy B; Zhou, Qin; Iasonos, Alexia; Denesopolis, John; Zivanovic, Oliver; Long Roche, Kara C; Sonoda, Yukio; Coleman, Robert L; Abu-Rustum, Nadeem R; Hricak, Hedvig; Chi, Dennis S

    2017-04-01

    To assess the ability of preoperative computed tomography scan and CA-125 to predict gross residual disease (RD) at primary cytoreduction in advanced ovarian cancer. A prospective, non-randomized, multicenter trial of patients who underwent primary debulking for stage III-IV epithelial ovarian cancer previously identified 9 criteria associated with suboptimal (>1cm residual) cytoreduction. This is a secondary post-hoc analysis looking at the ability to predict any RD. Four clinical and 18 radiologic criteria were assessed, and a multivariate model predictive of RD was developed. From 7/2001-12/2012, 350 patients met eligibility criteria. The complete gross resection rate was 33%. On multivariate analysis, 3 clinical and 8 radiologic criteria were significantly associated with the presence of any RD: age≥60years (OR=1.5); CA-125≥600U/mL (OR=1.3); ASA 3-4 (OR=1.6); lesions in the root of the superior mesenteric artery (OR=4.1), splenic hilum/ligaments (OR=1.4), lesser sac >1cm (OR=2.2), gastrohepatic ligament/porta hepatis (OR=1.4), gallbladder fossa/intersegmental fissure (OR=2); suprarenal retroperitoneal lymph nodes (OR=1.3); small bowel adhesions/thickening (OR=1.1); and moderate-severe ascites (OR=2.2). All ORs were significant with p<0.01. A 'predictive score' was assigned to each criterion based on its multivariate OR, and the rate of having any RD for patients who had a total score of 0-2, 3-5, 6-8, and ≥9 was 45%, 68%, 87%, and 96%, respectively. We identified 11 criteria associated with RD, and developed a predictive model in which the rate of having any RD was directly proportional to a predictive score. This model may be helpful in treatment planning. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Effect of Residual Stresses and Prediction of Possible Failure Mechanisms on Thermal Barrier Coating System by Finite Element Method

    Science.gov (United States)

    Ranjbar-Far, M.; Absi, J.; Mariaux, G.; Shahidi, S.

    2010-09-01

    This work is focused on the effect of the residual stresses resulting from the coating process and thermal cycling on the failure mechanisms within the thermal barrier coating (TBC) system. To reach this objective, we studied the effect of the substrate preheating and cooling rate on the coating process conditions. A new thermomechanical finite element model (FEM) considering a nonhomogeneous temperature distribution has been developed. In the results, we observed a critical stress corresponding to a low substrate temperature and high cooling rate during spraying of the top-coat material. Moreover, the analysis of the stress distribution after service shows that more critical stresses are obtained in the case where residual stresses are taken into account.

  13. Quantitative prediction of residual wetting film generated in mobilizing a two-phase liquid in a capillary model

    Directory of Open Access Journals (Sweden)

    Harsh Joshi

    2015-12-01

    Full Text Available This research studies the motion of immiscible two-phase liquid flow in a capillary tube through a numerical approach employing the volume of fluid method, for simulating the core-annular flow and water flooding in oil reservoirs of porous media. More specifically, the simulations are a representation of water flooding at a pore scale. A capillary tube model is established with ANSYS Fluent and verified. The numerical results matches well with the existing data available in the literature. Penetration of a less viscous liquid in a liquid of higher viscosity and the development of a residual wetting film of the higher viscosity liquid are thoroughly investigated. The effects of Capillary number, Reynolds Number and Viscosity ratio on the residual wetting film are studied in detail, as the thickness is directly related to the residual oil left in the porous media after water flooding. It should be noticed that the liquids considered in this research can be any liquids of different viscosity not necessarily oil and water. The results of this study can be used as guidance in the field of water flooding.

  14. Predicting the functionally distinct residues in the heme, cation, and substrate-binding sites of peroxidase from stress-tolerant mangrove specie, Avicennia marina.

    Science.gov (United States)

    Jabeen, Uzma; Abbasi, Atiya; Salim, Asmat

    2011-11-01

    Recent work was conducted to predict the structure of functionally distinct regions of Avicennia marina peroxidase (AP) by using the structural coordinates of barley grains peroxidase as the template. This enzyme is utilized by all living organisms in many biosynthetic or degradable processes and in defense against oxidative stress. The homology model showed some distinct structural changes in the heme, calcium, and substrate-binding regions. Val53 was found to be an important coordinating residue between distal calcium ion and the distal heme site while Ser176 is coordinated to the proximal histidine through Ala174 and Leu172. Different ionic and hydrogen-bonded interactions were also observed in AP. Analyses of various substrate-enzyme interactions revealed that the substrate-binding pocket is provided by the residues His41, Phe70, Gly71, Asp138, His139, and Lys176; the latter three residues are not conserved in the peroxidase family. We have also performed structural comparison of the A. marina peroxidase with that of two class III salt-sensitive species, peanut and soybean. Four loop regions were found to have the largest structural deviation. The overall protein sequence was also analyzed for the presence of probable post-translational modification sites, and the functional significance of these sites was outlined.

  15. Genomic dissection and prediction of feed intake and residual feed intake traits using a longitudinal model in F2 chickens

    DEFF Research Database (Denmark)

    Begli, Hakimeh Emamgholi; Torshizi, Rasoul vaez; Masoudi, Ali Akbar

    2017-01-01

    -density single nucleotide polymorphism (SNP) genotypes, and to conduct a GWA study on longitudinal FI and residual feed intake (RFI) in a total of 312 chickens with phenotype and genotype in the F2 population. The GWA and GS studies reported in this paper were conducted using β-spline random regression... Feed efficiency traits (FETs) are important economic indicators in poultry production. Because feed intake (FI) is a time-dependent variable, longitudinal models can provide insights into the genetic basis of FET variation over time. It is expected that the application of longitudinal models

  16. Structural model of a putrescine-cadaverine permease from Trypanosoma cruzi predicts residues vital for transport and ligand binding.

    Science.gov (United States)

    Soysa, Radika; Venselaar, Hanka; Poston, Jacqueline; Ullman, Buddy; Hasne, Marie-Pierre

    2013-06-15

    The TcPOT1.1 gene from Trypanosoma cruzi encodes a high affinity putrescine-cadaverine transporter belonging to the APC (amino acid/polyamine/organocation) transporter superfamily. No experimental three-dimensional structure exists for any eukaryotic member of the APC family, and thus the structural determinants critical for function of these permeases are unknown. To elucidate the key residues involved in putrescine translocation and recognition by this APC family member, a homology model of TcPOT1.1 was constructed on the basis of the atomic co-ordinates of the Escherichia coli AdiC arginine/agmatine antiporter crystal structure. The TcPOT1.1 homology model consisted of 12 transmembrane helices with the first ten helices organized in two V-shaped antiparallel domains with discontinuities in the helical structures of transmembrane spans 1 and 6. The model suggests that Trp241 and a Glu247-Arg403 salt bridge participate in a gating system and that Asn245, Tyr148 and Tyr400 contribute to the putrescine-binding pocket. To test the validity of the model, 26 site-directed mutants were created and tested for their ability to transport putrescine and to localize to the parasite cell surface. These results support the robustness of the TcPOT1.1 homology model and reveal the importance of specific aromatic residues in the TcPOT1.1 putrescine-binding pocket.

  17. A Wavelet Perspective on the Allan Variance.

    Science.gov (United States)

    Percival, Donald B

    2016-04-01

    The origins of the Allan variance trace back 50 years ago to two seminal papers, one by Allan (1966) and the other by Barnes (1966). Since then, the Allan variance has played a leading role in the characterization of high-performance time and frequency standards. Wavelets first arose in the early 1980s in the geophysical literature, and the discrete wavelet transform (DWT) became prominent in the late 1980s in the signal processing literature. Flandrin (1992) briefly documented a connection between the Allan variance and a wavelet transform based upon the Haar wavelet. Percival and Guttorp (1994) noted that one popular estimator of the Allan variance-the maximal overlap estimator-can be interpreted in terms of a version of the DWT now widely referred to as the maximal overlap DWT (MODWT). In particular, when the MODWT is based on the Haar wavelet, the variance of the resulting wavelet coefficients-the wavelet variance-is identical to the Allan variance when the latter is multiplied by one-half. The theory behind the wavelet variance can thus deepen our understanding of the Allan variance. In this paper, we review basic wavelet variance theory with an emphasis on the Haar-based wavelet variance and its connection to the Allan variance. We then note that estimation theory for the wavelet variance offers a means of constructing asymptotically correct confidence intervals (CIs) for the Allan variance without reverting to the common practice of specifying a power-law noise type a priori. We also review recent work on specialized estimators of the wavelet variance that are of interest when some observations are missing (gappy data) or in the presence of contamination (rogue observations or outliers). It is a simple matter to adapt these estimators to become estimators of the Allan variance. Finally we note that wavelet variances based upon wavelets other than the Haar offer interesting generalizations of the Allan variance.
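
    The stated relationship, with the Haar-based wavelet variance equal to one-half the Allan variance, can be illustrated numerically with a simplified block-average implementation (a stand-in for the MODWT-based estimator discussed above):

```python
import numpy as np

def allan_variance(y, m):
    """Overlapping Allan variance of fractional-frequency data y at averaging factor m."""
    ybar = np.convolve(y, np.ones(m) / m, mode="valid")   # moving averages of m samples
    d = ybar[m:] - ybar[:-m]                              # differences of adjacent averages
    return 0.5 * np.mean(d ** 2)

def haar_wavelet_variance(y, m):
    """Variance of Haar-filter coefficients at scale m: +1/(2m) over m samples followed
    by -1/(2m) over the next m samples (a simplified stand-in for the MODWT estimator)."""
    h = np.concatenate([np.full(m, 1.0 / (2 * m)), np.full(m, -1.0 / (2 * m))])
    w = np.convolve(y, h, mode="valid")
    return np.mean(w ** 2)

rng = np.random.default_rng(4)
y = rng.normal(size=2 ** 16)                  # white frequency noise
for m in (1, 4, 16, 64):
    ratio = haar_wavelet_variance(y, m) / allan_variance(y, m)
    print(f"scale m = {m:3d}: wavelet variance / Allan variance = {ratio:.3f}")   # ~0.5
```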

  18. Variance in exposed perturbations impairs retention of visuomotor adaptation.

    Science.gov (United States)

    Canaveral, Cesar Augusto; Danion, Frédéric; Berrigan, Félix; Bernier, Pierre-Michel

    2017-11-01

    Sensorimotor control requires an accurate estimate of the state of the body. The brain optimizes state estimation by combining sensory signals with predictions of the sensory consequences of motor commands using a forward model. Given that both sensory signals and predictions are uncertain (i.e., noisy), the brain optimally weights the relative reliance on each source of information during adaptation. In support, it is known that uncertainty in the sensory predictions influences the rate and generalization of visuomotor adaptation. We investigated whether uncertainty in the sensory predictions affects the retention of a new visuomotor relationship. This was done by exposing three separate groups to a visuomotor rotation whose mean was common at 15° counterclockwise but whose variance around the mean differed (i.e., SD of 0°, 3.2°, or 4.5°). Retention was assessed by measuring the persistence of the adapted behavior in a no-vision phase. Results revealed that mean reach direction late in adaptation was similar across groups, suggesting it depended mainly on the mean of exposed rotations and was robust to differences in variance. However, retention differed across groups, with higher levels of variance being associated with a more rapid reversion toward nonadapted behavior. A control experiment ruled out the possibility that differences in retention were accounted for by differences in success rates. Exposure to variable rotations may have increased the uncertainty in sensory predictions, making the adapted forward model more labile and susceptible to change or decay. NEW & NOTEWORTHY The brain predicts the sensory consequences of motor commands through a forward model. These predictions are subject to uncertainty. We use visuomotor adaptation and modulate uncertainty in the sensory predictions by manipulating the variance in exposed rotations. Results reveal that variance does not influence the final extent of adaptation but selectively impairs the retention of

  19. A comparison of equilibrium partitioning and critical body residue approaches for predicting toxicity of sediment-associated fluoranthene to freshwater amphipods

    Energy Technology Data Exchange (ETDEWEB)

    Driscoll, S.K. [Univ. of Michigan, Ann Arbor, MI (United States). Cooperative Inst. for Limnology and Ecosystem Research; Landrum, P.F. [National Oceanic and Atmospheric Administration, Ann Arbor, MI (United States). Great Lakes Environmental Research Lab.

    1997-10-01

    Equilibrium partitioning (EqP) theory predicts that the effects of organic compounds in sediments can be assessed by comparison of organic carbon-normalized sediment concentrations and estimated pore-water concentrations to effects determined in water-only exposures. A complementary approach, the critical body residue (CBR) theory, examines actual body burdens in relation to toxic effects. Critical body residue theory predicts that the narcotic effects of nonpolar compounds should be essentially constant for similar organisms, and narcosis should be observed at body burdens of 2 to 8 µmol/g tissue. This study compares these two approaches for predicting toxicity of the polycyclic aromatic hydrocarbon (PAH) fluoranthene. The freshwater amphipods Hyalella azteca and Diporeia spp. were exposed for up to 30 d to sediment spiked with radiolabeled fluoranthene at concentrations of 0.1 (trace) to 3940 nmol/g dry weight (= 346 µmol/g organic carbon). Mean survival of Diporeia was generally high (>70%) and not significantly different from that of control animals. This result agrees with EqP predictions, because little mortality was observed for Diporeia in 10-d water-only exposures to fluoranthene in previous studies. After 10-d exposures, mortality of H. azteca was not significantly different from that of controls, even though measured interstitial water concentrations exceeded the previously determined 10-d water-only median lethal concentration (LC50). Equilibrium partitioning overpredicted fluoranthene sediment toxicity in this species. More mortality was observed for H. azteca at later time points, and a 16-d LC50 of 3550 nmol/g dry weight sediment (291 µmol/g organic carbon) was determined. A body burden of 1.10 µmol fluoranthene-equivalents/g wet weight in H. azteca was associated with 50% mortality after 16-d exposures. Body burdens as high as 5.9 µmol/g wet weight resulted in little mortality in Diporeia.

  20. Semi-supervised learning for genomic prediction of novel traits with small reference populations: an application to residual feed intake in dairy cattle.

    Science.gov (United States)

    Yao, Chen; Zhu, Xiaojin; Weigel, Kent A

    2016-11-07

    Genomic prediction for novel traits, which can be costly and labor-intensive to measure, is often hampered by low accuracy due to the limited size of the reference population. As an option to improve prediction accuracy, we introduced a semi-supervised learning strategy known as the self-training model, and applied this method to genomic prediction of residual feed intake (RFI) in dairy cattle. We describe a self-training model that is wrapped around a support vector machine (SVM) algorithm, which enables it to use data from animals with and without measured phenotypes. Initially, a SVM model was trained using data from 792 animals with measured RFI phenotypes. Then, the resulting SVM was used to generate self-trained phenotypes for 3000 animals for which RFI measurements were not available. Finally, the SVM model was re-trained using data from up to 3792 animals, including those with measured and self-trained RFI phenotypes. Incorporation of additional animals with self-trained phenotypes enhanced the accuracy of genomic predictions compared to that of predictions that were derived from the subset of animals with measured phenotypes. The optimal ratio of animals with self-trained phenotypes to animals with measured phenotypes (2.5, 2.0, and 1.8) and the maximum increase achieved in prediction accuracy measured as the correlation between predicted and actual RFI phenotypes (5.9, 4.1, and 2.4%) decreased as the size of the initial training set (300, 400, and 500 animals with measured phenotypes) increased. The optimal number of animals with self-trained phenotypes may be smaller when prediction accuracy is measured as the mean squared error rather than the correlation between predicted and actual RFI phenotypes. Our results demonstrate that semi-supervised learning models that incorporate self-trained phenotypes can achieve genomic prediction accuracies that are comparable to those obtained with models using larger training sets that include only animals with
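
    As a rough illustration of the self-training loop described above, the sketch below wraps a support vector regression model in the same three steps: train on measured phenotypes, generate self-trained (pseudo) phenotypes for unlabeled animals, and retrain on the union. All array shapes, kernel settings, and data are hypothetical stand-ins, not the study's actual genotype matrices or tuning.

        # Minimal sketch of the self-training strategy (assumed data shapes and settings).
        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        X_labeled = rng.normal(size=(500, 1000))     # hypothetical genotypes, animals with measured RFI
        y_labeled = rng.normal(size=500)             # hypothetical measured RFI phenotypes
        X_unlabeled = rng.normal(size=(1000, 1000))  # hypothetical genotypes, animals without phenotypes

        # Step 1: train an initial SVM on animals with measured phenotypes.
        base = SVR(kernel="linear", C=1.0).fit(X_labeled, y_labeled)

        # Step 2: generate self-trained (pseudo) phenotypes for the unlabeled animals.
        y_self = base.predict(X_unlabeled)

        # Step 3: retrain on the combined measured and self-trained phenotypes.
        X_all = np.vstack([X_labeled, X_unlabeled])
        y_all = np.concatenate([y_labeled, y_self])
        final_model = SVR(kernel="linear", C=1.0).fit(X_all, y_all)

        # Accuracy would then be assessed as the correlation between predicted and
        # measured RFI in an independent validation set.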

  1. Effluent Free Radicals are Associated with Residual Renal Function and Predict Technique Failure in Peritoneal Dialysis Patients

    Science.gov (United States)

    Morinaga, Hiroshi; Sugiyama, Hitoshi; Inoue, Tatsuyuki; Takiue, Keiichi; Kikumoto, Yoko; Kitagawa, Masashi; Akagi, Shigeru; Nakao, Kazushi; Maeshima, Yohei; Miyazaki, Ikuko; Asanuma, Masato; Hiramatsu, Makoto; Makino, Hirofumi

    2012-01-01

    ♦ Objective: Residual renal function (RRF) is associated with low oxidative stress in peritoneal dialysis (PD). In the present study, we investigated the relationship between the impact of oxidative stress on RRF and patient outcomes during PD. ♦ Methods: Levels of free radicals (FRs) in effluent from the overnight dwell in 45 outpatients were determined by electron spin resonance spectrometry. The FR levels, clinical parameters, and the level of 8-hydroxy-2′-deoxyguanosine were evaluated at study start. The effects of effluent FR level on technique and patient survival were analyzed in a prospective cohort followed for 24 months. ♦ Results: Levels of effluent FRs showed significant negative correlations with daily urine volume and residual renal Kt/V, and positive correlations with plasma β2-microglobulin and effluent 8-hydroxy-2′-deoxyguanosine. A highly significant difference in technique survival (p < 0.05), but not patient survival, was observed for patients grouped by effluent FR quartile. The effluent FR level was independently associated with technique failure after adjusting for patient age, history of cardiovascular disease, and presence of diabetes mellitus (p < 0.001). The level of effluent FRs was associated with death-censored technique failure in both univariate (p < 0.001) and multivariate (p < 0.01) hazard models. Compared with patients remaining on PD, those withdrawn from the modality had significantly higher levels of effluent FRs (p < 0.005). ♦ Conclusions: Elevated effluent FRs are associated with RRF and technique failure in stable PD patients. These findings highlight the importance of oxidative stress as an unfavorable prognostic factor in PD and emphasize that steps should be taken to minimize oxidative stress in these patients. PMID:22215657

  2. Variance components for body weight in Japanese quails (Coturnix japonica)

    Directory of Open Access Journals (Sweden)

    RO Resende

    2005-03-01

    The objective of this study was to estimate the variance components for body weight in Japanese quails by Bayesian procedures. The body weight at hatch (BWH) and at 7 (BW07), 14 (BW14), 21 (BW21) and 28 days of age (BW28) of 3,520 quails was recorded from August 2001 to June 2002. A multiple-trait animal model with additive genetic, maternal environment and residual effects was implemented by Gibbs sampling methodology. A single Gibbs sampling with 80,000 rounds was generated by the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Model). Normal and inverted Wishart distributions were used as prior distributions for the random effects and the variance components, respectively. Variance components were estimated based on the 500 samples that were left after elimination of 30,000 rounds in the burn-in period and 100 rounds of each thinning interval. The posterior means of additive genetic variance components were 0.15; 4.18; 14.62; 27.18 and 32.68; the posterior means of maternal environment variance components were 0.23; 1.29; 2.76; 4.12 and 5.16; and the posterior means of residual variance components were 0.084; 6.43; 22.66; 31.21 and 30.85, at hatch, 7, 14, 21 and 28 days old, respectively. The posterior means of heritability were 0.33; 0.35; 0.36; 0.43 and 0.47 at hatch, 7, 14, 21 and 28 days old, respectively. These results indicate that heritability increased with age. On the other hand, after hatch there was a marked reduction in the maternal environment variance proportion of the phenotypic variance, whose estimates were 0.50; 0.11; 0.07; 0.07 and 0.08 for BWH, BW07, BW14, BW21 and BW28, respectively. The genetic correlation between weights at different ages was high, except for those estimates between BWH and weight at other ages. Changes in body weight of quails can be efficiently achieved by selection.
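
    The posterior summaries above rest on simple thinning arithmetic: 80,000 Gibbs rounds minus a 30,000-round burn-in, sampled every 100 rounds, leaves 500 draws per variance component. The sketch below only illustrates that bookkeeping with a stand-in chain; it is not MTGSAM output.

        # Illustrative bookkeeping only: a stand-in chain replaces real MTGSAM samples.
        import numpy as np

        n_rounds, burn_in, thin = 80_000, 30_000, 100
        chain = np.random.default_rng(1).normal(loc=14.6, scale=2.0, size=n_rounds)

        kept = chain[burn_in::thin]      # rounds 30,000, 30,100, ..., 79,900
        print(len(kept), kept.mean())    # 500 retained draws and their posterior mean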

  3. Speed Variance and Its Influence on Accidents.

    Science.gov (United States)

    Garber, Nicholas J.; Gadirau, Ravi

    A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…

  4. Variance function estimation for immunoassays

    International Nuclear Information System (INIS)

    Raab, G.M.; Thompson, R.; McKenzie, I.

    1980-01-01

    A computer program is described which implements a recently described, modified likelihood method of determining an appropriate weighting function to use when fitting immunoassay dose-response curves. The relationship between the variance of the response and its mean value is assumed to have an exponential form, and the best fit to this model is determined from the within-set variability of many small sets of repeated measurements. The program estimates the parameter of the exponential function with its estimated standard error, and tests the fit of the experimental data to the proposed model. Output options include a list of the actual and fitted standard deviation of the set of responses, a plot of actual and fitted standard deviation against the mean response, and an ordered list of the 10 sets of data with the largest ratios of actual to fitted standard deviation. The program has been designed for a laboratory user without computing or statistical expertise. The test-of-fit has proved valuable for identifying outlying responses, which may be excluded from further analysis by being set to negative values in the input file. (Auth.)
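
    The record above describes a modified likelihood fit of an exponential (power-type) variance-mean relationship from replicate sets. As a simpler, purely illustrative alternative under the same assumption Var(y) ≈ a·mean^b, one can regress log within-set variance on log within-set mean; everything below (data, parameter values, three replicates per set) is hypothetical.

        # Illustrative sketch only: a moment-based fit of Var(y) ≈ a * mean**b from replicate sets.
        # (The program described above uses a modified likelihood method instead.)
        import numpy as np

        rng = np.random.default_rng(2)
        true_a, true_b = 0.05, 1.8
        means = rng.uniform(10, 1000, size=60)  # hypothetical set means
        sets = [m + rng.normal(scale=np.sqrt(true_a * m**true_b), size=3) for m in means]

        set_means = np.array([s.mean() for s in sets])
        set_vars = np.array([s.var(ddof=1) for s in sets])

        # Regress log(variance) on log(mean): the slope estimates b, the intercept log(a).
        b_hat, log_a_hat = np.polyfit(np.log(set_means), np.log(set_vars), 1)
        print(f"estimated exponent b = {b_hat:.2f}, scale a = {np.exp(log_a_hat):.3f}")

        # Fitted standard deviations, e.g. for use as weights when fitting a dose-response curve.
        fitted_sd = np.sqrt(np.exp(log_a_hat) * set_means**b_hat)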

  5. Variance-based sensitivity indices for models with dependent inputs

    International Nuclear Information System (INIS)

    Mara, Thierry A.; Tarantola, Stefano

    2012-01-01

    Computational models are intensively used in engineering for risk analysis or prediction of future outcomes. Uncertainty and sensitivity analyses are of great help in these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs only a few are proposed in the literature in the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is set and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and ANOVA-representations of the model output. In the applications, we show the interest of the new sensitivity indices for model simplification setting. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.
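
    For the independent-input case referenced above, the first-order variance-based index of an input is S_i = Var(E[Y|X_i]) / Var(Y). The sketch below estimates it on a toy model by simple binning; the dependent-input indices that are the paper's actual contribution (built on orthogonalised inputs and ANOVA representations) are not reproduced here.

        # First-order variance-based sensitivity indices for independent inputs,
        # estimated by binning the conditional mean of the output (toy model).
        import numpy as np

        def first_order_index(x, y, n_bins=50):
            edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
            which = np.digitize(x, edges[1:-1])          # bin index 0 .. n_bins-1
            cond_means = np.array([y[which == b].mean() for b in range(n_bins)])
            return cond_means.var() / y.var()            # Var(E[Y|X_i]) / Var(Y)

        rng = np.random.default_rng(3)
        n = 200_000
        x1, x2, x3 = rng.uniform(-1, 1, (3, n))          # independent inputs
        y = 4.0 * x1 + 2.0 * x2 + 0.5 * x1 * x3 + rng.normal(0, 0.1, n)

        for name, x in [("x1", x1), ("x2", x2), ("x3", x3)]:
            print(name, round(first_order_index(x, y), 3))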

  6. A versatile omnibus test for detecting mean and variance heterogeneity.

    Science.gov (United States)

    Cao, Ying; Wei, Peng; Bailey, Matthew; Kauwe, John S K; Maxwell, Taylor J

    2014-01-01

    Recent research has revealed loci that display variance heterogeneity through various means such as biological disruption, linkage disequilibrium (LD), gene-by-gene (G × G), or gene-by-environment interaction. We propose a versatile likelihood ratio test that allows joint testing for mean and variance heterogeneity (LRT-MV) or either effect alone (LRT-M or LRT-V) in the presence of covariates. Using extensive simulations for our method and others, we found that all parametric tests were sensitive to nonnormality regardless of any trait transformations. Coupling our test with the parametric bootstrap solves this issue. Using simulations and empirical data from a known mean-only functional variant, we demonstrate how LD can produce variance-heterogeneity loci (vQTL) in a predictable fashion based on differential allele frequencies, high D', and relatively low r² values. We propose that a joint test for mean and variance heterogeneity is more powerful than a variance-only test for detecting vQTL. This takes advantage of loci that also have mean effects without sacrificing much power to detect variance-only effects. We discuss using vQTL as an approach to detect G × G interactions and also how vQTL are related to relationship loci, and how both can create prior hypotheses for each other and reveal the relationships between traits and possibly between components of a composite trait.
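
    The core of the joint test can be sketched with plain normal likelihoods: compare a model with one mean and one variance across genotype groups to a model with group-specific means and variances. The fragment below is a bare-bones version without covariates or the parametric bootstrap that the published test uses to guard against non-normality; the data are simulated.

        # Bare-bones joint likelihood ratio test for mean and variance heterogeneity.
        import numpy as np
        from scipy import stats

        def normal_llf(y, mu, var):
            return np.sum(stats.norm.logpdf(y, loc=mu, scale=np.sqrt(var)))

        def lrt_mean_variance(groups):
            y = np.concatenate(groups)
            ll0 = normal_llf(y, y.mean(), y.var())                       # common mean and variance
            ll1 = sum(normal_llf(g, g.mean(), g.var()) for g in groups)  # group-specific mean and variance
            df = 2 * (len(groups) - 1)      # one extra mean and one extra variance per extra group
            lrt = 2.0 * (ll1 - ll0)
            return lrt, stats.chi2.sf(lrt, df)

        rng = np.random.default_rng(4)
        g0 = rng.normal(0.0, 1.0, 300)      # hypothetical trait values per genotype class
        g1 = rng.normal(0.1, 1.0, 300)
        g2 = rng.normal(0.3, 1.6, 300)      # one class with both a mean shift and a larger variance
        print(lrt_mean_variance([g0, g1, g2]))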

  7. The VIX, the Variance Premium, and Expected Returns*

    DEFF Research Database (Denmark)

    Osterrieder, Daniela; Ventosa-Santaulària, Daniel; Vera-Valdés, J Eduardo

    2018-01-01

    These problems are eliminated if risk is captured by the variance premium (VP) instead; it is unobservable, however. We propose a 2SLS estimator that produces consistent estimates without observing the VP. Using this method, we find a positive risk–return trade-off and long-run return predictability. Our...

  8. High Residual Collagen-Induced Platelet Reactivity Predicts Development of Restenosis in the Superficial Femoral Artery After Percutaneous Transluminal Angioplasty in Claudicant Patients

    Energy Technology Data Exchange (ETDEWEB)

    Gary, Thomas, E-mail: thomas.gary@medunigraz.at [Medical University of Graz, Division of Angiology, Department of Internal Medicine (Austria); Prüller, Florian, E-mail: florian.prueller@klinikum-graz.at; Raggam, Reinhard, E-mail: reinhard.raggam@klinikum-graz.at [Medical University of Graz, Clinical Institute of Medical and Chemical Laboratory Diagnostics (Austria); Mahla, Elisabeth, E-mail: elisabeth.mahla@medunigraz.at [Medical University of Graz, Department of Anesthesiology and Intensive Care Medicine (Austria); Eller, Philipp, E-mail: philipp.eller@medunigraz.at; Hafner, Franz, E-mail: franz.hafner@klinikum-graz.at; Brodmann, Marianne, E-mail: marianne.brodmann@medunigraz.at [Medical University of Graz, Division of Angiology, Department of Internal Medicine (Austria)

    2016-02-15

    Purpose: Although platelet reactivity is routinely inhibited with aspirin after percutaneous angioplasty (PTA) in peripheral arteries, the restenosis rate in the superficial femoral artery (SFA) is high. Interaction of activated platelets and the endothelium in the region of intervention could be one reason for this as collagen in the subendothelium activates platelets. Materials and Methods: A prospective study evaluating on-site platelet reactivity during PTA and its influence on the development of restenosis with a total of 30 patients scheduled for PTA of the SFA. Arterial blood was taken from the PTA site after SFA; platelet function was evaluated with light transmission aggregometry. After 3, 6, 12, and 24 months, duplex sonography was performed and the restenosis rate evaluated. Results: Eight out of 30 patients developed a hemodynamically relevant restenosis (>50 % lumen narrowing) in the PTA region during the 24-month follow-up period. High residual collagen-induced platelet reactivity defined as AUC >30 was a significant predictor for the development of restenosis [adjusted odds ratio 11.8 (9.4, 14.2); P = .04]. Conclusions: High residual collagen-induced platelet reactivity at the interventional site predicts development of restenosis after PTA of the SFA. Platelet function testing may be useful for identifying patients at risk.

  9. High Residual Collagen-Induced Platelet Reactivity Predicts Development of Restenosis in the Superficial Femoral Artery After Percutaneous Transluminal Angioplasty in Claudicant Patients.

    Science.gov (United States)

    Gary, Thomas; Prüller, Florian; Raggam, Reinhard; Mahla, Elisabeth; Eller, Philipp; Hafner, Franz; Brodmann, Marianne

    2016-02-01

    Although platelet reactivity is routinely inhibited with aspirin after percutaneous angioplasty (PTA) in peripheral arteries, the restenosis rate in the superficial femoral artery (SFA) is high. Interaction of activated platelets and the endothelium in the region of intervention could be one reason for this as collagen in the subendothelium activates platelets. A prospective study evaluating on-site platelet reactivity during PTA and its influence on the development of restenosis with a total of 30 patients scheduled for PTA of the SFA. Arterial blood was taken from the PTA site after SFA; platelet function was evaluated with light transmission aggregometry. After 3, 6, 12, and 24 months, duplex sonography was performed and the restenosis rate evaluated. Eight out of 30 patients developed a hemodynamically relevant restenosis (>50 % lumen narrowing) in the PTA region during the 24-month follow-up period. High residual collagen-induced platelet reactivity defined as AUC >30 was a significant predictor for the development of restenosis [adjusted odds ratio 11.8 (9.4, 14.2); P = .04]. High residual collagen-induced platelet reactivity at the interventional site predicts development of restenosis after PTA of the SFA. Platelet function testing may be useful for identifying patients at risk.

  10. Predictive role of minimal residual disease and log clearance in acute myeloid leukemia: a comparison between multiparameter flow cytometry and Wilms' tumor 1 levels.

    Science.gov (United States)

    Rossi, Giovanni; Minervini, Maria Marta; Melillo, Lorella; di Nardo, Francesco; de Waure, Chiara; Scalzulli, Potito Rosario; Perla, Gianni; Valente, Daniela; Sinisi, Nicola; Cascavilla, Nicola

    2014-07-01

    In acute myeloid leukemia (AML), the detection of minimal residual disease (MRD) as well as the degree of log clearance similarly identifies patients with poor prognosis. No comparison was provided between the two approaches in order to identify the best one to monitor follow-up patients. In this study, MRD and clearance were assessed by both multiparameter flow cytometry (MFC) and WT1 expression at different time points on 45 AML patients achieving complete remission. Our results by WT1 expression showed that log clearance lower than 1.96 after induction predicted the recurrence better than MRD higher than 77.0 copies WT1/10⁴ ABL. Conversely, on MFC, MRD higher than 0.2 % after consolidation was more predictive than log clearance below 2.64. At univariate and multivariate analysis, positive MRD values and log clearance below the optimal cutoffs were associated with a shorter disease-free survival (DFS). At the univariate analysis, positive MRD values were also associated with overall survival (OS). Therefore, post-induction log clearance by WT1 and post-consolidation MRD by MFC represented the most informative approaches to identify the relapse. At the optimal timing of assessment, positive MRD and log-clearance values lower than calculated thresholds similarly predicted an adverse prognosis in AML.

  11. Electrostatic contribution of surface charge residues to the stability of a thermophilic protein: benchmarking experimental and predicted pKa values.

    Directory of Open Access Journals (Sweden)

    Chi-Ho Chan

    Optimization of the surface charges is a promising strategy for increasing thermostability of proteins. Electrostatic contribution of ionizable groups to the protein stability can be estimated from the differences between the pKa values in the folded and unfolded states of a protein. Using this pKa-shift approach, we experimentally measured the electrostatic contribution of all aspartate and glutamate residues to the stability of a thermophilic ribosomal protein L30e from Thermococcus celer. The pKa values in the unfolded state were found to be similar to model compound pKas. The pKa values in both the folded and unfolded states obtained at 298 and 333 K were similar, suggesting that the electrostatic contributions of ionizable groups to the protein stability were insensitive to temperature changes. The experimental pKa values for the L30e protein in the folded state were used as a benchmark to test the robustness of pKa prediction by various computational methods such as H++, MCCE, MEAD, pKD, PropKa, and UHBD. Although the predicted pKa values were affected by crystal contacts that may alter the side-chain conformation of surface charged residues, most computational methods performed well, with correlation coefficients between experimental and calculated pKa values ranging from 0.49 to 0.91 (p < 0.01). The changes in protein stability derived from the experimental pKa-shift approach correlate well (r = 0.81) with those obtained from stability measurements of charge-to-alanine substituted variants of the L30e protein. Our results demonstrate that the knowledge of the pKa values in the folded state provides sufficient rationale for the redesign of protein surface charges leading to improved protein stability.
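
    The pKa-shift approach converts the difference between folded- and unfolded-state pKa values into a free-energy contribution. Under one common convention for an acidic residue, ΔΔG = RT ln(10) (pKa_folded − pKa_unfolded), so a downshifted pKa in the folded state signals electrostatic stabilization. The snippet below applies that relation to hypothetical pKa values; it is not the study's own calculation.

        # Hedged sketch: free-energy contribution from a pKa shift (acidic residue, one common convention).
        import math

        R = 8.314462618e-3   # gas constant, kJ/(mol*K)
        T = 298.0            # temperature, K

        def ddG_electrostatic(pka_folded, pka_unfolded, temperature=T):
            """Contribution of deprotonation to folding free energy, in kJ/mol.

            Negative values indicate that the charged (deprotonated) form
            stabilizes the folded state relative to the unfolded state.
            """
            return R * temperature * math.log(10.0) * (pka_folded - pka_unfolded)

        # Example with hypothetical values: an aspartate whose pKa drops from ~4.0
        # (unfolded, model-compound-like) to 3.2 in the folded state contributes
        # roughly -4.6 kJ/mol of stabilization.
        print(ddG_electrostatic(pka_folded=3.2, pka_unfolded=4.0))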

  12. RNABindRPlus: a predictor that combines machine learning and sequence homology-based methods to improve the reliability of predicted RNA-binding residues in proteins.

    Science.gov (United States)

    Walia, Rasna R; Xue, Li C; Wilkins, Katherine; El-Manzalawy, Yasser; Dobbs, Drena; Honavar, Vasant

    2014-01-01

    Protein-RNA interactions are central to essential cellular processes such as protein synthesis and regulation of gene expression and play roles in human infectious and genetic diseases. Reliable identification of protein-RNA interfaces is critical for understanding the structural bases and functional implications of such interactions and for developing effective approaches to rational drug design. Sequence-based computational methods offer a viable, cost-effective way to identify putative RNA-binding residues in RNA-binding proteins. Here we report two novel approaches: (i) HomPRIP, a sequence homology-based method for predicting RNA-binding sites in proteins; (ii) RNABindRPlus, a new method that combines predictions from HomPRIP with those from an optimized Support Vector Machine (SVM) classifier trained on a benchmark dataset of 198 RNA-binding proteins. Although highly reliable, HomPRIP cannot make predictions for the unaligned parts of query proteins and its coverage is limited by the availability of close sequence homologs of the query protein with experimentally determined RNA-binding sites. RNABindRPlus overcomes these limitations. We compared the performance of HomPRIP and RNABindRPlus with that of several state-of-the-art predictors on two test sets, RB44 and RB111. On a subset of proteins for which homologs with experimentally determined interfaces could be reliably identified, HomPRIP outperformed all other methods achieving an MCC of 0.63 on RB44 and 0.83 on RB111. RNABindRPlus was able to predict RNA-binding residues of all proteins in both test sets, achieving an MCC of 0.55 and 0.37, respectively, and outperforming all other methods, including those that make use of structure-derived features of proteins. More importantly, RNABindRPlus outperforms all other methods for any choice of tradeoff between precision and recall. An important advantage of both HomPRIP and RNABindRPlus is that they rely on readily available sequence and sequence

  13. RNABindRPlus: A Predictor that Combines Machine Learning and Sequence Homology-Based Methods to Improve the Reliability of Predicted RNA-Binding Residues in Proteins

    Science.gov (United States)

    Walia, Rasna R.; Xue, Li C.; Wilkins, Katherine; El-Manzalawy, Yasser; Dobbs, Drena; Honavar, Vasant

    2014-01-01

    Protein-RNA interactions are central to essential cellular processes such as protein synthesis and regulation of gene expression and play roles in human infectious and genetic diseases. Reliable identification of protein-RNA interfaces is critical for understanding the structural bases and functional implications of such interactions and for developing effective approaches to rational drug design. Sequence-based computational methods offer a viable, cost-effective way to identify putative RNA-binding residues in RNA-binding proteins. Here we report two novel approaches: (i) HomPRIP, a sequence homology-based method for predicting RNA-binding sites in proteins; (ii) RNABindRPlus, a new method that combines predictions from HomPRIP with those from an optimized Support Vector Machine (SVM) classifier trained on a benchmark dataset of 198 RNA-binding proteins. Although highly reliable, HomPRIP cannot make predictions for the unaligned parts of query proteins and its coverage is limited by the availability of close sequence homologs of the query protein with experimentally determined RNA-binding sites. RNABindRPlus overcomes these limitations. We compared the performance of HomPRIP and RNABindRPlus with that of several state-of-the-art predictors on two test sets, RB44 and RB111. On a subset of proteins for which homologs with experimentally determined interfaces could be reliably identified, HomPRIP outperformed all other methods achieving an MCC of 0.63 on RB44 and 0.83 on RB111. RNABindRPlus was able to predict RNA-binding residues of all proteins in both test sets, achieving an MCC of 0.55 and 0.37, respectively, and outperforming all other methods, including those that make use of structure-derived features of proteins. More importantly, RNABindRPlus outperforms all other methods for any choice of tradeoff between precision and recall. An important advantage of both HomPRIP and RNABindRPlus is that they rely on readily available sequence and sequence

  14. Residual stresses in material processing

    Science.gov (United States)

    Kozaczek, K. J.; Watkins, T. R.; Hubbard, C. R.; Wang, Xun-Li; Spooner, S.

    Material manufacturing processes often introduce residual stresses into the product. The residual stresses affect the properties of the material and often are detrimental. Therefore, the distribution and magnitude of residual stresses in the final product are usually an important factor in manufacturing process optimization or component life prediction. The present paper briefly discusses the causes of residual stresses. It then addresses the direct, nondestructive methods of residual stress measurement by X-ray and neutron diffraction. Examples are presented to demonstrate the importance of residual stress measurement in machining and joining operations.

  15. Evolution of Genetic Variance during Adaptive Radiation.

    Science.gov (United States)

    Walter, Greg M; Aguirre, J David; Blows, Mark W; Ortiz-Barrientos, Daniel

    2018-04-01

    Genetic correlations between traits can concentrate genetic variance into fewer phenotypic dimensions that can bias evolutionary trajectories along the axis of greatest genetic variance and away from optimal phenotypes, constraining the rate of evolution. If genetic correlations limit adaptation, rapid adaptive divergence between multiple contrasting environments may be difficult. However, if natural selection increases the frequency of rare alleles after colonization of new environments, an increase in genetic variance in the direction of selection can accelerate adaptive divergence. Here, we explored adaptive divergence of an Australian native wildflower by examining the alignment between divergence in phenotype mean and divergence in genetic variance among four contrasting ecotypes. We found divergence in mean multivariate phenotype along two major axes represented by different combinations of plant architecture and leaf traits. Ecotypes also showed divergence in the level of genetic variance in individual traits and the multivariate distribution of genetic variance among traits. Divergence in multivariate phenotypic mean aligned with divergence in genetic variance, with much of the divergence in phenotype among ecotypes associated with changes in trait combinations containing substantial levels of genetic variance. Overall, our results suggest that natural selection can alter the distribution of genetic variance underlying phenotypic traits, increasing the amount of genetic variance in the direction of natural selection and potentially facilitating rapid adaptive divergence during an adaptive radiation.

  16. Pharmacokinetics, efficacy prediction indexes, and residue depletion of ribavirin in Atlantic salmon's (Salmo salar) muscle after oral administration in feed.

    Science.gov (United States)

    San Martín, B; Muñoz, R; Cornejo, J; Martínez, M A; Araya-Jordán, C; Maddaleno, A; Anadón, A

    2016-08-01

    Ribavirin is an antiviral used in human medicine, but it has not been authorized for use in veterinary medicine although it is effective against infectious salmon anemia (ISA) virus, among others. In this study, we present a pharmacokinetic profile of ribavirin in Atlantic salmon (Salmo salar), efficacy prediction indexes, and the measure of its withdrawal time. To determine the pharmacokinetic profile, fish were orally administered a single ribavirin dose of 1.6 mg/kg bw, and plasma concentrations were then measured at different times. From the time-vs.-concentration curve, Cmax = 413.57 ng/mL, Tmax = 6.96 h, AUC = 21394.01 μg·h/mL, t1/2 = 81.61 h, and K10 = 0.0421/h were obtained. Ribavirin reached adequate concentrations during the pharmacokinetic study, with prediction indexes of Cmax/IC50 = 20.7, AUC/IC50 = 1069.7, and T>IC50 = 71 h, where IC50 is the 50% inhibitory concentration. For the ribavirin depletion study, fish were orally administered a daily dose of 1.6 mg/kg bw for 10 days. Concentrations were measured in edible tissue on different days post-treatment. A linear regression of time vs. concentration was conducted, obtaining a withdrawal time of 1966 °C days. The results reveal that the dose of 1.6 mg/kg bw orally administered is effective for ISA virus, resulting in a withdrawal period compatible with the production schedules of Atlantic salmon. © 2016 John Wiley & Sons Ltd.

  17. The phenotypic variance gradient - a novel concept.

    Science.gov (United States)

    Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton

    2014-11-01

    Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely "a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added". This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a "phenotypic variance gradient", are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization.

  18. Using regression heteroscedasticity to model trends in the mean and variance of floods

    Science.gov (United States)

    Hecht, Jory; Vogel, Richard

    2015-04-01

    Changes in the frequency of extreme floods have been observed and anticipated in many hydrological settings in response to numerous drivers of environmental change, including climate, land cover, and infrastructure. To help decision-makers design flood control infrastructure in settings with non-stationary hydrological regimes, a parsimonious approach for detecting and modeling trends in extreme floods is needed. An approach using ordinary least squares (OLS) to fit a heteroscedastic regression model can accommodate nonstationarity in both the mean and variance of flood series while simultaneously offering a means of (i) analytically evaluating type I and type II trend detection errors, (ii) analytically generating expressions of uncertainty, such as confidence and prediction intervals, (iii) providing updated estimates of the frequency of floods exceeding the flood of record, (iv) accommodating a wide range of non-linear functions through ladder of powers transformations, and (v) communicating hydrological changes in a single graphical image. Previous research has shown that the two-parameter lognormal distribution can adequately model the annual maximum flood distribution of both stationary and non-stationary hydrological regimes in many regions of the United States. A simple logarithmic transformation of annual maximum flood series enables an OLS heteroscedastic regression modeling approach to be especially suitable for creating a non-stationary flood frequency distribution with parameters that are conditional upon time or physically meaningful covariates. While heteroscedasticity is often viewed as an impediment, we document how detecting and modeling heteroscedasticity presents an opportunity for characterizing both the conditional mean and variance of annual maximum floods. We introduce an approach through which variance trend models can be analytically derived from the behavior of residuals of the conditional mean flood model. Through case studies of
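
    The two-stage idea sketched below is a simplified, assumed reading of the approach: fit an OLS trend to log-transformed annual maxima, then regress the log squared residuals on time to characterize a trend in the conditional variance. The flood series is simulated and the only covariate is time; the paper also considers physically meaningful covariates and ladder-of-powers transformations.

        # Simplified two-stage sketch: trend in the mean of log-floods, then a variance trend from residuals.
        import numpy as np

        rng = np.random.default_rng(5)
        years = np.arange(1950, 2015)
        t = years - years.mean()
        # Hypothetical annual maxima with trends in both the mean and the spread of log Q.
        log_q = 5.0 + 0.004 * t + rng.normal(scale=np.exp(-0.5 + 0.01 * t))

        # Stage 1: conditional mean of log-flows as a linear function of time (OLS).
        b1, b0 = np.polyfit(t, log_q, 1)
        resid = log_q - (b0 + b1 * t)

        # Stage 2: regress log squared residuals on time to detect heteroscedasticity,
        # i.e. a trend in the conditional variance of log-flows.
        g1, g0 = np.polyfit(t, np.log(resid**2), 1)
        print(f"mean trend {b1:.4f} per yr; variance trend (log scale) {g1:.4f} per yr")

        # A positive g1 indicates increasing spread of log-flows over time, on top of the
        # mean trend (a bias correction is needed to recover the variance level itself).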

  19. CMB-S4 and the hemispherical variance anomaly

    Science.gov (United States)

    O'Dwyer, Márcio; Copi, Craig J.; Knox, Lloyd; Starkman, Glenn D.

    2017-09-01

    Cosmic microwave background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the Northern and Southern Ecliptic hemispheres, with the Northern hemisphere displaying an anomalously low variance while the Southern hemisphere appears unremarkable [consistent with expectations from the best-fitting theory, Lambda Cold Dark Matter (ΛCDM)]. While this is a well-established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground-based telescope at the high Chilean Atacama plateau. We find that even in the set of realizations constrained by the temperature data, the low Northern hemisphere variance observed in temperature is not expected in polarization. Therefore, observing an anomalously low variance in polarization would make the hypothesis that the temperature anomaly is simply a statistical fluke more unlikely and thus increase the motivation for physical explanations. We show, within ΛCDM, how variance measurements in both sky coverage scenarios are related. We find that the variance makes for a good statistic in cases where the sky coverage is limited; however, full northern coverage is still preferable.
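
    The hemispherical-variance statistic itself is simple: compare the pixel variance of a masked map between the northern and southern Ecliptic hemispheres. The toy sketch below does this with plain arrays standing in for a HEALPix map and ignores the constrained-realization machinery described above.

        # Toy hemispherical-variance comparison on a synthetic, unstructured "map".
        import numpy as np

        rng = np.random.default_rng(6)
        npix = 100_000
        ecliptic_lat = np.degrees(np.arcsin(rng.uniform(-1, 1, npix)))  # roughly uniform on the sphere
        temperature = rng.normal(scale=100.0, size=npix)                # hypothetical map values (µK)

        mask = np.ones(npix, dtype=bool)        # replace with the actual survey footprint
        north = temperature[mask & (ecliptic_lat > 0)]
        south = temperature[mask & (ecliptic_lat < 0)]

        print("northern variance:", north.var(ddof=1))
        print("southern variance:", south.var(ddof=1))
        print("ratio N/S:", north.var(ddof=1) / south.var(ddof=1))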

  20. Prediction of residual lung function after lung surgery, and examination of blood perfusion in the pre- and postoperative lung using three-dimensional SPECT

    Energy Technology Data Exchange (ETDEWEB)

    Shimatani, Shinji [Toho Univ., Tokyo (Japan). School of Medicine

    2001-01-01

    side residual lung. Our findings indicate that 3-D imaging volume, as determined by the volume rendering method at the blood perfusion threshold with {sup 99m}Tc-MAA lung perfusion SPECT, is useful in both the prediction of pulmonary function after lung surgery and the examination of changes in blood perfusion. (author)

  1. How does variance in fertility change over the demographic transition?

    Science.gov (United States)

    Hruschka, Daniel J; Burger, Oskar

    2016-04-19

    Most work on the human fertility transition has focused on declines in mean fertility. However, understanding changes in the variance of reproductive outcomes can be equally important for evolutionary questions about the heritability of fertility, individual determinants of fertility and changing patterns of reproductive skew. Here, we document how variance in completed fertility among women (45-49 years) differs across 200 surveys in 72 low- to middle-income countries where fertility transitions are currently in progress at various stages. Nearly all (91%) of samples exhibit variance consistent with a Poisson process of fertility, which places systematic, and often severe, theoretical upper bounds on the proportion of variance that can be attributed to individual differences. In contrast to the pattern of total variance, these upper bounds increase from high- to mid-fertility samples, then decline again as samples move from mid to low fertility. Notably, the lowest fertility samples often deviate from a Poisson process. This suggests that as populations move to low fertility their reproduction shifts from a rate-based process to a focus on an ideal number of children. We discuss the implications of these findings for predicting completed fertility from individual-level variables. © 2016 The Author(s).
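
    One way to read the Poisson bound discussed above: if completed fertility is a Poisson outcome around individual-specific rates, the Poisson component alone contributes variance equal to the mean, so the share of variance attributable to stable individual differences is at most (variance − mean)/variance. The counts below are hypothetical.

        # Sketch of the Poisson upper bound on the individual-level share of fertility variance.
        import numpy as np

        completed_fertility = np.array([0, 1, 2, 2, 3, 3, 3, 4, 4, 5, 6, 7, 8])  # children per woman (toy data)

        m = completed_fertility.mean()
        v = completed_fertility.var(ddof=1)

        # Variance in excess of the Poisson component, as a fraction of total variance.
        upper_bound = max(0.0, (v - m) / v)
        print(f"mean = {m:.2f}, variance = {v:.2f}, bound on individual-level share = {upper_bound:.2f}")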

  2. WALS Prediction

    NARCIS (Netherlands)

    Magnus, J.R.; Wang, W.; Zhang, Xinyu

    2012-01-01

    Abstract: Prediction under model uncertainty is an important and difficult issue. Traditional prediction methods (such as pretesting) are based on model selection followed by prediction in the selected model, but the reported prediction and the reported prediction variance ignore the uncertainty

  3. Residue analysis of a CTL epitope of SARS-CoV spike protein by IFN-gamma production and bioinformatics prediction

    Directory of Open Access Journals (Sweden)

    Huang Jun

    2012-09-01

    Background: Severe acute respiratory syndrome (SARS) is an emerging infectious disease caused by the novel coronavirus SARS-CoV. The T cell epitopes of the SARS-CoV spike protein are well known, but no systematic evaluation of the functional and structural roles of each residue has been reported for these antigenic epitopes. Analysis of the functional importance of side-chains by mutational study may exaggerate the effect by imposing a structural disturbance or an unusual steric, electrostatic or hydrophobic interaction. Results: We demonstrated that N50 could induce a significant IFN-gamma response from splenocytes of SARS-CoV S DNA immunized mice by means of ELISA, ELISPOT and FACS. Moreover, S366-374 was predicted to be an optimal epitope by the bioinformatics tools ANN, SMM, ARB and BIMAS, and confirmed by the IFN-gamma response induced by a series of S358-374-derived peptides. Furthermore, each residue of S366-374 was replaced by alanine (A), lysine (K) or aspartic acid (D), respectively. ANN was used to estimate the binding affinity of single S366-374 mutants to H-2 Kd. Y367 and L374 were predicted to play the most important role in peptide binding. Additionally, these single-residue mutated peptides were synthesized, and the IFN-gamma production induced by the G368-, V369-, A371-, T372- and K373-mutated S366-374 peptides was markedly decreased. Conclusions: We demonstrated that S366-374 is an optimal H-2 Kd CTL epitope in the SARS-CoV S protein. Moreover, Y367, S370, and L374 are anchors in the epitope, while C366, G368, V369, A371, T372, and K373 may directly interact with TCR on the surface of CD8 T cells.

  4. Changes in serum CA-125 can predict optimal cytoreduction to no gross residual disease in patients with advanced stage ovarian cancer treated with neoadjuvant chemotherapy.

    Science.gov (United States)

    Rodriguez, Noah; Rauh-Hain, J Alejandro; Shoni, Melina; Berkowitz, Ross S; Muto, Michael G; Feltmate, Colleen; Schorge, John O; Del Carmen, Marcela G; Matulonis, Ursula A; Horowitz, Neil S

    2012-05-01

    To evaluate the predictive power of serum CA-125 changes in the management of patients undergoing neoadjuvant chemotherapy followed by interval debulking surgery (NACT-IDS) for a new diagnosis of epithelial ovarian carcinoma (EOC). Using the Cancer Registry databases from our institutions, a retrospective review of patients with FIGO stage IIIC and IV EOC who were treated with platinum-based NACT-IDS between January 2006 and December 2009 was conducted. Demographic data, CA-125 levels, radiographic data, chemotherapy, and surgical-pathologic information were obtained. Continuous variables were evaluated by Student's t test or Wilcoxon-Mann-Whitney test. One hundred three patients with stage IIIC or IV EOC met study criteria. Median number of neoadjuvant cycles was 3. Ninety-nine patients (96.1%) were optimally cytoreduced. Forty-seven patients (47.5%) had resection to no residual disease (NRD). The median CA-125 at diagnosis and before interval debulking was 1749 U/mL and 161 U/mL, respectively. Comparing patients with NRD v. optimal macroscopic disease (OMD), there was no statistical difference in the mean CA-125 at diagnosis (1566 U/mL v. 2077 U/mL, p=0.1). There was a significant difference in the mean CA-125 prior to interval debulking, 92 v. 233 U/mL (p=0.001). In the NRD group, 38 patients (80%) had preoperative CA-125 ≤100 U/mL compared to 33 patients (63.4%) in the OMD group (p=0.04). Patients who undergo NACT-IDS achieve a high rate of optimal cytoreduction. In our series, after treatment with taxane and platinum-based chemotherapy, patients with a preoperative CA-125 of ≤100 U/mL were highly likely to be cytoreduced to no residual disease. Copyright © 2012. Published by Elsevier Inc.

  5. The Achilles Heel of Normal Determinations via Minimum Variance Techniques: Worldline Dependencies

    Science.gov (United States)

    Ma, Z.; Scudder, J. D.; Omidi, N.

    2002-12-01

    Time series of data collected across current layers are usually organized by divining coordinate transformations (as from minimum variance) that permit a geometrical interpretation for the data collected. Almost without exception the current layer geometry is inferred by supposing that the current-carrying layer is locally planar. Only after this geometry is "determined" can the various quantities predicted by theory be calculated. Only then can the precision of "measured" reconnection rates and the quantitative support for or against component reconnection be evaluated. This paper defines worldline traversals across fully resolved Hall two-fluid models of reconnecting current sheets (with varying sizes of guide fields) and across a 2-D hybrid solution of a supercritical shock layer. Along each worldline various variance techniques are used to infer current sheet normals based on the data observed along this worldline alone. We then contrast these inferred normals with those known from the overview of the fully resolved spatial pictures of the layer. Absolute errors of 20 degrees in the normal are quite commonplace, but errors of 40-90 deg are also implied, especially for worldlines that make more and more oblique angles to the true current sheet normal. These mistaken "inferences" are traceable to the degree that the data collected sample 2-D variations within these layers or not. While it is not surprising that these variance techniques give incorrect errors in the presence of layers that possess 2-D variations, it is illuminating that such large errors need not be signalled by the traditional error formulae for the error cones on normals that have been previously used to estimate the errors of normal choices. Frequently the absolute errors that depend on worldline path can be 10 times the random error that formulae would predict based on eigenvalues of the covariance matrix. A given time series cannot be associated in any a priori way with a specific worldline
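
    The minimum-variance step the abstract refers to is an eigendecomposition of the field covariance matrix along the worldline, with the smallest-eigenvalue eigenvector taken as the current-sheet normal. The sketch below applies it to a synthetic field time series; real analyses would use spacecraft magnetometer data, and the usual eigenvalue-ratio error cones are exactly the diagnostics whose limitations the abstract highlights.

        # Standard minimum-variance analysis (MVA) on a synthetic magnetic-field time series.
        import numpy as np

        rng = np.random.default_rng(7)
        n = 500
        B = np.column_stack([
            20 * np.tanh(np.linspace(-3, 3, n)),      # reversing component (maximum variance)
            5 * np.ones(n) + rng.normal(0, 1, n),     # guide-field-like component (intermediate)
            rng.normal(0, 0.5, n),                    # nearly constant component (minimum variance)
        ])

        cov = np.cov(B, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues returned in ascending order
        normal_estimate = eigvecs[:, 0]               # eigenvector of the smallest eigenvalue

        print("eigenvalues:", eigvals)
        print("estimated normal:", normal_estimate)
        # The intermediate-to-minimum eigenvalue ratio is the traditional (and, per the
        # abstract, sometimes misleading) indicator of how well the normal is determined.
        print("lambda_int / lambda_min:", eigvals[1] / eigvals[0])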

  6. Residuation theory

    CERN Document Server

    Blyth, T S; Sneddon, I N; Stark, M

    1972-01-01

    Residuation Theory aims to contribute to literature in the field of ordered algebraic structures, especially on the subject of residual mappings. The book is divided into three chapters. Chapter 1 focuses on ordered sets; directed sets; semilattices; lattices; and complete lattices. Chapter 2 tackles Baer rings; Baer semigroups; Foulis semigroups; residual mappings; the notion of involution; and Boolean algebras. Chapter 3 covers residuated groupoids and semigroups; group homomorphic and isotone homomorphic Boolean images of ordered semigroups; Dubreil-Jacotin and Brouwer semigroups; and loli

  7. Residual analysis for spatial point processes

    DEFF Research Database (Denmark)

    Baddeley, A.; Turner, R.; Møller, Jesper

    process. Residuals are ascribed to locations in the empty background, as well as to data points of the point pattern. We obtain variance formulae, and study standardised residuals. There is also an analogy between our spatial residuals and the usual residuals for (non-spatial) generalised linear models...... or covariate effects. Q-Q plots of the residuals are effective in diagnosing interpoint interaction. Some existing ad hoc statistics of point patterns (quadrat counts, scan statistic, kernel smoothed intensity, Berman's diagnostic) are recovered as special cases....

  8. Influence of Family Structure on Variance Decomposition

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter

    Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained...

  9. Least-squares variance component estimation

    NARCIS (Netherlands)

    Teunissen, P.J.G.; Amiri-Simkooei, A.R.

    2007-01-01

    Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight

  10. Nonlinear Epigenetic Variance: Review and Simulations

    Science.gov (United States)

    Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.

    2010-01-01

    We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…

  11. 78 FR 14122 - Revocation of Permanent Variances

    Science.gov (United States)

    2013-03-04

    ... DEPARTMENT OF LABOR Occupational Safety and Health Administration [Docket No. OSHA-2011-0054] Revocation of Permanent Variances AGENCY: Occupational Safety and Health Administration (OSHA), Labor. ACTION: Notice of revocation. SUMMARY: With this notice, OSHA is revoking twenty-four (24) obsolete variances...

  12. Measurement Error Variance of Test-Day Obervations from Automatic Milking Systems

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik S

    2012-01-01

    Automated milking systems (AMS) are becoming more popular in dairy farms. In this paper we present an approach for estimation of residual error covariance matrices for AMS and conventional milking system (CMS) observations. The variances for other random effects are kept as defined in the evaluat...

  13. Portfolio optimization with mean-variance model

    Science.gov (United States)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk and can achieve the target rate of return. The mean-variance model has been proposed in portfolio optimization. The mean-variance model is an optimization model that aims to minimize the portfolio risk which is the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consists of weekly returns of 20 component stocks of FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition of the stocks is different. Moreover, investors can get the return at minimum level of risk with the constructed optimal mean-variance portfolio.
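
    A minimal version of the mean-variance model is a quadratic program: minimize the portfolio variance w'Σw subject to a target expected return and fully invested (here also long-only) weights. The sketch below uses simulated weekly returns and an arbitrary target in place of the FBMKLCI data.

        # Minimal mean-variance portfolio: minimize variance for a target expected return.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(8)
        n_assets = 20
        returns = rng.normal(0.002, 0.03, size=(260, n_assets))  # hypothetical weekly returns
        mu = returns.mean(axis=0)
        sigma = np.cov(returns, rowvar=False)
        target = np.quantile(mu, 0.6)                            # hypothetical target rate of return

        def portfolio_variance(w):
            return w @ sigma @ w

        constraints = [
            {"type": "eq", "fun": lambda w: w.sum() - 1.0},      # fully invested
            {"type": "eq", "fun": lambda w: w @ mu - target},    # target expected return
        ]
        bounds = [(0.0, 1.0)] * n_assets                         # long-only weights

        res = minimize(portfolio_variance, x0=np.full(n_assets, 1.0 / n_assets),
                       bounds=bounds, constraints=constraints)
        print("optimal weights:", np.round(res.x, 3))
        print("portfolio variance:", portfolio_variance(res.x))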

  14. Variance Risk Premia on Stocks and Bonds

    DEFF Research Database (Denmark)

    Mueller, Philippe; Sabtchevsky, Petar; Vedolin, Andrea

    Investors in fixed income markets are willing to pay a very large premium to be hedged against shocks in expected volatility and the size of this premium can be studied through variance swaps. Using thirty years of option and high-frequency data, we document the following novel stylized facts: First, exposure to bond market volatility is strongly priced with a Sharpe ratio of -1.8, 20% higher than what is observed in the equity market. Second, while there is strong co-movement between equity and bond market variance risk, there are distinct periods when the bond variance risk premium is different from the equity variance risk premium. Third, the conditional correlation between stock and bond market variance risk premium switches sign often and ranges between -60% and +90%. We then show that these stylized facts pose a challenge to standard consumption-based asset pricing models....

  15. Isolating the cow-specific part of residual energy intake in lactating dairy cows using random regressions.

    Science.gov (United States)

    Fischer, A; Friggens, N C; Berry, D P; Faverdin, P

    2017-12-11

    The ability to properly assess and accurately phenotype true differences in feed efficiency among dairy cows is key to the development of breeding programs for improving feed efficiency. The variability among individuals in feed efficiency is commonly characterised by the residual intake approach. Residual feed intake is represented by the residuals of a linear regression of intake on the corresponding quantities of the biological functions that consume (or release) energy. However, the residuals include both, model fitting and measurement errors as well as any variability in cow efficiency. The objective of this study was to isolate the individual animal variability in feed efficiency from the residual component. Two separate models were fitted, in one the standard residual energy intake (REI) was calculated as the residual of a multiple linear regression of lactation average net energy intake (NEI) on lactation average milk energy output, average metabolic BW, as well as lactation loss and gain of body condition score. In the other, a linear mixed model was used to simultaneously fit fixed linear regressions and random cow levels on the biological traits and intercept using fortnight repeated measures for the variables. This method split the predicted NEI in two parts: one quantifying the population mean intercept and coefficients, and one quantifying cow-specific deviations in the intercept and coefficients. The cow-specific part of predicted NEI was assumed to isolate true differences in feed efficiency among cows. NEI and associated energy expenditure phenotypes were available for the first 17 fortnights of lactation from 119 Holstein cows; all fed a constant energy-rich diet. Mixed models fitting cow-specific intercept and coefficients to different combinations of the aforementioned energy expenditure traits, calculated on a fortnightly basis, were compared. The variance of REI estimated with the lactation average model represented only 8% of the variance of
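
    A generic way to sketch the random-regression idea is a linear mixed model with fixed population-level regressions of intake on the energy sinks plus cow-specific random intercepts and slopes; the cow-specific part of the prediction is then the candidate efficiency measure. The fragment below uses statsmodels with hypothetical column names and simulated data, and includes only two energy-sink covariates rather than the full set fitted in the study.

        # Hedged sketch: fixed population-level coefficients plus cow-specific random deviations.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(9)
        n_cows, n_fortnights = 50, 17
        df = pd.DataFrame({
            "cow": np.repeat(np.arange(n_cows), n_fortnights),
            "milk_energy": rng.normal(100, 10, n_cows * n_fortnights),   # hypothetical energy sink
            "metabolic_bw": rng.normal(130, 8, n_cows * n_fortnights),   # hypothetical energy sink
        })
        cow_eff = rng.normal(0, 5, n_cows)[df["cow"].to_numpy()]          # simulated cow-specific deviations
        df["nei"] = (20 + 0.9 * df["milk_energy"] + 0.5 * df["metabolic_bw"]
                     + cow_eff + rng.normal(0, 3, len(df)))

        # Random intercept and random milk-energy slope per cow, on top of fixed regressions.
        model = smf.mixedlm("nei ~ milk_energy + metabolic_bw", df,
                            groups=df["cow"], re_formula="~milk_energy")
        fit = model.fit()

        # Cow-specific deviations; their contribution to predicted NEI is the candidate
        # measure of among-cow differences in feed efficiency.
        print(next(iter(fit.random_effects.items())))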

  16. Genetic and Environmental Variance Among F2 Families in a Commercial Breeding Program for Perennial Ryegrass (Lolium perenne L.)

    DEFF Research Database (Denmark)

    Fé, Dario; Greve-Pedersen, Morten; Jensen, Christian Sig

    2013-01-01

    greenhouse emissions and nitrogen loss. GWS model building includes 1) development of a robust quantitative genotyping method for an outcrossing species, 2) tailoring of multi-locational, multi-annual phenotype data, 3) association analysis and development of prediction models. As part of (2) the aim...... (parents), repeated effect of the same family and residual error. Results showed the presence of a significant genetic variance among the random factors, indicating the existence of a considerably high variance in the commercial population. This will provide good opportunities for future improvement...... programs based on GWS. Future work will focus on developing association models based on tailored phenotype data and genotype-by-sequencing-derived allele frequencies....

  17. Portfolio optimization using median-variance approach

    Science.gov (United States)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied to many decision-making problems, particularly portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the data are normally distributed, which is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve portfolio optimization. This approach caters for both normal and non-normal data distributions. Using this representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach produces a lower risk for each level of return than the mean-variance approach.

  18. Grammatical and lexical variance in English

    CERN Document Server

    Quirk, Randolph

    2014-01-01

    Written by one of Britain's most distinguished linguists, this book is concerned with the phenomenon of variance in English grammar and vocabulary across regional, social, stylistic and temporal space.

  19. Importance Sampling Variance Reduction in GRESS ATMOSIM

    Energy Technology Data Exchange (ETDEWEB)

    Wakeford, Daniel Tyler [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-04-26

    This document is intended to introduce the importance sampling method of variance reduction to a Geant4 user for application to neutral particle Monte Carlo transport through the atmosphere, as implemented in GRESS ATMOSIM.

  20. The variance of two game tree algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yanjun [Southern Methodist Univ., Dallas, TX (United States)

    1997-06-01

    This paper studies the variance of two game tree algorithms α-β search and SCOUT, in the stochastic i.i.d. model. The problem of determining the variance of the classic α-β search algorithm in the i.i.d. model has been long open. This paper resolves this problem partially. It is shown, by the martingale method, that the standard deviation of the weaker α-β search without deep cutoffs is of the same order as the expected number of leaves evaluated. A nearly-optimal upper bound on the variance of the general α-β search is obtained, and this upper bound yields an optimal bound if the current upper bound on the expected number of leaves evaluated by α-β search can be improved. A thorough treatment of the two-pass SCOUT algorithm is presented. The variance of the SCOUT algorithm is determined.

  1. A Mean variance analysis of arbitrage portfolios

    Science.gov (United States)

    Fang, Shuhong

    2007-03-01

    Based on the careful analysis of the definition of arbitrage portfolio and its return, the author presents a mean-variance analysis of the return of arbitrage portfolios, which implies that Korkie and Turtle's results (B. Korkie, H.J. Turtle, A mean-variance analysis of self-financing portfolios, Manage. Sci. 48 (2002) 427-443) are misleading. A practical example is given to show the difference between the arbitrage portfolio frontier and the usual portfolio frontier.

  2. The Variance Composition of Firm Growth Rates

    Directory of Open Access Journals (Sweden)

    Luiz Artur Ledur Brito

    2009-04-01

    Firms exhibit a wide variability in growth rates. This can be seen as another manifestation of the fact that firms are different from one another in several respects. This study investigated this variability using the variance components technique previously used to decompose the variance of financial performance. The main source of variation in growth rates, responsible for more than 40% of total variance, corresponds to individual, idiosyncratic firm aspects and not to industry, country, or macroeconomic conditions prevailing in specific years. Firm growth, similar to financial performance, is mostly unique to specific firms and not an industry or country related phenomenon. This finding also justifies using growth as an alternative outcome of superior firm resources and as a complementary dimension of competitive advantage. This also links this research with the resource-based view of strategy. Country was the second source of variation with around 10% of total variance. The analysis was done using the Compustat Global database with 80,320 observations, comprising 13,221 companies in 47 countries, covering the years of 1994 to 2002. It also compared the variance structure of growth to the variance structure of financial performance in the same sample.

  3. Genetic variants influencing phenotypic variance heterogeneity.

    Science.gov (United States)

    Ek, Weronica E; Rask-Andersen, Mathias; Karlsson, Torgny; Enroth, Stefan; Gyllensten, Ulf; Johansson, Åsa

    2018-03-01

    Most genetic studies identify genetic variants associated with disease risk or with the mean value of a quantitative trait. More rarely, genetic variants associated with variance heterogeneity are considered. In this study, we have identified such variance single-nucleotide polymorphisms (vSNPs) and examined if these represent biological gene × gene or gene × environment interactions or statistical artifacts caused by multiple linked genetic variants influencing the same phenotype. We have performed a genome-wide study, to identify vSNPs associated with variance heterogeneity in DNA methylation levels. Genotype data from over 10 million single-nucleotide polymorphisms (SNPs), and DNA methylation levels at over 430 000 CpG sites, were analyzed in 729 individuals. We identified vSNPs for 7195 CpG sites (P mean DNA methylation levels. We further showed that variance heterogeneity between genotypes mainly represents additional, often rare, SNPs in linkage disequilibrium (LD) with the respective vSNP and for some vSNPs, multiple low frequency variants co-segregating with one of the vSNP alleles. Therefore, our results suggest that variance heterogeneity of DNA methylation mainly represents phenotypic effects by multiple SNPs, rather than biological interactions. Such effects may also be important for interpreting variance heterogeneity of more complex clinical phenotypes.
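
    A common way to screen for the variance heterogeneity described above is a median-centred Levene (Brown-Forsythe) test of the trait across genotype groups. The sketch below, with simulated genotypes and methylation values, is only meant to illustrate that test; it is not the pipeline used in the study.

      # Sketch of a variance-heterogeneity (vSNP-style) test using the Brown-Forsythe
      # statistic (median-centred Levene). Data and column layout are hypothetical.
      import numpy as np
      from scipy.stats import levene

      rng = np.random.default_rng(1)
      n = 729
      genotypes = rng.integers(0, 3, size=n)             # 0/1/2 copies of the minor allele
      # Simulate methylation with genotype-dependent variance (a vSNP-like effect)
      methylation = rng.normal(0.5, 0.05 + 0.03 * genotypes)

      groups = [methylation[genotypes == g] for g in (0, 1, 2) if np.sum(genotypes == g) > 1]
      stat, p = levene(*groups, center="median")          # robust to non-normality
      print(f"Brown-Forsythe W = {stat:.2f}, p = {p:.2e}")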

  4. Temporal variance reverses the impact of high mean intensity of stress in climate change experiments.

    Science.gov (United States)

    Benedetti-Cecchi, Lisandro; Bertocci, Iacopo; Vaselli, Stefano; Maggi, Elena

    2006-10-01

    Extreme climate events produce simultaneous changes to the mean and to the variance of climatic variables over ecological time scales. While several studies have investigated how ecological systems respond to changes in mean values of climate variables, the combined effects of mean and variance are poorly understood. We examined the response of low-shore assemblages of algae and invertebrates of rocky seashores in the northwest Mediterranean to factorial manipulations of mean intensity and temporal variance of aerial exposure, a type of disturbance whose intensity and temporal patterning of occurrence are predicted to change with changing climate conditions. Effects of variance were often in the opposite direction of those elicited by changes in the mean. Increasing aerial exposure at regular intervals had negative effects both on diversity of assemblages and on percent cover of filamentous and coarsely branched algae, but greater temporal variance drastically reduced these effects. The opposite was observed for the abundance of barnacles and encrusting coralline algae, where high temporal variance of aerial exposure either reversed a positive effect of mean intensity (barnacles) or caused a negative effect that did not occur under low temporal variance (encrusting algae). These results provide the first experimental evidence that changes in mean intensity and temporal variance of climatic variables affect natural assemblages of species interactively, suggesting that high temporal variance may mitigate the ecological impacts of ongoing and predicted climate changes.

  5. Residue processing

    Energy Technology Data Exchange (ETDEWEB)

    Gieg, W.; Rank, V.

    1942-10-15

    In the first stage of coal hydrogenation, the liquid phase, light and heavy oils were produced, the latter containing the nonliquefied parts of the coal, the coal ash, and the catalyst substances. The task of residue processing was to extract from these so-called let-down oils whatever could be used as pasting oil for the coal. The object was to obtain a maximum oil extraction and a complete removal of the solids, because if the latter were returned to the process they would needlessly burden the reaction space. Separation of solids in residue processing could be accomplished by filtration, centrifugation, extraction, distillation, or low-temperature carbonization (L.T.C.). Filtration or centrifugation was most suitable, since a maximum oil yield could be expected from it and only a small portion of the let-down oil contained in the filtration or centrifugation residue had to be thermally treated. The most satisfactory centrifuge at this time was the Laval, which delivered liquid centrifuge residue and centrifuge oil continuously. By comparison, the semi-continuous centrifuges delivered plastic residues which were difficult to handle. Various apparatus, such as the spiral screw kiln and the ball kiln, were used for low-temperature carbonization of centrifuge residues. Both were based on the idea of carbonization in thin layers. Efforts were also being made to produce electrode carbon and briquette binder as by-products of the liquid coal phase.

  6. Selection for uniformity in livestock by exploiting genetic heterogeneity of environmental variance

    NARCIS (Netherlands)

    Mulder, H.A.; Bijma, P.; Hill, W.G.

    2008-01-01

    In some situations, it is worthwhile to change not only the mean, but also the variability of traits by selection. Genetic variation in residual variance may be utilised to improve uniformity in livestock populations by selection. The objective was to investigate the effects of genetic parameters,

  7. Inheritance of dermatoglyphic traits in twins: univariate and bivariate variance decomposition analysis.

    Science.gov (United States)

    Karmakar, Bibha; Malkin, Ida; Kobyliansky, Eugene

    2012-01-01

    Dermatoglyphic traits in a sample of twins were analyzed to estimate the resemblance between MZ and DZ twins and to evaluate the mode of inheritance using maximum likelihood-based variance decomposition analysis. The additive genetic variance component was significant in both sexes for four traits: PII, AB_RC, RC_HB, and ATD_L. AB_RC and RC_HB had significant sex differences in means, whereas PII and ATD_L did not. The bivariate variance decomposition analysis revealed that PII and RC_HB have a significant correlation in both the genetic and the residual components, and a significant correlation in the additive genetic variance between AB_RC and ATD_L was observed. The same analysis applied to the female subsample for the three traits RBL, RBR and AB_DIS showed that the additive genetic component of RBR was significant, the sibling component of AB_DIS was not significant, and the other components could not be constrained to zero. The three components (additive, sibling and residual) were significantly correlated between each pair of traits in the bivariate variance decomposition analysis.

  8. Relationship between Allan variances and Kalman Filter parameters

    Science.gov (United States)

    Vandierendonck, A. J.; Mcgraw, J. B.; Brown, R. G.

    1984-01-01

    A relationship was constructed between the Allan variance parameters (H_2, H_1, H_0, H_-1 and H_-2) and a Kalman filter model that would be used to estimate and predict clock phase, frequency and frequency drift. To begin, the meaning of these Allan variance parameters and how they are obtained for a given frequency source are reviewed. Although a subset of these parameters is obtained by measuring phase as a function of time rather than as a spectral density, they all represent phase noise spectral density coefficients, though not necessarily those of a rational spectral density. The phase noise spectral density is then transformed into a time-domain covariance model, which can in turn be used to derive the Kalman filter model parameters. Simulation results for that covariance model are presented and compared with the clock uncertainties predicted by the Allan variance parameters. A two-state Kalman filter model is then derived and the significance of each state is explained.
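
    For readers unfamiliar with how Allan variance values are obtained from measured phase, the following sketch implements the standard overlapping Allan variance estimator on simulated white-frequency-noise data; the sampling interval, noise level and averaging factors are arbitrary choices, not those of the paper.

      # Sketch: overlapping Allan variance computed from phase (time-error) samples.
      # This is the standard textbook estimator, not the paper's specific processing.
      import numpy as np

      def allan_variance(phase, tau0, m_list):
          """phase: array of time-error samples x_i (s), spaced tau0 seconds apart."""
          phase = np.asarray(phase, dtype=float)
          out = {}
          for m in m_list:
              d2 = phase[2 * m:] - 2 * phase[m:-m] + phase[:-2 * m]   # second differences
              out[m * tau0] = np.mean(d2 ** 2) / (2.0 * (m * tau0) ** 2)
          return out

      # Example: white frequency noise gives sigma_y^2(tau) proportional to 1/tau
      rng = np.random.default_rng(2)
      tau0 = 1.0
      freq = 1e-11 * rng.standard_normal(100_000)         # fractional frequency samples
      x = np.cumsum(freq) * tau0                          # integrate to phase
      for tau, avar in allan_variance(x, tau0, [1, 10, 100]).items():
          print(f"tau={tau:6.0f} s  sigma_y={np.sqrt(avar):.3e}")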

  9. A formal likelihood function for parameter and predictive inference of hydrologic models with correlated, heteroscedastic, and non-Gaussian errors

    NARCIS (Netherlands)

    Schoups, G.; Vrugt, J.A.

    2010-01-01

    Estimation of parameter and predictive uncertainty of hydrologic models has traditionally relied on several simplifying assumptions. Residual errors are often assumed to be independent and to be adequately described by a Gaussian probability distribution with a mean of zero and a constant variance.

  10. The influence of mean climate trends and climate variance on beaver survival and recruitment dynamics.

    Science.gov (United States)

    Campbell, Ruairidh D; Nouvellet, Pierre; Newman, Chris; Macdonald, David W; Rosell, Frank

    2012-09-01

    Ecologists are increasingly aware of the importance of environmental variability in natural systems. Climate change is affecting both the mean and the variability in weather and, in particular, the effect of changes in variability is poorly understood. Organisms are subject to selection imposed by both the mean and the range of environmental variation experienced by their ancestors. Changes in the variability in a critical environmental factor may therefore have consequences for vital rates and population dynamics. Here, we examine ≥90-year trends in different components of climate (precipitation mean and coefficient of variation (CV); temperature mean, seasonal amplitude and residual variance) and consider the effects of these components on survival and recruitment in a population of Eurasian beavers (n = 242) over 13 recent years. Within climatic data, no trends in precipitation were detected, but trends in all components of temperature were observed, with mean and residual variance increasing and seasonal amplitude decreasing over time. A higher survival rate was linked (in order of influence based on Akaike weights) to lower precipitation CV (kits, juveniles and dominant adults), lower residual variance of temperature (dominant adults) and lower mean precipitation (kits and juveniles). No significant effects were found on the survival of nondominant adults, although the sample size for this category was low. Greater recruitment was linked (in order of influence) to higher seasonal amplitude of temperature, lower mean precipitation, lower residual variance in temperature and higher precipitation CV. Both climate means and variance, thus proved significant to population dynamics; although, overall, components describing variance were more influential than those describing mean values. That environmental variation proves significant to a generalist, wide-ranging species, at the slow end of the slow-fast continuum of life histories, has broad implications for

  11. Simulation study on heterogeneous variance adjustment for observations with different measurement error variance

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander

    2013-01-01

    The Nordic Holstein yield evaluation model describes all available milk, protein and fat test-day yields from Denmark, Finland and Sweden. In its current form all variance components are estimated from observations recorded under conventional milking systems, and the model for heterogeneity of variance correction is developed for the same observations. As automated milking systems are becoming more popular, the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study, different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield the same genetic...

  12. Maximum Variance Hashing via Column Generation

    Directory of Open Access Journals (Sweden)

    Lei Luo

    2013-01-01

    item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.

  13. Integrating Variances into an Analytical Database

    Science.gov (United States)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming, including Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that could handle many stored forms and make them easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry was repeated several times in the database, that would mean the rule or requirement targeted by that variance had already been bypassed many times, so the requirement may not really be needed and should instead be changed to allow the variance's conditions permanently. The project was not restricted to the design and development of the database system; it also involved exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part of what contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  14. Realized Variance and Market Microstructure Noise

    DEFF Research Database (Denmark)

    Hansen, Peter R.; Lunde, Asger

    2006-01-01

    We study market microstructure noise in high-frequency data and analyze its implications for the realized variance (RV) under a general specification for the noise. We show that kernel-based estimators can unearth important characteristics of market microstructure noise and that a simple kernel-based estimator dominates the RV for the estimation of integrated variance (IV). An empirical analysis of the Dow Jones Industrial Average stocks reveals that market microstructure noise is time-dependent and correlated with increments in the efficient price. This has important implications for volatility...
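
    A minimal sketch of the two estimators discussed above is given below: the plain realized variance and a Bartlett-weighted realized-kernel estimator applied to simulated noisy prices. The noise level, bandwidth and sampling frequency are assumptions for illustration; the authors' kernel and bandwidth selection are more refined.

      # Sketch: realized variance (RV) and a simple Bartlett-weighted realized-kernel
      # estimator on simulated noisy high-frequency prices. Illustrative only.
      import numpy as np

      rng = np.random.default_rng(3)
      n = 23400                                    # one trading day of 1-second returns
      true_iv = 0.04 / 252                         # daily integrated variance (assumed)
      efficient = np.cumsum(rng.normal(0, np.sqrt(true_iv / n), n))
      observed = efficient + rng.normal(0, 5e-4, n)          # add microstructure noise
      r = np.diff(observed)

      rv = np.sum(r ** 2)                                    # plain realized variance

      H = 30                                                 # kernel bandwidth (assumed)
      rk = np.sum(r * r)
      for h in range(1, H + 1):
          gamma_h = np.sum(r[h:] * r[:-h])
          rk += 2.0 * (1.0 - h / (H + 1.0)) * gamma_h        # Bartlett weights

      print(f"true IV = {true_iv:.3e}, RV = {rv:.3e}, realized kernel = {rk:.3e}")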

  15. On l1 Mean and Variance Filtering

    OpenAIRE

    Wahlberg, Bo; Rojas, Cristian R.; Annergren, Mariette

    2011-01-01

    This paper addresses the problem of segmenting a time-series with respect to changes in the mean value or in the variance. The first case is when the time data is modeled as a sequence of independent and normal distributed random variables with unknown, possibly changing, mean value but fixed variance. The main assumption is that the mean value is piecewise constant in time, and the task is to estimate the change times and the mean values within the segments. The second case is when the mean ...
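
    One common convex formulation of the mean-filtering problem described above is the fused-lasso (total-variation) form, sketched below with cvxpy; the penalty weight and the change-point read-off rule are ad hoc choices, and the authors' l1 filtering formulation may differ in detail.

      # Sketch of l1 mean filtering: fit a piecewise-constant mean by penalizing the
      # l1 norm of its first differences (fused-lasso / total-variation form).
      import numpy as np
      import cvxpy as cp

      rng = np.random.default_rng(4)
      y = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])   # one change in mean

      m = cp.Variable(y.size)
      lam = 10.0                                            # regularization weight (assumed)
      objective = cp.Minimize(cp.sum_squares(y - m) + lam * cp.norm1(cp.diff(m)))
      cp.Problem(objective).solve()

      est = m.value
      changes = np.where(np.abs(np.diff(est)) > 0.5)[0]     # crude change-point read-off
      print("estimated change points near indices:", changes)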

  16. Decomposition of Variance for Spatial Cox Processes.

    Science.gov (United States)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-03-01

    Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.

  17. Residual risk

    African Journals Online (AJOL)

    ing the residual risk of transmission of HIV by blood transfusion. An epidemiological approach assumed that all HIV infections detected serologically in first-time donors were pre-existing or prevalent infections, and that all infections detected in repeat blood donors were new or incident infections. During 1986 - 1987,0,012%.

  18. Variance decomposition using an IRT measurement model.

    NARCIS (Netherlands)

    van den Berg, S.M.; Glas, C.A.W.; Boomsma, D.I.

    2007-01-01

    Large scale research projects in behaviour genetics and genetic epidemiology are often based on questionnaire or interview data. Typically, a number of items is presented to a number of subjects, the subjects' sum scores on the items are computed, and the variance of sum scores is decomposed into a

  19. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-01-01

    with additive or log linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees....

  20. Understanding gender variance in children and adolescents.

    Science.gov (United States)

    Simons, Lisa K; Leibowitz, Scott F; Hidalgo, Marco A

    2014-06-01

    Gender variance is an umbrella term used to describe gender identity, expression, or behavior that falls outside of culturally defined norms associated with a specific gender. In recent years, growing media coverage has heightened public awareness about gender variance in childhood and adolescence, and an increasing number of referrals to clinics specializing in care for gender-variant youth have been reported in the United States. Gender-variant expression, behavior, and identity may present in childhood and adolescence in a number of ways, and youth with gender variance have unique health needs. For those experiencing gender dysphoria, or distress encountered by the discordance between biological sex and gender identity, puberty is often an exceptionally challenging time. Pediatric primary care providers may be families' first resource for education and support, and they play a critical role in supporting the health of youth with gender variance by screening for psychosocial problems and health risks, referring for gender-specific mental health and medical care, and providing ongoing advocacy and support. Copyright 2014, SLACK Incorporated.

  1. Linear transformations of variance/covariance matrices

    NARCIS (Netherlands)

    Parois, P.J.A.; Lutz, M.

    2011-01-01

    Many applications in crystallography require the use of linear transformations on parameters and their standard uncertainties. While the transformation of the parameters is textbook knowledge, the transformation of the standard uncertainties is more complicated and needs the full variance/covariance

  2. Variance adaptation in navigational decision making

    Science.gov (United States)

    Gershow, Marc; Gepner, Ruben; Wolk, Jason; Wadekar, Digvijay

    Drosophila larvae navigate their environments using a biased random walk strategy. A key component of this strategy is the decision to initiate a turn (change direction) in response to declining conditions. We modeled this decision as the output of a Linear-Nonlinear-Poisson cascade and used reverse correlation with visual and fictive olfactory stimuli to find the parameters of this model. Because the larva responds to changes in stimulus intensity, we used stimuli with uncorrelated normally distributed intensity derivatives, i.e. Brownian processes, and took the stimulus derivative as the input to our LNP cascade. In this way, we were able to present stimuli with 0 mean and controlled variance. We found that the nonlinear rate function depended on the variance in the stimulus input, allowing larvae to respond more strongly to small changes in low-noise compared to high-noise environments. We measured the rate at which the larva adapted its behavior following changes in stimulus variance, and found that larvae adapted more quickly to increases in variance than to decreases, consistent with the behavior of an optimal Bayes estimator. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.

  3. Controlling production variances in complex business processes

    NARCIS (Netherlands)

    Griffioen, Paul; Christiaanse, Rob; Hulstijn, Joris; Cerone, Antonio; Roveri, Marco

    2018-01-01

    Products can consist of many sub-assemblies, and small disturbances in the process can lead to larger negative effects downstream. Such variances in production are a challenge from a quality control and operational risk management perspective, but they also distort the assurance processes from an

  4. 40 CFR 142.41 - Variance request.

    Science.gov (United States)

    2010-07-01

    40 CFR Part 142 (Protection of Environment; Environmental Protection Agency; Water Programs), Section 1415(a) of the Act, § 142.41 Variance request. A supplier of water may request the granting of a ... initiation of the connection of the alternative raw water source or improvement of the existing raw water source ...

  5. Formative Use of Intuitive Analysis of Variance

    Science.gov (United States)

    Trumpower, David L.

    2013-01-01

    Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, student's IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In both…

  6. Variance Swap Replication: Discrete or Continuous?

    Directory of Open Access Journals (Sweden)

    Fabien Le Floc’h

    2018-02-01

    Full Text Available The popular replication formula to price variance swaps assumes continuity of traded option strikes. In practice, however, there is only a discrete set of option strikes traded on the market. We present here different discrete replication strategies and explain why the continuous replication price is more relevant.
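
    The discrete replication the abstract refers to can be written down in a few lines: the variance-swap strike is approximated by a strike-weighted sum of out-of-the-money option prices plus a convexity correction. All market inputs below are made up for illustration.

      # Sketch of the discrete static replication of a variance-swap strike:
      # K_var ~ (2/T) * sum_i (dK_i / K_i^2) * e^{rT} * Q(K_i) - (1/T) * (F/K_0 - 1)^2.
      # Quotes, rates and spacings are hypothetical inputs.
      import numpy as np

      T, r = 0.5, 0.01                       # maturity (years) and risk-free rate (assumed)
      F = 100.0                              # forward price (assumed)
      strikes = np.arange(60.0, 145.0, 5.0)
      # Q(K): out-of-the-money option prices (puts below F, calls above), made-up quotes
      otm_prices = np.array([0.1, 0.2, 0.35, 0.6, 1.0, 1.6, 2.5, 3.8, 5.5,
                             4.2, 2.9, 1.9, 1.2, 0.7, 0.4, 0.25, 0.15])

      dK = np.gradient(strikes)              # central strike spacings
      k0 = strikes[strikes <= F].max()       # first strike at or below the forward
      kvar = (2.0 / T) * np.sum(dK / strikes**2 * np.exp(r * T) * otm_prices) \
             - (1.0 / T) * (F / k0 - 1.0) ** 2

      print(f"replicated variance-swap strike: {kvar:.4f}  (vol ~ {np.sqrt(kvar):.2%})")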

  7. The size variance relationship of business firm growth rates.

    Science.gov (United States)

    Riccaboni, Massimo; Pammolli, Fabio; Buldyrev, Sergey V; Ponta, Linda; Stanley, H E

    2008-12-16

    The relationship between the size and the variance of firm growth rates is known to follow an approximate power-law behavior σ(S) ~ S^(-β(S)), where S is the firm size and β(S) ≈ 0.2 is an exponent that weakly depends on S. Here, we show how a model of proportional growth, which treats firms as classes composed of various numbers of units of variable size, can explain this size-variance dependence. In general, the model predicts that β(S) must exhibit a crossover from β(0) = 0 to β(∞) = 1/2. For a realistic set of parameters, β(S) is approximately constant and can vary from 0.14 to 0.2 depending on the average number of units in the firm. We test the model with a unique industry-specific database in which firm sales are given in terms of the sum of the sales of all their products. We find that the model is consistent with the empirically observed size-variance relationship.
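
    The size-variance exponent can be estimated directly by binning firms by size and regressing the log standard deviation of growth rates on log size. The sketch below does this on synthetic data generated with β = 0.2; it is not the paper's estimation procedure or data.

      # Sketch: estimate the size-variance exponent beta from the slope of
      # log(sd of growth rate) versus log(size) across size bins. Synthetic data.
      import numpy as np

      rng = np.random.default_rng(5)
      size = 10 ** rng.uniform(1, 6, 50_000)                  # firm sizes over 5 decades
      growth = rng.normal(0, 0.3 * size ** -0.2)              # sigma(S) ~ S^(-0.2)

      bins = np.logspace(1, 6, 21)
      idx = np.digitize(size, bins)
      log_s, log_sd = [], []
      for b in range(1, len(bins)):
          g = growth[idx == b]
          if g.size > 50:
              log_s.append(np.log(size[idx == b].mean()))
              log_sd.append(np.log(g.std(ddof=1)))

      beta = -np.polyfit(log_s, log_sd, 1)[0]                 # minus the fitted slope
      print(f"estimated beta ~ {beta:.3f}")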

  8. Modality-Driven Classification and Visualization of Ensemble Variance

    Energy Technology Data Exchange (ETDEWEB)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.

    2016-10-01

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
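
    A simple way to implement the modality-classification idea sketched above is to compare 1- and 2-component Gaussian mixtures per location by BIC. The snippet below does exactly that on two artificial ensembles; the BIC threshold and the mixture-based criterion are assumptions, not the paper's classifier or its confidence metrics.

      # Sketch: label a location as unimodal or bimodal by comparing BIC of 1- and
      # 2-component Gaussian mixtures fitted to the ensemble values at that location.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      def modality(values, threshold=10.0):
          x = np.asarray(values, float).reshape(-1, 1)
          bic = [GaussianMixture(n_components=k, random_state=0).fit(x).bic(x) for k in (1, 2)]
          return "bimodal" if bic[0] - bic[1] > threshold else "unimodal"

      rng = np.random.default_rng(6)
      ensemble_a = rng.normal(10.0, 1.0, 64)                            # single tendency
      ensemble_b = np.concatenate([rng.normal(5, 0.5, 32), rng.normal(15, 0.5, 32)])
      print(modality(ensemble_a), modality(ensemble_b))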

  9. Residual basins

    International Nuclear Information System (INIS)

    D'Elboux, C.V.; Paiva, I.B.

    1980-01-01

    Exploration for uranium carried out over a major portion of the Rio Grande do Sul Shield has revealed a number of small residual basins developed along glacially eroded channels of pre-Permian age. Mineralization of uranium occurs in two distinct sedimentary units. The lower unit consists of rhythmites overlain by a sequence of black shales, siltstones and coal seams, while the upper one is dominated by sandstones of probable fluvial origin. (Author) [pt

  10. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization.

    Science.gov (United States)

    Dazard, Jean-Eudes; Xu, Hua; Rao, J Sunil

    2011-01-01

    We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets (p ≫ n paradigm), such as in 'omics'-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real 'omics' test datasets, (v) computationally efficient implementation, using C interfacing, and an option for parallel computing, (vi) manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR ('Mean-Variance Regularization'), downloadable from the CRAN.

  11. Expected Stock Returns and Variance Risk Premia

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Zhou, Hao

    We find that the difference between implied and realized variation, or the variance risk premium, is able to explain more than fifteen percent of the ex-post time series variation in quarterly excess returns on the market portfolio over the 1990 to 2005 sample period, with high (low) premia predicting ... to daily, data. Our findings suggest that temporal variation in both risk-aversion and volatility-risk play an important role in determining stock market returns.

  12. Bias-variance decomposition in Genetic Programming

    Directory of Open Access Journals (Sweden)

    Kowaliw Taras

    2016-01-01

    Full Text Available We study properties of Linear Genetic Programming (LGP through several regression and classification benchmarks. In each problem, we decompose the results into bias and variance components, and explore the effect of varying certain key parameters on the overall error and its decomposed contributions. These parameters are the maximum program size, the initial population, and the function set used. We confirm and quantify several insights into the practical usage of GP, most notably that (a the variance between runs is primarily due to initialization rather than the selection of training samples, (b parameters can be reasonably optimized to obtain gains in efficacy, and (c functions detrimental to evolvability are easily eliminated, while functions well-suited to the problem can greatly improve performance—therefore, larger and more diverse function sets are always preferable.
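
    The empirical bias/variance decomposition used in such studies can be reproduced generically by repeating the learning procedure many times and averaging over a test grid. In the sketch below, polynomial regression stands in for evolved programs; with LGP, each run would instead re-initialize the population and/or the training sample.

      # Sketch: empirical bias/variance decomposition over repeated runs, with
      # polynomial regression as a stand-in for a GP learner.
      import numpy as np

      rng = np.random.default_rng(7)
      def target(x):
          return np.sin(2 * np.pi * x)

      x_test = np.linspace(0, 1, 200)
      runs = 200
      preds = np.empty((runs, x_test.size))
      for r in range(runs):
          x_tr = rng.uniform(0, 1, 30)
          y_tr = target(x_tr) + rng.normal(0, 0.3, x_tr.size)
          coef = np.polyfit(x_tr, y_tr, deg=5)                  # model of fixed capacity
          preds[r] = np.polyval(coef, x_test)

      mean_pred = preds.mean(axis=0)
      bias2 = np.mean((mean_pred - target(x_test)) ** 2)        # squared bias, averaged over x
      variance = np.mean(preds.var(axis=0))                     # variance across runs
      print(f"bias^2 = {bias2:.4f}, variance = {variance:.4f}")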

  13. Source Characterization by the Allan Variance

    Science.gov (United States)

    Gattano, C.; Lambert, S.

    2016-12-01

    Until now, the main criteria for selecting geodetic sources were based on astrometric stability and structure at 8 GHz [Fey2015]. But with more observations and the increase of accuracy, the statistical tools used to determine this stability become inappropriate with regard to sudden motions of the radiocenter. In this work, we propose to replace these tools by the Allan variance [Allan1966], first used on VLBI sources by M. Feissel-Vernier [Feissel2003], leading to a new classification of sources into three groups according to the shape of the Allan variance. In parallel, we combine two catalogs, the Large Quasar Astrometric Catalogue [Souchay2015] and the Optical Characteristics of Astrometric Radio Sources [Malkin2013], in order to gather most physical characteristics known about these VLBI targets. By doing so, we may reveal physical criteria that may be useful in the selection of new targets for future VLBI observations.

  14. The value of travel time variance

    OpenAIRE

    Fosgerau, Mogens; Engelson, Leonid

    2010-01-01

    This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers ...

  15. Fundamentals of exploratory analysis of variance

    CERN Document Server

    Hoaglin, David C; Tukey, John W

    2009-01-01

    The analysis of variance is presented as an exploratory component of data analysis, while retaining the customary least squares fitting methods. Balanced data layouts are used to reveal key ideas and techniques for exploration. The approach emphasizes both the individual observations and the separate parts that the analysis produces. Most chapters include exercises, and the appendices give selected percentage points of the Gaussian, t, F, chi-squared and studentized range distributions.

  16. Prolonged persistence of PCR-detectable minimal residual disease after diagnosis or first relapse predicts poor outcome in childhood B-precursor acute lymphoblastic leukemia

    NARCIS (Netherlands)

    Steenbergen, E. J.; Verhagen, O. J.; van Leeuwen, E. F.; van den Berg, H.; Behrendt, H.; Slater, R. M.; von dem Borne, A. E.; van der Schoot, C. E.

    1995-01-01

    The follow up of minimal residual disease (MRD) in childhood B-precursor ALL by polymerase chain reaction (PCR) may be of help for further stratification of treatment protocols, to improve outcome. However, the clinical relevance of this approach has yet to be defined. We report the retrospective

  17. Sequence analysis and structure prediction of enoyl-CoA hydratase from Avicennia marina: implication of various amino acid residues on substrate-enzyme interactions.

    Science.gov (United States)

    Jabeen, Uzma; Salim, Asmat

    2013-10-01

    Enoyl-CoA hydratase catalyzes the hydration of 2-trans-enoyl-CoA into 3-hydroxyacyl-CoA. The present study focuses on the correlation between the functional and structural aspects of enoyl-CoA hydratase from Avicennia marina. We have used bioinformatics tools to construct and analyze 3D homology models of A. marina enoyl-CoA hydratase (AMECH) bound to different substrates and inhibitors and studied the residues involved in the ligand-enzyme interaction. Structural information obtained from the models was compared with those of the reported crystal structures. We observed that the overall folds were similar; however, AMECH showed few distinct structural changes which include structural variation in the mobile loop, formation and loss of certain interactions between the active site residues and substrates. Some changes were also observed within specific regions of the enzyme. Glu106 is almost completely conserved in sequences of the isomerases/hydratases including AMECH while Glu86 which is the other catalytic residue in most of the isomerases/hydratases is replaced by Gly and shows no interaction with the substrate. Asp114 is located within 4Å distance of the catalytic water which makes it a probable candidate for the second catalytic residue in AMECH. Another prominent feature of AMECH is the presence of structurally distinct mobile loop having a completely different coordination with the hydrophobic binding pocket of acyl portion of the substrate. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Estimating quadratic variation using realized variance

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2002-01-01

    This paper looks at some recent work on estimating quadratic variation using realized variance (RV) - that is, sums of M squared returns. This econometrics has been motivated by the advent of the common availability of high-frequency financial return data. When the underlying process is a semimartingale we recall the fundamental result that RV is a consistent (as M → ∞) estimator of quadratic variation (QV). We express concern that without additional assumptions it seems difficult to give any measure of uncertainty of the RV in this context. The position dramatically changes when we work with a rather general SV model - which is a special case of the semimartingale model. Then QV is integrated variance and we can derive the asymptotic distribution of the RV and its rate of convergence. These results do not require us to specify a model for either the drift or volatility functions, although we...

  19. Discussion on variance reduction technique for shielding

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    As part of the engineering design activity of the International Thermonuclear Experimental Reactor (ITER), a shielding experiment on type 316 stainless steel (SS316) and on the combined SS316/water system was carried out using the D-T neutron source of FNS at the Japan Atomic Energy Research Institute. In the analyses, however, enormous working and computing time was required to determine the Weight Window parameters, and variance reduction with the Weight Window method of the MCNP code proved limited and cumbersome. To avoid this difficulty, the effectiveness of variance reduction by the cell importance method was investigated. The calculation conditions for all cases are given. As results, the distribution of the fractional standard deviation (FSD) of the neutron and gamma-ray fluxes along the shield depth is reported. An optimal importance assignment exists: when the importance is increased at the same rate at which the neutron or gamma-ray flux attenuates, optimal variance reduction is obtained. (K.I.)
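
    The cell importance idea discussed above can be illustrated with a deliberately crude toy model: a purely absorbing 1D slab where particles are split two-for-one each time they penetrate one cell deeper. The geometry, splitting ratio and statistics below are arbitrary and far simpler than the MCNP Weight Window treatment.

      # Toy sketch of cell-importance splitting for a 1D absorbing slab. Splitting at
      # cell boundaries keeps the deep population larger, reducing the variance of the
      # leakage estimate for the same number of source histories (unbiased, since each
      # split halves the statistical weight).
      import numpy as np

      rng = np.random.default_rng(8)
      n_cells = 5
      p_survive = np.exp(-1.0)                         # survival probability per cell

      def leakage(histories, splitting):
          scores = np.zeros(histories)
          for h in range(histories):
              stack = [(0, 1.0)]                       # (cell index, statistical weight)
              while stack:
                  cell, w = stack.pop()
                  while cell < n_cells:
                      if rng.random() > p_survive:     # absorbed in this cell
                          break
                      cell += 1
                      if splitting and cell < n_cells:
                          w *= 0.5                     # split into two lighter copies
                          stack.append((cell, w))
                  else:
                      scores[h] += w                   # crossed the whole slab: leaked
          return scores

      for name, s in (("analog", leakage(20_000, False)), ("splitting", leakage(20_000, True))):
          err = s.std(ddof=1) / np.sqrt(s.size)
          print(f"{name:9s} leakage = {s.mean():.3e} +/- {err:.1e}")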

  20. The Genealogical Consequences of Fecundity Variance Polymorphism

    Science.gov (United States)

    Taylor, Jesse E.

    2009-01-01

    The genealogical consequences of within-generation fecundity variance polymorphism are studied using coalescent processes structured by genetic backgrounds. I show that these processes have three distinctive features. The first is that the coalescent rates within backgrounds are not jointly proportional to the infinitesimal variance, but instead depend only on the frequencies and traits of genotypes containing each allele. Second, the coalescent processes at unlinked loci are correlated with the genealogy at the selected locus; i.e., fecundity variance polymorphism has a genomewide impact on genealogies. Third, in diploid models, there are infinitely many combinations of fecundity distributions that have the same diffusion approximation but distinct coalescent processes; i.e., in this class of models, ancestral processes and allele frequency dynamics are not in one-to-one correspondence. Similar properties are expected to hold in models that allow for heritable variation in other traits that affect the coalescent effective population size, such as sex ratio or fecundity and survival schedules. PMID:19433628

  1. Verification of the history-score moment equations for weight-window variance reduction

    Energy Technology Data Exchange (ETDEWEB)

    Solomon, Clell J [Los Alamos National Laboratory; Sood, Avneet [Los Alamos National Laboratory; Booth, Thomas E [Los Alamos National Laboratory; Shultis, J. Kenneth [KANSAS STATE UNIV.

    2010-12-06

    The history-score moment equations that describe the moments of a Monte Carlo score distribution have been extended to weight-window variance reduction. The resulting equations have been solved deterministically to calculate the population variance of the Monte Carlo score distribution for a single tally. Results for one- and two-dimensional one-group problems are presented that predict the population variances to less than 1% deviation from the Monte Carlo results for one-dimensional problems and between 1-2% for two-dimensional problems.

  2. An elementary components of variance analysis for multi-center quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1977-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality control (QC) studies. Statistical analysis methods for such studies using an 'analysis of variance with components of variance estimation' are discussed. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Components of variance analysis also provides an intelligent way to combine the results of several QC samples run at different levels, from which we may decide if any component varies systematically with dose level; if not, pooling of estimates becomes possible. We consider several possible relationships of standard deviation to the laboratory mean. Each relationship corresponds to an underlying statistical model and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine if an appropriate model has been chosen, although the exact functional relationship of standard deviation to lab mean may be difficult to establish. Appropriate graphical display of the data aids in visual understanding of the data. A plot of the ranked standard deviation vs. ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean. (orig.)
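
    For a balanced nested layout (laboratories, assays within laboratory, replicates within assay), the components of variance can be recovered from the expected mean squares as sketched below; the design dimensions and variance values are simulated, not taken from an actual RIA quality control study.

      # Sketch: components-of-variance estimation for a balanced nested design using
      # the classical expected-mean-squares (method-of-moments) estimators.
      import numpy as np

      rng = np.random.default_rng(9)
      L, A, n = 12, 4, 3                        # labs, assays per lab, replicates per assay
      sd_lab, sd_assay, sd_resid = 0.15, 0.10, 0.05

      lab_eff = rng.normal(0, sd_lab, L)
      assay_eff = rng.normal(0, sd_assay, (L, A))
      y = 2.0 + lab_eff[:, None, None] + assay_eff[:, :, None] + rng.normal(0, sd_resid, (L, A, n))

      grand = y.mean()
      lab_means = y.mean(axis=(1, 2))
      assay_means = y.mean(axis=2)

      ms_lab   = A * n * np.sum((lab_means - grand) ** 2) / (L - 1)
      ms_assay = n * np.sum((assay_means - lab_means[:, None]) ** 2) / (L * (A - 1))
      ms_error = np.sum((y - assay_means[:, :, None]) ** 2) / (L * A * (n - 1))

      var_resid = ms_error                                  # within-assay
      var_assay = max((ms_assay - ms_error) / n, 0.0)       # between-assay within lab
      var_lab   = max((ms_lab - ms_assay) / (A * n), 0.0)   # between-laboratory
      print(f"between-lab {var_lab:.4f}, between-assay {var_assay:.4f}, within-assay {var_resid:.4f}")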

  3. Allowing variance may enlarge the safe operating space for exploited ecosystems

    OpenAIRE

    Carpenter, Stephen R.; Brock, William A.; Folke, Carl; van Nes, Egbert H.; Scheffer, Marten

    2015-01-01

    Humans depend on ecosystems for food, water, pharmaceuticals, and other benefits. Ecosystem managers, industries, and the public want these benefits to be predictable and therefore have low variance over time. However, control of variance for short-term benefits leads to long-term fragility. Here we show that management to reduce short-term variability can drive ecosystems into degraded states, leading to long-term declines of ecosystem services. These risks can be avoided by strategies that ...

  4. Validity of a Residualized Dependent Variable after Pretest Covariance Adjustments: Still the Same Variable?

    Science.gov (United States)

    Nimon, Kim; Henson, Robin K.

    2015-01-01

    The authors empirically examined whether the validity of a residualized dependent variable after covariance adjustment is comparable to that of the original variable of interest. When variance of a dependent variable is removed as a result of one or more covariates, the residual variance may not reflect the same meaning. Using the pretest-posttest…

  5. Minimum variance and variance of outgoing quality limit MDS-1(c1, c2) plans

    Science.gov (United States)

    Raju, C.; Vidya, R.

    2016-06-01

    In this article, the outgoing quality (OQ) and total inspection (TI) of multiple deferred state sampling plans MDS-1(c1,c2) are studied. It is assumed that the inspection is rejection rectification. Procedures for designing MDS-1(c1,c2) sampling plans with minimum variance of OQ and TI are developed. A procedure for obtaining a plan for a designated upper limit for the variance of the OQ (VOQL) is outlined.

  6. Visual SLAM Using Variance Grid Maps

    Science.gov (United States)

    Howard, Andrew B.; Marks, Tim K.

    2011-01-01

    An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance
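
    The elevation variance map at the core of the method can be maintained incrementally with Welford's online update, as in the sketch below; the grid dimensions and the mapping from stereo range data to cells are placeholders, and the particle filter over poses is omitted.

      # Sketch: per-cell elevation variance grid maintained with Welford's online update
      # as new elevation samples arrive. Not the Gamma-SLAM implementation.
      import numpy as np

      class VarianceGrid:
          def __init__(self, shape):
              self.count = np.zeros(shape)
              self.mean = np.zeros(shape)
              self.m2 = np.zeros(shape)           # running sum of squared deviations

          def update(self, row, col, elevation):
              self.count[row, col] += 1
              delta = elevation - self.mean[row, col]
              self.mean[row, col] += delta / self.count[row, col]
              self.m2[row, col] += delta * (elevation - self.mean[row, col])

          def variance(self):
              with np.errstate(invalid="ignore", divide="ignore"):
                  return np.where(self.count > 1, self.m2 / (self.count - 1), np.nan)

      grid = VarianceGrid((100, 100))
      rng = np.random.default_rng(10)
      for _ in range(5000):                        # fake elevation hits on one rough cell
          grid.update(10, 20, rng.normal(1.0, 0.3))
      print("cell (10, 20) elevation variance ~", grid.variance()[10, 20])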

  7. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of Matérn covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.

  8. Markov bridges, bisection and variance reduction

    DEFF Research Database (Denmark)

    Asmussen, Søren; Hobolth, Asger

    Time-continuous Markov jump processes are a popular modelling tool in disciplines ranging from computational finance and operations research to human genetics and genomics. The data is often sampled at discrete points in time, and it can be useful to simulate sample paths between the data points. In this paper we firstly consider the problem of generating sample paths from a continuous-time Markov chain conditioned on the endpoints, using a new algorithm based on the idea of bisection. Secondly we study the potential of the bisection algorithm for variance reduction. In particular, examples are presented...

  9. RESIDUAL RISK ASSESSMENTS - RESIDUAL RISK ...

    Science.gov (United States)

    This source category, previously subjected to a technology-based standard, will be examined to determine if health or ecological risks are significant enough to warrant further regulation for Coke Ovens. These assessments utilize existing models and data bases to examine the multi-media and multi-pollutant impacts of air toxics emissions on human health and the environment. Details on the assessment process and methodologies can be found in EPA's Residual Risk Report to Congress issued in March of 1999 (see web site). To assess the health risks imposed by air toxics emissions from Coke Ovens to determine if control technology standards previously established are adequately protecting public health.

  10. Variance-based Salt Body Reconstruction

    KAUST Repository

    Ovcharenko, Oleg

    2017-05-26

    Seismic inversions of salt bodies are challenging when updating velocity models based on Born approximation-inspired gradient methods. We propose a variance-based method for velocity model reconstruction in regions complicated by massive salt bodies. The novel idea lies in retrieving useful information from simultaneous updates corresponding to different single frequencies. Instead of the commonly used averaging of single-iteration monofrequency gradients, our algorithm iteratively reconstructs salt bodies in an outer loop based on updates from a set of multiple frequencies after a few iterations of full-waveform inversion. The variance among these updates is used to identify areas where considerable cycle-skipping occurs. In such areas, we update velocities by interpolating maximum velocities within a certain region. The result of several recursive interpolations is later used as a new starting model to improve results of conventional full-waveform inversion. An application on part of the BP 2004 model highlights the evolution of the proposed approach and demonstrates its effectiveness.
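
    The variance-screening step described above can be mimicked in a few lines: compute the per-cell variance of the monofrequency updates and flag cells above a quantile threshold before filling them with a high velocity. The arrays and thresholds below are synthetic stand-ins, not the authors' full-waveform inversion workflow.

      # Sketch: flag likely cycle-skipped regions by the variance of single-frequency
      # model updates, then apply a crude maximum-velocity fill in those regions.
      import numpy as np

      rng = np.random.default_rng(11)
      nz, nx, nfreq = 60, 120, 5
      updates = rng.normal(0, 1e-2, (nfreq, nz, nx))        # stand-ins for FWI updates
      updates[:, 20:40, 50:80] += rng.normal(0, 0.2, (nfreq, 20, 30))   # disagreement zone

      variance = updates.var(axis=0)                        # variance across frequencies
      mask = variance > np.quantile(variance, 0.95)         # high-variance (salt?) region

      velocity = 2.0 + 0.02 * np.arange(nz)[:, None] * np.ones((1, nx))  # background model
      velocity[mask] = velocity.max()                       # crude fill with maximum velocity
      print(f"flagged {mask.sum()} of {mask.size} cells as high-variance")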

  11. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction

    Directory of Open Access Journals (Sweden)

    Ling Huang

    2017-02-01

    Full Text Available Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of ionosphere, which intrinsically assume that the ionosphere field is stochastically stationary but does not take the random observational errors into account. In this paper, by treating the spatial statistical information on ionosphere as prior knowledge and based on Total Electron Content (TEC semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of ionosphere and TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach have good agreement with the other methods, ranging from 10 to 80 TEC Unit (TECU, 1 TECU = 1 × 10^16 electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation around 3 TECU than others. The residual results show that the interpolation precision of the
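
    The basic Kriging interpolation step underlying the method can be sketched as an ordinary kriging solve with an assumed covariance model, as below. The station layout, TEC values and exponential covariance parameters are invented for illustration; the paper's variance-component estimation is not implemented here.

      # Sketch: ordinary kriging of TEC values with an assumed exponential covariance.
      # This is the plain interpolation step only, not the proposed VCE-Kriging method.
      import numpy as np

      rng = np.random.default_rng(12)
      stations = rng.uniform(0, 10, (25, 2))                  # station coordinates (deg)
      tec = 30 + 3 * np.sin(stations[:, 0]) + rng.normal(0, 1, 25)   # synthetic TEC (TECU)

      def cov(h, sill=9.0, corr_range=4.0):                   # exponential covariance (assumed)
          return sill * np.exp(-h / corr_range)

      def ordinary_kriging(target):
          d = np.linalg.norm(stations[:, None, :] - stations[None, :, :], axis=2)
          n = stations.shape[0]
          A = np.ones((n + 1, n + 1)); A[:n, :n] = cov(d); A[n, n] = 0.0
          b = np.ones(n + 1); b[:n] = cov(np.linalg.norm(stations - target, axis=1))
          w = np.linalg.solve(A, b)                           # kriging weights + Lagrange multiplier
          return w[:n] @ tec                                  # kriged TEC at the target point

      print(f"kriged TEC at (5, 5): {ordinary_kriging(np.array([5.0, 5.0])):.1f} TECU")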

  12. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction.

    Science.gov (United States)

    Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan

    2017-02-27

    Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of ionosphere, which intrinsically assume that the ionosphere field is stochastically stationary but does not take the random observational errors into account. In this paper, by treating the spatial statistical information on ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of ionosphere and TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach have good agreement with the other methods, ranging from 10 to 80 TEC Unit (TECU, 1 TECU = 1 × 10^16 electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation around 3 TECU than others. The residual results show that the interpolation precision of the new proposed

  14. Residual nilpotence and residual solubility of groups

    International Nuclear Information System (INIS)

    Mikhailov, R V

    2005-01-01

    The properties of the residual nilpotence and the residual solubility of groups are studied. The main objects under investigation are the class of residually nilpotent groups such that each central extension of these groups is also residually nilpotent and the class of residually soluble groups such that each Abelian extension of these groups is residually soluble. Various examples of groups not belonging to these classes are constructed by homological methods and methods of the theory of modules over group rings. Several applications of the theory under consideration are presented and problems concerning the residual nilpotence of one-relator groups are considered.

  15. Prediction of residual stresses induced by TIG welding of a martensitic steel (X10CrMoVNb9-1)

    International Nuclear Information System (INIS)

    Roux, G.M.

    2007-11-01

    Within the frame of the development of very high temperature nuclear reactors (VHTR) with gas as heat transfer fluid, some technological challenges are to be faced because of these high temperatures, notably the selection of the material used for the reactor vessel and its welding process. This research thesis aims at developing and validating numerical tools and behaviour models for the thermal-metallurgical-mechanical simulation of the multi-pass TIG welding process. The first part describes the development of simple welding tests (Disk-Spot and Disk-Cycle), the use of temperature and displacement measurement during these tests, and deep residual stress measurements, as well as the identification of the thermal limit conditions for the Disk-Spot test. It then discusses the choice and the identification of the thermal-metallurgical-mechanical behaviour model, with a particular attention to phase transformations and to their coupling with thermal and mechanical aspects. Experimental and simulation results are compared, notably in terms of residual stresses. The numerical implementation of the behaviour model and its integration into the CAST3M finite element software are also described

  16. Use of genomic models to study genetic control of environmental variance

    DEFF Research Database (Denmark)

    Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel

    2011-01-01

    . The genomic model commonly found in the literature, with marker effects affecting mean only, is extended to investigate putative effects at the level of the environmental variance. Two classes of models are proposed and their behaviour, studied using simulated data, indicates that they are capable...... of detecting genetic variation at the level of mean and variance. Implementation is via Markov chain Monte Carlo (McMC) algorithms. The models are compared in terms of a measure of global fit, in their ability to detect QTL effects and in terms of their predictive power. The models are subsequently fitted...... to back fat thickness data in pigs. The analysis of back fat thickness shows that the data support genomic models with effects on the mean but not on the variance. The relative sizes of experiment necessary to detect effects on mean and variance is discussed and an extension of the McMC algorithm...

  17. Fatigue life prediction in composites

    CSIR Research Space (South Africa)

    Huston, RJ

    1994-01-01

    Full Text Available epoxy were used to test residual strength and residual stiffness models. Further fatigue tests were carried out under spectrum loading so that the results could be correlated with the cumulative damage predicted by the residual strength model....

  18. Further results on variances of local stereological estimators

    DEFF Research Database (Denmark)

    Pawlas, Zbynek; Jensen, Eva B. Vedel

    2006-01-01

    In the present paper the statistical properties of local stereological estimators of particle volume are studied. It is shown that the variance of the estimators can be decomposed into the variance due to the local stereological estimation procedure and the variance due to the variability...... in the particle population. It turns out that these two variance components can be estimated separately, from sectional data. We present further results on the variances that can be used to determine the variance by numerical integration for particular choices of particle shapes....

  19. Power Estimation in Multivariate Analysis of Variance

    Directory of Open Access Journals (Sweden)

    Jean François Allaire

    2007-09-01

    Full Text Available Power is often overlooked in designing multivariate studies for the simple reason that it is believed to be too complicated. In this paper, it is shown that power estimation in multivariate analysis of variance (MANOVA) can be approximated using an F distribution for the three popular statistics (Hotelling-Lawley trace, Pillai-Bartlett trace, Wilks' likelihood ratio). Consequently, the same procedure, as in any statistical test, can be used: computation of the critical F value, computation of the noncentrality parameter (as a function of the effect size) and finally estimation of power using a noncentral F distribution. Various numerical examples are provided which help to understand and to apply the method. Problems related to post hoc power estimation are discussed.
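
    The procedure described in this record (critical value from the central F, power from the noncentral F) is easy to sketch with SciPy. The snippet below is a simplified illustration: the degrees of freedom and effect size are invented, and the noncentrality convention lambda = f^2 * df_error is one of several in use, not necessarily the one adopted by the authors.

```python
from scipy.stats import f, ncf

def approx_power(effect_f2, df_hyp, df_err, alpha=0.05):
    """Approximate power: critical value under the central F distribution,
    then the upper-tail probability under the noncentral F distribution."""
    lam = effect_f2 * df_err              # simple noncentrality convention (assumption)
    f_crit = f.ppf(1.0 - alpha, df_hyp, df_err)
    return ncf.sf(f_crit, df_hyp, df_err, lam)

# Hypothetical design: 3 groups, 2 outcomes, N = 60 -> made-up approximate df values.
print(f"approximate power: {approx_power(0.15, df_hyp=4, df_err=112):.3f}")
```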

  20. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to, e.g., the model being too simple or errors in the input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs......) for modeling and forecasting. It is argued that this gives models and predictions which better reflect reality. The SDE approach also offers a more adequate framework for modeling and a number of efficient tools for model building. A software package (CTSM-R) for SDE-based modeling is briefly described....... that describes the variation between subjects. The ODE setup implies that the variation for a single subject is described by a single parameter (or vector), namely the variance (covariance) of the residuals. Furthermore, the prediction of the states is given as the solution to the ODEs and hence assumed

  1. An elementary components of variance analysis for multi-centre quality control

    International Nuclear Information System (INIS)

    Munson, P.J.; Rodbard, D.

    1978-01-01

    The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality-control (QC) studies. Simple graphical display of data in the form of histograms is useful but insufficient. The paper discusses statistical analysis methods for such studies using an ''analysis of variance with components of variance estimation''. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Problems with RIA data, e.g. severe non-uniformity of variance and/or departure from a normal distribution violate some of the usual assumptions underlying analysis of variance. In order to correct these problems, it is often necessary to transform the data before analysis by using a logarithmic, square-root, percentile, ranking, RIDIT, ''Studentizing'' or other transformation. Ametric transformations such as ranks or percentiles protect against the undue influence of outlying observations, but discard much intrinsic information. Several possible relationships of standard deviation to the laboratory mean are considered. Each relationship corresponds to an underlying statistical model and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine whether an appropriate model has been chosen, although the exact functional relationship of standard deviation to laboratory mean may be difficult to establish. Appropriate graphical display aids visual understanding of the data. A plot of the ranked standard deviation versus ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean
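
    A two-level simplification of the analysis described here (between-laboratory and within-laboratory components only, balanced design, method of moments) can be written in a few lines; the simulated quality-control data and component sizes below are invented.

```python
import numpy as np

def variance_components(results):
    """Method-of-moments components for a balanced one-way random-effects layout;
    `results` is a (labs, replicates) array, e.g. log-transformed RIA results."""
    k, n = results.shape
    lab_means = results.mean(axis=1)
    ms_between = n * np.sum((lab_means - results.mean()) ** 2) / (k - 1)
    ms_within = np.sum((results - lab_means[:, None]) ** 2) / (k * (n - 1))
    between = max((ms_between - ms_within) / n, 0.0)  # truncate negative estimates at zero
    return between, ms_within

rng = np.random.default_rng(1)
lab_effects = rng.normal(0.0, 0.15, size=8)                              # 8 laboratories
data = 2.0 + lab_effects[:, None] + rng.normal(0.0, 0.10, size=(8, 5))   # 5 replicates each
vb, vw = variance_components(data)
print(f"between-laboratory variance: {vb:.4f}, within-assay variance: {vw:.4f}")
```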

  2. Supervisor variance in psychotherapy outcome in routine practice.

    Science.gov (United States)

    Rousmaniere, Tony G; Swift, Joshua K; Babins-Wagner, Robbie; Whipple, Jason L; Berzins, Sandy

    2016-01-01

    Although supervision has long been considered as a means for helping trainees develop competencies in their clinical work, little empirical research has been conducted examining the influence of supervision on client treatment outcomes. Specifically, one might ask whether differences in supervisors can predict/explain whether clients will make a positive or negative change through psychotherapy. In this naturalistic study, we used a large (6521 clients seen by 175 trainee therapists who were supervised by 23 supervisors) 5-year archival data-set of psychotherapy outcomes from a private nonprofit mental health center to test whether client treatment outcomes (as measured by the OQ-45.2) differed depending on who was providing the supervision. Hierarchical linear modeling was used with clients (Level 1) nested within therapists (Level 2) who were nested within supervisors (Level 3). In the main analysis, supervisors explained less than 1% of the variance in client psychotherapy outcomes. Possible reasons for the lack of variability between supervisors are discussed.

  3. Estimation models of variance components for farrowing interval in swine

    Directory of Open Access Journals (Sweden)

    Aderbal Cavalcante Neto

    2009-02-01

    Full Text Available The main objective of this study was to evaluate the importance of including maternal genetic, common litter environmental and permanent environmental effects in estimation models of variance components for the farrowing interval trait in swine. Data consisting of 1,013 farrowing intervals of Dalland (C-40) sows recorded in two herds were analyzed. Variance components were obtained by the derivative-free restricted maximum likelihood method. Eight models were tested which contained the fixed effects (contemporary group and covariables) and the direct genetic additive and residual effects, and varied regarding the inclusion of the maternal genetic, common litter environmental, and/or permanent environmental random effects. The likelihood-ratio test indicated that the inclusion of these effects in the model was unnecessary, but the inclusion of the permanent environmental effect caused changes in the estimates of heritability, which varied from 0.00 to 0.03. In conclusion, the heritability values obtained indicated that this trait appears to present no genetic gain as response to selection. The common litter environmental and the maternal genetic effects did not present any influence on this trait. The permanent environmental effect, however, should be considered in the genetic models for this trait in swine, because its presence caused changes in the additive genetic variance estimates.

  4. A formal statistical approach to representing uncertainty in rainfall-runoff modelling with focus on residual analysis and probabilistic output evaluation - Distinguishing simulation and prediction

    DEFF Research Database (Denmark)

    Breinholt, Anders; Møller, Jan Kloppenborg; Madsen, Henrik

    2012-01-01

    evaluation of the modelled output, and we attach particular importance to inspecting the residuals of the model outputs and improving the model uncertainty description. We also introduce the probabilistic performance measures sharpness, reliability and interval skill score for model comparison...... and for checking the reliability of the confidence bounds. Using point rainfall and evaporation data as input and flow measurements from a sewer system for model conditioning, a state space model is formulated that accounts for three different flow contributions: wastewater from households, and fast rainfall......-runoff from paved areas and slow rainfall-dependent infiltration-inflow from unknown sources. We consider two different approaches to evaluate the model output uncertainty, the output error method that lumps all uncertainty into the observation noise term, and a method based on Stochastic Differential...

  5. Measurement and prediction of soil biological processes resulting in denitrification. Part of a coordinated programme on isotopic tracer-aided studies of agrochemical residue - soil biota interactions

    International Nuclear Information System (INIS)

    Rolston, D.E.

    1982-08-01

    The water soluble carbon from soil extracts was taken from a two hundred point grid established on a 1.2 ha field. The sampling was in the fall after the harvest of a sorghum crop. The concentrations ranged from 23.8 ppm to 274.2 ppm. Over 90 per cent of the concentrations were grouped around the mean of 40.3 ppm. The higher values caused the distribution to be greatly skewed such that neither normal nor log normal distributions characterized the data very well. The moisture content from the same samples followed normal distribution. Changes in the mean, the variance and the distribution of water soluble carbon were followed on 0.4 ha of the 1.2 ha in a grid of sixty points during a crop of wheat and a subsequent crop of sorghum. The mean increased in the spring, decreased in the summer and increased again in the fall. The spring and summer concentrations are well characterized by log normal distributions. The spatial dependence of water soluble carbon was examined on a fifty-five point transect across the field spaced every 1.37m. The variogram indicated little or no dependence at this spacing. This document is out of INIS subject scope and is included because it is published by the IAEA

  6. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    Science.gov (United States)

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
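
    The following sketch illustrates the general idea of variance shrinkage for many variables and few samples, using a fixed shrinkage weight toward the pooled variance. It is not the adaptive, clustering-based MVR procedure described in the record; the simulated data and the weight of 0.7 are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vars, n_samples = 2000, 6                       # many variables, very few samples
data = rng.normal(0.0, 1.0, size=(n_vars, n_samples))

means = data.mean(axis=1)
s2 = data.var(axis=1, ddof=1)                     # noisy variable-wise variances

# Shrink every variance toward the pooled value with a fixed weight (an assumption here,
# in place of the adaptive similarity-based regularization used by MVR).
s2_shrunk = 0.7 * s2.mean() + 0.3 * s2

t_raw = means / np.sqrt(s2 / n_samples)
t_mod = means / np.sqrt(s2_shrunk / n_samples)
print(f"spread of raw t: {t_raw.std():.2f}, spread of moderated t: {t_mod.std():.2f}")
```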

  7. 76 FR 78698 - Proposed Revocation of Permanent Variances

    Science.gov (United States)

    2011-12-19

    ... Administration (``OSHA'' or ``the Agency'') granted permanent variances to 24 companies engaged in the... DEPARTMENT OF LABOR Occupational Safety and Health Administration [Docket No. OSHA-2011-0054] Proposed Revocation of Permanent Variances AGENCY: Occupational Safety and Health Administration (OSHA...

  8. Variance component and heritability estimates of early growth traits ...

    African Journals Online (AJOL)

    Variance component and heritability estimates of early growth traits in the Elsenburg Dormer sheep ... of variance and co- variance components. In recent years, heritability estimates of growth traits have been reported for many breeds of sheep. However, little information ..... Modeling genetic evaluation systems. Project no.

  9. Model Based Analysis of the Variance Estimators for the Combined ...

    African Journals Online (AJOL)

    In this paper we study the variance estimators for the combined ratio estimator under an appropriate asymptotic framework. An alternative bias-robust variance estimator, different from that suggested by Valliant (1987), is derived. Several variance estimators are compared in an empirical study using a real population.

  10. Analysis of covariance with pre-treatment measurements in randomized trials under the cases that covariances and post-treatment variances differ between groups.

    Science.gov (United States)

    Funatogawa, Takashi; Funatogawa, Ikuko; Shyr, Yu

    2011-05-01

    When primary endpoints of randomized trials are continuous variables, the analysis of covariance (ANCOVA) with pre-treatment measurements as a covariate is often used to compare two treatment groups. In the ANCOVA, equal slopes (coefficients of pre-treatment measurements) and equal residual variances are commonly assumed. However, random allocation guarantees only equal variances of pre-treatment measurements. Unequal covariances and variances of post-treatment measurements indicate unequal slopes and, usually, unequal residual variances. For non-normal data with unequal covariances and variances of post-treatment measurements, it is known that the ANCOVA with equal slopes and equal variances using an ordinary least-squares method provides an asymptotically normal estimator for the treatment effect. However, the asymptotic variance of the estimator differs from the variance estimated from a standard formula, and its property is unclear. Furthermore, the asymptotic properties of the ANCOVA with equal slopes and unequal variances using a generalized least-squares method are unclear. In this paper, we consider non-normal data with unequal covariances and variances of post-treatment measurements, and examine the asymptotic properties of the ANCOVA with equal slopes using the variance estimated from a standard formula. Analytically, we show that the actual type I error rate, thus the coverage, of the ANCOVA with equal variances is asymptotically at a nominal level under equal sample sizes. That of the ANCOVA with unequal variances using a generalized least-squares method is asymptotically at a nominal level, even under unequal sample sizes. In conclusion, the ANCOVA with equal slopes can be asymptotically justified under random allocation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
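
    As a concrete illustration of the setting (randomized groups, a pre-treatment covariate, possibly unequal slopes and residual variances), the sketch below fits the equal-slopes ANCOVA by ordinary least squares and reports a heteroscedasticity-robust standard error for the treatment effect. The robust (HC3) covariance is a common practical safeguard, not the specific asymptotic variance formulas analyzed in the paper; all data and variable names are simulated.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 200
group = rng.integers(0, 2, size=n)                   # randomized treatment indicator
pre = rng.normal(50.0, 10.0, size=n)                 # pre-treatment measurement
# Post-treatment outcome with a group-specific slope and residual variance.
post = (5.0 + 0.8 * pre + 3.0 * group + 0.2 * group * (pre - 50.0)
        + rng.normal(0.0, 6.0 + 4.0 * group))

df = pd.DataFrame({"post": post, "pre": pre, "group": group})

# Equal-slopes ANCOVA by OLS; the sandwich covariance guards against the unequal
# residual variances that random allocation does not rule out.
fit = smf.ols("post ~ pre + group", data=df).fit(cov_type="HC3")
print(f"treatment effect: {fit.params['group']:.2f} (robust SE {fit.bse['group']:.2f})")
```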

  11. A20 Inhibits β-Cell Apoptosis by Multiple Mechanisms and Predicts Residual β-Cell Function in Type 1 Diabetes

    DEFF Research Database (Denmark)

    Fukaya, Makiko; Brorsson, Caroline A; Meyerovich, Kira

    2016-01-01

    signaling via v-akt murine thymoma viral oncogene homolog (Akt) and consequently inhibition of the intrinsic apoptotic pathway. Finally, in a cohort of T1D children, we observed that the risk allele of the rs2327832 single nucleotide polymorphism of TNFAIP3 predicted lower C-peptide and higher hemoglobin A1...

  12. The influence of local spring temperature variance on temperature sensitivity of spring phenology.

    Science.gov (United States)

    Wang, Tao; Ottlé, Catherine; Peng, Shushi; Janssens, Ivan A; Lin, Xin; Poulter, Benjamin; Yue, Chao; Ciais, Philippe

    2014-05-01

    The impact of climate warming on the advancement of plant spring phenology has been heavily investigated over the last decade and there exists great variability among plants in their phenological sensitivity to temperature. However, few studies have explicitly linked phenological sensitivity to local climate variance. Here, we set out to test the hypothesis that the strength of phenological sensitivity declines with increased local spring temperature variance, by synthesizing results across ground observations. We assemble ground-based long-term (20-50 years) spring phenology database (PEP725 database) and the corresponding climate dataset. We find a prevalent decline in the strength of phenological sensitivity with increasing local spring temperature variance at the species level from ground observations. It suggests that plants might be less likely to track climatic warming at locations with larger local spring temperature variance. This might be related to the possibility that the frost risk could be higher in a larger local spring temperature variance and plants adapt to avoid this risk by relying more on other cues (e.g., high chill requirements, photoperiod) for spring phenology, thus suppressing phenological responses to spring warming. This study illuminates that local spring temperature variance is an understudied source in the study of phenological sensitivity and highlight the necessity of incorporating this factor to improve the predictability of plant responses to anthropogenic climate change in future studies. © 2013 John Wiley & Sons Ltd.

  13. Origin and consequences of the relationship between protein mean and variance.

    Directory of Open Access Journals (Sweden)

    Francesco Luigi Massimo Vallania

    Full Text Available Cell-to-cell variance in protein levels (noise) is a ubiquitous phenomenon that can increase fitness by generating phenotypic differences within clonal populations of cells. An important challenge is to identify the specific molecular events that control noise. This task is complicated by the strong dependence of a protein's cell-to-cell variance on its mean expression level through a power-law like relationship (σ² ∝ μ^1.69). Here, we dissect the nature of this relationship using a stochastic model parameterized with experimentally measured values. This framework naturally recapitulates the power-law like relationship (σ² ∝ μ^1.6) and accurately predicts protein variance across the yeast proteome (r² = 0.935). Using this model we identified two distinct mechanisms by which protein variance can be increased. Variables that affect promoter activation, such as nucleosome positioning, increase protein variance by changing the exponent of the power-law relationship. In contrast, variables that affect processes downstream of promoter activation, such as mRNA and protein synthesis, increase protein variance in a mean-dependent manner following the power-law. We verified our findings experimentally using an inducible gene expression system in yeast. We conclude that the power-law-like relationship between noise and protein mean is due to the kinetics of promoter activation. Our results provide a framework for understanding how molecular processes shape stochastic variation across the genome.
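
    The power-law exponent quoted in the record is the slope of a log-log regression of variance on mean; the sketch below shows that step on synthetic data. It does not reproduce the stochastic promoter model of the paper, and the simulated constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic "proteome": means span several orders of magnitude, and the cell-to-cell
# variance follows sigma^2 = a * mu^b with multiplicative scatter (a, b are made up).
mu = 10.0 ** rng.uniform(1.0, 5.0, size=1000)
true_a, true_b = 0.5, 1.69
var = true_a * mu ** true_b * rng.lognormal(0.0, 0.3, size=mu.size)

# The exponent of the power law is the slope of a log-log regression.
slope, intercept = np.polyfit(np.log10(mu), np.log10(var), 1)
print(f"estimated exponent: {slope:.2f} (simulated with {true_b})")
```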

  14. Numerical experiment on variance biases and Monte Carlo neutronics analysis with thermal hydraulic feedback

    International Nuclear Information System (INIS)

    Hyung, Jin Shim; Beom, Seok Han; Chang, Hyo Kim

    2003-01-01

    The Monte Carlo (MC) power method based on the fixed number of fission sites at the beginning of each cycle is known to cause biases in the variances of the k-eigenvalue (keff) and the fission reaction rate estimates. Because of the biases, the apparent variances of keff and the fission reaction rate estimates from a single MC run tend to be smaller or larger than the real variances of the corresponding quantities, depending on the degree of the inter-generational correlation of the sample. We demonstrate this through a numerical experiment involving 100 independent MC runs for the neutronics analysis of a 17 × 17 fuel assembly of a pressurized water reactor (PWR). We also demonstrate through the numerical experiment that Gelbard and Prael's batch method and Ueki et al.'s covariance estimation method enable one to estimate the approximate real variances of keff and the fission reaction rate estimates from a single MC run. We then show that the use of the approximate real variances from the two bias-predicting methods instead of the apparent variances provides an efficient MC power iteration scheme that is required in the MC neutronics analysis of a real system to determine the pin power distribution consistent with the thermal hydraulic (TH) conditions of individual pins of the system. (authors)
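
    The batching idea referred to here (grouping consecutive cycle estimates so that batch means are nearly uncorrelated) can be illustrated on a toy autocorrelated series. The AR(1) surrogate below stands in for inter-cycle correlation of keff estimates; it is not an actual Monte Carlo transport calculation, and the correlation and batch size are arbitrary.

```python
import numpy as np

def batch_variance(cycle_estimates, batch_size):
    """Variance of the mean of correlated cycle estimates via batch means."""
    x = np.asarray(cycle_estimates)
    n_batches = x.size // batch_size
    batches = x[: n_batches * batch_size].reshape(n_batches, batch_size).mean(axis=1)
    return batches.var(ddof=1) / n_batches

rng = np.random.default_rng(3)
rho, n = 0.6, 4000                      # AR(1) correlation mimicking cycle-to-cycle memory
k = np.empty(n)
k[0] = 1.0
for i in range(1, n):
    k[i] = 1.0 + rho * (k[i - 1] - 1.0) + rng.normal(0.0, 1e-3)

apparent = k.var(ddof=1) / n            # ignores correlation, biased low
batched = batch_variance(k, batch_size=50)
print(f"apparent std of mean: {np.sqrt(apparent):.2e}, batched std: {np.sqrt(batched):.2e}")
```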

  15. Origin and consequences of the relationship between protein mean and variance.

    Science.gov (United States)

    Vallania, Francesco Luigi Massimo; Sherman, Marc; Goodwin, Zane; Mogno, Ilaria; Cohen, Barak Alon; Mitra, Robi David

    2014-01-01

    Cell-to-cell variance in protein levels (noise) is a ubiquitous phenomenon that can increase fitness by generating phenotypic differences within clonal populations of cells. An important challenge is to identify the specific molecular events that control noise. This task is complicated by the strong dependence of a protein's cell-to-cell variance on its mean expression level through a power-law like relationship (σ² ∝ μ^1.69). Here, we dissect the nature of this relationship using a stochastic model parameterized with experimentally measured values. This framework naturally recapitulates the power-law like relationship (σ² ∝ μ^1.6) and accurately predicts protein variance across the yeast proteome (r² = 0.935). Using this model we identified two distinct mechanisms by which protein variance can be increased. Variables that affect promoter activation, such as nucleosome positioning, increase protein variance by changing the exponent of the power-law relationship. In contrast, variables that affect processes downstream of promoter activation, such as mRNA and protein synthesis, increase protein variance in a mean-dependent manner following the power-law. We verified our findings experimentally using an inducible gene expression system in yeast. We conclude that the power-law-like relationship between noise and protein mean is due to the kinetics of promoter activation. Our results provide a framework for understanding how molecular processes shape stochastic variation across the genome.

  16.  Self-determination theory fails to explain additional variance in well-being

    DEFF Research Database (Denmark)

    Olesen, Martin Hammershøj; Schnieber, Anette; Tønnesvang, Jan

    2008-01-01

    This study investigates relations between the five-factor model (FFM) and self-determination theory in predicting well-being. Nine-hundred-and-sixty-four students completed e-based measures of extroversion & neuroticism (NEO-FFI); autonomous- & impersonal general causality orientation (GCOS...... controlling for extroversion (PTheory seems inadequate in explaining variance in well-being supporting an integration with FFM....

  17. Identification of family-specific residue packing motifs and their use for structure-based protein function prediction: II. Case studies and applications

    Science.gov (United States)

    Bandyopadhyay, Deepak; Huan, Jun; Prins, Jan; Snoeyink, Jack; Wang, Wei; Tropsha, Alexander

    2009-11-01

    This paper describes several case studies concerning protein function inference from its structure using our novel approach described in the accompanying paper. This approach employs family-specific motifs, i.e. three-dimensional amino acid packing patterns that are statistically prevalent within a protein family. For our case studies we have selected families from the SCOP and EC classifications and analyzed the discriminating power of the motifs in depth. We have devised several benchmarks to compare motifs mined from unweighted topological graph representations of protein structures with those from distance-labeled (weighted) representations, demonstrating the superiority of the latter for function inference in most families. We have tested the robustness of our motif library by inferring the function of new members added to SCOP families, and discriminating between several families that are structurally similar but functionally divergent. Furthermore we have applied our method to predict function for several proteins characterized in structural genomics projects, including orphan structures, and we discuss several selected predictions in depth. Some of our predictions have been corroborated by other computational methods, and some have been validated by independent experimental studies, validating our approach for protein function inference from structure.

  18. Expected Stock Returns and Variance Risk Premia

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Tauchen, George; Zhou, Hao

    of the time series variation in post 1990 aggregate stock market returns, with high (low) premia predicting high (low) future returns. Our empirical results depend crucially on the use of "model-free," as opposed to Black- Scholes, options implied volatilities, along with accurate realized variation measures...

  19. Variance analysis of forecasted streamflow maxima in a wet temperate climate

    Science.gov (United States)

    Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.

    2018-05-01

    Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima, however the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation for maxima streamflow. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima was not dependent on systematic variance from the annual maxima versus peak over threshold method applied, albeit we stress that researchers strictly adhere to rules from extreme value theory when applying the peak over threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change was dependent on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including an increase of +30(±21), +38(±34) and +51(±85)% for 2, 20 and 100 year streamflow events for the wet temperate region studied. The variance of maxima projections was dominated by climate model factors and extreme value analyses.
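
    The extreme-value step of such a chain is straightforward to reproduce in isolation. The sketch below fits a GEV distribution to synthetic annual maxima, reads off 2-, 20- and 100-year return levels, and uses a small parametric bootstrap to show how the spread grows with the return period. All numbers are invented, and the full climate-to-hydrology ensemble of the study is not represented.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(11)
# Synthetic 60-year record of annual streamflow maxima (m^3/s); values are made up.
maxima = genextreme.rvs(-0.1, loc=300.0, scale=80.0, size=60, random_state=rng)

c, loc, scale = genextreme.fit(maxima)              # maximum-likelihood GEV fit
for T in (2, 20, 100):
    level = genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
    print(f"{T:>3}-year return level: {level:.0f} m^3/s")

# Parametric bootstrap: the sampling spread of the 100-year level is much larger
# than that of shorter return periods, echoing the variance growth with return period.
boot = [genextreme.ppf(0.99, *genextreme.fit(
            genextreme.rvs(c, loc=loc, scale=scale, size=maxima.size, random_state=rng)))
        for _ in range(100)]
print(f"bootstrap std of the 100-year level: {np.std(boot):.0f} m^3/s")
```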

  20. Two-Dimensional UV Absorption Correlation Spectroscopy as a Method for the Detection of Thiamethoxam Residue in Tea

    Science.gov (United States)

    Zhang, J.; Zhao, Zh.; Wang, L.; Zhu, X.; Shen, L.; Yu, Y.

    2015-05-01

    Two-dimensional correlation spectroscopy (2D-COS) combined with UV absorption spectroscopy was evaluated as a technique for the identification of spectral regions associated with the residues of thiamethoxam in tea. There is only one absorption peak at 275 nm in the absorption spectrum of a mixture of thiamethoxam and tea, which is the absorption peak of tea. Based on 2D-COS, the absorption peak of thiamethoxam at 250 nm is extracted from the UV spectra of the mixture. To determine the residue of thiamethoxam in tea, 250 nm is selected as the measured wavelength, at which the fitting result is as follows: the residual sum of squares is 0.01375, the coefficient of determination R² is 0.99068, and the F value is 426. Statistical analysis shows that there is a significant linear relationship between the concentration of thiamethoxam in tea and the absorbance at 250 nm in the UV spectra of the mixture. Moreover, the average prediction error is 0.0033 and the prediction variance is 0.1654, indicating a good predictive result. Thus, the UV absorption spectrum can be used as a measurement method for rapid detection of thiamethoxam residues in tea.
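
    The synchronous 2D correlation spectrum used in this kind of analysis is essentially the covariance of the concentration-perturbed (dynamic) spectra, which is easy to compute. The toy spectra below place a fixed tea band near 275 nm and a concentration-dependent band near 250 nm; band shapes, positions and noise levels are invented for illustration only.

```python
import numpy as np

def synchronous_2dcos(spectra):
    """Synchronous 2D correlation spectrum (Noda): covariance of the mean-centred
    dynamic spectra, for a (perturbation x wavelength) matrix."""
    dynamic = spectra - spectra.mean(axis=0, keepdims=True)
    return dynamic.T @ dynamic / (spectra.shape[0] - 1)

wavelengths = np.arange(230, 321)                        # nm
conc = np.linspace(0.0, 1.0, 7)[:, None]                 # relative pesticide levels
tea_band = 0.8 * np.exp(-((wavelengths - 275) / 12.0) ** 2)
pesticide_band = np.exp(-((wavelengths - 250) / 8.0) ** 2)
noise = 0.005 * np.random.default_rng(0).normal(size=(7, wavelengths.size))
spectra = tea_band + conc * pesticide_band + noise

phi = synchronous_2dcos(spectra)
peak = wavelengths[np.argmax(np.diag(phi))]
print(f"strongest diagonal autopeak at {peak} nm")       # expected near 250 nm
```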

  1. Partitioning of genomic variance using biological pathways

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Janss, Luc; Madsen, Per

    and that these variants are enriched for genes that are connected in biological pathways or for likely functional effects on genes. These biological findings provide valuable insight for developing better genomic models. These are statistical models for predicting complex trait phenotypes on the basis of SNP...... action of multiple SNPs in genes, biological pathways or other external findings on the trait phenotype. As proof of concept we have tested the modelling framework on several traits in dairy cattle....

  2. Correcting Spatial Variance of RCM for GEO SAR Imaging Based on Time-Frequency Scaling

    Directory of Open Access Journals (Sweden)

    Ze Yu

    2016-07-01

    Full Text Available Compared with low-Earth orbit synthetic aperture radar (SAR, a geosynchronous (GEO SAR can have a shorter revisit period and vaster coverage. However, relative motion between this SAR and targets is more complicated, which makes range cell migration (RCM spatially variant along both range and azimuth. As a result, efficient and precise imaging becomes difficult. This paper analyzes and models spatial variance for GEO SAR in the time and frequency domains. A novel algorithm for GEO SAR imaging with a resolution of 2 m in both the ground cross-range and range directions is proposed, which is composed of five steps. The first is to eliminate linear azimuth variance through the first azimuth time scaling. The second is to achieve RCM correction and range compression. The third is to correct residual azimuth variance by the second azimuth time-frequency scaling. The fourth and final steps are to accomplish azimuth focusing and correct geometric distortion. The most important innovation of this algorithm is implementation of the time-frequency scaling to correct high-order azimuth variance. As demonstrated by simulation results, this algorithm can accomplish GEO SAR imaging with good and uniform imaging quality over the entire swath.

  3. Regional sensitivity analysis using revised mean and variance ratio functions

    International Nuclear Information System (INIS)

    Wei, Pengfei; Lu, Zhenzhou; Ruan, Wenbin; Song, Jingwen

    2014-01-01

    The variance ratio function, derived from the contribution to sample variance (CSV) plot, is a regional sensitivity index for studying how much the output deviates from the original mean of model output when the distribution range of one input is reduced and to measure the contribution of different distribution ranges of each input to the variance of model output. In this paper, the revised mean and variance ratio functions are developed for quantifying the actual change of the model output mean and variance, respectively, when one reduces the range of one input. The connection between the revised variance ratio function and the original one is derived and discussed. It is shown that compared with the classical variance ratio function, the revised one is more suitable to the evaluation of model output variance due to reduced ranges of model inputs. A Monte Carlo procedure, which needs only a set of samples for implementing it, is developed for efficiently computing the revised mean and variance ratio functions. The revised mean and variance ratio functions are compared with the classical ones by using the Ishigami function. At last, they are applied to a planar 10-bar structure
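
    A single-loop Monte Carlo version of this kind of range-reduction sensitivity analysis is shown below for the Ishigami function mentioned in the record: draw one sample set, then compare the output mean and variance over the subsample where one input is restricted to the central half of its range. It illustrates the idea only, not the revised estimators derived in the paper.

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

rng = np.random.default_rng(5)
x = rng.uniform(-np.pi, np.pi, size=(200_000, 3))
y = ishigami(x)
mean_full, var_full = y.mean(), y.var()

# Restrict each input in turn to the central half of its support and look at how
# the conditional mean and variance of the output change.
for i in range(3):
    keep = np.abs(x[:, i]) < np.pi / 2
    print(f"x{i + 1} restricted: mean ratio {y[keep].mean() / mean_full:5.2f}, "
          f"variance ratio {y[keep].var() / var_full:5.2f}")
```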

  4. Variances as order parameter and complexity measure for random Boolean networks

    International Nuclear Information System (INIS)

    Luque, Bartolo; Ballesteros, Fernando J; Fernandez, Manuel

    2005-01-01

    Several order parameters have been considered to predict and characterize the transition between ordered and disordered phases in random Boolean networks, such as the Hamming distance between replicas or the stable core, which have been successfully used. In this work, we propose a natural and clear new order parameter: the temporal variance. We compute its value analytically and compare it with the results of numerical experiments. Finally, we propose a complexity measure based on the compromise between temporal and spatial variances. This new order parameter and its related complexity measure can be easily applied to other complex systems

  5. Variances as order parameter and complexity measure for random Boolean networks

    Energy Technology Data Exchange (ETDEWEB)

    Luque, Bartolo [Departamento de Matemática Aplicada y Estadística, Escuela Superior de Ingenieros Aeronáuticos, Universidad Politécnica de Madrid, Plaza Cardenal Cisneros 3, Madrid 28040 (Spain); Ballesteros, Fernando J [Observatori Astronòmic, Universitat de València, Ed. Instituts d'Investigació, Pol. La Coma s/n, E-46980 Paterna, Valencia (Spain); Fernandez, Manuel [Departamento de Matemática Aplicada y Estadística, Escuela Superior de Ingenieros Aeronáuticos, Universidad Politécnica de Madrid, Plaza Cardenal Cisneros 3, Madrid 28040 (Spain)

    2005-02-04

    Several order parameters have been considered to predict and characterize the transition between ordered and disordered phases in random Boolean networks, such as the Hamming distance between replicas or the stable core, which have been successfully used. In this work, we propose a natural and clear new order parameter: the temporal variance. We compute its value analytically and compare it with the results of numerical experiments. Finally, we propose a complexity measure based on the compromise between temporal and spatial variances. This new order parameter and its related complexity measure can be easily applied to other complex systems.
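
    The temporal variance proposed in these two records is simple to compute on a simulated random Boolean network. The sketch below uses synchronous updates, random K-input truth tables with bias 0.5 and a discarded transient; network size, run length and the K values are arbitrary, and the printed quantity is an order-parameter-like illustration rather than the authors' analytic expression.

```python
import numpy as np

def simulate_rbn(n_nodes=200, k=2, bias=0.5, t_steps=400, seed=0):
    """Random NK Boolean network with synchronous updates; returns (time x node) states."""
    rng = np.random.default_rng(seed)
    inputs = rng.integers(0, n_nodes, size=(n_nodes, k))          # K random inputs per node
    tables = (rng.random(size=(n_nodes, 2 ** k)) < bias).astype(np.uint8)
    state = rng.integers(0, 2, size=n_nodes, dtype=np.uint8)
    history = np.empty((t_steps, n_nodes), dtype=np.uint8)
    powers = 2 ** np.arange(k)
    for t in range(t_steps):
        history[t] = state
        rows = state[inputs] @ powers                             # index into each truth table
        state = tables[np.arange(n_nodes), rows]
    return history

# Mean temporal variance after a transient: near zero in the frozen phase, larger
# in the chaotic phase as connectivity K increases.
for k in (1, 2, 3):
    hist = simulate_rbn(k=k)
    print(f"K = {k}: mean temporal variance = {hist[100:].var(axis=0).mean():.3f}")
```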

  6. The mean and variance of environmental temperature interact to determine physiological tolerance and fitness.

    Science.gov (United States)

    Bozinovic, Francisco; Bastías, Daniel A; Boher, Francisca; Clavijo-Baquet, Sabrina; Estay, Sergio A; Angilletta, Michael J

    2011-01-01

    Global climate change poses one of the greatest threats to biodiversity. Most analyses of the potential biological impacts have focused on changes in mean temperature, but changes in thermal variance will also impact organisms and populations. We assessed the combined effects of the mean and variance of temperature on thermal tolerances, organismal survival, and population growth in Drosophila melanogaster. Because the performance of ectotherms relates nonlinearly to temperature, we predicted that responses to thermal variation (±0° or ±5°C) would depend on the mean temperature (17° or 24°C). Consistent with our prediction, thermal variation enhanced the rate of population growth (r(max)) at a low mean temperature but depressed this rate at a high mean temperature. The interactive effect on fitness occurred despite the fact that flies improved their heat and cold tolerances through acclimation to thermal conditions. Flies exposed to a high mean and a high variance of temperature recovered from heat coma faster and survived heat exposure better than did flies that developed at other conditions. Relatively high survival following heat exposure was associated with low survival following cold exposure. Recovery from chill coma was affected primarily by the mean temperature; flies acclimated to a low mean temperature recovered much faster than did flies acclimated to a high mean temperature. To develop more realistic predictions about the biological impacts of climate change, one must consider the interactions between the mean environmental temperature and the variance of environmental temperature.
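
    The interaction reported here follows from averaging a nonlinear thermal performance curve (Jensen's inequality): fluctuations raise mean performance where the curve is accelerating and depress it near the optimum. The sketch below uses a stylized curve with invented parameters, not Drosophila data, to reproduce that qualitative pattern for the two mean temperatures in the study.

```python
import numpy as np

def performance(temp, t_opt=24.0, t_crit=30.0, width=6.0):
    """Stylized thermal performance curve: Gaussian rise below the optimum,
    linear crash to zero at the critical maximum (all parameters invented)."""
    temp = np.asarray(temp, dtype=float)
    rise = np.exp(-((temp - t_opt) / width) ** 2)
    crash = np.clip((t_crit - temp) / (t_crit - t_opt), 0.0, 1.0)
    return np.where(temp <= t_opt, rise, crash)

rng = np.random.default_rng(2)
for mean_t in (17.0, 24.0):
    for amp in (0.0, 5.0):
        temps = mean_t + rng.uniform(-amp, amp, size=100_000)
        print(f"mean {mean_t:4.1f} C, fluctuation +/-{amp:.0f} C: "
              f"mean performance {performance(temps).mean():.2f}")
```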

  7. Estimation of breeding values for mean and dispersion, their variance and correlation using double hierarchical generalized linear models.

    Science.gov (United States)

    Felleki, M; Lee, D; Lee, Y; Gilmour, A R; Rönnegård, L

    2012-12-01

    The possibility of breeding for uniform individuals by selecting animals expressing a small response to environment has been studied extensively in animal breeding. Bayesian methods for fitting models with genetic components in the residual variance have been developed for this purpose, but have limitations due to the computational demands. We use the hierarchical (h)-likelihood from the theory of double hierarchical generalized linear models (DHGLM) to derive an estimation algorithm that is computationally feasible for large datasets. Random effects for both the mean and residual variance parts of the model are estimated together with their variance/covariance components. An important feature of the algorithm is that it can fit a correlation between the random effects for mean and variance. An h-likelihood estimator is implemented in the R software and an iterative reweighted least square (IRWLS) approximation of the h-likelihood is implemented using ASReml. The difference in variance component estimates between the two implementations is investigated, as well as the potential bias of the methods, using simulations. IRWLS gives the same results as h-likelihood in simple cases with no severe indication of bias. For more complex cases, only IRWLS could be used, and bias did appear. The IRWLS is applied on the pig litter size data previously analysed by Sorensen & Waagepetersen (2003) using Bayesian methodology. The estimates we obtained by using IRWLS are similar to theirs, with the estimated correlation between the random genetic effects being -0·52 for IRWLS and -0·62 in Sorensen & Waagepetersen (2003).
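
    A heavily simplified, fixed-effects caricature of the mean-and-dispersion double model is sketched below: alternate a weighted least-squares fit of the mean part with a regression of log squared residuals for the dispersion part. It omits the random genetic effects, the h-likelihood machinery and the ASReml/IRWLS implementations discussed in the record; the simulated data and design are invented, and only the 1.27 constant (minus the mean of a log chi-squared with one degree of freedom) is a standard correction.

```python
import numpy as np

rng = np.random.default_rng(9)
n_groups, n_per = 50, 40
group = np.repeat(np.arange(n_groups), n_per)
mean_eff = rng.normal(0.0, 1.0, size=n_groups)        # group effects on the mean
disp_eff = rng.normal(0.0, 0.4, size=n_groups)        # group effects on log residual variance
y = 10.0 + mean_eff[group] + rng.normal(0.0, np.exp(0.5 * disp_eff[group]))

X = np.eye(n_groups)[group]                           # one-hot design shared by both parts
w = np.ones(y.size)                                   # working weights, 1 / sigma_i^2
for _ in range(5):
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))   # weighted LS, mean part
    resid = y - X @ beta
    log_s2 = np.log(resid ** 2 + 1e-12)
    gam = np.linalg.solve(X.T @ X, X.T @ log_s2) + 1.27  # 1.27 ~ -E[log chi2_1] correction
    w = 1.0 / np.exp(gam[group])

print(f"correlation with true mean effects:       {np.corrcoef(beta, mean_eff)[0, 1]:.2f}")
print(f"correlation with true dispersion effects: {np.corrcoef(gam, disp_eff)[0, 1]:.2f}")
```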

  8. Genetic selection for increased mean and reduced variance of twinning rate in Belclare ewes.

    Science.gov (United States)

    Cottle, D J; Gilmour, A R; Pabiou, T; Amer, P R; Fahey, A G

    2016-04-01

    It is sometimes possible to breed for more uniform individuals by selecting animals with a greater tendency to be less variable, that is, those with a smaller environmental variance. This approach has been applied to reproduction traits in various animal species. We have evaluated fecundity in the Irish Belclare sheep breed by analyses of flocks with differing average litter size (number of lambs per ewe per year, NLB) and have estimated the genetic variance in environmental variance of lambing traits using double hierarchical generalized linear models (DHGLM). The data set comprised of 9470 litter size records from 4407 ewes collected in 56 flocks. The percentage of pedigreed lambing ewes with singles, twins and triplets was 30, 54 and 14%, respectively, in 2013 and has been relatively constant for the last 15 years. The variance of NLB increases with the mean in this data; the correlation of mean and standard deviation across sires is 0.50. The breeding goal is to increase the mean NLB without unduly increasing the incidence of triplets and higher litter sizes. The heritability estimates for lambing traits were NLB, 0.09; triplet occurrence (TRI) 0.07; and twin occurrence (TWN), 0.02. The highest and lowest twinning flocks differed by 23% (75% versus 52%) in the proportion of ewes lambing twins. Fitting bivariate sire models to NLB and the residual from the NLB model using a double hierarchical generalized linear model (DHGLM) model found a strong genetic correlation (0.88 ± 0.07) between the sire effect for the magnitude of the residual (VE ) and sire effects for NLB, confirming the general observation that increased average litter size is associated with increased variability in litter size. We propose a threshold model that may help breeders with low litter size increase the percentage of twin bearers without unduly increasing the percentage of ewes bearing triplets in Belclare sheep. © 2015 Blackwell Verlag GmbH.

  9. ON THE VARIANCE OF LOCAL STEREOLOGICAL VOLUME ESTIMATORS

    Directory of Open Access Journals (Sweden)

    Eva B Vedel Jensen

    2011-05-01

    Full Text Available In the present paper, the variance of local stereological volume estimators is studied. For isotropic designs, the variance depends on the shape of the body under study and the choice of reference point. It can be expressed in terms of an equivalent star body. For a collection of triaxial ellipsoids the variance is determined by simulation. The problem of estimating particle size distributions from central sections through the particles is also discussed.

  10. A Paradox of Genetic Variance in Epigamic Traits: Beyond "Good Genes" View of Sexual Selection.

    Science.gov (United States)

    Radwan, Jacek; Engqvist, Leif; Reinhold, Klaus

    Maintenance of genetic variance in secondary sexual traits, including bizarre ornaments and elaborated courtship displays, is a central problem of sexual selection theory. Despite theoretical arguments predicting that strong sexual selection leads to a depletion of additive genetic variance, traits associated with mating success show relatively high heritability. Here we argue that because of trade-offs associated with the production of costly epigamic traits, sexual selection is likely to lead to an increase, rather than a depletion, of genetic variance in those traits. Such trade-offs can also be expected to contribute to the maintenance of genetic variation in ecologically relevant traits with important implications for evolutionary processes, e.g. adaptation to novel environments or ecological speciation. However, if trade-offs are an important source of genetic variation in sexual traits, the magnitude of genetic variation may have little relevance for the possible genetic benefits of mate choice.

  11. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    International Nuclear Information System (INIS)

    Ankirchner, Stefan; Dermoune, Azzouz

    2011-01-01

    The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. To find a solution, we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean-variance problem.
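
    For context, the single-period version of the mean-variance problem has the familiar closed form w = Σ⁻¹μ/γ, sketched below with invented inputs. The multiperiod, market-cloning construction of the record is not reproduced here.

```python
import numpy as np

# Single-period mean-variance weights maximizing mu'w - (gamma/2) w'Sigma w;
# the closed form is w = Sigma^{-1} mu / gamma. All numbers are illustrative.
mu = np.array([0.05, 0.07, 0.03])                  # expected returns
sigma = np.array([[0.040, 0.006, 0.002],
                  [0.006, 0.090, 0.010],
                  [0.002, 0.010, 0.020]])          # return covariance matrix
gamma = 4.0                                        # risk aversion
w = np.linalg.solve(sigma, mu) / gamma
print("weights:", np.round(w, 3), "expected return:", round(float(mu @ w), 4))
```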

  12. RR-Interval variance of electrocardiogram for atrial fibrillation detection

    Science.gov (United States)

    Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.

    2016-11-01

    Atrial fibrillation is a serious heart problem originating from the upper chamber of the heart. The common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, referred to as the RR interval. The irregularity can be represented using the variance, or spread, of RR intervals. This article presents a system to detect atrial fibrillation using variances. Using clinical data from patients with atrial fibrillation attacks, it is shown that the variance of electrocardiographic RR intervals is higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique and variances of RR intervals, we obtain good atrial fibrillation detection performance.
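
    A bare-bones version of such a detector, flagging sliding windows whose RR-interval variance exceeds a fixed threshold, is sketched below on simulated rhythms. The window length, threshold and simulated RR statistics are invented and not clinically validated.

```python
import numpy as np

def flag_high_variance(rr, window=20, threshold=0.012):
    """Flag each sliding window of RR intervals (s) whose variance exceeds a threshold."""
    rr = np.asarray(rr, dtype=float)
    return np.array([rr[i:i + window].var() > threshold
                     for i in range(rr.size - window + 1)])

rng = np.random.default_rng(4)
normal_rr = rng.normal(0.80, 0.02, size=200)          # regular rhythm
af_rr = rng.normal(0.70, 0.15, size=200).clip(0.3)    # irregular rhythm during AF
flags = flag_high_variance(np.concatenate([normal_rr, af_rr]))

print(f"windows flagged in the normal segment: {flags[:180].mean():.2f}")
print(f"windows flagged in the AF segment:     {flags[-180:].mean():.2f}")
```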

  13. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    Science.gov (United States)

    Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

  14. A simple algorithm to estimate genetic variance in an animal threshold model using Bayesian inference Genetics Selection Evolution 2010, 42:29

    DEFF Research Database (Denmark)

    Ødegård, Jørgen; Meuwissen, Theo HE; Heringstad, Bjørg

    2010-01-01

    " or "non-informative" with respect to genetic (co)variance components. The "non-informative" individuals are characterized by their Mendelian sampling deviations (deviance from the mid-parent mean) being completely confounded with a single residual on the underlying liability scale. For threshold models...... individual records exist on parents. Therefore, the aim of our study was to develop a new Gibbs sampling algorithm for a proper estimation of genetic (co)variance components within an animal threshold model framework. Methods In the proposed algorithm, individuals are classified as either "informative......, residual variance on the underlying scale is not identifiable. Hence, variance of fully confounded Mendelian sampling deviations cannot be identified either, but can be inferred from the between-family variation. In the new algorithm, breeding values are sampled as in a standard animal model using the full...

  15. Postinduction Minimal Residual Disease Predicts Outcome and Benefit From Allogeneic Stem Cell Transplantation in Acute Myeloid Leukemia With NPM1 Mutation: A Study by the Acute Leukemia French Association Group.

    Science.gov (United States)

    Balsat, Marie; Renneville, Aline; Thomas, Xavier; de Botton, Stéphane; Caillot, Denis; Marceau, Alice; Lemasle, Emilie; Marolleau, Jean-Pierre; Nibourel, Olivier; Berthon, Céline; Raffoux, Emmanuel; Pigneux, Arnaud; Rodriguez, Céline; Vey, Norbert; Cayuela, Jean-Michel; Hayette, Sandrine; Braun, Thorsten; Coudé, Marie Magdeleine; Terre, Christine; Celli-Lebras, Karine; Dombret, Hervé; Preudhomme, Claude; Boissel, Nicolas

    2017-01-10

    Purpose This study assessed the prognostic impact of postinduction NPM1-mutated ( NPM1m) minimal residual disease (MRD) in young adult patients (age, 18 to 60 years) with acute myeloid leukemia, and addressed the question of whether NPM1m MRD may be used as a predictive factor of allogeneic stem cell transplantation (ASCT) benefit. Patients and Methods Among 229 patients with NPM1m who were treated in the Acute Leukemia French Association 0702 (ALFA-0702) trial, MRD evaluation was available in 152 patients in first remission. Patients with nonfavorable AML according to the European LeukemiaNet (ELN) classification were eligible for ASCT in first remission. Results After induction therapy, patients who did not achieve a 4-log reduction in NPM1m peripheral blood-MRD (PB-MRD) had a higher cumulative incidence of relapse (subhazard ratio [SHR], 5.83; P benefit was not observed in those with a > 4-log reduction in PB-MRD, with a significant interaction between ASCT effect and PB-MRD response ( P = .024 and .027 for disease-free survival and OS, respectively). Conclusion Our study supports the strong prognostic significance of early NPM1m PB-MRD, independent of the cytogenetic and molecular context. Moreover, NPM1m PB-MRD may be used as a predictive factor for ASCT indication.

  16. The genetic and environmental roots of variance in negativity toward foreign nationals.

    Science.gov (United States)

    Kandler, Christian; Lewis, Gary J; Feldhaus, Lea Henrike; Riemann, Rainer

    2015-03-01

    This study quantified genetic and environmental roots of variance in prejudice and discriminatory intent toward foreign nationals and examined potential mediators of these genetic influences: right-wing authoritarianism (RWA), social dominance orientation (SDO), and narrow-sense xenophobia (NSX). In line with the dual process motivational (DPM) model, we predicted that the two basic attitudinal and motivational orientations-RWA and SDO-would account for variance in out-group prejudice and discrimination. In line with other theories, we expected that NSX as an affective component would explain additional variance in out-group prejudice and discriminatory intent. Data from 1,397 individuals (incl. twins as well as their spouses) were analyzed. Univariate analyses of twins' and spouses' data yielded genetic (incl. contributions of assortative mating) and multiple environmental sources (i.e., social homogamy, spouse-specific, and individual-specific effects) of variance in negativity toward strangers. Multivariate analyses suggested an extension to the DPM model by including NSX in addition to RWA and SDO as predictor of prejudice and discrimination. RWA and NSX primarily mediated the genetic influences on the variance in prejudice and discriminatory intent toward foreign nationals. In sum, the findings provide the basis of a behavioral genetic framework integrating different scientific disciplines for the study of negativity toward out-groups.

  17. The efficiency of the crude oil markets. Evidence from variance ratio tests

    International Nuclear Information System (INIS)

    Charles, Amelie; Darne, Olivier

    2009-01-01

    This study examines the random walk hypothesis for the crude oil markets, using daily data over the period 1982-2008. The weak-form efficient market hypothesis for two crude oil markets (UK Brent and US West Texas Intermediate) is tested with non-parametric variance ratio tests developed by [Wright J.H., 2000. Alternative variance-ratio tests using ranks and signs. Journal of Business and Economic Statistics, 18, 1-9] and [Belaire-Franch J. and Contreras D., 2004. Ranks and signs-based multiple variance ratio tests. Working paper, Department of Economic Analysis, University of Valencia] as well as the wild-bootstrap variance ratio tests suggested by [Kim, J.H., 2006. Wild bootstrapping variance ratio tests. Economics Letters, 92, 38-43]. We find that the Brent crude oil market is weak-form efficient, while the WTI crude oil market seems to be inefficient over the 1994-2008 sub-period, suggesting that deregulation has not improved the efficiency of the WTI crude oil market in the sense of making returns less predictable. (author)
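
    The statistic underlying all of these tests is the variance ratio itself: the variance of q-period returns divided by q times the variance of one-period returns, which is close to one for a random walk. The sketch below computes the plain Lo-MacKinlay ratio on simulated price series; it does not implement the rank- and sign-based or wild-bootstrap refinements cited in the record.

```python
import numpy as np

def variance_ratio(prices, q):
    """Lo-MacKinlay variance ratio VR(q) for a price series."""
    log_p = np.log(prices)
    r1 = np.diff(log_p)                       # 1-period log returns
    rq = log_p[q:] - log_p[:-q]               # q-period log returns
    return rq.var(ddof=1) / (q * r1.var(ddof=1))

rng = np.random.default_rng(8)
random_walk = 20.0 * np.exp(np.cumsum(rng.normal(0.0, 0.02, size=5000)))

ar = np.zeros(5000)                           # AR(1) returns -> predictable prices
for t in range(1, ar.size):
    ar[t] = 0.3 * ar[t - 1] + rng.normal(0.0, 0.02)
predictable = 20.0 * np.exp(np.cumsum(ar))

for q in (2, 5, 10):
    print(f"q={q:>2}: VR(random walk) = {variance_ratio(random_walk, q):.2f}, "
          f"VR(autocorrelated) = {variance_ratio(predictable, q):.2f}")
```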

  18. Novel feature for catalytic protein residues reflecting interactions with other residues.

    Directory of Open Access Journals (Sweden)

    Yizhou Li

    Full Text Available Owing to their potential for systematic analysis, complex networks have been widely used in proteomics. Representing a protein structure as a topology network provides novel insight into understanding protein folding mechanisms, stability and function. Here, we develop a new feature to reveal correlations between residues using a protein structure network. In an original attempt to quantify the effects of several key residues on catalytic residues, a power function was used to model interactions between residues. The results indicate that focusing on a few residues is a feasible approach to identifying catalytic residues. The spatial environment surrounding a catalytic residue was analyzed in a layered manner. We present evidence that (i) correlation between residues is related to their distance apart, with most environmental parameters of the outer layer making a smaller contribution to prediction, and (ii) catalytic residues tend to be located near key positions in enzyme folds. Feature analysis revealed satisfactory performance for our features, which were combined with several conventional features in a prediction model for catalytic residues using a comprehensive data set from the Catalytic Site Atlas. Values of 88.6 for sensitivity and 88.4 for specificity were obtained by 10-fold cross-validation. These results suggest that these features reveal the mutual dependence of residues and are promising for further study of the structure-function relationship.

  19. Body Composition Explains Greater Variance in Weight-for-Length Z-scores than Mid-Upper Arm Circumference during Infancy - A Secondary Data Analysis

    International Nuclear Information System (INIS)

    Grijalva-Eternod, Carlos; Andersen, Gregers Stig; Girma, Tsinuel; Admassu, Bitiya; Kæstel, Pernille; Michaelsen, Kim F; Friis, Henrik; Wells, Jonathan CK

    2014-01-01

    with length at all ages (correlation values range 0.42 to 0.61) compared to WLZ which correlated negatively with length only between birth and 2.5 months (range -12 to -15). Both MUAC and WLZ were strongly and positively correlated with LM and FM standardised residuals with correlation values being systematically greater for WLZ (range 0.53 – 0.82 and 0.54 – 0.77, for LM and FM respectively) than for MUAC (range 0.28 – 0.42 and 0.45 – 0.63, respectively). Together LM and FM standardised residuals (controlled for sex) explained over 93% of WLZ variance at all ages (see table 1). In contrast, LM and FM residuals explained between 37 – 52% MUAC variance. Conclusions: LM and FM values have stronger associations with WLZ and together they explain almost all the variance of this anthropometric indicator compared to MUAC in children aged 0-6 months. Given these findings, it is unlikely that any greater capacity of MUAC to predict mortality among infants can be explained by the overall variability in body composition. (author)

  20. Nonintellective intelligence and personality: variance shared by the Constructive Thinking Inventory and the Myers-Briggs Type Indicator.

    Science.gov (United States)

    Spirrison, C L; Gordy, C C

    1994-04-01

    The Constructive Thinking Inventory (CTI; Epstein & Meier, 1989), a recently developed scale assessing patterns of habitual everyday thoughts, was compared with the Myers-Briggs Type Indicator (MBTI; Myers & McCaulley, 1985) to ascertain areas of common variance. CTI and MBTI data from 65 men and 109 women were evaluated. A series of standard multiple regression procedures indicated that, in most instances, CTI scales were predictive of MBTI continuous scores, although gender mediated several of the effects. The results suggest that the variance assessed by the CTI is similar to that addressed by traditional measures of personality but that the CTI partitions the variance in an atypical, yet coherent, manner.

  1. Waste Isolation Pilot Plant No-Migration Variance Petition

    International Nuclear Information System (INIS)

    1990-03-01

    The purpose of the WIPP No-Migration Variance Petition is to demonstrate, according to the requirements of RCRA section 3004(d) and 40 CFR section 268.6, that to a reasonable degree of certainty, there will be no migration of hazardous constituents from the facility for as long as the wastes remain hazardous. The DOE submitted the petition to the EPA in March 1989. Upon completion of its initial review, the EPA provided to DOE a Notice of Deficiencies (NOD). DOE responded to the EPA's NOD and met with the EPA's reviewers of the petition several times during 1989. In August 1989, EPA requested that DOE submit significant additional information addressing a variety of topics including: waste characterization, ground water hydrology, geology and dissolution features, monitoring programs, the gas generation test program, and other aspects of the project. This additional information was provided to EPA in January 1990 when DOE submitted Revision 1 of the Addendum to the petition. For clarity and ease of review, this document includes all of these submittals, and the information has been updated where appropriate. This document is divided into the following sections: Introduction, 1.0; Facility Description, 2.0; Waste Description, 3.0; Site Characterization, 4.0; Environmental Impact Analysis, 5.0; Prediction and Assessment of Infrequent Events, 6.0; and References, 7.0.

  2. Escape from predators and genetic variance in birds.

    Science.gov (United States)

    Jiang, Y; Møller, A P

    2017-11-01

    Predation is a common cause of death in numerous organisms, and a host of antipredator defences have evolved. Such defences often have a genetic background as shown by significant heritability and microevolutionary responses towards weaker defences in the absence of predators. Flight initiation distance (FID) is the distance at which an individual animal takes flight when approached by a human, and hence, it reflects the life-history compromise between risk of predation and the benefits of foraging. Here, we analysed FID in 128 species of birds in relation to three measures of genetic variation, band sharing coefficient for minisatellites, observed heterozygosity and inbreeding coefficient for microsatellites in order to test whether FID was positively correlated with genetic variation. We found consistently shorter FID for a given body size in the presence of high band sharing coefficients, low heterozygosity and high inbreeding coefficients in phylogenetic analyses after controlling statistically for potentially confounding variables. These findings imply that antipredator behaviour is related to genetic variance. We predict that many threatened species with low genetic variability will show reduced antipredator behaviour and that subsequent predator-induced reductions in abundance may contribute to unfavourable population trends for such species. © 2017 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2017 European Society For Evolutionary Biology.

  3. A paradox of genetic variance in epigamic traits: beyond "good genes" view of sexual selection

    OpenAIRE

    Radwan, Jacek; Engqvist, Leif Martin; Reinhold, Klaus

    2016-01-01

    Maintenance of genetic variance in secondary sexual traits, including bizarre ornaments and elaborated courtship displays, is a central problem of sexual selection theory. Despite theoretical arguments predicting that strong sexual selection leads to a depletion of additive genetic variance, traits associated with mating success show relatively high heritability. Here we argue that because of trade-offs associated with the production of costly epigamic traits, sexual selection is likely to le...

  4. Ulnar variance as a predictor of persistent instability following Galeazzi fracture-dislocations.

    Science.gov (United States)

    Takemoto, Richelle; Sugi, Michelle; Immerman, Igor; Tejwani, Nirmal; Egol, Kenneth A

    2014-03-01

    We investigated the radiographic parameters that may predict distal radioulnar joint (DRUJ) instability in surgically treated radial shaft fractures. In our clinical experience, there are no previously reported radiographic parameters that are universally predictive of DRUJ instability following radial shaft fracture. Fifty consecutive patients, ages 20-79 years, with unilateral radial shaft fractures and possible associated DRUJ injury were retrospectively identified over a 5-year period. Distance from the radiocarpal joint (RCJ) to the fracture proportional to radial shaft length, ulnar variance, and ulnar styloid fractures were correlated with DRUJ instability after surgical treatment. Twenty patients had persistent DRUJ incongruence/instability following fracture fixation. As a proportion of radial length, the distance from the RCJ to the fracture line did not significantly differ between those with persistent DRUJ instability and those without (p = 0.34). The average initial ulnar variance was 5.5 mm (range 2-12 mm, SD = 3.2) in patients with DRUJ instability and 3.8 mm (range 0-11 mm, SD = 3.5) in patients without. Only 4/20 patients (20%) with DRUJ instability had normal ulnar variance (-2 to +2 mm) versus 15/30 (50%) patients without (p = 0.041). In the setting of a radial shaft fracture, ulnar variance greater or less than 2 mm was associated with a greater likelihood of DRUJ incongruence/instability following fracture fixation.

  5. Relative variance of the mean-squared pressure in multimode media: rehabilitating former approaches.

    Science.gov (United States)

    Monsef, Florian; Cozza, Andrea; Rodrigues, Dominique; Cellard, Patrick; Durocher, Jean-Noel

    2014-11-01

    The commonly accepted model for the relative variance of transmission functions in room acoustics, derived by Weaver, aims at including the effects of correlation between eigenfrequencies. This model is based on an analytical expression of the relative variance derived by means of an approximated correlation function. The relevance of the approximation used for modeling such correlation is questioned here. Weaver's model was motivated by the fact that earlier models derived by Davy and Lyon assumed independent eigenfrequencies and led to an overestimation with respect to relative variances found in practice. It is shown here that this overestimation is due to an inadequate truncation of the modal expansion, and to an improper choice of the frequency range over which ensemble averages of the eigenfrequencies are defined. An alternative definition is proposed, settling the inconsistency; predicted relative variances are found to be in good agreement with experimental data. These results rehabilitate former approaches that were based on independence assumptions between eigenfrequencies. Some former studies showed that simpler correlation models could be used to predict the statistics of some field-related physical quantity at low modal overlap. The present work confirms that this is also the case when dealing with transmission functions.

  6. Time variance effects and measurement error indications for MLS measurements

    DEFF Research Database (Denmark)

    Liu, Jiyuan

    1999-01-01

    Mathematical characteristics of Maximum-Length-Sequences are discussed, and effects of measuring on slightly time-varying systems with the MLS method are examined with computer simulations with MATLAB. A new coherence measure is suggested for the indication of time-variance effects. The results of the simulations show that the proposed MLS coherence can give an indication of time-variance effects.

  7. On the Endogeneity of the Mean-Variance Efficient Frontier.

    Science.gov (United States)

    Somerville, R. A.; O'Connell, Paul G. J.

    2002-01-01

    Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…

  8. Gender Variance and Educational Psychology: Implications for Practice

    Science.gov (United States)

    Yavuz, Carrie

    2016-01-01

    The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…

  9. Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances

    Science.gov (United States)

    Jan, Show-Li; Shieh, Gwowen

    2014-01-01

    The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…

  10. 42 CFR 456.522 - Content of request for variance.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Content of request for variance. 456.522 Section 456.522 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... perform UR within the time requirements for which the variance is requested and its good faith efforts to...

  11. Productive Failure in Learning the Concept of Variance

    Science.gov (United States)

    Kapur, Manu

    2012-01-01

    In a study with ninth-grade mathematics students on learning the concept of variance, students experienced either direct instruction (DI) or productive failure (PF), wherein they were first asked to generate a quantitative index for variance without any guidance before receiving DI on the concept. Whereas DI students relied only on the canonical…

  12. Evaluation of Mean and Variance Integrals without Integration

    Science.gov (United States)

    Joarder, A. H.; Omar, M. H.

    2007-01-01

    The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since they involve integration by parts, many students do not feel comfortable. In this note, a technique is demonstrated for deriving mean and variance through differential…

  13. Adjustment of heterogenous variances and a calving year effect in ...

    African Journals Online (AJOL)

    Data at the beginning and at the end of lactation period, have higher variances than tests in the middle of the lactation. Furthermore, first lactations have lower mean and variances compared to second and third lactations. This is a deviation from the basic assumptions required for the application of repeatability models.

  14. Direct encoding of orientation variance in the visual system.

    Science.gov (United States)

    Norman, Liam J; Heywood, Charles A; Kentridge, Robert W

    2015-01-01

    Our perception of regional irregularity, an example of which is orientation variance, seems effortless when we view two patches of texture that differ in this attribute. Little is understood, however, of how the visual system encodes a regional statistic like orientation variance, but there is some evidence to suggest that it is directly encoded by populations of neurons tuned broadly to high or low levels. The present study shows that selective adaptation to low or high levels of variance results in a perceptual aftereffect that shifts the perceived level of variance of a subsequently viewed texture in the direction away from that of the adapting stimulus (Experiments 1 and 2). Importantly, the effect is durable across changes in mean orientation, suggesting that the encoding of orientation variance is independent of global first moment orientation statistics (i.e., mean orientation). In Experiment 3 it was shown that the variance-specific aftereffect did not show signs of being encoded in a spatiotopic reference frame, similar to the equivalent aftereffect of adaptation to the first moment orientation statistic (the tilt aftereffect), which is represented in the primary visual cortex and exists only in retinotopic coordinates. Experiment 4 shows that a neuropsychological patient with damage to ventral areas of the cortex but spared intact early areas retains sensitivity to orientation variance. Together these results suggest that orientation variance is encoded directly by the visual system and possibly at an early cortical stage.

  15. Genotypic-specific variance in Caenorhabditis elegans lifetime fecundity.

    Science.gov (United States)

    Diaz, S Anaid; Viney, Mark

    2014-06-01

    Organisms live in heterogeneous environments, so strategies that maximize fitness in such environments will evolve. Variation in traits is important because it is the raw material on which natural selection acts during evolution. Phenotypic variation is usually thought to be due to genetic variation and/or environmentally induced effects. Therefore, genetically identical individuals in a constant environment should have invariant traits. Clearly, genetically identical individuals do differ phenotypically, usually thought to be due to stochastic processes. It is now becoming clear, especially from studies of unicellular species, that phenotypic variance among genetically identical individuals in a constant environment can be genetically controlled and that therefore, in principle, this can be subject to selection. However, there has been little investigation of these phenomena in multicellular species. Here, we have studied the mean lifetime fecundity (thus a trait likely to be relevant to reproductive success), and variance in lifetime fecundity, in recently-wild isolates of the model nematode Caenorhabditis elegans. We found that these genotypes differed in their variance in lifetime fecundity: some had high variance in fecundity, others very low variance. We find that this variance in lifetime fecundity was negatively related to the mean lifetime fecundity of the lines, and that the variance of the lines was positively correlated between environments. We suggest that the variance in lifetime fecundity may be a bet-hedging strategy used by this species.

  16. 29 CFR 1904.38 - Variances from the recordkeeping rule.

    Science.gov (United States)

    2010-07-01

    ..., DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Other OSHA Injury and Illness... he or she finds appropriate. (iv) If the Assistant Secretary grants your variance petition, OSHA will... Secretary is reviewing your variance petition. (4) If I have already been cited by OSHA for not following...

  17. Minimum Variance Portfolios in the Brazilian Equity Market

    Directory of Open Access Journals (Sweden)

    Alexandre Rubesam

    2013-03-01

    Full Text Available We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple model of using the sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, being easily replicable by individual and institutional investors alike.
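    The closed-form global minimum variance weights that such studies start from are easy to sketch; the snippet below assumes a plain sample covariance matrix of simulated returns and ignores the GARCH estimators and the long-short 130/30 constraints examined in the paper.

        import numpy as np

        def min_variance_weights(cov):
            """Unconstrained GMV solution: w = Sigma^-1 1 / (1' Sigma^-1 1)."""
            ones = np.ones(cov.shape[0])
            w = np.linalg.solve(cov, ones)      # avoids forming the explicit inverse
            return w / w.sum()

        rng = np.random.default_rng(1)
        returns = rng.normal(size=(500, 10))    # hypothetical daily returns, 10 assets
        w = min_variance_weights(np.cov(returns, rowvar=False))
        print(w.round(3), w.sum())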

  18. Neuroticism explains unwanted variance in Implicit Association Tests of personality: possible evidence for an affective valence confound.

    Science.gov (United States)

    Fleischhauer, Monika; Enge, Sören; Miller, Robert; Strobel, Alexander; Strobel, Anja

    2013-01-01

    Meta-analytic data highlight the value of the Implicit Association Test (IAT) as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling (SEM), latent Big-Five personality factors (based on self- and peer-report) were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined in the sense of method-specific variance which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign), biases that might result, for example, from the IAT's stimulus or category features. None of the explicit Big-Five factors was predictive for method-specific variance in the IATs (first analysis). However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis), a substantial effect of neuroticism occurred that may have been driven by the affective valence of IAT attribute categories and the facilitated processing of negative stimuli, typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to recoding.

  19. Neuroticism explains unwanted variance in Implicit Association Tests of personality: Possible evidence for an affective valence confound

    Directory of Open Access Journals (Sweden)

    Monika eFleischhauer

    2013-09-01

    Full Text Available Meta-analytic data highlight the value of the Implicit Association Test (IAT) as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling, latent Big-Five personality factors (based on self- and peer-report) were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined in the sense of method-specific variance which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign), biases that might result, for example, from the IAT's stimulus or category features. None of the explicit Big-Five factors was predictive for method-specific variance in the IATs (first analysis). However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis), a substantial effect of neuroticism occurred that may have been driven by the affective valence of IAT attribute categories and the facilitated processing of negative stimuli, typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to

  20. Integrating mean and variance heterogeneities to identify differentially expressed genes.

    Science.gov (United States)

    Ouyang, Weiwei; An, Qiang; Zhao, Jinying; Qin, Huaizhen

    2016-12-06

    In functional genomics studies, tests on mean heterogeneity have been widely employed to identify differentially expressed genes with distinct mean expression levels under different experimental conditions. Variance heterogeneity (aka, the difference between condition-specific variances) of gene expression levels is simply neglected or calibrated for as an impediment. The mean heterogeneity in the expression level of a gene reflects one aspect of its distribution alteration; and variance heterogeneity induced by condition change may reflect another aspect. Change in condition may alter both mean and some higher-order characteristics of the distributions of expression levels of susceptible genes. In this report, we put forth a conception of mean-variance differentially expressed (MVDE) genes, whose expression means and variances are sensitive to the change in experimental condition. We mathematically proved the null independence of existent mean heterogeneity tests and variance heterogeneity tests. Based on the independence, we proposed an integrative mean-variance test (IMVT) to combine gene-wise mean heterogeneity and variance heterogeneity induced by condition change. The IMVT outperformed its competitors under comprehensive simulations of normality and Laplace settings. For moderate samples, the IMVT well controlled type I error rates, and so did the existent mean heterogeneity tests (i.e., the Welch t test (WT) and the moderated Welch t test (MWT)) and the procedure of separate tests on mean and variance heterogeneities (SMVT), but the likelihood ratio test (LRT) severely inflated type I error rates. In the presence of variance heterogeneity, the IMVT appeared noticeably more powerful than all the valid mean heterogeneity tests. Application to the gene profiles of peripheral circulating B raised solid evidence of informative variance heterogeneity. After adjusting for background data structure, the IMVT replicated previous discoveries and identified novel experiment
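    A generic stand-in for the idea of testing mean and variance heterogeneity jointly is sketched below: a Welch t test and Levene's test per gene, combined with Fisher's method (reasonable given the null independence discussed above). This is an illustrative combination, not the authors' IMVT statistic, and the expression values are simulated.

        import numpy as np
        from scipy import stats

        def combined_mean_variance_p(expr_a, expr_b):
            _, p_mean = stats.ttest_ind(expr_a, expr_b, equal_var=False)   # Welch t test
            _, p_var = stats.levene(expr_a, expr_b)                        # variance heterogeneity
            chi2 = -2.0 * (np.log(p_mean) + np.log(p_var))                 # Fisher combination
            return stats.chi2.sf(chi2, df=4)

        rng = np.random.default_rng(2)
        p = combined_mean_variance_p(rng.normal(0.0, 1.0, 40), rng.normal(0.3, 1.5, 40))
        print(p)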

  1. Decomposing variation in male reproductive success: age-specific variances and covariances through extra-pair and within-pair reproduction.

    Science.gov (United States)

    Lebigre, Christophe; Arcese, Peter; Reid, Jane M

    2013-07-01

    Age-specific variances and covariances in reproductive success shape the total variance in lifetime reproductive success (LRS), age-specific opportunities for selection, and population demographic variance and effective size. Age-specific (co)variances in reproductive success achieved through different reproductive routes must therefore be quantified to predict population, phenotypic and evolutionary dynamics in age-structured populations. While numerous studies have quantified age-specific variation in mean reproductive success, age-specific variances and covariances in reproductive success, and the contributions of different reproductive routes to these (co)variances, have not been comprehensively quantified in natural populations. We applied 'additive' and 'independent' methods of variance decomposition to complete data describing apparent (social) and realised (genetic) age-specific reproductive success across 11 cohorts of socially monogamous but genetically polygynandrous song sparrows (Melospiza melodia). We thereby quantified age-specific (co)variances in male within-pair and extra-pair reproductive success (WPRS and EPRS) and the contributions of these (co)variances to the total variances in age-specific reproductive success and LRS. 'Additive' decomposition showed that within-age and among-age (co)variances in WPRS across males aged 2-4 years contributed most to the total variance in LRS. Age-specific (co)variances in EPRS contributed relatively little. However, extra-pair reproduction altered age-specific variances in reproductive success relative to the social mating system, and hence altered the relative contributions of age-specific reproductive success to the total variance in LRS. 'Independent' decomposition showed that the (co)variances in age-specific WPRS, EPRS and total reproductive success, and the resulting opportunities for selection, varied substantially across males that survived to each age. Furthermore, extra-pair reproduction increased
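    The core identity behind the 'additive' decomposition can be illustrated in a few lines: the variance of total reproductive success equals the variances of within-pair and extra-pair success plus twice their covariance. The counts below are simulated placeholders, not song sparrow data.

        import numpy as np

        rng = np.random.default_rng(3)
        wprs = rng.poisson(3.0, size=200).astype(float)   # within-pair offspring per male
        eprs = rng.poisson(1.0, size=200).astype(float)   # extra-pair offspring per male
        total = wprs + eprs

        var_w, var_e = wprs.var(ddof=1), eprs.var(ddof=1)
        cov_we = np.cov(wprs, eprs)[0, 1]
        print(np.isclose(total.var(ddof=1), var_w + var_e + 2.0 * cov_we))   # True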

  2. A general method for describing sources of variance in clinical trials, especially operator variance, in order to improve transfer of research knowledge to practice.

    Science.gov (United States)

    Chambers, David W; Leknius, Casimir; Reid, Laura

    2009-04-01

    The purpose of this study was to demonstrate how the skill level of the operator and the clinical challenge provided by the patient affect the outcomes of clinical research in ways that may have hidden influences on the applicability of that research to practice. Rigorous research designs that control or eliminate operator or patient factors as sources of variance achieve improved statistical significance for study hypotheses. These procedures, however, mask sources of variance that influence the applicability of the conclusions. There are summary data that can be added to reports of clinical trials to permit potential users of the findings to identify the most important sources of variation and to predict the likely outcomes of adopting products and procedures reported in the literature. Provisional crowns were constructed in a laboratory setting in a fully crossed, random-factor model with two levels of material (Treatment), two skill levels of students (Operator), and restorations of two levels of difficulty (Patient). The levels of the Treatment, Operator, and Patient factors used in the study were chosen to ensure that the findings from the study could be transferred to practice settings in a predictable fashion. The provisional crowns were scored independently by two raters using the criteria for technique courses in the school where the research was conducted. The Operator variable accounted for 38% of the variance, followed by Treatment-by-Operator interaction (17%), Treatment (17%), and other factors and their combinations in smaller amounts. Regression equations were calculated for each Treatment material that can be used to predict outcomes in various potential transfer applications. It was found that classical analyses for differences between materials (the Treatment variable) would yield inconsistent results under various sampling systems within the parameters of the study. Operator and Treatment-by-Operator interactions appear to be significant and

  3. Improved crop residue cover estimates by coupling spectral indices for residue and moisture

    Science.gov (United States)

    Remote sensing assessment of soil residue cover (fR) and tillage intensity will improve our predictions of the impact of agricultural practices and promote sustainable management. Spectral indices for estimating fR are sensitive to soil and residue water content, therefore, the uncertainty of estima...

  4. Evolution of sociality by natural selection on variances in reproductive fitness: evidence from a social bee

    OpenAIRE

    Stevens, Mark I; Hogendoorn, Katja; Schwarz, Michael P

    2007-01-01

    Abstract Background The Central Limit Theorem (CLT) is a statistical principle that states that as the number of repeated samples from any population increase, the variance among sample means will decrease and means will become more normally distributed. It has been conjectured that the CLT has the potential to provide benefits for group living in some animals via greater predictability in food acquisition, if the number of foraging bouts increases with group size. The potential existence of ...

  5. On the Structural Context and Identification of Enzyme Catalytic Residues

    Science.gov (United States)

    Chien, Yu-Tung; Huang, Shao-Wei

    2013-01-01

    Enzymes play important roles in most of the biological processes. Although only a small fraction of residues are directly involved in catalytic reactions, these catalytic residues are the most crucial parts in enzymes. The study of the fundamental and unique features of catalytic residues benefits the understanding of enzyme functions and catalytic mechanisms. In this work, we analyze the structural context of catalytic residues based on theoretical and experimental structure flexibility. The results show that catalytic residues have distinct structural features and context. Their neighboring residues, whether sequence or structure neighbors within specific range, are usually structurally more rigid than those of noncatalytic residues. The structural context feature is combined with support vector machine to identify catalytic residues from enzyme structure. The prediction results are better or comparable to those of recent structure-based prediction methods. PMID:23484160

  6. Minimal residual disease monitoring by quantitative RT-PCR in core binding factor AML allows risk stratification and predicts relapse: results of the United Kingdom MRC AML-15 trial.

    Science.gov (United States)

    Yin, John A Liu; O'Brien, Michelle A; Hills, Robert K; Daly, Sarah B; Wheatley, Keith; Burnett, Alan K

    2012-10-04

    The clinical value of serial minimal residual disease (MRD) monitoring in core binding factor (CBF) acute myeloid leukemia (AML) by quantitative RT-PCR was prospectively assessed in 278 patients [163 with t(8;21) and 115 with inv(16)] entered in the United Kingdom MRC AML 15 trial. CBF transcripts were normalized to 10^5 ABL copies. At remission, after course 1 induction chemotherapy, a > 3 log reduction in RUNX1-RUNX1T1 transcripts in BM in t(8;21) patients and a > 10 CBFB-MYH11 copy number in peripheral blood (PB) in inv(16) patients were the most useful prognostic variables for relapse risk on multivariate analysis. MRD levels after consolidation (course 3) were also informative. During follow-up, cut-off MRD thresholds in BM and PB associated with a 100% relapse rate were identified: for t(8;21) patients BM > 500 copies, PB > 100 copies; for inv(16) patients, BM > 50 copies and PB > 10 copies. Rising MRD levels on serial monitoring accurately predicted hematologic relapse. During follow-up, PB sampling was equally informative as BM for MRD detection. We conclude that MRD monitoring by quantitative RT-PCR at specific time points in CBF AML allows identification of patients at high risk of relapse and could now be incorporated in clinical trials to evaluate the role of risk directed/preemptive therapy.

  7. Estimating High-Frequency Based (Co-) Variances: A Unified Approach

    DEFF Research Database (Denmark)

    Voev, Valeri; Nolte, Ingmar

    We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent and commonly applied estimators, such as the realized kernels of Barndorff-Nielsen, Hansen, Lunde & Shephard (2006), the two-scales realized variance of Zhang, Mykland & Aït-Sahalia (2005), the Hayashi & Yoshida (2005) covariance estimator, and the realized variance and covariance with the optimal sampling...
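    For orientation, the plain (uncorrected) realized variance and covariance that these refined estimators improve upon are just sums of products of intraday log returns; the prices below are simulated one-minute series and no microstructure-noise correction is applied.

        import numpy as np

        rng = np.random.default_rng(4)
        prices_a = 100.0 * np.exp(np.cumsum(rng.normal(0.0, 1e-3, 390)))   # hypothetical 1-min prices
        prices_b = 50.0 * np.exp(np.cumsum(rng.normal(0.0, 1e-3, 390)))
        r_a, r_b = np.diff(np.log(prices_a)), np.diff(np.log(prices_b))

        realized_var_a = np.sum(r_a ** 2)       # realized variance of asset a
        realized_cov = np.sum(r_a * r_b)        # realized covariance of a and b
        print(realized_var_a, realized_cov)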

  8. Meta-analysis of SNPs involved in variance heterogeneity using Levene's test for equal variances

    Science.gov (United States)

    Deng, Wei Q; Asma, Senay; Paré, Guillaume

    2014-01-01

    Meta-analysis is a commonly used approach to increase the sample size for genome-wide association searches when individual studies are otherwise underpowered. Here, we present a meta-analysis procedure to estimate the heterogeneity of the quantitative trait variance attributable to genetic variants using Levene's test without needing to exchange individual-level data. The meta-analysis of Levene's test offers the opportunity to combine the considerable sample size of a genome-wide meta-analysis to identify the genetic basis of phenotypic variability and to prioritize single-nucleotide polymorphisms (SNPs) for gene–gene and gene–environment interactions. The use of Levene's test has several advantages, including robustness to departure from the normality assumption, freedom from the influence of the main effects of SNPs, and no assumption of an additive genetic model. We conducted a meta-analysis of the log-transformed body mass index of 5892 individuals and identified a variant with a highly suggestive Levene's test P-value of 4.28E-06 near the NEGR1 locus known to be associated with extreme obesity. PMID:23921533
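    A minimal sketch of the procedure's logic is given below: run Levene's test for equal variances across genotype groups within each cohort, convert the p-values to Z scores and combine them with a sample-size-weighted Stouffer sum. The cohorts are simulated and the exact weighting used in the paper is an assumption here.

        import numpy as np
        from scipy import stats

        def levene_z(trait, genotype):
            groups = [trait[genotype == g] for g in np.unique(genotype)]
            _, p = stats.levene(*groups)
            return stats.norm.isf(p)                 # p-value -> Z score

        rng = np.random.default_rng(5)
        z_list, w_list = [], []
        for n in (800, 1200, 600):                   # three hypothetical cohorts
            geno = rng.integers(0, 3, size=n)
            trait = rng.normal(scale=1.0 + 0.05 * geno, size=n)
            z_list.append(levene_z(trait, geno))
            w_list.append(np.sqrt(n))

        z_meta = np.dot(w_list, z_list) / np.linalg.norm(w_list)
        print(stats.norm.sf(z_meta))                 # meta-analytic p-value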

  9. Relative turbulent transport efficiency and flux-variance relationships of temperature and water vapor

    Science.gov (United States)

    Hsieh, C. I.

    2016-12-01

    This study investigated the relative transport efficiency and flux-variance relationships of temperature and water vapor, and examined the performance of the flux-variance method for predicting sensible heat (H) and water vapor (LE) fluxes using eddy-covariance flux data from three different ecosystems: grassland, paddy rice field, and forest. The H and LE estimations were found to be in good agreement with the measurements over the three fields. The prediction accuracy of LE could be improved by around 15% if the predictions were obtained by the flux-variance method in conjunction with measured sensible heat fluxes. Moreover, the paddy rice field was found to be a special case where water vapor follows the flux-variance relation better than heat does. The flux budget equations of heat and water vapor were applied to explain this phenomenon. Our results also showed that heat and water vapor were transported with the same efficiency above the grassland and rice paddy. For the forest, heat was transported 20% more efficiently than evapotranspiration.
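    As an illustration of the method's flavour, the free-convection form of the flux-variance relation estimates H from the standard deviation of air temperature, H ≈ rho*cp*(sigma_T/C1)^(3/2)*(k*g*z/T)^(1/2) with C1 ≈ 0.95. The constants and the stability and LE extensions actually used in the study are assumptions here, not taken from the record.

        import numpy as np

        def sensible_heat_flux(sigma_T, T_mean, z, rho=1.2, cp=1005.0, k=0.4, g=9.81, C1=0.95):
            """Free-convection flux-variance estimate of H in W m^-2."""
            return rho * cp * (sigma_T / C1) ** 1.5 * np.sqrt(k * g * z / T_mean)

        # e.g. sigma_T = 0.5 K measured at z = 3 m over grassland with mean T = 300 K
        print(sensible_heat_flux(sigma_T=0.5, T_mean=300.0, z=3.0))   # roughly 90 W m^-2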

  10. Blinded sample size re-estimation in superiority and noninferiority trials: bias versus variance in variance estimation.

    Science.gov (United States)

    Friede, Tim; Kieser, Meinhard

    2013-01-01

    The internal pilot study design allows for modifying the sample size during an ongoing study based on a blinded estimate of the variance thus maintaining the trial integrity. Various blinded sample size re-estimation procedures have been proposed in the literature. We compare the blinded sample size re-estimation procedures based on the one-sample variance of the pooled data with a blinded procedure using the randomization block information with respect to bias and variance of the variance estimators, and the distribution of the resulting sample sizes, power, and actual type I error rate. For reference, sample size re-estimation based on the unblinded variance is also included in the comparison. It is shown that using an unbiased variance estimator (such as the one using the randomization block information) for sample size re-estimation does not guarantee that the desired power is achieved. Moreover, in situations that are common in clinical trials, the variance estimator that employs the randomization block length shows a higher variability than the simple one-sample estimator and in turn the sample size resulting from the related re-estimation procedure. This higher variability can lead to a lower power as was demonstrated in the setting of noninferiority trials. In summary, the one-sample estimator obtained from the pooled data is extremely simple to apply, shows good performance, and is therefore recommended for application. Copyright © 2013 John Wiley & Sons, Ltd.
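    The mechanics of a blinded re-estimation step can be sketched as follows: compute the one-sample variance of the pooled interim data and plug it into the usual two-group sample size formula. The effect size delta, alpha and power are illustrative assumptions; note that the pooled variance is biased upward by any true treatment effect, which is part of the bias-versus-variance trade-off discussed above.

        import numpy as np
        from scipy import stats

        def reestimated_n_per_group(pooled_interim, delta, alpha=0.05, power=0.9):
            s2 = np.var(pooled_interim, ddof=1)                    # blinded one-sample variance
            z_a, z_b = stats.norm.isf(alpha / 2), stats.norm.isf(1 - power)
            return int(np.ceil(2.0 * s2 * (z_a + z_b) ** 2 / delta ** 2))

        rng = np.random.default_rng(6)
        interim = rng.normal(loc=0.1, scale=1.2, size=80)          # pooled, blinded interim data
        print(reestimated_n_per_group(interim, delta=0.5))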

  11. Knowledge extraction algorithm for variances handling of CP using integrated hybrid genetic double multi-group cooperative PSO and DPSO.

    Science.gov (United States)

    Du, Gang; Jiang, Zhibin; Diao, Xiaodi; Yao, Yang

    2012-04-01

    Although the clinical pathway (CP) predefines a predictable standardized care process for a particular diagnosis or procedure, many variances may still unavoidably occur. Some key index parameters have a strong relationship with the variance-handling measures of a CP. In the real world, these problems are highly nonlinear in nature, so it is hard to develop a comprehensive mathematical model. In this paper, a rule extraction approach based on combining a hybrid genetic double multi-group cooperative particle swarm optimization algorithm (PSO) and a discrete PSO algorithm (named HGDMCPSO/DPSO) is developed to discover the previously unknown and potentially complicated nonlinear relationship between key parameters and variance-handling measures of CPs. These extracted rules can then provide abnormal variance-handling warnings for medical professionals. Three numerical experiments on the Iris UCI data set, the Wisconsin breast cancer data set and CP variance data from osteosarcoma preoperative chemotherapy are used to validate the proposed method. Compared with previous research, the proposed rule extraction algorithm obtains high prediction accuracy, less computing time and more stability, and its rules are easily comprehended by users; it is thus an effective knowledge extraction tool for CP variance handling.

  12. Phenotypic variance explained by local ancestry in admixed African Americans.

    Science.gov (United States)

    Shriner, Daniel; Bentley, Amy R; Doumatey, Ayo P; Chen, Guanjie; Zhou, Jie; Adeyemo, Adebowale; Rotimi, Charles N

    2015-01-01

    We surveyed 26 quantitative traits and disease outcomes to understand the proportion of phenotypic variance explained by local ancestry in admixed African Americans. After inferring local ancestry as the number of African-ancestry chromosomes at hundreds of thousands of genotyped loci across all autosomes, we used a linear mixed effects model to estimate the variance explained by local ancestry in two large independent samples of unrelated African Americans. We found that local ancestry at major and polygenic effect genes can explain up to 20 and 8% of phenotypic variance, respectively. These findings provide evidence that most but not all additive genetic variance is explained by genetic markers undifferentiated by ancestry. These results also inform the proportion of health disparities due to genetic risk factors and the magnitude of error in association studies not controlling for local ancestry.

  13. Allowable variance set on left ventricular function parameter

    International Nuclear Information System (INIS)

    Zhou Li'na; Qi Zhongzhi; Zeng Yu; Ou Xiaohong; Li Lin

    2010-01-01

    Purpose: To evaluate the influence of the allowable variance setting on left ventricular function parameters in arrhythmia patients during gated myocardial perfusion imaging. Method: 42 patients with evident arrhythmia underwent myocardial perfusion SPECT; three different allowable variance settings (20%, 60%, 100%) were applied before acquisition for every patient, and the acquisitions were performed simultaneously. After reconstruction with Astonish, end-diastolic volume (EDV), end-systolic volume (ESV) and left ventricular ejection fraction (LVEF) were computed with Quantitative Gated SPECT (QGS). EDV, ESV and LVEF values were compared between settings by analysis of variance using SPSS software. Result: there was no statistically significant difference between the three groups. Conclusion: when arrhythmia patients undergo gated myocardial perfusion imaging, the allowable variance setting has no statistically significant effect on EDV, ESV and LVEF values. (authors)

  14. Minimum variance Monte Carlo importance sampling with parametric dependence

    International Nuclear Information System (INIS)

    Ragheb, M.M.H.; Halton, J.; Maynard, C.W.

    1981-01-01

    An approach for Monte Carlo importance sampling with parametric dependence is proposed. It depends upon obtaining, by proper weighting over a single stage, the overall functional dependence of the variance on the importance function parameter over a broad range of its values. Results corresponding to minimum variance are adopted and other results rejected. Numerical calculations for the estimation of integrals are compared to crude Monte Carlo. The results explain the occurrence of effective biases (even though the theoretical bias is zero) and infinite variances which arise in calculations involving severe biasing and a moderate number of histories. Extension to particle transport applications is briefly discussed. The approach constitutes an extension of a theory on the application of Monte Carlo to the calculation of functional dependences, introduced by Frolov and Chentsov, to biasing, or importance sampling, calculations; and is a generalization which avoids nonconvergence to the optimal values in some cases of a multistage method for variance reduction introduced by Spanier. (orig.) [de
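    The flavour of the approach, estimating how the estimator variance depends on an importance-sampling parameter and keeping the minimum, can be sketched for a toy rare-event integral under a shifted normal proposal; the integrand, the parameter grid and the single-stage weighting are illustrative assumptions, not the authors' transport application.

        import numpy as np

        rng = np.random.default_rng(7)
        f = lambda x: (x > 2.0).astype(float)            # rare-event indicator, target X ~ N(0,1)

        def is_estimate(theta, n=20000):
            x = rng.normal(theta, 1.0, size=n)           # proposal N(theta, 1)
            w = np.exp(-theta * x + 0.5 * theta ** 2)    # likelihood ratio N(0,1)/N(theta,1)
            vals = f(x) * w
            return vals.mean(), vals.var(ddof=1) / n     # estimate and estimator variance

        results = {theta: is_estimate(theta) for theta in (0.0, 1.0, 2.0, 3.0)}   # 0.0 = crude MC
        best = min(results, key=lambda t: results[t][1])
        print(best, results[best])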

  15. Some variance reduction methods for numerical stochastic homogenization.

    Science.gov (United States)

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).
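    One of the classical devices reviewed in this line of work, antithetic variates, is easy to demonstrate on a generic Monte Carlo average; the integrand below is illustrative and the corrector-problem setting of numerical homogenization is not reproduced.

        import numpy as np

        rng = np.random.default_rng(8)
        g = lambda u: np.exp(u)                      # integrate exp(u) over U(0,1)

        u = rng.random(50000)
        crude = g(u)
        antithetic = 0.5 * (g(u) + g(1.0 - u))       # pair each draw with its mirror image

        print(crude.mean(), crude.var(ddof=1) / crude.size)                  # crude MC estimate, variance
        print(antithetic.mean(), antithetic.var(ddof=1) / antithetic.size)   # smaller variance per pair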

  16. RISK ANALYSIS, ANALYSIS OF VARIANCE: GETTING MORE FROM OUR DATA

    Science.gov (United States)

    Analysis of variance (ANOVA) and regression are common statistical techniques used to analyze agronomic experimental data and determine significant differences among yields due to treatments or other experimental factors. Risk analysis provides an alternate and complementary examination of the same...

  17. Capturing Option Anomalies with a Variance-Dependent Pricing Kernel

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Heston, Steven; Jacobs, Kris

    2013-01-01

    We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution....

  18. On-Line Estimation of Allan Variance Parameters

    National Research Council Canada - National Science Library

    Ford, J

    1999-01-01

    ... (Inertial Measurement Unit) gyros and accelerometers. The on-line method proposes a state space model and parameter estimators for quantities previously measured from off-line data techniques such as the Allan variance graph...
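    For reference, the off-line Allan variance that the on-line estimators are benchmarked against is straightforward to compute from a recorded rate signal; the sketch below uses a simulated white-noise gyro output and non-overlapping clusters.

        import numpy as np

        def allan_variance(rate, m):
            """Non-overlapping Allan variance for cluster size m (tau = m * sample period)."""
            k = rate.size // m
            cluster_means = rate[: k * m].reshape(k, m).mean(axis=1)
            return 0.5 * np.mean(np.diff(cluster_means) ** 2)

        rng = np.random.default_rng(9)
        rate = rng.normal(0.0, 0.01, size=100_000)       # hypothetical gyro output (rad/s) at 100 Hz
        print({m / 100.0: allan_variance(rate, m) for m in (1, 10, 100, 1000)})   # tau in seconds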

  19. Variance Function Partially Linear Single-Index Models1.

    Science.gov (United States)

    Lian, Heng; Liang, Hua; Carroll, Raymond J

    2015-01-01

    We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function.

  20. Cumulative prospect theory and mean variance analysis. A rigorous comparison

    OpenAIRE

    Hens, Thorsten; Mayer, Janos

    2012-01-01

    We compare asset allocations derived for cumulative prospect theory(CPT) based on two different methods: Maximizing CPT along the mean–variance efficient frontier and maximizing it without that restriction. We find that with normally distributed returns the difference is negligible. However, using standard asset allocation data of pension funds the difference is considerable. Moreover, with derivatives like call options the restriction to the mean-variance efficient frontier results in a siza...

  1. Bliss Points in Mean-Variance Portfolio Models

    OpenAIRE

    David S. Jones; V. Vance Roley

    1981-01-01

    When all financial assets have risky returns, the mean-variance portfolio model is potentially subject to two types of bliss points. One bliss point arises when a von Neumann-Morgenstern utility function displays negative marginal utility for sufficiently large end-of-period wealth, such as in quadratic utility. The second type of bliss point involves satiation in terms of beginning-of-period wealth and afflicts many commonly used mean-variance preference functions. This paper shows that the ...

  2. Towards the ultimate variance-conserving convection scheme

    International Nuclear Information System (INIS)

    Os, J.J.A.M. van; Uittenbogaard, R.E.

    2004-01-01

    In the past various arguments have been used for applying kinetic energy-conserving advection schemes in numerical simulations of incompressible fluid flows. One argument is obeying the programmed dissipation by viscous stresses or by sub-grid stresses in Direct Numerical Simulation and Large Eddy Simulation, see e.g. [Phys. Fluids A 3 (7) (1991) 1766]. Another argument is that, according to e.g. [J. Comput. Phys. 6 (1970) 392; 1 (1966) 119], energy-conserving convection schemes are more stable i.e. by prohibiting a spurious blow-up of volume-integrated energy in a closed volume without external energy sources. In the above-mentioned references it is stated that nonlinear instability is due to spatial truncation rather than to time truncation and therefore these papers are mainly concerned with the spatial integration. In this paper we demonstrate that discretized temporal integration of a spatially variance-conserving convection scheme can induce non-energy conserving solutions. In this paper the conservation of the variance of a scalar property is taken as a simple model for the conservation of kinetic energy. In addition, the derivation and testing of a variance-conserving scheme allows for a clear definition of kinetic energy-conserving advection schemes for solving the Navier-Stokes equations. Consequently, we first derive and test a strictly variance-conserving space-time discretization for the convection term in the convection-diffusion equation. Our starting point is the variance-conserving spatial discretization of the convection operator presented by Piacsek and Williams [J. Comput. Phys. 6 (1970) 392]. In terms of its conservation properties, our variance-conserving scheme is compared to other spatially variance-conserving schemes as well as with the non-variance-conserving schemes applied in our shallow-water solver, see e.g. [Direct and Large-eddy Simulation Workshop IV, ERCOFTAC Series, Kluwer Academic Publishers, 2001, pp. 409-287

  3. Problems of variance reduction in the simulation of random variables

    International Nuclear Information System (INIS)

    Lessi, O.

    1987-01-01

    The definition of the uniform linear generator is given and some of the mostly used tests to evaluate the uniformity and the independence of the obtained determinations are listed. The problem of calculating, through simulation, some moment W of a random variable function is taken into account. The Monte Carlo method enables the moment W to be estimated and the estimator variance to be obtained. Some techniques for the construction of other estimators of W with a reduced variance are introduced

  4. Factors explaining variance in perceived pain in women with fibromyalgia

    OpenAIRE

    Malt, Eva Albertsen; Olafsson, Snorri; Lund, Anders; Ursin, Holger

    2002-01-01

    Abstract Background We hypothesized that a substantial proportion of the subjectively experienced variance in pain in fibromyalgia patients would be explained by psychological factors alone, but that a combined model, including neuroendocrine and autonomic factors, would give the most parsimonious explanation of variance in pain. Methods Psychometric assessment included McGill Pain Questionnaire, General Health Questionnaire, Hospital Anxiety and Depression Rating Scale, Eysenck personality I...

  5. Quantitative genetic variance and multivariate clines in the Ivyleaf morning glory, Ipomoea hederacea.

    Science.gov (United States)

    Stock, Amanda J; Campitelli, Brandon E; Stinchcombe, John R

    2014-08-19

    Clinal variation is commonly interpreted as evidence of adaptive differentiation, although clines can also be produced by stochastic forces. Understanding whether clines are adaptive therefore requires comparing clinal variation to background patterns of genetic differentiation at presumably neutral markers. Although this approach has frequently been applied to single traits at a time, we have comparatively fewer examples of how multiple correlated traits vary clinally. Here, we characterize multivariate clines in the Ivyleaf morning glory, examining how suites of traits vary with latitude, with the goal of testing for divergence in trait means that would indicate past evolutionary responses. We couple this with analysis of genetic variance in clinally varying traits in 20 populations to test whether past evolutionary responses have depleted genetic variance, or whether genetic variance declines approaching the range margin. We find evidence of clinal differentiation in five quantitative traits, with little evidence of isolation by distance at neutral loci that would suggest non-adaptive or stochastic mechanisms. Within and across populations, the traits that contribute most to population differentiation and clinal trends in the multivariate phenotype are genetically variable as well, suggesting that a lack of genetic variance will not cause absolute evolutionary constraints. Our data are broadly consistent with theoretical predictions of polygenic clines in response to shallow environmental gradients. Ecologically, our results are consistent with past findings of natural selection on flowering phenology, presumably due to season-length variation across the range. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  6. Partitioning of the variance in the growth parameters of Erwinia carotovora on vegetable products.

    Science.gov (United States)

    Shorten, P R; Membré, J-M; Pleasants, A B; Kubaczka, M; Soboleva, T K

    2004-06-01

    The objective of this paper was to estimate and partition the variability in the microbial growth model parameters describing the growth of Erwinia carotovora on pasteurised and non-pasteurised vegetable juice from laboratory experiments performed under different temperature-varying conditions. We partitioned the model parameter variance and covariance components into effects due to temperature profile and replicate using a maximum likelihood technique. Temperature profile and replicate were treated as random effects and the food substrate was treated as a fixed effect. The replicate variance component was small indicating a high level of control in this experiment. Our analysis of the combined E. carotovora growth data sets used the Baranyi primary microbial growth model along with the Ratkowsky secondary growth model. The variability in the microbial growth parameters estimated from these microbial growth experiments is essential for predicting the mean and variance through time of the E. carotovora population size in a product supply chain and is the basis for microbiological risk assessment and food product shelf-life estimation. The variance partitioning made here also assists in the management of optimal product distribution networks by identifying elements of the supply chain contributing most to product variability. Copyright 2003 Elsevier B.V.

  7. Changes in variance explained by top SNP windows over generations for three traits in broiler chicken.

    Science.gov (United States)

    Fragomeni, Breno de Oliveira; Misztal, Ignacy; Lourenco, Daniela Lino; Aguilar, Ignacio; Okimoto, Ronald; Muir, William M

    2014-01-01

    The purpose of this study was to determine if the set of genomic regions inferred as accounting for the majority of genetic variation in quantitative traits remain stable over multiple generations of selection. The data set contained phenotypes for five generations of broiler chicken for body weight, breast meat, and leg score. The population consisted of 294,632 animals over five generations and also included genotypes of 41,036 single nucleotide polymorphism (SNP) for 4,866 animals, after quality control. The SNP effects were calculated by a GWAS type analysis using single step genomic BLUP approach for generations 1-3, 2-4, 3-5, and 1-5. Variances were calculated for windows of 20 SNP. The top ten windows for each trait that explained the largest fraction of the genetic variance across generations were examined. Across generations, the top 10 windows explained more than 0.5% but less than 1% of the total variance. Also, the pattern of the windows was not consistent across generations. The windows that explained the greatest variance changed greatly among the combinations of generations, with a few exceptions. In many cases, a window identified as top for one combination, explained less than 0.1% for the other combinations. We conclude that identification of top SNP windows for a population may have little predictive power for genetic selection in the following generations for the traits here evaluated.
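    The window statistic itself is simple to sketch: for each 20-SNP window, form the window's genomic term (genotypes times estimated SNP effects), take its variance across animals and express it as a share of the variance of the full genomic breeding value. Genotypes and effects below are simulated, not the broiler data.

        import numpy as np

        rng = np.random.default_rng(10)
        n_animals, n_snps, window = 1000, 2000, 20
        geno = rng.integers(0, 3, size=(n_animals, n_snps)).astype(float)   # 0/1/2 allele counts
        effects = rng.normal(0.0, 0.01, size=n_snps)                        # estimated SNP effects

        gebv = geno @ effects
        total_var = gebv.var(ddof=1)

        shares = []
        for start in range(0, n_snps, window):
            u = geno[:, start:start + window] @ effects[start:start + window]
            shares.append(u.var(ddof=1) / total_var)

        print(np.sort(shares)[-10:][::-1])    # the ten windows explaining the most variance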

  8. The density variance-Mach number relation in supersonic turbulence - I. Isothermal, magnetized gas

    Science.gov (United States)

    Molina, F. Z.; Glover, S. C. O.; Federrath, C.; Klessen, R. S.

    2012-07-01

    It is widely accepted that supersonic, magnetized turbulence plays a fundamental role in star formation in molecular clouds. It produces the initial dense gas seeds out of which new stars can form. However, the exact relation between gas compression, turbulent Mach number and magnetic field strength is still poorly understood. Here, we introduce and test an analytical prediction for the relation between the density variance and the rms Mach number M in supersonic, isothermal, magnetized turbulent flows. We approximate the density and velocity structure of the interstellar medium as a superposition of shock waves. We obtain the density contrast considering the momentum equation for a single magnetized shock and extrapolate this result to the entire cloud. Depending on the field geometry, we then make three different assumptions based on observational and theoretical constraints: B independent of ρ, B ∝ ρ^(1/2) and B ∝ ρ. We test the analytically derived density variance-Mach number relation with numerical simulations, and find that for B ∝ ρ^(1/2), the variance in the logarithmic density contrast, σ_s^2, fits very well to simulated data with turbulent forcing parameter b = 0.4, when the gas is super-Alfvénic. However, this result breaks down when the turbulence becomes trans-Alfvénic or sub-Alfvénic, because in this regime the turbulence becomes highly anisotropic. Our density variance-Mach number relations simplify to the purely hydrodynamic relation as the ratio of thermal to magnetic pressure β_0 → ∞.
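
    For reference, the B ∝ ρ^(1/2) case discussed above is usually quoted in the following closed form (written here from the abstract's description; the exact normalisation should be checked against the paper):

    ```latex
    % Density variance--Mach number relation for B \propto \rho^{1/2}
    \sigma_s^2 \;=\; \ln\!\left( 1 + b^2 \mathcal{M}^2 \, \frac{\beta_0}{\beta_0 + 1} \right),
    \qquad s \equiv \ln\!\left(\rho/\rho_0\right),
    % which recovers the purely hydrodynamic form \sigma_s^2 = \ln(1 + b^2 \mathcal{M}^2)
    % in the limit \beta_0 \to \infty.
    ```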

  9. Leptogenesis and residual CP symmetry

    International Nuclear Information System (INIS)

    Chen, Peng; Ding, Gui-Jun; King, Stephen F.

    2016-01-01

    We discuss flavour dependent leptogenesis in the framework of lepton flavour models based on discrete flavour and CP symmetries applied to the type-I seesaw model. Working in the flavour basis, we analyse the case of two general residual CP symmetries in the neutrino sector, which corresponds to all possible semi-direct models based on a preserved Z_2 in the neutrino sector, together with a CP symmetry, which constrains the PMNS matrix up to a single free parameter which may be fixed by the reactor angle. We systematically study and classify this case for all possible residual CP symmetries, and show that the R-matrix is tightly constrained up to a single free parameter, with only certain forms being consistent with successful leptogenesis, leading to possible connections between leptogenesis and PMNS parameters. The formalism is completely general in the sense that the two residual CP symmetries could result from any high energy discrete flavour theory which respects any CP symmetry. As a simple example, we apply the formalism to a high energy S_4 flavour symmetry with a generalized CP symmetry, broken to two residual CP symmetries in the neutrino sector, recovering familiar results for PMNS predictions, together with new results for flavour dependent leptogenesis.

  10. New applications of partial residual methodology

    International Nuclear Information System (INIS)

    Uslu, V.R.

    1999-12-01

    The formulation of a problem of interest in the framework of a statistical analysis starts with collecting the data, choosing a model and making certain assumptions, as described in the basic paradigm by Box (1980). This stage is called model building. In the estimation stage, the formulation of the problem is then treated as if it were true in order to obtain estimates and to make tests and inferences. In the final stage, called diagnostic checking, diagnostic measures and diagnostic plots are used to check whether there are disagreements between the data and the fitted model. It is well known that statistical methods perform best when all assumptions related to the methods are satisfied, but this ideal case is rarely achieved in practice. Diagnostics, and especially diagnostic plots, are therefore becoming important because they provide an immediate assessment. Partial residual plots, the main interest of the present study, play a major role among the diagnostic plots in multiple regression analysis. In the statistical literature it is acknowledged that partial residual plots are more useful than ordinary residual plots in detecting outliers and nonconstant variance, and especially in discovering curvature. In this study we consider partial residual methodology in statistical methods beyond multiple regression. We show that, for the same purposes as in multiple regression, partial residual plots can be used in autoregressive time series models, transfer function models, linear mixed models and ridge regression. (author)

  11. Variance and covariance calculations for nuclear materials accounting using ''MAVARIC''

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-07-01

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation: the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined.
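
    A minimal sketch of the kind of calculation MAVARIC automates, assuming a simple materials balance of four terms, each of the form concentration × bulk mass with independent relative measurement errors (a multiplicative error model); all names and values are hypothetical.

    ```python
    # Sketch: first-order (delta-method) propagation of measurement errors for a
    # materials balance MB = receipts - shipments - (ending - beginning inventory),
    # where each term is concentration * bulk mass with independent relative errors.
    # This only illustrates the kind of calculation MAVARIC automates.
    import math

    def term_variance(conc, mass, rel_sd_conc, rel_sd_mass):
        """Variance of m = conc * mass under a multiplicative error model."""
        m = conc * mass
        return m ** 2 * (rel_sd_conc ** 2 + rel_sd_mass ** 2)

    # hypothetical terms: (sign, concentration [kg U / kg], bulk mass [kg],
    #                      rel. SD of concentration, rel. SD of mass)
    terms = {
        "receipts":      (+1, 0.045, 1200.0, 0.010, 0.002),
        "shipments":     (-1, 0.044, 1150.0, 0.010, 0.002),
        "ending_inv":    (-1, 0.045,  400.0, 0.015, 0.003),
        "beginning_inv": (+1, 0.045,  380.0, 0.015, 0.003),
    }

    # signs do not affect the variance when terms are independent
    var_mb = sum(term_variance(c, m, sc, sm) for _, c, m, sc, sm in terms.values())
    print(f"sigma(MB) = {math.sqrt(var_mb):.3f} kg U")  # detection sensitivity scales with this
    ```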

  13. Approximate zero-variance Monte Carlo estimation of Markovian unreliability

    International Nuclear Information System (INIS)

    Delcoux, J.L.; Labeau, P.E.; Devooght, J.

    1997-01-01

    Monte Carlo simulation has become an important tool for the estimation of reliability characteristics, since conventional numerical methods are no longer efficient as the size of the system to be solved increases. However, estimating the probability of occurrence of very rare events by simulation means playing a very large number of histories of the system, which leads to unacceptable computation times. Acceleration and variance reduction techniques therefore have to be worked out. We show in this paper how to write the equations of Markovian reliability as a transport problem, and how the well-known zero-variance scheme can be adapted to this application. Such a method, however, is always specific to the estimation of one quantity, while a Monte Carlo simulation allows several quantities to be estimated simultaneously. Making the estimation of one of them more accurate can therefore degrade the variance of the other estimates. We propose here a method to reduce the variance of several quantities simultaneously, by using probability laws that would lead to zero variance in the estimation of a mean of these quantities. Like the zero-variance scheme, the method we propose cannot be implemented exactly. However, we show that simple approximations of it may be very efficient. (author)
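
    As a toy illustration of the variance reduction the paper aims at, the sketch below compares an analog Monte Carlo estimator of a rare-event probability with an importance-sampled one; it is a generic example, not the zero-variance scheme for Markovian unreliability itself.

    ```python
    # Sketch: importance sampling for a rare-event probability P(X > a) with
    # X ~ Exp(1), comparing the analog estimator with a biased (stretched)
    # sampling distribution. A toy illustration of variance reduction only.
    import numpy as np

    rng = np.random.default_rng(2)
    a, n = 10.0, 100_000                      # rare threshold, sample size
    exact = np.exp(-a)

    # analog Monte Carlo
    x = rng.exponential(1.0, n)
    analog = (x > a).astype(float)

    # importance sampling: draw from Exp(rate = 1/a), stretched toward the rare region
    lam = 1.0 / a                             # biased sampling rate (a design choice)
    y = rng.exponential(1.0 / lam, n)
    weights = np.exp(-y) / (lam * np.exp(-lam * y))    # likelihood ratio f(y)/g(y)
    biased = (y > a) * weights

    for name, est in [("analog", analog), ("importance", biased)]:
        print(f"{name:10s} mean={est.mean():.3e} (exact {exact:.3e}) var={est.var():.3e}")
    ```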

  14. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong

    2009-04-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.

  15. Life prediction of steam generator tubing due to stress corrosion crack using Monte Carlo Simulation

    International Nuclear Information System (INIS)

    Hu Jun; Liu Fei; Cheng Guangxu; Zhang Zaoxiao

    2011-01-01

    Highlights: → A life prediction model for SG tubing was proposed. → The initial crack length for SCC was determined. → Two failure modes called rupture mode and leak mode were considered. → A probabilistic life prediction code based on the Monte Carlo method was developed. - Abstract: The failure of steam generator tubing is one of the main accidents that seriously affects the availability and safety of a nuclear power plant. In order to estimate the probability of the failure, a probabilistic model was established to predict the whole life-span and residual life of steam generator (SG) tubing. The failure investigated was stress corrosion cracking (SCC) after the generation of one through-wall axial crack. Two failure modes called rupture mode and leak mode based on probabilistic fracture mechanics were considered in this proposed model. It took into account the variance in tube geometry and material properties, and the variance in residual stresses and operating conditions, all of which govern the propagation of cracks. The proposed model was numerically calculated by using Monte Carlo Simulation (MCS). The plugging criteria were first verified and then the whole life-span and residual life of the SG tubing were obtained. Finally, a sensitivity analysis was carried out to identify the most important parameters affecting the life of SG tubing. The results will be useful in developing optimum strategies for life-cycle management of the feedwater system in nuclear power plants.
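
    The rupture-mode part of such an analysis can be caricatured with a few lines of Monte Carlo: sample uncertain initial crack lengths, growth rates and critical lengths, propagate the crack over the service period, and count exceedances. The distributions and values below are placeholders, not the paper's fracture-mechanics model.

    ```python
    # Sketch: Monte Carlo estimate of the probability that a through-wall axial
    # crack exceeds a critical length within a given service time. Growth law,
    # distributions and parameter values are placeholders only.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 200_000
    years = 10.0

    a0 = rng.lognormal(mean=np.log(2.0), sigma=0.3, size=n)    # initial crack length [mm]
    rate = rng.lognormal(mean=np.log(0.8), sigma=0.5, size=n)  # SCC growth rate [mm/year]
    a_crit = rng.normal(12.0, 1.0, size=n)                     # critical length (rupture) [mm]

    a_end = a0 + rate * years                                  # crack length at end of period
    p_rupture = np.mean(a_end > a_crit)
    se = np.sqrt(p_rupture * (1 - p_rupture) / n)
    print(f"P(rupture within {years:.0f} y) ~ {p_rupture:.4f} +/- {1.96*se:.4f}")
    ```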

  16. Validation of welded joint residual stress simulation

    International Nuclear Information System (INIS)

    Computational mechanics is being increasingly applied to predict the state of residual stress in welded joints for nuclear power plant applications. Motives for undertaking such calculations include optimising the design of welded joints and weld procedures, assessing the effectiveness of mitigation processes, providing more realistic inputs to structural integrity assessments and underwriting safety cases for operating nuclear power plant. Fusion welding processes involve intense localised heating to melt the surfaces to be joined and introduction of molten weld filler metal. A complex residual stress field develops at the weld through solidification, differential thermal contraction, cyclic thermal plasticity, phase transformation and chemical diffusion processes. The calculation of weld residual stress involves detailed non-linear analyses where many assumptions and approximations have to be made. In consequence, the accuracy and reliability of solutions can be highly variable. This paper illustrates the degree of variability that can arise in weld residual stress simulation results and summarises the new R6 guidelines which aim to improve the reliability and accuracy of computational predictions. The requirements for validating weld simulations are reviewed where residual stresses are to be used in fracture mechanics analysis. This includes a discussion of how to obtain and interpret measurements from mock-ups, benchmark weldments and published data. Benchmark weldments are described that illustrate some of the issues and show how validation of numerical prediction of weld residual stress can be achieved. Finally, plans for developing the weld modelling guidelines and associated benchmarks are outlined

  17. Residual gas analysis

    International Nuclear Information System (INIS)

    Berecz, I.

    1982-01-01

    Determination of the residual gas composition in vacuum systems by a special mass spectrometric method was presented. The quadrupole mass spectrometer (QMS) and its application in thin film technology was discussed. Results, partial pressure versus time curves as well as the line spectra of the residual gases in case of the vaporization of a Ti-Pd-Au alloy were demonstrated together with the possible construction schemes of QMS residual gas analysers. (Sz.J.)

  18. Hydrograph variances over different timescales in hydropower production networks

    Science.gov (United States)

    Zmijewski, Nicholas; Wörman, Anders

    2016-08-01

    The operation of water reservoirs involves a spectrum of timescales based on the distribution of stream flow travel times between reservoirs, as well as the technical, environmental, and social constraints imposed on the operation. In this research, a hydrodynamically based description of the flow between hydropower stations was implemented to study the relative importance of wave diffusion on the spectrum of hydrograph variance in a regulated watershed. Using spectral decomposition of the effluence hydrograph of a watershed, an exact expression of the variance in the outflow response was derived, as a function of the trends of hydraulic and geomorphologic dispersion and management of production and reservoirs. We show that the power spectra of the involved time series follow nearly fractal patterns, which facilitates examination of the relative importance of wave diffusion and possible changes in production demand on the outflow spectrum. The exact spectral solution can also identify statistical bounds of future demand patterns due to limitations in storage capacity. The impact of the hydraulic description of the stream flow on the reservoir discharge was examined for a given power demand in the River Dalälven, Sweden, as a function of a stream flow Peclet number. The regulation of hydropower production on the River Dalälven generally increased the short-term variance in the effluence hydrograph, whereas wave diffusion decreased the short-term variance. This implies that flow variance becomes more erratic (closer to white noise) as a result of current production objectives.

  19. Variance and covariance calculations for nuclear materials accounting using ''PROFF''

    International Nuclear Information System (INIS)

    Stirpe, D.; Hafer, J.F.

    1986-01-01

    To determine the detection sensitivity of a materials accounting system to the loss of Special Nuclear Material (SNM) requires: (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for those measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. We have developed an interactive, menu-driven computer program, called PROFF (for PROcessing and Fuel Facilities), that considerably reduces the effort required to make the variance and covariance calculations needed to determine the detection sensitivity of a materials accounting system. PROFF asks questions of the user to establish the form of each term in the materials balance equation, possible correlations between them, and whether the measured quantities are characterized by an additive or multiplicative error model. Then for each term of the materials balance equation, it presents the user with a menu that is to be completed with values of the SNM concentration, mass (or volume), measurement error standard deviations, and the number of measurements made during the accounting period. On completion of all the data menus, PROFF presents the variance of the materials balance and the square root of this variance, so that the sensitivity of the accounting system can be determined. PROFF is programmed in TURBO-PASCAL for micro-computers using MS-DOS 2.1 (IBM and compatibles)

  20. Genetic Variance in Homophobia: Evidence from Self- and Peer Reports.

    Science.gov (United States)

    Zapko-Willmes, Alexandra; Kandler, Christian

    2018-01-01

    The present twin study combined self- and peer assessments of twins' general homophobia targeting gay men in order to replicate previous behavior genetic findings across different rater perspectives and to disentangle self-rater-specific variance from common variance in self- and peer-reported homophobia (i.e., rater-consistent variance). We hypothesized rater-consistent variance in homophobia to be attributable to genetic and nonshared environmental effects, and self-rater-specific variance to be partially accounted for by genetic influences. A sample of 869 twins and 1329 peer raters completed a seven item scale containing cognitive, affective, and discriminatory homophobic tendencies. After correction for age and sex differences, we found most of the genetic contributions (62%) and significant nonshared environmental contributions (16%) to individual differences in self-reports on homophobia to be also reflected in peer-reported homophobia. A significant genetic component, however, was self-report-specific (38%), suggesting that self-assessments alone produce inflated heritability estimates to some degree. Different explanations are discussed.

  1. Pricing perpetual American options under multiscale stochastic elasticity of variance

    International Nuclear Information System (INIS)

    Yoon, Ji-Hun

    2015-01-01

    Highlights: • We study the effects of the stochastic elasticity of variance on perpetual American options. • Our SEV model consists of a fast mean-reverting factor and a slow mean-reverting factor. • A slow scale factor has a very significant impact on the option price. • We analyze option price structures through the market prices of elasticity risk. - Abstract: This paper studies the pricing of perpetual American options under a constant elasticity of variance type of underlying asset price model where the constant elasticity is replaced by a fast mean-reverting Ornstein–Uhlenbeck process and a slowly varying diffusion process. By using a multiscale asymptotic analysis, we find the impact of the stochastic elasticity of variance on the option prices and the optimal exercise prices with respect to model parameters. Our results enhance the existing option price structures in view of flexibility and applicability through the market prices of elasticity risk.

  2. Computing the Expected Value and Variance of Geometric Measures

    DEFF Research Database (Denmark)

    Staals, Frank; Tsirogiannis, Constantinos

    2017-01-01

    Let P be a set of points in R^d, and let M be a function that maps any subset of P to a positive real number. We examine the problem of computing the exact mean and variance of M when a subset of points in P is selected according to a well-defined random distribution. We consider two distributions... efficient exact algorithms for computing the mean and variance of several geometric measures when point sets are selected under one of the described random distributions. More specifically, we provide algorithms for the following measures: the bounding box volume, the convex hull volume, the mean pairwise distance (MPD), the squared Euclidean distance from the centroid, and the diameter of the minimum enclosing disk. We also describe an efficient (1-ε)-approximation algorithm for computing the mean and variance of the mean pairwise distance. We implemented three of our algorithms and we show that our...

  3. The mean and variance of phylogenetic diversity under rarefaction.

    Science.gov (United States)

    Nipperess, David A; Matsen, Frederick A

    2013-06-01

    Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing the exact solution mean and variance to those calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating the mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparison of samples of different depth is required.
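
    The species-richness analogue mentioned above (the long-known exact rarefaction formula) is easy to state and to check against Monte Carlo subsampling, which mirrors the exact-versus-resampling comparison the authors perform for PD. The counts below are hypothetical.

    ```python
    # Sketch: the classical exact mean of *species richness* under rarefaction
    # (Hurlbert's formula), checked against Monte Carlo subsampling -- the same
    # kind of exact-vs-resampling comparison the paper performs for PD.
    import numpy as np
    from math import comb

    counts = np.array([50, 20, 10, 5, 3, 1, 1])    # individuals per species (hypothetical)
    N, n = counts.sum(), 25                        # total individuals, rarefied depth

    exact_mean_S = sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in counts)

    rng = np.random.default_rng(4)
    pool = np.repeat(np.arange(counts.size), counts)       # one entry per individual
    mc = [np.unique(rng.choice(pool, size=n, replace=False)).size for _ in range(20_000)]

    print(f"exact E[S_25] = {exact_mean_S:.3f}, Monte Carlo = {np.mean(mc):.3f}")
    ```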

  4. Agricultural pesticide residues

    International Nuclear Information System (INIS)

    Fuehr, F.

    1984-01-01

    The utilization of tracer techniques in the study of agricultural pesticide residues is reviewed under the following headings: lysimeter experiments, micro-ecosystems, translocation in soil, degradation of pesticides in soil, biological availability of soil-applied substances, bound residues in the soil, use of macro- and microautography, double and triple labelling, use of tracer labelling in animal experiments. (U.K.)

  5. Levine's guide to SPSS for analysis of variance

    CERN Document Server

    Braver, Sanford L; Page, Melanie

    2003-01-01

    A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor desi

  6. Variance of a product with application to uranium estimation

    International Nuclear Information System (INIS)

    Lowe, V.W.; Waterman, M.S.

    1976-01-01

    The U in a container can either be determined directly by NDA or by estimating the weight of material in the container and the concentration of U in this material. It is important to examine the statistical properties of estimating the amount of U by multiplying the estimates of weight and concentration. The variance of the product determines the accuracy of the estimate of the amount of uranium. This paper examines the properties of estimates of the variance of the product of two random variables
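
    For two independent random variables, the exact variance of the product referred to above is a standard result; the correlated case adds covariance terms not shown here:

    ```latex
    % Variance of the product of two independent random variables X and Y
    % (e.g., bulk weight and U concentration).
    \operatorname{Var}(XY) \;=\; \sigma_X^2 \sigma_Y^2 \;+\; \mu_Y^2 \sigma_X^2 \;+\; \mu_X^2 \sigma_Y^2,
    \qquad \mu_X = \mathbb{E}[X],\ \sigma_X^2 = \operatorname{Var}(X).
    ```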

  7. Asymptotic variance of grey-scale surface area estimators

    DEFF Research Database (Denmark)

    Svane, Anne Marie

    Grey-scale local algorithms have been suggested as a fast way of estimating surface area from grey-scale digital images. Their asymptotic mean has already been described. In this paper, the asymptotic behaviour of the variance is studied in isotropic and sufficiently smooth settings, resulting...... in a general asymptotic bound. For compact convex sets with nowhere vanishing Gaussian curvature, the asymptotics can be described more explicitly. As in the case of volume estimators, the variance is decomposed into a lattice sum and an oscillating term of at most the same magnitude....

  8. Deterministic mean-variance-optimal consumption and investment

    DEFF Research Database (Denmark)

    Christiansen, Marcus; Steffensen, Mogens

    2013-01-01

    In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consum...

  9. Variance components estimation for farrowing traits of three purebred pigs in Korea

    Directory of Open Access Journals (Sweden)

    Bryan Irvine Lopez

    2017-09-01

    Full Text Available Objective This study was conducted to estimate breed-specific variance components for total number born (TNB), number born alive (NBA) and mortality rate from birth through weaning including stillbirths (MORT) of three main swine breeds in Korea. In addition, the importance of including maternal genetic and service sire effects in estimation models was evaluated. Methods Records of farrowing traits from 6,412 Duroc, 18,020 Landrace, and 54,254 Yorkshire sows collected from January 2001 to September 2016 from different farms in Korea were used in the analysis. Animal models and the restricted maximum likelihood method were used to estimate animal genetic, permanent environmental, maternal genetic, service sire and residual variance components. Results The heritability estimates ranged from 0.072 to 0.102, 0.090 to 0.099, and 0.109 to 0.121 for TNB; 0.087 to 0.110, 0.088 to 0.100, and 0.099 to 0.107 for NBA; and 0.027 to 0.031, 0.050 to 0.053, and 0.073 to 0.081 for MORT in the Duroc, Landrace and Yorkshire breeds, respectively. The proportion of the total variation due to permanent environmental effects, maternal genetic effects, and service sire effects ranged from 0.042 to 0.088, 0.001 to 0.031, and 0.001 to 0.021, respectively. Spearman rank correlations among models ranged from 0.98 to 0.99, demonstrating that the maternal genetic and service sire effects have little effect on the precision of the breeding values. Conclusion Models that include additive genetic and permanent environmental effects are suitable for farrowing traits in Duroc, Landrace, and Yorkshire populations in Korea. These breed-specific variance component estimates for litter traits can be utilized for pig improvement programs in Korea.

  10. EDOVE: Energy and Depth Variance-Based Opportunistic Void Avoidance Scheme for Underwater Acoustic Sensor Networks.

    Science.gov (United States)

    Bouk, Safdar Hussain; Ahmed, Syed Hassan; Park, Kyung-Joon; Eun, Yongsoon

    2017-09-26

    An Underwater Acoustic Sensor Network (UASN) comes with intrinsic constraints because it is deployed in the aquatic environment and uses acoustic signals to communicate. Examples of those constraints are long propagation delay, very limited bandwidth, high energy cost for transmission, very high signal attenuation, costly deployment and battery replacement, and so forth. Therefore, routing schemes for UASNs must take those characteristics into account to achieve energy fairness, avoid energy holes, and improve the network lifetime. Depth-based forwarding schemes in the literature use a node's depth information to forward data towards the sink. They minimize data packet duplication by employing a holding time strategy. However, to avoid void holes in the network, they use two-hop node proximity information. In this paper, we propose the Energy and Depth variance-based Opportunistic Void avoidance (EDOVE) scheme to achieve energy balancing and void avoidance in the network. EDOVE considers not only the depth parameter, but also the normalized residual energy of the one-hop nodes and the normalized depth variance of the second-hop neighbors. Hence, it avoids the void regions as well as balances the network energy and increases the network lifetime. The simulation results show that EDOVE achieves a more than 15% higher packet delivery ratio, propagates 50% fewer copies of each data packet, consumes less energy, and has a longer lifetime than the state-of-the-art forwarding schemes.

  11. Variance decomposition of apolipoproteins and lipids in Danish twins

    DEFF Research Database (Denmark)

    Fenger, Mogens; Schousboe, Karoline; Sørensen, Thorkild I A

    2007-01-01

    been used in bivariate or multivariate analysis to elucidate common genetic factors to two or more traits. METHODS AND RESULTS: In the present study the variances of traits related to lipid metabolism is decomposed in a relatively large Danish twin population, including bivariate analysis to detect...

  12. Infinite variance in fermion quantum Monte Carlo calculations

    Science.gov (United States)

    Shi, Hao; Zhang, Shiwei

    2016-03-01

    For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.

  13. A mean-variance frontier in discrete and continuous time

    NARCIS (Netherlands)

    Bekker, Paul A.

    2004-01-01

    The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation

  14. Asymptotics for Greeks under the constant elasticity of variance model

    OpenAIRE

    Kritski, Oleg L.; Zalmezh, Vladimir F.

    2017-01-01

    This paper is concerned with the asymptotics for Greeks of European-style options and the risk-neutral density function calculated under the constant elasticity of variance model. Formulae obtained help financial engineers to construct a perfect hedge with known behaviour and to price any options on financial assets.

  15. Molecular variance of the Tunisian almond germplasm assessed by ...

    African Journals Online (AJOL)

    The genetic variance analysis of 82 almond (Prunus dulcis Mill.) genotypes was performed using ten genomic simple sequence repeats (SSRs). A total of 50 genotypes from Tunisia including local landraces identified while prospecting the different sites of Bizerte and Sidi Bouzid (Northern and central parts) which are the ...

  16. On zero variance Monte Carlo path-stretching schemes

    International Nuclear Information System (INIS)

    Lux, I.

    1983-01-01

    A zero variance path-stretching biasing scheme proposed for a special case by Dwivedi is derived in full generality. The procedure turns out to be the generalization of the exponential transform. It is shown that the biased game can be interpreted as an analog simulation procedure, thus saving some computational effort in comparison with the corresponding nonanalog game

  17. The Threat of Common Method Variance Bias to Theory Building

    Science.gov (United States)

    Reio, Thomas G., Jr.

    2010-01-01

    The need for more theory building scholarship remains one of the pressing issues in the field of HRD. Researchers can employ quantitative, qualitative, and/or mixed methods to support vital theory-building efforts, understanding however that each approach has its limitations. The purpose of this article is to explore common method variance bias as…

  18. Variance in parametric images: direct estimation from parametric projections

    International Nuclear Information System (INIS)

    Maguire, R.P.; Leenders, K.L.; Spyrou, N.M.

    2000-01-01

    Recent work has shown that it is possible to apply linear kinetic models to dynamic projection data in PET in order to calculate parameter projections. These can subsequently be back-projected to form parametric images - maps of parameters of physiological interest. Critical to the application of these maps, to test for significant changes between normal and pathophysiology, is an assessment of the statistical uncertainty. In this context, parametric images also include simple integral images from, e.g., [O-15]-water used to calculate statistical parametric maps (SPMs). This paper revisits the concept of parameter projections and presents a more general formulation of the parameter projection derivation as well as a method to estimate parameter variance in projection space, showing which analysis methods (models) can be used. Using simulated pharmacokinetic image data we show that a method based on an analysis in projection space inherently calculates the mathematically rigorous pixel variance. This results in an estimation which is as accurate as either estimating variance in image space during model fitting, or estimation by comparison across sets of parametric images - as might be done between individuals in a group pharmacokinetic PET study. The method based on projections has, however, a higher computational efficiency, and is also shown to be more precise, as reflected in smooth variance distribution images when compared to the other methods. (author)

  19. Estimating Additive and Dominance Variance for Litter Traits in ...

    African Journals Online (AJOL)

    Abstract. Reproductive and growth records of 82 purebred California white kits were used to estimate additive and dominance genetic variances using BULPF90PC-PACK. ... The first model included fixed effects and random effects identifying inbreeding depression, additive gene effect and permanent environmental effects.

  20. Multivariate Variance Targeting in the BEKK-GARCH Model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...

  1. Multivariate Variance Targeting in the BEKK-GARCH Model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    2014-01-01

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...

  3. Bounds for Tail Probabilities of the Sample Variance

    Directory of Open Access Journals (Sweden)

    V. Bentkus

    2009-01-01

    Full Text Available We provide bounds for tail probabilities of the sample variance. The bounds are expressed in terms of Hoeffding functions and are the sharpest known. They are designed having in mind applications in auditing as well as in processing data related to environment.

  4. Age Differences in the Variance of Personality Characteristics

    Czech Academy of Sciences Publication Activity Database

    Mottus, R.; Allik, J.; Hřebíčková, Martina; Kööts-Ausmees, L.; Realo, A.

    2016-01-01

    Roč. 30, č. 1 (2016), s. 4-11 ISSN 0890-2070 R&D Projects: GA ČR GA13-25656S Institutional support: RVO:68081740 Keywords: variance * individual differences * personality * five-factor model Subject RIV: AN - Psychology Impact factor: 3.707, year: 2016

  5. Realized range-based estimation of integrated variance

    DEFF Research Database (Denmark)

    Christensen, Kim; Podolskij, Mark

    2007-01-01

    solve this problem to get a consistent, mixed normal estimator, irrespective of non-trading effects. This estimator has varying degrees of efficiency over realized variance, depending on how many observations that are used to construct the high-low. The methodology is applied to TAQ data and compared...

  6. Adaptive Nonparametric Variance Estimation for a Ratio Estimator ...

    African Journals Online (AJOL)

    Kernel estimators for smooth curves require modifications when estimating near end points of the support, both for practical and asymptotic reasons. The construction of such boundary kernels as solutions of variational problem is a difficult exercise. For estimating the error variance of a ratio estimator, we suggest an ...

  7. Perspective projection for variance pose face recognition from camera calibration

    Science.gov (United States)

    Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.

    2016-04-01

    Variance pose is an important research topic in face recognition. The alteration of distance parameters across variance-pose face features is challenging. We provide a solution for this problem using perspective projection for variance-pose face recognition. Our method infers the intrinsic camera parameters of the image, which enable the projection of the image plane into 3D. After this, face box tracking and centre-of-eyes detection can be performed using our novel technique to verify the virtual face feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. Training on frontal images and the remaining poses of the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes and then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms and thus enabling stable measurement in variance pose for each individual.

  8. Heritability, variance components and genetic advance of some ...

    African Journals Online (AJOL)

    Eighty-eight (88) finger millet (Eleusine coracana (L.) Gaertn.) germplasm collections were tested using augmented randomized complete block design at Adet Agricultural Research Station in 2008 cropping season. The objective of this study was to find out heritability, variance components, variability and genetic advance ...

  9. Starting design for use in variance exchange algorithms | Iwundu ...

    African Journals Online (AJOL)

    A new method of constructing the initial design for use in variance exchange algorithms is presented. The method chooses support points to go into the design as measures of distances of the support points from the centre of the geometric region and of permutation-invariant sets. The initial design is as close as possible to ...

  10. Differences in mean fibre diameter and fibre diameter variance in ...

    African Journals Online (AJOL)

    sampled at five different body locations (Figure 1) at an age of 15 months. Samples were analysed by the Wool Testing. Bureaux, using an Optical Fibre Diameter Analyser which measured 4000 individual fibres in each sample. Apart from the mean, the variance of fibre diameter within samples was available. The statistical ...

  11. Variances in consumers prices of selected food items among ...

    African Journals Online (AJOL)

    The study focused on the determination of variances among consumer prices of rice (local white), beans (white) and garri (yellow) in Watts, Okurikang and 8 Miles markets in southern zone of Cross River State. Completely randomized design was used to test the research hypothesis. Comparing the consumer prices of rice, ...

  12. Estimation of the additive and dominance variances in South African ...

    African Journals Online (AJOL)

    The objective of this study was to estimate dominance variance for number born alive (NBA), 21- day litter weight (LWT21) and interval between parities (FI) in South African Landrace pigs. A total of 26223 NBA, 21335 LWT21 and 16370 FI records were analysed. Bayesian analysis via Gibbs sampling was used to estimate ...

  13. Properties of realized variance under alternative sampling schemes

    NARCIS (Netherlands)

    Oomen, R.C.A.

    2006-01-01

    This paper investigates the statistical properties of the realized variance estimator in the presence of market microstructure noise. Different from the existing literature, the analysis relies on a pure jump process for high frequency security prices and explicitly distinguishes among alternative

  14. A Hold-out method to correct PCA variance inflation

    DEFF Research Database (Denmark)

    Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Hansen, Lars Kai

    2012-01-01

    In this paper we analyze the problem of variance inflation experienced by the PCA algorithm when working in an ill-posed scenario where the dimensionality of the training set is larger than its sample size. In an earlier article a correction method based on a Leave-One-Out (LOO) procedure...

  15. Some asymptotic theory for variance function smoothing | Kibua ...

    African Journals Online (AJOL)

    Simple selection of the smoothing parameter is suggested. Both homoscedastic and heteroscedastic regression models are considered. Keywords: Asymptotic, Smoothing, Kernel, Bandwidth, Bias, Variance, Mean squared error, Homoscedastic, Heteroscedastic. > East African Journal of Statistics Vol. 1 (1) 2005: pp. 9-22 ...

  16. Heterogeneity of variance and its implications on dairy cattle breeding

    African Journals Online (AJOL)

    Milk yield data (n = 12307) from 116 Holstein-Friesian herds were grouped into three production environments based on mean and standard deviation of herd 305-day milk yield and evaluated for within herd variation using univariate animal model procedures. Variance components were estimated by derivative free REML ...

  17. Effects of Diversification of Assets on Mean and Variance | Jayeola ...

    African Journals Online (AJOL)

    Diversification is a means of minimizing risk and maximizing returns by investing in a variety of assets of the portfolio. This paper is written to determine the effects of diversification of three types of assets, uncorrelated, perfectly correlated and perfectly negatively correlated assets, on the mean and variance. To go about this, ...
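
    The three cases named above follow directly from the two-asset portfolio variance formula (a standard result, stated here for reference):

    ```latex
    % Two-asset portfolio with weights w_1 + w_2 = 1 and return correlation \rho:
    \sigma_p^2 \;=\; w_1^2 \sigma_1^2 + w_2^2 \sigma_2^2 + 2 w_1 w_2 \rho \, \sigma_1 \sigma_2 .
    % \rho = 0:  \sigma_p^2 = w_1^2\sigma_1^2 + w_2^2\sigma_2^2
    % \rho = +1: \sigma_p   = w_1\sigma_1 + w_2\sigma_2        (no risk reduction)
    % \rho = -1: \sigma_p   = |w_1\sigma_1 - w_2\sigma_2|      (can be driven to zero)
    % The portfolio mean w_1\mu_1 + w_2\mu_2 is unaffected by \rho.
    ```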

  18. Variance-based uncertainty relation for incompatible observers

    Science.gov (United States)

    Zheng, Xiao; Zhang, Guo-Feng

    2017-07-01

    Based on the mixedness definition M = 1 - tr(ρ^2), we obtain a new variance-based uncertainty equality along with an inequality for Hermitian operators of a single-qubit system. The obtained uncertainty equality can be used as a measure of the system's mixedness. A qubit system with feedback control is also exploited to demonstrate the new uncertainty relation.
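
    For a single qubit written in Bloch form, the mixedness defined above has a simple closed expression (a standard identity; the paper's specific variance-based equality is not reproduced here):

    ```latex
    % Single qubit in Bloch form, \rho = \tfrac{1}{2}\left(I + \vec{r}\cdot\vec{\sigma}\right):
    M \;=\; 1 - \operatorname{tr}\!\left(\rho^2\right) \;=\; \tfrac{1}{2}\left(1 - |\vec{r}\,|^2\right),
    % i.e. M = 0 for pure states (|\vec{r}| = 1) and M = 1/2 for the maximally
    % mixed state (\vec{r} = 0).
    ```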

  19. Variance component and heritability estimates for growth traits in the ...

    African Journals Online (AJOL)

    CANTET, R.J.C., KRESS, D.D., ANDERSON, D.C., DOORNBOS, D.B., BURFENING, P.J. & BLACKWELL, R.L., 1988. Direct and maternal variances and covariances and maternal phenotypic effects on pre-weaning growth of beef cattle. J. Anim. Sci. 66, 648. DEESE, R.E. & KOGER, M., 1967. Maternal effects on pre-weaning.

  20. A Visual Model for the Variance and Standard Deviation

    Science.gov (United States)

    Orris, J. B.

    2011-01-01

    This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
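
    A tiny numerical illustration of this picture: each deviation from the mean defines a square, the variance is the area of the average square, and the standard deviation is that square's side length. The data values are arbitrary.

    ```python
    # Tiny numeric illustration of the paper's picture: each deviation from the
    # mean defines a square; the variance is the average square's area and the
    # standard deviation is that square's side length.
    data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
    mean = sum(data) / len(data)
    squares = [(x - mean) ** 2 for x in data]        # areas of the squares
    variance = sum(squares) / len(squares)           # population variance
    std_dev = variance ** 0.5                        # side of the "average square"
    print(mean, variance, std_dev)                   # 5.0 4.0 2.0
    ```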

  3. Asymptotics of variance of the lattice point count

    Czech Academy of Sciences Publication Activity Database

    Janáček, Jiří

    2008-01-01

    Roč. 58, č. 3 (2008), s. 751-758 ISSN 0011-4642 R&D Projects: GA AV ČR(CZ) IAA100110502 Institutional research plan: CEZ:AV0Z50110509 Keywords: point lattice * variance Subject RIV: BA - General Mathematics Impact factor: 0.210, year: 2008

  4. Direct and maternal variance component estimates for clean fleece ...

    African Journals Online (AJOL)

    Direct and maternal variance component estimates for clean fleece weight, body weight and mean fibre diameter in the Grootfontein Merino stud. J.J. Olivier. Grootfontein Agricultural Development Institute, Middelburg, Cape, 5900 Republic of South Africa. G.J. Erasmus, J.B. van Wyk and K.V. Konstantinov. Department of ...

  5. Variance components and genetic parameters for body weight and ...

    African Journals Online (AJOL)

    Variance components resulting from direct additive genetic effects, maternal additive genetic effects, maternal permanent environmental effects, as well as the relationship between direct and maternal genetic effects for several body weight and fleece traits, were estimated by DFREML procedures. Traits analysed included ...

  6. A variance ratio test of the Zambian foreign-exchange market

    African Journals Online (AJOL)

    traders and other investors to earn higher-than-average market returns. Key words: variance ratio tests, ... Apart from stocks and equities, foreign-exchange is a key component of the financial market ... Investment banks, commercial banks, local and multinational corporations, brokers and central banks are the major ...

  7. Automatic IMU sensor characterization using Allan variance plots

    Science.gov (United States)

    Skurowski, Przemysław; Paszkuta, Marcin

    2017-07-01

    We present an automatic method for the evaluation of the noise parameters of IMU devices. The method solves a two-stage optimization problem for polyline regression of the Allan variance in the log-log domain. We address the initialization issue and the segmentation that identifies the noise components present, which makes the results obtained with the numerical solver robust.
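
    A minimal sketch of the quantity being fitted: the non-overlapping Allan variance of a rate signal at a set of cluster times, whose log-log slopes the polyline regression above segments into noise terms. The signal here is synthetic white noise; function and variable names are illustrative only.

    ```python
    # Sketch: non-overlapping Allan variance of an IMU rate signal for several
    # cluster sizes; the log-log curve of these values is what the polyline
    # regression described above is fitted to. Synthetic data only.
    import numpy as np

    def allan_variance(y, m):
        """Non-overlapping Allan variance of rate samples y for cluster size m."""
        k = y.size // m                                 # number of clusters
        means = y[: k * m].reshape(k, m).mean(axis=1)   # cluster averages
        return 0.5 * np.mean(np.diff(means) ** 2)

    rng = np.random.default_rng(5)
    fs = 100.0                                          # sample rate [Hz] (hypothetical)
    y = 0.02 * rng.standard_normal(200_000)             # white-noise-only rate signal

    for m in (1, 10, 100, 1000):
        tau = m / fs                                    # cluster time [s]
        adev = np.sqrt(allan_variance(y, m))
        print(f"tau = {tau:8.2f} s   Allan deviation = {adev:.5f}")
    ```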

  8. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    International Nuclear Information System (INIS)

    Christoforou, Stavros; Hoogenboom, J. Eduard

    2011-01-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  9. Surgical treatment for residual or recurrent strabismus

    Directory of Open Access Journals (Sweden)

    Tao Wang

    2014-12-01

    Full Text Available Although surgical treatment is a relatively effective and predictable method for correcting residual or recurrent strabismus, using techniques such as posterior fixation sutures, medial rectus marginal myotomy, unilateral or bilateral rectus re-recession and resection, and unilateral lateral rectus recession with adjustable sutures, no standard protocol has been established for the choice of procedure. Different surgical approaches have been recommended for correcting residual or recurrent strabismus. The choice of procedure depends on the pattern and dosage of the previous operation, the residual or recurrent angle of deviation, and the operator's preference and experience. This review attempts to outline recent publications and current opinion on the management of residual or recurrent esotropia and exotropia.

  10. Handling of Solid Residues

    International Nuclear Information System (INIS)

    Medina Bermudez, Clara Ines

    1999-01-01

    The topic of solid residues is of great interest and concern for the authorities, institutions and communities, which identify in them a true threat to human health and the environment, related to the aesthetic deterioration of urban centres and the natural landscape, the proliferation of disease-transmitting vectors, and effects on biodiversity. Within the wide spectrum of topics related to environmental protection, the inadequate handling of solid and hazardous residues occupies an important place in the definition of environmentally sustainable policies and practices. Industrial development and population growth have caused a continuous increase in the production of solid residues; likewise, their composition becomes more heterogeneous day by day. The basis for good handling includes appropriate intervention at the different stages of integral residue management, which comprise separation at the source, collection, handling, use, treatment, final disposal and the institutional organization of the management. The topic of hazardous residues raises even greater concern. These residues range from the pathogenic type generated in health-care establishments to the combustible, inflammable, explosive, radioactive, volatile, corrosive, reactive or toxic residues associated with numerous industrial processes that are common in our developing countries.

  11. Variance risk premia in CO2 markets: A political perspective

    International Nuclear Information System (INIS)

    Reckling, Dennis

    2016-01-01

    The European Commission discusses the change of free allocation plans to guarantee a stable market equilibrium. Selling over-allocated contracts effectively depreciates prices and negates the effect intended by the regulator to establish a stable price mechanism for CO2 assets. Our paper investigates mispricing and allocation issues by quantitatively analyzing variance risk premia of CO2 markets over the course of changing regimes (Phase I-III) for three different assets (European Union Allowances, Certified Emissions Reductions and European Reduction Units). The research paper gives recommendations to regulatory bodies in order to most effectively cap the overall carbon dioxide emissions. The analysis of an enriched dataset, comprising not only additional CO2 assets but also data from the European Energy Exchange, shows that variance risk premia are equal to a sample average of 0.69 for European Union Allowances (EUA), 0.17 for Certified Emissions Reductions (CER) and 0.81 for European Reduction Units (ERU). We identify the existence of a common risk factor across different assets that justifies the presence of risk premia. Various policy implications with regard to gaining investors' confidence in the market are reviewed. Consequently, we recommend the implementation of a price collar approach to support stable prices for emission allowances. - Highlights: •Enriched dataset covering all three political phases of the CO2 markets. •Clear policy implications for regulators to most effectively cap the overall CO2 emissions pool. •Applying a cross-asset benchmark index for variance beta estimation. •CER contracts have been analyzed with respect to variance risk premia for the first time. •Increased forecasting accuracy for CO2 asset returns by using variance risk premia.
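
    A common way to measure the variance risk premium analysed above is as the gap between option-implied (risk-neutral) expected variance and subsequently realized variance; the sketch below uses synthetic numbers and is not the paper's EUA/CER/ERU estimation.

    ```python
    # Sketch: a common measure of a variance risk premium -- option-implied
    # (risk-neutral) variance minus subsequently realized variance of returns.
    # Numbers are synthetic; sign conventions differ across studies.
    import numpy as np

    rng = np.random.default_rng(6)
    n_days = 252
    returns = rng.normal(0.0, 0.02, n_days)          # synthetic daily allowance returns

    realized_var = np.sum(returns ** 2)              # annual realized variance (sum of squared returns)
    implied_var = 0.30 ** 2                          # hypothetical option-implied variance (vol = 30%)

    vrp = implied_var - realized_var
    print(f"realized={realized_var:.4f} implied={implied_var:.4f} VRP={vrp:.4f}")
    ```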

  12. Variance: An Under-Appreciated Parameter in Marine Climate Change Ecology (Invited)

    Science.gov (United States)

    Sydeman, W. J.; Schroeder, I. D.; Thompson, S.; Black, B. A.; Largier, J. L.; Garcia-Reyes, M.; Bograd, S. J.; Santora, J.

    2010-12-01

    Upwelling is a fundamental process forcing biological productivity in many coastal ecosystems globally. Moderate (or pulsed) upwelling is thought to promote optimal ecosystem productivity, but upwelling is predicted to intensify as a result of global warming. Excessive upwelling can lead to advection of plankton off the continental shelf, thereby limiting coastal productivity. For an ecologically sensitive and economically critical region, the greater Gulf of the Farallones, California, we have conducted retrospective studies to determine how the timing, magnitude and variability in upwelling have changed and affected the ecosystem from phytoplankton to top predators (seabirds and salmon). In accordance with theory, we found increases in the magnitude of upwelling (or proxies thereof) for this region, but more significantly we found changes in the variance in upwelling on quasi-decadal scales, as well as secular changes in the variability of local biological populations. To date, climate change ecology has focused primarily on assessing changes in the central tendency of physical and biological parameters, whereas a focus on variability (or variance) may be equally revealing and important; indeed, it is well known from population ecology that populations may decline if variability increases, yet the average “state” remains constant. We contend that documenting and attributing trends in the variance of bio-physical processes is a critical next step for a comprehensive understanding of climate change impacts on marine ecosystems.
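
    The abstract's central point is that trends in the variance of a forcing process can matter as much as trends in its mean. A minimal sketch of that idea, using a synthetic monthly upwelling-index series and a ten-year moving window (the series, the trend in its spread and the window length are assumptions, not the authors' data or methods):

    ```python
    # Hedged sketch: compare a moving-window mean with a moving-window variance
    # for a synthetic upwelling index whose variability grows over time.
    import numpy as np

    rng = np.random.default_rng(1)
    months = np.arange(480)  # 40 years of monthly values (synthetic)

    # Mean drifts upward slowly; the scatter around it grows much faster
    upwelling = 100 + 0.02 * months + rng.normal(0, 10 + 0.03 * months)

    window = 120  # 10-year moving window
    rolling_mean = [upwelling[i:i + window].mean() for i in range(len(months) - window)]
    rolling_var = [upwelling[i:i + window].var(ddof=1) for i in range(len(months) - window)]

    print(f"mean, first vs last window    : {rolling_mean[0]:.1f} vs {rolling_mean[-1]:.1f}")
    print(f"variance, first vs last window: {rolling_var[0]:.1f} vs {rolling_var[-1]:.1f}")
    ```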

  13. Effects of Sex on Intra-Individual Variance in Urinary Solutes in Stone-Formers Collected from a Single Clinical Laboratory.

    Directory of Open Access Journals (Sweden)

    Guy M L Perry

    Full Text Available Our work in a rodent model of urinary calcium suggests genetic and gender effects on increased residual variability in urine chemistries. Based on these findings, we hypothesized that sex would similarly be associated with residual variation in human urine solutes. Sex-related effects on residuals might affect the establishment of physiological baselines and error in medical assays. We tested the effects of sex on residual variation in urine chemistry by estimating coefficients of variation (CV) for urinary solutes in paired sequential 24-h urines (≤72-hour interval) in 6,758 females and 9,024 males aged 16-80 submitted to a clinical laboratory. Females had higher CVs than males for urinary phosphorus overall at the False Discovery Rate (P0.3. Males had higher CVs for citrate (P<0.01) from ages 16-45 and females higher CVs for citrate (P<0.01) from ages 56-80, suggesting effects of an extant oestral cycle on residual variance. Our findings indicate the effects of sex on residual variance of the excretion of urinary solutes including phosphorus and citrate; differences in CV by sex might reflect dietary lability, differences in the fidelity of reporting or genetic differentiation in renal solute consistency. Such an effect could complicate medical analysis by the addition of random error to phenotypic assays. Renal analysis might require explicit incorporation of heterogeneity among factorial effects, and for sex in particular.
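
    A minimal sketch of the kind of calculation described, assuming the within-individual CV of a pair of collections is taken as the pair's standard deviation divided by its mean; the simulated excretion values, group sizes and the paired_cv helper are hypothetical, not the study's data or code.

    ```python
    # Hedged sketch: within-individual CV from paired 24-h urine collections,
    # summarized by sex. All numbers are simulated for illustration only.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 1000  # hypothetical number of individuals per group

    def paired_cv(first, second):
        """CV of each pair: SD of the two collections divided by their mean."""
        pairs = np.stack([first, second], axis=1)
        return pairs.std(axis=1, ddof=1) / pairs.mean(axis=1)

    # Simulated phosphorus excretion (mmol/day), with more scatter between the
    # two collections in one group to mimic a sex difference in residual variation
    female_cv = paired_cv(rng.normal(25, 5, n), rng.normal(25, 7, n))
    male_cv = paired_cv(rng.normal(28, 5, n), rng.normal(28, 5, n))

    print(f"median CV, females: {np.median(female_cv):.3f}")
    print(f"median CV, males  : {np.median(male_cv):.3f}")
    ```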

  14. Residual-strength determination in polymeric materials

    International Nuclear Information System (INIS)

    Christensen, R.M.

    1981-01-01

    Kinetic theory of crack growth is used to predict the residual strength of polymeric materials acted upon by a previous load history. Specifically, the kinetic theory is used to characterize the state of growing damage that occurs under a constant-stress (load) state. The load is removed before failure under creep-rupture conditions, and the residual instantaneous strength is determined from the theory by taking account of the damage accumulation under the preceding constant-load history. The rate of change of residual strength is found to be strongest when the duration of the preceding load history is near the ultimate lifetime under that condition. Physical explanations for this effect are given, as are numerical examples. Also, the theoretical prediction is compared with experimental data.
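
    The qualitative conclusion, that residual strength falls fastest when the hold time approaches the creep-rupture lifetime, can be illustrated with a toy power-law damage rule; this is not Christensen's kinetic crack-growth theory, and the exponents and numbers below are arbitrary assumptions.

    ```python
    # Toy illustration only (not the paper's kinetic theory): residual strength
    # after holding a load for a fraction of the creep-rupture lifetime.
    def residual_strength(sigma_0, hold_fraction, damage_exponent=8.0):
        """Residual strength under an assumed power-law damage accumulation."""
        damage = hold_fraction ** damage_exponent  # grows slowly, then sharply
        return sigma_0 * (1.0 - damage) ** 0.5

    for f in (0.2, 0.6, 0.9, 0.99):
        print(f"t/t_rupture = {f:4.2f} -> residual strength = {residual_strength(100.0, f):6.1f}")
    ```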

  15. A simple algorithm to estimate genetic variance in an animal threshold model using Bayesian inference

    Directory of Open Access Journals (Sweden)

    Heringstad Bjørg

    2010-07-01

    Full Text Available Abstract Background In the genetic analysis of binary traits with one observation per animal, animal threshold models frequently give biased heritability estimates. In some cases, this problem can be circumvented by fitting sire- or sire-dam models. However, these models are not appropriate in cases where individual records exist on parents. Therefore, the aim of our study was to develop a new Gibbs sampling algorithm for a proper estimation of genetic (co)variance components within an animal threshold model framework. Methods In the proposed algorithm, individuals are classified as either "informative" or "non-informative" with respect to genetic (co)variance components. The "non-informative" individuals are characterized by their Mendelian sampling deviations (deviation from the mid-parent mean) being completely confounded with a single residual on the underlying liability scale. For threshold models, residual variance on the underlying scale is not identifiable. Hence, the variance of fully confounded Mendelian sampling deviations cannot be identified either, but can be inferred from the between-family variation. In the new algorithm, breeding values are sampled as in a standard animal model using the full relationship matrix, but genetic (co)variance components are inferred from the sampled breeding values and relationships between "informative" individuals (usually parents) only. The latter is analogous to a sire-dam model (in cases with no individual records on the parents). Results When applied to simulated data sets, the standard animal threshold model failed to produce useful results since samples of genetic variance always drifted towards infinity, while the new algorithm produced proper parameter estimates essentially identical to the results from a sire-dam model (given the fact that no individual records exist for the parents). Furthermore, the new algorithm showed much faster Markov chain mixing properties for genetic parameters (similar to
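
    A minimal sketch of the variance-updating step described above, under the assumption that the additive genetic variance is drawn from a scaled inverse chi-square distribution built only from the sampled breeding values of the informative individuals and the inverse of their relationship matrix; the function, the flat prior and the toy inputs are illustrative, not the authors' implementation.

    ```python
    # Hedged sketch: one Gibbs draw of the additive variance using only the
    # breeding values of "informative" individuals (e.g. parents).
    import numpy as np

    def sample_additive_variance(a_informative, A_inv_informative, rng,
                                 prior_df=-2.0, prior_scale=0.0):
        """Scaled inverse chi-square draw; defaults correspond to a flat prior."""
        q = len(a_informative)
        quad_form = a_informative @ A_inv_informative @ a_informative
        df = q + prior_df
        return (quad_form + prior_df * prior_scale) / rng.chisquare(df)

    rng = np.random.default_rng(3)
    a_parents = rng.normal(0.0, 1.0, 50)  # toy sampled breeding values of parents
    A_inv = np.eye(50)                    # toy inverse relationship matrix
    print(sample_additive_variance(a_parents, A_inv, rng))
    ```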

  16. [Residual neuromuscular blockade].

    Science.gov (United States)

    Fuchs-Buder, T; Schmartz, D

    2017-06-01

    Even small degrees of residual neuromuscular blockade, i.e. a train-of-four (TOF) ratio >0.6, may lead to clinically relevant consequences for the patient. Especially upper airway integrity and the ability to swallow may still be markedly impaired. Moreover, increasing evidence suggests that residual neuromuscular blockade may affect the postoperative outcome of patients. The incidence of these small degrees of residual blockade is relatively high, and such blockade may persist for more than 90 min after a single intubating dose of an intermediately acting neuromuscular blocking agent, such as rocuronium or atracurium. Both neuromuscular monitoring and pharmacological reversal are key elements for the prevention of postoperative residual blockade.

  17. TENORM: Wastewater Treatment Residuals

    Science.gov (United States)

    Water and wastes which have been discharged into municipal sewers are treated at wastewater treatment plants. These may contain trace amounts of both man-made and naturally occurring radionuclides which can accumulate in the treatment plant and residuals.

  18. Student, teacher, and classroom predictors of between-teacher variance of students' teacher-rated behavior.

    Science.gov (United States)

    Splett, Joni W; Smith-Millman, Marissa; Raborn, Anthony; Brann, Kristy L; Flaspohler, Paul D; Maras, Melissa A

    2018-03-08

    The current study examined between-teacher variance in teacher ratings of student behavioral and emotional risk to identify student, teacher and classroom characteristics that predict such differences and can be considered in future research and practice. Data were taken from seven elementary schools in one school district implementing universal screening, including 1,241 students rated by 68 teachers. Students were mostly African American (68.5%) with equal gender (female 50.1%) and grade-level distributions. Teachers, mostly White (76.5%) and female (89.7%), completed both a background survey regarding their professional experiences and demographic characteristics and the Behavior Assessment System for Children (Second Edition) Behavioral and Emotional Screening System-Teacher Form for all students in their class, rating an average of 17.69 students each. Extant student data were provided by the district. Analyses followed multilevel linear model stepwise model-building procedures. We detected a significant amount of variance in teachers' ratings of students' behavioral and emotional risk at both the student and teacher/classroom levels, with student predictors explaining about 39% of student-level variance and teacher/classroom predictors explaining about 20% of between-teacher differences. The final model fit the data (Akaike information criterion = 8,687.709; pseudo-R2 = 0.544) significantly better than the null model (Akaike information criterion = 9,457.160). Significant predictors included student gender, race/ethnicity, academic performance and disciplinary incidents, teacher gender, student-teacher gender interaction, teacher professional development in behavior screening, and classroom academic performance. Future research and practice should interpret teacher-rated universal screening of students' behavioral and emotional risk with consideration of the between-teacher variance unrelated to student behavior detected. (PsycINFO Database Record (c) 2018 APA, all
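
    A minimal sketch of the modelling idea, not the authors' exact model: a two-level mixed model with a random intercept for teacher separates between-teacher variance in ratings from student-level variance. All variable names and simulated values below are hypothetical.

    ```python
    # Hedged sketch: random-intercept model for teacher-rated risk scores,
    # fitted to simulated data with statsmodels.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)
    n_teachers, students_per_teacher = 68, 18

    teacher_id = np.repeat(np.arange(n_teachers), students_per_teacher)
    teacher_effect = rng.normal(0, 3, n_teachers)[teacher_id]  # between-teacher part
    academic = rng.normal(0, 1, len(teacher_id))               # student-level predictor

    df = pd.DataFrame({
        "teacher": teacher_id,
        "academic": academic,
        "risk_rating": 60 - 4 * academic + teacher_effect
                       + rng.normal(0, 8, len(teacher_id)),
    })

    # Fixed effect of student academic performance, random intercept per teacher
    fit = smf.mixedlm("risk_rating ~ academic", df, groups=df["teacher"]).fit()
    print(fit.summary())
    ```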

  19. Residuation in orthomodular lattices

    Directory of Open Access Journals (Sweden)

    Chajda Ivan

    2017-04-01

    Full Text Available We show that every idempotent weakly divisible residuated lattice satisfying the double negation law can be transformed into an orthomodular lattice. The converse holds if adjointness is replaced by conditional adjointness. Moreover, we show that every positive right residuated lattice satisfying the double negation law and two further simple identities can be converted into an orthomodular lattice. In this case, also the converse statement is true and the correspondence is nearly one-to-one.
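
    For reference, a standard way to state the two properties named in the abstract (the paper's exact axioms may differ) is given below in LaTeX.

    ```latex
    % Adjointness and the double negation law as usually stated for
    % residuated lattices; the paper's precise formulation may differ.
    \begin{align*}
      &\text{Adjointness:}         && x \odot y \le z \iff x \le y \rightarrow z,\\
      &\text{Double negation law:} && \neg\neg x = x, \qquad \text{where } \neg x := x \rightarrow 0.
    \end{align*}
    ```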

  20. Characterization of Hospital Residuals

    International Nuclear Information System (INIS)

    Blanco Meza, A.; Bonilla Jimenez, S.

    1997-01-01

    The main objective of this investigation is the characterization of solid residuals. A description of the handling of the liquid and gaseous waste generated in hospitals is also given, identifying the sources where they originate. To achieve the proposed objective the work was divided into three stages. The first was planning and coordination with each hospital center, so that a schedule for waste collection could be established. In the second stage fieldwork was carried out, consisting of gathering quantitative and qualitative information on the general state of residual handling. In the third and last stage, the information previously obtained was organized to express the results as the production rate per bed per day, the generation of solid residuals by sampled service, the type of solid residuals and their density. With the results obtained, criteria are established to determine design parameters for final disposal, whether by incineration, trituration, sanitary landfill or recycling of some materials, and storage policies for solid residuals that allow the collection frequency to be determined. The study concludes that it is necessary to improve the conditions of residual handling in some respects: to provide the cleaning personnel with the minimum collection and safety equipment needed to carry out this work efficiently, and to maintain control of all hazardous waste, such as sharp or contaminated materials. In this way an appreciable reduction of the environmental impact is guaranteed. (Author) [es