Bian, Zunjian; Du, Yongming; Li, Hua
2016-04-01
Land surface temperature (LST) is a key variable that plays an important role in hydrological, meteorological and climatological studies. Thermal infrared directional anisotropy is one of the essential factors in LST retrieval and in its application to longwave radiance estimation. Many approaches have been proposed to estimate directional brightness temperatures (DBT) over natural and urban surfaces, but fewer efforts have focused on 3-D scenes, and the surface component temperatures used in DBT models are quite difficult to acquire. Therefore, a combined 3-D model coupling the TRGM (Thermal-region Radiosity-Graphics combined Model) with an energy balance method is proposed in this paper in an attempt to simulate component temperatures and DBT synchronously in a row-planted canopy. The surface thermodynamic equilibrium is finally determined by an iteration strategy between the TRGM and the energy balance method. The combined model was validated against top-of-canopy DBTs from airborne observations. The results indicate that the proposed model performs well in simulating directional anisotropy, especially the hotspot effect. Although the model overestimates the DBT with a bias of 1.2 K, it can serve as a data reference for studying the temporal variance of component temperatures and DBTs when field measurements are inaccessible.
Variance components and genetic parameters for live weight
African Journals Online (AJOL)
admin
Against this background the present study estimated the (co)variance .... Starting values for the (co)variance components of two-trait models were ..... Estimates of genetic parameters for weaning weight of beef accounting for direct-maternal.
Least-squares variance component estimation
Teunissen, P.J.G.; Amiri-Simkooei, A.R.
2007-01-01
Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight
Gene set analysis using variance component tests.
Huang, Yen-Tsung; Lin, Xihong
2013-06-28
Gene set analyses have become increasingly important in genomic research, as many complex diseases arise from the joint alteration of numerous genes. Genes often coordinate together as a functional repertoire, e.g., a biological pathway/network, and are highly correlated. However, most of the existing gene set analysis methods do not fully account for the correlation among the genes. Here we propose to tackle this important feature of a gene set to improve statistical power in gene set analyses. We propose to model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects by assuming a common distribution for regression coefficients in multivariate linear regression models, and calculate the p-values using permutation and a scaled chi-square approximation. We show using simulations that the type I error is protected under different choices of working covariance matrices and power improves as the working covariance approaches the true covariance. The global test is a special case of TEGS when correlation among genes in a gene set is ignored. Using both simulation data and a published diabetes dataset, we show that our test outperforms two commonly used approaches, the global test and gene set enrichment analysis (GSEA). We develop a gene set analysis method (TEGS) under the multivariate regression framework, which directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and the global test, in both simulations and a diabetes microarray dataset.
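The abstract notes that the global test is the special case of TEGS obtained when gene-gene correlation is ignored (identity working covariance). As an illustration only, the sketch below implements a permutation test of that special case for a binary exposure; the function name, statistic and toy data are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def permutation_gene_set_test(x, y, n_perm=999, seed=0):
    """Permutation p-value for association of binary exposure x with a gene set y.

    Statistic: sum over genes of the squared covariance with x
    (identity working covariance, i.e. the global-test special case).
    """
    rng = np.random.default_rng(seed)
    xc = x - x.mean()                 # center the exposure
    yc = y - y.mean(axis=0)           # center each gene's expression
    stat = np.sum((xc @ yc) ** 2)     # observed test statistic
    exceed = 0
    for _ in range(n_perm):
        # permute the exposure labels and recompute the statistic
        if np.sum((rng.permutation(xc) @ yc) ** 2) >= stat:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)

# Toy (hypothetical) data: 20 subjects, 5 genes perfectly tracking exposure
x = np.array([0.0] * 10 + [1.0] * 10)
y = np.outer(x, np.ones(5))
p_value = permutation_gene_set_test(x, y)
```

With a strong signal as above the permutation p-value is small; a full TEGS implementation would additionally plug a non-identity working covariance into the quadratic form.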
Spence, Jeffrey S; Brier, Matthew R; Hart, John; Ferree, Thomas C
2013-03-01
Linear statistical models are used very effectively to assess task-related differences in EEG power spectral analyses. Mixed models, in particular, accommodate more than one variance component in a multisubject study, where many trials of each condition of interest are measured on each subject. Generally, intra- and intersubject variances are both important to determine correct standard errors for inference on functions of model parameters, but it is often assumed that intersubject variance is the most important consideration in a group study. In this article, we show that, under common assumptions, estimates of some functions of model parameters, including estimates of task-related differences, are properly tested relative to the intrasubject variance component only. A substantial gain in statistical power can arise from the proper separation of variance components when there is more than one source of variability. We first develop this result analytically, then show how it benefits a multiway factoring of spectral, spatial, and temporal components from EEG data acquired in a group of healthy subjects performing a well-studied response inhibition task. Copyright © 2011 Wiley Periodicals, Inc.
Portfolio optimization with mean-variance model
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve a target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, measured as the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition across the stocks differs. Moreover, investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
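The mean-variance problem described above, minimizing portfolio variance subject to a target return and full investment, has a closed-form solution via its KKT system. The sketch below is a generic illustration with made-up two-asset inputs, not the study's FBMKLCI data:

```python
import numpy as np

def min_variance_weights(mu, sigma, target):
    """Weights minimizing w' Sigma w subject to w'mu = target and sum(w) = 1."""
    n = len(mu)
    ones = np.ones(n)
    # KKT system: stationarity rows plus the two equality constraints
    kkt = np.zeros((n + 2, n + 2))
    kkt[:n, :n] = 2.0 * sigma
    kkt[:n, n] = mu
    kkt[:n, n + 1] = ones
    kkt[n, :n] = mu
    kkt[n + 1, :n] = ones
    rhs = np.zeros(n + 2)
    rhs[n] = target       # expected-return constraint
    rhs[n + 1] = 1.0      # budget constraint
    return np.linalg.solve(kkt, rhs)[:n]

# Illustrative (made-up) inputs: two assets
mu = np.array([0.10, 0.20])
sigma = np.array([[0.05, 0.01],
                  [0.01, 0.08]])
w = min_variance_weights(mu, sigma, 0.15)
```

By construction the returned weights sum to one and hit the target expected return; the portfolio variance is then `w @ sigma @ w`.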
Variance components for body weight in Japanese quails (Coturnix japonica)
Directory of Open Access Journals (Sweden)
RO Resende
2005-03-01
The objective of this study was to estimate the variance components for body weight in Japanese quails by Bayesian procedures. The body weight at hatch (BWH) and at 7 (BW07), 14 (BW14), 21 (BW21) and 28 days of age (BW28) of 3,520 quails was recorded from August 2001 to June 2002. A multiple-trait animal model with additive genetic, maternal environment and residual effects was implemented by Gibbs sampling methodology. A single Gibbs sampling chain with 80,000 rounds was generated by the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Model). Normal and inverted Wishart distributions were used as prior distributions for the random effects and the variance components, respectively. Variance components were estimated based on the 500 samples that were left after elimination of 30,000 rounds in the burn-in period and 100 rounds of each thinning interval. The posterior means of the additive genetic variance components were 0.15; 4.18; 14.62; 27.18 and 32.68; the posterior means of the maternal environment variance components were 0.23; 1.29; 2.76; 4.12 and 5.16; and the posterior means of the residual variance components were 0.084; 6.43; 22.66; 31.21 and 30.85, at hatch, 7, 14, 21 and 28 days of age, respectively. The posterior means of heritability were 0.33; 0.35; 0.36; 0.43 and 0.47 at hatch, 7, 14, 21 and 28 days of age, respectively. These results indicate that heritability increased with age. On the other hand, after hatch there was a marked reduction in the maternal environment proportion of the phenotypic variance, whose estimates were 0.50; 0.11; 0.07; 0.07 and 0.08 for BWH, BW07, BW14, BW21 and BW28, respectively. The genetic correlation between weights at different ages was high, except for the estimates between BWH and weight at other ages. Changes in body weight of quails can be efficiently achieved by selection.
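As a quick arithmetic check, the heritabilities and maternal-environment proportions reported above follow directly from the posted posterior means of the variance components, assuming (as the abstract's numbers imply) that the phenotypic variance is the simple sum of the three components:

```python
import numpy as np

# Posterior means of variance components from the abstract
# (order: hatch, 7, 14, 21, 28 days of age)
additive = np.array([0.15, 4.18, 14.62, 27.18, 32.68])
maternal = np.array([0.23, 1.29, 2.76, 4.12, 5.16])
residual = np.array([0.084, 6.43, 22.66, 31.21, 30.85])

phenotypic = additive + maternal + residual
heritability = additive / phenotypic      # ~ 0.33, 0.35, 0.36, 0.43, 0.47
maternal_prop = maternal / phenotypic     # ~ 0.50, 0.11, 0.07, 0.07, 0.08
```

The computed ratios reproduce the abstract's reported values to rounding, confirming the internal consistency of the record.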
Modelling volatility by variance decomposition
DEFF Research Database (Denmark)
Amado, Cristina; Teräsvirta, Timo
In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the variance of the model to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterisations describe both nonlinearity and structural change in the condit...
Robust LOD scores for variance component-based linkage analysis.
Blangero, J; Williams, J T; Almasy, L
2000-01-01
The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.
Vink, J.M.; Boomsma, D.I.; Medland, S.E.; Moor, H.M. de; Stubbe, J.H.; Corner, B.K.; Martin, N.G.; Skytthea, A.; Kyvik, K.O.; Rose, R.J.; Kujala, U.M.; Kaprio, J.; Harris, J.R.; Pedersen, N.L.; Cherkas, L.; Spector, T.D.; Geus, E.J.
2011-01-01
Physical activity is influenced by genetic factors whose expression may change with age. We employed an extension to the classical twin model that allows a modifier variable, age, to interact with the effects of the latent genetic and environmental factors. The model was applied to self-reported
DEFF Research Database (Denmark)
Vink, Jacqueline M; Boomsma, Dorret I; Medland, Sarah E
2011-01-01
Physical activity is influenced by genetic factors whose expression may change with age. We employed an extension to the classical twin model that allows a modifier variable, age, to interact with the effects of the latent genetic and environmental factors. The model was applied to self-reported data from twins aged 19 to 50 from seven countries that collaborated in the GenomEUtwin project: Australia, Denmark, Finland, Norway, Netherlands, Sweden and United Kingdom. Results confirmed the importance of genetic influences on physical activity in all countries and showed an age-related decrease ... into account when exploring the genetic and environmental contribution to physical activity. It also suggests that the power of genome-wide association studies to identify the genetic variants contributing to physical activity may be larger in young adult cohorts.
Variance Component Selection With Applications to Microbiome Taxonomic Data
Directory of Open Access Journals (Sweden)
Jing Zhai
2018-03-01
High-throughput sequencing technology has enabled population-based studies of the role of the human microbiome in disease etiology and exposure response. Microbiome data are summarized as counts or composition of the bacterial taxa at different taxonomic levels. An important problem is to identify the bacterial taxa that are associated with a response. One method is to test the association of a specific taxon with phenotypes in a linear mixed effect model, which incorporates phylogenetic information among bacterial communities. Other approaches consider all taxa in a joint model and achieve selection via a penalization method, which ignores phylogenetic information. In this paper, we consider regression analysis by treating bacterial taxa at different levels as multiple random effects. For each taxon, a kernel matrix is calculated based on distance measures in the phylogenetic tree and acts as one variance component in the joint model. Taxonomic selection is then achieved by the lasso (least absolute shrinkage and selection operator) penalty on the variance components. Our method integrates biological information into the variable selection problem and greatly improves selection accuracy. Simulation studies demonstrate the superiority of our method over existing methods such as the group lasso. Finally, we apply our method to a longitudinal microbiome study of Human Immunodeficiency Virus (HIV) infected patients. We implement our method using the high-performance computing language Julia. Software and detailed documentation are freely available at https://github.com/JingZhai63/VCselection.
Heritability, variance components and genetic advance of some ...
African Journals Online (AJOL)
Heritability, variance components and genetic advance of some yield and yield related traits in Ethiopian ... African Journal of Biotechnology ... randomized complete block design at Adet Agricultural Research Station in 2008 cropping season.
Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction
Directory of Open Access Journals (Sweden)
Ling Huang
2017-02-01
Ionospheric delay is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have recently been introduced to model the spatial correlation and variability of the ionosphere; these techniques intrinsically assume that the ionospheric field is stochastically stationary but do not take random observational errors into account. In this paper, by treating the spatial statistical information on the ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and the measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the ionospheric signals and the TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of the results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach are in good agreement with the other methods, ranging from 10 to 80 TEC Units (TECU, 1 TECU = 1 × 10^16 electrons/m^2) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as that of ordinary Kriging but with a smaller standard deviation, around 3 TECU, than the others. The residual results show that the interpolation precision of the
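Ordinary Kriging, which the proposed method extends and is benchmarked against, predicts a value as a weighted sum of observations, with weights obtained from a linear system built on the semivariogram plus a Lagrange multiplier enforcing unbiasedness (weights summing to one). A minimal sketch with an assumed spherical semivariogram and illustrative parameters, not the paper's CMONOC processing:

```python
import numpy as np

def spherical_gamma(h, sill, rang):
    """Spherical semivariogram with zero nugget; gamma(0) = 0."""
    h = np.asarray(h, dtype=float)
    g = sill * (1.5 * h / rang - 0.5 * (h / rang) ** 3)
    return np.where(h >= rang, sill, g)

def ordinary_kriging(xy, z, xy0, sill=1.0, rang=2.0):
    """Predict z at location xy0 from observations (xy, z) by ordinary Kriging."""
    n = len(z)
    # pairwise distances among observation points
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    a = np.ones((n + 1, n + 1))
    a[:n, :n] = spherical_gamma(d, sill, rang)
    a[n, n] = 0.0  # Lagrange-multiplier row/column: weights must sum to 1
    b = np.ones(n + 1)
    b[:n] = spherical_gamma(np.linalg.norm(xy - xy0, axis=1), sill, rang)
    w = np.linalg.solve(a, b)
    return w[:n] @ z

# Toy TEC-like values at three stations (illustrative numbers only)
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
z = np.array([10.0, 20.0, 30.0])
pred = ordinary_kriging(xy, z, np.array([0.0, 0.0]))
```

With a zero nugget the predictor interpolates exactly, so predicting at the first station returns that station's value; the paper's contribution is to estimate the unknown variance factors of signal and noise rather than fixing them a priori.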
Variance components and selection for feather pecking behavior in laying hens
Su, Guosheng; Kjaer, Jørgen B.; Sørensen, Poul
2005-01-01
Variance components and selection response for feather pecking behaviour were studied by analysing data from a divergent selection experiment. An investigation showed that a Box-Cox transformation with power = -0.2 made the data approximately normally distributed and best fitted by the given model. Variance components and selection response were estimated using Bayesian analysis with the Gibbs sampling technique. The total variation was rather large for the two traits in both the low feather peckin...
DEFF Research Database (Denmark)
Christensen, Ole Fredslund; Frydenberg, Morten; Jensen, Jens Ledet
2005-01-01
The large deviation modified likelihood ratio statistic is studied for testing a variance component equal to a specified value. Formulas are presented in the general balanced case, whereas in the unbalanced case only the one-way random effects model is studied. Simulation studies are presented, showing that the normal approximation to the large deviation modified likelihood ratio statistic gives confidence intervals for variance components with coverage probabilities very close to the nominal confidence coefficient.
Comparing estimates of genetic variance across different relationship models.
Legarra, Andres
2016-02-01
Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
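The statistic Dk described in the abstract above, the average self-relationship minus the average of all (self- and across-) relationships, is straightforward to compute from any relationship matrix K. A minimal sketch of that definition:

```python
import numpy as np

def dk_statistic(k_matrix):
    """Dk = mean self-relationship minus mean of all relationships in K."""
    k_matrix = np.asarray(k_matrix, dtype=float)
    return np.mean(np.diag(k_matrix)) - np.mean(k_matrix)
```

For an identity relationship matrix (n unrelated, non-inbred individuals), Dk = 1 - 1/n, which approaches the "close to 1" value the abstract mentions as n grows; scaling an estimated genetic variance by Dk refers it to the chosen base population.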
Analysis of conditional genetic effects and variance components in developmental genetics.
Zhu, J
1995-12-01
A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.
An elementary components of variance analysis for multi-center quality control
International Nuclear Information System (INIS)
Munson, P.J.; Rodbard, D.
1977-01-01
The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality control (QC) studies. Statistical analysis methods for such studies using an 'analysis of variance with components of variance estimation' are discussed. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Components of variance analysis also provides an intelligent way to combine the results of several QC samples run at different levels, from which we may decide if any component varies systematically with dose level; if not, pooling of estimates becomes possible. We consider several possible relationships of standard deviation to the laboratory mean. Each relationship corresponds to an underlying statistical model, and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine if an appropriate model has been chosen, although the exact functional relationship of standard deviation to lab mean may be difficult to establish. Appropriate graphical display of the data aids in visual understanding of the data. A plot of the ranked standard deviation vs. ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean. (orig.) [de]
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy
Energy Technology Data Exchange (ETDEWEB)
Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.jp; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro [Department of Radiation Oncology and Image-applied Therapy, Kyoto University, 54 Shogoin-Kawaharacho, Sakyo, Kyoto 606-8507 (Japan)
2016-09-15
Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy
International Nuclear Information System (INIS)
Matsuo, Yukinori; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro
2016-01-01
Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
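For the balanced one-factor random effect model assumed in the two radiotherapy records above, the standard ANOVA estimators are: random (within-patient) variance = the within mean square, and systematic (between-patient) variance = (MSB - MSW) / k for k repeated measures per patient. A minimal sketch under that balanced-data assumption (toy numbers, not the note's data):

```python
import numpy as np

def anova_variance_components(data):
    """ANOVA estimators for a balanced one-factor random effect model.

    data: array of shape (m patients, k repeated setup measurements).
    Returns (systematic variance, random variance).
    """
    data = np.asarray(data, dtype=float)
    m, k = data.shape
    grand = data.mean()
    group_means = data.mean(axis=1)
    # between-patient and within-patient mean squares
    msb = k * np.sum((group_means - grand) ** 2) / (m - 1)
    msw = np.sum((data - group_means[:, None]) ** 2) / (m * (k - 1))
    var_random = msw
    var_systematic = max((msb - msw) / k, 0.0)  # truncate negative estimates at 0
    return var_systematic, var_random

# Toy example: 2 patients x 2 fractions, all variation between patients
sys_var, rand_var = anova_variance_components([[1.0, 1.0], [3.0, 3.0]])
```

Here all spread is between patients, so the within-patient (random) component is zero and the systematic component absorbs the rest; the note's point is that this decomposition, unlike the conventional approach, also yields confidence intervals via the mean-square distributions.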
Principal component approach in variance component estimation for international sire evaluation
Directory of Open Access Journals (Sweden)
Jakobsen Jette
2011-05-01
Background: The dairy cattle breeding industry is a highly globalized business, which needs internationally comparable and reliable breeding values of sires. The international Bull Evaluation Service, Interbull, was established in 1983 to respond to this need. Currently, Interbull performs multiple-trait across country evaluations (MACE) for several traits and breeds in dairy cattle and provides international breeding values to its member countries. Estimating parameters for MACE is challenging since the structure of the datasets and the conventional use of multiple-trait models easily result in over-parameterized genetic covariance matrices. The number of parameters to be estimated can be reduced by taking into account only the leading principal components of the traits considered. For MACE, this is readily implemented in a random regression model. Methods: This article compares two principal component approaches to estimate variance components for MACE using real datasets. The methods tested were a REML approach that directly estimates the genetic principal components (direct PC) and the so-called bottom-up REML approach (bottom-up PC), in which traits are sequentially added to the analysis and the statistically significant genetic principal components are retained. Furthermore, this article evaluates the utility of the bottom-up PC approach for determining the appropriate rank of the (co)variance matrix. Results: Our study demonstrates the usefulness of both approaches and shows that they can be applied to large multi-country models considering all concerned countries simultaneously. These strategies can thus replace the current practice of estimating the covariance components required through a series of analyses involving selected subsets of traits. Our results support the importance of using the appropriate rank in the genetic (co)variance matrix. Using too low a rank resulted in biased parameter estimates, whereas too high a rank did not result in
Variance and covariance components for liability of piglet survival during different periods
DEFF Research Database (Denmark)
Su, G; Sorensen, D; Lund, M S
2008-01-01
Variance and covariance components for piglet survival in different periods were estimated from individual records of 133 004 Danish Landrace piglets and 89 928 Danish Yorkshire piglets, using a liability threshold model including both direct and maternal additive genetic effects. At the individu...
Improving precision in gel electrophoresis by stepwisely decreasing variance components.
Schröder, Simone; Brandmüller, Asita; Deng, Xi; Ahmed, Aftab; Wätzig, Hermann
2009-10-15
Many methods have been developed in order to increase selectivity and sensitivity in proteome research. However, gel electrophoresis (GE), which is one of the major techniques in this area, is still known for its often unsatisfactory precision. Relative standard deviations (RSD%) of up to 60% have been reported. In this case the improvement of precision and sensitivity is absolutely essential, particularly for the quality control of biopharmaceuticals. Our work reflects the remarkable and completely irregular changes of the background signal from gel to gel. This irregularity was identified as one of the governing error sources. These background changes can be strongly reduced by using signal detection in the near-infrared (NIR) range. This particular detection method provides the most sensitive approach for conventional CCB (Colloidal Coomassie Blue) stained gels, which is reflected in a total error of just 5% (RSD%). In order to further investigate variance components in GE, an experimental Plackett-Burman screening design was performed. The influence of seven potential factors on the precision was investigated using 10 proteins with different properties analyzed by NIR detection. The results emphasized the individuality of the proteins. Completely different factors were identified to be significant for each protein. However, out of the seven investigated parameters, just four showed a significant effect on some proteins, namely: destaining time, staining temperature, changes of detergent additives (SDS and LDS) in the sample buffer, and the age of the gels. As a result, precision can only be improved individually for each protein or protein class. Further understanding of the unique properties of proteins should enable us to improve the precision in gel electrophoresis.
Variability of indoor and outdoor VOC measurements: An analysis using variance components
International Nuclear Information System (INIS)
Jia, Chunrong; Batterman, Stuart A.; Relyea, George E.
2012-01-01
This study examines concentrations of volatile organic compounds (VOCs) measured inside and outside of 162 residences in southeast Michigan, U.S.A. Nested analyses apportioned four sources of variation: city, residence, season, and measurement uncertainty. Indoor measurements were dominated by seasonal and residence effects, accounting for 50 and 31%, respectively, of the total variance. Contributions from measurement uncertainty (<20%) and city effects (<10%) were small. For outdoor measurements, season, city and measurement variation accounted for 43, 29 and 27% of variance, respectively, while residence location had negligible impact (<2%). These results show that, to obtain representative estimates of indoor concentrations, measurements in multiple seasons are required. In contrast, outdoor VOC concentrations can use multi-seasonal measurements at centralized locations. Error models showed that uncertainties at low concentrations might obscure effects of other factors. Variance component analyses can be used to interpret existing measurements, design effective exposure studies, and determine whether the instrumentation and protocols are satisfactory. - Highlights: ► The variability of VOC measurements was partitioned using nested analysis. ► Indoor VOCs were primarily controlled by seasonal and residence effects. ► Outdoor VOC levels were homogeneous within neighborhoods. ► Measurement uncertainty was high for many outdoor VOCs. ► Variance component analysis is useful for designing effective sampling programs. - Indoor VOC concentrations were primarily controlled by seasonal and residence effects; and outdoor concentrations were homogeneous within neighborhoods. Variance component analysis is a useful tool for designing effective sampling programs.
An elementary components of variance analysis for multi-centre quality control
International Nuclear Information System (INIS)
Munson, P.J.; Rodbard, D.
1978-01-01
The serious variability of RIA results from different laboratories indicates the need for multi-laboratory collaborative quality-control (QC) studies. Simple graphical display of data in the form of histograms is useful but insufficient. The paper discusses statistical analysis methods for such studies using an ''analysis of variance with components of variance estimation''. This technique allocates the total variance into components corresponding to between-laboratory, between-assay, and residual or within-assay variability. Problems with RIA data, e.g. severe non-uniformity of variance and/or departure from a normal distribution violate some of the usual assumptions underlying analysis of variance. In order to correct these problems, it is often necessary to transform the data before analysis by using a logarithmic, square-root, percentile, ranking, RIDIT, ''Studentizing'' or other transformation. Ametric transformations such as ranks or percentiles protect against the undue influence of outlying observations, but discard much intrinsic information. Several possible relationships of standard deviation to the laboratory mean are considered. Each relationship corresponds to an underlying statistical model and an appropriate analysis technique. Tests for homogeneity of variance may be used to determine whether an appropriate model has been chosen, although the exact functional relationship of standard deviation to laboratory mean may be difficult to establish. Appropriate graphical display aids visual understanding of the data. A plot of the ranked standard deviation versus ranked laboratory mean is a convenient way to summarize a QC study. This plot also allows determination of the rank correlation, which indicates a net relationship of variance to laboratory mean
Genetic variance components for residual feed intake and feed ...
African Journals Online (AJOL)
Feeding costs of animals is a major determinant of profitability in livestock production enterprises. Genetic selection to improve feed efficiency aims to reduce feeding cost in beef cattle and thereby improve profitability. This study estimated genetic (co)variances between weaning weight and other production, reproduction ...
Estimates of variance components for postweaning feed intake and ...
African Journals Online (AJOL)
Mike
2013-03-09
Mar 9, 2013 ... transformation of RFIp and RDGp to z-scores (mean = 0.0, variance = 1.0) and then ... generation pedigree (n = 9 653) used for this analysis. ..... Nkrumah, J.D., Basarab, J.A., Wang, Z., Li, C., Price, M.A., Okine, E.K., Crews Jr., ...
Variance component and heritability estimates of early growth traits ...
African Journals Online (AJOL)
as selection criteria for meat production in sheep (Anon, 1970; Olson et al., 1976; Lasslo et al., 1985; Badenhorst et al., 1991). If these traits are to be included in a breeding programme, accurate estimates of breeding values will be needed to optimize selection programmes. This requires a knowledge of variance and co-.
Estimates of variance components for postweaning feed intake and ...
African Journals Online (AJOL)
Feed efficiency is of major economic importance in beef production. The objective of this work was to evaluate alternative measures of feed efficiency for use in genetic evaluation. To meet this objective, genetic parameters were estimated for the components of efficiency. These parameters were then used in multiple-trait ...
Variance components and genetic parameters for body weight and ...
African Journals Online (AJOL)
model included a direct as well as a maternal additive genetic effect, while only the direct additive genetic effect had a sig- .... deviations from the log likelihood value obtained under the ... (1995). It would therefore be fair to assume that a.
An Empirical Temperature Variance Source Model in Heated Jets
Khavaran, Abbas; Bridges, James
2012-01-01
An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method, while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determines the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.
Variance components estimation for farrowing traits of three purebred pigs in Korea
Directory of Open Access Journals (Sweden)
Bryan Irvine Lopez
2017-09-01
Objective: This study was conducted to estimate breed-specific variance components for total number born (TNB), number born alive (NBA) and mortality rate from birth through weaning including stillbirths (MORT) of the three main swine breeds in Korea. In addition, the importance of including maternal genetic and service sire effects in estimation models was evaluated. Methods: Records of farrowing traits from 6,412 Duroc, 18,020 Landrace, and 54,254 Yorkshire sows collected from January 2001 to September 2016 from different farms in Korea were used in the analysis. Animal models and the restricted maximum likelihood method were used to estimate animal genetic, permanent environmental, maternal genetic, service sire and residual variances. Results: The heritability estimates ranged from 0.072 to 0.102, 0.090 to 0.099, and 0.109 to 0.121 for TNB; 0.087 to 0.110, 0.088 to 0.100, and 0.099 to 0.107 for NBA; and 0.027 to 0.031, 0.050 to 0.053, and 0.073 to 0.081 for MORT in the Duroc, Landrace and Yorkshire breeds, respectively. The proportion of the total variation due to permanent environmental effects, maternal genetic effects, and service sire effects ranged from 0.042 to 0.088, 0.001 to 0.031, and 0.001 to 0.021, respectively. Spearman rank correlations among models ranged from 0.98 to 0.99, demonstrating that the maternal genetic and service sire effects have only a small effect on the precision of breeding values. Conclusion: Models that include additive genetic and permanent environmental effects are suitable for farrowing traits in the Duroc, Landrace, and Yorkshire populations in Korea. These breed-specific variance component estimates for litter traits can be utilized in pig improvement programs in Korea.
Variance components and selection response for feather-pecking behavior in laying hens.
Su, G; Kjaer, J B; Sørensen, P
2005-01-01
Variance components and selection response for feather pecking behavior were studied by analyzing the data from a divergent selection experiment. An investigation indicated that a Box-Cox transformation with power lambda = -0.2 made the data approximately normally distributed and gave the best fit for the model. Variance components and selection response were estimated using Bayesian analysis with Gibbs sampling technique. The total variation was rather large for the investigated traits in both the low feather-pecking line (LP) and the high feather-pecking line (HP). Based on the mean of marginal posterior distribution, in the Box-Cox transformed scale, heritability for number of feather pecking bouts (FP bouts) was 0.174 in line LP and 0.139 in line HP. For number of feather-pecking pecks (FP pecks), heritability was 0.139 in line LP and 0.105 in line HP. No full-sib group effect and observation pen effect were found in the 2 traits. After 4 generations of selection, the total response for number of FP bouts in the transformed scale was 58 and 74% of the mean of the first generation in line LP and line HP, respectively. The total response for number of FP pecks was 47 and 46% of the mean of the first generation in line LP and line HP, respectively. The variance components and the realized selection response together suggest that genetic selection can be effective in minimizing FP behavior. This would be expected to reduce one of the major welfare problems in laying hens.
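The Box-Cox transformation with power λ = -0.2 used in this study has a simple closed form. A minimal sketch follows; note that zero counts make negative powers undefined, so in practice an offset such as y + 1 would be needed (that detail is an assumption, not stated in the abstract):

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox power transform: (y**lam - 1)/lam, with lam = 0 giving log(y).
    Requires strictly positive y."""
    y = np.asarray(y, dtype=float)
    if lam == 0:
        return np.log(y)
    return (y ** lam - 1.0) / lam
```

The transform is monotone, so rankings of animals are preserved; only the scale on which variance components and heritabilities are expressed changes.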
Identifying misbehaving models using baseline climate variance
Schultz, Colin
2011-06-01
The majority of projections made using general circulation models (GCMs) are conducted to help tease out the effects on a region, or on the climate system as a whole, of changing climate dynamics. Sun et al., however, used model runs from 20 different coupled atmosphere-ocean GCMs to try to understand a different aspect of climate projections: how bias correction, model selection, and other statistical techniques might affect the estimated outcomes. As a case study, the authors focused on predicting the potential change in precipitation for the Murray-Darling Basin (MDB), a 1-million-square-kilometer area in southeastern Australia that suffered a recent decade of drought that left many wondering about the potential impacts of climate change on this important agricultural region. The authors first compared the precipitation predictions made by the models with 107 years of observations, and they then made bias corrections to adjust the model projections to have the same statistical properties as the observations. They found that while the spread of the projected values was reduced, the average precipitation projection for the end of the 21st century barely changed. Further, the authors determined that interannual variations in precipitation for the MDB could be explained by random chance, where the precipitation in a given year was independent of that in previous years.
Variance Function Partially Linear Single-Index Models.
Lian, Heng; Liang, Hua; Carroll, Raymond J
2015-01-01
We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function.
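As a rough illustration of modelling a variance function separately from the mean function, a basic two-stage approach fits the mean, then regresses squared residuals on the covariate. This is only a one-dimensional polynomial stand-in for the paper's partially linear single-index estimators (the function names and polynomial degrees are illustrative assumptions):

```python
import numpy as np

def fit_variance_function(x, y, deg_mean=1, deg_var=1):
    """Two-stage sketch of variance-function estimation:
    1) fit a polynomial mean model,
    2) fit a polynomial to the squared residuals as the variance model.
    The variance model is not forced to be a function of the fitted mean."""
    mean_coef = np.polyfit(x, y, deg_mean)
    resid = y - np.polyval(mean_coef, x)
    var_coef = np.polyfit(x, resid ** 2, deg_var)
    # Clip at a small floor: squared-residual regression can dip negative
    var_hat = np.maximum(np.polyval(var_coef, x), 1e-12)
    return mean_coef, var_hat
```

Decoupling the two stages mirrors the paper's point that the variance need not depend only on the mean: here it depends on x directly.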
Variance-based sensitivity indices for models with dependent inputs
International Nuclear Information System (INIS)
Mara, Thierry A.; Tarantola, Stefano
2012-01-01
Computational models are intensively used in engineering for risk analysis or prediction of future outcomes. Uncertainty and sensitivity analyses are of great help in these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs only a few are proposed in the literature in the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is set and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and ANOVA-representations of the model output. In the applications, we show the interest of the new sensitivity indices for model simplification setting. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.
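The independent-input baseline that such dependent-input indices extend is the classical first-order Sobol' index, estimable with a pick-freeze (Saltelli-type) Monte Carlo scheme. A minimal sketch for independent U(0,1) inputs (the sample size and names are illustrative):

```python
import numpy as np

def sobol_first_order(model, d, n=100_000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices
    for a vectorised model with d independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    yA, yB = model(A), model(B)
    var_y = np.concatenate([yA, yB]).var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]   # resample only input i, freeze the rest
        S[i] = np.mean(yB * (model(ABi) - yA)) / var_y
    return S
```

For dependent inputs this decomposition is no longer unique, which is exactly the gap the orthogonalisation-based indices in the paper address.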
A Cure for Variance Inflation in High Dimensional Kernel Principal Component Analysis
DEFF Research Database (Denmark)
Abrahamsen, Trine Julie; Hansen, Lars Kai
2011-01-01
Small sample high-dimensional principal component analysis (PCA) suffers from variance inflation and lack of generalizability. It has earlier been pointed out that a simple leave-one-out variance renormalization scheme can cure the problem. In this paper we generalize the cure in two directions: first, we propose a computationally less intensive approximate leave-one-out estimator; second, we show that variance inflation is also present in kernel principal component analysis (kPCA) and we provide a non-parametric renormalization scheme which can quite efficiently restore generalizability in kPCA. As for PCA, our analysis also suggests a simplified approximate expression. © 2011 Trine J. Abrahamsen and Lars K. Hansen.
Multilevel Modeling of the Performance Variance
Directory of Open Access Journals (Sweden)
Alexandre Teixeira Dias
2012-12-01
Focusing on the identification of the role played by industry in the relations between corporate strategic factors and performance, the hierarchical multilevel modeling method was adopted to measure and analyze the relations between the variables that comprise each level of analysis. The adequacy of the multilevel perspective for the study of the proposed relations was confirmed, and the relative importance analysis points to the lower relevance of industry as a moderator of the effects of corporate strategic factors on performance when the latter is measured by return on assets, and shows that industry does not moderate the relations between corporate strategic factors and Tobin's Q. The main conclusions of the research are that an organization's choices in terms of corporate strategy exert considerable influence and play a key role in determining the performance level, but that industry should be considered when analyzing performance variation, whether or not it moderates the relations between corporate strategic factors and performance.
DEFF Research Database (Denmark)
Krag, Kristian
The composition of bovine milk fat, used for human consumption, is far from the recommendations for human fat nutrition. The aim of this PhD was to describe the variance components and prediction probabilities of individual fatty acids (FA) in bovine milk, and to evaluate the possibilities...
Thermospheric mass density model error variance as a function of time scale
Emmert, J. T.; Sutton, E. K.
2017-12-01
In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).
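The residual power-spectrum analysis described here starts from a periodogram of the data-minus-model series. A minimal sketch of that first step follows (the normalisation convention is an assumption; the paper's full variance model additionally bins by altitude and solar activity):

```python
import numpy as np

def residual_psd(residuals, dt=1.0):
    """One-sided periodogram of a data-minus-model residual series,
    used as a proxy for model error variance per frequency band."""
    r = np.asarray(residuals, dtype=float)
    r = r - r.mean()                       # remove any constant bias first
    n = r.size
    spec = np.abs(np.fft.rfft(r)) ** 2 * dt / n
    freqs = np.fft.rfftfreq(n, d=dt)
    return freqs, spec
```

A power-law fit to such a spectrum, plus a bump near the 27-day solar rotation period, is the shape the authors report for thermospheric density residuals.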
Variance squeezing and entanglement of the XX central spin model
International Nuclear Information System (INIS)
El-Orany, Faisal A A; Abdalla, M Sebawe
2011-01-01
In this paper, we study the quantum properties of a system that consists of a central atom interacting with surrounding spins through Heisenberg XX couplings of equal strength. Employing the Heisenberg equations of motion, we manage to derive an exact solution for the dynamical operators. We consider that the central atom and its surroundings are initially prepared in the excited state and in the coherent spin state, respectively. For this system, we investigate the evolution of variance squeezing and entanglement. Nonclassical effects are apparent in the behavior of all components of the system. The atomic variance can exhibit the revival-collapse phenomenon, depending on the value of the detuning parameter.
How Reliable Are Students' Evaluations of Teaching Quality? A Variance Components Approach
Feistauer, Daniela; Richter, Tobias
2017-01-01
The inter-rater reliability of university students' evaluations of teaching quality was examined with cross-classified multilevel models. Students (N = 480) evaluated lectures and seminars over three years with a standardised evaluation questionnaire, yielding 4224 data points. The total variance of these student evaluations was separated into the…
Multivariate Variance Targeting in the BEKK-GARCH Model
DEFF Research Database (Denmark)
Pedersen, Rasmus Søndergaard; Rahbek, Anders
This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...
Multivariate Variance Targeting in the BEKK-GARCH Model
DEFF Research Database (Denmark)
Pedersen, Rasmus Søndergaard; Rahbek, Anders
2014-01-01
This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...
Variance-based sensitivity analysis for wastewater treatment plant modelling.
Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B
2014-02-01
Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most of the recent applications of GSA use either regression-based methods, which require close to linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.) there is an interest to adequately quantify the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors having the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and different operating conditions than those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the obtained results highlight the relevant role played by the modelling approach for MBR taking into account simultaneously biological and physical processes. © 2013.
Harrison, Jay M; Howard, Delia; Malven, Marianne; Halls, Steven C; Culler, Angela H; Harrigan, George G; Wolfinger, Russell D
2013-07-03
Compositional studies on genetically modified (GM) and non-GM crops have consistently demonstrated that their respective levels of key nutrients and antinutrients are remarkably similar and that other factors such as germplasm and environment contribute more to compositional variability than transgenic breeding. We propose that graphical and statistical approaches that can provide meaningful evaluations of the relative impact of different factors to compositional variability may offer advantages over traditional frequentist testing. A case study on the novel application of principal variance component analysis (PVCA) in a compositional assessment of herbicide-tolerant GM cotton is presented. Results of the traditional analysis of variance approach confirmed the compositional equivalence of the GM and non-GM cotton. The multivariate approach of PVCA provided further information on the impact of location and germplasm on compositional variability relative to GM.
Estimating Predictive Variance for Statistical Gas Distribution Modelling
International Nuclear Information System (INIS)
Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo
2009-01-01
Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but a significant step to advance the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance makes it possible to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta-parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
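Evaluating model quality "in terms of the data likelihood", as argued above, reduces to scoring held-out observations under the predicted mean and variance. A minimal sketch using a Gaussian predictive density (the Gaussian form is an assumption for illustration; gas concentration statistics are typically non-Gaussian):

```python
import numpy as np

def gaussian_nll(y, mu, var):
    """Average negative log predictive density of observations y under
    per-point Gaussian predictions N(mu, var). Lower is better."""
    y, mu, var = map(np.asarray, (y, mu, var))
    return 0.5 * np.mean(np.log(2 * np.pi * var) + (y - mu) ** 2 / var)
```

A model with well-calibrated predictive variance scores a lower NLL than one with identical mean predictions but a single constant variance, which is exactly the comparison this scoring enables between distribution-modelling approaches.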
Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation.
Yang, Ye; Christensen, Ole F; Sorensen, Daniel
2011-02-01
Over recent years, statistical support for the presence of genetic factors operating at the level of the environmental variance has come from fitting a genetically structured heterogeneous variance model to field or experimental data in various species. Misleading results may arise due to skewness of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box-Cox transformations. Litter size data in rabbits and pigs that had previously been analysed in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box-Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected by the presence of asymmetry in the distribution of data. We recommend that to avoid one important source of spurious inferences, future work seeking support for a genetic component acting on environmental variation using a parametric approach based on normality assumptions confirms that these are met.
Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation
DEFF Research Database (Denmark)
Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel
2011-01-01
of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box–Cox transformations. Litter size data in rabbits and pigs that had previously been analysed...... in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box–Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis...... in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected...
A Fay-Herriot Model with Different Random Effect Variances
Czech Academy of Sciences Publication Activity Database
Hobza, Tomáš; Morales, D.; Herrador, M.; Esteban, M.D.
2011-01-01
Roč. 40, č. 5 (2011), s. 785-797 ISSN 0361-0926 R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : small area estimation * Fay-Herriot model * Linear mixed model * Labor Force Survey Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.274, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/hobza-a%20fay-herriot%20model%20with%20different%20random%20effect%20variances.pdf
Heritability and variance components of some morphological and agronomic traits in alfalfa
International Nuclear Information System (INIS)
Ates, E.; Tekeli, S.
2005-01-01
Four alfalfa cultivars were investigated using a randomized complete-block design with three replications. Variance components, variance coefficients and heritability values of some morphological characters, herbage yield, dry matter yield and seed yield were determined. Maximum main stem height (78.69 cm), main stem diameter (4.85 mm), leaflet width (0.93 cm), seeds/pod (6.57), herbage yield (75.64 t ha⁻¹), dry matter yield (20.06 t ha⁻¹) and seed yield (0.49 t ha⁻¹) were obtained from cv. Marina. Leaflet length varied from 1.65 to 2.08 cm. The raceme length measured 3.15 to 4.38 cm in alfalfa cultivars. The highest 1000-seed weight values (2.42-2.49 g) were found for the Marina and Sitel cultivars. Heritability values of various traits were: 91.0% for main stem height, 97.6% for main stem diameter, 81.8% for leaflet length, 88.8% for leaflet width, 90.4% for leaf/stem ratio, 28.3% for racemes/main stem, 99.0% for raceme length, 99.2% for seeds/pod, 88.0% for 1000-seed weight, 97.2% for herbage yield, 99.6% for dry matter yield and 95.4% for seed yield. (author)
Directory of Open Access Journals (Sweden)
Haley Christopher S
2009-01-01
Introduction: Variance component QTL methodology was used to analyse three candidate regions on chicken chromosomes 1, 4 and 5 for dominant and parent-of-origin QTL effects. Data were available for bodyweight and conformation score measured at 40 days from a two-generation commercial broiler dam line. One hundred dams were nested in 46 sires, with phenotypes and genotypes on 2708 offspring. Linear models were constructed to simultaneously estimate fixed, polygenic and QTL effects. Different genetic models were compared using likelihood ratio test statistics derived from the comparison of full with reduced or null models. Empirical thresholds were derived by permutation analysis. Results: Dominant QTL were found for bodyweight on chicken chromosome 4 and for bodyweight and conformation score on chicken chromosome 5. Suggestive evidence for a maternally expressed QTL for bodyweight and conformation score was found on chromosome 1, in a region corresponding to orthologous imprinted regions in the human and mouse. Conclusion: Initial results suggest that variance component analysis can be applied within commercial populations for the direct detection of segregating dominant and parent-of-origin effects.
DETERMINING THE OPTIMAL PORTFOLIO USING THE CONDITIONAL MEAN VARIANCE MODEL
Directory of Open Access Journals (Sweden)
I GEDE ERY NISCAHYANA
2016-08-01
When stock price returns exhibit autocorrelation and heteroscedasticity, conditional mean variance models are a suitable method to model the behavior of the stocks. In this thesis, the implementation of the conditional mean variance model for autocorrelated and heteroscedastic returns was discussed. The aim of this thesis was to assess the effect of autocorrelated and heteroscedastic returns on the optimal solution of a portfolio. The returns of four stocks, Fortune Mate Indonesia Tbk (FMII.JK), Bank Permata Tbk (BNLI.JK), Suryamas Dutamakmur Tbk (SMDM.JK) and Semen Gresik Indonesia Tbk (SMGR.JK), were estimated by a GARCH(1,1) model with standard innovations following the standard normal distribution and the t-distribution. The estimates were used to construct a portfolio. The optimal portfolio was found when the standard innovation used was the t-distribution with a standard deviation of 1.4532 and a mean of 0.8023, consisting of 0.9429 (94%) FMII stock, 0.0473 (5%) BNLI stock, 0% SMDM stock, and 1% SMGR stock.
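The conditional variance recursion of the GARCH(1,1) model used for these returns can be sketched directly; the parameter values and the initialisation at the unconditional variance are common conventions, not necessarily those used in the thesis:

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance path of a GARCH(1,1) model:
        sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1],
    initialised at the unconditional variance omega / (1 - alpha - beta)."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty(r.size)
    sigma2[0] = omega / (1.0 - alpha - beta)
    for t in range(1, r.size):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2
```

The fitted conditional variances (and the chosen innovation distribution, normal or t) then feed the covariance matrix used in the portfolio optimisation.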
Static models, recursive estimators and the zero-variance approach
Rubino, Gerardo
2016-01-07
When evaluating dependability aspects of complex systems, most models belong to the static world, where time is not an explicit variable. These models suffer from the same problems as dynamic ones (stochastic processes), such as the frequent combinatorial explosion of the state spaces. In the Monte Carlo domain, one of the most significant difficulties is the rare event situation. In this talk, we describe this context and a recent technique that appears to be at the top performance level in the area, where we combine ideas that lead to very fast estimation procedures with another approach called zero-variance approximation. Both ideas produce a very efficient method that has the right theoretical property concerning robustness, the bounded relative error one. Some examples illustrate the results.
Luthria, Devanand L; Mukhopadhyay, Sudarsan; Robbins, Rebecca J; Finley, John W; Banuelos, Gary S; Harnly, James M
2008-07-23
UV spectral fingerprints, in combination with analysis of variance-principal components analysis (ANOVA-PCA), can differentiate between cultivars and growing conditions (or treatments) and can be used to identify sources of variance. Broccoli samples, composed of two cultivars, were grown under seven different conditions or treatments (four levels of Se-enriched irrigation waters, organic farming, and conventional farming with 100 and 80% irrigation based on crop evaporation and transpiration rate). Freeze-dried powdered samples were extracted with methanol-water (60:40, v/v) and analyzed with no prior separation. Spectral fingerprints were acquired for the UV region (220-380 nm) using a 50-fold dilution of the extract. ANOVA-PCA was used to construct subset matrices that permitted easy verification of the hypothesis that cultivar and treatment contributed to a difference in the chemical expression of the broccoli. The sums of the squares of the same matrices were used to show that cultivar, treatment, and analytical repeatability contributed 30.5, 68.3, and 1.2% of the variance, respectively.
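The variance attribution behind ANOVA-PCA rests on a sum-of-squares partition. A minimal one-factor sketch (with made-up numbers, not the broccoli data):

```python
# One-way ANOVA partition: SS_total = SS_between + SS_within.
# Group labels and values are invented for illustration.
groups = {
    "cultivar_A": [4.1, 4.3, 4.0],
    "cultivar_B": [5.2, 5.0, 5.4],
}
all_values = [v for vals in groups.values() for v in vals]
grand_mean = sum(all_values) / len(all_values)

ss_total = sum((v - grand_mean) ** 2 for v in all_values)
ss_between = sum(len(vals) * (sum(vals) / len(vals) - grand_mean) ** 2
                 for vals in groups.values())
ss_within = sum((v - sum(vals) / len(vals)) ** 2
                for vals in groups.values() for v in vals)

pct_between = 100.0 * ss_between / ss_total  # % variance explained by cultivar
```

The percentages quoted in the abstract (30.5, 68.3, and 1.2% for cultivar, treatment, and repeatability) come from the same kind of partition applied to multiple factors.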
Ant Colony Optimization for Markowitz Mean-Variance Portfolio Model
Deng, Guang-Feng; Lin, Woo-Tsong
This work presents Ant Colony Optimization (ACO), initially developed as a meta-heuristic for combinatorial optimization, for solving the cardinality-constrained Markowitz mean-variance portfolio model (a nonlinear mixed quadratic programming problem). To our knowledge, an efficient algorithmic solution for this problem has not been proposed until now, so using heuristic algorithms is imperative. Numerical solutions are obtained for five analyses of weekly price data for the period March 1992 to September 1997 for the following indices: Hang Seng 31 in Hong Kong, DAX 100 in Germany, FTSE 100 in the UK, S&P 100 in the USA and Nikkei 225 in Japan. The test results indicate that ACO is much more robust and effective than particle swarm optimization (PSO), especially for low-risk investment portfolios.
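The objective such a heuristic searches over is the standard mean-variance trade-off under a cardinality constraint. A sketch of the candidate-evaluation step (weights, returns, and covariances are illustrative, not data from the study):

```python
# Evaluate a candidate for the cardinality-constrained Markowitz model:
# minimize  lam * w'Sigma w - (1 - lam) * mu'w,  with at most k nonzero weights.
def portfolio_objective(w, mu, sigma, lam):
    variance = sum(w[i] * sigma[i][j] * w[j]
                   for i in range(len(w)) for j in range(len(w)))
    expected_return = sum(wi * mi for wi, mi in zip(w, mu))
    return lam * variance - (1.0 - lam) * expected_return

def satisfies_cardinality(w, k):
    return sum(1 for wi in w if wi > 0.0) <= k

mu = [0.10, 0.12, 0.07]                 # illustrative expected returns
sigma = [[0.04, 0.01, 0.00],            # illustrative covariance matrix
         [0.01, 0.09, 0.02],
         [0.00, 0.02, 0.03]]
w = [0.5, 0.5, 0.0]                     # candidate holding 2 of 3 assets
score = portfolio_objective(w, mu, sigma, lam=0.5)
```

The cardinality constraint is what makes the problem combinatorial: ACO-style heuristics search over which assets to include, while the weights within the chosen subset can be optimized by the quadratic objective.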
International Nuclear Information System (INIS)
Sinclair, D.F.; Williams, J.
1979-01-01
There have been significant developments in the design and use of neutron moisture meters since Hewlett et al. (1964) investigated the sources of variance when using this instrument to estimate soil moisture. There appears to be little in the literature, however, that updates these findings. This paper aims to isolate the components of variance when moisture content and moisture change are estimated using the neutron scattering method with current technology and methods.
DEFF Research Database (Denmark)
Widyas, Nuzul; Jensen, Just; Nielsen, Vivi Hunnicke
A selection experiment for weight gain was performed over 13 generations of outbred mice. A total of 18 lines were included in the experiment. Nine lines were allotted to each of the two treatment diets (19.3 and 5.1% protein). Within each diet, three lines were selected upwards, three lines were selected downwards and three lines were kept as controls. Bayesian statistical methods were used to estimate the genetic variance components. The mixed model analysis was modified to include a mutation effect following the methods of Wray (1990). DIC was used to compare the models. Models including a mutation effect have better fit than the model with only an additive effect. Mutation as a direct effect contributes 3.18% of the total phenotypic variance, while in the model with interactions between additive and mutation effects, it contributes 1.43% as a direct effect and 1.36% as an interaction effect of the total variance.
Directory of Open Access Journals (Sweden)
Muhammad Cahyadi
2016-01-01
A quantitative trait locus (QTL) is a particular region of the genome containing one or more genes associated with economically important quantitative traits. This study was conducted to identify QTL regions for body weight and growth traits in purebred Korean native chicken (KNC). F1 samples (n = 595) were genotyped using 127 microsatellite markers and 8 single nucleotide polymorphisms that covered 2,616.1 centimorgans (cM) of map length for 26 autosomal linkage groups. Body weight traits were measured every 2 weeks from hatch to 20 weeks of age. Weight of half carcass was also collected together with growth rate. A multipoint variance component linkage approach was used to identify QTLs for the body weight traits. Two significant QTLs for growth were identified on chicken chromosome 3 (GGA3) for growth at 16 to 18 weeks (logarithm of the odds [LOD] = 3.24, nominal p value = 0.0001) and on GGA4 for growth at 6 to 8 weeks (LOD = 2.88, nominal p value = 0.0003). Additionally, one significant QTL and three suggestive QTLs were detected for body weight traits in KNC; a significant QTL for body weight at 4 weeks (LOD = 2.52, nominal p value = 0.0007) and a suggestive QTL for 8 weeks (LOD = 1.96, nominal p value = 0.0027) were detected on GGA4; QTLs were also detected for two different body weight traits: body weight at 16 weeks on GGA3 and body weight at 18 weeks on GGA19. Additionally, two suggestive QTLs for carcass weight were detected at 0 and 70 cM on GGA19. In conclusion, the current study identified several significant and suggestive QTLs that affect growth-related traits in a unique resource pedigree in purebred KNC. This information will contribute to improving body weight traits in native chicken breeds, especially Asian native chicken breeds.
Energy Technology Data Exchange (ETDEWEB)
Studnicki, M.; Mądry, W.; Noras, K.; Wójcik-Gront, E.; Gacek, E.
2016-11-01
The main objectives of multi-environment trials (METs) are to assess cultivar adaptation patterns under different environmental conditions and to investigate genotype by environment (G×E) interactions. Linear mixed models (LMMs) with more complex variance-covariance structures have become recognized and widely used for analyzing MET data. Best practice in MET analysis is to carry out a comparison of competing models with different variance-covariance structures, since an improperly chosen structure may lead to biased estimation of means and incorrect conclusions. In this work we focused on the adaptive response of cultivars to environments modeled by LMMs with different variance-covariance structures, and we identified possible limitations of inference when using an inadequate structure. In the present study we used a dataset on grain yield for 63 winter wheat cultivars, evaluated across 18 locations during three growing seasons (2008/2009-2010/2011), from the Polish Post-registration Variety Testing System. For the evaluation of variance-covariance structures and the description of cultivar adaptation to environments, we calculated adjusted means for each combination of cultivar and location in models with different variance-covariance structures. We concluded that in order to fully describe cultivar adaptive patterns, modelers should use the unrestricted variance-covariance structure. The restricted compound symmetry structure may interfere with proper interpretation of cultivar adaptive patterns. We found that the factor-analytic structure is also a good tool to describe cultivar reaction to environments, and it can be used successfully with MET data after determining the optimal number of components for each dataset.
Modelling Changes in the Unconditional Variance of Long Stock Return Series
DEFF Research Database (Denmark)
Amado, Cristina; Teräsvirta, Timo
In this paper we develop a testing and modelling procedure for describing the long-term volatility movements over very long return series. For the purpose, we assume that volatility is multiplicatively decomposed into a conditional and an unconditional component as in Amado and Teräsvirta (2011) [...] show that the long-memory property in volatility may be explained by ignored changes in the unconditional variance of the long series. Finally, based on a formal statistical test we find evidence of the superiority of volatility forecast accuracy of the new model over the GJR-GARCH model at all [...] horizons for a subset of the long return series.
Modelling changes in the unconditional variance of long stock return series
DEFF Research Database (Denmark)
Amado, Cristina; Teräsvirta, Timo
2014-01-01
In this paper we develop a testing and modelling procedure for describing the long-term volatility movements over very long daily return series. For this purpose we assume that volatility is multiplicatively decomposed into a conditional and an unconditional component as in Amado and Teräsvirta [...] that the apparent long memory property in volatility may be interpreted as changes in the unconditional variance of the long series. Finally, based on a formal statistical test we find evidence of the superiority of volatility forecasting accuracy of the new model over the GJR-GARCH model at all horizons for eight [...] subsets of the long return series.
Fields, Christina M.
2013-01-01
The Spaceport Command and Control System (SCCS) Simulation Computer Software Configuration Item (CSCI) is responsible for providing simulations to support test and verification of SCCS hardware and software. The Universal Coolant Transporter System (UCTS) was a Space Shuttle Orbiter support piece of the Ground Servicing Equipment (GSE). The initial purpose of the UCTS was to provide two support services to the Space Shuttle Orbiter immediately after landing at the Shuttle Landing Facility. The UCTS is designed with the capability of servicing future space vehicles, including all Space Station requirements necessary for the MPLM Modules. The Simulation uses GSE Models to stand in for the actual systems to support testing of SCCS systems during their development. As an intern at Kennedy Space Center (KSC), my assignment was to develop a model component for the UCTS. I was given a fluid component (dryer) to model in Simulink. I completed training for UNIX and Simulink. The dryer is a Catch All replaceable core type filter-dryer. The filter-dryer provides maximum protection for the thermostatic expansion valve and solenoid valve from dirt that may be in the system. The filter-dryer also protects the valves from freezing up. I researched fluid dynamics to understand the function of my component. The filter-dryer was modeled by determining the effects it has on the pressure and velocity of the system. I used Bernoulli's Equation to calculate the pressure and velocity differential through the dryer. I created my filter-dryer model in Simulink and wrote the test script to test the component. I completed component testing and captured test data. The finalized model was sent for peer review for any improvements. I participated in Simulation meetings and was involved in the subsystem design process and team collaborations. I gained valuable work experience and insight into a career path as an engineer.
Directory of Open Access Journals (Sweden)
Poivey Jean-Paul
2011-09-01
Background: The pre-weaning growth rate of lambs, an important component of meat market production, is affected by maternal and direct genetic effects. The French genetic evaluation model takes into account the number of lambs suckled by applying a multiplicative factor (1 for a lamb reared as a single, 0.7 for twin-reared lambs) to the maternal genetic effect, in addition to including the birth*rearing type combination as a fixed effect, which acts on the mean. However, little evidence has been provided to justify the use of this multiplicative model. The two main objectives of the present study were to determine, by comparing models of analysis, (1) whether pre-weaning growth is the same trait in single- and twin-reared lambs and (2) whether the multiplicative coefficient represents a good approach for taking this possible difference into account. Methods: Data on the pre-weaning growth rate, defined as the average daily gain from birth to 45 days of age, of 29,612 Romane lambs born between 1987 and 2009 at the experimental farm of La Sapinière (INRA, France) were used to compare eight models that account for the number of lambs per dam reared in various ways. Models were compared using the Akaike information criterion. Results: The model that best fitted the data assumed that (1) direct (maternal) effects correspond to the same trait regardless of the number of lambs reared, (2) the permanent environmental effects and variances associated with the dam depend on the number of lambs reared and (3) the residual variance depends on the number of lambs reared. Even though this model fitted the data better than a model that included a multiplicative coefficient, little difference was found between EBVs from the different models (the correlation between EBVs varied from 0.979 to 0.999). Conclusions: Based on experimental data, the current genetic evaluation model can be improved to better take into account the number of lambs reared. Thus, it would be of
Impact of Damping Uncertainty on SEA Model Response Variance
Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand
2010-01-01
Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However, these techniques do not account for uncertainties in the system properties. In the present paper, uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.
Directory of Open Access Journals (Sweden)
Matheus Costa dos Reis
2014-01-01
This study was carried out to obtain estimates of the genetic variance and covariance components related to intra- and interpopulation effects in the original populations (C0) and in the third cycle (C3) of reciprocal recurrent selection (RRS), which allows breeders to define the best breeding strategy. For that purpose, the half-sib progenies of intrapopulation (P11 and P22) and interpopulation (P12 and P21) crosses from populations 1 and 2, derived from single-cross hybrids in cycles 0 and 3 of the reciprocal recurrent selection program, were used. The intra- and interpopulation progenies were evaluated in a 10×10 triple lattice design in two separate locations. Data for unhusked ear weight (ear weight without husk) and plant height were collected. All genetic variance and covariance components were estimated from the expected mean squares. The breakdown of additive variance into intrapopulation and interpopulation additive deviations (σ²τ) and the covariance between these and their intrapopulation additive effects (CovAτ) found predominance of the dominance effect for unhusked ear weight. For plant height, these components show that the intrapopulation additive effect explains most of the variation. Estimates of the intrapopulation and interpopulation additive genetic variances confirm that populations derived from single-cross hybrids have potential for recurrent selection programs.
A Visual Model for the Variance and Standard Deviation
Orris, J. B.
2011-01-01
This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
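Numerically, the picture corresponds to averaging the areas of the squared-deviation squares; the standard deviation is then the side length of the square whose area equals that average. A small worked example:

```python
import math

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
mean = sum(data) / len(data)

square_areas = [(x - mean) ** 2 for x in data]    # one square per data point
variance = sum(square_areas) / len(square_areas)  # average square area
std_dev = math.sqrt(variance)                     # side of the "average square"
```

For this dataset the mean is 5, the average square area (population variance) is 4, and the side of the average square (standard deviation) is 2, which matches the geometric reading in the paper.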
Hao, Wenrui; Lu, Zhenzhou; Li, Luyi
2013-05-01
In order to explore the contributions by correlated input variables to the variance of the output, a novel interpretation framework of importance measure indices is proposed for a model with correlated inputs, which includes the indices of the total correlated contribution and the total uncorrelated contribution. The proposed indices accurately describe the connotations of the contributions by the correlated input to the variance of output, and they can be viewed as the complement and correction of the interpretation about the contributions by the correlated inputs presented in "Estimation of global sensitivity indices for models with dependent variables, Computer Physics Communications, 183 (2012) 937-946". Both of them contain the independent contribution by an individual input. Taking the general form of quadratic polynomial as an illustration, the total correlated contribution and the independent contribution by an individual input are derived analytically, from which the components and their origins of both contributions of correlated input can be clarified without any ambiguity. In the special case that no square term is included in the quadratic polynomial model, the total correlated contribution by the input can be further decomposed into the variance contribution related to the correlation of the input with other inputs and the independent contribution by the input itself, and the total uncorrelated contribution can be further decomposed into the independent part by interaction between the input and others and the independent part by the input itself. Numerical examples are employed and their results demonstrate that the derived analytical expressions of the variance-based importance measure are correct, and the clarification of the correlated input contribution to model output by the analytical derivation is very important for expanding the theory and solutions of uncorrelated input to those of the correlated one.
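For a linear model the split into correlated and uncorrelated contributions can be written down in closed form. A sketch for Y = a1*X1 + a2*X2 with correlated inputs (coefficients and moments chosen purely for illustration, not taken from the paper):

```python
# Y = a1*X1 + a2*X2 with Var(Xi) = si**2 and Corr(X1, X2) = rho.
a1, a2 = 2.0, 1.0
s1, s2 = 1.0, 3.0
rho = 0.5

var_y = a1**2 * s1**2 + a2**2 * s2**2 + 2 * a1 * a2 * rho * s1 * s2

# Total contribution of X1 = Cov(Y, a1*X1): an uncorrelated (independent)
# part plus a part due to its correlation with X2.
uncorrelated_1 = a1**2 * s1**2
correlated_1 = a1 * a2 * rho * s1 * s2
total_1 = uncorrelated_1 + correlated_1

total_2 = a2**2 * s2**2 + a1 * a2 * rho * s1 * s2
```

In the linear case the two total contributions sum exactly to Var(Y), which is the decomposition property the importance-measure indices generalize to quadratic models.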
A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.
Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio
2017-11-01
Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
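The generative structure described above can be mimicked by first drawing a variance from an inverse gamma distribution and then drawing Gaussian noise with that variance. The shape and scale values below are arbitrary, not the parameters estimated in the paper.

```python
import random

random.seed(42)

ALPHA, BETA = 3.0, 2.0   # arbitrary inverse-gamma shape and scale

def inverse_gamma():
    """If G ~ Gamma(alpha, 1), then beta / G ~ InvGamma(alpha, beta)."""
    return BETA / random.gammavariate(ALPHA, 1.0)

def emg_sample():
    """Zero-mean Gaussian whose variance is itself a random draw."""
    variance = inverse_gamma()
    return random.gauss(0.0, variance ** 0.5)

# The mean of the variance distribution is beta / (alpha - 1) = 1.0 here.
draws = [inverse_gamma() for _ in range(100_000)]
mean_variance = sum(draws) / len(draws)
```

Marginally, such a Gaussian-with-inverse-gamma-variance mixture is heavier-tailed than a plain Gaussian, which is what lets the model capture noise superimposed on the variance.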
International Nuclear Information System (INIS)
Gerke, Oke; Vilstrup, Mie Holm; Segtnan, Eivind Antonsen; Halekoh, Ulrich; Høilund-Carlsen, Poul Flemming
2016-01-01
Quantitative measurement procedures need to be accurate and precise to justify their clinical use. Precision reflects the deviation of groups of measurements from one another, often expressed as proportions of agreement, standard errors of measurement, coefficients of variation, or the Bland-Altman plot. We suggest variance component analysis (VCA) to estimate the influence of errors due to single elements of a PET scan (scanner, time point, observer, etc.) in order to express the composite uncertainty of repeated measurements and obtain relevant repeatability coefficients (RCs), which have a unique relation to Bland-Altman plots. Here, we present this approach for the assessment of intra- and inter-observer variation with PET/CT, exemplified with data from two clinical studies. In study 1, 30 patients were scanned pre-operatively for the assessment of ovarian cancer, and their scans were assessed twice by the same observer to study intra-observer agreement. In study 2, 14 patients with glioma were scanned up to five times. The resulting 49 scans were assessed by three observers to examine inter-observer agreement. Outcome variables were SUVmax in study 1 and cerebral total hemispheric glycolysis (THG) in study 2. In study 1, we found an RC of 2.46, equalling half the width of the Bland-Altman limits of agreement. In study 2, the RC for identical conditions (same scanner, patient, time point, and observer) was 2392; allowing for different scanners increased the RC to 2543. Inter-observer differences were negligible compared to differences owing to other factors; between observers 1 and 2: −10 (95% CI: −352 to 332) and between observers 1 and 3: 28 (95% CI: −313 to 370). VCA is an appealing approach for weighing different sources of variation against each other, summarised as RCs. The involved linear mixed effects models require carefully considered sample sizes to account for the challenge of sufficiently accurately estimating variance components.
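The repeatability coefficient relates to the within-condition standard deviation as RC = 1.96 * sqrt(2) * sigma_w, i.e. half the width of the Bland-Altman limits of agreement. A sketch with simulated paired scans (the true within-condition SD is chosen arbitrarily):

```python
import math
import random

random.seed(7)

TRUE_SD_WITHIN = 1.0   # assumed within-condition measurement SD (illustrative)

# Two repeated measurements per "patient" under identical conditions.
n_patients = 5_000
pairs = []
for _ in range(n_patients):
    true_value = random.gauss(10.0, 3.0)   # between-patient variation
    pairs.append((true_value + random.gauss(0.0, TRUE_SD_WITHIN),
                  true_value + random.gauss(0.0, TRUE_SD_WITHIN)))

# Within-condition variance from paired differences: Var(d) = 2 * sigma_w**2.
diffs = [a - b for a, b in pairs]
mean_d = sum(diffs) / len(diffs)
var_d = sum((d - mean_d) ** 2 for d in diffs) / (len(diffs) - 1)
sigma_w = math.sqrt(var_d / 2.0)

rc = 1.96 * math.sqrt(2.0) * sigma_w   # repeatability coefficient
```

A full VCA, as in the paper, would fit a linear mixed model with scanner, time point, and observer components; the paired-difference estimate above is the simplest special case with a single error source.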
Felleki, M; Lee, D; Lee, Y; Gilmour, A R; Rönnegård, L
2012-12-01
The possibility of breeding for uniform individuals by selecting animals expressing a small response to environment has been studied extensively in animal breeding. Bayesian methods for fitting models with genetic components in the residual variance have been developed for this purpose, but have limitations due to the computational demands. We use the hierarchical (h)-likelihood from the theory of double hierarchical generalized linear models (DHGLM) to derive an estimation algorithm that is computationally feasible for large datasets. Random effects for both the mean and residual variance parts of the model are estimated together with their variance/covariance components. An important feature of the algorithm is that it can fit a correlation between the random effects for mean and variance. An h-likelihood estimator is implemented in the R software and an iterative reweighted least square (IRWLS) approximation of the h-likelihood is implemented using ASReml. The difference in variance component estimates between the two implementations is investigated, as well as the potential bias of the methods, using simulations. IRWLS gives the same results as h-likelihood in simple cases with no severe indication of bias. For more complex cases, only IRWLS could be used, and bias did appear. The IRWLS is applied on the pig litter size data previously analysed by Sorensen & Waagepetersen (2003) using Bayesian methodology. The estimates we obtained by using IRWLS are similar to theirs, with the estimated correlation between the random genetic effects being -0.52 for IRWLS and -0.62 in Sorensen & Waagepetersen (2003).
Sung, Yun Ju; Di, Yanming; Fu, Audrey Q; Rothstein, Joseph H; Sieh, Weiva; Tong, Liping; Thompson, Elizabeth A; Wijsman, Ellen M
2007-01-01
We performed multipoint linkage analyses with multiple programs and models for several gene expression traits in the Centre d'Etude du Polymorphisme Humain families. All analyses provided consistent results for both peak location and shape. Variance-components (VC) analysis gave wider peaks and Bayes factors gave fewer peaks. Among programs from the MORGAN package, lm_multiple performed better than lm_markers, resulting in less Markov-chain Monte Carlo (MCMC) variability between runs, and the program lm_twoqtl provided higher LOD scores by also including either a polygenic component or an additional quantitative trait locus.
Directory of Open Access Journals (Sweden)
Gebregziabher Gebreyohannes
2013-09-01
The objective of this study was to estimate variance components and genetic parameters for lactation milk yield (LY), lactation length (LL), average milk yield per day (YD), initial milk yield (IY), peak milk yield (PY), days to peak (DP) and the parameters (ln(a) and c) of the modified incomplete gamma function (MIG) in an Ethiopian multibreed dairy cattle population. The dataset was composed of 5,507 lactation records collected from 1,639 cows in three locations (Bako, Debre Zeit and Holetta) in Ethiopia from 1977 to 2010. Parameters for MIG were obtained from regression analysis of monthly test-day milk data on days in milk. The cows were purebred (Bos indicus) Boran (B) and Horro (H) and their crosses with different fractions of Friesian (F), Jersey (J) and Simmental (S). There were 23 breed groups (B, H, and their crossbreds with F, J, and S) in the population. Fixed and mixed models were used to analyse the data. The fixed model considered herd-year-season, parity and breed group as fixed effects, and residual as random. The single- and two-trait mixed animal repeatability models considered the fixed effects of herd-year-season and parity subclasses, breed as a function of cow H, F, J, and S breed fractions and general heterosis as a function of heterozygosity, and the random additive animal, permanent environment, and residual effects. For the analysis of LY, LL was added as a fixed covariate to all models. Variance components and genetic parameters were estimated using average information restricted maximum likelihood procedures. The results indicated that all traits were affected (p<0.001) by the considered fixed effects. High-grade B×F cows (3/16B 13/16F) had the highest least squares means (LSM) for LY (2,490±178.9 kg), IY (10.5±0.8 kg), PY (12.7±0.9 kg), YD (7.6±0.55 kg) and LL (361.4±31.2 d), while B cows had the lowest LSM values for these traits. The LSM of LY, IY, YD, and PY tended to increase from the first to the fifth parity. Single-trait analyses
Fan, Weihua; Hancock, Gregory R.
2012-01-01
This study proposes robust means modeling (RMM) approaches for hypothesis testing of mean differences for between-subjects designs in order to control the biasing effects of nonnormality and variance inequality. Drawing from structural equation modeling (SEM), the RMM approaches make no assumption of variance homogeneity and employ robust…
DEFF Research Database (Denmark)
Silvennoinen, Annastiina; Terasvirta, Timo
The topic of this paper is testing the hypothesis of constant unconditional variance in GARCH models against the alternative that the unconditional variance changes deterministically over time. Tests of this hypothesis have previously been performed as misspecification tests after fitting a GARCH [...] model. An application to exchange rate returns is included.
Donoghue, K A; Bird-Gardiner, T; Arthur, P F; Herd, R M; Hegarty, R F
2016-04-01
Ruminants contribute 80% of the global livestock greenhouse gas (GHG) emissions mainly through the production of methane, a byproduct of enteric microbial fermentation primarily in the rumen. Hence, reducing enteric methane production is essential in any GHG emissions reduction strategy in livestock. Data on 1,046 young bulls and heifers from 2 performance-recording research herds of Angus cattle were analyzed to provide genetic and phenotypic variance and covariance estimates for methane emissions and production traits and to examine the interrelationships among these traits. The cattle were fed a roughage diet at 1.2 times their estimated maintenance energy requirements and measured for methane production rate (MPR) in open circuit respiration chambers for 48 h. Traits studied included DMI during the methane measurement period, MPR, and methane yield (MY; MPR/DMI), with means of 6.1 kg/d (SD 1.3), 132 g/d (SD 25), and 22.0 g/kg (SD 2.3) DMI, respectively. Four forms of residual methane production (RMP), which is a measure of actual minus predicted MPR, were evaluated. For the first 3 forms, predicted MPR was calculated using published equations. For the fourth (RMP), predicted MPR was obtained by regression of MPR on DMI. Growth and body composition traits evaluated were birth weight (BWT), weaning weight (WWT), yearling weight (YWT), final weight (FWT), and ultrasound measures of eye muscle area, rump fat depth, rib fat depth, and intramuscular fat. Heritability estimates were moderate for MPR (0.27 [SE 0.07]), MY (0.22 [SE 0.06]), and the RMP traits (0.19 [SE 0.06] for each), indicating that genetic improvement to reduce methane emissions is possible. The RMP traits and MY were strongly genetically correlated with each other (0.99 ± 0.01). The genetic correlation of MPR with MY as well as with the RMP traits was moderate (0.32 to 0.63). The genetic correlation between MPR and the growth traits (except BWT) was strong (0.79 to 0.86). These results indicate that
Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.
DeCarlo, Lawrence T
2003-02-01
The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research.
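The unequal variance normal model has a well-known signature that any fitting procedure should reproduce: the slope of the zROC equals the reciprocal of the signal distribution's standard deviation. A sketch of that relationship (distribution parameters are illustrative, not from the article):

```python
from statistics import NormalDist

# Unequal-variance signal detection: noise ~ N(0, 1), signal ~ N(d, s).
d, s = 1.5, 1.25
noise, signal = NormalDist(0.0, 1.0), NormalDist(d, s)
std = NormalDist()  # standard normal, for z-transforms

def z_roc_point(criterion):
    """Return (z(false-alarm rate), z(hit rate)) for a 'yes above c' rule."""
    fa = 1.0 - noise.cdf(criterion)
    hit = 1.0 - signal.cdf(criterion)
    return std.inv_cdf(fa), std.inv_cdf(hit)

(zf1, zh1), (zf2, zh2) = z_roc_point(0.5), z_roc_point(1.0)
slope = (zh1 - zh2) / (zf1 - zf2)   # equals 1/s for this model
```

The equal-variance model forces this slope to 1; ordinal regression with a scale (heteroscedasticity) term is what lets SPSS PLUM estimate s freely.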
Directory of Open Access Journals (Sweden)
T.C.C. Bittencourt
2002-06-01
Data from the Genetic Improvement Program of the Nellore Breed of the Genetics Department, USP, were used to estimate genetic parameters and breeding values for weights at 365 (P365) and 455 (P455) days of age. Four animal models were used to obtain REML estimates of genetic parameters, aiming to evaluate the effect of including a random maternal genetic effect and a permanent environmental effect on variance component estimates. Model 1 included genetic and residual random effects; models 2 and 3 were based on model 1 but included permanent environmental (2) and maternal genetic (3) effects; model 4 included genetic, maternal and permanent environmental effects. The heritability estimates for P365 were 0.48, 0.32, 0.28 and 0.27 using models 1, 2, 3 and 4, respectively. For P455, the values were 0.48, 0.38, 0.35 and 0.34 with the same models. The comparison between models indicated that maternal effects were not important for variation in P455, but may have some importance for weight at 365 days of age.
Variance component estimation with longitudinal data: a simulation study with alternative methods
Directory of Open Access Journals (Sweden)
Simone Inoe Araujo
2009-01-01
Full Text Available A pedigree structure distributed across three different places was generated. For each offspring, phenotypic information was generated for five different ages (12, 30, 48, 66 and 84 months). The data file was simulated allowing some information to be lost (10, 20, 30 and 40%) by a random process and by selecting the ones with lower phenotypic values, representing the selection effect. Three alternative analyses were used: the repeatability model, the random regression model and the multiple-trait model. Random regression proved more adequate for continually describing the covariance structure of growth over time than single-trait and repeatability models when the correlations between successive measurements on the same individual differed from one another. Without selection, random regression and multiple-trait models were very similar.
Röring, Johan
2017-01-01
Volatility is a common risk measure in the field of finance that describes the magnitude of an asset’s up and down movement. From only being a risk measure, volatility has become an asset class of its own and volatility derivatives enable traders to get an isolated exposure to an asset’s volatility. Two kinds of volatility derivatives are volatility swaps and variance swaps. The problem with volatility swaps and variance swaps is that they require estimations of the future variance and volati...
Measurement error models with uncertainty about the error variance
Oberski, D.L.; Satorra, A.
2013-01-01
It is well known that measurement error in observable variables induces bias in estimates in standard regression analysis and that structural equation models are a typical solution to this problem. Often, multiple indicator equations are subsumed as part of the structural equation model, allowing
Comment on Hoffman and Rovine (2007): SPSS MIXED can estimate models with heterogeneous variances.
Weaver, Bruce; Black, Ryan A
2015-06-01
Hoffman and Rovine (Behavior Research Methods, 39:101-117, 2007) have provided a very nice overview of how multilevel models can be useful to experimental psychologists. They included two illustrative examples and provided both SAS and SPSS commands for estimating the models they reported. However, upon examining the SPSS syntax for the models reported in their Table 3, we found no syntax for models 2B and 3B, both of which have heterogeneous error variances. Instead, there is syntax that estimates similar models with homogeneous error variances and a comment stating that SPSS does not allow heterogeneous errors. But that is not correct. We provide SPSS MIXED commands to estimate models 2B and 3B with heterogeneous error variances and obtain results nearly identical to those reported by Hoffman and Rovine in their Table 3. Therefore, contrary to the comment in Hoffman and Rovine's syntax file, SPSS MIXED can estimate models with heterogeneous error variances.
Variance components for direct and maternal effects on body weights of Katahdin lambs
The aim of this study was to estimate genetic parameters for BW in Katahdin lambs. Six animal models were used to study direct and maternal effects on birth (BWT), weaning (WWT) and postweaning (PWWT) weights using 41,066 BWT, 33,980 WWT, and 22,793 PWWT records collected over 17 yr in 100 flocks. F...
Prediction-error variance in Bayesian model updating: a comparative study
Asadollahi, Parisa; Li, Jian; Huang, Yong
2017-04-01
In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical for robustness in the updating of the structural model, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies to deal with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. Different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model
Variance, Violence, and Democracy: A Basic Microeconomic Model of Terrorism
Directory of Open Access Journals (Sweden)
John A. Sautter
2010-01-01
Full Text Available Much of the debate surrounding contemporary studies of terrorism focuses upon transnational terrorism. However, historical and contemporary evidence suggests that domestic terrorism is a more prevalent and pressing concern. A formal microeconomic model of terrorism is utilized here to understand acts of political violence in a domestic context within the domain of democratic governance. This article builds a very basic microeconomic model of terrorist decision making to hypothesize how a democratic government might influence the sorts of strategies that terrorists use. Mathematical models have been used to explain terrorist behavior in the past. However, the bulk of inquiries in this area have focused only on the relationship between terrorists and the government, or amongst terrorists themselves. Central to the interpretation of the terrorist conflict presented here is the idea that voters (or citizens) are also one of the important determinants of how a government will respond to acts of terrorism.
The Semiparametric Normal Variance-Mean Mixture Model
DEFF Research Database (Denmark)
Korsholm, Lars
1997-01-01
We discuss the normal variance-mean mixture model from a semi-parametric point of view, i.e. we let the mixing distribution belong to a nonparametric family. The main results are consistency of the nonparametric maximum likelihood estimator in this case, and construction of an asymptotically normal and efficient estimator.
On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models
Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.
2017-12-01
Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
Use of genomic models to study genetic control of environmental variance
DEFF Research Database (Denmark)
Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel
2011-01-01
The genomic model commonly found in the literature, with marker effects affecting the mean only, is extended to investigate putative effects at the level of the environmental variance. Two classes of models are proposed, and their behaviour, studied using simulated data, indicates that they are capable of detecting genetic variation at the level of mean and variance. Implementation is via Markov chain Monte Carlo (McMC) algorithms. The models are compared in terms of a measure of global fit, in their ability to detect QTL effects and in terms of their predictive power. The models are subsequently fitted to back fat thickness data in pigs. The analysis of back fat thickness shows that the data support genomic models with effects on the mean but not on the variance. The relative sizes of experiments necessary to detect effects on mean and variance are discussed, and an extension of the McMC algorithm
Approximation errors during variance propagation
International Nuclear Information System (INIS)
Dinsmore, Stephen
1986-01-01
Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurring are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated, and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given
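As a minimal illustration of the kind of approximation the abstract above examines, the sketch below compares the exact variance of a two-event OR-gate top event with its first-order (delta-method) approximation, against a Monte Carlo check. The gate, the uniform distributions and all numeric values are illustrative assumptions, not taken from the paper:

```python
import random

def exact_var(mx, vx, my, vy):
    # Exact variance of the OR-gate top event T = X + Y - X*Y
    # for independent basic-event probabilities X and Y.
    ex2 = vx + mx * mx          # E[X^2]
    ey2 = vy + my * my          # E[Y^2]
    et = mx + my - mx * my      # E[T]
    et2 = (ex2 + ey2 + ex2 * ey2 + 2 * mx * my
           - 2 * ex2 * my - 2 * mx * ey2)   # E[T^2]
    return et2 - et * et

def first_order_var(mx, vx, my, vy):
    # Delta method: Var(T) ~= (dT/dX)^2 Var(X) + (dT/dY)^2 Var(Y) at the means.
    return (1 - my) ** 2 * vx + (1 - mx) ** 2 * vy

# Illustrative basic-event probabilities as uniform random variables
ax, bx = 0.05, 0.25   # X ~ U(0.05, 0.25)
ay, by = 0.10, 0.40   # Y ~ U(0.10, 0.40)
mx, vx = (ax + bx) / 2, (bx - ax) ** 2 / 12
my, vy = (ay + by) / 2, (by - ay) ** 2 / 12

random.seed(1)
n = 200_000
samples = []
for _ in range(n):
    x = random.uniform(ax, bx)
    y = random.uniform(ay, by)
    samples.append(x + y - x * y)
mean_mc = sum(samples) / n
var_mc = sum((s - mean_mc) ** 2 for s in samples) / (n - 1)

print(round(exact_var(mx, vx, my, vy), 6),
      round(first_order_var(mx, vx, my, vy), 6),
      round(var_mc, 6))
```

For this simple gate the exact variance exceeds the first-order value by exactly Var(X)·Var(Y), a concrete example of the error such propagation approximations incur; on large trees the discrepancy pattern is what the paper maps out.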
Component Reification in Systems Modelling
DEFF Research Database (Denmark)
Bendisposto, Jens; Hallerstede, Stefan
When modelling concurrent or distributed systems in Event-B, we often obtain models where the structure of the connected components is specified by constants. Their behaviour is specified by the non-deterministic choice of event parameters for events that operate on shared variables. From a certain ...? These components may still refer to shared variables. Events of these components should not refer to the constants specifying the structure. The non-deterministic choice between these components should not be via parameters. We say the components are reified. We need to address how the reified components get reflected into the original model. This reflection should indicate the constraints on how to connect the components.
Component Composition Using Feature Models
DEFF Research Database (Denmark)
Eichberg, Michael; Klose, Karl; Mitschke, Ralf
2010-01-01
interface description languages. If this variability is relevant when selecting a matching component, then human interaction is required to decide which components can be bound. We propose to use feature models for making this variability explicit and (re-)enabling automatic component binding. In our approach, feature models are one part of service specifications. This enables one to declaratively specify which service variant is provided by a component. By referring to a service's variation points, a component that requires a specific service can list the requirements on the desired variant. Using these specifications, a component environment can then determine if a binding of the components exists that satisfies all requirements. The prototypical environment Columbus demonstrates the feasibility of the approach.
The modified Black-Scholes model via constant elasticity of variance for stock options valuation
Edeki, S. O.; Owoloko, E. A.; Ugbebor, O. O.
2016-02-01
In this paper, the classical Black-Scholes option pricing model is visited. We present a modified version of the Black-Scholes model via the application of the constant elasticity of variance model (CEVM); in this case, the volatility of the stock price is shown to be a non-constant function unlike the assumption of the classical Black-Scholes model.
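The non-constant volatility at the heart of the CEV formulation above can be sketched as follows. The local volatility function and the Euler-Maruyama simulation are a generic illustration under assumed parameter values, not the paper's specific modified Black-Scholes derivation:

```python
import math, random

def cev_local_vol(S, sigma, beta):
    """Instantaneous volatility of returns under CEV: sigma * S**(beta - 1).
    With beta = 1 this is constant, recovering the classical Black-Scholes case."""
    return sigma * S ** (beta - 1)

def simulate_cev_path(S0, r, sigma, beta, T, n_steps, rng):
    # Euler-Maruyama discretisation of dS = r*S*dt + sigma*S**beta dW
    dt = T / n_steps
    S = S0
    for _ in range(n_steps):
        dW = rng.gauss(0.0, math.sqrt(dt))
        S += r * S * dt + sigma * S ** beta * dW
        S = max(S, 1e-12)   # keep the discretised path positive
    return S

# Illustrative parameters: beta = 0.5, sigma chosen so that the local
# volatility at S0 = 100 equals 20% (2.0 * 100**(-0.5) = 0.2).
rng = random.Random(42)
paths = [simulate_cev_path(100.0, 0.05, 2.0, 0.5, 1.0, 250, rng)
         for _ in range(2000)]
call_price = math.exp(-0.05) * sum(max(s - 100.0, 0.0) for s in paths) / len(paths)
print(round(call_price, 2))
```

With beta < 1 the local volatility rises as the price falls, which is exactly the departure from the constant-volatility assumption that the abstract highlights.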
Using the Superpopulation Model for Imputations and Variance Computation in Survey Sampling
Directory of Open Access Journals (Sweden)
Petr Novák
2012-03-01
Full Text Available This study is aimed at variance computation techniques for estimates of population characteristics based on survey sampling and imputation. We use the superpopulation regression model, which means that the target variable values for each statistical unit are treated as random realizations of a linear regression model with weighted variance. We focus on regression models with one auxiliary variable and no intercept, which have many applications and straightforward interpretation in business statistics. Furthermore, we deal with cases where the estimates are not independent and thus the covariance must be computed. We also consider chained regression models with auxiliary variables as random variables instead of constants.
Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad
2018-02-01
The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of CVEs follows a Chi-squared distribution. Furthermore, an a posteriori noise variance factor is derived from the quadratic form of CVEs. In order to detect blunders in the observations, the estimated standardized CVE is proposed as the test statistic, which can be applied when noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence in the northern coast of the Gulf of Mexico. The results show that after detecting and removing outliers, the root mean square (RMS) of CVEs and the estimated noise standard deviation are reduced by about 51 and 59%, respectively. In addition, the RMS of the LSC prediction error at data points and the RMS of the estimated noise of observations are decreased by 39 and 67%, respectively. However, the RMS of the LSC prediction error on a regular grid of interpolation points covering the area is only reduced by about 4%, which is a consequence of the sparse distribution of data points for this case study. The influence of gross errors on LSC prediction results is also investigated by lower cutoff CVEs. It is indicated that after elimination of outliers, the RMS of this type of error is also reduced by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of the dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using the restricted maximum-likelihood method via the Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the
Option valuation with the simplified component GARCH model
DEFF Research Database (Denmark)
Dziubinski, Matt P.
We introduce the Simplified Component GARCH (SC-GARCH) option pricing model, show and discuss sufficient conditions for non-negativity of the conditional variance, apply it to low-frequency and high-frequency financial data, and consider the option valuation, comparing the model performance...
A principal components model of soundscape perception.
Axelsson, Östen; Nilsson, Mats E; Berglund, Birgitta
2010-11-01
There is a need for a model that identifies underlying dimensions of soundscape perception, and which may guide measurement and improvement of soundscape quality. With the purpose to develop such a model, a listening experiment was conducted. One hundred listeners measured 50 excerpts of binaural recordings of urban outdoor soundscapes on 116 attribute scales. The average attribute scale values were subjected to principal components analysis, resulting in three components: Pleasantness, eventfulness, and familiarity, explaining 50, 18 and 6% of the total variance, respectively. The principal-component scores were correlated with physical soundscape properties, including categories of dominant sounds and acoustic variables. Soundscape excerpts dominated by technological sounds were found to be unpleasant, whereas soundscape excerpts dominated by natural sounds were pleasant, and soundscape excerpts dominated by human sounds were eventful. These relationships remained after controlling for the overall soundscape loudness (Zwicker's N(10)), which shows that 'informational' properties are substantial contributors to the perception of soundscape. The proposed principal components model provides a framework for future soundscape research and practice. In particular, it suggests which basic dimensions are necessary to measure, how to measure them by a defined set of attribute scales, and how to promote high-quality soundscapes.
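The variance-explained decomposition underlying the soundscape model above can be illustrated on a toy two-variable data set, where the eigenvalues of the 2x2 covariance matrix have a closed form. The data below are synthetic; the study itself used 116 attribute scales and full principal components analysis:

```python
import math, random

def principal_component_shares(xs, ys):
    """Proportion of total variance explained by each principal component of a
    two-variable data set (closed-form eigenvalues of the 2x2 covariance matrix)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    half_trace = (sxx + syy) / 2
    disc = math.sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
    lam1, lam2 = half_trace + disc, half_trace - disc   # eigenvalues, lam1 >= lam2
    total = lam1 + lam2
    return lam1 / total, lam2 / total

# Synthetic correlated attributes: most variance falls on the first component
rng = random.Random(0)
xs = [rng.gauss(0, 1) for _ in range(500)]
ys = [0.8 * x + rng.gauss(0, 0.5) for x in xs]
share1, share2 = principal_component_shares(xs, ys)
print(round(share1, 2), round(share2, 2))
```

The shares sum to one by construction, mirroring how the paper reports 50, 18 and 6% of total variance for its three components.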
Asymmetries in conditional mean and variance: Modelling stock returns by asMA-asQGARCH
Brännäs, K.; de Gooijer, J.G.
2000-01-01
The asymmetric moving average model (asMA) is extended to allow for asymmetric quadratic conditional heteroskedasticity (asQGARCH). The asymmetric parametrization of the conditional variance encompasses the quadratic GARCH model of Sentana (1995). We introduce a framework for testing asymmetries
DEFF Research Database (Denmark)
Lowes, F.J.; Olsen, Nils
2004-01-01
Most modern spherical harmonic geomagnetic models based on satellite data include estimates of the variances of the spherical harmonic coefficients of the model; these estimates are based on the geometry of the data and the fitting functions, and on the magnitude of the residuals. However...
Asymmetries in conditional mean variance: modelling stock returns by asMA-asQGARCH
de Gooijer, J.G.; Brännäs, K.
2004-01-01
We propose a nonlinear time series model where both the conditional mean and the conditional variance are asymmetric functions of past information. The model is particularly useful for analysing financial time series where it has been noted that there is an asymmetric impact of good news and bad
Stochastic Fractional Programming Approach to a Mean and Variance Model of a Transportation Problem
Directory of Open Access Journals (Sweden)
V. Charles
2011-01-01
Full Text Available In this paper, we propose a stochastic programming model which considers a ratio of two nonlinear functions and probabilistic constraints. Earlier work proposed only an expected-value model, without accounting for variability; conversely, the variance model treats variability without regard to its counterpart, the expected-value model. Further, the expected-value model optimizes the ratio of two linear cost functions, whereas the variance model optimizes the ratio of two nonlinear functions; the stochastic nature of the numerator and denominator, together with consideration of both expectation and variability, leads to a nonlinear fractional program. In this paper, a transportation model with a stochastic fractional programming (SFP) approach is proposed, which strikes a balance between the previous models available in the literature.
Downside Variance Risk Premium
Feunou, Bruno; Jahan-Parvar, Mohammad; Okou, Cedric
2015-01-01
We propose a new decomposition of the variance risk premium in terms of upside and downside variance risk premia. The difference between upside and downside variance risk premia is a measure of skewness risk premium. We establish that the downside variance risk premium is the main component of the variance risk premium, and that the skewness risk premium is a priced factor with significant prediction power for aggregate excess returns. Our empirical investigation highlights the positive and s...
Analysis of Gene Expression Variance in Schizophrenia Using Structural Equation Modeling
Directory of Open Access Journals (Sweden)
Anna A. Igolkina
2018-06-01
Full Text Available Schizophrenia (SCZ) is a psychiatric disorder of unknown etiology. There is evidence suggesting that aberrations in neurodevelopment are a significant attribute of schizophrenia pathogenesis and progression. To identify biologically relevant molecular abnormalities affecting neurodevelopment in SCZ, we used cultured neural progenitor cells derived from olfactory neuroepithelium (CNON) cells. Here, we tested the hypothesis that variance in gene expression differs between individuals from SCZ and control groups. In CNON cells, variance in gene expression was significantly higher in SCZ samples in comparison with control samples. Variance in gene expression was enriched in five molecular pathways: serine biosynthesis, PI3K-Akt, MAPK, neurotrophin and focal adhesion. More than 14% of variance in disease status was explained within the logistic regression model (C-value = 0.70) by predictors accounting for gene expression in 69 genes from these five pathways. Structural equation modeling (SEM) was applied to explore how the structure of these five pathways was altered between SCZ patients and controls. Four out of five pathways showed differences in the estimated relationships among genes: between KRAS and NF1, and KRAS and SOS1 in the MAPK pathway; between PSPH and SHMT2 in serine biosynthesis; between AKT3 and TSC2 in the PI3K-Akt signaling pathway; and between CRK and RAPGEF1 in the focal adhesion pathway. Our analysis provides evidence that variance in gene expression is an important characteristic of SCZ, and SEM is a promising method for uncovering altered relationships between specific genes, thus suggesting affected gene regulation associated with the disease. We identified altered gene-gene interactions in pathways enriched for genes with increased variance in expression in SCZ. These pathways and loci were previously implicated in SCZ, providing further support for the hypothesis that gene expression variance plays an important role in the etiology
Sensitivity analysis using contribution to sample variance plot: Application to a water hammer model
International Nuclear Information System (INIS)
Tarantola, S.; Kopustinskas, V.; Bolado-Lavin, R.; Kaliatka, A.; Ušpuras, E.; Vaišnoras, M.
2012-01-01
This paper presents “contribution to sample variance plot”, a natural extension of the “contribution to the sample mean plot”, which is a graphical tool for global sensitivity analysis originally proposed by Sinclair. These graphical tools have a great potential to display graphically sensitivity information given a generic input sample and its related model realizations. The contribution to the sample variance can be obtained at no extra computational cost, i.e. from the same points used for deriving the contribution to the sample mean and/or scatter-plots. The proposed approach effectively instructs the analyst on how to achieve a targeted reduction of the variance, by operating on the extremes of the input parameters' ranges. The approach is tested against a known benchmark for sensitivity studies, the Ishigami test function, and a numerical model simulating the behaviour of a water hammer effect in a piping system.
Modeling the subfilter scalar variance for large eddy simulation in forced isotropic turbulence
Cheminet, Adam; Blanquart, Guillaume
2011-11-01
Static and dynamic models for the subfilter scalar variance in homogeneous isotropic turbulence are investigated using direct numerical simulations (DNS) of a linearly forced passive scalar field. First, we introduce a new scalar forcing technique conditioned only on the scalar field which allows the fluctuating scalar field to reach a statistically stationary state. Statistical properties, including second and third statistical moments, spectra, and probability density functions of the scalar field have been analyzed. Using this technique, we performed constant-density and variable-density DNS of scalar mixing in isotropic turbulence. The results are used in an a priori study of scalar variance models. Emphasis is placed on further studying the dynamic model introduced by G. Balarac, H. Pitsch and V. Raman [Phys. Fluids 20, (2008)]. Scalar variance models based on Bedford and Yeo's expansion are accurate for small filter widths, but errors arise in the inertial subrange. Results suggest that a constant coefficient computed from an assumed Kolmogorov spectrum is often sufficient to predict the subfilter scalar variance.
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik
2014-01-01
This paper presents a decomposition algorithm for solving the optimal control problem (OCP) that arises in Mean-Variance Economic Model Predictive Control of stochastic linear systems. The algorithm applies the alternating direction method of multipliers to a reformulation of the OCP...
Toward a more robust variance-based global sensitivity analysis of model outputs
Energy Technology Data Exchange (ETDEWEB)
Tong, C
2007-10-15
Global sensitivity analysis (GSA) measures the variation of a model output as a function of the variations of the model inputs given their ranges. In this paper we consider variance-based GSA methods that do not rely on certain assumptions about the model structure such as linearity or monotonicity. These variance-based methods decompose the output variance into terms of increasing dimensionality called 'sensitivity indices', first introduced by Sobol' [25]. Sobol' developed a method of estimating these sensitivity indices using Monte Carlo simulations. McKay [13] proposed an efficient method using replicated Latin hypercube sampling to compute the 'correlation ratios' or 'main effects', which have been shown to be equivalent to Sobol's first-order sensitivity indices. Practical issues with using these variance estimators are how to choose adequate sample sizes and how to assess the accuracy of the results. This paper proposes a modified McKay main effect method featuring an adaptive procedure for accuracy assessment and improvement. We also extend our adaptive technique to the computation of second-order sensitivity indices. Details of the proposed adaptive procedure as well as numerical results are included in this paper.
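The 'main effect' (first-order sensitivity index) idea above can be sketched on the Ishigami test function with a simple binning estimate of Var(E[Y|Xi])/Var(Y). This is a plain correlation-ratio estimator under assumed sample and bin sizes, not the adaptive or replicated-LHS procedures the report develops:

```python
import math, random

def ishigami(x1, x2, x3, a=7.0, b=0.1):
    # Standard Ishigami test function with inputs uniform on (-pi, pi)
    return math.sin(x1) + a * math.sin(x2) ** 2 + b * x3 ** 4 * math.sin(x1)

def first_order_index(xs, ys, n_bins=40):
    """Main-effect (first-order Sobol') index via the correlation ratio
    Var(E[Y | X_i]) / Var(Y), with the conditional mean estimated by binning."""
    n = len(ys)
    order = sorted(range(n), key=lambda i: xs[i])
    mean_y = sum(ys) / n
    var_y = sum((y - mean_y) ** 2 for y in ys) / n
    bin_size = n // n_bins
    var_cond_mean = 0.0
    for k in range(n_bins):
        idx = order[k * bin_size:(k + 1) * bin_size]
        m = sum(ys[i] for i in idx) / len(idx)
        var_cond_mean += len(idx) * (m - mean_y) ** 2
    return var_cond_mean / n / var_y

rng = random.Random(7)
n = 40000
x = [[rng.uniform(-math.pi, math.pi) for _ in range(3)] for _ in range(n)]
y = [ishigami(*xi) for xi in x]
s = [first_order_index([xi[j] for xi in x], y) for j in range(3)]
print([round(v, 2) for v in s])
```

For a = 7, b = 0.1 the analytical first-order indices are roughly 0.31, 0.44 and 0, so the third input contributes to the output variance only through interactions.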
Models of Postural Control: Shared Variance in Joint and COM Motions.
Directory of Open Access Journals (Sweden)
Melissa C Kilby
Full Text Available This paper investigated the organization of the postural control system in human upright stance. To this aim, the shared variance between joint and 3D total body center of mass (COM) motions was analyzed using multivariate canonical correlation analysis (CCA). The CCA was performed as a function of established models of postural control that varied in their joint degrees of freedom (DOF), namely, an inverted pendulum ankle model (2DOF), ankle-hip model (4DOF), ankle-knee-hip model (5DOF), and ankle-knee-hip-neck model (7DOF). Healthy young adults performed various postural tasks (two-leg and one-leg quiet stances, voluntary AP and ML sway) on a foam and rigid surface of support. Based on the CCA model selection procedures, the amount of shared variance between joint and 3D COM motions, and the cross-loading patterns, we provide direct evidence of the contribution of multi-DOF postural control mechanisms to human balance. The direct model fitting of CCA showed that incrementing the DOFs in the model through to 7DOF was associated with progressively enhanced shared variance with COM motion. In the 7DOF model, the first canonical function revealed more active involvement of all joints during more challenging one-leg stances and dynamic posture tasks. Furthermore, the shared variance was enhanced during the dynamic posture conditions, consistent with a reduction of dimension. This set of outcomes shows directly the degeneracy of multivariate joint regulation in postural control that is influenced by stance and surface of support conditions.
DEFF Research Database (Denmark)
Ashraf, Bilal; Janss, Luc; Jensen, Just
sample). The GBSeq data can be used directly in genomic models in the form of individual SNP allele-frequency estimates (e.g., reference reads/total reads per polymorphic site per individual), but is subject to measurement error due to the low sequencing depth per individual. Due to technical reasons ... In the current work we show how the correction for measurement error in GBSeq can also be applied in whole-genome genomic variance and genomic prediction models. Bayesian whole-genome random regression models are proposed to allow implementation of large-scale SNP-based models with a per-SNP correction for measurement error. We show correct retrieval of genomic explained variance, and improved genomic prediction when accounting for the measurement error in GBSeq data.
Complementary responses to mean and variance modulations in the perfect integrate-and-fire model.
Pressley, Joanna; Troyer, Todd W
2009-07-01
In the perfect integrate-and-fire model (PIF), the membrane voltage is proportional to the integral of the input current since the time of the previous spike. It has been shown that the firing rate within a noise free ensemble of PIF neurons responds instantaneously to dynamic changes in the input current, whereas in the presence of white noise, model neurons preferentially pass low frequency modulations of the mean current. Here, we prove that when the input variance is perturbed while holding the mean current constant, the PIF responds preferentially to high frequency modulations. Moreover, the linear filters for mean and variance modulations are complementary, adding exactly to one. Since changes in the rate of Poisson distributed inputs lead to proportional changes in the mean and variance, these results imply that an ensemble of PIF neurons transmits a perfect replica of the time-varying input rate for Poisson distributed input. A more general argument shows that this property holds for any signal leading to proportional changes in the mean and variance of the input current.
Mean-variance model for portfolio optimization with background risk based on uncertainty theory
Zhai, Jia; Bai, Manying
2018-04-01
The aim of this paper is to develop a mean-variance model for portfolio optimization considering background risk, liquidity and transaction cost based on uncertainty theory. In the portfolio selection problem, returns of securities and asset liquidity are assumed to be uncertain variables because of incidents or lack of historical data, which are common in economic and social environments. We provide crisp forms of the model and a hybrid intelligent algorithm to solve it. Under a mean-variance framework, we analyze the portfolio frontier characteristic considering independently additive background risk. In addition, we discuss some effects of background risk and liquidity constraints on the portfolio selection. Finally, we demonstrate the proposed models by numerical simulations.
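For reference, the classical mean-variance machinery that the model above extends can be sketched for two risky assets, where the minimum-variance weight has a closed form. The returns, volatilities and correlation below are hypothetical, and background risk, liquidity and transaction costs are deliberately left out:

```python
def min_variance_weight(s1, s2, rho):
    """Weight on asset 1 minimising the variance of a two-asset portfolio
    (classical Markowitz closed form; no background risk or costs)."""
    cov = rho * s1 * s2
    return (s2 ** 2 - cov) / (s1 ** 2 + s2 ** 2 - 2 * cov)

def portfolio_stats(w, m1, m2, s1, s2, rho):
    """Expected return and variance of the portfolio w*asset1 + (1-w)*asset2."""
    mean = w * m1 + (1 - w) * m2
    var = (w * s1) ** 2 + ((1 - w) * s2) ** 2 + 2 * w * (1 - w) * rho * s1 * s2
    return mean, var

# Hypothetical assets: 8% vs 12% expected return, 15% vs 25% volatility, rho = 0.3
w = min_variance_weight(0.15, 0.25, 0.3)
p_mean, p_var = portfolio_stats(w, 0.08, 0.12, 0.15, 0.25, 0.3)
print(round(w, 3), round(p_mean, 4), round(p_var, 5))
```

The resulting portfolio variance sits below that of either asset alone, which is the diversification effect the uncertain-variable model generalises.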
Wright, George W; Simon, Richard M
2003-12-12
Microarray techniques provide a valuable way of characterizing the molecular nature of disease. Unfortunately expense and limited specimen availability often lead to studies with small sample sizes. This makes accurate estimation of variability difficult, since variance estimates made on a gene by gene basis will have few degrees of freedom, and the assumption that all genes share equal variance is unlikely to be true. We propose a model by which the within gene variances are drawn from an inverse gamma distribution, whose parameters are estimated across all genes. This results in a test statistic that is a minor variation of those used in standard linear models. We demonstrate that the model assumptions are valid on experimental data, and that the model has more power than standard tests to pick up large changes in expression, while not increasing the rate of false positives. This method is incorporated into BRB-ArrayTools version 3.0 (http://linus.nci.nih.gov/BRB-ArrayTools.html). ftp://linus.nci.nih.gov/pub/techreport/RVM_supplement.pdf
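The inverse-gamma idea above can be sketched via standard conjugacy: if sigma^2 has an InvGamma(a, b) prior and the scaled sample sum of squares is chi-squared given sigma^2, the posterior mean pulls each per-gene variance toward the prior. This shows only the shrinkage mechanism; the paper's actual test statistic and its cross-gene estimation of (a, b) are not reproduced here, and the numbers are illustrative:

```python
def shrunken_variance(s2, df, a, b):
    """Posterior-mean variance when sigma^2 ~ InvGamma(a, b) a priori and the
    sample sum of squares df*s2 is chi-squared distributed given sigma^2."""
    a_post = a + df / 2.0
    b_post = b + df * s2 / 2.0
    return b_post / (a_post - 1.0)     # mean of InvGamma(a_post, b_post)

# Hypothetical prior fitted across all genes: prior mean b/(a-1) = 1.0
a, b = 3.0, 2.0
prior_mean = b / (a - 1.0)

# With few degrees of freedom a noisy high estimate is pulled toward the prior;
# with many degrees of freedom the data dominate and little shrinkage occurs.
print(shrunken_variance(4.0, 2, a, b))
print(shrunken_variance(4.0, 50, a, b))
```

Borrowing strength across genes in this way stabilises the per-gene variance estimates that, as the abstract notes, would otherwise have very few degrees of freedom.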
Energy Technology Data Exchange (ETDEWEB)
Pindoriya, N.M.; Singh, S.N. [Department of Electrical Engineering, Indian Institute of Technology Kanpur, Kanpur 208016 (India); Singh, S.K. [Indian Institute of Management Lucknow, Lucknow 226013 (India)
2010-10-15
This paper proposes an approach to generation portfolio allocation based on the mean-variance-skewness (MVS) model, an extension of classical mean-variance (MV) portfolio theory that handles assets whose return distributions are non-normal. The MVS model allocates portfolios optimally by maximizing both the expected return and the skewness of the portfolio return while simultaneously minimizing the risk. Since this is a competing and conflicting non-smooth multi-objective optimization problem, the paper employs a multi-objective particle swarm optimization (MOPSO) meta-heuristic to provide Pareto-optimal solutions in a single simulation run. Using a case study of the PJM electricity market, the performance of the MVS-based method and the classical MV method is compared. It is found that the MVS-based method can provide significantly better portfolios when non-normally distributed assets are available for trading. (author)
On estimation of the noise variance in high-dimensional linear models
Golubev, Yuri; Krymova, Ekaterina
2017-01-01
We consider the problem of recovering the unknown noise variance in a linear regression model. To estimate the nuisance parameter (a vector of regression coefficients) we use a family of spectral regularisers of the maximum likelihood estimator. The noise estimation is based on adaptive normalisation of the squared error. We derive an upper bound on the concentration of the proposed method around the ideal estimator (the case of zero nuisance).
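A simple member of this family is ridge regression, where the squared residual must be normalised by the effective rather than nominal degrees of freedom. The sketch below is a stand-in for the paper's adaptive normalisation, not its exact estimator, and the data are simulated for illustration.

```python
import numpy as np

def noise_variance_ridge(X, y, lam):
    """Noise-variance estimate from ridge residuals, normalised by the
    effective residual degrees of freedom n - tr(H), where H is the
    ridge hat matrix. Plain division by n would be biased downward."""
    n, p = X.shape
    G = X.T @ X + lam * np.eye(p)
    beta = np.linalg.solve(G, X.T @ y)
    H = X @ np.linalg.solve(G, X.T)
    resid = y - X @ beta
    return float(resid @ resid) / (n - np.trace(H))

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = X @ np.array([1.0, -1.0, 0.5, 0.0, 2.0]) + rng.standard_normal(200)
print(noise_variance_ridge(X, y, lam=0.1))  # close to the true value 1.0
```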
Anderson, David F; Yuan, Chaojie
2018-04-18
A number of coupling strategies are presented for stochastically modeled biochemical processes with time-dependent parameters. In particular, the stacked coupling is introduced and is shown via a number of examples to provide an exceptionally low variance between the generated paths. This coupling will be useful in the numerical computation of parametric sensitivities and the fast estimation of expectations via multilevel Monte Carlo methods. We provide the requisite estimators in both cases.
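The variance advantage of coupling paths can be seen even in a toy setting; the model below is an illustrative stand-in, not the paper's stacked coupling for time-inhomogeneous reaction networks. Both estimators target the same parametric sensitivity, but reusing the randomness across the two parameter values makes the difference far less noisy.

```python
import numpy as np

rng = np.random.default_rng(0)

def output(theta, z):
    # toy stochastic model output; stands in for a simulated reaction path
    return (theta + z) ** 2

n, h, theta = 100_000, 0.1, 1.0
z = rng.standard_normal(n)

# coupled: both parameter values see the same randomness
coupled = output(theta + h, z) - output(theta, z)
# independent: fresh randomness for each parameter value
independent = output(theta + h, z) - output(theta, rng.standard_normal(n))

# same finite-difference sensitivity estimate, very different variances
print(coupled.mean() / h, coupled.var(), independent.var())
```

The same mechanism drives multilevel Monte Carlo: coupled fine and coarse paths make the correction terms cheap to estimate accurately.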
Forecasting the variance and return of Mexican financial series with symmetric GARCH models
Directory of Open Access Journals (Sweden)
Fátima Irina VILLALBA PADILLA
2013-03-01
The present research shows the application of generalized autoregressive conditional heteroskedasticity (GARCH) models to forecast the variance and return of the IPC, the EMBI, the weighted-average government funding rate, the fix exchange rate and the Mexican oil reference, as important tools for investment decisions. Forecasts are performed both in-sample and out-of-sample. The period covered runs from 2005 to 2011.
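A symmetric GARCH(1,1) variance filter and multi-step forecast can be sketched as below. The parameter values in the test are placeholders; in a study like this one they would be fit to each series by maximum likelihood before forecasting.

```python
import numpy as np

def garch11_forecast(returns, omega, alpha, beta, horizon=5):
    """Filter the GARCH(1,1) conditional variance through the sample,
        sigma2_{t+1} = omega + alpha * r_t**2 + beta * sigma2_t,
    then iterate the recursion forward using E[r_t**2] = sigma2_t."""
    sigma2 = float(np.var(returns))   # initialise at the sample variance
    for r in returns:
        sigma2 = omega + alpha * r ** 2 + beta * sigma2
    forecasts = []
    for _ in range(horizon):
        forecasts.append(sigma2)
        sigma2 = omega + (alpha + beta) * sigma2
    return forecasts
```

For alpha + beta < 1 the forecasts decay geometrically toward the unconditional variance omega / (1 - alpha - beta), which is why GARCH variance forecasts mean-revert.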
Directory of Open Access Journals (Sweden)
Talerngsak Angkuraseranee
2010-05-01
The additive and dominance genetic variances of 5,801 Duroc reproductive and growth records were estimated using BLUPF90 PC-PACK. Estimates were obtained for number born alive (NBA), birth weight (BW), number weaned (NW), and weaning weight (WW). Data were analyzed using two mixed model equations. The first model included fixed effects and random effects identifying inbreeding depression, the additive gene effect and permanent environmental effects. The second model was similar to the first, but also included the dominance genotypic effect. Heritability estimates of NBA, BW, NW and WW from the two models were 0.1558/0.1716, 0.1616/0.1737, 0.0372/0.0874 and 0.1584/0.1516, respectively. Proportions of the dominance effect in total phenotypic variance from the dominance model were 0.1024, 0.1625, 0.0470, and 0.1536 for NBA, BW, NW and WW, respectively. Dominance effects were found to have a sizable influence on the litter size traits analyzed. Therefore, genetic evaluation with the dominance model (Model 2) is more appropriate than with the animal model (Model 1).
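The reported ratios follow directly from the estimated variance components. The component names and numbers below are generic placeholders, not the study's estimates, standing in for the additive, dominance, permanent-environment and residual terms of the dominance model.

```python
def variance_ratios(var_additive, var_dominance, var_perm_env, var_residual):
    """Narrow-sense heritability h2 = sigma2_A / sigma2_P and the
    dominance share d2 = sigma2_D / sigma2_P of phenotypic variance."""
    var_p = var_additive + var_dominance + var_perm_env + var_residual
    return var_additive / var_p, var_dominance / var_p

# illustrative components only
h2, d2 = variance_ratios(0.15, 0.10, 0.05, 0.70)
print(h2, d2)  # 0.15 0.1
```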
Andrianakis, I; Vernon, I; McCreesh, N; McKinley, T J; Oakley, J E; Nsubuga, R N; Goldstein, M; White, R G
2017-08-01
Complex stochastic models are commonplace in epidemiology, but their utility depends on their calibration to empirical data. History matching is a (pre)calibration method that has been applied successfully to complex deterministic models. In this work, we adapt history matching to stochastic models, by emulating the variance in the model outputs, and therefore accounting for its dependence on the model's input values. The method proposed is applied to a real complex epidemiological model of human immunodeficiency virus in Uganda with 22 inputs and 18 outputs, and is found to increase the efficiency of history matching, requiring 70% of the time and 43% fewer simulator evaluations compared with a previous variant of the method. The insight gained into the structure of the human immunodeficiency virus model, and the constraints placed on it, are then discussed.
A spatial mean-variance MIP model for energy market risk analysis
International Nuclear Information System (INIS)
Yu, Zuwei
2003-01-01
The paper presents a short-term market risk model based on the Markowitz mean-variance method for spatial electricity markets. The spatial nature is captured using the correlation of geographically separated markets and the consideration of wheeling administration. The model also includes transaction costs and other practical constraints, resulting in a mixed integer programming (MIP) model. The incorporation of those practical constraints makes the model more attractive than the traditional Markowitz portfolio model with continuity. A case study is used to illustrate the practical application of the model. The results show that the MIP portfolio efficient frontier is neither smooth nor concave. The paper also considers the possible extension of the model to other energy markets, including natural gas and oil markets.
GSEVM v.2: MCMC software to analyse genetically structured environmental variance models
DEFF Research Database (Denmark)
Ibáñez-Escriche, N; Garcia, M; Sorensen, D
2010-01-01
This note describes software that allows the user to fit Bayesian genetically structured variance models using Markov chain Monte Carlo (MCMC). The gsevm v.2 program was written in Fortran 90. The DOS and Unix executable programs, the user's guide, and some example files are freely available...... for research purposes at http://www.bdporc.irta.es/estudis.jsp. The main feature of the program is to compute Monte Carlo estimates of marginal posterior distributions of parameters of interest. The program is quite flexible, allowing the user to fit a variety of linear models at the level of the mean...
Seismic attenuation relationship with homogeneous and heterogeneous prediction-error variance models
Mu, He-Qing; Xu, Rong-Rong; Yuen, Ka-Veng
2014-03-01
Peak ground acceleration (PGA) estimation is an important task in earthquake engineering practice. One of the most well-known models is the Boore-Joyner-Fumal formula, which estimates the PGA using the moment magnitude, the site-to-fault distance and the site foundation properties. In the present study, the complexity of this formula and the homogeneity assumption for the prediction-error variance are investigated, and an efficiency-robustness balanced formula is proposed. For this purpose, a reduced-order Monte Carlo simulation algorithm for Bayesian model class selection is presented to obtain the most suitable predictive formula and prediction-error model for the seismic attenuation relationship. In this approach, each model class (a predictive formula with a prediction-error model) is evaluated according to its plausibility given the data. The one with the highest plausibility is robust since it possesses the optimal balance between data-fitting capability and sensitivity to noise. A database of strong ground motion records in the Tangshan region of China is obtained from the China Earthquake Data Center for the analysis. The optimal predictive formula is proposed based on this database. It is shown that the proposed formula with heterogeneous prediction-error variance is much simpler than the attenuation model suggested by Boore, Joyner and Fumal (1993).
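The skeleton of a Boore-Joyner-Fumal-type attenuation formula is sketched below. The coefficients and pseudo-depth used in the test are placeholders, not the fitted values from the 1993 paper or from the Tangshan database; only the functional form (quadratic magnitude term, geometric spreading, anelastic decay) is taken from the formula's standard shape.

```python
import math

def bjf_pga(mag, dist_km, coeffs, h=5.57):
    """BJF-type ground-motion prediction:
        log10(PGA) = b1 + b2*(M-6) + b3*(M-6)**2 + b4*r + b5*log10(r),
    with distance metric r = sqrt(d**2 + h**2); h is a pseudo-depth."""
    b1, b2, b3, b4, b5 = coeffs
    r = math.sqrt(dist_km ** 2 + h ** 2)
    m = mag - 6.0
    return 10 ** (b1 + b2 * m + b3 * m ** 2 + b4 * r + b5 * math.log10(r))
```

A homogeneous prediction-error model would put one variance on the residuals of log10(PGA); the paper's heterogeneous alternative lets that variance vary with the predictors.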
Mean-Variance-CvaR Model of Multiportfolio Optimization via Linear Weighted Sum Method
Directory of Open Access Journals (Sweden)
Younes Elahi
2014-01-01
We propose a new approach to optimizing portfolios under the mean-variance-CVaR (MVC) model. Although several studies have examined the optimal MVC portfolio model, the linear weighted sum method (LWSM) has not previously been applied in this area. The aim of this paper is to investigate the optimal portfolio model based on MVC via the LWSM. With this method, the solution of the MVC portfolio model as a multiobjective problem is presented. In the data analysis section, this approach is investigated for investing in two assets. An MVC model of the multiportfolio was implemented in MATLAB and tested on the presented problem. It is shown that using three objective functions helps investors to manage their portfolio better, minimizing the risk and maximizing the return of the portfolio. The main goal of this study is to modify the current models and simplify them by using the LWSM to obtain better results.
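The linear weighted sum method scalarizes the three objectives into one number and minimizes that; sweeping the weights traces the Pareto set (on its convex parts). A minimal sketch, with two hypothetical candidate portfolios rather than a real optimizer:

```python
def best_by_weighted_sum(candidates, weights):
    """Pick the candidate minimizing the weighted sum of its objectives.
    For the MVC model the objective vector is (variance, CVaR, -return),
    so all three are minimized jointly."""
    return min(candidates,
               key=lambda obj: sum(w * f for w, f in zip(weights, obj)))

# two hypothetical portfolios: (variance, CVaR, -expected return)
portfolios = [(0.04, 0.10, -0.08), (0.09, 0.15, -0.30)]
print(best_by_weighted_sum(portfolios, (1.0, 1.0, 1.0)))
```

Raising the weight on variance or CVaR shifts the choice toward the safer portfolio; weighting return heavily shifts it toward the riskier, higher-return one.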
International Nuclear Information System (INIS)
Vidal-Codina, F.; Nguyen, N.C.; Giles, M.B.; Peraire, J.
2015-01-01
We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method
The Efficiency of Split Panel Designs in an Analysis of Variance Model
Wang, Wei-Guo; Liu, Hai-Jun
2016-01-01
We consider the efficiency of split panel designs in analysis of variance models, that is, determining the optimal proportion of cross-section series in the full sample so as to minimize the variances of best linear unbiased estimators of linear combinations of parameters. An orthogonal matrix is constructed to obtain a manageable expression for the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of the interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from a split panel can be quite substantial. We further consider the efficiency of a split panel design given a budget, and transform the problem into a constrained nonlinear integer program. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer program. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985 in the Netherlands, and efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447
Directory of Open Access Journals (Sweden)
Yanfang Lyu
2015-01-01
The presence of outliers can result in seriously biased parameter estimates. In order to detect outliers in panel data models, this paper presents a modeling method that assesses intervention effects based on the variance of the remainder disturbance, using an arbitrary strictly positive, twice continuously differentiable function. The paper also provides a Lagrange Multiplier (LM) approach to detect and identify a general type of outlier. Furthermore, fixed effects models and random effects models are discussed for identifying outliers, and the corresponding LM test statistics are given. The LM test statistics for an individual-based model to detect outliers are given as a particular case. Finally, the paper presents an application using panel data and explains the advantages of the proposed method.
Model determination in a case of heterogeneity of variance using sampling techniques.
Varona, L; Moreno, C; Garcia-Cortes, L A; Altarriba, J
1997-01-12
A sampling-based model determination procedure is described for a case of heterogeneity of variance. The procedure makes use of the predictive distribution of each datum given the rest of the data and the structure of the assumed model. The computation of these predictive distributions is carried out using a Gibbs sampling procedure. The final criterion for comparing models is the mean square error between the expectations of the predictive distributions and the real data. The procedure was applied to a data set of weight at 210 days in the Spanish Pirenaica beef cattle breed. Three proposed models were compared: (a) a single-trait animal model; (b) a heterogeneous variance animal model; and (c) a multiple-trait animal model. The best-fitting model was the heterogeneous variance animal model. This result is probably due to a compromise between the complexity of the model and the amount of available information. The estimated heritabilities under the preferred model were 0.489 ± 0.076 for males and 0.331 ± 0.082 for females.
DIFFERENCES BETWEEN MEAN-VARIANCE AND MEAN-CVAR PORTFOLIO OPTIMIZATION MODELS
Directory of Open Access Journals (Sweden)
Panna Miskolczi
2016-07-01
Everyone has heard that one should not expect high returns without high risk, or safety without low returns. The goal of portfolio theory is to find the balance between maximizing the return and minimizing the risk. To do so we first have to understand and measure the risk. Naturally, a good risk measure has to satisfy several properties, in theory and in practice. Markowitz suggested using the variance as a risk measure in portfolio theory. This led to the so-called mean-variance model, for which Markowitz received the Nobel Prize in 1990. The model has been criticized because it is well suited to elliptical distributions but may lead to incorrect conclusions in the case of non-elliptical distributions. Since then many risk measures have been introduced, of which Value at Risk (VaR) has been the most widely used in recent years. Despite its widespread use, VaR has some fundamental problems: it does not satisfy the subadditivity property, and it ignores the severity of losses in the far tail of the profit-and-loss (P&L) distribution. Moreover, its non-convexity makes VaR impossible to use in optimization problems. To overcome these issues, Expected Shortfall (ES) was developed as a coherent risk measure. Expected Shortfall is also called Conditional Value at Risk (CVaR). Compared with Value at Risk, ES is more sensitive to the tail behaviour of the P&L distribution. In the first part of the paper I state the definitions of these three risk measures. In the second part I address my main question: what happens if we replace the variance with Expected Shortfall in the portfolio optimization process? Do we obtain different optimal portfolios as a solution, and does the solution thus suggest deciding differently in the two cases? To answer these questions I analyse seven Hungarian stock exchange companies. First I use the mean-variance portfolio optimization model
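The relationship between the two tail measures is easy to see in the historical (empirical) versions: VaR is a quantile of the loss distribution, and ES averages the losses beyond it, so ES always dominates VaR and reacts to how bad the tail is, not just where it starts. A minimal sketch:

```python
import numpy as np

def var_cvar(losses, alpha=0.95):
    """Historical Value at Risk and Expected Shortfall (CVaR) at level
    alpha. `losses` are positive for losses; VaR is the alpha-quantile
    and ES is the mean of the losses at or beyond VaR."""
    losses = np.sort(np.asarray(losses, dtype=float))
    var = np.quantile(losses, alpha)
    es = losses[losses >= var].mean()
    return float(var), float(es)

var95, es95 = var_cvar(np.arange(1.0, 101.0))
print(var95, es95)  # 95.05 98.0
```

Note that the quantile value depends on NumPy's interpolation convention; with a regulatory definition of VaR one would fix the quantile method explicitly.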
A Model-Free No-arbitrage Price Bound for Variance Options
Energy Technology Data Exchange (ETDEWEB)
Bonnans, J. Frederic, E-mail: frederic.bonnans@inria.fr [Ecole Polytechnique, INRIA-Saclay (France); Tan Xiaolu, E-mail: xiaolu.tan@polytechnique.edu [Ecole Polytechnique, CMAP (France)
2013-08-01
We suggest a numerical approximation for an optimization problem, motivated by its applications in finance to find the model-free no-arbitrage bound of variance options given the marginal distributions of the underlying asset. A first approximation restricts the computation to a bounded domain. Then we propose a gradient projection algorithm together with the finite difference scheme to solve the optimization problem. We prove the general convergence, and derive some convergence rate estimates. Finally, we give some numerical examples to test the efficiency of the algorithm.
A new media optimizer based on the mean-variance model
Directory of Open Access Journals (Sweden)
Pedro Jesus Fernandez
2007-01-01
In the financial markets there is a well-established portfolio optimization model called the generalized mean-variance model (or generalized Markowitz model). This model considers that a typical investor, while expecting returns to be high, also expects returns to be as certain as possible. In this paper we introduce a new media optimization system based on the mean-variance model, a novel approach in media planning. After presenting the model in its full generality, we discuss possible advantages of the mean-variance paradigm, such as its flexibility in modeling the optimization problem, its ability to deal with many media performance indices, satisfying most of the media plan's needs, and, most important, the property of diversifying the media portfolios in a natural way, without the need to set up ad hoc constraints to enforce diversification.
Directory of Open Access Journals (Sweden)
Basuki Basuki
2017-07-01
In this paper, the mean-variance investment portfolio optimization model without a risk-free asset, known as the basic Markowitz model, is studied to obtain the optimum portfolio. Building on the basic Markowitz model, the mean-variance model with a risk-free asset is then studied further. Both models are then used to analyze the optimization of investment portfolios over several IDX30 stocks. In this paper it is assumed that a proportion of 10% is invested in the risk-free asset, in the form of a deposit giving a return of 7% per year. Based on the analysis of portfolio optimization over the five selected stocks, the efficient surface of mean-variance portfolio optimization with a risk-free asset lies above that of mean-variance optimization without a risk-free asset. This indicates that an investment portfolio combining risk-free and risky assets is more profitable than a portfolio invested in risky assets alone.
Regional income inequality model based on Theil index decomposition and weighted variance coefficient
Sitepu, H. R.; Darnius, O.; Tambunan, W. N.
2018-03-01
Regional income inequality is an important issue in studying the economic development of a region, as rapid economic development may not be matched by people's per capita income. Many experts have suggested methods for measuring regional income inequality. This research used the Theil index and the weighted variation coefficient to measure regional income inequality. The decomposition of regional income into workforce productivity and workforce participation can, based on the Theil index, be presented as a linear relation. Using the economic assumptions for sector j, sectoral income values, and workforce rates, the imbalance in workforce productivity can be decomposed into between-sector and within-sector components. Next, the weighted variation coefficient is defined in terms of the revenue and productivity of the workforce. From the square of the weighted variation coefficient it was found that the decomposition of the regional revenue imbalance can be analyzed by determining how far each component contributes to the regional imbalance, which in this research was analyzed over nine sectors of economic activity.
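The Theil T index and the between-group term of its decomposition can be computed as below; the group structure (e.g. economic sectors) is supplied by the analyst, and when every group is internally equal the total index reduces exactly to the between-group term.

```python
import numpy as np

def theil_index(income):
    """Theil T index: mean of (y/mu) * log(y/mu) over units; zero under
    perfect equality, larger for more inequality."""
    y = np.asarray(income, dtype=float)
    s = y / y.mean()
    return float(np.mean(s * np.log(s)))

def theil_between(groups):
    """Between-group component of the Theil decomposition: sum over
    groups of income-share * log(income-share / population-share)."""
    totals = np.array([np.sum(g) for g in groups], dtype=float)
    sizes = np.array([len(g) for g in groups], dtype=float)
    inc = totals / totals.sum()
    pop = sizes / sizes.sum()
    return float(np.sum(inc * np.log(inc / pop)))
```

The full decomposition adds the income-share-weighted within-group indices: T = T_between + sum_g (income share of g) * T_g.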
A flexible model for the mean and variance functions, with application to medical cost data.
Chen, Jinsong; Liu, Lei; Zhang, Daowen; Shih, Ya-Chen T
2013-10-30
Medical cost data are often skewed to the right and heteroscedastic, having a nonlinear relation with covariates. To tackle these issues, we consider an extension to generalized linear models by assuming nonlinear associations of covariates in the mean function and allowing the variance to be an unknown but smooth function of the mean. We make no further assumption on the distributional form. The unknown functions are described by penalized splines, and the estimation is carried out using nonparametric quasi-likelihood. Simulation studies show the flexibility and advantages of our approach. We apply the model to the annual medical costs of heart failure patients in the clinical data repository at the University of Virginia Hospital System. Copyright © 2013 John Wiley & Sons, Ltd.
Abbas, Ismail; Rovira, Joan; Casanovas, Josep
2006-12-01
To develop and validate a model of a clinical trial that evaluates the changes in cholesterol level as a surrogate marker for lipodystrophy in HIV subjects under alternative antiretroviral regimes, i.e., treatment with protease inhibitors vs. a combination of nevirapine and other antiretroviral drugs. Five simulation models were developed based on different assumptions about treatment variability and the pattern of cholesterol reduction over time. The last recorded cholesterol level, the difference from the baseline, the average difference from the baseline, and the level evolution are the considered endpoints. Specific validation criteria, based on a standardized distance in means and variances of ±10%, were used to compare the real and the simulated data. The validity criterion was met by all models for the considered endpoints. However, only two models met the validity criterion when all endpoints were considered jointly. The model based on the assumption that the within-subject variability of cholesterol levels changes over time is the one that minimizes the validity criterion (standardized distance of ±1% or less). Simulation is a useful technique for the calibration, estimation, and evaluation of models, allowing us to relax the often overly restrictive assumptions about parameters required by analytical approaches. The validity criterion can also be used to select the preferred model for design optimization, until additional data are obtained that allow an external validation of the model.
Levin, Bruce; Leu, Cheng-Shiun
2013-01-01
We demonstrate the algebraic equivalence of two unbiased variance estimators for the sample grand mean in a random sample of subjects from an infinite population where subjects provide repeated observations following a homoscedastic random effects model.
Baek, Eun Kyeng; Ferron, John M
2013-03-01
Multilevel models (MLM) have been used as a method for analyzing multiple-baseline single-case data. However, some concerns can be raised because the models that have been used assume that the Level-1 error covariance matrix is the same for all participants. The purpose of this study was to extend the application of MLM of single-case data in order to accommodate across-participant variation in the Level-1 residual variance and autocorrelation. This more general model was then used in the analysis of single-case data sets to illustrate the method, to estimate the degree to which the autocorrelation and residual variances differed across participants, and to examine whether inferences about treatment effects were sensitive to whether or not the Level-1 error covariance matrix was allowed to vary across participants. The results from the analyses of five published studies showed that when the Level-1 error covariance matrix was allowed to vary across participants, some relatively large differences in autocorrelation estimates and error variance estimates emerged. The changes in modeling the variance structure did not change the conclusions about which fixed effects were statistically significant in most of the studies, but there was one exception. The fit indices did not consistently support selecting either the more complex covariance structure, which allowed the covariance parameters to vary across participants, or the simpler covariance structure. Given the uncertainty in model specification that may arise when modeling single-case data, researchers should consider conducting sensitivity analyses to examine the degree to which their conclusions are sensitive to modeling choices.
Caccamo, M.; Veerkamp, R.F.; Jong, de G.; Pool, M.H.; Petriglieri, R.; Licitra, G.
2008-01-01
Test-day (TD) models are used in most countries to perform national genetic evaluations for dairy cattle. The TD models estimate lactation curves and their changes as well as variation in populations. Although potentially useful, little attention has been given to the application of TD models for
Quantifying Systemic Risk by Solutions of the Mean-Variance Risk Model.
Directory of Open Access Journals (Sweden)
Jan Jurczyk
The world is still recovering from the financial crisis that peaked in September 2008, triggered by the bankruptcy of Lehman Brothers. To detect such turmoil, one can investigate the time-dependent behaviour of correlations between assets or indices. These cross-correlations have been connected to systemic risk within markets by several studies in the aftermath of this crisis. We study 37 different US indices, which cover almost all aspects of the US economy, and show that monitoring an average investor's behaviour can be used to quantify times of increased risk. In this paper the overall investing strategy is approximated by the ground states of the mean-variance model along the efficient frontier, bound to real-world constraints. Changes in the behaviour of the average investor are utilized as an early warning sign.
Quantifying Systemic Risk by Solutions of the Mean-Variance Risk Model.
Jurczyk, Jan; Eckrot, Alexander; Morgenstern, Ingo
2016-01-01
The world is still recovering from the financial crisis that peaked in September 2008, triggered by the bankruptcy of Lehman Brothers. To detect such turmoil, one can investigate the time-dependent behaviour of correlations between assets or indices. These cross-correlations have been connected to systemic risk within markets by several studies in the aftermath of this crisis. We study 37 different US indices, which cover almost all aspects of the US economy, and show that monitoring an average investor's behaviour can be used to quantify times of increased risk. In this paper the overall investing strategy is approximated by the ground states of the mean-variance model along the efficient frontier, bound to real-world constraints. Changes in the behaviour of the average investor are utilized as an early warning sign.
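A minimal sketch of the kind of ground-state computation the abstract describes, assuming a toy covariance matrix estimated from synthetic returns (the study itself used 37 real US indices and real-world constraints that are not modeled here):

```python
import numpy as np

# Synthetic daily returns for 4 hypothetical indices; purely illustrative.
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=(250, 4))
cov = np.cov(returns, rowvar=False)

# Ground state of the mean-variance model in the risk-dominated limit:
# minimise w' C w subject to sum(w) = 1 (no long-only constraint here).
ones = np.ones(cov.shape[0])
w = np.linalg.solve(cov, ones)
w /= w @ ones                      # normalise so the weights sum to one

print("weights:", w)
print("portfolio variance:", w @ cov @ w)
```

Tracking how such weights shift over a rolling estimation window is one way to approximate the "average investor" signal used as an early warning indicator.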
A Mean-Variance Criterion for Economic Model Predictive Control of Stochastic Linear Systems
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik
2014-01-01
, the tractability of the resulting optimal control problem is addressed. We use a power management case study to compare different variations of the mean-variance strategy with EMPC based on the certainty equivalence principle. The certainty equivalence strategy is much more computationally efficient than the mean......-variance strategies, but it does not account for the variance of the uncertain parameters. Openloop simulations suggest that a single-stage mean-variance approach yields a significantly lower operating cost than the certainty equivalence strategy. In closed-loop, the single-stage formulation is overly conservative...... be modified to perform almost as well as the two-stage mean-variance formulation. Nevertheless, we argue that the mean-variance approach can be used both as a strategy for evaluating less computational demanding methods such as the certainty equivalence method, and as an individual control strategy when...
DEFF Research Database (Denmark)
Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander
2013-01-01
of variance correction is developed for the same observations. As automated milking systems are becoming more popular the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study different...... models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield same genetic......The Nordic Holstein yield evaluation model describes all available milk, protein and fat test-day yields from Denmark, Finland and Sweden. In its current form all variance components are estimated from observations recorded under conventional milking systems. Also the model for heterogeneity...
Stochastic Modeling Of Wind Turbine Drivetrain Components
DEFF Research Database (Denmark)
Rafsanjani, Hesam Mirzaei; Sørensen, John Dalsgaard
2014-01-01
reliable components are needed for wind turbine. In this paper focus is on reliability of critical components in drivetrain such as bearings and shafts. High failure rates of these components imply a need for more reliable components. To estimate the reliability of these components, stochastic models...... are needed for initial defects and damage accumulation. In this paper, stochastic models are formulated considering some of the failure modes observed in these components. The models are based on theoretical considerations, manufacturing uncertainties, size effects of different scales. It is illustrated how...
Modelling Livestock Component in FSSIM
Thorne, P.J.; Hengsdijk, H.; Janssen, S.J.C.; Louhichi, K.; Keulen, van H.; Thornton, P.K.
2009-01-01
This document summarises the development of a ruminant livestock component for the Farm System Simulator (FSSIM). This includes treatments of energy and protein transactions in ruminant livestock that have been used as a basis for the biophysical simulations that will generate the input production
PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology
Energy Technology Data Exchange (ETDEWEB)
Blakeman, Edward D [ORNL; Peplow, Douglas E. [ORNL; Wagner, John C [ORNL; Murphy, Brian D [ORNL; Mueller, Don [ORNL
2007-09-01
The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts.
Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui
2017-06-13
The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise and being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust with respect to random noise. A solving method is then developed to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real-life cases.
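For orientation, a minimal sketch of the traditional LS-SVM regression that the paper takes as its starting point (RBF kernel, dual linear system). The kernel width and regularisation value are arbitrary assumptions, and the robust reweighting proposed in the paper is not implemented here:

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    # Gaussian RBF kernel matrix between two sets of points
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + rng.normal(0.0, 0.1, 80)   # Gaussian-noise case

gamma = 10.0                       # regularisation parameter (assumed)
K = rbf_kernel(X, X)
n = len(y)

# Classical LS-SVM dual system: [[0, 1'], [1, K + I/gamma]] [b; alpha] = [0; y]
A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
              [np.ones((n, 1)), K + np.eye(n) / gamma]])
sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
b, alpha = sol[0], sol[1:]

y_hat = K @ alpha + b              # fitted values on the training inputs
print("training RMSE:", np.sqrt(np.mean((y_hat - y) ** 2)))
```

The robust variant described in the abstract replaces the uniform error penalty with sample-dependent weights, down-weighting large-error samples.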
PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology
International Nuclear Information System (INIS)
Blakeman, Edward D.; Peplow, Douglas E.; Wagner, John C.; Murphy, Brian D.; Mueller, Don
2007-01-01
The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts
Modeling the degradation of nuclear components
International Nuclear Information System (INIS)
Stock, D.; Samanta, P.; Vesely, W.
1993-01-01
This paper describes component level reliability models that use information on degradation to predict component reliability, and which have been used to evaluate different maintenance and testing policies. The models are based on continuous time Markov processes, and are a generalization of reliability models currently used in Probabilistic Risk Assessment. An explanation of the models, the model parameters, and an example of how these models can be used to evaluate maintenance policies are discussed
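A continuous-time Markov degradation model of the kind described can be sketched as follows; the three states and all transition rates are assumed for illustration, not taken from the paper:

```python
import numpy as np
from scipy.linalg import expm

# States: 0 = good, 1 = degraded, 2 = failed (absorbing).
# Degradation, further-degradation and repair rates per hour (assumed).
lam1, lam2, mu = 0.01, 0.05, 0.2
Q = np.array([
    [-lam1,          lam1,   0.0],
    [   mu, -(mu + lam2),   lam2],
    [  0.0,           0.0,   0.0],   # no transitions out of the failed state
])

p0 = np.array([1.0, 0.0, 0.0])       # component starts in the good state
p_t = p0 @ expm(Q * 1000.0)          # state probabilities after 1000 hours
reliability = 1.0 - p_t[2]           # probability the component has not failed
print("state probabilities:", p_t)
print("reliability:", reliability)
```

Maintenance and testing policies can then be compared by adding inspection or repair transitions to the rate matrix and recomputing the failure-state probability.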
Model reduction by weighted Component Cost Analysis
Kim, Jae H.; Skelton, Robert E.
1990-01-01
Component Cost Analysis considers any given system driven by a white noise process as an interconnection of different components, and assigns a metric called 'component cost' to each component. These component costs measure the contribution of each component to a predefined quadratic cost function. A reduced-order model of the given system may be obtained by deleting those components that have the smallest component costs. The theory of Component Cost Analysis is extended to include finite-bandwidth colored noises. The results also apply when actuators have dynamics of their own. Closed-form analytical expressions of component costs are also derived for a mechanical system described by its modal data. This is very useful for computing the modal costs of very high order systems. A numerical example for the MINIMAST system is presented.
Variance decomposition of protein profiles from antibody arrays using a longitudinal twin model
Directory of Open Access Journals (Sweden)
Kato Bernet S
2011-11-01
Background: The advent of affinity-based proteomics technologies for global protein profiling provides the prospect of finding new molecular biomarkers for common, multifactorial disorders. The molecular phenotypes obtained from studies on such platforms are driven by multiple sources, including genetic, environmental, and experimental components. Characterizing the contribution of different sources of variation to the measured phenotypes should facilitate the design and interpretation of future biomedical studies employing exploratory and multiplexed technologies. Thus, biometrical genetic modelling of twin or other family data can be used to decompose the variation underlying a phenotype into biological and experimental components. Results: Using antibody suspension bead arrays and antibodies from the Human Protein Atlas, we study unfractionated serum from a longitudinal study on 154 twins. We provide a detailed description of how the variation in a molecular phenotype, in terms of protein profile, can be decomposed into familial (i.e., genetic and common environmental), individual environmental, short-term biological, and experimental components. The results show that across the 69 antibodies analyzed in the study, the median proportion of the total variation explained by familial sources is 12% (IQR 1-22%), and the median proportion attributable to experimental sources is 63% (IQR 53-72%). Conclusion: The variability analysis of antibody arrays highlights the importance of considering variability components and their relative contributions when designing and evaluating studies for biomarker discovery with exploratory, high-throughput, and multiplexed methods.
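The familial-versus-residual split can be illustrated with a toy balanced design; the variance values and pair counts below are assumptions, and the real analysis used a full biometrical twin model rather than this one-way random-effects ANOVA:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pairs = 2000
sf2, se2 = 1.0, 3.0                  # true familial and residual variances (assumed)

# Each pair shares a familial effect; each member adds individual noise.
familial = np.repeat(rng.normal(0.0, np.sqrt(sf2), n_pairs), 2)
y = familial + rng.normal(0.0, np.sqrt(se2), 2 * n_pairs)
pairs = y.reshape(n_pairs, 2)

# One-way random-effects ANOVA estimators for a balanced design (k = 2).
msw = ((pairs - pairs.mean(axis=1, keepdims=True)) ** 2).sum() / n_pairs
msb = 2.0 * ((pairs.mean(axis=1) - y.mean()) ** 2).sum() / (n_pairs - 1)
sf2_hat = (msb - msw) / 2.0          # between-pair (familial) component
se2_hat = msw                        # within-pair (residual) component
print("familial proportion:", sf2_hat / (sf2_hat + se2_hat))  # near 0.25
```

The proportion printed is the analogue of the "variation explained by familial sources" reported per antibody in the study.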
Modelization of cooling system components
Energy Technology Data Exchange (ETDEWEB)
Copete, Monica; Ortega, Silvia; Vaquero, Jose Carlos; Cervantes, Eva [Westinghouse Electric (Spain)
2010-07-01
In the site evaluation study for licensing a new nuclear power facility, the criteria involved can be grouped into health and safety, environmental, socio-economic, engineering, and cost-related categories. These encompass different aspects such as geology, seismology, cooling system requirements, weather conditions, flooding, population, and so on. The selection of the cooling system is a function of different parameters such as the gross electrical output, energy consumption, available area for cooling system components, environmental conditions, water consumption, and others. Moreover, in recent years, extreme environmental conditions have been experienced and stringent water availability limits have affected water use permits. Therefore, modifications or alternatives to current cooling system designs and operation are required, as well as analyses of the different possibilities of cooling systems to optimize energy production while taking into account water consumption among other important variables. There are two basic cooling system configurations: - Once-through or open-cycle; - Recirculating or closed-cycle. In a once-through (or open-cycle) cooling system, water from an external source passes through the steam cycle condenser and is then returned to the source at a higher temperature with some level of contaminants. To minimize the thermal impact on the water source, a cooling tower may be added in a once-through system to allow air cooling of the water (with associated losses on site due to evaporation) prior to returning the water to its source. This system has a high thermal efficiency, and its operating and capital costs are very low. So, from an economic point of view, the open-cycle is preferred to the closed-cycle system, especially if there are no water limitations or environmental restrictions. In a recirculating (or closed-cycle) system, cooling water exits the condenser, goes through a fixed heat sink, and is then returned to the condenser. This configuration
DEFF Research Database (Denmark)
Gebreyesus, Grum; Lund, Mogens Sandø; Buitenhuis, Albert Johannes
2017-01-01
Accurate genomic prediction requires a large reference population, which is problematic for traits that are expensive to measure. Traits related to milk protein composition are not routinely recorded due to costly procedures and are considered to be controlled by a few quantitative trait loci...... of large effect. The amount of variation explained may vary between regions leading to heterogeneous (co)variance patterns across the genome. Genomic prediction models that can efficiently take such heterogeneity of (co)variances into account can result in improved prediction reliability. In this study, we...... developed and implemented novel univariate and bivariate Bayesian prediction models, based on estimates of heterogeneous (co)variances for genome segments (BayesAS). Available data consisted of milk protein composition traits measured on cows and de-regressed proofs of total protein yield derived for bulls...
A Random Parameter Model for Continuous-Time Mean-Variance Asset-Liability Management
Directory of Open Access Journals (Sweden)
Hui-qiang Ma
2015-01-01
We consider a continuous-time mean-variance asset-liability management problem in a market with random market parameters; that is, the interest rate, appreciation rates, and volatility rates are considered to be stochastic processes. By using the theories of stochastic linear-quadratic (LQ) optimal control and backward stochastic differential equations (BSDEs), we tackle this problem and derive optimal investment strategies as well as the mean-variance efficient frontier analytically in terms of the solutions of BSDEs. We find that the efficient frontier is still a parabola in a market with random parameters. Comparing with existing results, we also find that the liability does not affect the feasibility of the mean-variance portfolio selection problem. However, in an incomplete market with random parameters, the liability cannot be fully hedged.
Generalized Forecast Error Variance Decomposition for Linear and Nonlinear Multivariate Models
DEFF Research Database (Denmark)
Lanne, Markku; Nyberg, Henri
We propose a new generalized forecast error variance decomposition with the property that the proportions of the impact accounted for by innovations in each variable sum to unity. Our decomposition is based on the well-established concept of the generalized impulse response function. The use of t...
Use of hypotheses for analysis of variance Models: Challenging the current practice
van Wesel, F.; Boeije, H.R.; Hoijtink, H
2013-01-01
In social science research, hypotheses about group means are commonly tested using analysis of variance. While deemed to be formulated as specifically as possible to test social science theory, they are often defined in general terms. In this article we use two studies to explore the current
Motsepa, Tanki; Aziz, Taha; Fatima, Aeeman; Khalique, Chaudry Masood
2018-03-01
The optimal investment-consumption problem under the constant elasticity of variance (CEV) model is investigated from the perspective of Lie group analysis. The Lie symmetry group of the evolution partial differential equation describing the CEV model is derived. The Lie point symmetries are then used to obtain an exact solution of the governing model satisfying a standard terminal condition. Finally, we construct conservation laws of the underlying equation using the general theorem on conservation laws.
Tweaking the Four-Component Model
Curzer, Howard J.
2014-01-01
By maintaining that moral functioning depends upon four components (sensitivity, judgment, motivation, and character), the Neo-Kohlbergian account of moral functioning allows for uneven moral development within individuals. However, I argue that the four-component model does not go far enough. I offer a more accurate account of moral functioning…
Sparse principal component analysis in medical shape modeling
Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus
2006-03-01
Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
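The "simple thresholding" baseline mentioned in the abstract can be sketched in a few lines; the data and the 0.3 cutoff are illustrative assumptions, and the article's actual SPCA algorithm optimises the sparse loadings rather than truncating them:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 6))       # toy data: 100 samples, 6 variables
X -= X.mean(axis=0)                 # centre, as for ordinary PCA

# Standard PCA via SVD of the centred data matrix.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
loadings = Vt.T                     # columns are principal directions
var_explained = s ** 2 / (len(X) - 1)

# Sparse PCA by simple thresholding: zero out small loadings and
# renormalise each component to unit length.
sparse = np.where(np.abs(loadings) < 0.3, 0.0, loadings)
sparse /= np.linalg.norm(sparse, axis=0)

print("nonzeros per component:", (sparse != 0).sum(axis=0))
```

Each sparse component is now a combination of a subset of the original variables, which is what makes the resulting modes easier to interpret as isolated effects.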
The use of testday models in the estimation of variance components ...
African Journals Online (AJOL)
Bernice Mostert
Breeding value estimation for somatic cell score in South African dairy cattle ... It causes severe ... traits, occurrence of mastitis is not routinely recorded in most dairy recording .... Genetic parameters for clinical mastitis, somatic cell counts.
Pump Component Model in SPACE Code
International Nuclear Information System (INIS)
Kim, Byoung Jae; Kim, Kyoung Doo
2010-08-01
This technical report describes the pump component model in SPACE code. A literature survey was made on pump models in existing system codes. The models embedded in SPACE code were examined to check the confliction with intellectual proprietary rights. Design specifications, computer coding implementation, and test results are included in this report
Overview of the model component in ECOCLIM
DEFF Research Database (Denmark)
Geels, Camilla; Boegh, Eva; Bendtsen, J
and atmospheric models. We will use the model system to 1) quantify the potential effects of climate change on ecosystem exchange of GHG and 2) estimate the impacts of changes in management practices including land use change and nitrogen (N) loads. Here the various model components will be introduced...
Variance of the number of tumors in a model for the induction of osteosarcoma by alpha radiation
International Nuclear Information System (INIS)
Groer, P.G.; Marshall, J.H.
1976-01-01
An earlier report on a model for the induction of osteosarcoma by alpha radiation gave differential equations for the mean numbers of normal, transformed, and malignant cells. In this report we show that for a constant dose rate the variance of the number of cells at each stage and time is equal to the corresponding mean, so the numbers of tumors predicted by the model have a Poisson distribution about their mean values
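The mean-equals-variance property is easy to check by simulation; the rate, exposure time and animal count below are arbitrary assumptions, not parameters from the model:

```python
import numpy as np

# Under a constant dose rate the model predicts tumour counts follow a
# Poisson distribution, so the sample variance should equal the sample mean.
rng = np.random.default_rng(4)
rate, t = 0.3, 10.0                  # transformation rate and exposure (assumed)
counts = rng.poisson(rate * t, size=200_000)

print("mean:", counts.mean())        # both should be close to rate * t = 3.0
print("variance:", counts.var())
```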
DEFF Research Database (Denmark)
Ødegård, Jørgen; Meuwissen, Theo HE; Heringstad, Bjørg
2010-01-01
Background In the genetic analysis of binary traits with one observation per animal, animal threshold models frequently give biased heritability estimates. In some cases, this problem can be circumvented by fitting sire- or sire-dam models. However, these models are not appropriate in cases where...... records exist for the parents). Furthermore, the new algorithm showed much faster Markov chain mixing properties for genetic parameters (similar to the sire-dam model). Conclusions The new algorithm to estimate genetic parameters via Gibbs sampling solves the bias problems typically occurring in animal...... individual records exist on parents. Therefore, the aim of our study was to develop a new Gibbs sampling algorithm for a proper estimation of genetic (co)variance components within an animal threshold model framework. Methods In the proposed algorithm, individuals are classified as either "informative...
DEFF Research Database (Denmark)
Villumsen, Trine Michelle; Su, Guosheng; Cai, Zexi
2018-01-01
by sequencing. Four live grading traits and four traits on dried pelts for size and quality were analysed. GWAS analysis detected significant SNPs for all the traits. The single-trait Bayesian model resulted in higher accuracies for the genomic predictions than the single-trait GBLUP model, especially......The accuracy of genomic prediction for mink was compared for single-trait and multiple-trait GBLUP models and Bayesian models that allowed for heterogeneous (co)variance structure over the genome. The mink population consisted of 2,103 brown minks genotyped with the method of genotyping...... for the traits measured on dried pelts. We expected the multiple-trait models to be superior to the single trait models since the multiple-trait model can make use of information when traits are correlated. However, we did not find a general improvement in accuracies with the multiple-trait models compared...
International Nuclear Information System (INIS)
Ewald, Christian-Oliver; Nawar, Roy; Siu, Tak Kuen
2013-01-01
We consider the problem of hedging European options written on natural gas futures, in a market where prices of traded assets exhibit jumps, by trading in the underlying asset. We provide a general expression for the hedging strategy which minimizes the variance of the terminal hedging error, in terms of stochastic integral representations of the payoffs of the options involved. This formula is then applied to compute hedge ratios for common options in various models with jumps, leading to easily computable expressions. As a benchmark we take the standard Black–Scholes and Merton delta hedges. We show that in natural gas option markets minimal variance hedging with the underlying consistently outperforms the benchmarks by quite a margin. Highlights: We derive hedging strategies for European-type options written on natural gas futures; these are tested empirically using Henry Hub natural gas futures and options data; we find that our hedges systematically outperform classical benchmarks.
Probabilistic Modeling of Wind Turbine Drivetrain Components
DEFF Research Database (Denmark)
Rafsanjani, Hesam Mirzaei
Wind energy is one of several energy sources in the world and a rapidly growing industry in the energy sector. When placed in offshore or onshore locations, wind turbines are exposed to wave excitations, highly dynamic wind loads and/or the wakes from other wind turbines. Therefore, most components...... in a wind turbine experience highly dynamic and time-varying loads. These components may fail due to wear or fatigue, and this can lead to unplanned shutdown repairs that are very costly. The design by deterministic methods using safety factors is generally unable to account for the many uncertainties. Thus......, a reliability assessment should be based on probabilistic methods where stochastic modeling of failures is performed. This thesis focuses on probabilistic models and the stochastic modeling of the fatigue life of the wind turbine drivetrain. Hence, two approaches are considered for stochastic modeling...
Modeling accelerator structures and RF components
International Nuclear Information System (INIS)
Ko, K., Ng, C.K.; Herrmannsfeldt, W.B.
1993-03-01
Computer modeling has become an integral part of the design and analysis of accelerator structures and RF components. Sophisticated 3D codes, powerful workstations and timely theory support have all contributed to this development. We describe our modeling experience with these resources and discuss their impact on ongoing work at SLAC. Specific examples from R&D on a future linear collider and a proposed e+e- storage ring are included.
International Nuclear Information System (INIS)
Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A.
2011-01-01
Deep pencil beam surveys ( 2 ) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance'. This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m* ~ 10^10 M_sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic
Modelling safety of multistate systems with ageing components
Energy Technology Data Exchange (ETDEWEB)
Kołowrocki, Krzysztof; Soszyńska-Budny, Joanna [Gdynia Maritime University, Department of Mathematics ul. Morska 81-87, Gdynia 81-225 Poland (Poland)
2016-06-08
An innovative approach to the safety analysis of multistate ageing systems is presented. Basic notions of ageing multistate system safety analysis are introduced. The system components and the system multistate safety functions are defined. The mean values and variances of the multistate systems' lifetimes in the safety state subsets and the mean values of their lifetimes in the particular safety states are defined. The multistate system risk function and the moment of exceeding the critical safety state are introduced. Applications of the proposed multistate system safety models to the evaluation and prediction of the safety characteristics of the consecutive “m out of n: F” system are presented as well.
Modelling safety of multistate systems with ageing components
International Nuclear Information System (INIS)
Kołowrocki, Krzysztof; Soszyńska-Budny, Joanna
2016-01-01
An innovative approach to safety analysis of multistate ageing systems is presented. Basic notions of the ageing multistate systems safety analysis are introduced. The system components and the system multistate safety functions are defined. The mean values and variances of the multistate systems' lifetimes in the safety state subsets and the mean values of their lifetimes in the particular safety states are defined. The multistate system risk function and the moment at which the system exceeds the critical safety state are introduced. Applications of the proposed multistate system safety models to the evaluation and prediction of the safety characteristics of the consecutive “m out of n: F” system are presented as well.
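The basic lifetime quantities named in the abstract can be sketched numerically: the mean lifetime in a safety-state subset {u, ..., z} is mu(u) = ∫ S(t, u) dt, its variance is 2∫ t S(t, u) dt - mu(u)², and the mean time spent in the particular state u is mu(u) - mu(u+1). The Weibull safety functions and their parameters below are made-up stand-ins for a real system model, chosen with shape > 1 so the components age (increasing hazard):

```python
import numpy as np

t = np.linspace(0.0, 500.0, 200_001)
dt = t[1] - t[0]

def S(t, u):
    """Assumed multistate safety function S(t, u): Weibull with shape 1.5."""
    scale = {1: 120.0, 2: 80.0}[u]      # assumed characteristic lifetimes
    return np.exp(-(t / scale) ** 1.5)  # shape > 1 -> ageing component

# Mean and variance of the lifetime in each safety state subset (Riemann sums).
mu = {u: float(S(t, u).sum() * dt) for u in (1, 2)}
var = {u: float(2.0 * (t * S(t, u)).sum() * dt - mu[u] ** 2) for u in (1, 2)}

# Mean time spent exactly in safety state 1.
mean_in_state_1 = mu[1] - mu[2]
```

The risk function and the moment of exceeding the critical state would follow from the same S(t, u) evaluated at the critical state index.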
Etzel, C J; Shete, S; Beasley, T M; Fernandez, J R; Allison, D B; Amos, C I
2003-01-01
Non-normality of the phenotypic distribution can affect power to detect quantitative trait loci in sib pair studies. Previously, we observed that Winsorizing the sib pair phenotypes increased the power of quantitative trait locus (QTL) detection for both Haseman-Elston (H-E) least-squares tests [Hum Hered 2002;53:59-67] and maximum likelihood-based variance components (MLVC) analysis [Behav Genet (in press)]. Winsorizing the phenotypes led to a slight increase in type I error in H-E tests and a slight decrease in type I error for MLVC analysis. Herein, we considered transforming the sib pair phenotypes using the Box-Cox family of transformations. Data were simulated for normal and non-normal (skewed and kurtic) distributions. Phenotypic values were replaced by Box-Cox transformed values. Twenty thousand replications were performed for three H-E tests of linkage and the likelihood ratio test (LRT), the Wald test and other robust versions based on the MLVC method. We calculated the relative nominal inflation rate as the ratio of the observed empirical type I error divided by the set alpha level (5, 1 and 0.1% alpha levels). MLVC tests applied to non-normal data had inflated type I errors (rate ratio greater than 1.0), which were controlled best by Box-Cox transformation and to a lesser degree by Winsorizing. For example, for non-transformed, skewed phenotypes (derived from a chi-square distribution with 2 degrees of freedom), the rates of empirical type I error with respect to the set alpha level of 0.01 were 0.80, 4.35 and 7.33 for the original H-E test, LRT and Wald test, respectively. For the same alpha level of 0.01, these rates were 1.12, 3.095 and 4.088 after Winsorizing and 0.723, 1.195 and 1.905 after Box-Cox transformation. Winsorizing reduced inflated error rates for the leptokurtic distribution (derived from a Laplace distribution with mean 0 and variance 8). Further, power (adjusted for empirical type I error) at the 0.01 alpha level ranged from 4.7 to 17.3% across all tests
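The Box-Cox transform of a skewed phenotype can be sketched as follows. This is a minimal self-contained version: the transform itself is standard, but the lambda here is chosen on a grid by minimizing sample skewness, a simple stand-in for the maximum-likelihood choice that statistical packages use, and the chi-square(2) "phenotype" echoes the simulation in the abstract:

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox power transform: (x**lam - 1)/lam, or log(x) when lam == 0."""
    x = np.asarray(x, dtype=float)
    return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def skewness(x):
    """Sample skewness: mean of standardized values cubed."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 3))

rng = np.random.default_rng(0)
pheno = rng.chisquare(df=2, size=5000)  # right-skewed phenotype, as in the study

# Choose lambda on a grid by minimizing |skewness| of the transformed data.
grid = np.linspace(0.0, 1.0, 101)
best = min(grid, key=lambda l: abs(skewness(boxcox(pheno, l))))
print(round(float(best), 2), round(skewness(pheno), 2),
      round(skewness(boxcox(pheno, best)), 2))
```

The transformed phenotype is much closer to symmetric, which is the mechanism by which the inflated type I error rates of the MLVC tests were brought back toward the nominal level.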
Application of a Multivariate Analysis of Variance Model in Measuring Perceptions of Tourist Destinations
Directory of Open Access Journals (Sweden)
Robert Tang Herman
2012-05-01
Full Text Available The purpose of this research is to provide conceptual and infrastructure tools for Dinas Pariwisata DKI Jakarta to improve its capability to evaluate business performance based on market responsiveness. Capturing market responsiveness is the initial research step for industry mapping. The research started with secondary research to build a data classification system, followed by primary research collecting data through market research. Secondary data were collected from Dinas Pariwisata DKI, while primary data were collected by a survey using questionnaires addressed to the whole market. The collected data were then analyzed with multivariate analysis of variance to develop the mapping. The cluster analysis distinguishes the potential market segments based on their responses to the industry classification, builds the classification system, identifies the gaps and their importance, and addresses other issues related to the role of the mapping system. This mapping system will help Dinas Pariwisata DKI improve its capabilities and business performance based on market responsiveness: which market is potential for each specific classification, and what that market needs, wants and demands. The contribution of this research is a set of recommendations to Dinas Pariwisata DKI on delivering what the market needs and wants at each tourism destination based on the resulting classification, on developing market growth estimates and, in the long term, on improving economic and market growth.
Thermochemical modelling of multi-component systems
International Nuclear Information System (INIS)
Sundman, B.; Gueneau, C.
2015-01-01
Computational thermodynamics, also known as the Calphad method, is a standard tool in industry for the development of materials and the improvement of processes, and there is intense scientific development of new models and databases. The calculations are based on thermodynamic models of the Gibbs energy of each phase as a function of temperature, pressure and constitution. Model parameters are stored in databases that are developed in an international scientific collaboration. In this way, consistent and reliable data for many properties such as heat capacity, chemical potentials and solubilities can be obtained for multi-component systems. A brief introduction to this technique is given here and references to more extensive documentation are provided. (authors)
Independent Component Analysis in Multimedia Modeling
DEFF Research Database (Denmark)
Larsen, Jan
2003-01-01
Modeling of multimedia and multimodal data becomes increasingly important with the digitalization of the world. The objective of this paper is to demonstrate the potential of independent component analysis and blind source separation methods for modeling and understanding of multimedia data, which largely refers to text, images/video, audio and combinations of such data. We review a number of applications within single and combined media with the hope that this might provide inspiration for further research in this area. Finally, we provide a detailed presentation of our own recent work on modeling...
PCA: Principal Component Analysis for spectra modeling
Hurley, Peter D.; Oliver, Seb; Farrah, Duncan; Wang, Lingyu; Efstathiou, Andreas
2012-07-01
The mid-infrared spectra of ultraluminous infrared galaxies (ULIRGs) contain a variety of spectral features that can be used as diagnostics to characterize the spectra. However, such diagnostics are biased by our prior prejudices on the origin of the features. Moreover, by using only part of the spectrum they do not utilize the full information content of the spectra. Blind statistical techniques such as principal component analysis (PCA) consider the whole spectrum, find correlated features and separate them out into distinct components. This code, written in IDL, classifies principal components of IRS spectra to define a new classification scheme using five-dimensional Gaussian mixture modelling. The five PCs and the average spectra of the four classifications used to classify objects are made available with the code.
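The core PCA step, independent of the IDL implementation, can be sketched in a few lines: center the spectra, take the SVD, and read off how much variance each component carries. The synthetic two-component "spectra" below are illustrative assumptions, not IRS data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_spec, n_wave = 200, 50

# Synthetic "spectra": two underlying components mixed with small noise.
comp1 = np.sin(np.linspace(0, 3 * np.pi, n_wave))                    # broad continuum-like shape
comp2 = np.exp(-0.5 * ((np.arange(n_wave) - 25) / 3.0) ** 2)         # emission-like feature
weights = rng.normal(size=(n_spec, 2))
spectra = (weights[:, :1] * comp1 + weights[:, 1:] * comp2
           + 0.05 * rng.normal(size=(n_spec, n_wave)))

# PCA: centre, then SVD; rows of Vt are the principal components.
mean_spec = spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(spectra - mean_spec, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)   # fraction of variance per component
print(explained[:3].round(3))
```

With two true components and weak noise, the first two PCs carry nearly all the variance; the projections `U * s` would be the per-spectrum PC amplitudes that a Gaussian mixture model could then classify.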
Multilevel Regression Models for Mean and (Co)variance: with Applications in Nursing Research
Li, Bayoue
2014-01-01
In this chapter, a concise overview is provided of the statistical techniques that are applied in this thesis. This includes two classes of statistical modeling approaches which have been commonly applied in many research areas for decades. Namely, we will describe the fundamental ideas about mixed effects models and factor analytic (FA) models. To be specific, this chapter covers several types of these two classes of modeling approaches. For the mixed ...
Emery, C. M.; Biancamaria, S.; Boone, A. A.; Ricci, S. M.; Garambois, P. A.; Decharme, B.; Rochoux, M. C.
2015-12-01
Land Surface Models (LSM) coupled with River Routing schemes (RRM) are used in Global Climate Models (GCM) to simulate the continental part of the water cycle. They are key components of GCMs as they provide boundary conditions to atmospheric and oceanic models. However, at global scale, errors arise mainly from simplified physics, atmospheric forcing, and input parameters. More particularly, those used in RRMs, such as river width, depth and friction coefficients, are difficult to calibrate and are mostly derived from geomorphologic relationships, which may not always be realistic. In situ measurements are then used to calibrate these relationships and validate the model, but global in situ data are very sparse. Additionally, due to the lack of an existing global river geomorphology database and accurate forcing, models are run at coarse resolution. This is typically the case of the ISBA-TRIP model used in this study. A complementary alternative to in situ data are satellite observations. In this regard, the Surface Water and Ocean Topography (SWOT) satellite mission, jointly developed by NASA/CNES/CSA/UKSA and scheduled for launch around 2020, should be very valuable for calibrating RRM parameters. It will provide maps of water surface elevation for rivers wider than 100 meters over continental surfaces between 78°S and 78°N, as well as direct observation of river geomorphological parameters such as width and slope. Yet, before assimilating such data, it is necessary to analyze RRM temporal sensitivity to time-constant parameters. This study presents such an analysis over large river basins for the TRIP RRM. Model output uncertainty, represented by unconditional variance, is decomposed into ordered contributions from each parameter. A time-dependent analysis then makes it possible to identify the parameters to which modeled water level and discharge are the most sensitive along a hydrological year. The results show that local parameters directly impact water levels, while
Lalush, D. S.; Tsui, B. M. W.
1998-06-01
We study the statistical convergence properties of two fast iterative reconstruction algorithms, the rescaled block-iterative (RBI) and ordered subset (OS) EM algorithms, in the context of cardiac SPECT with 3D detector response modeling. The Monte Carlo method was used to generate nearly noise-free projection data modeling the effects of attenuation, detector response, and scatter from the MCAT phantom. One thousand noise realizations were generated with an average count level approximating a typical Tl-201 cardiac study. Each noise realization was reconstructed using the RBI and OS algorithms for cases with and without detector response modeling. For each iteration up to twenty, we generated mean and variance images, as well as covariance images for six specific locations. Both OS and RBI converged in the mean to results that were close to the noise-free ML-EM result using the same projection model. When detector response was not modeled in the reconstruction, RBI exhibited considerably lower noise variance than OS for the same resolution. When 3D detector response was modeled, the RBI-EM provided a small improvement in the tradeoff between noise level and resolution recovery, primarily in the axial direction, while OS required about half the number of iterations of RBI to reach the same resolution. We conclude that OS is faster than RBI, but may be sensitive to errors in the projection model. Both OS-EM and RBI-EM are effective alternatives to the ML-EM algorithm, but noise level and speed of convergence depend on the projection model used.
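The mean, variance, and covariance images described above are straightforward to compute once per-realization reconstructions exist: stack the realizations and reduce along the realization axis. The Poisson "reconstructions" below are placeholders standing in for the actual RBI/OS outputs:

```python
import numpy as np

rng = np.random.default_rng(2)
n_real, ny, nx = 1000, 16, 16      # 1000 noise realizations, as in the study

# Stand-in for the noise-free activity image and per-realization outputs
# (real reconstructions would come from the RBI or OS algorithm).
truth = rng.random((ny, nx)) * 100.0
recons = rng.poisson(truth, size=(n_real, ny, nx)).astype(float)

# Voxelwise mean and sample variance over the noise realizations.
mean_img = recons.mean(axis=0)
var_img = recons.var(axis=0, ddof=1)

# Covariance between two specific voxel locations across realizations.
cov_01 = float(np.cov(recons[:, 0, 0], recons[:, 0, 1])[0, 1])
```

Repeating the reduction at each iteration number gives the iteration-by-iteration noise curves used to compare the algorithms.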
International Nuclear Information System (INIS)
Wada, Kenichi; Sano, Fuminori; Oshima, Kanji; Akimoto, Keigo
2013-01-01
Nuclear power secures affordable carbon-free energy supply, but entails various risks and constraints, such as safety concerns, waste disposal protest campaigns, and proliferation. Given these characteristics of nuclear power generation, there is a wide range of variation in the representation of nuclear power technologies across models. In this paper, we explore the variance of the model representation of nuclear power generation and its implications for climate change mitigation assessment, based on the EMF27 study. The most common result is that under efforts to mitigate climate change more nuclear energy use is needed. We find, however, that perspectives on the contribution of nuclear energy to global energy needs vary tremendously among the modeling teams. This diversity mainly comes from differences in the level of detail with which nuclear energy technologies are characterized and the broad range of nuclear contributions in the long-term scenarios of global energy use. (author)
Fitted Hanbury-Brown Twiss radii versus space-time variances in flow-dominated models
Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan
2006-04-01
The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data.
Fitted Hanbury-Brown-Twiss radii versus space-time variances in flow-dominated models
International Nuclear Information System (INIS)
Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan
2006-01-01
The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown-Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data
Fitted HBT radii versus space-time variances in flow-dominated models
International Nuclear Information System (INIS)
Lisa, Mike; Frodermann, Evan; Heinz, Ulrich
2007-01-01
The inability of otherwise successful dynamical models to reproduce the 'HBT radii' extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the 'RHIC HBT Puzzle'. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source which can be directly computed from the emission function, without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models some of which exhibit significant deviations from simple Gaussian behaviour. By Fourier transforming the emission function we compute the 2-particle correlation function and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and measured HBT radii remain, we show that a more 'apples-to-apples' comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data. (author)
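The procedure these three records describe, Fourier transforming the emission function, forming the correlator, and Gaussian-fitting it, can be illustrated in a 1D toy version. For a symmetric 1D source S(x), the two-particle correlator is C(q) = 1 + |S̃(q)|², and for a non-Gaussian source the Gaussian-fitted "HBT radius" differs from the radius computed from the space-time variance. The source shape and fit range below are illustrative assumptions, not a hydrodynamic model:

```python
import numpy as np

# Toy 1D source: a Gaussian core plus an exponential tail (non-Gaussian).
x = np.linspace(-40.0, 40.0, 4001)
dx = x[1] - x[0]
S = np.exp(-0.5 * (x / 4.0) ** 2) + 0.1 * np.exp(-np.abs(x) / 8.0)
S /= S.sum() * dx                      # normalize the source

# Correlator C(q) = 1 + |S~(q)|^2 via a real Fourier transform (S is even).
q = np.linspace(0.01, 0.3, 30)
Sq = np.array([(S * np.cos(qi * x)).sum() * dx for qi in q])
C = 1.0 + Sq ** 2

# Gaussian fit: ln(C - 1) = -R_fit^2 q^2, linear least squares in q^2.
slope = np.polyfit(q ** 2, np.log(C - 1.0), 1)[0]
R_fit = float(np.sqrt(-slope))

# Radius from the space-time variance: sqrt(<x^2>).
R_var = float(np.sqrt((S * x ** 2).sum() * dx))
print(round(R_fit, 2), round(R_var, 2))
```

Because the tail makes ln(C - 1) curve away from a straight line in q², the fitted radius comes out smaller than the variance-based radius, the same qualitative gap the paper studies for realistic emission functions.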
Pool scrubbing models for iodine components
Energy Technology Data Exchange (ETDEWEB)
Fischer, K [Battelle Ingenieurtechnik GmbH, Eschborn (Germany)
1996-12-01
Pool scrubbing is an important mechanism to retain radioactive fission products from being carried into the containment atmosphere or into the secondary piping system. A number of models and computer codes have been developed to predict the retention of aerosols and fission product vapours that are released from the core and injected into water pools of BWR and PWR type reactors during severe accidents. Important codes in this field are BUSCA, SPARC and SUPRA. The present paper summarizes the models for scrubbing of gaseous Iodine components in these codes, discusses the experimental validation, and gives an assessment of the state of knowledge reached and the open questions which persist. The retention of gaseous Iodine components is modelled by the various codes in a very heterogeneous manner. Differences show up in the chemical species considered, the treatment of mass transfer boundary layers on the gaseous and liquid sides, the gas-liquid interface geometry, calculation of equilibrium concentrations and numerical procedures. Especially important is the determination of the pool water pH value. This value is affected by basic aerosols deposited in the water, e.g. Cesium and Rubidium compounds. A consistent model requires a mass balance of these compounds in the pool, thus effectively coupling the pool scrubbing phenomena of aerosols and gaseous Iodine species. Since the water pool conditions are also affected by drainage flow of condensate water from different regions in the containment, and desorption of dissolved gases on the pool surface is determined by the gas concentrations above the pool, some basic limitations of specialized pool scrubbing codes are given. The paper draws conclusions about the necessity of coupling between containment thermal-hydraulics and pool scrubbing models, and proposes ways of further simulation model development in order to improve source term predictions. (author) 2 tabs., refs.
Pool scrubbing models for iodine components
International Nuclear Information System (INIS)
Fischer, K.
1996-01-01
Pool scrubbing is an important mechanism to retain radioactive fission products from being carried into the containment atmosphere or into the secondary piping system. A number of models and computer codes have been developed to predict the retention of aerosols and fission product vapours that are released from the core and injected into water pools of BWR and PWR type reactors during severe accidents. Important codes in this field are BUSCA, SPARC and SUPRA. The present paper summarizes the models for scrubbing of gaseous Iodine components in these codes, discusses the experimental validation, and gives an assessment of the state of knowledge reached and the open questions which persist. The retention of gaseous Iodine components is modelled by the various codes in a very heterogeneous manner. Differences show up in the chemical species considered, the treatment of mass transfer boundary layers on the gaseous and liquid sides, the gas-liquid interface geometry, calculation of equilibrium concentrations and numerical procedures. Especially important is the determination of the pool water pH value. This value is affected by basic aerosols deposited in the water, e.g. Cesium and Rubidium compounds. A consistent model requires a mass balance of these compounds in the pool, thus effectively coupling the pool scrubbing phenomena of aerosols and gaseous Iodine species. Since the water pool conditions are also affected by drainage flow of condensate water from different regions in the containment, and desorption of dissolved gases on the pool surface is determined by the gas concentrations above the pool, some basic limitations of specialized pool scrubbing codes are given. The paper draws conclusions about the necessity of coupling between containment thermal-hydraulics and pool scrubbing models, and proposes ways of further simulation model development in order to improve source term predictions. (author) 2 tabs., refs
Prediction model for the diurnal behavior of the tropospheric scintillation variance
Tervonen, J.K.; Kamp, van de M.M.J.L.; Salonen, E.T.
1998-01-01
Tropospheric scintillation is caused by variations of the refractive index due to turbulence. The only meteorological input parameter for two common current scintillation models by Karasawa et al. (1988) and by the ITU-R is the monthly average of the wet part of the refractivity Nwet at ground
Morales-Casique, E.; Neuman, S.P.; Vesselinov, V.V.
2010-01-01
We use log permeability and porosity data obtained from single-hole pneumatic packer tests in six boreholes drilled into unsaturated fractured tuff near Superior, Arizona, to postulate, calibrate and compare five alternative variogram models (exponential, exponential with linear drift, power,
Inventory implications of using sampling variances in estimation of growth model coefficients
Albert R. Stage; William R. Wykoff
2000-01-01
Variables based on stand densities or stocking have sampling errors that depend on the relation of tree size to plot size and on the spatial structure of the population. Ignoring the sampling errors of such variables, which include most measures of competition used in both distance-dependent and distance-independent growth models, can bias the predictions obtained from...
Multilevel Regression Models for Mean and (Co)variance: with Applications in Nursing Research
B. Li (Bayoue)
2014-01-01
In this chapter, a concise overview is provided of the statistical techniques that are applied in this thesis. This includes two classes of statistical modeling approaches which have been commonly applied in many research areas for decades. Namely, we
Bridging design and behavioral research with variance-based structural equation modeling
Henseler, Jörg
2017-01-01
Advertising research is a scientific discipline that studies artifacts (e.g., various forms of marketing communication) as well as natural phenomena (e.g., consumer behavior). Empirical advertising research therefore requires methods that can model design constructs as well as behavioral constructs,
Computational needs for modelling accelerator components
International Nuclear Information System (INIS)
Hanerfeld, H.
1985-06-01
The particle-in-cell code MASK is being used to model several different electron accelerator components. These studies are being used both to design new devices and to understand particle behavior within existing structures. Studies include the injector for the Stanford Linear Collider and the 50 megawatt klystron currently being built at SLAC. MASK is a 2D electromagnetic code which is being used by SLAC both on our own IBM 3081 and on the CRAY X-MP at the NMFECC. Our experience with running MASK illustrates the need for supercomputers to continue work of the kind described. 3 refs., 2 figs
Variance-based sensitivity indices for stochastic models with correlated inputs
Energy Technology Data Exchange (ETDEWEB)
Kala, Zdeněk [Brno University of Technology, Faculty of Civil Engineering, Department of Structural Mechanics Veveří St. 95, ZIP 602 00, Brno (Czech Republic)
2015-03-10
The goal of this article is the formulation of the principles of one of the possible strategies in implementing correlation between input random variables so as to be usable for algorithm development and the evaluation of Sobol’s sensitivity analysis. With regard to the types of stochastic computational models, which are commonly found in structural mechanics, an algorithm was designed for effective use in conjunction with Monte Carlo methods. Sensitivity indices are evaluated for all possible permutations of the decorrelation procedures for input parameters. The evaluation of Sobol’s sensitivity coefficients is illustrated on an example in which a computational model was used for the analysis of the resistance of a steel bar in tension with statistically dependent input geometric characteristics.
Variance-based sensitivity indices for stochastic models with correlated inputs
International Nuclear Information System (INIS)
Kala, Zdeněk
2015-01-01
The goal of this article is the formulation of the principles of one of the possible strategies in implementing correlation between input random variables so as to be usable for algorithm development and the evaluation of Sobol’s sensitivity analysis. With regard to the types of stochastic computational models, which are commonly found in structural mechanics, an algorithm was designed for effective use in conjunction with Monte Carlo methods. Sensitivity indices are evaluated for all possible permutations of the decorrelation procedures for input parameters. The evaluation of Sobol’s sensitivity coefficients is illustrated on an example in which a computational model was used for the analysis of the resistance of a steel bar in tension with statistically dependent input geometric characteristics
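The first-order Sobol' index at the heart of the two records above is Var(E[Y|Xi])/Var(Y). The paper's contribution concerns correlated inputs and decorrelation orderings; the sketch below only illustrates the index itself for a toy model with independent inputs, estimating the conditional mean by binning (the model, sample size, and bin count are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Toy model with independent inputs: Y = 2*X1 + X2.
# Analytically, S1 = 4/5 = 0.8 and S2 = 1/5 = 0.2.
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2.0 * x1 + x2

def first_order_index(x, y, bins=50):
    """Estimate Var(E[Y|X]) / Var(Y) via quantile-binned conditional means."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, x) - 1, 0, bins - 1)
    cond_means = np.array([y[idx == k].mean() for k in range(bins)])
    counts = np.array([(idx == k).sum() for k in range(bins)])
    between = np.sum(counts * (cond_means - y.mean()) ** 2) / len(y)
    return float(between / y.var())

print(round(first_order_index(x1, y), 2), round(first_order_index(x2, y), 2))
```

With correlated inputs, as in the paper, the indices additionally depend on the order in which inputs are decorrelated, which is why all permutations of the decorrelation procedure are evaluated there.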
Cortical surface-based analysis reduces bias and variance in kinetic modeling of brain PET data
DEFF Research Database (Denmark)
Greve, Douglas N; Svarer, Claus; Fisher, Patrick M
2014-01-01
Exploratory (i.e., voxelwise) spatial methods are commonly used in neuroimaging to identify areas that show an effect when a region-of-interest (ROI) analysis cannot be performed because no strong a priori anatomical hypothesis exists. However, noise at a single voxel is much higher than noise in a ROI, making noise management critical to successful exploratory analysis. This work explores how preprocessing choices affect the bias and variability of voxelwise kinetic modeling analysis of brain positron emission tomography (PET) data. These choices include the use of volume- or cortical surface...
Mean – Variance parametric Model for the Classification based on Cries of Babies
Khalid Nazim S. A; Dr. M.B Sanjay Pande
2010-01-01
Cry is a feature which makes an individual take certain care of the infant that initiated it. It is equally well understood that a cry makes a person take certain steps. In the present work, we have tried to implement a mathematical model which can classify a cry into its cluster or group based on certain parameters, according to which a cry is classified as normal or abnormal. To corroborate the methodology we took 17 distinguishing features of cry. The implemented mathematical m...
2015-03-26
response. Additionally, choosing correlated levels for multiple factors results in multicollinearity, which can cause problems such as model misspecification or large variances and covariances for the regression coefficients. A good way to avoid multicollinearity is to use orthogonal, factorial
Integration of Simulink Models with Component-based Software Models
DEFF Research Database (Denmark)
Marian, Nicolae
2008-01-01
Model-based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model-based design involves the use of multiple models that represent different views of a system, having different semantics...... of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, a plant and controller behavior is simulated using graphical blocks to represent mathematical and logical...... constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has...
Directory of Open Access Journals (Sweden)
Rodrigo Reis Mota
2016-09-01
Full Text Available ABSTRACT: The aim of this research was to evaluate the dimensional reduction of additive direct genetic covariance matrices in genetic evaluations of growth traits (range 100-730 days) in Simmental cattle using principal components, as well as to estimate (co)variance components and genetic parameters. Principal component analyses were conducted for five different models: one full-rank and four reduced-rank models. Models were compared using the Akaike information (AIC) and Bayesian information (BIC) criteria. Variance components and genetic parameters were estimated by restricted maximum likelihood (REML). The AIC and BIC values were similar among models. This indicated that parsimonious models could be used in genetic evaluations in Simmental cattle. The first principal component explained more than 96% of total variance in both models. Heritability estimates were higher for advanced ages and varied from 0.05 (100 days) to 0.30 (730 days). Genetic correlation estimates were similar in both models regardless of magnitude and number of principal components. The first principal component was sufficient to explain almost all genetic variance. Furthermore, genetic parameter similarities and lower computational requirements allowed for parsimonious models in genetic evaluations of growth traits in Simmental cattle.
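The dominance of the first principal component reported above can be illustrated with an eigendecomposition of a genetic covariance matrix. The matrix below is made up for illustration (growth traits at increasing ages are typically highly genetically correlated, which is what makes one component dominate); it is not the paper's estimated matrix:

```python
import numpy as np

# Illustrative additive genetic covariance matrix for weights at three ages
# (values assumed; strong positive covariances mimic growth-trait data).
G = np.array([[ 40.,  55.,  60.],
              [ 55.,  90., 100.],
              [ 60., 100., 130.]])

eigval, eigvec = np.linalg.eigh(G)            # ascending eigenvalues
eigval = eigval[::-1]                          # reorder to descending
eigvec = eigvec[:, ::-1]
explained = eigval / eigval.sum()              # share of genetic variance per PC
print(explained.round(3))

# Rank-1 approximation of G from the first principal component alone,
# i.e. the reduced-rank structure a one-PC model would fit.
G1 = eigval[0] * np.outer(eigvec[:, 0], eigvec[:, 0])
```

When the first eigenvalue carries nearly all the trace, replacing G by its rank-1 approximation loses little genetic variance, which is the justification for the parsimonious reduced-rank models.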
Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D
2016-05-01
Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with
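The broken-line linear (BLL) ascending model and the grid-search strategy for starting values can be sketched without the mixed-model machinery. The data below are synthetic, generated from an assumed breakpoint placed at 16.5% only to echo the paper's estimate; the fit here is ordinary least squares per candidate breakpoint, a simplified stand-in for the NLMIXED fit:

```python
import numpy as np

def bll(x, plateau, slope, brk):
    """Broken-line linear ascending: rises with slope up to brk, then flat."""
    return plateau + slope * np.minimum(x - brk, 0.0)

rng = np.random.default_rng(4)
ratio = np.repeat(np.array([14.0, 15.0, 16.0, 17.0, 18.0]), 8)  # SID Trp:Lys, %
gf = bll(ratio, 0.70, 0.03, 16.5) + rng.normal(0.0, 0.005, ratio.size)

# Grid-search the breakpoint; for a fixed breakpoint the plateau and slope
# are linear parameters, so ordinary least squares gives them directly.
best = None
for brk in np.arange(15.0, 18.01, 0.05):
    X = np.column_stack([np.ones_like(ratio), np.minimum(ratio - brk, 0.0)])
    beta, *_ = np.linalg.lstsq(X, gf, rcond=None)
    sse = float(((gf - X @ beta) ** 2).sum())
    if best is None or sse < best[0]:
        best = (sse, float(brk), beta)
print(round(best[1], 2))   # estimated breakpoint, in % SID Trp:Lys
```

In the paper this grid search supplies starting values for the full nonlinear mixed model, which additionally carries the block random effects and heteroskedastic error specification.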
Confidence Interval Approximation For Treatment Variance In ...
African Journals Online (AJOL)
In a random effects model with a single factor, variation is partitioned into two components: residual error variance and treatment variance. While an exact confidence interval can be constructed for the residual error variance, it is not possible to construct one for the treatment variance. This is because the treatment ...
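The one-way partition above can be sketched with the classical ANOVA (method-of-moments) estimator of the treatment variance; all variances and sample sizes below are simulated assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
k, n = 50, 10                          # number of treatments, replicates each
sigma2_t, sigma2_e = 4.0, 1.0          # true treatment and residual variances
effects = rng.normal(0.0, np.sqrt(sigma2_t), size=(k, 1))
y = effects + rng.normal(0.0, np.sqrt(sigma2_e), size=(k, n))

group_means = y.mean(axis=1)
ms_treat = n * group_means.var(ddof=1)                    # between-treatment mean square
ms_error = ((y - group_means[:, None]) ** 2).sum() / (k * (n - 1))
sigma2_t_hat = (ms_treat - ms_error) / n                  # method-of-moments estimator
print(sigma2_t_hat, ms_error)
```

The estimator is a difference of two scaled chi-square variables, which is why no exact pivot (and hence no exact interval) exists for the treatment variance.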
Energy Technology Data Exchange (ETDEWEB)
Negash, A. W.; Mwambi, H.; Zewotir, T.; Eweke, G.
2014-06-01
The most common procedure for analyzing multi-environmental trials is based on the assumption that the residual error variance is homogeneous across all locations considered. However, this may often be unrealistic, and therefore limit the accuracy of variety evaluation or the reliability of variety recommendations. The objectives of this study were to show the advantages of mixed models with spatial variance-covariance structures, and the direct implications of model choice on the inference of varietal performance, ranking and testing based on two multi-environmental data sets from realistic national trials. A model comparison with a χ²-test for the trials in the two data sets (wheat data set BW00RVTI and barley data set BW01RVII) suggested that selected spatial variance-covariance structures fitted the data significantly better than the ANOVA model. The forms of optimally fitted spatial variance-covariance structure, ranking and consistency ratio test were not the same from one trial (location) to the other. Linear mixed models with single-stage analysis including a spatial variance-covariance structure with a group factor of location in the random model also improved the estimation of genotype effects and their ranking. The model also improved varietal performance estimation because of its capacity to handle additional sources of variation, location and genotype by location (environment) interaction variation, and its accommodation of local stationary trends. (Author)
Integration of Simulink Models with Component-based Software Models
Directory of Open Access Journals (Sweden)
MARIAN, N.
2008-06-01
Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, plant and controller behavior is simulated using graphical blocks to represent mathematical and logical constructs and process flow; then software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behavior as a means of computation, communication and constraints, using computational blocks and aggregates for both discrete and continuous behavior, different interconnection and execution disciplines for event-based and time-based controllers, and so on, to encompass the demands for more functionality, at even lower prices, and with opposite constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of the Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has to be analyzed. One way of doing that is to integrate in wrapper files the model back into Simulink S-functions, and use its extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behavior, and the transformation of the software system into the S
Austin, Peter C; Steyerberg, Ewout W
2012-06-20
When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression for the c-statistic was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, or uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
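The unequal-variance binormal result above has a simple closed form, c = Φ((μ₁−μ₀)/√(σ₀²+σ₁²)), which can be checked against the empirical c-statistic (a scaled Mann-Whitney U). A minimal sketch with simulated marker values, not the paper's data:

```python
import numpy as np
from scipy.stats import norm, mannwhitneyu

rng = np.random.default_rng(2)
mu0, mu1, sd0, sd1 = 0.0, 1.0, 1.0, 1.5
x0 = rng.normal(mu0, sd0, 5000)        # marker in subjects without the condition
x1 = rng.normal(mu1, sd1, 5000)        # marker in subjects with the condition

# Closed-form c-statistic under binormality with unequal variances.
c_theory = norm.cdf((mu1 - mu0) / np.hypot(sd0, sd1))

# Empirical c-statistic = P(X1 > X0), i.e. the scaled Mann-Whitney U.
u = mannwhitneyu(x1, x0).statistic
c_emp = u / (x0.size * x1.size)
print(c_theory, c_emp)
```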
Ge, Meng; Jin, Di; He, Dongxiao; Fu, Huazhu; Wang, Jing; Cao, Xiaochun
2017-01-01
Due to the demand for performance improvement and the existence of prior information, semi-supervised community detection with pairwise constraints has become a hot topic. Most existing methods successfully encode the must-link constraints, but neglect the opposite ones, i.e., the cannot-link constraints, which can force the exclusion between nodes. In this paper, we are interested in understanding the role of cannot-link constraints and effectively encoding pairwise constraints. Towards these goals, we define an integral generative process jointly considering the network topology, must-link and cannot-link constraints. We propose to characterize this process as a Multi-variance Mixed Gaussian Generative (MMGG) Model to address the diverse degrees of confidence that exist in network topology and pairwise constraints, and formulate it as a weighted nonnegative matrix factorization problem. The experiments on artificial and real-world networks not only illustrate the superiority of our proposed MMGG but, most importantly, also reveal the roles of pairwise constraints. That is, though must-link is more important than cannot-link when either of them is available, both must-link and cannot-link are equally important when both of them are available. To the best of our knowledge, this is the first work on discovering and exploring the importance of cannot-link constraints in semi-supervised community detection. PMID:28678864
Accurate modeling of UV written waveguide components
DEFF Research Database (Denmark)
Svalgaard, Mikael
BPM simulation results of UV written waveguide components that are indistinguishable from measurements can be achieved on the basis of trajectory scan data and an equivalent step index profile that is very easy to measure.
Starns, Jeffrey J.; Rotello, Caren M.; Hautus, Michael J.
2014-01-01
We tested the dual process and unequal variance signal detection models by jointly modeling recognition and source confidence ratings. The 2 approaches make unique predictions for the slope of the recognition memory zROC function for items with correct versus incorrect source decisions. The standard bivariate Gaussian version of the unequal…
Bright, Molly G.; Murphy, Kevin
2015-01-01
Noise correction is a critical step towards accurate mapping of resting state BOLD fMRI connectivity. Noise sources related to head motion or physiology are typically modelled by nuisance regressors, and a generalised linear model is applied to regress out the associated signal variance. In this study, we use independent component analysis (ICA) to characterise the data variance typically discarded in this pre-processing stage in a cohort of 12 healthy volunteers. The signal variance removed ...
Davies, Patrick Laurie
2014-01-01
Introduction: Introduction; Approximate Models; Notation; Two Modes of Statistical Analysis; Towards One Mode of Analysis; Approximation, Randomness, Chaos, Determinism. Approximation: A Concept of Approximation; Approximating a Data Set by a Model; Approximation Regions; Functionals and Equivariance; Regularization and Optimality; Metrics and Discrepancies; Strong and Weak Topologies; On Being (almost) Honest; Simulations and Tables; Degree of Approximation and p-values; Scales; Stability of Analysis; The Choice of En(α, P); Independence; Procedures, Approximation and Vagueness. Discrete Models: The Empirical Density; Metrics and Discrepancies; The Total Variation Metric; The Kullback-Leibler and Chi-Squared Discrepancies; The Po(λ) Model; The b(k, p) and nb(k, p) Models; The Flying Bomb Data; The Student Study Times Data. Outliers: Outliers, Data Analysis and Models; Breakdown Points and Equivariance; Identifying Outliers and Breakdown; Outliers in Multivariate Data; Outliers in Linear Regression; Outliers in Structured Data. The Location...
Generalized structured component analysis a component-based approach to structural equation modeling
Hwang, Heungsun
2014-01-01
Winner of the 2015 Sugiyama Meiko Award (Publication Award) of the Behaviormetric Society of Japan Developed by the authors, generalized structured component analysis is an alternative to two longstanding approaches to structural equation modeling: covariance structure analysis and partial least squares path modeling. Generalized structured component analysis allows researchers to evaluate the adequacy of a model as a whole, compare a model to alternative specifications, and conduct complex analyses in a straightforward manner. Generalized Structured Component Analysis: A Component-Based Approach to Structural Equation Modeling provides a detailed account of this novel statistical methodology and its various extensions. The authors present the theoretical underpinnings of generalized structured component analysis and demonstrate how it can be applied to various empirical examples. The book enables quantitative methodologists, applied researchers, and practitioners to grasp the basic concepts behind this new a...
MCNP variance reduction overview
International Nuclear Information System (INIS)
Hendricks, J.S.; Booth, T.E.
1985-01-01
The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code
Estimation of measurement variances
International Nuclear Information System (INIS)
Anon.
1981-01-01
In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated for different situations. The various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standards data; (2) estimate random error variances from replicate measurement data; (3) perform a simple analysis of variance to characterize the measurement error structure when biases vary over time
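Items (1) and (2) above can be sketched numerically; the true bias and standard deviation below are invented assumptions, and the pairing trick in (2) uses the fact that a difference of replicates cancels both the item value and any fixed bias:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma_rand, bias = 0.8, 0.5            # assumed true random s.d. and bias

# (1 -> 2) Random error variance from replicate measurements: differences
# of paired replicates cancel the item value and any fixed bias.
errs = rng.normal(bias, sigma_rand, size=(500, 2))
d = errs[:, 0] - errs[:, 1]
var_rand_hat = d.var(ddof=1) / 2.0     # Var(d) = 2 * sigma_rand**2

# (2 -> 1) Systematic error from standards: the mean deviation of
# measurements of a certified standard estimates the bias.
standard_value = 10.0
meas = standard_value + rng.normal(bias, sigma_rand, 500)
bias_hat = meas.mean() - standard_value
print(var_rand_hat, bias_hat)
```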
Peralta, Yadira; Moreno, Mario; Harwell, Michael; Guzey, S. Selcen; Moore, Tamara J.
2018-01-01
Variance heterogeneity is a common feature of educational data when treatment differences expressed through means are present, and often reflects a treatment by subject interaction with respect to an outcome variable. Identifying variables that account for this interaction can enhance understanding of whom a treatment does and does not benefit in…
Tritium permeation model for plasma facing components
Longhurst, G. R.
1992-12-01
This report documents the development of a simplified one-dimensional tritium permeation and retention model. The model makes use of the same physical mechanisms as more sophisticated, time-transient codes, such as implantation, recombination, diffusion, trapping and thermal gradient effects. It takes advantage of a number of simplifications and approximations to solve the steady-state problem, and then provides interpolating functions to make estimates of intermediate states based on the steady-state solution. The model is developed for solution using commercial spreadsheet software such as Lotus 1-2-3. Comparison calculations against the verified and validated TMAP4 transient code show good agreement. Results of calculations for the ITER CDA divertor are also included.
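A spreadsheet-style steady-state balance of the kind described above can be sketched in a few lines; this is not the report's model, and every number below (flux, recombination coefficient, diffusivity, thickness) is an assumed placeholder:

```python
import numpy as np

# Illustrative steady state: the implanted flux phi is released by
# recombination at the front face, giving a front concentration
# c = sqrt(phi / Kr); permeation through a wall of thickness L is then
# diffusion-limited with the back-face concentration taken as ~0.
phi = 1.0e19          # implantation flux, atoms m^-2 s^-1 (assumed)
Kr = 1.0e-27          # recombination coefficient, m^4 s^-1 (assumed)
D = 1.0e-9            # tritium diffusivity, m^2 s^-1 (assumed)
L = 5.0e-3            # wall thickness, m (assumed)

c_front = np.sqrt(phi / Kr)            # atoms m^-3 at the plasma-facing side
j_perm = D * c_front / L               # permeation flux, atoms m^-2 s^-1
print(f"{c_front:.2e} {j_perm:.2e}")
```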
Nitrogen component in nonpoint source pollution models
Pollutants entering a water body can be very destructive to the health of that system. Best Management Practices (BMPs) and/or conservation practices are used to reduce these pollutants, but understanding the most effective practices is very difficult. Watershed models are an effective tool to aid...
Heterogeneity of variance and its implications on dairy cattle breeding
African Journals Online (AJOL)
Milk yield data (n = 12307) from 116 Holstein-Friesian herds were grouped into three production environments based on mean and standard deviation of herd 305-day milk yield and evaluated for within herd variation using univariate animal model procedures. Variance components were estimated by derivative free REML ...
Directory of Open Access Journals (Sweden)
Yun Shi
2014-01-01
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and on the estimate of landslide mass volume from the constructed DEM.
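The defining feature of a multiplicative error model, residual spread growing with the true value, is easy to demonstrate; the 1% noise level and range interval below are invented for illustration and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Multiplicative error model: y_i = f_i * (1 + eps_i), the error being
# proportional to the true value, as in LiDAR ranging.
f = rng.uniform(100.0, 1000.0, 2000)            # simulated true ranges, m
y = f * (1.0 + rng.normal(0.0, 0.01, f.size))   # 1% proportional noise

resid = y - f
spread_small = resid[f < 300.0].std()           # residual spread, short ranges
spread_large = resid[f > 700.0].std()           # residual spread, long ranges

# One simple estimator of the (unitless) variance of unit weight,
# weighting each residual by 1/f:
s0_hat = np.sqrt(np.mean((resid / f) ** 2))
print(spread_small, spread_large, s0_hat)
```

Treating such data as additive-error (equal weights) would over-weight the long-range, high-noise observations.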
How Many Separable Sources? Model Selection In Independent Components Analysis
DEFF Research Database (Denmark)
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Mixed ICA/PCA is of potential interest in any field where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.
Model-integrating software components engineering flexible software systems
Derakhshanmanesh, Mahdi
2015-01-01
In his study, Mahdi Derakhshanmanesh builds on the state of the art in modeling by proposing to integrate models into running software on the component-level without translating them to code. Such so-called model-integrating software exploits all advantages of models: models implicitly support a good separation of concerns, they are self-documenting and thus improve understandability and maintainability and in contrast to model-driven approaches there is no synchronization problem anymore between the models and the code generated from them. Using model-integrating components, software will be
Modeling money demand components in Lebanon using autoregressive models
International Nuclear Information System (INIS)
Mourad, M.
2008-01-01
This paper analyses the monetary aggregate in Lebanon and its different components using AR model methodology. Thirteen variables in monthly data have been studied for the period January 1990 through December 2005. Using the Augmented Dickey-Fuller (ADF) procedure, twelve variables are integrated of order 1, thus they need the filter (1-B) to become stationary; however, the variable X13,t (claims on private sector) becomes stationary with the filter (1-B)(1-B^12). The ex-post forecasts have been calculated for twelve horizons and for one horizon (one-step-ahead forecast). The quality of forecasts has been measured using the MAPE criterion, for which the forecasts are good because the MAPE values are low. Finally, a pursuit of this research using the cointegration approach is proposed. (author)
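The (1-B) filter and the MAPE criterion used above can be sketched on synthetic data; the series and the forecast numbers below are invented, not the Lebanese aggregates:

```python
import numpy as np

rng = np.random.default_rng(5)

# An I(1) series needs one pass of the (1-B) difference filter to become
# stationary; (1-B)(1-B^12) would additionally remove a seasonal unit
# root at lag 12 for monthly data.
eps = rng.normal(size=5000)
walk = np.cumsum(eps)                  # integrated of order 1
diff = np.diff(walk)                   # (1-B) recovers the stationary shocks

# MAPE forecast-quality criterion on invented actual/forecast values:
actual = np.array([102.0, 98.0, 105.0])
forecast = np.array([100.0, 100.0, 100.0])
mape = 100.0 * np.mean(np.abs((actual - forecast) / actual))
print(diff.var(), mape)
```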
Integration of Simulink Models with Component-based Software Models
DEFF Research Database (Denmark)
Marian, Nicolae; Top, Søren
2008-01-01
A software component-based system aims to organize system architecture and behaviour as a means of computation, communication and constraints, using computational blocks and aggregates for both discrete and continuous behaviour, different interconnection and execution disciplines for event-based and time-based controllers, and so on, to encompass the demands for more functionality, at even lower prices, and with opposite constraints. Once specified, the software model has to be analyzed. One way of doing that is to integrate in wrapper files the model back into Simulink S-functions, and use its extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behaviour, and the transformation of the software system into the S-functions. The general aim of this work is the improvement of multi-disciplinary development of embedded systems with the focus on the relation...
Public health component in building information modeling
Trufanov, A. I.; Rossodivita, A.; Tikhomirov, A. A.; Berestneva, O. G.; Marukhina, O. V.
2018-05-01
The building information modelling (BIM) concept has established itself as an effective and practical approach to plan, design, construct, and manage buildings and infrastructure. Analysis of the governance literature has shown that BIM tools do not take fully into account the growing demands from the ecology and health fields. In this connection, it is possible to adapt such tools to give due consideration to the sanitary and hygienic specifications of materials used in the construction industry. It is proposed to do so through the introduction of assessments that meet the requirements of national sanitary standards. This approach was demonstrated in a case study of the Revit® program.
How Many Separable Sources? Model Selection In Independent Components Analysis
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
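The opening claim, that standard ICA cannot separate two Gaussian sources, follows from rotation invariance: an isotropic Gaussian pair looks the same after any mixing rotation, so the higher-order statistics that ICA contrasts rely on carry no information. A minimal numerical illustration (simulated data, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(6)

# Two unit-variance Gaussian sources, mixed by a rotation.
s = rng.normal(size=(2, 100000))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x = R @ s                                        # mixed observations

def excess_kurtosis(v):
    z = (v - v.mean()) / v.std()
    return (z ** 4).mean() - 3.0                 # ~0 for a Gaussian

# The workhorse higher-order statistic of many ICA contrasts is ~0 in
# every direction, so no rotation is preferred and separation fails.
print([excess_kurtosis(row) for row in x])
```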
Budiarto, E; Keijzer, M; Storchi, P R M; Heemink, A W; Breedveld, S; Heijmen, B J M
2014-01-20
Radiotherapy dose delivery in the tumor and surrounding healthy tissues is affected by movements and deformations of the corresponding organs between fractions. The random variations may be characterized by non-rigid, anisotropic principal component analysis (PCA) modes. In this article new dynamic dose deposition matrices, based on established PCA modes, are introduced as a tool to evaluate the mean and the variance of the dose at each target point resulting from any given set of fluence profiles. The method is tested for a simple cubic geometry and for a prostate case. The movements spread out the distributions of the mean dose and cause the variance of the dose to be highest near the edges of the beams. The non-rigidity and anisotropy of the movements are reflected in both quantities. The dynamic dose deposition matrices facilitate the inclusion of the mean and the variance of the dose in the existing fluence-profile optimizer for radiotherapy planning, to ensure robust plans with respect to the movements.
Algorithmic fault tree construction by component-based system modeling
International Nuclear Information System (INIS)
Majdara, Aref; Wakabayashi, Toshio
2008-01-01
Computer-aided fault tree generation can be easier, faster and less vulnerable to errors than conventional manual fault tree construction. In this paper, a new approach for algorithmic fault tree generation is presented. The method mainly consists of a component-based system modeling procedure and a trace-back algorithm for fault tree synthesis. Components, as the building blocks of systems, are modeled using function tables and state transition tables. The proposed method can be used for a wide range of systems with various kinds of components, if an inclusive component database is developed. (author)
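A toy trace-back over hypothetical component tables conveys the idea; every event and component name below is invented for illustration, and this is not the authors' algorithm, only the recursive expansion it rests on:

```python
# Each entry maps an output deviation to its immediate causes: either an
# internal failure mode or a deviation at the component's input.
causes = {
    "no_flow_at_outlet": ["valve_stuck_closed", "no_flow_at_valve_inlet"],
    "no_flow_at_valve_inlet": ["no_flow_from_pump"],
    "no_flow_from_pump": ["pump_failed", "supply_tank_empty"],
}

def trace_back(event):
    """Expand an event into a nested OR tree whose leaves are basic events."""
    children = causes.get(event)
    if children is None:
        return event                               # basic (leaf) event
    return {event: [trace_back(c) for c in children]}

tree = trace_back("no_flow_at_outlet")
print(tree)
```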
Efficient transfer of sensitivity information in multi-component models
International Nuclear Information System (INIS)
Abdel-Khalik, Hany S.; Rabiti, Cristian
2011-01-01
In support of adjoint-based sensitivity analysis, this manuscript presents a new method to efficiently transfer adjoint information between components in a multi-component model, where the output of one component is passed as input to the next component. Often, one is interested in evaluating the sensitivities of the responses calculated by the last component to the inputs of the first component in the overall model. The presented method has two advantages over existing methods, which may be classified into two broad categories: brute-force-type methods and amalgamated-type methods. First, the presented method determines the minimum number of adjoint evaluations for each component, as opposed to the brute-force-type methods, which require full evaluation of all sensitivities for all responses calculated by each component in the overall model and prove computationally prohibitive for realistic problems. Second, the new method treats each component as a black box, as opposed to amalgamated-type methods, which require explicit knowledge of the system of equations associated with each component in order to reach the minimum number of adjoint evaluations. (author)
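The economy of adjoint accumulation over a component chain can be seen in the linearized case, where each component is a Jacobian; the dimensions below are arbitrary assumptions, chosen so that there are far fewer responses than inputs:

```python
import numpy as np

rng = np.random.default_rng(7)

# Linearized three-component chain: inputs -> A -> B -> C -> responses.
n_in, n1, n2, n_out = 50, 40, 30, 2
JA = rng.normal(size=(n1, n_in))
JB = rng.normal(size=(n2, n1))
JC = rng.normal(size=(n_out, n2))

# Brute force: push all n_in input directions forward through the chain.
S_forward = JC @ (JB @ JA)

# Adjoint: pull only the n_out response directions backward, one adjoint
# evaluation per response per component, treating each Jacobian as a
# black box.
S_adjoint = (JA.T @ (JB.T @ JC.T)).T

print(np.allclose(S_forward, S_adjoint), S_adjoint.shape)
```

With 2 responses and 50 inputs, the backward sweep needs 2 evaluations per component instead of 50.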
Component based modelling of piezoelectric ultrasonic actuators for machining applications
International Nuclear Information System (INIS)
Saleem, A; Ahmed, N; Salah, M; Silberschmidt, V V
2013-01-01
Ultrasonically Assisted Machining (UAM) is an emerging technology that has been utilized to improve the surface finish in machining processes such as turning, milling, and drilling. In this context, piezoelectric ultrasonic transducers are used to vibrate the cutting tip at a predetermined amplitude and frequency while machining. However, modelling and simulation of these transducers is a tedious and difficult task, due to the inherent nonlinearities associated with smart materials. Therefore, this paper presents a component-based model of ultrasonic transducers that mimics the nonlinear behaviour of such a system. The system is decomposed into components, a mathematical model of each component is created, and the whole system model is accomplished by aggregating the basic component models. System parameters are identified using a finite element technique, and the model has then been used to simulate the system in Matlab/SIMULINK. Various operating conditions are tested to demonstrate the system performance
Components in models of learning: Different operationalisations and relations between components
Directory of Open Access Journals (Sweden)
Mirkov Snežana
2013-01-01
This paper provides a presentation of different operationalisations of components in different models of learning. Special emphasis is on the empirical verifications of relations between components. Starting from the research of congruence between learning motives and strategies, underlying the general model of school learning that comprises different approaches to learning, we have analyzed the empirical verifications of the factor structure of instruments containing the scales of motives and the learning strategies corresponding to these motives. Considering the problems in the conceptualization of the achievement approach to learning, we have discussed the ways of operationalising the goal orientations and exploring their role in using learning strategies, especially within the model of the regulation of constructive learning processes. This model has served as the basis for researching learning styles that are the combination of a large number of components. Complex relations between the components point to the need for further investigation of the constructs involved in various models. We have discussed the findings and implications of the studies of relations between the components involved in different models, especially between learning motives/goals and learning strategies. We have analyzed the role of regulation in the learning process, whose elaboration, as indicated by empirical findings, can contribute to a more precise operationalisation of certain learning components. [Projects of the Ministry of Science of the Republic of Serbia, no. 47008: Improving the quality and accessibility of education in the modernisation processes of Serbia, and no. 179034: From encouraging initiative, cooperation and creativity in education to new roles and identities in society]
Revision: Variance Inflation in Regression
Directory of Open Access Journals (Sweden)
D. R. Jensen
2013-01-01
the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reassess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.
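The conventional VIF this record revises is straightforward to compute; the regressors below are simulated, with two made deliberately near-collinear:

```python
import numpy as np

rng = np.random.default_rng(8)

# Variance inflation factor: VIF_j = 1 / (1 - R_j^2), with R_j^2 from
# regressing regressor j on the remaining regressors plus an intercept.
n = 500
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(0.0, 0.1, n)      # nearly collinear with x1
x3 = rng.normal(size=n)                # essentially orthogonal regressor
X = np.column_stack([x1, x2, x3])

def vif(X, j):
    y = X[:, j]
    Z = np.column_stack([np.ones(X.shape[0]), np.delete(X, j, axis=1)])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    r2 = 1.0 - ((y - Z @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return 1.0 / (1.0 - r2)

print([vif(X, j) for j in range(X.shape[1])])
```

The linked pair inflates each other's VIF far above 1, while the unlinked regressor stays near 1, the distinction the abstract's "linked/unlinked" models formalize.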
Models for integrated components coupled with their EM environment
Ioan, D.; Schilders, W.H.A.; Ciuprina, G.; Meijs, van der N.P.; Schoenmaker, W.
2008-01-01
Abstract: Purpose – The main aim of this study is the modelling of the interaction of on-chip components with their electromagnetic environment. Design/methodology/approach – The integrated circuit is decomposed in passive and active components interconnected by means of terminals and connectors
Feature-based component model for design of embedded systems
Zha, Xuan Fang; Sriram, Ram D.
2004-11-01
An embedded system is a hybrid of hardware and software, combining software's flexibility with hardware's real-time performance. Embedded systems can be considered assemblies of hardware and software components. An Open Embedded System Model (OESM) is currently being developed at NIST to provide a standard representation and exchange protocol for embedded systems and system-level design, simulation, and testing information. This paper proposes an approach to representing an embedded system feature-based model in OESM, i.e., an Open Embedded System Feature Model (OESFM), addressing models of embedded system artifacts, embedded system components, embedded system features, and embedded system configuration/assembly. The approach provides an object-oriented UML (Unified Modeling Language) representation for the embedded system feature model and defines an extension to the NIST Core Product Model. The model provides a feature-based component framework allowing the designer to develop a virtual embedded system prototype by assembling virtual components. The framework not only provides a formal, precise model of the embedded system prototype but also offers the possibility of designing variations of prototypes whose members are derived by changing certain virtual components with different features. A case study example is discussed to illustrate the embedded system model.
Estimation of measurement variances
International Nuclear Information System (INIS)
Jaech, J.L.
1984-01-01
The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variances to characterize the measurement error structure with biases varying over time is presented
Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo
Martinez, Josue G.; Liang, Faming; Zhou, Lan; Carroll, Raymond J.
2010-01-01
model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order
Gao, Jing; Burt, James E.
2017-12-01
This study investigates the usefulness of a per-pixel bias-variance error decomposition (BVD) for understanding and improving spatially-explicit data-driven models of continuous variables in environmental remote sensing (ERS). BVD is a model evaluation method that originated in machine learning and has not previously been examined for ERS applications. Demonstrated with a showcase regression tree model mapping land imperviousness (0-100%) using Landsat images, our results showed that BVD can reveal sources of estimation errors, map how these sources vary across space, reveal the effects of various model characteristics on estimation accuracy, and enable in-depth comparison of different error metrics. Specifically, BVD bias maps can help analysts identify and delineate model spatial non-stationarity; BVD variance maps can indicate potential effects of ensemble methods (e.g. bagging) and inform efficient training sample allocation: training samples should capture the full complexity of the modeled process, and more samples should be allocated to regions with more complex underlying processes rather than regions covering larger areas. Through examining the relationships between model characteristics and their effects on estimation accuracy revealed by BVD for both absolute and squared errors (i.e. the error is the absolute or the squared value of the difference between observation and estimate), we found that the two error metrics embody different diagnostic emphases, can lead to different conclusions about the same model, and may suggest different solutions for performance improvement. We emphasize BVD's strength in revealing the connection between model characteristics and estimation accuracy, as understanding this relationship empowers analysts to effectively steer performance through model adjustments.
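The pointwise decomposition the study builds on can be illustrated with a small Monte Carlo sketch; a deliberately underfit polynomial stands in for the regression tree, and all data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

def true_f(x):
    return np.sin(x)

# Monte Carlo bias-variance decomposition of a degree-1 polynomial model,
# evaluated pointwise (the per-pixel analogue of the paper's setting).
x_test = np.linspace(0, np.pi, 50)
preds = []
for _ in range(300):                       # 300 independent training sets
    x_tr = rng.uniform(0, np.pi, 30)
    y_tr = true_f(x_tr) + 0.1 * rng.normal(size=30)
    coef = np.polyfit(x_tr, y_tr, deg=1)   # deliberately underfit model
    preds.append(np.polyval(coef, x_test))
preds = np.array(preds)

bias2 = (preds.mean(axis=0) - true_f(x_test)) ** 2   # squared bias per point
var = preds.var(axis=0)                              # variance per point
mse = ((preds - true_f(x_test)) ** 2).mean(axis=0)   # expected squared error
# At every evaluation point, mse decomposes exactly into bias**2 + variance
```

Mapping `bias2` and `var` separately over the test points is what distinguishes, say, systematic underfit (high bias) from instability that bagging could reduce (high variance).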
Robustness of Component Models in Energy System Simulators
DEFF Research Database (Denmark)
Elmegaard, Brian
2003-01-01
During the development of the component-based energy system simulator DNA (Dynamic Network Analysis), several obstacles to easy use of the program have been observed. Some of these have to do with the nature of the program being based on a modelling language, not a graphical user interface (GUI......). Others have to do with the interaction between models of the nature of the substances in an energy system (e.g., fuels, air, flue gas), models of the components in a system (e.g., heat exchangers, turbines, pumps), and the solver for the system of equations. This paper proposes that the interaction...
Integrating environmental component models. Development of a software framework
Schmitz, O.
2014-01-01
Integrated models consist of interacting component models that represent various natural and social systems. They are important tools to improve our understanding of environmental systems, to evaluate cause–effect relationships of human–natural interactions, and to forecast the behaviour of
Directory of Open Access Journals (Sweden)
Elmer Francisco Valencia Tapia
2011-06-01
The heterogeneity of variance components and its effect on estimates of heritability and repeatability of milk yield in Holstein cattle was evaluated. Herds were grouped according to production level (low, medium and high) and evaluated on the untransformed, square-root and logarithmic scales. Variance components were estimated by the restricted maximum likelihood method. The animal model included the fixed effects of herd-year-season and the covariates lactation length (linear effect) and cow age at calving (linear and quadratic effects), plus the random direct additive genetic, permanent environment and residual effects. On the untransformed scale, all variance components were heterogeneous across the three production levels. On this scale, the residual and phenotypic variances were positively associated with production level, whereas on the logarithmic scale the association was negative. Heterogeneity of the phenotypic variance and of its components affected the heritability estimates more than those of repeatability. The efficiency of selection for milk yield may therefore be affected by the production level at which the genetic parameters are estimated.
Irwin, Brian J.; Wagner, Tyler; Bence, James R.; Kepler, Megan V.; Liu, Weihai; Hayes, Daniel B.
2013-01-01
Partitioning total variability into its component temporal and spatial sources is a powerful way to better understand time series and elucidate trends. The data available for such analyses of fish and other populations are usually nonnegative integer counts of the number of organisms, often dominated by many low values with few observations of relatively high abundance. These characteristics are not well approximated by the Gaussian distribution. We present a detailed description of a negative binomial mixed-model framework that can be used to model count data and quantify temporal and spatial variability. We applied these models to data from four fishery-independent surveys of Walleyes Sander vitreus across the Great Lakes basin. Specifically, we fitted models to gill-net catches from Wisconsin waters of Lake Superior; Oneida Lake, New York; Saginaw Bay in Lake Huron, Michigan; and Ohio waters of Lake Erie. These long-term monitoring surveys varied in overall sampling intensity, the total catch of Walleyes, and the proportion of zero catches. Parameter estimation included the negative binomial scaling parameter, and we quantified the random effects as the variations among gill-net sampling sites, the variations among sampled years, and site × year interactions. This framework (i.e., the application of a mixed model appropriate for count data in a variance-partitioning context) represents a flexible approach that has implications for monitoring programs (e.g., trend detection) and for examining the potential of individual variance components to serve as response metrics to large-scale anthropogenic perturbations or ecological changes.
Influence of Family Structure on Variance Decomposition
DEFF Research Database (Denmark)
Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter
Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained ge...... capturing pure noise. Therefore it is necessary to use both criteria, high likelihood ratio in favor of a more complex genetic model and proportion of genetic variance explained, to identify biologically important gene groups...
Efficient Cardinality/Mean-Variance Portfolios
Brito, R. Pedro; Vicente, Luís Nunes
2014-01-01
We propose a novel approach to handle cardinality in portfolio selection, by means of a biobjective cardinality/mean-variance problem, allowing the investor to analyze the efficient tradeoff between return-risk and the number of active positions. Recent progress in multiobjective optimization without derivatives allows us to robustly compute (in-sample) the whole cardinality/mean-variance efficient frontier, for a variety of data sets and mean-variance models. Our results s...
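A brute-force sketch of the cardinality/variance tradeoff behind the biobjective formulation; the paper computes the frontier with derivative-free multiobjective optimization, while this toy enumeration over a synthetic covariance matrix only illustrates the frontier's shape:

```python
import numpy as np
from itertools import combinations

# For each number of active positions k, find the minimum-variance fully
# invested portfolio over all k-asset subsets (feasible only for small n).
rng = np.random.default_rng(2)
n = 6
A = rng.normal(size=(n, n))
Sigma = A @ A.T + n * np.eye(n)           # synthetic positive-definite covariance

def min_var(subset):
    S = Sigma[np.ix_(subset, subset)]
    ones = np.ones(len(subset))
    w = np.linalg.solve(S, ones)
    w /= w.sum()                          # weights sum to one
    return w @ S @ w

frontier = [min(min_var(list(c)) for c in combinations(range(n), k))
            for k in range(1, n + 1)]
# Allowing more active positions can only help, so the attainable
# minimum variance is non-increasing in k
```

Plotting `frontier` against k is the one-dimensional slice (at the minimum-variance end) of the biobjective surface the investor trades off against.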
Ferrer, Rebecca A; Klein, William M P; Persoskie, Alexander; Avishai-Yitshak, Aya; Sheeran, Paschal
2016-10-01
Although risk perception is a key predictor in health behavior theories, current conceptions of risk comprise only one (deliberative) or two (deliberative vs. affective/experiential) dimensions. This research tested a tripartite model that distinguishes among deliberative, affective, and experiential components of risk perception. In two studies, and in relation to three common diseases (cancer, heart disease, diabetes), we used confirmatory factor analyses to examine the factor structure of the tripartite risk perception (TRIRISK) model and compared the fit of the TRIRISK model to dual-factor and single-factor models. In a third study, we assessed concurrent validity by examining the impact of cancer diagnosis on (a) levels of deliberative, affective, and experiential risk perception, and (b) the strength of relations among risk components, and tested predictive validity by assessing relations with behavioral intentions to prevent cancer. The tripartite factor structure was supported, producing better model fit across diseases (studies 1 and 2). Inter-correlations among the components were significantly smaller among participants who had been diagnosed with cancer, suggesting that affected populations make finer-grained distinctions among risk perceptions (study 3). Moreover, all three risk perception components predicted unique variance in intentions to engage in preventive behavior (study 3). The TRIRISK model offers both a novel conceptualization of health-related risk perceptions, and new measures that enhance predictive validity beyond that engendered by unidimensional and bidimensional models. The present findings have implications for the ways in which risk perceptions are targeted in health behavior change interventions, health communications, and decision aids.
Restricted Variance Interaction Effects
DEFF Research Database (Denmark)
Cortina, Jose M.; Köhler, Tine; Keeler, Kathleen R.
2018-01-01
Although interaction hypotheses are increasingly common in our field, many recent articles point out that authors often have difficulty justifying them. The purpose of this article is to describe a particular type of interaction: the restricted variance (RV) interaction. The essence of the RV int...
Component and system simulation models for High Flux Isotope Reactor
International Nuclear Information System (INIS)
Sozer, A.
1989-08-01
Component models for the High Flux Isotope Reactor (HFIR) have been developed. The models are HFIR core, heat exchangers, pressurizer pumps, circulation pumps, letdown valves, primary head tank, generic transport delay (pipes), system pressure, loop pressure-flow balance, and decay heat. The models were written in FORTRAN and can be run on different computers, including IBM PCs, as they do not use any specific simulation languages such as ACSL or CSMP. 14 refs., 13 figs
A probabilistic model for component-based shape synthesis
Kalogerakis, Evangelos
2012-07-01
We present an approach to synthesizing shapes from complex domains, by identifying new plausible combinations of components from existing shapes. Our primary contribution is a new generative model of component-based shape structure. The model represents probabilistic relationships between properties of shape components, and relates them to learned underlying causes of structural variability within the domain. These causes are treated as latent variables, leading to a compact representation that can be effectively learned without supervision from a set of compatibly segmented shapes. We evaluate the model on a number of shape datasets with complex structural variability and demonstrate its application to amplification of shape databases and to interactive shape synthesis.
Towards a Component Based Model for Database Systems
Directory of Open Access Journals (Sweden)
Octavian Paul ROTARU
2004-02-01
Due to their effectiveness in the design and development of software applications and their recognized advantages in terms of reusability, Component-Based Software Engineering (CBSE) concepts have been arousing a great deal of interest in recent years. This paper presents and extends a component-based approach to object-oriented database systems (OODB) introduced by us in [1] and [2]. Components are proposed as a new abstraction level for database systems: logical partitions of the schema. In this context, scope is introduced as an escalated property for transactions. Components are studied from the integrity, consistency, and concurrency control perspectives. The main benefits of our proposed component model for OODB are the reusability of the database design, including the access statistics required for proper query optimization, and a smooth information exchange. The integration of crosscutting concerns into the component database model using aspect-oriented techniques is also discussed. One of the main goals is to define a method for the assessment of component composition capabilities. These capabilities are restricted by the component's interface and measured in terms of adaptability, degree of composability and acceptability level. The above-mentioned metrics are extended from database components to generic software components. This paper extends and consolidates into one common view the ideas previously presented by us in [1, 2, 3]. [1] Octavian Paul Rotaru, Marian Dobre, Component Aspects in Object Oriented Databases, Proceedings of the International Conference on Software Engineering Research and Practice (SERP'04), Volume II, ISBN 1-932415-29-7, pages 719-725, Las Vegas, NV, USA, June 2004. [2] Octavian Paul Rotaru, Marian Dobre, Mircea Petrescu, Integrity and Consistency Aspects in Component-Oriented Databases, Proceedings of the International Symposium on Innovation in Information and Communication Technology (ISIICT
Directory of Open Access Journals (Sweden)
Fatemeh PooraghaRoodbarde
2017-04-01
Objective: The present study aimed to examine the effect of multidimensional motivation interventions based on Martin's model on the cognitive and behavioral components of motivation. Methods: The research design was prospective, with a pretest, posttest, and follow-up, and experimental and control groups. In this study, 90 students (45 in the experimental group and 45 in the control group) constituted the sample, selected by convenience sampling. The motivation interventions were implemented in fifteen 60-minute sessions, three times a week, lasting about two months. Data were analyzed using a repeated-measures multivariate analysis of variance. Results: The findings revealed that the multidimensional motivation interventions resulted in a significant increase in the scores of cognitive components such as self-efficacy, mastery goals, test anxiety, and feeling of lack of control, and of behavioral components such as task management. The results of the one-month follow-up indicated the stability of the changes in test anxiety and cognitive strategies; however, no significant difference was found between the two groups at follow-up in self-efficacy, mastery goals, locus of control, and motivation. Conclusions: The evidence indicates that academic motivation is a multidimensional construct affected by cognitive and behavioral factors; therefore, researchers, teachers, and other authorities should attend to these factors to increase academic motivation.
Ottley, Jennifer Riggie; Ferron, John M.; Hanline, Mary Frances
2016-01-01
The purpose of this study was to explain the variability in data collected from a single-case design study and to identify predictors of communicative outcomes for children with developmental delays or disabilities (n = 4). Using SAS® University Edition, we fit multilevel models with time nested within children. Children's level of baseline…
Expected Stock Returns and Variance Risk Premia
DEFF Research Database (Denmark)
Bollerslev, Tim; Zhou, Hao
risk premium with the P/E ratio results in an R2 for the quarterly returns of more than twenty-five percent. The results depend crucially on the use of "model-free", as opposed to standard Black-Scholes, implied variances, and realized variances constructed from high-frequency intraday, as opposed...
Local variances in biomonitoring
International Nuclear Information System (INIS)
Wolterbeek, H.Th; Verburg, T.G.
2001-01-01
The present study was undertaken to explore possibilities of judging survey quality on the basis of a limited and restricted number of a-priori observations. Here, quality is defined as the ratio between survey and local variance (signal-to-noise ratio). The results indicate that the presented surveys do not permit such judgement; the discussion also suggests that the 5-fold local sampling strategies do not merit any sound judgement. As it stands, uncertainties in local determinations may largely obscure possibilities to judge survey quality. The results further imply that surveys will benefit from procedures, controls and approaches in sampling and sample handling that assess the average, the variance and the nature of the distribution of elemental concentrations at local sites. This reasoning is compatible with the idea of the site as a basic homogeneous survey unit, which is implicitly and conceptually underlying any survey performed. (author)
Modeling fabrication of nuclear components: An integrative approach
Energy Technology Data Exchange (ETDEWEB)
Hench, K.W.
1996-08-01
Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components in an environment of intense regulation and shrinking budgets. This dissertation presents an integrative two-stage approach to modeling the casting operation for fabrication of nuclear weapon primary components. The first stage optimizes personnel radiation exposure for the casting operation layout by modeling the operation as a facility layout problem formulated as a quadratic assignment problem. The solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.
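The first stage above can be illustrated as a tiny quadratic assignment problem; the flow and distance matrices below are invented for illustration, and exhaustive search stands in for the dissertation's evolutionary heuristic:

```python
import numpy as np
from itertools import permutations

# Toy quadratic assignment formulation of the layout problem: F[i][j] is
# the flow (e.g. material movement or exposure-weighted traffic) between
# operations i and j, D[a][b] the distance between locations a and b.
# An assignment p places operation i at location p[i].
F = np.array([[0, 3, 1],
              [3, 0, 2],
              [1, 2, 0]])
D = np.array([[0, 1, 4],
              [1, 0, 2],
              [4, 2, 0]])

def qap_cost(p):
    m = len(p)
    return sum(F[i, j] * D[p[i], p[j]] for i in range(m) for j in range(m))

best = min(permutations(range(3)), key=qap_cost)
# Exhaustive search is fine for 3 operations; realistic layouts need a
# heuristic because the number of assignments grows factorially.
```

The second stage of the approach would then feed layouts like `best` into a discrete-event simulation to compare their operational performance.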
Modeling cellular networks in fading environments with dominant specular components
Alammouri, Ahmad; Elsawy, Hesham; Salem, Ahmed Sultan; Di Renzo, Marco; Alouini, Mohamed-Slim
2016-01-01
to the Nakagami-m fading in some special cases. However, neither the Rayleigh nor the Nakagami-m accounts for dominant specular components (DSCs) which may appear in realistic fading channels. In this paper, we present a tractable model for cellular networks
Modeling the evaporation of sessile multi-component droplets
Diddens, C.; Kuerten, Johannes G.M.; van der Geld, C.W.M.; Wijshoff, H.M.A.
2017-01-01
We extended a mathematical model for the drying of sessile droplets, based on the lubrication approximation, to binary mixture droplets. This extension is relevant for, e.g., inkjet printing applications, where inks consisting of several components are used. The extension involves the generalization of
Incremental principal component pursuit for video background modeling
Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt
2017-03-14
An incremental Principal Component Pursuit (PCP) algorithm for video background modeling that is able to process one frame at a time while adapting to changes in the background, has a computational complexity that allows for real-time processing, has a low memory footprint, and is robust to translational and rotational jitter.
Do Knowledge-Component Models Need to Incorporate Representational Competencies?
Rau, Martina Angela
2017-01-01
Traditional knowledge-component models describe students' content knowledge (e.g., their ability to carry out problem-solving procedures or their ability to reason about a concept). In many STEM domains, instruction uses multiple visual representations such as graphs, figures, and diagrams. The use of visual representations implies a…
Hybrid time/frequency domain modeling of nonlinear components
DEFF Research Database (Denmark)
Wiechowski, Wojciech Tomasz; Lykkegaard, Jan; Bak, Claus Leth
2007-01-01
This paper presents a novel, three-phase hybrid time/frequency methodology for modelling of nonlinear components. The algorithm has been implemented in the DIgSILENT PowerFactory software using the DIgSILENT Programming Language (DPL), as a part of the work described in [1]. Modified HVDC benchmark...
Local variances in biomonitoring
International Nuclear Information System (INIS)
Wolterbeek, H.T.
1999-01-01
The present study deals with the (larger-scaled) biomonitoring survey and specifically focuses on the sampling site. In most surveys, the sampling site is simply selected or defined as a spot of (geographical) dimensions which is small relative to the dimensions of the total survey area. Implicitly it is assumed that the sampling site is essentially homogeneous with respect to the investigated variation in survey parameters. As such, the sampling site is mostly regarded as 'the basic unit' of the survey. As a logical consequence, the local (sampling site) variance should also be seen as a basic and important characteristic of the survey. During the study, work is carried out to gain more knowledge of the local variance. Multiple sampling is carried out at a specific site (tree bark, mosses, soils), multi-elemental analyses are carried out by NAA, and local variances are investigated by conventional statistics, factor analytical techniques, and bootstrapping. Consequences of the outcomes are discussed in the context of sampling, sample handling and survey quality. (author)
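The bootstrapping of local variances mentioned above can be sketched as follows; the replicate concentration values are invented for illustration:

```python
import numpy as np

# Bootstrap view of the local (within-site) variance from a handful of
# replicate samples taken at one site, as in multiple sampling of tree
# bark or moss at a single location (values are made up).
rng = np.random.default_rng(3)
site = np.array([12.1, 9.8, 11.4, 10.6, 12.9])   # 5 replicate concentrations

boot = np.array([rng.choice(site, size=site.size, replace=True).var(ddof=1)
                 for _ in range(5000)])
est = site.var(ddof=1)                     # plug-in local variance
lo, hi = np.percentile(boot, [2.5, 97.5])  # percentile interval
# With only 5 replicates the interval is wide, and the bootstrap
# distribution is biased low by roughly the factor (n - 1) / n
```

The width of `[lo, hi]` relative to `est` is exactly the kind of uncertainty in local determinations that, per the abstract, can obscure judgements of overall survey quality.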
Directory of Open Access Journals (Sweden)
Imane Bebba
2017-08-01
This study aimed to measure and compare the performance of forty-seven Algerian universities using the returns-to-scale models of the Data Envelopment Analysis (DEA) method. To achieve the objective of the study, a set of variables was chosen to represent the dimension of teaching. The variables consisted of three input variables: the total number of students at the undergraduate level, students at the postgraduate level, and the number of permanent professors. The output variable was the total number of students holding degrees at the two levels. Four basic models of the data envelopment analysis method were applied: input-oriented and output-oriented constant returns to scale, and input-oriented and output-oriented variable returns to scale. After the analysis of data, results revealed that eight universities achieved full efficiency according to constant returns to scale in both input and output orientations, seventeen universities achieved full efficiency according to the input-oriented variable-returns-to-scale model, and sixteen universities achieved full efficiency according to the output-oriented variable-returns-to-scale model. Therefore, during performance measurement, the size of the university, competition, financial and infrastructure constraints, and the process of resource allocation within the university should be taken into consideration. Also, multiple input and output variables reflecting the dimensions of teaching, research, and community service should be included when measuring and assessing the performance of Algerian universities, rather than using two variables that do not reflect the actual performance of these universities. Keywords: Performance of Algerian universities, Data envelopment analysis method, Constant returns to scale, Variable returns to scale, Input-orientation, Output-orientation.
Smart, Joan E Hunter; Cumming, Sean P; Sherar, Lauren B; Standage, Martyn; Neville, Helen; Malina, Robert M
2012-01-01
This study tested a mediated effects model of psychological and behavioral adaptation to puberty within the context of physical activity (PA). Biological maturity status, physical self-concept, PA, and health-related quality of life (HRQoL) were assessed in 222 female British year 7 to 9 pupils (mean age = 12.7 years, SD = .8). Structural equation modeling using maximum likelihood estimation and bootstrapping procedures supported the hypothesized model. Maturation status was inversely related to perceptions of sport competence, body attractiveness, and physical condition, and indirectly and inversely related to physical self-worth, PA, and HRQoL. Examination of the bootstrap-generated bias-corrected confidence intervals for the direct and indirect paths suggested that physical self-concept partially mediated the relations between maturity status and PA, and between maturity status and HRQoL. The evidence supports the contention that perceptions of the physical self partially mediate the relations among maturity, PA, and HRQoL in adolescent females.
Siddiquee, Abu Nayem Md. Asraf
A parametric modeling study has been carried out to assess the impact of changes in operating parameters on the performance of a Vanadium Redox Flow Battery (VRFB). The objective of this research is to develop a computer program to predict the dynamic behavior of a VRFB, combining fluid mechanics, reaction kinetics, and an electric circuit model. The computer program was developed using Maple 2015 and calculations were made at different operating parameters. Modeling results show that the discharging time increases from 2.2 hours to 6.7 hours when the concentration of V2+ in the electrolyte increases from 1 M to 3 M. The operation time during the charging cycle decreases from 6.9 hours to 3.3 hours as the applied current increases from 1.85 A to 3.85 A. The modeling results also show that the charging and discharging times increase from 4.5 hours to 8.2 hours as the tank-to-cell ratio increases from 5:1 to 10:1.
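The reported scaling of discharge time with concentration follows from a simple charge-balance estimate; the sketch below uses an assumed tank volume, current, and state-of-charge window (none are from the thesis) and only checks the proportionality, not the thesis's actual numbers:

```python
# Back-of-envelope check: usable charge scales with vanadium concentration,
# so at fixed current the discharge time scales roughly linearly with it.
# All parameter values below are illustrative assumptions.
F = 96485.0          # C/mol, Faraday constant
V_tank = 0.1e-3      # m^3 of electrolyte per half-cell (assumed)
I = 2.5              # A discharge current (assumed)
soc_range = 0.8      # usable state-of-charge window (assumed)

def discharge_hours(conc_mol_per_L):
    n_mol = conc_mol_per_L * 1e3 * V_tank      # mol of active vanadium
    return soc_range * n_mol * F / I / 3600.0  # one electron per ion

t1, t3 = discharge_hours(1.0), discharge_hours(3.0)
# t3 / t1 == 3: a roughly threefold increase with concentration, in line
# with the 2.2 h -> 6.7 h trend the abstract reports
```

The same balance explains the current trend: at fixed charge, operation time is inversely proportional to the applied current.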
Variance decomposition in stochastic simulators.
Le Maître, O P; Knio, O M; Moraes, A
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
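The Sobol-Hoeffding variance decomposition the approach relies on can be illustrated with a minimal pick-freeze estimator on a toy deterministic function; the paper applies the same idea to the standardized Poisson streams driving each reaction channel, and everything here is purely illustrative:

```python
import numpy as np

# Pick-freeze Monte Carlo estimate of first-order Sobol indices.
rng = np.random.default_rng(4)

def f(x):                      # additive toy model: variance splits exactly
    return x[:, 0] + 2.0 * x[:, 1]

N = 200_000
X = rng.normal(size=(N, 2))    # base sample
Xp = rng.normal(size=(N, 2))   # independent resample
fX = f(X)
var = fX.var()

S = []
for i in range(2):
    Xi = Xp.copy()
    Xi[:, i] = X[:, i]         # freeze input i, resample the other input
    # covariance of f(X) and f(Xi) estimates the variance due to input i
    S.append(np.mean(fX * f(Xi)) - fX.mean() * f(Xi).mean())
S = np.array(S) / var
# For x0 + 2*x1 with unit-variance inputs, the first-order indices are
# 1/5 and 4/5, and they sum to one because the model is additive
```

In the simulators studied in the paper, the "inputs" are the independent Poisson processes of the reaction channels, so `S[i]` becomes the variance-based sensitivity of the solution to channel i.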
Variance decomposition in stochastic simulators
Le Maître, O. P.; Knio, O. M.; Moraes, A.
2015-06-01
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
Variance decomposition in stochastic simulators
Energy Technology Data Exchange (ETDEWEB)
Le Maître, O. P., E-mail: olm@limsi.fr [LIMSI-CNRS, UPR 3251, Orsay (France); Knio, O. M., E-mail: knio@duke.edu [Department of Mechanical Engineering and Materials Science, Duke University, Durham, North Carolina 27708 (United States); Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa [King Abdullah University of Science and Technology, Thuwal (Saudi Arabia)
2015-06-28
Variance decomposition in stochastic simulators
Le Maître, O. P.; Knio, O. M.; Moraes, Alvaro
2015-01-01
Data and information needs for WPP testing and component modeling
International Nuclear Information System (INIS)
Kuhn, W.L.
1987-01-01
The modeling task of the Waste Package Program (WPP) is to develop conceptual models that describe the interactions of waste package components with their environment and the interactions among waste package components. The task includes development and maintenance of a database of experimental data, and statistical analyses to fit model coefficients, test the significance of the fits, and propose experimental designs. The modeling task collaborates with experimentalists to apply physicochemical principles to develop the conceptual models, with emphasis on the subsequent mathematical development. The reason for including the modeling task in the predominantly experimental WPP is to keep the modeling of component behavior closely associated with the experimentation. Whenever possible, waste package degradation processes are described in terms of chemical reactions or transport processes. The integration of equations for assumed or calculated repository conditions predicts variations with time in the repository. Within the context of the waste package program, the composition and rate of arrival of brine to the waste package are environmental variables. These define the environment to be simulated or explored during waste package component and interactions testing. The containment period is characterized by rapid changes in temperature, pressure, oxygen fugacity, and salt porosity. Brine migration is expected to be most rapid during this period. The release period is characterized by modest and slowly changing temperatures, high pressure, low oxygen fugacity, and low porosity. The need is to define the scenario within which waste package degradation calculations are to be made and to quantify the rate of arrival and composition of the brine. Appendix contains 4 vugraphs
Sparse Principal Component Analysis in Medical Shape Modeling
DEFF Research Database (Denmark)
Sjöstrand, Karl; Stegmann, Mikkel Bille; Larsen, Rasmus
2006-01-01
Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims … analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of sufficiently small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA …
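The "simple thresholding" baseline that the abstract compares against can be sketched in a few lines: run ordinary PCA via the SVD, zero out sufficiently small loadings, and renormalise. The data and threshold below are invented for illustration; the dedicated SPCA algorithm the paper focuses on is not reproduced here.

```python
import numpy as np

# Toy data: 100 samples, 6 variables, with a shared latent factor
# concentrated in the first three variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
X[:, :3] += rng.normal(size=(100, 1)) * 3.0
X -= X.mean(axis=0)                          # centre the data

# Ordinary PCA via SVD: rows of Vt are the (dense) loading vectors.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = Vt[0]

# Sparse PCA by simple thresholding: zero out small loadings, renormalise.
threshold = 0.2
sparse_pc1 = np.where(np.abs(pc1) > threshold, pc1, 0.0)
sparse_pc1 /= np.linalg.norm(sparse_pc1)

print("dense loadings :", np.round(pc1, 2))
print("sparse loadings:", np.round(sparse_pc1, 2))
```

The appeal of sparse loadings in medical shape modeling is interpretability: each component involves only a few landmarks or variables instead of all of them.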
Modeling cellular networks in fading environments with dominant specular components
AlAmmouri, Ahmad
2016-07-26
Stochastic geometry (SG) has been widely accepted as a fundamental tool for modeling and analyzing cellular networks. However, the fading models used with SG analysis are mainly confined to the simplistic Rayleigh fading, which is extended to the Nakagami-m fading in some special cases. Neither the Rayleigh nor the Nakagami-m model, however, accounts for dominant specular components (DSCs), which may appear in realistic fading channels. In this paper, we present a tractable model for cellular networks with generalized two-ray (GTR) fading channels. The GTR fading explicitly accounts for two DSCs in addition to the diffuse components and offers high flexibility to capture the diverse fading channels that appear in realistic outdoor/indoor wireless communication scenarios. It also encompasses the well-known Rayleigh and Rician fading as special cases. To this end, the prominent effect of DSCs is highlighted in terms of average spectral efficiency. © 2016 IEEE.
Spectral Ambiguity of Allan Variance
Greenhall, C. A.
1996-01-01
We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
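The finite-difference variance under discussion is easy to state concretely. Below is a minimal non-overlapping Allan-variance estimator, AVAR(mτ₀) = ½⟨(ȳ₍k+1₎ − ȳ₍k₎)²⟩ over block averages of m samples; the white-frequency-noise check (AVAR ≈ σ²/m) is a standard textbook relation, not a result of this paper.

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of a fractional-frequency series y
    at an averaging window of m samples."""
    n = len(y) // m
    ybar = y[: n * m].reshape(n, m).mean(axis=1)   # block averages
    d = np.diff(ybar)
    return 0.5 * np.mean(d * d)

# For white frequency noise of unit variance, AVAR(m) ~ 1/m. Knowing AVAR
# at several m constrains -- but, as the paper shows, does not uniquely
# determine -- the underlying spectrum.
rng = np.random.default_rng(1)
y = rng.normal(0.0, 1.0, 100_000)
for m in (1, 10, 100):
    print(f"m={m:4d}  AVAR={allan_variance(y, m):.4e}  expected~{1/m:.4e}")
```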
Cognitive components underpinning the development of model-based learning.
Potter, Tracey C S; Bryce, Nessa V; Hartley, Catherine A
2017-06-01
Reinforcement learning theory distinguishes "model-free" learning, which fosters reflexive repetition of previously rewarded actions, from "model-based" learning, which recruits a mental model of the environment to flexibly select goal-directed actions. Whereas model-free learning is evident across development, recruitment of model-based learning appears to increase with age. However, the cognitive processes underlying the development of model-based learning remain poorly characterized. Here, we examined whether age-related differences in cognitive processes underlying the construction and flexible recruitment of mental models predict developmental increases in model-based choice. In a cohort of participants aged 9-25, we examined whether the abilities to infer sequential regularities in the environment ("statistical learning"), maintain information in an active state ("working memory") and integrate distant concepts to solve problems ("fluid reasoning") predicted age-related improvements in model-based choice. We found that age-related improvements in statistical learning performance did not mediate the relationship between age and model-based choice. Ceiling performance on our working memory assay prevented examination of its contribution to model-based learning. However, age-related improvements in fluid reasoning statistically mediated the developmental increase in the recruitment of a model-based strategy. These findings suggest that gradual development of fluid reasoning may be a critical component process underlying the emergence of model-based learning. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Laeli Nurani
2016-04-01
Full Text Available This study discusses optimal sharia-compliant portfolio analysis using the Mean-Variance Efficient Portfolio (MVEP) model, with candidate stocks selected by Data Envelopment Analysis (DEA) under input constraints (Standard Deviation, Debt Earning Ratio, Book Value per Share, Price to Book Value Ratio) and output constraints (Return, Earning Per Share, Return On Equity, Return On Asset, Net Profit Margin, Price Earning Ratio). The data are stocks listed in the Jakarta Islamic Index (JII) over the period 27 June 2014 - 18 February 2016. Efficiency tests with DEA-CCR and DEA-BCC selected 14 stocks as portfolio candidates: ADRO, ASRI, BSDE, INDF, INTP, ITMG, KLBF, LPKR, LSIP, PGAS, SMGR, SMRA, TLKM, and UNVR. From these 14 stocks, 4 optimal stocks were obtained, with investment weights TLKM (52%), UNVR (7%), LPKR (17%), and INDF (24%), an expected return of 0.000646 (0.06%), and a risk of 0.01389 (1.4%).
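The mean-variance efficient portfolio underlying the MVEP model can be sketched with the classical two-constraint Lagrange solution: minimise w'Σw subject to w'μ = r* and w'1 = 1. The expected returns and covariance below are invented for four hypothetical stocks and are not the study's JII data.

```python
import numpy as np

# Hypothetical inputs (numbers invented, not from the study).
mu = np.array([0.0008, 0.0005, 0.0006, 0.0007])      # expected daily returns
Sigma = np.array([[4.0, 1.2, 0.8, 1.0],
                  [1.2, 3.0, 0.6, 0.9],
                  [0.8, 0.6, 2.5, 0.7],
                  [1.0, 0.9, 0.7, 3.5]]) * 1e-4      # return covariance

# Lagrange conditions give w = Sigma^-1 (lam*mu + gam*1), with (lam, gam)
# solving a 2x2 linear system built from the two constraints.
ones = np.ones(4)
inv = np.linalg.inv(Sigma)
A = np.array([[mu @ inv @ mu, mu @ inv @ ones],
              [ones @ inv @ mu, ones @ inv @ ones]])
r_target = 0.00065
lam, gam = np.linalg.solve(A, np.array([r_target, 1.0]))
w = inv @ (lam * mu + gam * ones)

print("weights:", np.round(w, 3), " sum:", round(float(w.sum()), 6))
print("expected return:", round(float(w @ mu), 6),
      " risk:", round(float(np.sqrt(w @ Sigma @ w)), 6))
```

In the study this optimisation is applied only to the stocks that first pass the DEA efficiency screen, which is what keeps the candidate set small.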
Models for describing the thermal characteristics of building components
DEFF Research Database (Denmark)
Jimenez, M.J.; Madsen, Henrik
2008-01-01
For the analysis of these tests, dynamic analysis models and methods are required. However, a wide variety of models and methods exists, and the problem of choosing the most appropriate approach for each particular case is a non-trivial and interdisciplinary task. Knowledge of a large family of these approaches may therefore be very useful for selecting a suitable approach for each particular case. This paper presents an overview of models that can be applied for modelling the thermal characteristics of buildings and building components using data from outdoor testing. The choice of approach depends, for example, … The characteristics of each type of model are highlighted. Some available software tools for each of the methods described will be mentioned. A case study also demonstrating the difference between linear and nonlinear models is considered.
Formal Model-Driven Engineering: Generating Data and Behavioural Components
Directory of Open Access Journals (Sweden)
Chen-Wei Wang
2012-12-01
Full Text Available Model-driven engineering is the automatic production of software artefacts from abstract models of structure and functionality. By targeting a specific class of system, it is possible to automate aspects of the development process, using model transformations and code generators that encode domain knowledge and implementation strategies. Using this approach, questions of correctness for a complex, software system may be answered through analysis of abstract models of lower complexity, under the assumption that the transformations and generators employed are themselves correct. This paper shows how formal techniques can be used to establish the correctness of model transformations used in the generation of software components from precise object models. The source language is based upon existing, formal techniques; the target language is the widely-used SQL notation for database programming. Correctness is established by giving comparable, relational semantics to both languages, and checking that the transformations are semantics-preserving.
DEFF Research Database (Denmark)
Andersen, Anders Holst; Korsgaard, Inge Riis; Jensen, Just
2002-01-01
In this paper, we consider selection based on the best predictor of animal additive genetic values in Gaussian linear mixed models, threshold models, Poisson mixed models, and log normal frailty models for survival data (including models with time-dependent covariates with associated fixed or random effects). In the different models, expressions are given (when these can be found; otherwise unbiased estimates are given) for prediction error variance, accuracy of selection and expected response to selection on the additive genetic scale and on the observed scale. The expressions given for non-Gaussian traits are generalisations of the well-known formulas for Gaussian traits, and reflect, for Poisson mixed models and frailty models for survival data, the hierarchical structure of the models. In general the ratio of the additive genetic variance to the total variance in the Gaussian part
Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo
Martinez, Josue G.
2010-06-01
The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented.
A minimal model for two-component dark matter
International Nuclear Information System (INIS)
Esch, Sonja; Klasen, Michael; Yaguna, Carlos E.
2014-01-01
We propose and study a new minimal model for two-component dark matter. The model contains only three additional fields, one fermion and two scalars, all singlets under the Standard Model gauge group. Two of these fields, one fermion and one scalar, are odd under a Z_2 symmetry that renders them simultaneously stable. Thus, both particles contribute to the observed dark matter density. This model resembles the union of the singlet scalar and the singlet fermionic models but it contains some new features of its own. We analyze in some detail its dark matter phenomenology. Regarding the relic density, the main novelty is the possible annihilation of one dark matter particle into the other, which can affect the predicted relic density in a significant way. Regarding dark matter detection, we identify a new contribution that can lead either to an enhancement or to a suppression of the spin-independent cross section for the scalar dark matter particle. Finally, we define a set of five benchmark models compatible with all present bounds and examine their direct detection prospects at planned experiments. A generic feature of this model is that both particles give rise to observable signals in 1-ton direct detection experiments. In fact, such experiments will be able to probe even a subdominant dark matter component at the percent level.
Evaluation of the RELAP5/MOD3 multidimensional component model
International Nuclear Information System (INIS)
Tomlinson, E.T.; Rens, T.E.; Coffield, R.D.
1994-01-01
Accurate plenum predictions, which are directly related to the mixing models used, are an important plant modeling consideration because of their impact on basic transient performance calculations for the integrated system. The effect of a plenum is a time shift between inlet and outlet temperature changes for the particular volume. Perfect mixing, in which the total volume interacts instantaneously with the total inlet flow, does not occur, because of effects such as inlet/outlet nozzle jetting, flow stratification, nested vortices within the volume, and the general three-dimensional velocity distribution of the flow field. The time lag between the inlet and outlet flows affects the predicted rate of temperature change experienced by various plant system components, and this in turn affects local component analyses that are sensitive to the rate of temperature change. This study includes a comparison of two-dimensional plenum mixing predictions using CFD-FLOW3D, RELAP5/MOD3 and perfect mixing models. Three different geometries (flat, square and tall) are assessed for scalar transport times using a wide range of inlet velocity and isothermal conditions. In addition, the three geometries were evaluated for low-flow conditions with the inlet flow experiencing a large step temperature decrease. A major conclusion from this study is that the RELAP5/MOD3 multidimensional component model appears to predict plenum mixing adequately for a wide range of thermal-hydraulic conditions representative of plant transients
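The perfect-mixing reference model mentioned above reduces to a first-order lag, dT/dt = (ṁ/ρV)(T_in − T) with time constant τ = ρV/ṁ, which a few lines of explicit Euler integration illustrate. The geometry and flow numbers are invented for illustration, not taken from the RELAP5/CFD-FLOW3D comparisons.

```python
import numpy as np

rho, V, mdot = 1000.0, 2.0, 50.0       # kg/m^3, m^3, kg/s (assumed values)
tau = rho * V / mdot                    # mixing time constant: 40 s
T_in, T0 = 280.0, 300.0                 # inlet step decrease at t = 0 (K)

dt, t_end = 0.1, 200.0
t = np.arange(0.0, t_end, dt)
T = np.empty_like(t)
T[0] = T0
for i in range(1, len(t)):              # explicit Euler integration
    T[i] = T[i-1] + dt * (mdot / (rho * V)) * (T_in - T[i-1])

# A perfectly mixed plenum covers ~63% (1 - 1/e) of the step after one tau;
# a real plenum shows an additional transport delay before responding.
i_tau = int(tau / dt)
frac = (T0 - T[i_tau]) / (T0 - T_in)
print(f"tau = {tau:.0f} s, fraction of step completed at t = tau: {frac:.3f}")
```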
Evaluating fugacity models for trace components in landfill gas
Energy Technology Data Exchange (ETDEWEB)
Shafi, Sophie [Integrated Waste Management Centre, Sustainable Systems Department, Building 61, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedfordshire MK43 0AL (United Kingdom); Sweetman, Andrew [Department of Environmental Science, Lancaster University, Lancaster LA1 4YQ (United Kingdom); Hough, Rupert L. [Integrated Waste Management Centre, Sustainable Systems Department, Building 61, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedfordshire MK43 0AL (United Kingdom); Smith, Richard [Integrated Waste Management Centre, Sustainable Systems Department, Building 61, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedfordshire MK43 0AL (United Kingdom); Rosevear, Alan [Science Group - Waste and Remediation, Environment Agency, Reading RG1 8DQ (United Kingdom); Pollard, Simon J.T. [Integrated Waste Management Centre, Sustainable Systems Department, Building 61, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedfordshire MK43 0AL (United Kingdom)]. E-mail: s.pollard@cranfield.ac.uk
2006-12-15
A fugacity approach was evaluated to reconcile loadings of vinyl chloride (chloroethene), benzene, 1,3-butadiene and trichloroethylene in waste with concentrations observed in landfill gas monitoring studies. An evaluative environment derived from fictitious but realistic properties such as volume, composition, and temperature, constructed with data from the Brogborough landfill (UK) test cells was used to test a fugacity approach to generating the source term for use in landfill gas risk assessment models (e.g. GasSim). SOILVE, a dynamic Level II model adapted here for landfills, showed greatest utility for benzene and 1,3-butadiene, modelled under anaerobic conditions over a 10 year simulation. Modelled concentrations of these components (95 300 µg m⁻³; 43 µg m⁻³) fell within measured ranges observed in gas from landfills (24 300-180 000 µg m⁻³; 20-70 µg m⁻³). This study highlights the need (i) for representative and time-referenced biotransformation data; (ii) to evaluate the partitioning characteristics of organic matter within waste systems and (iii) for a better understanding of the role that gas extraction rate (flux) plays in producing trace component concentrations in landfill gas. - Fugacity for trace component in landfill gas.
Traceable components of terrestrial carbon storage capacity in biogeochemical models.
Xia, Jianyang; Luo, Yiqi; Wang, Ying-Ping; Hararuk, Oleksandra
2013-07-01
Biogeochemical models have been developed to account for more and more processes, making their complex structures difficult to understand and evaluate. Here, we introduce a framework to decompose a complex land model into traceable components based on mutually independent properties of modeled biogeochemical processes. The framework traces modeled ecosystem carbon storage capacity (X_ss) to (i) a product of net primary productivity (NPP) and ecosystem residence time (τ_E). The latter τ_E can be further traced to (ii) baseline carbon residence times (τ'_E), which are usually preset in a model according to vegetation characteristics and soil types, (iii) environmental scalars (ξ), including temperature and water scalars, and (iv) environmental forcings. We applied the framework to the Australian Community Atmosphere Biosphere Land Exchange (CABLE) model to help understand differences in modeled carbon processes among biomes and as influenced by nitrogen processes. With the climate forcings of 1990, modeled evergreen broadleaf forest had the highest NPP among the nine biomes and moderate residence times, leading to a relatively high carbon storage capacity (31.5 kg C m⁻²). Deciduous needle leaf forest had the longest residence time (163.3 years) and low NPP, leading to moderate carbon storage (18.3 kg C m⁻²). The longest τ_E in deciduous needle leaf forest was ascribed to its longest τ'_E (43.6 years) and small ξ (0.14 on litter/soil carbon decay rates). Incorporation of nitrogen processes into the CABLE model decreased X_ss in all biomes via reduced NPP (e.g., -12.1% in shrub land) or decreased τ_E or both. The decreases in τ_E resulted from nitrogen-induced changes in τ'_E (e.g., -26.7% in C3 grassland) through carbon allocation among plant pools and transfers from plant to litter and soil pools. Our framework can be used to facilitate data-model comparisons and model intercomparisons via tracking a few traceable components for all terrestrial carbon
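The traceability chain above, X_ss = NPP × τ_E with the environmental scalar ξ lengthening the baseline residence time, can be illustrated with single-pool arithmetic. All numbers below are hypothetical, not CABLE output, and the single-pool relation τ_E = τ'_E / ξ is a simplification of the per-pool scaling the framework actually uses.

```python
# Hypothetical single-pool numbers for illustration (not CABLE output):
NPP = 1.1                 # net primary productivity, kg C m^-2 yr^-1
tau_baseline = 4.0        # baseline carbon residence time tau'_E, yr
xi = 0.14                 # combined temperature x water scalar on decay rates

# Environmental limitation (xi < 1) slows decay, lengthening residence time;
# steady-state storage capacity is the product of input rate and residence time.
tau_E = tau_baseline / xi
X_ss = NPP * tau_E

print(f"tau_E = {tau_E:.1f} yr, X_ss = {X_ss:.1f} kg C m^-2")
```

The usefulness of the decomposition is that a difference in X_ss between two models (or biomes) can be attributed to NPP, τ'_E, or ξ separately rather than compared as one opaque number.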
Scale modeling flow-induced vibrations of reactor components
International Nuclear Information System (INIS)
Mulcahy, T.M.
1982-06-01
Similitude relationships currently employed in the design of flow-induced vibration scale-model tests of nuclear reactor components are reviewed. Emphasis is given to understanding the origins of the similitude parameters as a basis for discussion of the inevitable distortions which occur in design verification testing of entire reactor systems and in feature testing of individual component designs for the existence of detrimental flow-induced vibration mechanisms. Distortions of similitude parameters made in current test practice are enumerated and selected example tests are described. Also, limitations in the use of specific distortions in model designs are evaluated based on the current understanding of flow-induced vibration mechanisms and structural response
Two-component mixture cure rate model with spline estimated nonparametric components.
Wang, Lu; Du, Pang; Liang, Hua
2012-09-01
In some survival analysis of medical studies, there are often long-term survivors who can be considered as permanently cured. The goals in these studies are to estimate the noncured probability of the whole population and the hazard rate of the susceptible subpopulation. When covariates are present as often happens in practice, to understand covariate effects on the noncured probability and hazard rate is of equal importance. The existing methods are limited to parametric and semiparametric models. We propose a two-component mixture cure rate model with nonparametric forms for both the cure probability and the hazard rate function. Identifiability of the model is guaranteed by an additive assumption that allows no time-covariate interactions in the logarithm of hazard rate. Estimation is carried out by an expectation-maximization algorithm on maximizing a penalized likelihood. For inferential purpose, we apply the Louis formula to obtain point-wise confidence intervals for noncured probability and hazard rate. Asymptotic convergence rates of our function estimates are established. We then evaluate the proposed method by extensive simulations. We analyze the survival data from a melanoma study and find interesting patterns for this study. © 2011, The International Biometric Society.
Modelling raster-based monthly water balance components for Europe
Energy Technology Data Exchange (ETDEWEB)
Ulmen, C.
2000-11-01
The terrestrial runoff component is a comparatively small but sensitive and thus significant quantity in the global energy and water cycle at the interface between landmass and atmosphere. As opposed to soil moisture and evapotranspiration, which critically determine water vapour fluxes and thus water and energy transport, it can be measured as an integrated quantity over a large area, i.e. the river basin. This peculiarity makes terrestrial runoff ideally suited for the calibration, verification and validation of general circulation models (GCMs). Gauging stations are not homogeneously distributed in space. Moreover, time series are not necessarily continuously measured, nor do they in general have overlapping time periods. To overcome these problems with regard to the regular grid spacing used in GCMs, different methods can be applied to transform irregular data to regular, so-called gridded runoff fields. The present work aims to directly compute the gridded components of the monthly water balance (including gridded runoff fields) for Europe by application of the well-established raster-based macro-scale water balance model WABIMON used at the Federal Institute of Hydrology, Germany. Model calibration and validation is performed by separate examination of 29 representative European catchments. Results indicate a general applicability of the model, delivering reliable overall patterns and integrated quantities on a monthly basis. For time steps of less than two weeks, further research and structural improvements of the model are suggested. (orig.)
Sanz, E.; Voss, C.I.
2006-01-01
Inverse modeling studies employing data collected from the classic Henry seawater intrusion problem give insight into several important aspects of inverse modeling of seawater intrusion problems and effective measurement strategies for estimation of parameters for seawater intrusion. Despite the simplicity of the Henry problem, it embodies the behavior of a typical seawater intrusion situation in a single aquifer. Data collected from the numerical problem solution are employed without added noise in order to focus on the aspects of inverse modeling strategies dictated by the physics of variable-density flow and solute transport during seawater intrusion. Covariances of model parameters that can be estimated are strongly dependent on the physics. The insights gained from this type of analysis may be directly applied to field problems in the presence of data errors, using standard inverse modeling approaches to deal with uncertainty in data. Covariance analysis of the Henry problem indicates that in order to generally reduce variance of parameter estimates, the ideal places to measure pressure are as far away from the coast as possible, at any depth, and the ideal places to measure concentration are near the bottom of the aquifer between the center of the transition zone and its inland fringe. These observations are located in and near high-sensitivity regions of system parameters, which may be identified in a sensitivity analysis with respect to several parameters. However, both the form of error distribution in the observations and the observation weights impact the spatial sensitivity distributions, and different choices for error distributions or weights can result in significantly different regions of high sensitivity. Thus, in order to design effective sampling networks, the error form and weights must be carefully considered. For the Henry problem, permeability and freshwater inflow can be estimated with low estimation variance from only pressure or only
Introduction to variance estimation
Wolter, Kirk M
2007-01-01
We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who...
Bright, Molly G; Murphy, Kevin
2015-07-01
Noise correction is a critical step towards accurate mapping of resting state BOLD fMRI connectivity. Noise sources related to head motion or physiology are typically modelled by nuisance regressors, and a generalised linear model is applied to regress out the associated signal variance. In this study, we use independent component analysis (ICA) to characterise the data variance typically discarded in this pre-processing stage in a cohort of 12 healthy volunteers. The signal variance removed by 24, 12, 6, or only 3 head motion parameters demonstrated network structure typically associated with functional connectivity, and certain networks were discernable in the variance extracted by as few as 2 physiologic regressors. Simulated nuisance regressors, unrelated to the true data noise, also removed variance with network structure, indicating that any group of regressors that randomly sample variance may remove highly structured "signal" as well as "noise." Furthermore, to support this we demonstrate that random sampling of the original data variance continues to exhibit robust network structure, even when as few as 10% of the original volumes are considered. Finally, we examine the diminishing returns of increasing the number of nuisance regressors used in pre-processing, showing that excessive use of motion regressors may do little better than chance in removing variance within a functional network. It remains an open challenge to understand the balance between the benefits and confounds of noise correction using nuisance regressors. Copyright © 2015. Published by Elsevier Inc.
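The nuisance-regression step described above, and the paper's point that even random regressors remove structured variance, can be sketched with a least-squares projection on synthetic data (dimensions and the number of regressors are invented for illustration).

```python
import numpy as np

# Regressing nuisance signals out of fMRI-like data with a GLM:
# residual = Y - R @ pinv(R) @ Y removes all variance in the column space
# of the nuisance regressors R -- whether R holds real motion traces or,
# as the paper warns, purely random regressors.
rng = np.random.default_rng(2)
n_vols, n_vox = 200, 500
Y = rng.normal(size=(n_vols, n_vox))     # synthetic "BOLD" data (volumes x voxels)
R = rng.normal(size=(n_vols, 6))         # 6 simulated nuisance regressors

beta = np.linalg.pinv(R) @ Y             # least-squares fit of R to every voxel
Y_clean = Y - R @ beta                   # regress the fitted variance out

removed = 1.0 - Y_clean.var() / Y.var()
print(f"fraction of variance removed by 6 random regressors: {removed:.3f}")
```

For pure noise, k random regressors remove on average about k/n of the variance (here roughly 6/200 = 3%); the paper's concern is that in real data this removed fraction carries network structure.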
Genetic heterogeneity of within-family variance of body weight in Atlantic salmon (Salmo salar).
Sonesson, Anna K; Odegård, Jørgen; Rönnegård, Lars
2013-10-17
Canalization is defined as the stability of a genotype against minor variations in both environment and genetics. Genetic variation in degree of canalization causes heterogeneity of within-family variance. The aims of this study are twofold: (1) quantify genetic heterogeneity of (within-family) residual variance in Atlantic salmon and (2) test whether the observed heterogeneity of (within-family) residual variance can be explained by simple scaling effects. Analysis of body weight in Atlantic salmon using a double hierarchical generalized linear model (DHGLM) revealed substantial heterogeneity of within-family variance. The 95% prediction interval for within-family variance ranged from ~0.4 to 1.2 kg², implying that the within-family variance of the most extreme high families is expected to be approximately three times larger than the extreme low families. For cross-sectional data, DHGLM with an animal mean sub-model resulted in severe bias, while a corresponding sire-dam model was appropriate. Heterogeneity of variance was not sensitive to Box-Cox transformations of phenotypes, which implies that heterogeneity of variance exists beyond what would be expected from simple scaling effects. Substantial heterogeneity of within-family variance was found for body weight in Atlantic salmon. A tendency towards higher variance with higher means (scaling effects) was observed, but heterogeneity of within-family variance existed beyond what could be explained by simple scaling effects. For cross-sectional data, using the animal mean sub-model in the DHGLM resulted in biased estimates of variance components, which differed substantially both from a standard linear mean animal model and a sire-dam DHGLM model. Although genetic differences in canalization were observed, selection for increased canalization is difficult, because there is limited individual information for the variance sub-model, especially when based on cross-sectional data. Furthermore, potential macro
Three-Component Forward Modeling for Transient Electromagnetic Method
Directory of Open Access Journals (Sweden)
Bin Xiong
2010-01-01
Full Text Available In general, only the time derivative of the vertical magnetic field is considered in data interpretation for the transient electromagnetic (TEM) method. However, for surveys in complex geological structures, this conventional technique increasingly fails to meet the demands of field exploration. To improve the precision of integrated TEM interpretation, it is necessary to study three-component forward modeling and inversion. In this paper, a three-component forward algorithm for 2.5D TEM based on independent electric and magnetic fields has been developed. The main advantage of the new scheme is that it reduces the size of the global system matrix to the utmost extent; the matrix is only one fourth the size of that in the conventional algorithm. In order to illustrate the feasibility and usefulness of the present algorithm, TEM responses for several typical geoelectric models produced by loop sources at the air-earth interface are presented. The results of the numerical experiments show that the computation speed of the present scheme is markedly increased and that three-component interpretation can get the most out of the collected data, making it possible to analyze and interpret the spatial characteristics of anomalous bodies more comprehensively.
Xu, Guo; Wing-Keung, Wong; Lixing, Zhu
2013-01-01
This paper investigates the impact of background risk on an investor’s portfolio choice in a mean-VaR, mean-CVaR and mean-variance framework, and analyzes the characterizations of the mean-variance boundary and mean-VaR efficient frontier in the presence of background risk. We also consider the case with a risk-free security.
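The mean-variance side of such an analysis, with a risk-free security but without the paper's background-risk extension, can be sketched via the tangency (maximum-Sharpe) portfolio, whose weights are proportional to Σ⁻¹(μ − r_f). The expected returns, covariance matrix, and risk-free rate below are toy values, not taken from the paper.

```python
import numpy as np

# Toy inputs (assumed for illustration; not from the paper)
mu = np.array([0.08, 0.12, 0.10])            # expected asset returns
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])       # covariance of asset returns
rf = 0.03                                    # risk-free rate

# Tangency (maximum-Sharpe) portfolio: w proportional to Sigma^{-1} (mu - rf)
w = np.linalg.solve(Sigma, mu - rf)
w = w / w.sum()                              # fully invested: weights sum to 1

port_ret = w @ mu
port_sd = np.sqrt(w @ Sigma @ w)
sharpe = (port_ret - rf) / port_sd
```

By construction, no fully invested portfolio of these assets has a higher Sharpe ratio than `w`; the mean-VaR and mean-CVaR frontiers of the paper replace variance by the respective tail-risk measure.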
Energy Technology Data Exchange (ETDEWEB)
Reimus, Paul W [Los Alamos National Laboratory
2010-12-08
A process-oriented modeling approach is implemented to examine the importance of parameter variances, correlation lengths, and especially cross-correlations in contaminant transport predictions over large scales. It is shown that the most important consideration is the correlation between flow rates and retardation processes (e.g., sorption, matrix diffusion) in the system. If flow rates are negatively correlated with retardation factors in systems containing multiple flow pathways, then characterizing these negative correlation(s) may have more impact on reactive transport modeling than microscale information. Such negative correlations are expected in porous-media systems where permeability is negatively correlated with clay content and rock alteration (which are usually associated with increased sorption). Likewise, negative correlations are expected in fractured rocks where permeability is positively correlated with fracture apertures, which in turn are negatively correlated with sorption and matrix diffusion. Parameter variances and correlation lengths are also shown to have important effects on reactive transport predictions, but they are less important than parameter cross-correlations. Microscale information pertaining to contaminant transport has become more readily available as characterization methods and spectroscopic instrumentation have achieved lower detection limits, greater resolution, and better precision. Obtaining detailed mechanistic insights into contaminant-rock-water interactions is becoming a routine practice in characterizing reactive transport processes in groundwater systems (almost necessary for high-profile publications). Unfortunately, a quantitative link between microscale information and flow and transport parameter distributions or cross-correlations has not yet been established. One reason for this is that quantitative microscale information is difficult to obtain in complex, heterogeneous systems. So simple systems that lack the
Integrated modelling of the edge plasma and plasma facing components
International Nuclear Information System (INIS)
Coster, D.P.; Bonnin, X.; Mutzke, A.; Schneider, R.; Warrier, M.
2007-01-01
Modelling of the interaction between the edge plasma and plasma facing components (PFCs) has tended to place more emphasis on either the plasma or the PFCs. Either the PFCs do not change with time and the plasma evolution is studied, or the plasma is assumed to remain static and the detailed interaction of the plasma and the PFCs are examined, with no back-reaction on the plasma taken into consideration. Recent changes to the edge simulation code, SOLPS, now allow for changes in both the plasma and the PFCs to be considered. This has been done by augmenting the code to track the time-development of the properties of plasma facing components (PFCs). Results of standard mixed-materials scenarios (base and redeposited C; Be) are presented
Vandenplas, J; Bastin, C; Gengler, N; Mulder, H A
2013-09-01
Animals that are robust to environmental changes are desirable in the current dairy industry. Genetic differences in micro-environmental sensitivity can be studied through heterogeneity of residual variance between animals. However, residual variance between animals is usually assumed to be homogeneous in traditional genetic evaluations. The aim of this study was to investigate genetic heterogeneity of residual variance by estimating variance components in residual variance for milk yield, somatic cell score, contents in milk (g/dL) of 2 groups of milk fatty acids (i.e., saturated and unsaturated fatty acids), and the content in milk of one individual fatty acid (i.e., oleic acid, C18:1 cis-9), for first-parity Holstein cows in the Walloon Region of Belgium. A total of 146,027 test-day records from 26,887 cows in 747 herds were available. All cows had at least 3 records and a known sire. These sires had at least 10 cows with records, and each herd × test-day had at least 5 cows. The 5 traits were analyzed separately based on fixed lactation curve and random regression test-day models for the mean. Estimation of variance components was performed by iteratively running an expectation-maximization REML algorithm implemented through double hierarchical generalized linear models. Based on fixed lactation curve test-day mean models, heritability of residual variance ranged between 1.01×10⁻³ and 4.17×10⁻³ for all traits. The genetic standard deviation in residual variance (i.e., approximately the genetic coefficient of variation of residual variance) ranged between 0.12 and 0.17. Therefore, some genetic variance in micro-environmental sensitivity existed in the Walloon Holstein dairy cattle for the 5 studied traits. The standard deviations due to herd × test-day and permanent environment in residual variance ranged between 0.36 and 0.45 for the herd × test-day effect and between 0.55 and 0.97 for the permanent environmental effect. Therefore, nongenetic effects also
Validation of consistency of Mendelian sampling variance.
Tyrisevä, A-M; Fikse, W F; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Lidauer, M H
2018-03-01
Experiences from international sire evaluation indicate that the multiple-trait across-country evaluation method is sensitive to changes in genetic variance over time. Top bulls from birth year classes with inflated genetic variance will benefit, hampering reliable ranking of bulls. However, none of the methods available today enable countries to validate their national evaluation models for heterogeneity of genetic variance. We describe a new validation method to fill this gap comprising the following steps: estimating within-year genetic variances using Mendelian sampling and its prediction error variance, fitting a weighted linear regression between the estimates and the years under study, identifying possible outliers, and defining a 95% empirical confidence interval for a possible trend in the estimates. We tested the specificity and sensitivity of the proposed validation method with simulated data using a real data structure. Moderate (M) and small (S) size populations were simulated under 3 scenarios: a control with homogeneous variance and 2 scenarios with yearly increases in phenotypic variance of 2 and 10%, respectively. Results showed that the new method was able to estimate genetic variance accurately enough to detect bias in genetic variance. Under the control scenario, the trend in genetic variance was practically zero in setting M. Testing cows with an average birth year class size of more than 43,000 in setting M showed that tolerance values are needed for both the trend and the outlier tests to detect only cases with a practical effect in larger data sets. Regardless of the magnitude (yearly increases in phenotypic variance of 2 or 10%) of the generated trend, it deviated statistically significantly from zero in all data replicates for both cows and bulls in setting M. In setting S with a mean of 27 bulls in a year class, the sampling error and thus the probability of a false-positive result clearly increased. Still, overall estimated genetic
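The validation steps described above (weighted linear regression of yearly genetic-variance estimates on birth year, followed by a confidence interval for the trend) can be sketched as follows. The yearly estimates and their prediction error variances below are invented for illustration; the real ones would come from Mendelian sampling terms in national evaluation data.

```python
import numpy as np

# Illustrative yearly genetic-variance estimates and their prediction
# error variances (invented values)
years = np.arange(2000, 2010)
est = np.array([1.00, 1.02, 0.99, 1.05, 1.03, 1.08, 1.06, 1.10, 1.12, 1.11])
pev = np.full(len(years), 0.02)

# Weighted linear regression of the estimates on (centered) birth year,
# weights = 1 / prediction error variance
w = 1.0 / pev
X = np.column_stack([np.ones(len(years)), years - years.mean()])
XtWX = X.T @ (w[:, None] * X)
beta = np.linalg.solve(XtWX, X.T @ (w * est))        # [level, trend]

# 95% confidence interval for the trend; an interval excluding zero
# flags heterogeneity of genetic variance over time
resid = est - X @ beta
s2 = (w * resid**2).sum() / (len(years) - 2)
trend_se = np.sqrt(s2 * np.linalg.inv(XtWX)[1, 1])
ci = (beta[1] - 1.96 * trend_se, beta[1] + 1.96 * trend_se)
```

In this toy data set the fitted trend is positive and its interval excludes zero, i.e., the validation would reject homogeneity of genetic variance; the paper additionally screens for outlier years and applies tolerance values before drawing that conclusion.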
The Variance Composition of Firm Growth Rates
Directory of Open Access Journals (Sweden)
Luiz Artur Ledur Brito
2009-04-01
Full Text Available Firms exhibit a wide variability in growth rates. This can be seen as another manifestation of the fact that firms differ from one another in several respects. This study investigated this variability using the variance components technique previously used to decompose the variance of financial performance. The main source of variation in growth rates, responsible for more than 40% of total variance, corresponds to individual, idiosyncratic firm aspects and not to industry, country, or macroeconomic conditions prevailing in specific years. Firm growth, similar to financial performance, is mostly unique to specific firms and not an industry or country related phenomenon. This finding also justifies using growth as an alternative outcome of superior firm resources and as a complementary dimension of competitive advantage. It also links this research with the resource-based view of strategy. Country was the second source of variation, with around 10% of total variance. The analysis was done using the Compustat Global database with 80,320 observations, comprising 13,221 companies in 47 countries and covering the years 1994 to 2002. The study also compared the variance structure of growth to the variance structure of financial performance in the same sample.
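The variance components technique mentioned above can be sketched for a balanced two-way layout (firm × year) using method-of-moments estimators; the study's actual hierarchy also includes industry and country effects and unbalanced data, so this is only a minimal illustration with invented variances.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated balanced firm x year panel of growth rates (invented variances)
n_firms, n_years = 500, 9
firm_sd, year_sd, noise_sd = 2.0, 0.5, 1.5
growth = (rng.normal(0, firm_sd, n_firms)[:, None]
          + rng.normal(0, year_sd, n_years)[None, :]
          + rng.normal(0, noise_sd, (n_firms, n_years)))

# Method-of-moments variance component estimates for a balanced two-way layout
firm_means = growth.mean(axis=1)
year_means = growth.mean(axis=0)
resid = growth - firm_means[:, None] - year_means[None, :] + growth.mean()

var_resid = resid.var() * n_firms * n_years / ((n_firms - 1) * (n_years - 1))
var_firm = firm_means.var(ddof=1) - var_resid / n_years   # firm component
var_year = year_means.var(ddof=1) - var_resid / n_firms   # year component
share_firm = var_firm / (var_firm + var_year + var_resid)
```

Here the idiosyncratic firm component dominates the decomposition, mirroring the qualitative finding of the study that firm effects account for the largest share of growth-rate variance.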
DEFF Research Database (Denmark)
Mäntyniemi, Samu; Uusitalo, Laura; Peltonen, Heikki
2013-01-01
We developed a generic, age-structured, state-space stock assessment model that can be used as a platform for including information elicited from stakeholders. The model tracks the mean size-at-age and then uses it to explain rates of natural and fishing mortality. The fishery selectivity is divided...... to two components, which makes it possible to model the active seeking of the fleet for certain sizes of fish, as well as the selectivity of the gear itself. The model can account for uncertainties that are not currently accounted for in state-of-the-art models for integrated assessments: (i) The form...... of the stock–recruitment function is considered uncertain and is accounted for by using Bayesian model averaging. (ii) In addition to recruitment variation, process variation in natural mortality, growth parameters, and fishing mortality can also be treated as uncertain parameters...
Dynamic Mean-Variance Asset Allocation
Basak, Suleyman; Chabakauri, Georgy
2009-01-01
Mean-variance criteria remain prevalent in multi-period problems, and yet not much is known about their dynamically optimal policies. We provide a fully analytical characterization of the optimal dynamic mean-variance portfolios within a general incomplete-market economy, and recover a simple structure that also inherits several conventional properties of static models. We also identify a probability measure that incorporates intertemporal hedging demands and facilitates much tractability in ...
Portfolio optimization using median-variance approach
Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli
2013-04-01
Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the data are normally distributed, which is not generally true. As an alternative, in this paper we employ the median-variance approach to improve portfolio optimization. This approach caters for both normal and non-normal data distributions. With this representation, we analyze and compare the rate of return and risk between mean-variance and median-variance based portfolios consisting of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach produces a lower risk for each level of return than the mean-variance approach.
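A minimal sketch of the substitution behind the median-variance idea: keep the covariance matrix but replace the mean return vector with the componentwise median when forming frontier weights. The simulated skewed returns below are invented, and the weight rule is the standard Σ⁻¹-times-location direction, not the paper's exact optimization program.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated skewed (log-normal) daily returns for 4 assets, so the normality
# assumption behind classical mean-variance does not hold; values are invented
T, n = 1000, 4
rets = rng.lognormal(mean=0.005, sigma=0.02, size=(T, n)) - 1.0
Sigma = np.cov(rets, rowvar=False)

def frontier_weights(location):
    """Weights proportional to Sigma^{-1} @ location, normalized to sum to 1."""
    w = np.linalg.solve(Sigma, location)
    return w / w.sum()

# Classical mean-variance direction vs. the median-variance alternative
w_mean = frontier_weights(rets.mean(axis=0))
w_median = frontier_weights(np.median(rets, axis=0))

risk_mean = np.sqrt(w_mean @ Sigma @ w_mean)
risk_median = np.sqrt(w_median @ Sigma @ w_median)
```

For skewed returns the mean and median differ, so the two location estimates generally lead to different portfolio weights and risk levels.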
Two-component scattering model and the electron density spectrum
Zhou, A. Z.; Tan, J. Y.; Esamdin, A.; Wu, X. J.
2010-02-01
In this paper, we discuss a rigorous treatment of the refractive scintillation caused by a two-component interstellar scattering medium and a Kolmogorov form of density spectrum. It is assumed that the interstellar scattering medium is composed of a thin-screen interstellar medium (ISM) and an extended interstellar medium. We consider the case that the scattering of the thin screen concentrates in a thin layer represented by a δ function distribution and that the scattering density of the extended irregular medium satisfies the Gaussian distribution. We investigate and develop equations for the flux density structure function corresponding to this two-component ISM geometry in the scattering density distribution and compare our result with the observations. We conclude that the refractive scintillation caused by this two-component ISM scattering gives a more satisfactory explanation for the observed flux density variation than does the single extended medium model. The level of refractive scintillation is strongly sensitive to the distribution of scattering material along the line of sight (LOS). The theoretical modulation indices are comparatively less sensitive to the scattering strength of the thin-screen medium, but they critically depend on the distance from the observer to the thin screen. The logarithmic slope of the structure function is sensitive to the scattering strength of the thin-screen medium, but is relatively insensitive to the thin-screen location. Therefore, the proposed model can be applied to interpret the structure functions of flux density observed in pulsar PSR B2111 + 46 and PSR B0136 + 57. The result suggests that the medium consists of a discontinuous distribution of plasma turbulence embedded in the interstellar medium. Thus our work provides some insight into the distribution of the scattering along the LOS to the pulsar PSR B2111 + 46 and PSR B0136 + 57.
A multi-component evaporation model for beam melting processes
Klassen, Alexander; Forster, Vera E.; Körner, Carolin
2017-02-01
In additive manufacturing using laser or electron beam melting technologies, evaporation losses and changes in chemical composition are known issues when processing alloys with volatile elements. In this paper, a recently described numerical model based on a two-dimensional free surface lattice Boltzmann method is further developed to incorporate the effects of multi-component evaporation. The model takes into account the local melt pool composition during heating and fusion of metal powder. For validation, the titanium alloy Ti-6Al-4V is melted by selective electron beam melting and analysed using mass loss measurements and high-resolution microprobe imaging. Numerically determined evaporation losses and spatial distributions of aluminium compare well with experimental data. Predictions of the melt pool formation in bulk samples provide insight into the competition between the loss of volatile alloying elements from the irradiated surface and their advective redistribution within the molten region.
Flexible Multibody Systems Models Using Composite Materials Components
International Nuclear Information System (INIS)
Neto, Maria Augusta; Ambrósio, Jorge A. C.; Leal, Rogério Pereira
2004-01-01
The use of a multibody methodology to describe the large motion of complex systems that experience structural deformations makes it possible to represent the complete system motion, the relative kinematics between the components involved, the deformation of the structural members and the inertia coupling between the large rigid body motion and the system elastodynamics. In this work, the flexible multibody dynamics formulations of complex models are extended to include elastic components made of composite materials, which may be laminated and anisotropic. The deformation of any structural member must be elastic and linear, when described in a coordinate frame fixed to one or more material points of its domain, regardless of the complexity of its geometry. To achieve the proposed flexible multibody formulation, a finite element model for each flexible body is used. For the beam composite material elements, the section properties are found using an asymptotic procedure that involves a two-dimensional finite element analysis of their cross-section. The equations of motion of the flexible multibody system are solved using an augmented Lagrangian formulation, and the accelerations and velocities are integrated in time using a multi-step multi-order integration algorithm based on the Gear method
Modeling for thermodynamic activities of components in simulated reprocessing solutions
International Nuclear Information System (INIS)
Sasahira, Akira; Hoshikawa, Tadahiro; Kawamura, Fumio
1992-01-01
Analyses of chemical reactions have been widely carried out for soluble fission products encountered in nuclear fuel reprocessing. For detailed analyses of reactions, a prediction of the activity or activity coefficient for nitric acid, water, and several nitrates of fission products is needed. An approach for predicting nitric acid activity was presented earlier. The model, designated the hydration model, does not predict the nitrate activity. It did, however, suggest that the activity of water would be a function of nitric acid activity but not the molar fraction of water. If the activities of nitric acid and water are accurately predicted, the activity of the last component, nitrate, can be calculated using the Gibbs-Duhem relation for chemical potentials. Therefore, in this study, the earlier hydration model was modified to evaluate the water activity more accurately. The modified model was experimentally examined in simulated reprocessing solutions. It is concluded that the modified model was suitable for water activity, but further improvement was needed for the activity evaluation of nitric acid in order to calculate the nitrate activity
Understanding science teacher enhancement programs: Essential components and a model
Spiegel, Samuel Albert
Researchers and practitioners alike recognize that "the national goal that every child in the United States has access to high-quality school education in science and mathematics cannot be realized without the availability of effective professional development of teachers" (Hewson, 1997, p. 16). Further, there is a plethora of reports calling for the improvement of professional development efforts (Guskey & Huberman, 1995; Kyle, 1995; Loucks-Horsley, Hewson, Love, & Stiles, 1997). In this study I analyze a successful 3-year teacher enhancement program, one form of professional development, to: (1) identify essential components of an effective teacher enhancement program; and (2) create a model to identify and articulate the critical issues in designing, implementing, and evaluating teacher enhancement programs. Five primary sources of information were converted into data: (1) exit questionnaires, (2) exit surveys, (3) exit interview transcripts, (4) focus group transcripts, and (5) other artifacts. Additionally, a focus group was used to conduct member checks. Data were analyzed in an iterative process which led to the development of the list of essential components. The components are categorized by three organizers: Structure (e.g., science research experience, a mediator throughout the program), Context (e.g., intensity, collaboration), and Participant Interpretation (e.g., perceived to be "safe" to examine personal beliefs and practices, actively engaged). The model is based on: (1) a 4-year study of a successful teacher enhancement program; (2) an analysis of professional development efforts reported in the literature; and (3) reflective discussions with implementors, evaluators, and participants of professional development programs. The model consists of three perspectives, cognitive, symbolic interaction, and organizational, representing different viewpoints from which to consider issues relevant to the success of a teacher enhancement program. These
Modeling Organic Contaminant Desorption from Municipal Solid Waste Components
Knappe, D. R.; Wu, B.; Barlaz, M. A.
2002-12-01
Approximately 25% of the sites on the National Priority List (NPL) of Superfund are municipal landfills that accepted hazardous waste. Unlined landfills typically result in groundwater contamination, and priority pollutants such as alkylbenzenes are often present. To select cost-effective risk management alternatives, better information on factors controlling the fate of hydrophobic organic contaminants (HOCs) in landfills is required. The objectives of this study were (1) to investigate the effects of HOC aging time, anaerobic sorbent decomposition, and leachate composition on HOC desorption rates, and (2) to simulate HOC desorption rates from polymers and biopolymer composites with suitable diffusion models. Experiments were conducted with individual components of municipal solid waste (MSW) including polyvinyl chloride (PVC), high-density polyethylene (HDPE), newsprint, office paper, and model food and yard waste (rabbit food). Each of the biopolymer composites (office paper, newsprint, rabbit food) was tested in both fresh and anaerobically decomposed form. To determine the effects of aging on alkylbenzene desorption rates, batch desorption tests were performed after sorbents were exposed to toluene for 30 and 250 days in flame-sealed ampules. Desorption tests showed that alkylbenzene desorption rates varied greatly among MSW components (PVC slowest, fresh rabbit food and newsprint fastest). Furthermore, desorption rates decreased as aging time increased. A single-parameter polymer diffusion model successfully described PVC and HDPE desorption data, but it failed to simulate desorption rate data for biopolymer composites. For biopolymer composites, a three-parameter biphasic polymer diffusion model was employed, which successfully simulated both the initial rapid and the subsequent slow desorption of toluene. Toluene desorption rates from MSW mixtures were predicted for typical MSW compositions in the years 1960 and 1997. For the older MSW mixture, which had a
DEFF Research Database (Denmark)
Casas, Isabel; Mao, Xiuping; Veiga, Helena
This study explores the predictive power of new estimators of the equity variance risk premium and conditional variance for future excess stock market returns, economic activity, and financial instability, both during and after the last global financial crisis. These estimators are obtained from...... time-varying coefficient models are the ones showing considerably higher predictive power for stock market returns and financial instability during the financial crisis, suggesting that an extreme volatility period requires models that can adapt quickly to turmoil........ Moreover, a comparison of the overall results reveals that the conditional variance gains predictive power during the global financial crisis period. Furthermore, both the variance risk premium and conditional variance are determined to be predictors of future financial instability, whereas conditional...
Modeling photoionization of aqueous DNA and its components.
Pluhařová, Eva; Slavíček, Petr; Jungwirth, Pavel
2015-05-19
Radiation damage to DNA is usually considered in terms of UVA and UVB radiation. These ultraviolet rays, which are part of the solar spectrum, can indeed cause chemical lesions in DNA, triggered by photoexcitation particularly in the UVB range. Damage can, however, also be caused by higher energy radiation, which can directly ionize the DNA or its immediate surroundings, leading to indirect damage. Thanks to absorption in the atmosphere, the intensity of such ionizing radiation is negligible in the solar spectrum at the surface of Earth. Nevertheless, such an ionizing scenario can become dangerously plausible for astronauts or flight personnel, as well as for persons present at nuclear power plant accidents. On the beneficial side, ionizing radiation is employed as a means for destroying the DNA of cancer cells during radiation therapy. Quantitative information about ionization of DNA and its components is important not only for DNA radiation damage, but also for understanding redox properties of DNA in redox sensing or labeling, as well as charge migration along the double helix in nanoelectronics applications. Until recently, the vast majority of experimental and computational data on DNA ionization pertained to its components in the gas phase, which is far from its native aqueous environment. The situation has, however, changed for the better due to the advent of photoelectron spectroscopy in liquid microjets and its most recent application to photoionization of aqueous nucleosides, nucleotides, and larger DNA fragments. Here, we present a consistent and efficient computational methodology, which allows one to accurately evaluate ionization energies and model photoelectron spectra of aqueous DNA and its individual components. After careful benchmarking, the method based on density functional theory and its time-dependent variant with properly chosen hybrid functionals and a polarizable continuum solvent model provides ionization energies with an accuracy of 0.2-0.3 eV
On combined gravity gradient components modelling for applied geophysics
International Nuclear Information System (INIS)
Veryaskin, Alexey; McRae, Wayne
2008-01-01
Gravity gradiometry research and development has intensified in recent years to the extent that technologies providing a resolution of about 1 eotvos per 1 second average shall likely soon be available for multiple critical applications such as natural resources exploration, oil reservoir monitoring and defence establishment. Much of the content of this paper was composed a decade ago, and only minor modifications were required for the conclusions to be just as applicable today. In this paper we demonstrate how gravity gradient data can be modelled, and show some examples of how gravity gradient data can be combined in order to extract valuable information. In particular, this study demonstrates the importance of two gravity gradient components, Txz and Tyz, which, when processed together, can provide more information on subsurface density contrasts than that derived solely from the vertical gravity gradient (Tzz)
Component vibration of VVER-reactors - diagnostics and modelling
International Nuclear Information System (INIS)
Altstadt, E.; Scheffler, M.; Weiss, F.-P.
1995-01-01
Flow induced vibrations of reactor pressure vessel (RPV) internals (control element and core barrel motions) at VVER-440 reactors have led to the development of dedicated methods for on-line monitoring. These methods require the faults to have reached a certain stage of development before they can be detected. To achieve truly sensitive early detection of mechanical faults of RPV internals, a theoretical vibration model based on finite elements was developed. The model comprises the whole primary circuit, including the steam generators (SG). By means of that model, all eigenfrequencies up to 30 Hz and the corresponding mode shapes were calculated for the normal vibration behaviour. Moreover, the shift of eigenfrequencies and of amplitudes due to the degradation or failure of internal clamping and spring elements could be investigated, showing that recognition of such degradations, even inside the RPV, is possible by purely ex-core vibration measurements. True diagnostics, that is, identification of the failed component, might become possible because different faults influence different and well separated eigenfrequencies. (author)
Variance in binary stellar population synthesis
Breivik, Katelyn; Larson, Shane L.
2016-03-01
In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.
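The run-to-run variance across simulated populations can be sketched with a toy Monte Carlo: draw many "population" realizations from a single fixed model and quantify the spread of a summary statistic. All distributions and thresholds below are invented; a real study would draw from a binary stellar evolution code, not a simple log-normal.

```python
import numpy as np

rng = np.random.default_rng(7)

# Rapid Monte Carlo sketch: many toy "populations" from one fixed binary
# model; quantify run-to-run variance of a summary statistic (all numbers
# are invented for illustration)
n_pops, n_binaries = 2000, 5000
log10_period = rng.normal(2.0, 0.5, size=(n_pops, n_binaries))

# Summary statistic per realization: count of short-period binaries
counts = (log10_period < 1.5).sum(axis=1)
mean_count = counts.mean()
sd_count = counts.std(ddof=1)   # simulation variance across realizations
```

The spread `sd_count` is exactly the kind of between-realization variance that a single high-fidelity simulation cannot reveal, which motivates simulating thousands of populations.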
Estimating quadratic variation using realized variance
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Shephard, N.
2002-01-01
with a rather general SV model - which is a special case of the semimartingale model. Then QV is integrated variance and we can derive the asymptotic distribution of the RV and its rate of convergence. These results do not require us to specify a model for either the drift or volatility functions, although we...... have to impose some weak regularity assumptions. We illustrate the use of the limit theory on some exchange rate data and some stock data. We show that even with large values of M the RV is sometimes a quite noisy estimator of integrated variance. Copyright © 2002 John Wiley & Sons, Ltd....
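The realized variance estimator itself is simple: sum the squared high-frequency returns over the day. The sketch below uses a constant-volatility toy process with a known integrated variance (all numbers invented), so the estimation error can be seen directly; for a true SV model the target would be the integrated variance path.

```python
import numpy as np

rng = np.random.default_rng(42)

# One simulated trading day: M intraday returns from a constant-volatility
# process with known integrated variance (toy values)
M = 288
iv = 1e-4                                         # integrated variance
r = rng.normal(0.0, np.sqrt(iv / M), size=M)      # high-frequency returns

# Realized variance: sum of squared intraday returns, a consistent but
# noisy estimator of quadratic variation / integrated variance
rv = np.sum(r**2)
rel_error = abs(rv - iv) / iv
```

Even at M = 288 the estimator carries sampling noise of order sqrt(2/M) in relative terms, which matches the abstract's observation that RV can be quite noisy even for large M.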
Directory of Open Access Journals (Sweden)
HEYDER DINIZ SILVA
2000-01-01
(fourth analysis. The results showed that the intrablock model with adjusted treatments must be used in lattice experiments to estimate variance components whenever the relative efficiency of the lattice design, relative to the randomized complete block design, is greater than 100%; otherwise, the randomized complete block design model must be used. The fourth alternative of analysis is not recommended in either situation.
BWR Refill-Reflood Program, Task 4.7 - model development: TRAC-BWR component models
International Nuclear Information System (INIS)
Cheung, Y.K.; Parameswaran, V.; Shaug, J.C.
1983-09-01
TRAC (Transient Reactor Analysis Code) is a computer code for best-estimate analysis for the thermal hydraulic conditions in a reactor system. The development and assessment of the BWR component models developed under the Refill/Reflood Program that are necessary to structure a BWR-version of TRAC are described in this report. These component models are the jet pump, steam separator, steam dryer, two-phase level tracking model, and upper-plenum mixing model. These models have been implemented into TRAC-B02. Also a single-channel option has been developed for individual fuel-channel analysis following a system-response calculation
Stochastic Models of Defects in Wind Turbine Drivetrain Components
DEFF Research Database (Denmark)
Rafsanjani, Hesam Mirzaei; Sørensen, John Dalsgaard
2013-01-01
The drivetrain in a wind turbine nacelle typically consists of a variety of heavily loaded components, like the main shaft, bearings, gearbox and generator. The variations in environmental load challenge the performance of all the components of the drivetrain. Failure of each of these components...
Exploring a minimal two-component p53 model
International Nuclear Information System (INIS)
Sun, Tingzhe; Zhu, Feng; Shen, Pingping; Yuan, Ruoshi; Xu, Wei
2010-01-01
The tumor suppressor p53 coordinates many attributes of cellular processes via interlocked feedback loops. To understand the biological implications of feedback loops in a p53 system, a two-component model which encompasses essential feedback loops was constructed and further explored. Diverse bifurcation properties, such as bistability and oscillation, emerge by manipulating the feedback strength. The p53-mediated MDM2 induction dictates the bifurcation patterns. We first identified irradiation dichotomy in p53 models and further proposed that bistability and oscillation can behave in a coordinated manner. Further sensitivity analysis revealed that p53 basal production and MDM2-mediated p53 degradation, which are central to cellular control, are the most sensitive processes. Also, we identified that the much larger variations in the amplitude of p53 pulses observed in experiments can be derived from overall amplitude parameter sensitivity. The combined approach of bifurcation analysis, stochastic simulation and sampling-based sensitivity analysis not only gives crucial insights into the dynamics of the p53 system, but also creates a fertile ground for understanding the regulatory patterns of other biological networks
Modeling and validation of existing VAV system components
Energy Technology Data Exchange (ETDEWEB)
Nassif, N.; Kajl, S.; Sabourin, R. [Ecole de Technologie Superieure, Montreal, PQ (Canada)
2004-07-01
The optimization of supervisory control strategies and local-loop controllers can improve the performance of HVAC (heating, ventilating, air-conditioning) systems. In this study, the component model of the fan, the damper and the cooling coil were developed and validated against monitored data of an existing variable air volume (VAV) system installed at Montreal's Ecole de Technologie Superieure. The measured variables that influence energy use in individual HVAC models included: (1) outdoor and return air temperature and relative humidity, (2) supply air and water temperatures, (3) zone airflow rates, (4) supply duct, outlet fan, mixing plenum static pressures, (5) fan speed, and (6) minimum and principal damper and cooling and heating coil valve positions. The additional variables that were considered, but not measured were: (1) fan and outdoor airflow rate, (2) inlet and outlet cooling coil relative humidity, and (3) liquid flow rate through the heating or cooling coils. The paper demonstrates the challenges of the validation process when monitored data of existing VAV systems are used. 7 refs., 11 figs.
Parameter estimation of component reliability models in PSA model of Krsko NPP
International Nuclear Information System (INIS)
Jordan Cizelj, R.; Vrbanic, I.
2001-01-01
In the paper, the uncertainty analysis of component reliability models for independent failures is shown. The present approach for parameter estimation of component reliability models in NPP Krsko is presented. Mathematical approaches for different types of uncertainty analyses are introduced and used in accordance with some predisposed requirements. Results of the uncertainty analyses are shown in an example for time-related components. Bayesian estimation with numerical estimation of the posterior, which can be approximated with an appropriate probability distribution (in this paper, a lognormal distribution), proved to be the most appropriate uncertainty analysis. (author)
Shestakova, Tatiana A; Aguilera, Mònica; Ferrio, Juan Pedro; Gutiérrez, Emilia; Voltas, Jordi
2014-08-01
Identifying how physiological responses are structured across environmental gradients is critical to understanding in what manner ecological factors determine tree performance. Here, we investigated the spatiotemporal patterns of signal strength of carbon isotope discrimination (Δ(13)C) and oxygen isotope composition (δ(18)O) for three deciduous oaks (Quercus faginea (Lam.), Q. humilis Mill. and Q. petraea (Matt.) Liebl.) and one evergreen oak (Q. ilex L.) co-occurring in Mediterranean forests along an aridity gradient. We hypothesized that contrasting strategies in response to drought would lead to differential climate sensitivities between functional groups. Such differential sensitivities could result in a contrasting imprint on stable isotopes, depending on whether the spatial or temporal organization of tree-ring signals was analysed. To test these hypotheses, we proposed a mixed modelling framework to group isotopic records into potentially homogeneous subsets according to taxonomic or geographical criteria. To this end, carbon and oxygen isotopes were modelled through different variance-covariance structures for the variability among years (at the temporal level) or sites (at the spatial level). Signal-strength parameters were estimated from the outcome of selected models. We found striking differences between deciduous and evergreen oaks in the organization of their temporal and spatial signals. Therefore, the relationships with climate were examined independently for each functional group. While Q. ilex exhibited a large spatial dependence of isotopic signals on the temperature regime, deciduous oaks showed a greater dependence on precipitation, confirming their higher susceptibility to drought. Such contrasting responses to drought among oak types were also observed at the temporal level (interannual variability), with stronger associations with growing-season water availability in deciduous oaks. Thus, our results indicate that Mediterranean deciduous
Directory of Open Access Journals (Sweden)
Jensen Just
2002-05-01
Full Text Available Abstract In this paper, we consider selection based on the best predictor of animal additive genetic values in Gaussian linear mixed models, threshold models, Poisson mixed models, and log-normal frailty models for survival data (including models with time-dependent covariates with associated fixed or random effects). In the different models, expressions are given (when these can be found; otherwise unbiased estimates are given) for prediction error variance, accuracy of selection and expected response to selection on the additive genetic scale and on the observed scale. The expressions given for non-Gaussian traits are generalisations of the well-known formulas for Gaussian traits, and reflect, for Poisson mixed models and frailty models for survival data, the hierarchical structure of the models. In general, the ratio of the additive genetic variance to the total variance in the Gaussian part of the model (heritability on the normally distributed level of the model, or a generalised version of heritability) plays a central role in these formulas.
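The central ratio the abstract mentions, heritability, and the textbook response-to-selection formula it feeds into can be sketched as follows (a generic Gaussian-case illustration with made-up variance values, not the paper's non-Gaussian generalisations):

```python
def heritability(var_additive: float, var_residual: float) -> float:
    """Narrow-sense heritability: additive genetic variance divided by
    total phenotypic variance in the Gaussian part of the model."""
    return var_additive / (var_additive + var_residual)

def expected_response(h2: float, selection_differential: float) -> float:
    """Breeder's equation: expected response to selection R = h^2 * S."""
    return h2 * selection_differential

# illustrative numbers: sigma^2_a = 30, sigma^2_e = 70 gives h^2 = 0.3
h2 = heritability(30.0, 70.0)
response = expected_response(h2, 10.0)   # selecting parents 10 units above mean
```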
Krishnan, M.; Bhowmik, B.; Hazra, B.; Pakrashi, V.
2018-02-01
In this paper, a novel baseline-free approach for continuous online damage detection of multi-degree-of-freedom vibrating structures using Recursive Principal Component Analysis (RPCA) in conjunction with Time Varying Auto-Regressive (TVAR) modeling is proposed. In this method, the acceleration data is used to obtain recursive proper orthogonal components online using the rank-one perturbation method, followed by TVAR modeling of the first transformed response, to detect the change in the dynamic behavior of the vibrating system from its pristine state to contiguous linear/non-linear states that indicate damage. Most of the works available in the literature deal with algorithms that require windowing of the gathered data owing to their data-driven nature, which renders them ineffective for online implementation. Algorithms focused on mathematically consistent recursive techniques in a rigorous theoretical framework of structural damage detection are missing, which motivates the development of the present framework, which is amenable to online implementation and can be utilized alongside a suite of experimental and numerical investigations. The RPCA algorithm iterates the eigenvector and eigenvalue estimates for the sample covariance matrix and each new data point at successive time instants, using the rank-one perturbation method. TVAR modeling on the principal component explaining maximum variance is utilized, and the damage is identified by tracking the TVAR coefficients. This eliminates the need for offline post-processing and facilitates online damage detection, especially when applied to streaming data, without requiring any baseline data. Numerical simulations performed on a 5-dof nonlinear system under white noise excitation and El Centro (the 1940 Imperial Valley earthquake) excitation, for different damage scenarios, demonstrate the robustness of the proposed algorithm. The method is further validated on results obtained from case studies involving
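The recursive ingredient can be illustrated with a minimal sketch: a running covariance updated one observation at a time (Welford-style), whose eigendecomposition gives online principal components. The paper's actual algorithm perturbs the eigenpairs directly via the rank-one perturbation method; this is only the simplest recursive analogue:

```python
import numpy as np

def running_covariance(stream):
    """Recursively update mean and covariance one sample at a time.
    Eigendecomposition of the result yields online principal components.
    (Sketch only; the paper's RPCA updates eigenpairs directly.)"""
    mean, m2, n = None, None, 0
    for x in stream:
        x = np.asarray(x, dtype=float)
        n += 1
        if mean is None:
            mean = x.copy()
            m2 = np.zeros((x.size, x.size))
            continue
        delta = x - mean
        mean += delta / n
        m2 += np.outer(delta, x - mean)   # rank-one update of scatter matrix
    return mean, m2 / (n - 1)             # unbiased sample covariance

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 3))
mean, cov = running_covariance(data)
eigvals, eigvecs = np.linalg.eigh(cov)    # principal directions, ascending order
```

The last eigenvector (largest eigenvalue) is the component "explaining maximum variance" that the abstract feeds into TVAR modeling.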
Variance decomposition-based sensitivity analysis via neural networks
International Nuclear Information System (INIS)
Marseguerra, Marzio; Masini, Riccardo; Zio, Enrico; Cojazzi, Giacomo
2003-01-01
This paper illustrates a method for efficiently performing multiparametric sensitivity analyses of the reliability model of a given system. These analyses are of great importance for the identification of critical components in highly hazardous plants, such as nuclear or chemical ones, thus providing significant insights for their risk-based design and management. The technique used to quantify the importance of a component parameter with respect to the system model is based on a classical decomposition of the variance. When the model of the system is realistically complicated (e.g. by aging, stand-by, maintenance, etc.), its analytical evaluation soon becomes impractical and one is better off resorting to Monte Carlo simulation techniques which, however, can be computationally burdensome. Therefore, since the variance decomposition method requires a large number of system evaluations, each one to be performed by Monte Carlo, the need arises for possibly substituting the Monte Carlo simulation model with a fast, approximated algorithm. Here we investigate an approach which makes use of neural networks, appropriately trained on the results of a Monte Carlo system reliability/availability evaluation, to quickly provide, with reasonable approximation, the values of the quantities of interest for the sensitivity analyses. The work was a joint effort between the Department of Nuclear Engineering of the Polytechnic of Milan, Italy, and the Institute for Systems, Informatics and Safety, Nuclear Safety Unit of the Joint Research Centre in Ispra, Italy, which sponsored the project
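The classical variance decomposition underlying this kind of analysis can be sketched with the standard pick-freeze Monte Carlo estimator of first-order Sobol indices, S_i = V(E[Y|X_i]) / V(Y). This is a generic illustration with a toy additive model; the paper's contribution is to replace the expensive `model` calls with a trained neural network surrogate:

```python
import numpy as np

def first_order_sobol(model, n, dim, rng):
    """Pick-freeze (Saltelli-type) Monte Carlo estimate of first-order
    variance-based sensitivity indices for independent U(0,1) inputs."""
    a = rng.uniform(size=(n, dim))
    b = rng.uniform(size=(n, dim))
    fa, fb = model(a), model(b)
    var = np.var(np.concatenate([fa, fb]))
    indices = []
    for i in range(dim):
        ab = a.copy()
        ab[:, i] = b[:, i]                 # replace only the i-th column
        fab = model(ab)
        indices.append(np.mean(fb * (fab - fa)) / var)
    return np.array(indices)

# toy additive model y = x0 + 2*x1: analytically S0 = 0.2, S1 = 0.8
model = lambda x: x[:, 0] + 2.0 * x[:, 1]
s = first_order_sobol(model, 100_000, 2, np.random.default_rng(1))
```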
Decomposition of Variance for Spatial Cox Processes.
Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus
2013-03-01
Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log-linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.
Connected Component Model for Multi-Object Tracking.
He, Zhenyu; Li, Xin; You, Xinge; Tao, Dacheng; Tang, Yuan Yan
2016-08-01
In multi-object tracking, it is critical to explore the data associations by exploiting the temporal information from a sequence of frames rather than the information from the adjacent two frames. Since straightforwardly obtaining data associations from multi-frames is an NP-hard multi-dimensional assignment (MDA) problem, most existing methods solve this MDA problem by either developing complicated approximate algorithms, or simplifying MDA as a 2D assignment problem based upon the information extracted only from adjacent frames. In this paper, we show that the relation between associations of two observations is the equivalence relation in the data association problem, based on the spatial-temporal constraint that the trajectories of different objects must be disjoint. Therefore, the MDA problem can be equivalently divided into independent subproblems by equivalence partitioning. In contrast to existing works for solving the MDA problem, we develop a connected component model (CCM) by exploiting the constraints of the data association and the equivalence relation on the constraints. Based upon CCM, we can efficiently obtain the global solution of the MDA problem for multi-object tracking by optimizing a sequence of independent data association subproblems. Experiments on challenging public data sets demonstrate that our algorithm outperforms the state-of-the-art approaches.
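The equivalence-partitioning step described above can be illustrated with a standard union-find pass: observations joined by any candidate association fall into one connected component, and each component is an independent assignment subproblem that can be solved separately. This is a generic sketch of the partitioning idea, not the authors' full CCM:

```python
def connected_components(n_obs, candidate_links):
    """Partition observation indices 0..n_obs-1 into connected components
    induced by candidate association links, using union-find."""
    parent = list(range(n_obs))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    for a, b in candidate_links:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb                 # merge the two components

    groups = {}
    for i in range(n_obs):
        groups.setdefault(find(i), []).append(i)
    return sorted(sorted(g) for g in groups.values())

# links (0,1), (1,2), (3,4) give two independent subproblems
parts = connected_components(5, [(0, 1), (1, 2), (3, 4)])
```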
Fundamentals of exploratory analysis of variance
Hoaglin, David C; Tukey, John W
2009-01-01
The analysis of variance is presented as an exploratory component of data analysis, while retaining the customary least squares fitting methods. Balanced data layouts are used to reveal key ideas and techniques for exploration. The approach emphasizes both the individual observations and the separate parts that the analysis produces. Most chapters include exercises, and the appendices give selected percentage points of the Gaussian, t, F, chi-squared and studentized range distributions.
Two-component network model in voice identification technologies
Directory of Open Access Journals (Sweden)
Edita K. Kuular
2018-03-01
Full Text Available Among the most important parameters of biometric systems with voice modalities that determine their effectiveness, along with reliability and noise immunity, is the speed of identification and verification of a person. This parameter is especially critical when processing large-scale voice databases in real time. Many research studies in this area aim at developing new, and improving existing, algorithms for representing and processing voice records to ensure high performance of voice biometric systems. Here it seems promising to apply a modern approach based on the complex network platform, suited to massive problems with a large number of elements whose interrelationships must be taken into account. Thus, some known works, in solving problems of analysis and recognition of faces from photographs, transform images into complex networks for subsequent processing by standard techniques. One of the first applications of complex networks to sound series (musical and speech analysis) is the description of frequency characteristics by constructing network models, i.e., converting the series into networks. On the network ontology platform, a previously proposed technique of audio information representation, aimed at its automatic analysis and speaker recognition, has been developed. This implies converting information into the form of an associative semantic (cognitive) network structure with both amplitude and frequency components. Two speaker exemplars have been recorded and transformed into pertinent networks, with subsequent comparison of their topological metrics. The set of topological metrics for each of the network models (amplitude and frequency) is a vector, and together they combine into a matrix: a digital "network" voiceprint. The proposed network approach, with its sensitivity to personal conditions (physiological, psychological, emotional), might be useful not only for person identification
Using variance structure to quantify responses to perturbation in fish catches
Vidal, Tiffany E.; Irwin, Brian J.; Wagner, Tyler; Rudstam, Lars G.; Jackson, James R.; Bence, James R.
2017-01-01
We present a case study evaluation of gill-net catches of Walleye Sander vitreus to assess potential effects of large-scale changes in Oneida Lake, New York, including the disruption of trophic interactions by double-crested cormorants Phalacrocorax auritus and invasive dreissenid mussels. We used the empirical long-term gill-net time series and a negative binomial linear mixed model to partition the variability in catches into spatial and coherent temporal variance components, hypothesizing that variance partitioning can help quantify spatiotemporal variability and determine whether variance structure differs before and after large-scale perturbations. We found that the mean catch and the total variability of catches decreased following perturbation but that not all sampling locations responded in a consistent manner. There was also evidence of some spatial homogenization concurrent with a restructuring of the relative productivity of individual sites. Specifically, offshore sites generally became more productive following the estimated break point in the gill-net time series. These results provide support for the idea that variance structure is responsive to large-scale perturbations; therefore, variance components have potential utility as statistical indicators of response to a changing environment more broadly. The modeling approach described herein is flexible and would be transferable to other systems and metrics. For example, variance partitioning could be used to examine responses to alternative management regimes, to compare variability across physiographic regions, and to describe differences among climate zones. Understanding how individual variance components respond to perturbation may yield finer-scale insights into ecological shifts than focusing on patterns in the mean responses or total variability alone.
Variance function estimation for immunoassays
International Nuclear Information System (INIS)
Raab, G.M.; Thompson, R.; McKenzie, I.
1980-01-01
A computer program is described which implements a recently described, modified likelihood method of determining an appropriate weighting function to use when fitting immunoassay dose-response curves. The relationship between the variance of the response and its mean value is assumed to have an exponential form, and the best fit to this model is determined from the within-set variability of many small sets of repeated measurements. The program estimates the parameter of the exponential function with its estimated standard error, and tests the fit of the experimental data to the proposed model. Output options include a list of the actual and fitted standard deviation of the set of responses, a plot of actual and fitted standard deviation against the mean response, and an ordered list of the 10 sets of data with the largest ratios of actual to fitted standard deviation. The program has been designed for a laboratory user without computing or statistical expertise. The test-of-fit has proved valuable for identifying outlying responses, which may be excluded from further analysis by being set to negative values in the input file. (Auth.)
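The mean-variance fit can be sketched numerically. Here we assume a power-law form Var ≈ a·mean^b (one common parameterisation of such mean-variance relationships in immunoassay work) fitted by ordinary least squares on the log-log scale; the described program's exact functional form and its modified likelihood method differ, so this is only an illustrative stand-in:

```python
import numpy as np

def fit_variance_function(replicate_sets):
    """Fit Var(y) = a * mean(y)**b from the within-set variability of many
    small sets of repeated measurements, by OLS on the log-log scale.
    (Sketch; the program in the abstract uses a modified likelihood method.)"""
    means = np.array([np.mean(s) for s in replicate_sets])
    variances = np.array([np.var(s, ddof=1) for s in replicate_sets])
    b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
    return np.exp(log_a), b

# simulated assay with constant 10% CV, i.e. Var = 0.01 * mean**2,
# observed through 300 small sets of 5 replicates each
rng = np.random.default_rng(2)
sets = [rng.normal(m, 0.1 * m, size=5) for m in np.geomspace(1.0, 100.0, 300)]
a_hat, b_hat = fit_variance_function(sets)
```

The fitted exponent `b_hat` recovers the simulated value 2; the intercept is noisier because log within-set variances from 4 degrees of freedom are biased, which is one reason a likelihood approach is preferable in practice.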
Model validation and calibration based on component functions of model output
International Nuclear Information System (INIS)
Wu, Danqing; Lu, Zhenzhou; Wang, Yanping; Cheng, Lei
2015-01-01
The target in this work is to validate the component functions of model output between physical observation and computational model with the area metric. Based on the theory of high dimensional model representations (HDMR) of independent input variables, conditional expectations are component functions of model output, and the conditional expectations reflect partial information of model output. Therefore, the model validation of conditional expectations tells the discrepancy between the partial information of the computational model output and that of the observations. Then a calibration of the conditional expectations is carried out to reduce the value of the model validation metric. After that, the model validation metric of model output is recalculated with the calibrated model parameters, and the result shows that a reduction of the discrepancy in the conditional expectations can help decrease the difference in model output. At last, several examples are employed to demonstrate the rationality and necessity of the methodology in cases of both single and multiple validation sites. - Highlights: • A validation metric of conditional expectations of model output is proposed. • HDMR explains the relationship of conditional expectations and model output. • An improved approach of parameter calibration updates the computational models. • Validation and calibration are applied at single and multiple sites. • Validation and calibration show superiority over existing methods
Treur, M.; Postma, M.
2014-01-01
Objectives: Patient-level simulation models provide increased flexibility to overcome the limitations of cohort-based approaches in health-economic analysis. However, computational requirements of reaching convergence is a notorious barrier. The objective was to assess the impact of using
An ontology for component-based models of water resource systems
Elag, Mostafa; Goodall, Jonathan L.
2013-08-01
Component-based modeling is an approach for simulating water resource systems where a model is composed of a set of components, each with a defined modeling objective, interlinked through data exchanges. Component-based modeling frameworks are used within the hydrologic, atmospheric, and earth surface dynamics modeling communities. While these efforts have been advancing, it has become clear that the water resources modeling community in particular, and arguably the larger earth science modeling community as well, faces a challenge of fully and precisely defining the metadata for model components. The lack of a unified framework for model component metadata limits interoperability between modeling communities and the reuse of models across modeling frameworks due to ambiguity about the model and its capabilities. To address this need, we propose an ontology for water resources model components that describes core concepts and relationships using the Web Ontology Language (OWL). The ontology that we present, which is termed the Water Resources Component (WRC) ontology, is meant to serve as a starting point that can be refined over time through engagement by the larger community until a robust knowledge framework for water resource model components is achieved. This paper presents the methodology used to arrive at the WRC ontology, the WRC ontology itself, and examples of how the ontology can aid in component-based water resources modeling by (i) assisting in identifying relevant models, (ii) encouraging proper model coupling, and (iii) facilitating interoperability across earth science modeling frameworks.
Regional sensitivity analysis using revised mean and variance ratio functions
International Nuclear Information System (INIS)
Wei, Pengfei; Lu, Zhenzhou; Ruan, Wenbin; Song, Jingwen
2014-01-01
The variance ratio function, derived from the contribution to sample variance (CSV) plot, is a regional sensitivity index for studying how much the output deviates from the original mean of the model output when the distribution range of one input is reduced, and for measuring the contribution of different distribution ranges of each input to the variance of the model output. In this paper, revised mean and variance ratio functions are developed for quantifying the actual change of the model output mean and variance, respectively, when one reduces the range of one input. The connection between the revised variance ratio function and the original one is derived and discussed. It is shown that, compared with the classical variance ratio function, the revised one is more suitable for evaluating the model output variance due to reduced ranges of model inputs. A Monte Carlo procedure, which needs only a single set of samples, is developed for efficiently computing the revised mean and variance ratio functions. The revised mean and variance ratio functions are compared with the classical ones using the Ishigami function. Finally, they are applied to a planar 10-bar structure
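The single-sample idea can be sketched directly: draw one Monte Carlo sample, then estimate the ratio of restricted to unconditional output variance simply by filtering the rows where the chosen input falls in the reduced range. This illustrates the CSV-style ratio with a toy model; it is not the paper's revised estimators:

```python
import numpy as np

def variance_ratio(x, y, i, lo, hi):
    """Ratio of output variance with input i restricted to [lo, hi] to the
    unconditional output variance, computed by filtering one Monte Carlo
    sample (the reuse-one-sample idea behind CSV-type regional indices)."""
    mask = (x[:, i] >= lo) & (x[:, i] <= hi)
    return np.var(y[mask]) / np.var(y)

rng = np.random.default_rng(3)
x = rng.uniform(size=(200_000, 2))
y = x[:, 0]                                  # output depends on input 0 only
r0 = variance_ratio(x, y, 0, 0.25, 0.75)     # halving the range of the active
                                             # input cuts variance to 1/4
r1 = variance_ratio(x, y, 1, 0.25, 0.75)     # inactive input: ratio stays ~1
```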
Variance Reduction Techniques in Monte Carlo Methods
Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.
2010-01-01
Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the
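One of the classic VRTs, antithetic variates, fits in a few lines: for a monotone integrand, pairing each uniform draw u with 1 − u yields negatively correlated halves whose pair averages have far smaller variance than independent draws of the same budget. This is a generic illustration of the technique, not an example from the chapter:

```python
import numpy as np

def antithetic_estimate(f, n_pairs, rng):
    """Estimate E[f(U)], U ~ Uniform(0,1), with antithetic pairs (u, 1-u).
    Returns the estimate and the per-pair averages for variance checks."""
    u = rng.uniform(size=n_pairs)
    pair_means = 0.5 * (f(u) + f(1.0 - u))   # negatively correlated halves
    return pair_means.mean(), pair_means

rng = np.random.default_rng(4)
est, pairs = antithetic_estimate(np.exp, 100_000, rng)   # E[e^U] = e - 1
plain = np.exp(rng.uniform(size=200_000))                # plain MC, same budget
```

A fair comparison is the variance of one antithetic pair average against Var(f(U))/2, the variance of the average of two independent draws; for exp the antithetic version is over an order of magnitude smaller.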
A probabilistic model for component-based shape synthesis
Kalogerakis, Evangelos; Chaudhuri, Siddhartha; Koller, Daphne; Koltun, Vladlen
2012-01-01
represents probabilistic relationships between properties of shape components, and relates them to learned underlying causes of structural variability within the domain. These causes are treated as latent variables, leading to a compact representation
Means and Variances without Calculus
Kinney, John J.
2005-01-01
This article gives a method of finding discrete approximations to continuous probability density functions and shows examples of its use, allowing students without calculus access to the calculation of means and variances.
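The idea translates directly into a numerical sketch: replace the continuous density by a fine discrete distribution on cell midpoints and apply only the discrete definitions of mean and variance, with no integration. The grid size and the Beta(2, 2) example below are illustrative choices, not taken from the article:

```python
import numpy as np

def discrete_moments(pdf, a, b, n=10_000):
    """Approximate the mean and variance of a continuous density on [a, b]
    via a discrete distribution on n cell midpoints, using only the
    discrete formulas for mean and variance."""
    width = (b - a) / n
    x = a + width * (np.arange(n) + 0.5)   # cell midpoints
    p = pdf(x)
    p = p / p.sum()                        # normalise into probabilities
    mean = (p * x).sum()
    var = (p * (x - mean) ** 2).sum()
    return mean, var

# Beta(2, 2) density 6x(1-x): exact mean 1/2, exact variance 1/20
mean, var = discrete_moments(lambda x: 6.0 * x * (1.0 - x), 0.0, 1.0)
```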
Measurement and Modelling of MIC Components Using Conductive Lithographic Films
Shepherd, P. R.; Taylor, C.; Evans l, P. S. A.; Harrison, D. J.
2001-01-01
Conductive Lithographic Films (CLFs) have previously demonstrated useful properties in printed microwave circuits, combining low cost with high speed of manufacture. In this paper we examine the formation of various passive components via the CLF process, which enables further integration of printed microwave integrated circuits. The printed components include vias, resistors and overlay capacitors, and offer viable alternatives to traditional manufacturing processes for Microwave Integrate...
Bright, Molly G.; Murphy, Kevin
2015-01-01
Noise correction is a critical step towards accurate mapping of resting state BOLD fMRI connectivity. Noise sources related to head motion or physiology are typically modelled by nuisance regressors, and a generalised linear model is applied to regress out the associated signal variance. In this study, we use independent component analysis (ICA) to characterise the data variance typically discarded in this pre-processing stage in a cohort of 12 healthy volunteers. The signal variance removed by 24, 12, 6, or only 3 head motion parameters demonstrated network structure typically associated with functional connectivity, and certain networks were discernable in the variance extracted by as few as 2 physiologic regressors. Simulated nuisance regressors, unrelated to the true data noise, also removed variance with network structure, indicating that any group of regressors that randomly sample variance may remove highly structured “signal” as well as “noise.” Furthermore, to support this we demonstrate that random sampling of the original data variance continues to exhibit robust network structure, even when as few as 10% of the original volumes are considered. Finally, we examine the diminishing returns of increasing the number of nuisance regressors used in pre-processing, showing that excessive use of motion regressors may do little better than chance in removing variance within a functional network. It remains an open challenge to understand the balance between the benefits and confounds of noise correction using nuisance regressors. PMID:25862264
A three-component analytic model of long-term climate change
Pratt, V. R.
2011-12-01
On the premise that fast climate fluctuations up to and including the 11-year solar cycle play a negligible role in long-term climate forecasting, we remove these from the 160-year HADCRUT3 global land-sea temperature record and model the result as the sum of a log-raised-exponential (log(b+exp(t))) and two sine waves of respective periods 56 and 75 years coinciding in phase in 1925. The latter two can be understood equivalently as a 62-year-period "carrier" modulated with a 440-year period that peaked in 1925 and vanished in 1705. This model gives an excellent fit, explaining 98% of the variance (r^2) of long-term climate over the 160 years. We derive the first component as the composition of Arrhenius's 1896 logarithmic dependence of surface temperature on CO2 with Hofmann's 2009 raised-exponential dependence of CO2 on time, but interpret its fit to the data as the net anthropogenic contribution incorporating all greenhouse and aerosol emissions and relevant feedbacks, bearing in mind the rapid growth in both population and technology. The 56-year oscillation matches the largest component of the Atlantic Multidecadal Oscillation, while the 75-year one is near an oscillation often judged to be in the vicinity of 70 years. The expected 1705 cancellation is about two decades earlier than suggested by Gray et al's tree-ring proxy for the AMO during 1567-1990 [Gray GPL 31, L12205]. While there is no consensus on the origin of ocean oscillations, the oscillations in geomagnetic secular variation noted by Nagata and Rimitake in 1963 and Slaucitajs and Winch in 1965, of respective periods 77 years and 61 years, correspond strikingly with our 76-year oscillation and 62-year "carrier." This model has a number of benefits. Simplicity. It is easily explained to a lay audience in response to the frequently voiced concern that the temperature record is poorly correlated with the CO2 record alone. It shows that the transition from natural to anthropogenic influences on long
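The claimed equivalence of the two sinusoids to a modulated carrier is the sum-to-product identity cos A + cos B = 2 cos((A+B)/2) cos((A−B)/2). With equal amplitudes, periods 56 and 75 years, and phases coinciding in 1925, the carrier and modulation periods come out near 64 and 442 years, in the vicinity of the 62- and 440-year figures quoted in the abstract. The following is a numerical check of the identity, not a re-fit of the model:

```python
import numpy as np

# two equal-amplitude sinusoids with phases coinciding in 1925
t = np.linspace(1850.0, 2010.0, 2001)
tau = t - 1925.0
pair = np.cos(2 * np.pi * tau / 56) + np.cos(2 * np.pi * tau / 75)

# equivalent carrier/modulation form from the sum-to-product identity
carrier_period = 2.0 / (1.0 / 56 + 1.0 / 75)      # 8400/131 ≈ 64.1 years
modulation_period = 2.0 / (1.0 / 56 - 1.0 / 75)   # 8400/19 ≈ 442 years
modulated = (2 * np.cos(2 * np.pi * tau / carrier_period)
               * np.cos(2 * np.pi * tau / modulation_period))
```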
Variance swap payoffs, risk premia and extreme market conditions
DEFF Research Database (Denmark)
Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco
This paper estimates the Variance Risk Premium (VRP) directly from synthetic variance swap payoffs. Since variance swap payoffs are highly volatile, we extract the VRP by using signal extraction techniques based on a state-space representation of our model in combination with a simple economic… The latter variables and the VRP generate different return predictability on the major US indices. A factor model is proposed to extract a market VRP which turns out to be priced when considering Fama and French portfolios.
Design and Application of an Ontology for Component-Based Modeling of Water Systems
Elag, M.; Goodall, J. L.
2012-12-01
Many Earth system modeling frameworks have adopted an approach of componentizing models so that a large model can be assembled by linking a set of smaller model components. These model components can then be more easily reused, extended, and maintained by a large group of model developers and end users. While there has been a notable increase in component-based model frameworks in the Earth sciences in recent years, there has been less work on creating framework-agnostic metadata and ontologies for model components. Well defined model component metadata is needed, however, to facilitate sharing, reuse, and interoperability both within and across Earth system modeling frameworks. To address this need, we have designed an ontology for the water resources community named the Water Resources Component (WRC) ontology in order to advance the application of component-based modeling frameworks across water related disciplines. Here we present the design of the WRC ontology and demonstrate its application for integration of model components used in watershed management. First we show how the watershed modeling system Soil and Water Assessment Tool (SWAT) can be decomposed into a set of hydrological and ecological components that adopt the Open Modeling Interface (OpenMI) standard. Then we show how the components can be used to estimate nitrogen losses from land to surface water for the Baltimore Ecosystem study area. Results of this work are (i) a demonstration of how the WRC ontology advances the conceptual integration between components of water related disciplines by handling the semantic and syntactic heterogeneity present when describing components from different disciplines and (ii) an investigation of a methodology by which large models can be decomposed into a set of model components that can be well described by populating metadata according to the WRC ontology.
Directory of Open Access Journals (Sweden)
João Batista Duarte
2001-09-01
Full Text Available This work compares, by simulation, estimates of variance components produced by the ANOVA (analysis of variance), ML (maximum likelihood), REML (restricted maximum likelihood) and MIVQUE(0) (minimum variance quadratic unbiased estimator) methods for the augmented block design with additional treatments (progenies stemming from one or more origins (crosses)). Results showed the relative superiority of the MIVQUE(0) estimation. The ANOVA method, although unbiased, produced estimates with lower precision. The ML and REML methods produced downward-biased estimates of the error variance and upward-biased estimates of the genotypic variances, particularly the ML method and especially in the smaller experiments. Biases of the REML estimates became negligible when progenies were derived from a single cross and experiments were of larger size, with variance ratios above 0.5. This method, however, provided the worst estimates of genotypic variances when progenies were derived from several crosses and the experiments were of small size (n < 120 observations).
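The ANOVA (method-of-moments) estimator being compared above is standard; as a minimal sketch — for a balanced one-way random-effects layout rather than the augmented block design studied in the record — the error and genotypic variance components can be estimated from the within- and between-group mean squares:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_g2, sigma_e2 = 4.0, 1.0      # true genotypic and error variances
g, n, reps = 30, 5, 200            # groups, observations per group, simulations

est_g, est_e = [], []
for _ in range(reps):
    effects = rng.normal(0, np.sqrt(sigma_g2), g)
    y = effects[:, None] + rng.normal(0, np.sqrt(sigma_e2), (g, n))
    group_means = y.mean(axis=1)
    # Within-group mean square estimates the error variance.
    msw = ((y - group_means[:, None]) ** 2).sum() / (g * (n - 1))
    # Between-group mean square mixes both components: E[MSB] = n*sigma_g2 + sigma_e2.
    msb = n * ((group_means - y.mean()) ** 2).sum() / (g - 1)
    est_e.append(msw)
    est_g.append((msb - msw) / n)  # can go negative in small samples
print(round(np.mean(est_e), 2), round(np.mean(est_g), 2))
```

Averaged over replicates the estimates recover the true values, illustrating the unbiasedness the record attributes to ANOVA; its lower precision and the negative-estimate problem are what motivate REML and MIVQUE(0).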
Chiu, Ming Ming; McBride-Chang, Catherine; Lin, Dan
2012-01-01
The authors tested the component model of reading (CMR) among 186,725 fourth grade students from 38 countries (45 regions) on five continents by analyzing the 2006 Progress in International Reading Literacy Study data using measures of ecological (country, family, school, teacher), psychological, and cognitive components. More than 91% of the differences in student difficulty occurred at the country (61%) and classroom (30%) levels (ecological), with less than 9% at the student level (cognitive and psychological). All three components were negatively associated with reading difficulties: cognitive (student's early literacy skills), ecological (family characteristics [socioeconomic status, number of books at home, and attitudes about reading], school characteristics [school climate and resources]), and psychological (students' attitudes about reading, reading self-concept, and being a girl). These results extend the CMR by demonstrating the importance of multiple levels of factors for reading deficits across diverse cultures.
Energy Technology Data Exchange (ETDEWEB)
Fleming, K.; Long, N.; Swindler, A.
2012-05-01
This paper describes the Building Component Library (BCL), the U.S. Department of Energy's (DOE) online repository of building components that can be directly used to create energy models. This comprehensive, searchable library consists of components and measures as well as the metadata which describes them. The library is also designed to allow contributors to easily add new components, providing a continuously growing, standardized list of components for users to draw upon.
New approaches to the modelling of multi-component fuel droplet heating and evaporation
Sazhin, Sergei S
2015-02-25
The previously suggested quasi-discrete model for heating and evaporation of complex multi-component hydrocarbon fuel droplets is described. The dependence of density, viscosity, heat capacity and thermal conductivity of liquid components on carbon numbers n and temperatures is taken into account, as are the effects of temperature gradient and quasi-component diffusion inside droplets. The analysis is based on the Effective Thermal Conductivity/Effective Diffusivity (ETC/ED) model. This model is applied to the analysis of Diesel and gasoline fuel droplet heating and evaporation. The components with relatively close n are replaced by quasi-components with properties calculated as average properties of the a priori defined groups of actual components. Thus the analysis of the heating and evaporation of droplets consisting of many components is replaced with the analysis of the heating and evaporation of droplets consisting of relatively few quasi-components. It is demonstrated that the predictions of the model based on five quasi-components are almost indistinguishable from those of the model based on twenty quasi-components for Diesel fuel droplets, and are very close to those of the model based on thirteen quasi-components for gasoline fuel droplets. It is recommended that, for both Diesel and gasoline spray combustion modelling, the analysis of droplet heating and evaporation be based on as few as five quasi-components.
Ge, Jing; Zhang, Guoping
2015-01-01
Advanced intelligent methodologies could help detect and predict diseases from EEG signals in cases where manual analysis is inefficient or unavailable, for instance in epileptic seizure detection and prediction. The diversity and evolution of epileptic seizures make the underlying disease very difficult to detect and identify. Fortunately, the determinism and nonlinearity in a time series can characterize state changes. A literature review indicates that Delay Vector Variance (DVV) can examine nonlinearity to gain insight into EEG signals, but very limited work has addressed a quantitative DVV approach. Hence, the outcomes of quantitative DVV should be evaluated for detecting epileptic seizures. To develop a new epileptic seizure detection method based on quantitative DVV. This new epileptic seizure detection method employed an improved delay vector variance (IDVV) to extract the nonlinearity value as a distinct feature. A multi-kernel strategy was then proposed for the extreme learning machine (ELM) network to provide precise disease detection and prediction. The nonlinearity is more sensitive than the energy and entropy. An overall recognition accuracy of 87.5% and an overall forecasting accuracy of 75.0% were achieved. The proposed IDVV and multi-kernel ELM based method was feasible and effective for epileptic EEG detection. Hence, the newly proposed method has importance for practical applications.
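The DVV measure itself can be sketched roughly as follows. This is a simplified rendition of delay vector variance — the mean variance of the targets of neighbouring delay vectors at increasing radii, normalised by the overall signal variance — not the authors' improved IDVV.

```python
import numpy as np

def dvv(x, m=3, n_rd=10):
    """Simplified delay vector variance: mean target variance of
    neighbourhoods at increasing radii, normalised by var(x)."""
    # Delay vectors and their one-step-ahead targets.
    vecs = np.array([x[i:i + m] for i in range(len(x) - m)])
    targets = x[m:]
    # Pairwise Euclidean distances between delay vectors.
    d = np.sqrt(((vecs[:, None, :] - vecs[None, :, :]) ** 2).sum(axis=-1))
    radii = np.linspace(d.mean() - 2 * d.std(), d.mean() + 2 * d.std(), n_rd)
    curve = []
    for r in radii:
        # Only neighbourhoods with enough members give stable variances.
        variances = [targets[d[k] <= r].var()
                     for k in range(len(vecs)) if (d[k] <= r).sum() >= 30]
        curve.append(np.mean(variances) / x.var() if variances else np.nan)
    return radii, np.array(curve)

rng = np.random.default_rng(2)
_, curve = dvv(rng.standard_normal(500))
```

For white Gaussian noise the normalised curve approaches 1 at large radii; deviations of the curve from 1 at small radii are what signal determinism or nonlinearity in an EEG trace.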
Partitioning of the variance in the growth parameters of Erwinia carotovora on vegetable products.
Shorten, P R; Membré, J-M; Pleasants, A B; Kubaczka, M; Soboleva, T K
2004-06-01
The objective of this paper was to estimate and partition the variability in the microbial growth model parameters describing the growth of Erwinia carotovora on pasteurised and non-pasteurised vegetable juice from laboratory experiments performed under different temperature-varying conditions. We partitioned the model parameter variance and covariance components into effects due to temperature profile and replicate using a maximum likelihood technique. Temperature profile and replicate were treated as random effects and the food substrate was treated as a fixed effect. The replicate variance component was small indicating a high level of control in this experiment. Our analysis of the combined E. carotovora growth data sets used the Baranyi primary microbial growth model along with the Ratkowsky secondary growth model. The variability in the microbial growth parameters estimated from these microbial growth experiments is essential for predicting the mean and variance through time of the E. carotovora population size in a product supply chain and is the basis for microbiological risk assessment and food product shelf-life estimation. The variance partitioning made here also assists in the management of optimal product distribution networks by identifying elements of the supply chain contributing most to product variability. Copyright 2003 Elsevier B.V.
MODELING OF SYSTEM COMPONENTS OF EDUCATIONAL PROGRAMS IN HIGH SCHOOL
Directory of Open Access Journals (Sweden)
E. K. Samerkhanova
2016-01-01
Full Text Available Based on the principles of systems research, this article describes the components of the system for managing educational programs. Educational program management is a set of content, process, resource, subject-activity and performance-evaluation components, which ensures the integrity of integration processes at all levels of education. Stability and development in the management of educational programs are achieved by identifying and securing social norms and the status of the educational institution's program managers, so as to ensure the achievement of modern quality of education. Content management provides relevant educational content in accordance with the requirements of educational and professional standards; process management ensures the efficient organization and rational distribution of process flows; resource management provides the optimal distribution of personnel and of the information, methodological, material and technical equipment of the educational program; contingent management provides subject-activity interaction among participants of the educational process; and quality management ensures the quality of educational services.
Implementing components of the routines-based model
McWilliam, Robin; Fernández Valero, Rosa
2015-01-01
The MBR is comprised of 17 components that can generally be grouped into practices related to (a) functional assessment and intervention planning (for example, Routines-Based Interview), (b) organization of services (including location and staffing), (c) service delivery to children and families (using a consultative approach with families and teachers, integrated therapy), (d) classroom organization (for example, classroom zones), and (e) supervision and training through ch...
Virtual enterprise model for the electronic components business in the Nuclear Weapons Complex
Energy Technology Data Exchange (ETDEWEB)
Ferguson, T.J.; Long, K.S.; Sayre, J.A. [Sandia National Labs., Albuquerque, NM (United States); Hull, A.L. [Sandia National Labs., Livermore, CA (United States); Carey, D.A.; Sim, J.R.; Smith, M.G. [Allied-Signal Aerospace Co., Kansas City, MO (United States). Kansas City Div.
1994-08-01
The electronic components business within the Nuclear Weapons Complex spans organizational and Department of Energy contractor boundaries. An assessment of the current processes indicates a need for fundamentally changing the way electronic components are developed, procured, and manufactured. A model is provided based on a virtual enterprise that recognizes distinctive competencies within the Nuclear Weapons Complex and at the vendors. The model incorporates changes that reduce component delivery cycle time and improve cost effectiveness while delivering components of the appropriate quality.
Effect of Model Selection on Computed Water Balance Components
Jhorar, R.K.; Smit, A.A.M.F.R.; Roest, C.W.J.
2009-01-01
Soil water flow modelling approaches as used in four selected on-farm water management models, namely CROPWAT, FAIDS, CERES and SWAP, are compared through numerical experiments. The soil water simulation approaches used in the first three models are reformulated to incorporate all evapotranspiration
Exploring component-based approaches in forest landscape modeling
H. S. He; D. R. Larsen; D. J. Mladenoff
2002-01-01
Forest management issues are increasingly required to be addressed in a spatial context, which has led to the development of spatially explicit forest landscape models. The numerous processes, complex spatial interactions, and diverse applications in spatial modeling make the development of forest landscape models difficult for any single research group. New...
Scalable Power-Component Models for Concept Testing
2011-08-17
motor speed can be either positive or negative dependent upon the propelling or regenerative braking scenario. The simulation provides three... the machine during generation or regenerative braking. To use the model, the user modifies the motor model criteria parameters by double-clicking... SYSTEMS ENGINEERING AND TECHNOLOGY SYMPOSIUM, MODELING & SIMULATION, TESTING AND VALIDATION (MSTV) MINI-SYMPOSIUM, AUGUST 9-11, DEARBORN, MICHIGAN
Modeling dynamics of biological and chemical components of aquatic ecosystems
International Nuclear Information System (INIS)
Lassiter, R.R.
1975-05-01
To provide capability to model aquatic ecosystems or their subsystems as needed for particular research goals, a modeling strategy was developed. Submodels of several processes common to aquatic ecosystems were developed or adapted from previously existing ones. Included are submodels for photosynthesis as a function of light and depth, biological growth rates as a function of temperature, dynamic chemical equilibrium, feeding and growth, and various types of losses to biological populations. These submodels may be used as modules in the construction of models of subsystems or ecosystems. A preliminary model for the nitrogen cycle subsystem was developed using the modeling strategy and applicable submodels. (U.S.)
Directory of Open Access Journals (Sweden)
Yihang Yin
2015-08-01
Full Text Available Wireless sensor networks (WSNs have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inner-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission to prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA. First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing with a novel similarity measure metric. Next, sensor data in one cluster are aggregated in the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data and retain the definite variance at the same time. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong
2015-08-07
Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inner-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission to prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing with a novel similarity measure metric. Next, sensor data in one cluster are aggregated in the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data and retain the definite variance at the same time. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
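The compression step described in the two records above — PCA at the cluster head with the number of retained components chosen to meet an error bound — can be sketched as follows, with hypothetical correlated sensor data and an assumed 99% retained-variance bound:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical cluster of 8 correlated sensors, 200 readings each:
# two latent environmental signals plus small sensor noise.
base = rng.standard_normal((200, 2))
readings = base @ rng.standard_normal((2, 8)) + 0.05 * rng.standard_normal((200, 8))

mean = readings.mean(axis=0)
centered = readings - mean
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

# Keep the fewest principal components whose retained variance meets the bound.
explained = np.cumsum(s ** 2) / np.sum(s ** 2)
k = int(np.searchsorted(explained, 0.99) + 1)

compressed = centered @ Vt[:k].T            # what the cluster head transmits
reconstructed = compressed @ Vt[:k] + mean  # what the sink recovers
mse = np.mean((reconstructed - readings) ** 2)
print(k, mse)
```

With strongly correlated sensors, only a couple of components are transmitted instead of all eight channels, which is the source of the communication savings the records report; the error-bound check on `explained` is what keeps the reconstruction variance within the guarantee.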
Gini estimation under infinite variance
A. Fontanari (Andrea); N.N. Taleb (Nassim Nicholas); P. Cirillo (Pasquale)
2018-01-01
We study the problems related to the estimation of the Gini index in the presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α∈(1,2)). We show that, in such a case, the Gini coefficient
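A quick numerical illustration of the issue (not the authors' correction): the nonparametric plug-in Gini estimator applied to Pareto samples with tail index α = 1.5 falls short, on average, of the true value 1/(2α − 1).

```python
import numpy as np

def gini(x):
    """Nonparametric plug-in Gini estimator."""
    x = np.sort(x)
    n = x.size
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

rng = np.random.default_rng(4)
alpha = 1.5                          # tail index in (1, 2): infinite variance
true_gini = 1 / (2 * alpha - 1)      # = 0.5 for Pareto with minimum x_m = 1
samples = rng.pareto(alpha, size=(500, 1000)) + 1
estimates = np.array([gini(s) for s in samples])
print(true_gini, estimates.mean())   # the plug-in estimator is biased downward
```

The downward bias arises because the heaviest observations, which carry much of the inequality, are rarely present in any finite sample — the small-sample effect the abstract refers to.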
A three-component, hierarchical model of executive attention
Whittle, Sarah; Pantelis, Christos; Testa, Renee; Tiego, Jeggan; Bellgrove, Mark
2017-01-01
Executive attention refers to the goal-directed control of attention. Existing models of executive attention distinguish between three correlated, but empirically dissociable, factors related to selectively attending to task-relevant stimuli (Selective Attention), inhibiting task-irrelevant responses (Response Inhibition), and actively maintaining goal-relevant information (Working Memory Capacity). In these models, Selective Attention and Response Inhibition are moderately strongly correlate...
Economic Modeling as a Component of Academic Strategic Planning.
MacKinnon, Joyce; Sothmann, Mark; Johnson, James
2001-01-01
Computer-based economic modeling was used to enable a school of allied health to define outcomes, identify associated costs, develop cost and revenue models, and create a financial planning system. As a strategic planning tool, it assisted realistic budgeting and improved efficiency and effectiveness. (Contains 18 references.) (SK)
Component vibration of VVER-reactors - diagnostics and modelling
International Nuclear Information System (INIS)
Altstadt, E.; Scheffler, M.; Weiss, F.P.
1994-01-01
The model comprises the whole primary circuit, including steam generators, loops, coolant pumps, main isolating valves and, of course, the reactor pressure vessel and its internals. It was developed using the finite-element code ANSYS. The model has a modular structure, so that various operational and assembly states can easily be considered. (orig./DG)
PyCatch: Component based hydrological catchment modelling
Lana-Renault, N.; Karssenberg, D.J.
2013-01-01
Dynamic numerical models are powerful tools for representing and studying environmental processes through time. Usually they are constructed with environmental modelling languages, which are high-level programming languages that operate at the level of thinking of the scientists. In this paper we
Modeling and Analysis of Component Faults and Reliability
DEFF Research Database (Denmark)
Le Guilly, Thibaut; Olsen, Petur; Ravn, Anders Peter
2016-01-01
This chapter presents a process to design and validate models of reactive systems in the form of communicating timed automata. The models are extended with faults associated with probabilities of occurrence. This enables a fault tree analysis of the system using minimal cut sets that are automatically generated. The stochastic information on the faults is used to estimate the reliability of the fault-affected system. The reliability is given with respect to properties of the system state space. We illustrate the process on a concrete example using the Uppaal model checker for validating the ideal system model and the fault modeling. Then the statistical version of the tool, UppaalSMC, is used to find reliability estimates.
Multiparticle production in a two-component dual parton model
International Nuclear Information System (INIS)
Aurenche, P.; Bopp, F.W.; Capella, A.; Kwiecinski, J.; Maire, M.; Ranft, J.; Tran Thanh Van, J.
1992-01-01
The dual parton model (DPM) describes soft and semihard multiparticle production. The version of the DPM presented in this paper includes soft and hard mechanisms as well as diffractive processes. The model is formulated as a Monte Carlo event generator. We calculate in this model, in the energy range of the hadron colliders, rapidity distributions and the rise of the rapidity plateau with the collision energy, transverse-momentum distributions and the rise of average transverse momenta with the collision energy, multiplicity distributions in different pseudorapidity regions, and transverse-energy distributions. For most of these quantities we find a reasonable agreement with experimental data
Comprehensive FDTD modelling of photonic crystal waveguide components
DEFF Research Database (Denmark)
Lavrinenko, Andrei; Borel, Peter Ingo; Frandsen, Lars Hagedorn
2004-01-01
Planar photonic crystal waveguide structures have been modelled using the finite-difference-time-domain method and perfectly matched layers have been employed as boundary conditions. Comprehensive numerical calculations have been performed and compared to experimentally obtained transmission...
Genetic factors explain half of all variance in serum eosinophil cationic protein
DEFF Research Database (Denmark)
Elmose, Camilla; Sverrild, Asger; van der Sluis, Sophie
2014-01-01
with variation in serum ECP and to determine the relative proportion of the variation in ECP due to genetic and non-genetic factors, in an adult twin sample. METHODS: A sample of 575 twins, selected through a proband with self-reported asthma, had serum ECP, lung function, airway responsiveness to methacholine, exhaled nitric oxide, and skin test reactivity measured. Linear regression analysis and variance component models were used to study factors associated with variation in ECP and the relative genetic influence on ECP levels. RESULTS: Sex (regression coefficient = -0.107, P ... was statistically non-significant (r = -0.11, P = 0.50). CONCLUSION: Around half of all variance in serum ECP is explained by genetic factors. Serum ECP is influenced by sex, BMI, and airway responsiveness. Serum ECP and airway responsiveness seem not to share genetic variance.
New methods for the characterization of pyrocarbon; the two-component model of pyrocarbon
Energy Technology Data Exchange (ETDEWEB)
Luhleich, H.; Sutterlin, L.; Hoven, H.; Nickel, H.
1972-04-19
In the first part, new experiments to clarify the origin of different pyrocarbon components are described. Three new methods (plasma oxidation, wet oxidation, ultrasonic method) are presented to expose the carbon-black-like component in the pyrocarbon deposited in fluidized beds. In the second part, a two-component model of pyrocarbon is proposed and illustrated by examples.
System level modeling and component level control of fuel cells
Xue, Xingjian
This dissertation investigates the fuel cell systems and the related technologies in three aspects: (1) system-level dynamic modeling of both PEM fuel cell (PEMFC) and solid oxide fuel cell (SOFC); (2) condition monitoring scheme development of PEM fuel cell system using model-based statistical method; and (3) strategy and algorithm development of precision control with potential application in energy systems. The dissertation first presents a system level dynamic modeling strategy for PEM fuel cells. It is well known that water plays a critical role in PEM fuel cell operations. It makes the membrane function appropriately and improves the durability. The low temperature operating conditions, however, impose modeling difficulties in characterizing the liquid-vapor two phase change phenomenon, which becomes even more complex under dynamic operating conditions. This dissertation proposes an innovative method to characterize this phenomenon, and builds a comprehensive model for PEM fuel cell at the system level. The model features the complete characterization of multi-physics dynamic coupling effects with the inclusion of dynamic phase change. The model is validated using Ballard stack experimental result from open literature. The system behavior and the internal coupling effects are also investigated using this model under various operating conditions. Anode-supported tubular SOFC is also investigated in the dissertation. While the Nernst potential plays a central role in characterizing the electrochemical performance, the traditional Nernst equation may lead to incorrect analysis results under dynamic operating conditions due to the current reverse flow phenomenon. This dissertation presents a systematic study in this regard to incorporate a modified Nernst potential expression and the heat/mass transfer into the analysis. The model is used to investigate the limitations and optimal results of various operating conditions; it can also be utilized to perform the
Estimating the encounter rate variance in distance sampling
Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.
2009-01-01
The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
Multiperiod Mean-Variance Portfolio Optimization via Market Cloning
Energy Technology Data Exchange (ETDEWEB)
Ankirchner, Stefan, E-mail: ankirchner@hcm.uni-bonn.de [Rheinische Friedrich-Wilhelms-Universitaet Bonn, Institut fuer Angewandte Mathematik, Hausdorff Center for Mathematics (Germany); Dermoune, Azzouz, E-mail: Azzouz.Dermoune@math.univ-lille1.fr [Universite des Sciences et Technologies de Lille, Laboratoire Paul Painleve UMR CNRS 8524 (France)
2011-08-15
The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean-variance problem.
Multiperiod Mean-Variance Portfolio Optimization via Market Cloning
International Nuclear Information System (INIS)
Ankirchner, Stefan; Dermoune, Azzouz
2011-01-01
The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean-variance problem.
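The "empirical counterparts" idea can be illustrated in a single-period toy version: draw many independent market clones, replace the mean and variance of wealth by their sample statistics, and maximize the weighted objective over allocations. This is a sketch under assumed return parameters, not the paper's multiperiod dynamic program.

```python
import numpy as np

rng = np.random.default_rng(5)
M = 10000                              # independent market clones
returns = rng.normal(0.05, 0.2, M)     # hypothetical single risky asset
lam = 2.0                              # weight on the variance term

# Empirical counterpart of E[W] - lam * Var[W] over a grid of allocations a.
grid = np.linspace(0, 1, 101)
scores = [np.mean(1 + a * returns) - lam * np.var(a * returns) for a in grid]
best = grid[int(np.argmax(scores))]
print(best)  # analytic optimum: mu / (2 * lam * sigma^2) = 0.05 / 0.16 ≈ 0.31
```

As the number of clones M grows, the empirical optimum converges to the analytic one — the same limiting argument the abstract invokes to recover the original mean-variance problem.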
Bammann, K; Huybrechts, I; Vicente-Rodriguez, G; Easton, C; De Vriendt, T; Marild, S; Mesana, M I; Peeters, M W; Reilly, J J; Sioen, I; Tubic, B; Wawro, N; Wells, J C; Westerterp, K; Pitsiladis, Y; Moreno, L A
2013-04-01
To compare different field methods for estimating body fat mass with a reference value derived by a three-component (3C) model in pre-school and school children across Europe. Multicentre validation study. Seventy-eight preschool/school children aged 4-10 years from four different European countries. A standard measurement protocol was carried out in all children by trained field workers. A 3C model was used as the reference method. The field methods included height and weight measurement, circumferences measured at four sites, skinfold measured at two-six sites and foot-to-foot bioelectrical resistance (BIA) via TANITA scales. With the exception of height and neck circumference, all single measurements were able to explain at least 74% of the fat-mass variance in the sample. In combination, circumference models were superior to skinfold models and height-weight models. The best predictions were given by trunk models (combining skinfold and circumference measurements) that explained 91% of the observed fat-mass variance. The optimal data-driven model for our sample includes hip circumference, triceps skinfold and total body mass minus resistance index, and explains 94% of the fat-mass variance with 2.44 kg fat mass limits of agreement. In all investigated models, prediction errors were associated with fat mass, although to a lesser degree in the investigated skinfold models, arm models and the data-driven models. When studying total body fat in childhood populations, anthropometric measurements will give biased estimations as compared to gold standard measurements. Nevertheless, our study shows that when combining circumference and skinfold measurements, estimations of fat mass can be obtained with a limit of agreement of 1.91 kg in normal weight children and of 2.94 kg in overweight or obese children.
The Genealogical Consequences of Fecundity Variance Polymorphism
Taylor, Jesse E.
2009-01-01
The genealogical consequences of within-generation fecundity variance polymorphism are studied using coalescent processes structured by genetic backgrounds. I show that these processes have three distinctive features. The first is that the coalescent rates within backgrounds are not jointly proportional to the infinitesimal variance, but instead depend only on the frequencies and traits of genotypes containing each allele. Second, the coalescent processes at unlinked loci are correlated with the genealogy at the selected locus; i.e., fecundity variance polymorphism has a genomewide impact on genealogies. Third, in diploid models, there are infinitely many combinations of fecundity distributions that have the same diffusion approximation but distinct coalescent processes; i.e., in this class of models, ancestral processes and allele frequency dynamics are not in one-to-one correspondence. Similar properties are expected to hold in models that allow for heritable variation in other traits that affect the coalescent effective population size, such as sex ratio or fecundity and survival schedules. PMID:19433628
Modelling and forecasting WIG20 daily returns
DEFF Research Database (Denmark)
Amado, Cristina; Silvennoinen, Annestiina; Terasvirta, Timo
of the model is that the deterministic component is specified before estimating the multiplicative conditional variance component. The resulting model is subjected to misspecification tests and its forecasting performance is compared with that of commonly applied models of conditional heteroskedasticity....
A Bayesian Analysis of Unobserved Component Models Using Ox
Directory of Open Access Journals (Sweden)
Charles S. Bos
2011-05-01
This article details a Bayesian analysis of the Nile river flow data, using a state space model similar to that of other articles in this volume. For this data set, Metropolis-Hastings and Gibbs sampling algorithms are implemented in the programming language Ox. These Markov chain Monte Carlo methods only provide output conditioned upon the full data set. For filtered output, conditioning only on past observations, the particle filter is introduced. The sampling methods are flexible, and this advantage is used to extend the model to incorporate a stochastic volatility process. Volatility changes in both the Nile data and daily S&P 500 return data are investigated. The posterior density of parameters and states is found to provide information on which elements of the model are easily identifiable, and which are estimated with less precision.
Mass models for disk and halo components in spiral galaxies
International Nuclear Information System (INIS)
Athanassoula, E.; Bosma, A.
1987-01-01
The mass distribution in spiral galaxies is investigated by means of numerical simulations, summarizing the results reported by Athanassoula et al. (1986). Details of the modeling technique employed are given, including bulge-disk decomposition; computation of bulge and disk rotation curves (assuming constant mass/light ratios for each); and determination (for spherical symmetry) of the total halo mass out to the optical radius, the concentration indices, the halo-density power law, the core radius, the central density, and the velocity dispersion. Also discussed are the procedures for incorporating galactic gas and checking the spiral structure extent. It is found that structural constraints limit disk mass/light ratios to a range of 0.3 dex, and that the most likely models are maximum-disk models with m = 1 disturbances inhibited. 19 references
Modeling of a remote inspection system for NSSS components
International Nuclear Information System (INIS)
Choi, Yoo Rark; Kim, Jae Hee; Lee, Jae Cheol
2003-03-01
Safety inspections of safety-critical units in nuclear power plants have traditionally been performed off-line, so the inspection system and its data cannot be accessed over a network such as the Internet. To overcome these problems, we are developing an on-line control and data access system based on WWW and Java technologies that can be used during plant operation. Users can access inspection systems and inspection data using only a web browser. This report discusses an analysis of the existing remote system and essential techniques such as the Web, Java, the client/server model, and the multi-tier model. It also discusses a system model that we have developed using these techniques and provides solutions for developing an on-line control and data access system
Three-Component Dust Models for Interstellar Extinction C ...
Indian Academy of Sciences (India)
without standard' method were used to constrain the dust characteristics in the mean ISM (RV = 3.1), ... Interstellar dust models have evolved as the observational data have advanced, and the most popular dust ... distribution comes from the IRAS observation, which shows an excess of 12 μm and 25 μm emission from the ISM ...
Soil Structure - A Neglected Component of Land-Surface Models
Fatichi, S.; Or, D.; Walko, R. L.; Vereecken, H.; Kollet, S. J.; Young, M.; Ghezzehei, T. A.; Hengl, T.; Agam, N.; Avissar, R.
2017-12-01
Soil structure is largely absent from most standard sampling and measurements and from the subsequent parameterization of soil hydraulic properties deduced from soil maps and used in Earth System Models. This omission propagates into the pedotransfer functions that deduce parameters of soil hydraulic properties primarily from soil textural information. Such simple parameterization is an essential ingredient in the practical application of any land surface model. Despite the critical role of soil structure (biopores formed by decaying roots, aggregates, etc.) in defining soil hydraulic functions, only a few studies have attempted to incorporate soil structure into models. They mostly looked at the effects on preferential flow and solute transport pathways at the soil profile scale; the role of soil structure in mediating large-scale fluxes remains understudied. Here, we focus on rectifying this gap and demonstrating potential impacts on surface and subsurface fluxes and system-wide eco-hydrologic responses. The study proposes a systematic way of correcting the soil water retention and hydraulic conductivity functions to account for soil structure, with major implications for near-saturated hydraulic conductivity. The modification to the basic soil hydraulic parameterization is assumed to be a function of biological activity, summarized by Gross Primary Production. A land-surface model with dynamic vegetation is used to carry out numerical simulations with and without the role of soil structure for 20 locations characterized by different climates and biomes across the globe. Including soil structure considerably affects the partitioning between infiltration and runoff and consequently leakage at the base of the soil profile (recharge). In several locations characterized by wet climates, a few hundred mm per year of surface runoff become deep recharge when soil structure is accounted for. Changes in energy fluxes, total evapotranspiration and vegetation productivity
Feedback loops and temporal misalignment in component-based hydrologic modeling
Elag, Mostafa M.; Goodall, Jonathan L.; Castronova, Anthony M.
2011-12-01
In component-based modeling, a complex system is represented as a series of loosely integrated components with defined interfaces and data exchanges that allow the components to be coupled together through shared boundary conditions. Although the component-based paradigm is commonly used in software engineering, it has only recently been applied for modeling hydrologic and earth systems. As a result, research is needed to test and verify the applicability of the approach for modeling hydrologic systems. The objective of this work was therefore to investigate two aspects of using component-based software architecture for hydrologic modeling: (1) simulation of feedback loops between components that share a boundary condition and (2) data transfers between temporally misaligned model components. We investigated these topics using a simple case study where diffusion of mass is modeled across a water-sediment interface. We simulated the multimedia system using two model components, one for the water and one for the sediment, coupled using the Open Modeling Interface (OpenMI) standard. The results were compared with a more conventional numerical approach for solving the system where the domain is represented by a single multidimensional array. Results showed that the component-based approach was able to produce the same results obtained with the more conventional numerical approach. When the two components were temporally misaligned, we explored the use of different interpolation schemes to minimize mass balance error within the coupled system. The outcome of this work provides evidence that component-based modeling can be used to simulate complicated feedback loops between systems and guidance as to how different interpolation schemes minimize mass balance error introduced when components are temporally misaligned.
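The interpolation step for temporally misaligned components can be sketched in a few lines. This is a hypothetical illustration, not the OpenMI API: a "fast" component linearly interpolates the boundary values published by a "slow" component onto its own time points before each data exchange.

```python
import numpy as np

def interpolate_boundary(t_slow, c_slow, t_fast):
    """Linearly interpolate the slow component's boundary series
    onto the fast component's time points."""
    return np.interp(t_fast, t_slow, c_slow)

# Illustrative values: boundary concentration reported by the slow
# (sediment) component, requested by the fast (water) component.
t_slow = np.array([0.0, 10.0, 20.0])   # slow component time steps
c_slow = np.array([1.0, 0.8, 0.5])     # boundary concentration
t_fast = np.arange(0.0, 20.1, 2.5)     # fast component time steps

c_fast = interpolate_boundary(t_slow, c_slow, t_fast)
print(c_fast)
```

The choice of interpolation scheme matters here precisely because, as the abstract notes, it determines how much mass balance error the misalignment introduces into the coupled system.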
Capturing option anomalies with a variance-dependent pricing kernel
Christoffersen, P.; Heston, S.; Jacobs, K.
2013-01-01
We develop a GARCH option model with a variance premium by combining the Heston-Nandi (2000) dynamic with a new pricing kernel that nests Rubinstein (1976) and Brennan (1979). While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is
Adjustment of heterogenous variances and a calving year effect in ...
African Journals Online (AJOL)
Data at the beginning and at the end of the lactation period have higher variances than tests in the middle of the lactation. Furthermore, first lactations have lower means and variances than second and third lactations. This is a deviation from the basic assumptions required for the application of repeatability models.
On the Endogeneity of the Mean-Variance Efficient Frontier.
Somerville, R. A.; O'Connell, Paul G. J.
2002-01-01
Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…
Variance based OFDM frame synchronization
Directory of Open Access Journals (Sweden)
Z. Fedra
2012-04-01
The paper presents a new frame synchronization scheme for OFDM systems and calculates the complexity of this scheme. The scheme is based on computing the variance of a detection window. The variance is computed at two delayed time instants, so a modified Early-Late loop is used for frame position detection. The proposed algorithm handles different variants of OFDM parameters, including the guard interval and cyclic prefix, and has good properties regarding the choice of the algorithm's parameters, since they may be chosen within a wide range without strongly influencing system performance. The functionality of the proposed algorithm has been verified in a development environment using universal software radio peripheral (USRP) hardware.
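The core idea of variance-based synchronization can be sketched as follows. This is an illustrative toy, not the paper's algorithm: a sliding-window variance is computed over the received samples, and an Early-Late style difference of the variance at two delayed positions peaks near the frame boundary.

```python
import numpy as np

def sliding_variance(x, w):
    """Variance of x over a sliding window of length w."""
    return np.array([np.var(x[i:i + w]) for i in range(len(x) - w + 1)])

rng = np.random.default_rng(0)
# Illustrative received signal: low-power idle samples followed by a
# frame of higher-power data samples (stand-ins for OFDM symbols).
x = np.concatenate([0.05 * rng.standard_normal(128),
                    rng.standard_normal(256)])

w = 64
v = sliding_variance(x, w)

# Early-Late style comparison: the "late" variance minus the "early"
# variance at each candidate position; the frame edge is near where
# this difference peaks.
delay = w // 2
diff = v[delay:] - v[:-delay]
est = int(np.argmax(diff))
print(est)
```

The estimated position lands in the transition region around the idle/frame boundary; a real receiver would refine it with the loop dynamics described in the paper.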
Component-based modeling of systems for automated fault tree generation
International Nuclear Information System (INIS)
Majdara, Aref; Wakabayashi, Toshio
2009-01-01
One of the challenges in the field of automated fault tree construction is to find an efficient modeling approach that can support modeling of different types of systems without ignoring any necessary details. In this paper, we present a new system modeling approach for computer-aided fault tree generation. In this method, every system model is composed of components and the different types of flows propagating through them. Each component has a function table that describes its input-output relations. For components having different operational states, there is also a state transition table. Each component can communicate with other components in the system only through its inputs and outputs. A trace-back algorithm is proposed that can be applied to the system model to generate the required fault trees. The system modeling approach and the fault tree construction algorithm are applied to a fire sprinkler system and the results are presented
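The trace-back idea can be sketched in miniature. This is a hypothetical illustration with made-up component names and OR-logic only; the paper's function tables and state transition tables are far richer.

```python
# Each component exposes only its inputs; a loss of output at a
# component is caused either by its own failure or by loss of any
# upstream input (OR-logic, for illustration).
components = {
    "sprinkler": {"inputs": ["valve"]},
    "valve": {"inputs": ["pump"]},
    "pump": {"inputs": []},
}

def trace_back(name, components):
    """Collect the basic failure events that can cause loss of
    output at the given component by walking upstream."""
    causes = [f"{name}_fails"]
    for upstream in components[name]["inputs"]:
        causes.extend(trace_back(upstream, components))
    return causes

print(trace_back("sprinkler", components))
# → ['sprinkler_fails', 'valve_fails', 'pump_fails']
```

A real implementation would consult each component's function table to decide which input deviations propagate, and emit AND-gates where redundancy exists.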
Research on development model of nuclear component based on life cycle management
International Nuclear Information System (INIS)
Bao Shiyi; Zhou Yu; He Shuyan
2005-01-01
At present, the development process of nuclear components, and even the components themselves, are increasingly supported by computer technology. This growing use of computers and software has accelerated the development of nuclear technology on one hand, but has also brought new problems on the other. In particular, the combination of hardware, software and humans has raised nuclear component system complexity to an unprecedented level. To address this problem, Life Cycle Management technology is adopted for nuclear component systems, and an intensive discussion of the development process of a nuclear component is presented. According to the characteristics of nuclear component development, such as the complexity and strict safety requirements of nuclear components, long design periods, changeable design specifications and requirements, high capital investment, and the need to satisfy engineering codes/standards, a development life-cycle model of nuclear components is presented. The model is organized into three levels, namely the component-level development life-cycle, the sub-component development life-cycle and the component-level verification/certification life-cycle. The purposes and outcomes of the development processes are stated in detail. A process framework for nuclear components based on systems engineering, and a development environment for nuclear components, are discussed as future research work. (authors)
Li, Yang; Pirvu, Traian A
2011-01-01
This paper considers the mean-variance portfolio management problem. We examine portfolios that contain both primary and derivative securities. The challenge in this context is the portfolio's nonlinearities; the delta-gamma approximation is employed to overcome it. Thus, the optimization problem is reduced to a well-posed quadratic program. The methodology developed in this paper can also be applied to pricing and hedging in incomplete markets.
Five-component propagation model for steam explosion analysis
International Nuclear Information System (INIS)
Yang, Y.; Moriyama, Kiyofumi; Park, H.S.; Maruyama, Yu; Sugimoto, Jun
1999-01-01
A five-field simulation code, JASMINE-pro, has been developed at JAERI for calculation of the propagation and explosion phases of steam explosions. This paper introduces the basic equations and the constitutive relationships utilized in the propagation models of the code. Calculations simulating the KROTOS 1D and 2D steam explosion experiments are also presented to show the present capability of the code. (author)
Component-oriented approach to the development and use of numerical models in high energy physics
International Nuclear Information System (INIS)
Amelin, N.S.; Komogorov, M.Eh.
2002-01-01
We discuss the main concepts of a component approach to the development and use of numerical models in high energy physics. This approach is realized as the NiMax software system. The concepts are illustrated by numerous examples from system user sessions. In the appendix chapter we describe the physics and numerical algorithms of the model components used to simulate hadronic and nuclear collisions at high energies. These components are members of hadronic application modules that have been developed with the help of the NiMax system. This report serves as an early release of the NiMax manual, mainly for model component users
Mathematical Model for Multicomponent Adsorption Equilibria Using Only Pure Component Data
DEFF Research Database (Denmark)
Marcussen, Lis
2000-01-01
A mathematical model for nonideal adsorption equilibria in multicomponent mixtures is developed. It is applied with good results for pure substances and for prediction of strongly nonideal multicomponent equilibria using only pure component data. The model accounts for adsorbent...
International Nuclear Information System (INIS)
Carl Stern; Martin Lee
1999-01-01
Phase I work studied the feasibility of developing software for automatic component calibration and error correction in beamline optics models. A prototype application was developed that corrects quadrupole field strength errors in beamline models
Beltrachini, L.; Blenkmann, A.; von Ellenrieder, N.; Petroni, A.; Urquina, H.; Manes, F.; Ibáñez, A.; Muravchik, C. H.
2011-12-01
A major goal of event-related potential studies is to use source localization techniques to identify the loci of neural activity that give rise to a particular voltage distribution measured on the surface of the scalp. In this paper we evaluate the effect of the adopted head model on estimating the N170 component source in attention deficit hyperactivity disorder (ADHD) patients and control subjects, considering face and word stimuli. The standardized low resolution brain electromagnetic tomography algorithm (sLORETA) is used to compare the three-shell spherical head model with a fully realistic model based on the ICBM-152 atlas. We compare their variance in source estimation and analyze the impact on N170 source localization. Results show that the often-used three-shell spherical model may lead to erroneous solutions, especially in ADHD patients, so its use is not recommended. Our results also suggest that N170 sources are mainly located in the right occipital fusiform gyrus for face stimuli and in the left occipital fusiform gyrus for word stimuli, for both control subjects and ADHD patients. We also found a notable decrease in the estimated N170 source amplitude in ADHD patients, a plausible marker of the disease.
Three Fundamental Components of the Autopoiesic Leadership Model
Directory of Open Access Journals (Sweden)
Mateja Kalan
2017-06-01
Research Question (RQ): What type of leadership could be developed upon transformational leadership? Purpose: The purpose of the research was to create a new leadership style whose variables can be further developed from transformational leadership variables, a leadership style known to be successful in successful organisations. Method: In reviewing published papers from scientific databases, we relied on the triangulation of theories. To clarify the research question, we examined different authors who based their research papers on different, sometimes contradictory, hypotheses. Results: Through the research, we concluded that authors often changed certain variables when researching transformational leadership. We correlated these variables and developed a new model, naming it autopoiesic leadership. Its main variables are (1) goal orientation, (2) emotional sensitivity, and (3) the manager's flexibility in organisations. Organisation: Our research can have a positive effect on managers in terms of recognising the importance of the selected variables. Practical application of autopoiesic leadership can make a company's business processes more efficient, increasing its financial performance. Society: Autopoiesic leadership is a leadership style that largely draws on the individual's internal resources. The individual thus becomes internally motivated, which is the basis for quality work. This strengthens the employees' social aspect, which in turn has a positive effect on their life outside the organisational system, i.e. their family and broader living environment. Originality: In the worldwide literature, we have noticed the concept of autopoiesis in papers on management subjects, but an autopoiesic leadership model has not been developed so far. Limitations / Future Research: We based our research on the triangulation of theories
A two-component dark matter model with real singlet scalars ...
Indian Academy of Sciences (India)
2016-01-05
We propose a two-component dark matter (DM) model, each component of which is a real singlet scalar, to explain results from both direct and indirect detection experiments. We put constraints on the model parameters from theoretical bounds, PLANCK relic density results and direct DM experiments.
A zero-variance-based scheme for variance reduction in Monte Carlo criticality
Energy Technology Data Exchange (ETDEWEB)
Christoforou, S.; Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)
2006-07-01
A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)
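The principle behind the zero-variance scheme can be illustrated with a generic toy example (not the paper's transport formulation): if samples are drawn from a density proportional to their true contribution, every weighted score is identical and the estimator variance vanishes. In transport calculations, the adjoint densities play this biasing role.

```python
import math
import random

def estimate(n, biased, seed=3):
    """Estimate the integral of f(x)=x on (0,1) by Monte Carlo,
    with or without importance-sampling bias."""
    rng = random.Random(seed)
    total = total_sq = 0.0
    for _ in range(n):
        if biased:
            # Sample x from p(x) = 2x on (0,1) (proportional to f);
            # the score f(x)/p(x) = 1/2 is the same for every sample.
            x = math.sqrt(rng.random())
            w = x / (2.0 * x)
        else:
            # Analogue sampling: x uniform, p(x) = 1, score f(x) = x.
            x = rng.random()
            w = x
        total += w
        total_sq += w * w
    mean = total / n
    var = total_sq / n - mean * mean
    return mean, var

print(estimate(10_000, biased=False))  # mean ≈ 0.5, nonzero variance
print(estimate(10_000, biased=True))   # mean = 0.5, zero variance
```

In practice the exact zero-variance density is unknown; as in the paper, an approximate adjoint from a deterministic calculation still yields a large variance reduction.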
Modelling insights on the partition of evapotranspiration components across biomes
Fatichi, Simone; Pappas, Christoforos
2017-04-01
Recent studies using various methodologies have found a large variability (from 35 to 90%) in the ratio of transpiration to total evapotranspiration (denoted as T:ET) across biomes or even at the global scale. Concurrently, previous results suggest that T:ET is independent of mean precipitation and has a positive correlation with Leaf Area Index (LAI). We used the mechanistic ecohydrological model T&C, with a refined process-based description of soil resistance and a detailed treatment of canopy biophysics and ecophysiology, to investigate T:ET across multiple biomes. Contrary to observation-based estimates, simulation results highlight a well-constrained range of mean T:ET across biomes that is also robust to perturbations of the most sensitive parameters. Simulated T:ET was confirmed to be independent of average precipitation, while it was found to be uncorrelated with LAI across biomes. Higher values of LAI increase evaporation from interception but suppress ground evaporation, with the two effects largely cancelling each other at many sites. These results offer mechanistic, model-based evidence in the ongoing research about the range of T:ET and the factors affecting its magnitude across biomes.
(Co) variance Components and Genetic Parameter Estimates for Re
African Journals Online (AJOL)
Mapula
The magnitude of heritability estimates obtained in the current study ... traits were recently introduced to supplement progeny testing programmes or for usage as sole source of ..... VCE-5 User's Guide and Reference Manual Version 5.1.
Genetic variance of sunflower yield components - Heliantus annuus L.
Directory of Open Access Journals (Sweden)
Hladni Nada
2003-01-01
The main goals of sunflower breeding in Yugoslavia and abroad are increased seed yield and oil content per unit area and increased resistance to diseases, insects and stress conditions via an optimization of plant architecture. In order to determine the mode of inheritance, gene effects and correlations of total leaf number per plant, total leaf area and plant height, six genetically divergent inbred lines of sunflower were subjected to half-diallel crosses. Significant differences in mean values of all the traits were found in the F1 and F2 generations. Additive gene effects were more important in the inheritance of total leaf number per plant and plant height, while for total leaf area per plant the non-additive effects were more important across all combinations in the F1 and F2 generations. The average degree of dominance (H1/D)^(1/2) was lower than one for total leaf number per plant and plant height, so the mode of inheritance was partial dominance, while for total leaf area the value was higher than one, indicating superdominance as the mode of inheritance. Significant positive correlations were found between total leaf area per plant and total leaf number per plant (0.285*) and plant height (0.278*). The results of the study are of importance for further sunflower breeding work.
Virtual Models Linked with Physical Components in Construction
DEFF Research Database (Denmark)
Sørensen, Kristian Birch
The use of virtual models supports a fundamental change in the working practice of the construction industry. It changes the primary information carrier (drawings) from simple manually created depictions of the building under construction to visually realistic digital representations that also...... engineering and business development in an iterative and user needs centred system development process. The analysis of future business perspectives presents an extensive number of new working processes that can assist in solving major challenges in the construction industry. Three of the most promising...... practices and development of new ontologies. Based on the experiences gained in this PhD project, some of the important future challenges are also to show the benefits of using modern information and communication technology to practitioners in the construction industry and to communicate this knowledge...
Reliability analysis of nuclear component cooling water system using semi-Markov process model
International Nuclear Information System (INIS)
Veeramany, Arun; Pandey, Mahesh D.
2011-01-01
Research highlights: → A semi-Markov process (SMP) model is used to evaluate the system failure probability of the nuclear component cooling water (NCCW) system. → The SMP is used because it can solve a reliability block diagram with a mixture of redundant repairable and non-repairable components. → The primary objective is to demonstrate that the SMP can consider a Weibull failure time distribution for components, while a Markov model cannot. → Result: the variability in component failure time is directly proportional to the NCCW system failure probability. → The result can be utilized as an initiating event probability in probabilistic safety assessment projects. - Abstract: A reliability analysis of the nuclear component cooling water (NCCW) system is carried out. A semi-Markov process model is used in the analysis because it can solve a reliability block diagram with a mixture of repairable and non-repairable components. With Markov models it is only possible to assume an exponential profile for component failure times; an advantage of the proposed model is the ability to assume a Weibull distribution for component failure times. In an attempt to reduce the number of states in the model, it is shown that the poly-Weibull distribution arises. The objective of the paper is to determine the system failure probability under these assumptions. Monte Carlo simulation is used to validate the model result. This result can be utilized as an initiating event probability in probabilistic safety assessment projects.
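The Monte Carlo validation step can be sketched for a toy version of the problem. This is an illustrative stand-in, not the paper's model: a 1-out-of-2 redundant train of non-repairable pumps with made-up Weibull parameters, where the system fails only if both pumps fail before the mission time.

```python
import math
import random

def weibull_sample(shape, scale, rng):
    """Draw a failure time from a Weibull distribution by inversion."""
    return scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)

def system_failure_prob(mission, n_trials=100_000, seed=1):
    """Monte Carlo estimate: the redundant train fails only if both
    (non-repairable) pumps fail before the mission time."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        t1 = weibull_sample(2.0, 5000.0, rng)   # assumed parameters
        t2 = weibull_sample(2.0, 5000.0, rng)
        if max(t1, t2) < mission:
            failures += 1
    return failures / n_trials

p = system_failure_prob(mission=1000.0)
print(p)  # close to the analytical value (1 - exp(-(1000/5000)**2))**2
```

The analytical check here is possible because the toy has independent components; the SMP formulation in the paper is needed once repairable and non-repairable components are mixed.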
Minimum Variance Portfolios in the Brazilian Equity Market
Directory of Open Access Journals (Sweden)
Alexandre Rubesam
2013-03-01
We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to that of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk than the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, and is easily replicable by individual and institutional investors alike.
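Given any covariance matrix estimate Σ, the unconstrained fully-invested minimum variance portfolio has the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A minimal sketch with a made-up 3-asset covariance matrix (the paper's portfolios additionally impose long-only or 130/30 constraints, which require a quadratic programming solver):

```python
import numpy as np

def min_variance_weights(cov):
    """Unconstrained, fully-invested minimum variance weights:
    w = Σ^{-1} 1 / (1' Σ^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    x = np.linalg.solve(cov, ones)
    return x / x.sum()

# Illustrative 3-asset covariance matrix (values are made up).
cov = np.array([
    [0.10, 0.02, 0.010],
    [0.02, 0.08, 0.015],
    [0.01, 0.015, 0.05],
])
w = min_variance_weights(cov)
print(w, w @ cov @ w)  # weights sum to 1; variance below any single asset's
```

Swapping in a sample covariance versus a multivariate GARCH forecast changes only the `cov` input, which is exactly the comparison the abstract describes.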
International Nuclear Information System (INIS)
Araujo, Janeo Severino C. de; Dantas, Carlos Costa; Santos, Valdemir A. dos; Souza, Jose Edson G. de; Luna-Finkler, Christine L.
2009-01-01
The fluid dynamic behavior of the riser of a cold flow model of a Fluid Catalytic Cracking Unit (FCCU) was investigated. The experimental data were obtained by the nuclear technique of gamma transmission: a gamma source was placed diametrically opposite a detector in a given straight section of the riser. The gas-solid flow through the riser was monitored with an americium-241 source, which allowed the axial solid concentration to be obtained without flow disturbance and the dependence of this concentration profile on several independent variables to be identified. The MATLAB and Statistica software packages were used. The Statistica tool employed was principal component analysis (PCA), which organized the data into two-dimensional matrices to extract relevant information about the importance of the independent variables on axial solid concentration in a cold flow riser. The variables investigated were the mass flow rate of solid, the mass flow rate of gas, the pressure at the riser base and the relative height in the riser. The first two components accounted for about 98% of the accumulated explained variance. (author)
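The PCA step, extracting the share of variance explained by each component from standardised variables, can be sketched as follows. The synthetic data stand in for the four riser variables; the correlation structure is assumed for illustration only.

```python
import numpy as np

def pca_explained_variance(X):
    # Standardise the variables, then diagonalise their correlation matrix;
    # the normalised eigenvalues are the explained-variance ratios.
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    corr = np.cov(Z, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)[::-1]   # descending order
    return eigvals / eigvals.sum()

# Synthetic stand-in for the four riser variables (solid flow, gas flow,
# base pressure, relative height); the correlation structure is assumed.
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 2))
X = np.column_stack([
    base[:, 0],
    0.9 * base[:, 0] + 0.1 * rng.normal(size=200),
    base[:, 1],
    0.8 * base[:, 1] + 0.2 * rng.normal(size=200),
])
ratios = pca_explained_variance(X)
```

Because the four columns form two strongly correlated pairs, the first two components capture most of the variance, mirroring the 98% figure reported above.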
International Nuclear Information System (INIS)
Reynolds, Jacob G.
2013-01-01
Partial molar properties are the changes in a mixture property that occur when the mole fraction of one component is varied while the mole fractions of all other components change proportionally. They have many practical and theoretical applications in chemical thermodynamics. Partial molar properties of chemical mixtures are difficult to measure because the component mole fractions must sum to one, so a change in the fraction of one component must be offset by a change in one or more other components. Given that more than one component fraction changes at a time, it is difficult to assign a change in measured response to a change in a single component. In this study, the Component Slope Linear Model (CSLM), a model previously published in the statistics literature, is shown to have coefficients that correspond to the intensive partial molar properties. If a measured property is plotted against the mole fraction of a component while keeping the proportions of all other components constant, the slope at any given point on this curve is the partial molar property for that constituent. Plotting this graph has been used to determine partial molar properties for many years; the CSLM directly includes this slope in a model that predicts properties as a function of the component mole fractions. The model is demonstrated by applying it to constant-pressure heat capacity data from the NaOH-NaAl(OH)4-H2O system, a simplified analogue of Hanford nuclear waste. The partial molar properties of H2O, NaOH, and NaAl(OH)4 are determined. The equivalence of the CSLM and the graphical method is verified by comparing results determined by the two methods. The CSLM has previously been used to predict the liquidus temperature of spinel crystals precipitated from Hanford waste glass; those model coefficients are re-interpreted here as the partial molar spinel liquidus temperatures of the glass components.
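The graphical method that the CSLM encodes can be sketched for a binary mixture via the tangent-intercept construction, where the partial molar property of component 1 is M1_bar = y + (1 - x1) * dy/dx1. The pure-component heat capacities below are hypothetical, and the mixture is assumed ideal so that the recovered partial molar value is constant and easy to check.

```python
import numpy as np

def partial_molar_tangent(x1, y):
    # Tangent-intercept construction for a binary mixture:
    # M1_bar = y + (1 - x1) * dy/dx1, with the slope taken numerically.
    dydx = np.gradient(y, x1)
    return y + (1.0 - x1) * dydx

# Hypothetical pure-component heat capacities, J/(mol K); ideal mixing assumed,
# so the mixture property is linear in x1 and M1_bar equals c1 everywhere.
c1, c2 = 75.3, 87.0
x1 = np.linspace(0.05, 0.95, 50)
y = c1 * x1 + c2 * (1.0 - x1)
m1_bar = partial_molar_tangent(x1, y)
```

With real (non-ideal) data the numerical slope would vary with composition, and the curve of m1_bar against x1 would trace the composition-dependent partial molar property.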
Modelling the effect of mixture components on permeation through skin.
Ghafourian, T; Samaras, E G; Brooks, J D; Riviere, J E
2010-10-15
A vehicle influences the concentration of penetrant within the membrane, affecting its diffusivity in the skin and rate of transport. Despite the huge amount of effort made for the understanding and modelling of the skin absorption of chemicals, a reliable estimation of the skin penetration potential from formulations remains a challenging objective. In this investigation, quantitative structure-activity relationship (QSAR) was employed to relate the skin permeation of compounds to the chemical properties of the mixture ingredients and the molecular structures of the penetrants. The skin permeability dataset consisted of permeability coefficients of 12 different penetrants each blended in 24 different solvent mixtures measured from finite-dose diffusion cell studies using porcine skin. Stepwise regression analysis resulted in a QSAR employing two penetrant descriptors and one solvent property. The penetrant descriptors were octanol/water partition coefficient, logP and the ninth order path molecular connectivity index, and the solvent property was the difference between boiling and melting points. The negative relationship between skin permeability coefficient and logP was attributed to the fact that most of the drugs in this particular dataset are extremely lipophilic in comparison with the compounds in the common skin permeability datasets used in QSAR. The findings show that compounds formulated in vehicles with small boiling and melting point gaps will be expected to have higher permeation through skin. The QSAR was validated internally, using a leave-many-out procedure, giving a mean absolute error of 0.396. The chemical space of the dataset was compared with that of the known skin permeability datasets and gaps were identified for future skin permeability measurements. Copyright 2010 Elsevier B.V. All rights reserved.
A review of typical thermal fatigue failure models for solder joints of electronic components
Li, Xiaoyan; Sun, Ruifeng; Wang, Yongdong
2017-09-01
For electronic components, cyclic plastic strain accumulates fatigue damage more readily than elastic strain. When solder joints undergo thermal expansion or contraction, the mismatch in coefficients of thermal expansion between the electronic component and its substrate produces differing thermal strains, leading to stress concentration. Under repeated cycling, cracks initiate and gradually propagate [1]. In this paper, the typical thermal fatigue failure models for solder joints of electronic components are classified, and the methods of obtaining the parameters in each model are summarized based on domestic and foreign literature research.
Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.
Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine
2010-09-01
Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component), whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects, and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.
Experiment planning using high-level component models at W7-X
International Nuclear Information System (INIS)
Lewerentz, Marc; Spring, Anett; Bluhm, Torsten; Heimann, Peter; Hennig, Christine; Kühner, Georg; Kroiss, Hugo; Krom, Johannes G.; Laqua, Heike; Maier, Josef; Riemann, Heike; Schacht, Jörg; Werner, Andreas; Zilker, Manfred
2012-01-01
Highlights: ► Introduction of models for an abstract description of fusion experiments. ► Component models support creating feasible experiment programs at planning time. ► Component models contain knowledge about physical and technical constraints. ► Generated views on models allow crucial information to be presented. - Abstract: The superconducting stellarator Wendelstein 7-X (W7-X) is a fusion device capable of steady-state operation; it is also a very complex technical system. To cope with these requirements, a modular and strongly hierarchical component-based control and data acquisition system has been designed. The behavior of W7-X is characterized by thousands of technical parameters of the participating components. The intended sequential change of those parameters during an experiment is defined in an experiment program. Planning such an experiment program is a crucial and complex task. To reduce the complexity, an abstract, more physics-oriented high-level layer was introduced earlier; the so-called high-level (physics) parameters are used to encapsulate technical details. This contribution focuses on the extension of this layer to a high-level component model, which completely describes the behavior of a component for a certain period of time. It allows defining not only simple value ranges but also complex dependencies between physics parameters: dependencies within components, dependencies between components, or temporal dependencies. Component models can then be analyzed to generate various views of an experiment. A first implementation of such an analysis process is already finished: a graphical preview of a planned discharge can be generated from a chronological sequence of component models, allowing physicists to survey complex planned experiment programs at a glance.
Variance-based Salt Body Reconstruction
Ovcharenko, Oleg
2017-05-26
Seismic inversions of salt bodies are challenging when updating velocity models based on Born-approximation-inspired gradient methods. We propose a variance-based method for velocity model reconstruction in regions complicated by massive salt bodies. The novel idea lies in retrieving useful information from simultaneous updates corresponding to different single frequencies. Instead of the commonly used averaging of single-iteration monofrequency gradients, our algorithm iteratively reconstructs salt bodies in an outer loop based on updates from a set of multiple frequencies after a few iterations of full-waveform inversion. The variance among these updates is used to identify areas where considerable cycle-skipping occurs. In such areas, we update velocities by interpolating maximum velocities within a certain region. The result of several recursive interpolations is later used as a new starting model to improve results of conventional full-waveform inversion. An application on part of the BP 2004 model highlights the evolution of the proposed approach and demonstrates its effectiveness.
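The core idea, measuring the variance among single-frequency updates to flag likely cycle-skipped regions, can be sketched generically. The synthetic "updates" below are random arrays with one injected region of disagreement; the quantile threshold is an assumption for illustration, not the paper's criterion.

```python
import numpy as np

def high_variance_mask(updates, quantile=0.9):
    # Point-wise variance across single-frequency model updates; cells in the
    # top decile of variance are flagged as candidate cycle-skipped regions.
    v = np.var(np.stack(updates), axis=0)
    return v > np.quantile(v, quantile), v

rng = np.random.default_rng(2)
updates = [rng.normal(scale=0.1, size=(50, 50)) for _ in range(5)]
for u in updates:
    # Inject a region where the frequency updates strongly disagree.
    u[20:30, 20:30] += rng.normal(scale=1.0, size=(10, 10))
mask, v = high_variance_mask(updates)
```

In the paper's workflow, cells flagged by such a mask would then receive interpolated maximum velocities rather than the averaged gradient update.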
A new model for reliability optimization of series-parallel systems with non-homogeneous components
International Nuclear Information System (INIS)
Feizabadi, Mohammad; Jahromi, Abdolhamid Eshraghniaye
2017-01-01
In discussions related to reliability optimization using redundancy allocation, one of the structures that has attracted the attention of many researchers is the series-parallel structure. In models previously presented for reliability optimization of series-parallel systems, there is a restricting assumption under which all components of a subsystem must be homogeneous. This constraint limits system designers in selecting components and prevents achieving higher levels of reliability. In this paper, a new model is proposed for reliability optimization of series-parallel systems, which makes possible the use of non-homogeneous components in each subsystem. As a result of this flexibility, the process of supplying system components will be easier. To solve the proposed model, since the redundancy allocation problem (RAP) belongs to the NP-hard class of optimization problems, a genetic algorithm (GA) is developed. The computational results of the designed GA are indicative of high performance of the proposed model in increasing system reliability and decreasing costs. - Highlights: • In this paper, a new model is proposed for reliability optimization of series-parallel systems. • In the previous models, there is a restricting assumption under which all components of a subsystem must be homogeneous. • The presented model provides a possibility for the subsystems' components to be non-homogeneous in the required conditions. • The computational results demonstrate the high performance of the proposed model in improving reliability and reducing costs.
Beyond the Mean: Sensitivities of the Variance of Population Growth.
Trotter, Meredith V; Krishna-Kumar, Siddharth; Tuljapurkar, Shripad
2013-03-01
Populations in variable environments are described by both a mean growth rate and a variance of stochastic population growth. Increasing variance will increase the width of confidence bounds around estimates of population size, growth, probability of and time to quasi-extinction. However, traditional sensitivity analyses of stochastic matrix models only consider the sensitivity of the mean growth rate. We derive an exact method for calculating the sensitivity of the variance in population growth to changes in demographic parameters. Sensitivities of the variance also allow a new sensitivity calculation for the cumulative probability of quasi-extinction. We apply this new analysis tool to an empirical dataset on at-risk polar bears to demonstrate its utility in conservation biology. We find that in many cases a change in life history parameters will increase both the mean and variance of population growth of polar bears. This counterintuitive behaviour of the variance complicates predictions about overall population impacts of management interventions. Sensitivity calculations for cumulative extinction risk factor in changes to both mean and variance, providing a highly useful quantitative tool for conservation management. The mean stochastic growth rate and its sensitivities do not fully describe the dynamics of population growth. The use of variance sensitivities gives a more complete understanding of population dynamics and facilitates the calculation of new sensitivities for extinction processes.
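The distinction between the mean and the variance of stochastic growth can be illustrated by direct simulation of a stochastic matrix model. The two environment matrices below are hypothetical stand-ins for good and bad years, not the polar bear data, and the statistics are simple sample moments rather than the paper's exact sensitivities.

```python
import numpy as np

def stochastic_growth_stats(matrices, probs, n_steps=20_000, seed=1):
    # Simulate a stochastic matrix model; return the mean and variance of the
    # one-step log growth rate of total population size.
    rng = np.random.default_rng(seed)
    n = np.ones(matrices[0].shape[0])
    n /= n.sum()
    logs = []
    for _ in range(n_steps):
        A = matrices[rng.choice(len(matrices), p=probs)]
        n = A @ n
        growth = n.sum()      # n was normalised, so this is the growth factor
        logs.append(np.log(growth))
        n /= growth           # renormalise to avoid overflow
    r = np.asarray(logs)
    return r.mean(), r.var()

# Two hypothetical environments (good/bad year) for a 2-stage life cycle.
good = np.array([[0.5, 2.0], [0.6, 0.7]])
bad = np.array([[0.3, 1.0], [0.4, 0.5]])
mean_r, var_r = stochastic_growth_stats([good, bad], [0.5, 0.5])
```

Perturbing an entry of either matrix and re-running the simulation shows numerically how a single demographic change can move the mean and the variance in the same direction, the effect the paper analyses exactly.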
Genotypic-specific variance in Caenorhabditis elegans lifetime fecundity.
Diaz, S Anaid; Viney, Mark
2014-06-01
Organisms live in heterogeneous environments, so strategies that maximize fitness in such environments will evolve. Variation in traits is important because it is the raw material on which natural selection acts during evolution. Phenotypic variation is usually thought to be due to genetic variation and/or environmentally induced effects. Therefore, genetically identical individuals in a constant environment should have invariant traits. Clearly, genetically identical individuals do differ phenotypically, usually thought to be due to stochastic processes. It is now becoming clear, especially from studies of unicellular species, that phenotypic variance among genetically identical individuals in a constant environment can be genetically controlled and that therefore, in principle, this can be subject to selection. However, there has been little investigation of these phenomena in multicellular species. Here, we have studied the mean lifetime fecundity (thus a trait likely to be relevant to reproductive success), and variance in lifetime fecundity, in recent wild isolates of the model nematode Caenorhabditis elegans. We found that these genotypes differed in their variance in lifetime fecundity: some had high variance in fecundity, others very low variance. We find that this variance in lifetime fecundity was negatively related to the mean lifetime fecundity of the lines, and that the variance of the lines was positively correlated between environments. We suggest that the variance in lifetime fecundity may be a bet-hedging strategy used by this species.
Markov bridges, bisection and variance reduction
DEFF Research Database (Denmark)
Asmussen, Søren; Hobolth, Asger
Time-continuous Markov jump processes are a popular modelling tool in disciplines ranging from computational finance and operations research to human genetics and genomics. The data are often sampled at discrete points in time, and it can be useful to simulate sample paths between the datapoints. In this paper we first consider the problem of generating sample paths from a continuous-time Markov chain conditioned on the endpoints, using a new algorithm based on the idea of bisection. Secondly, we study the potential of the bisection algorithm for variance reduction; in particular, examples are presented ...
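The bisection idea, sampling the bridge's state at the midpoint given the endpoints and then recursing on each half, has a particularly simple form for a two-state chain, where the transition probabilities are analytic. The sketch below works under that simplification and is not the paper's general algorithm; the rates are arbitrary illustrative values.

```python
import math
import random

def ctmc_p(i, j, t, a, b):
    # Transition probability for a two-state CTMC with rates a (0->1), b (1->0).
    s = a + b
    e = math.exp(-s * t)
    if i == 0:
        return (b + a * e) / s if j == 0 else (a - a * e) / s
    return (b - b * e) / s if j == 0 else (a + b * e) / s

def midpoint_sample(i, j, t, a, b, rng):
    # Bisection step: draw the bridge state at time t/2 given X(0)=i and X(t)=j,
    # with weights P_ik(t/2) * P_kj(t/2).
    w0 = ctmc_p(i, 0, t / 2, a, b) * ctmc_p(0, j, t / 2, a, b)
    w1 = ctmc_p(i, 1, t / 2, a, b) * ctmc_p(1, j, t / 2, a, b)
    return 0 if rng.random() * (w0 + w1) < w0 else 1

rng = random.Random(0)
a = b = 1.0
draws = [midpoint_sample(0, 0, 1.0, a, b, rng) for _ in range(50_000)]
freq1 = sum(draws) / len(draws)
# Chapman-Kolmogorov gives the exact conditional probability of state 1 at t/2.
exact1 = ctmc_p(0, 1, 0.5, a, b) * ctmc_p(1, 0, 0.5, a, b) / ctmc_p(0, 0, 1.0, a, b)
```

Applying the same midpoint draw recursively on [0, t/2] and [t/2, t] refines the bridge to any desired resolution.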
Directory of Open Access Journals (Sweden)
Monika Fleischhauer
2013-09-01
Meta-analytic data highlight the value of the Implicit Association Test (IAT) as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling, latent Big-Five personality factors (based on self- and peer-report) were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined in the sense of method-specific variance, which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense, defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign), biases that might result, for example, from the IAT's stimulus or category features. None of the explicit Big-Five factors was predictive of method-specific variance in the IATs (first analysis). However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis), a substantial effect of neuroticism occurred that may have been driven by the affective valence of IAT attribute categories and the facilitated processing of negative stimuli typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to
Carlson, James E.
2014-01-01
Many aspects of the geometry of linear statistical models and least squares estimation are well known, and discussions of the geometry may be found in many sources. Some aspects of the geometry relating to the partitioning of variation, which can be explained using a little-known theorem of Pappus and which have not been discussed previously, are the topic of…
A two-component dark matter model with real singlet scalars ...
Indian Academy of Sciences (India)
2016-01-05
Two-component dark matter model with real singlet scalars confronting the GeV γ-ray excess from the galactic centre and the Fermi bubbles. Debasish Majumdar; Kamakshya Prasad Modak; Subhendu Rakshit. Special: Cosmology Volume 86 Issue ...
Model-Based Design Tools for Extending COTS Components To Extreme Environments, Phase II
National Aeronautics and Space Administration — The innovation in this project is model-based design (MBD) tools for predicting the performance and useful life of commercial-off-the-shelf (COTS) components and...
New approaches to the modelling of multi-component fuel droplet heating and evaporation
Sazhin, Sergei S; Elwardany, Ahmed E; Heikal, Morgan R
2015-01-01
numbers n and temperatures is taken into account. The effects of temperature gradient and quasi-component diffusion inside droplets are taken into account. The analysis is based on the Effective Thermal Conductivity/Effective Diffusivity (ETC/ED) model
Multi-component fiber track modelling of diffusion-weighted magnetic resonance imaging data
Directory of Open Access Journals (Sweden)
Yasser M. Kadah
2010-01-01
In conventional diffusion tensor imaging (DTI) based on magnetic resonance data, each voxel is assumed to contain a single component having diffusion properties that can be fully represented by a single tensor. Even though this assumption can be valid in some cases, the general case involves the mixing of components, resulting in significant deviation from the single tensor model. Hence, a strategy that allows the decomposition of data based on a mixture model has the potential of enhancing the diagnostic value of DTI. This project aims to work towards the development and experimental verification of a robust method for solving the problem of multi-component modelling of diffusion tensor imaging data. The new method demonstrates significant error reduction from the single-component model while maintaining practicality for clinical applications, obtaining more accurate fiber tracking results.
Detailed finite element method modeling of evaporating multi-component droplets
Energy Technology Data Exchange (ETDEWEB)
Diddens, Christian, E-mail: C.Diddens@tue.nl
2017-07-01
The evaporation of sessile multi-component droplets is modeled with an axisymmetric finite element method. The model comprises the coupled processes of mixture evaporation, multi-component flow with composition-dependent fluid properties and thermal effects. Based on representative examples of water–glycerol and water–ethanol droplets, regular and chaotic examples of solutal Marangoni flows are discussed. Furthermore, the relevance of the substrate thickness for the evaporative cooling of volatile binary mixture droplets is pointed out. It is shown how the evaporation of the more volatile component can drastically decrease the interface temperature, so that ambient vapor of the less volatile component condenses on the droplet. Finally, results of this model are compared with corresponding results of a lubrication theory model, showing that the application of lubrication theory can cause considerable errors even for moderate contact angles of 40°.
DEFF Research Database (Denmark)
Janssen, Hans; Blocken, Bert; Carmeliet, Jan
2007-01-01
While the transfer equations for moisture and heat in building components are currently undergoing standardisation, atmospheric boundary conditions, conservative modelling and numerical efficiency are not addressed. In a first part, this paper adds a comprehensive description of those boundary...
A proposed centralised distribution model for the South African automotive component industry
Directory of Open Access Journals (Sweden)
Micheline J. Naude
2009-12-01
Purpose: This article explores the possibility of developing a distribution model, similar to the model developed and implemented by the South African pharmaceutical industry, which could be implemented by automotive component manufacturers for supply to independent retailers. Problem investigated: The South African automotive components distribution chain is extensive, with a number of players of varying sizes, from the larger spares distribution groups to a number of independent retailers. Distributing to the smaller independent retailers is costly for the automotive component manufacturers. Methodology: This study is based on a preliminary study of an explorative nature. Interviews were conducted with a senior staff member from a leading automotive component manufacturer in KwaZulu-Natal and nine participants at a senior management level at five of their main customers (aftermarket retailers). Findings: The findings from the empirical study suggest that the aftermarket component industry is mature, with the role players well established. The distribution chain to the independent retailer is expensive in terms of transaction and distribution costs for the automotive component manufacturer. A proposed centralised distribution model for supply to independent retailers has been developed, which should reduce distribution costs for the automotive component manufacturer in terms of (1) the lowest possible freight rate; (2) timely and controlled delivery; and (3) reduced congestion at the customer's receiving dock. Originality: This research is original in that it explores the possibility of implementing a centralised distribution model for independent retailers in the automotive component industry. Furthermore, there is a dearth of published research on the South African automotive component industry, particularly addressing distribution issues. Conclusion: The distribution model as suggested is a practical one and should deliver added value to automotive
International Nuclear Information System (INIS)
Morita, K.; Fukuda, K.; Tobita, Y.; Kondo, Sa.; Suzuki, T.; Maschek, W.
2003-01-01
A new multi-component vaporization/condensation (V/C) model was developed to provide a generalized model for safety analysis codes of liquid metal cooled reactors (LMRs). These codes simulate thermal-hydraulic phenomena of multi-phase, multi-component flows, which is essential to investigate core disruptive accidents of LMRs such as fast breeder reactors and accelerator driven systems. The developed model characterizes the V/C processes associated with phase transition by employing heat transfer and mass-diffusion limited models for analyses of relatively short-time-scale multi-phase, multi-component hydraulic problems, among which vaporization and condensation, or simultaneous heat and mass transfer, play an important role. The heat transfer limited model describes the non-equilibrium phase transition processes occurring at interfaces, while the mass-diffusion limited model is employed to represent effects of non-condensable gases and multi-component mixture on V/C processes. Verification of the model and method employed in the multi-component V/C model of a multi-phase flow code was performed successfully by analyzing a series of multi-bubble condensation experiments. The applicability of the model to the accident analysis of LMRs is also discussed by comparison between steam and metallic vapor systems. (orig.)
Revealing the equivalence of two clonal survival models by principal component analysis
International Nuclear Information System (INIS)
Lachet, Bernard; Dufour, Jacques
1976-01-01
The principal component analysis of 21 chlorella cell survival curves, adjusted by one-hit and two-hit target models, led to quite similar projections on the principal plane: the homologous parameters of these models are linearly correlated; the reason for the statistical equivalence of these two models, in the present state of experimental inaccuracy, is revealed.
A model-based software development methodology for high-end automotive components
Ravanan, Mahmoud
2014-01-01
This report provides a model-based software development methodology for high-end automotive components. The V-model is used as a process model throughout the development of the software platform. It offers a framework that simplifies the relation between requirements, design, implementation,
Stability equation and two-component Eigenmode for domain walls in scalar potential model
International Nuclear Information System (INIS)
Dias, G.S.; Graca, E.L.; Rodrigues, R. de Lima
2002-08-01
Supersymmetric quantum mechanics involving a two-component representation and two-component eigenfunctions is applied to obtain the stability equation associated with a potential model formulated in terms of two coupled real scalar fields. We investigate the question of stability by introducing an operator technique for the Bogomol'nyi-Prasad-Sommerfield (BPS) and non-BPS states on two domain walls in a scalar potential model with minimal N=1 supersymmetry. (author)
A Bias and Variance Analysis for Multistep-Ahead Time Series Forecasting.
Ben Taieb, Souhaib; Atiya, Amir F
2016-01-01
Multistep-ahead forecasts can either be produced recursively by iterating a one-step-ahead time series model or directly by estimating a separate model for each forecast horizon. In addition, there are other strategies; some of them combine aspects of both aforementioned concepts. In this paper, we present a comprehensive investigation into the bias and variance behavior of multistep-ahead forecasting strategies. We provide a detailed review of the different multistep-ahead strategies. Subsequently, we perform a theoretical study that derives the bias and variance for a number of forecasting strategies. Finally, we conduct a Monte Carlo experimental study that compares and evaluates the bias and variance performance of the different strategies. From the theoretical and the simulation studies, we analyze the effect of different factors, such as the forecast horizon and the time series length, on the bias and variance components, and on the different multistep-ahead strategies. Several lessons are learned, and recommendations are given concerning the advantages, disadvantages, and best conditions of use of each strategy.
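The two basic strategies compared in the paper can be sketched for an AR(1) process: the recursive forecast iterates the one-step coefficient h times, while the direct forecast fits a separate coefficient per horizon. The coefficient 0.8, the series length, and the horizon below are arbitrary choices for illustration.

```python
import numpy as np

def fit_ar1(y):
    # Least-squares AR(1) coefficient: y_t ~ phi * y_{t-1}
    return np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

def recursive_forecast(y, h):
    # Iterate the one-step model h times: y_{t+h} ~ phi**h * y_t
    return fit_ar1(y) ** h * y[-1]

def direct_forecast(y, h):
    # Fit a separate model per horizon: y_{t+h} ~ phi_h * y_t
    phi_h = np.dot(y[:-h], y[h:]) / np.dot(y[:-h], y[:-h])
    return phi_h * y[-1]

rng = np.random.default_rng(3)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * y[t - 1] + rng.normal()
f_rec = recursive_forecast(y, h=5)
f_dir = direct_forecast(y, h=5)
```

When the one-step model is correctly specified, as here, recursion propagates a single well-estimated coefficient (lower variance); when it is misspecified, the direct fit avoids compounding the one-step bias, which is the trade-off the paper quantifies.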
Seismic assessment and performance of nonstructural components affected by structural modeling
Energy Technology Data Exchange (ETDEWEB)
Hur, Jieun; Althoff, Eric; Sezen, Halil; Denning, Richard; Aldemir, Tunc [Ohio State University, Columbus (United States)
2017-03-15
Seismic probabilistic risk assessment (SPRA) requires a large number of simulations to evaluate the seismic vulnerability of structural and nonstructural components in nuclear power plants. The effect of structural modeling and analysis assumptions on dynamic analysis of 3D and simplified 2D stick models of auxiliary buildings and the attached nonstructural components is investigated. Dynamic characteristics and seismic performance of building models are also evaluated, as well as the computational accuracy of the models. The presented results provide a better understanding of the dynamic behavior and seismic performance of auxiliary buildings. The results also help to quantify the impact of uncertainties associated with modeling and analysis of simplified numerical models of structural and nonstructural components subjected to seismic shaking on the predicted seismic failure probabilities of these systems.
Directory of Open Access Journals (Sweden)
Aderbal Cavalcante-Neto
2011-12-01
The objective of this work was to compare random regression models with different residual variance structures in order to obtain the best modelling for the trait litter size at birth (LSB) in swine. A total of 1,701 LSB records were analyzed by means of a single-trait random regression animal model. The fixed and random regressions were represented by continuous functions over parity order, fitted with third-order Legendre orthogonal polynomials. To find the best model for the residual variance, heterogeneity of variance was considered through 1 to 7 residual variance classes. The general model of analysis included contemporary group as a fixed effect; fixed regression coefficients to model the mean trajectory of the population; random regression coefficients for the direct additive genetic, common-litter and animal permanent environmental effects; and the random residual effect. The likelihood ratio test, the Akaike information criterion and the Schwarz Bayesian information criterion indicated the model assuming homogeneous residual variance as the one providing the best fit to the data. Heritability estimates were close to zero (0.002 to 0.006). The permanent environmental effect increased from the 1st (0.06) to the 5th (0.28) parity, but decreased from that point to the 7th (0.18). The common-litter effect showed low values (0.01 to 0.02). Assuming homogeneous residual variance was the most adequate way to model the variances associated with the trait litter size at birth in this dataset.
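The third-order Legendre polynomial covariates used for the fixed and random regressions can be sketched as follows: parity order is standardised to [-1, 1] and the polynomials of degree 0 to 3 are evaluated at each record's parity. The parity range 1 to 7 follows the abstract; the implementation details are an assumption.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(parity, max_order=3, lo=1.0, hi=7.0):
    # Standardise parity order to [-1, 1], then evaluate Legendre polynomials
    # of degree 0..max_order as random regression covariates.
    t = -1.0 + 2.0 * (np.asarray(parity, dtype=float) - lo) / (hi - lo)
    return np.column_stack([legendre.legval(t, np.eye(max_order + 1)[k])
                            for k in range(max_order + 1)])

Phi = legendre_covariates([1, 2, 3, 4, 5, 6, 7])
```

Each row of Phi would multiply the corresponding animal's regression coefficients (fixed, additive genetic, common-litter, or permanent environmental) in the mixed model equations.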
The n-component cubic model and flows: subgraph break-collapse method
International Nuclear Information System (INIS)
Essam, J.W.; Magalhaes, A.C.N. de.
1988-01-01
We generalise to the n-component cubic model the subgraph break-collapse method which we previously developed for the Potts model. The relations used are based on expressions which we recently derived for the Z(λ) model in terms of mod-λ flows. Our recursive algorithm is similar, for n = 2, to the break-collapse method for the Z(4) model proposed by Mariz and coworkers. It allows exact calculation of the partition function and correlation functions for n-component cubic clusters with n as a variable, without the need to examine all of the spin configurations. (author)
Variance computations for functional of absolute risk estimates.
Pfeiffer, R M; Petracci, E
2011-07-01
We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.
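The bootstrap baseline against which the influence-function variances are compared can be sketched for a toy statistic. A minimal Python illustration (the sample mean stands in for the paper's absolute-risk functionals; function names are assumptions, not the authors' code):

```python
import random
import statistics

def bootstrap_variance(data, stat, n_boot=2000, seed=0):
    """Nonparametric bootstrap estimate of the variance of `stat`."""
    rng = random.Random(seed)
    n = len(data)
    reps = [stat([rng.choice(data) for _ in range(n)]) for _ in range(n_boot)]
    return statistics.pvariance(reps)

# For the sample mean, the analytic (influence-function) variance is s^2/n,
# so the bootstrap and analytic estimates should agree closely.
rng = random.Random(42)
sample = [rng.gauss(0.0, 1.0) for _ in range(200)]
bv = bootstrap_variance(sample, statistics.fmean)
analytic = statistics.pvariance(sample) / len(sample)
```

For genuinely complex functionals of absolute risk, the influence-function approach described in the abstract avoids this resampling cost entirely.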
Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao
2016-02-01
The paper designs a roundness measurement model with multi-systematic error, which takes eccentricity, probe offset, radius of tip head of probe, and tilt error into account for roundness measurement of cylindrical components. The effects of the systematic errors and radius of components are analysed in the roundness measurement. The proposed method is built on the instrument with a high precision rotating spindle. The effectiveness of the proposed method is verified by experiment with the standard cylindrical component, which is measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be increased by about 2.2 μm using the proposed roundness measurement model for the object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius.
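The traditional limacon model mentioned above separates spindle eccentricity from the roundness profile by least squares. A sketch under simplifying assumptions (equally spaced probe samples over one full revolution, no probe offset or tilt; data values are made up):

```python
import math

def limacon_fit(radii):
    """Least-squares fit of r(theta) = R + ex*cos(theta) + ey*sin(theta)
    to equally spaced roundness data. Over a full uniform revolution the
    design columns 1, cos, sin are orthogonal, so the fit is closed-form."""
    n = len(radii)
    thetas = [2.0 * math.pi * k / n for k in range(n)]
    R = sum(radii) / n
    ex = 2.0 * sum(r * math.cos(t) for r, t in zip(radii, thetas)) / n
    ey = 2.0 * sum(r * math.sin(t) for r, t in zip(radii, thetas)) / n
    # The residual is the eccentricity-free roundness profile.
    residual = [r - (R + ex * math.cos(t) + ey * math.sin(t))
                for r, t in zip(radii, thetas)]
    return R, ex, ey, residual

# Synthetic probe data: radius 37 mm, eccentricity (0.05, -0.02) mm,
# plus a 3-lobe form error of 0.001 mm amplitude.
n = 360
data = [37.0 + 0.05 * math.cos(2 * math.pi * k / n)
        - 0.02 * math.sin(2 * math.pi * k / n)
        + 0.001 * math.cos(3 * 2 * math.pi * k / n) for k in range(n)]
R, ex, ey, prof = limacon_fit(data)
```

The proposed multi-systematic-error model extends this baseline by also accounting for probe offset, tip radius and tilt, which matter most for large-radius components like the 37 mm example.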
Speed Variance and Its Influence on Accidents.
Garber, Nicholas J.; Gadirau, Ravi
A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…
Two component WIMP-FImP dark matter model with singlet fermion, scalar and pseudo scalar
Energy Technology Data Exchange (ETDEWEB)
Dutta Banik, Amit; Pandey, Madhurima; Majumdar, Debasish [Saha Institute of Nuclear Physics, HBNI, Astroparticle Physics and Cosmology Division, Kolkata (India); Biswas, Anirban [Harish Chandra Research Institute, Allahabad (India)
2017-10-15
We explore a two component dark matter model with a fermion and a scalar. In this scenario the Standard Model (SM) is extended by a fermion, a scalar and an additional pseudo scalar. The fermionic component is assumed to have a global U(1){sub DM} and interacts with the pseudo scalar via Yukawa interaction while a Z{sub 2} symmetry is imposed on the other component - the scalar. These ensure the stability of both dark matter components. Although the Lagrangian of the present model is CP conserving, the CP symmetry breaks spontaneously when the pseudo scalar acquires a vacuum expectation value (VEV). The scalar component of the dark matter in the present model also develops a VEV on spontaneous breaking of the Z{sub 2} symmetry. Thus the various interactions of the dark sector and the SM sector occur through the mixing of the SM like Higgs boson, the pseudo scalar Higgs like boson and the singlet scalar boson. We show that the observed gamma ray excess from the Galactic Centre as well as the 3.55 keV X-ray line from Perseus, Andromeda etc. can be simultaneously explained in the present two component dark matter model and the dark matter self interaction is found to be an order of magnitude smaller than the upper limit estimated from the observational results. (orig.)
Penalising Model Component Complexity: A Principled, Practical Approach to Constructing Priors
Simpson, Daniel
2017-04-06
In this paper, we introduce a new concept for constructing prior distributions. We exploit the natural nested structure inherent to many model components, which defines the model component to be a flexible extension of a base model. Proper priors are defined to penalise the complexity induced by deviating from the simpler base model and are formulated after the input of a user-defined scaling parameter for that model component, both in the univariate and the multivariate case. These priors are invariant to reparameterisations, have a natural connection to Jeffreys' priors, are designed to support Occam's razor and seem to have excellent robustness properties, all of which are highly desirable and allow us to use this approach to define default prior distributions. Through examples and theoretical results, we demonstrate the appropriateness of this approach and how it can be applied in various situations.
Penalising Model Component Complexity: A Principled, Practical Approach to Constructing Priors
Simpson, Daniel; Rue, Haavard; Riebler, Andrea; Martins, Thiago G.; Sørbye, Sigrunn H.
2017-01-01
In this paper, we introduce a new concept for constructing prior distributions. We exploit the natural nested structure inherent to many model components, which defines the model component to be a flexible extension of a base model. Proper priors are defined to penalise the complexity induced by deviating from the simpler base model and are formulated after the input of a user-defined scaling parameter for that model component, both in the univariate and the multivariate case. These priors are invariant to reparameterisations, have a natural connection to Jeffreys' priors, are designed to support Occam's razor and seem to have excellent robustness properties, all of which are highly desirable and allow us to use this approach to define default prior distributions. Through examples and theoretical results, we demonstrate the appropriateness of this approach and how it can be applied in various situations.
A versatile omnibus test for detecting mean and variance heterogeneity.
Cao, Ying; Wei, Peng; Bailey, Matthew; Kauwe, John S K; Maxwell, Taylor J
2014-01-01
Recent research has revealed loci that display variance heterogeneity through various means such as biological disruption, linkage disequilibrium (LD), gene-by-gene (G × G), or gene-by-environment interaction. We propose a versatile likelihood ratio test that allows joint testing for mean and variance heterogeneity (LRT(MV)) or either effect alone (LRT(M) or LRT(V)) in the presence of covariates. Using extensive simulations for our method and others, we found that all parametric tests were sensitive to nonnormality regardless of any trait transformations. Coupling our test with the parametric bootstrap solves this issue. Using simulations and empirical data from a known mean-only functional variant, we demonstrate how LD can produce variance-heterogeneity loci (vQTL) in a predictable fashion based on differential allele frequencies, high D', and relatively low r² values. We propose that a joint test for mean and variance heterogeneity is more powerful than a variance-only test for detecting vQTL. This takes advantage of loci that also have mean effects without sacrificing much power to detect variance only effects. We discuss using vQTL as an approach to detect G × G interactions and also how vQTL are related to relationship loci, and how both can create prior hypothesis for each other and reveal the relationships between traits and possibly between components of a composite trait.
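The variance-only test (LRT(V)) can be illustrated in the two-group case under normality; this is a sketch, not the authors' implementation, and its chi-square reference distribution is exactly the part their parametric bootstrap replaces when traits are non-normal:

```python
import math
import statistics

def lrt_variance(g1, g2):
    """Likelihood-ratio statistic for variance heterogeneity between two
    groups under normality (group means free under both hypotheses).
    Compare to a chi-square distribution with 1 df."""
    n1, n2 = len(g1), len(g2)
    # Residual sums of squares about each group's own mean.
    rss1 = sum((x - statistics.fmean(g1)) ** 2 for x in g1)
    rss2 = sum((x - statistics.fmean(g2)) ** 2 for x in g2)
    s1, s2 = rss1 / n1, rss2 / n2            # per-group MLE variances
    s0 = (rss1 + rss2) / (n1 + n2)           # pooled MLE under H0
    # 2*(logL1 - logL0); the constants and -n/2 terms cancel.
    return (n1 + n2) * math.log(s0) - n1 * math.log(s1) - n2 * math.log(s2)

# Second group has three times the spread of the first.
lrt = lrt_variance([-2.0, -1.0, 0.0, 1.0, 2.0], [-6.0, -3.0, 0.0, 3.0, 6.0])
```

The joint test LRT(MV) additionally frees the means under the alternative; the same cancellation of constants applies.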
Holmes, Tyson H; He, Xiao-Song
2016-10-01
Small, wide data sets are commonplace in human immunophenotyping research. As defined here, a small, wide data set is constructed by sampling a small to modest quantity n of subjects on a much larger number of immune features. This article offers prescriptions for the regression modeling of small, wide data sets. These prescriptions are distinctive in their especially heavy emphasis on minimizing the use of out-of-sample information for conducting statistical inference. This allows the working immunologist to proceed without being encumbered by imposed and often untestable statistical assumptions. Problems of unmeasured confounders, confidence-interval coverage, feature selection, and shrinkage/denoising are defined clearly and treated in detail. We propose an extension of an existing nonparametric technique for improved small-sample confidence-interval tail coverage from the univariate case (single immune feature) to the multivariate (many, possibly correlated immune features). An important role for derived features in the immunological interpretation of regression analyses is stressed. Areas of further research are discussed. Presented principles and methods are illustrated through application to a small, wide data set of adults spanning a wide range in ages and multiple immunophenotypes that were assayed before and after immunization with inactivated influenza vaccine (IIV). Our regression modeling prescriptions identify some potentially important topics for future immunological research. 1) Immunologists may wish to distinguish age-related differences in immune features from changes in immune features caused by aging. 2) A form of the bootstrap that employs linear extrapolation may prove to be an invaluable analytic tool because it allows the working immunologist to obtain accurate estimates of the stability of immune parameter estimates with a bare minimum of imposed assumptions. 3) Liberal inclusion of immune features in phenotyping panels can facilitate accurate separation of biological signal of interest from noise. In addition, through a combination of denoising and
Global Variance Risk Premium and Forex Return Predictability
Aloosh, Arash
2014-01-01
In a long-run risk model with stochastic volatility and frictionless markets, I express expected forex returns as a function of consumption growth variances and stock variance risk premiums (VRPs)—the difference between the risk-neutral and statistical expectations of market return variation. This provides a motivation for using the forward-looking information available in stock market volatility indices to predict forex returns. Empirically, I find that stock VRPs predict forex returns at a ...
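The stock variance risk premium used as a predictor above is the gap between risk-neutral and realized return variation over the same horizon. A back-of-envelope computation (the de-annualization convention, the 21-day window and the input values are assumptions for illustration):

```python
def realized_variance(returns):
    """Sum of squared returns over the window, a standard realized-variance proxy."""
    return sum(r * r for r in returns)

def variance_risk_premium(vol_index, daily_returns, days=21, year_days=252):
    """VRP = risk-neutral expected variance minus realized variance.
    `vol_index` is a VIX-style index quoted in annualized vol points,
    de-annualized here to the `days`-day horizon."""
    rn_var = (vol_index / 100.0) ** 2 * days / year_days
    return rn_var - realized_variance(daily_returns[-days:])

# With a 20-point vol index and 21 daily returns of 1% each, the
# risk-neutral variance exceeds realized variance, giving a positive VRP.
vrp = variance_risk_premium(20.0, [0.01] * 21)
```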
A mesoscopic reaction rate model for shock initiation of multi-component PBX explosives.
Liu, Y R; Duan, Z P; Zhang, Z Y; Ou, Z C; Huang, F L
2016-11-05
The primary goal of this research is to develop a three-term mesoscopic reaction rate model that consists of hot-spot ignition, low-pressure slow-burning and high-pressure fast-reaction terms for shock initiation of multi-component Plastic Bonded Explosives (PBX). Specifically, based on the DZK hot-spot model for a single-component PBX explosive, the hot-spot ignition term as well as its reaction rate is obtained through a "mixing rule" of the explosive components; new expressions for both the low-pressure slow-burning term and the high-pressure fast-reaction term are also obtained by establishing the relationships between the reaction rate of the multi-component PBX explosive and those of its explosive components, based on the corresponding terms of a mesoscopic reaction rate model. Furthermore, for verification, the new reaction rate model is incorporated into the DYNA2D code to simulate numerically the shock initiation process of the PBXC03 and PBXC10 multi-component PBX explosives, and the numerical pressure histories at different Lagrange locations in the explosive are found to be in good agreement with previous experimental data. Copyright © 2016 Elsevier B.V. All rights reserved.
Ahmadi, Maryam; Damanabi, Shahla; Sadoughi, Farahnaz
2014-04-01
The National Health Information System plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve health and the quality and effectiveness of health care. In other words, using the national health information system one can improve the quality of the health data, information and knowledge used to support decision making at all levels and areas of the health sector. Since full identification of the components of this system seems necessary for better planning and for managing the factors that influence its performance, this study explores the different attitudes towards the components of this system comparatively. This is a descriptive, comparative study. The study material includes printed and electronic documents describing the components of the national health information system in three parts: input, process and output. In this context, searches of library resources and the internet were conducted, and the data analysis was presented using comparative tables and qualitative data. The findings showed three different perspectives on the components of the national health information system: the Lippeveld, Sauerborn and Bodart model of 2000; the Health Metrics Network (HMN) model from the World Health Organization in 2008; and Gattini's 2009 model. All three models require, in the input section (resources and structure), components of management and leadership, planning and program design, staffing, software and hardware, and facilities and equipment. In the "process" section, all three models point to the actions ensuring the quality of the health information system, and in the output section, except for the Lippeveld model, the two other models consider information products and the use and distribution of information as components of the national health information system. The results showed that all three models have had a brief discussion about the
Stefanutti, Luca; Robusto, Egidio; Vianello, Michelangelo; Anselmi, Pasquale
2013-06-01
A formal model is proposed that decomposes the implicit association test (IAT) effect into three process components: stimuli discrimination, automatic association, and termination criterion. Both response accuracy and reaction time are considered. Four independent and parallel Poisson processes, one for each of the four label categories of the IAT, are assumed. The model parameters are the rate at which information accrues on the counter of each process and the amount of information that is needed before a response is given. The aim of this study is to present the model and an illustrative application in which the process components of a Coca-Pepsi IAT are decomposed.
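The parallel Poisson processes described above can be simulated directly: each counter's finishing time is a sum of exponential interarrivals (a gamma variate), and the first counter to reach its threshold determines both the response and the reaction time. A two-accumulator sketch (rates, threshold and trial count are made-up values, not estimates from the paper):

```python
import random

def simulate_race_trial(rate_correct, rate_error, threshold, rng):
    """One trial of a parallel Poisson race.  Each counter needs
    `threshold` counts; interarrival times are exponential with the
    process's accrual rate.  Returns (was_correct, reaction_time)."""
    t_correct = sum(rng.expovariate(rate_correct) for _ in range(threshold))
    t_error = sum(rng.expovariate(rate_error) for _ in range(threshold))
    return t_correct < t_error, min(t_correct, t_error)

# Faster accrual on the correct counter yields high accuracy and short RTs.
rng = random.Random(1)
trials = [simulate_race_trial(3.0, 1.0, 5, rng) for _ in range(2000)]
accuracy = sum(c for c, _ in trials) / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
```

The full IAT model uses four such processes, one per label category, with the accrual rate and threshold as the parameters to be estimated.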
OSCAR2000 : a multi-component 3-dimensional oil spill contingency and response model
International Nuclear Information System (INIS)
Reed, M.; Daling, P.S.; Brakstad, O.G.; Singsaas, I.; Faksness, L.-G.; Hetland, B.; Ekrol, N.
2000-01-01
Researchers at SINTEF in Norway have studied the weathering of surface oil. They developed a realistic model to analyze alternative spill response strategies. The model represented the formation and composition of the water-accommodated fraction (WAF) of oil for both treated and untreated oil spills. As many as 25 components, pseudo-components, or metabolites were allowed for the specification of oil. Calculations effected using OSCAR were verified in great detail on numerous occasions. The model made it possible to determine rather realistically the dissolution, transformation, and toxicology of dispersed oil clouds, as well as evaporation, emulsification, and natural dispersion. OSCAR comprised a data-based oil weathering model, a three-dimensional oil trajectory and chemical fates model, an oil spill combat model, exposure models for birds, marine mammals, fish and ichthyoplankton. 17 refs., 1 tab., 11 figs
DEFF Research Database (Denmark)
Stamatelos, Dimtrios; Kappatos, Vassilios
2017-01-01
Purpose – This paper presents the development of an advanced structural assessment approach for aerospace components (metallic and composite). This work focuses on developing an automatic image processing methodology based on Non Destructive Testing (NDT) data and numerical models, for predicting the residual strength of these components. Design/methodology/approach – An image processing algorithm, based on the threshold method, has been developed to process and quantify the geometric characteristics of damages. Then, a parametric Finite Element (FE) model of the damaged component is developed based on the inputs acquired from the image processing algorithm. The analysis of the metallic structures employs the Extended FE Method (XFEM), while for the composite structures the Cohesive Zone Model (CZM) technique with Progressive Damage Modelling (PDM) is used. Findings – The numerical analyses...
Model of the fine-grain component of martian soil based on Viking lander data
International Nuclear Information System (INIS)
Nussinov, M.D.; Chernyak, Y.B.; Ettinger, J.L.
1978-01-01
A model of the fine-grain component of the Martian soil is proposed. The model is based on well-known physical phenomena, and enables an explanation of the evolution of the gases released in the GEX (gas exchange experiments) and GCMS (gas chromatography-mass spectrometer experiments) of the Viking landers. (author)
Van Mechelen, Iven; Kiers, Henk A.L.
1999-01-01
The three-mode component analysis model is discussed as a tool for a contextualized study of personality. When applied to person x situation x response data, the model includes sets of latent dimensions for persons, situations, and responses as well as a so-called core array, which may be considered
Variance estimation in the analysis of microarray data
Wang, Yuedong
2009-04-01
Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.
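The errors-in-variables bias described above can be reproduced with the naive fit of the variance-mean relation, here simplified to Var = a + b*mu^2, regressing per-gene sample variances on squared sample means (parameter values are illustrative; the paper's SIMEX-style and semiparametric estimators are what correct this fit):

```python
import random
import statistics

def fit_variance_mean(means_sq, variances):
    """Naive OLS fit of s^2 = a + b * xbar^2 across genes."""
    mx = statistics.fmean(means_sq)
    my = statistics.fmean(variances)
    sxx = sum((x - mx) ** 2 for x in means_sq)
    sxy = sum((x - mx) * (y - my) for x, y in zip(means_sq, variances))
    b = sxy / sxx
    return my - b * mx, b  # intercept a, slope b

# Simulate genes with true Var = 0.5 + 0.04 * mu^2 and only 3 replicates:
# the regime where measurement error in xbar attenuates the naive slope.
rng = random.Random(7)
xbar2, s2 = [], []
for _ in range(3000):
    mu = rng.uniform(2.0, 20.0)
    sd = (0.5 + 0.04 * mu * mu) ** 0.5
    reps = [rng.gauss(mu, sd) for _ in range(3)]
    xbar2.append(statistics.fmean(reps) ** 2)
    s2.append(statistics.variance(reps))
a_hat, b_hat = fit_variance_mean(xbar2, s2)
```

With so few replicates, xbar^2 is a noisy, biased proxy for mu^2, so the fitted slope and intercept drift from the truth; that is precisely the bias the consistent estimators in the abstract remove.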
Hybrid biasing approaches for global variance reduction
International Nuclear Information System (INIS)
Wu, Zeyun; Abdel-Khalik, Hany S.
2013-01-01
A new variant of Monte Carlo—deterministic (DT) hybrid variance reduction approach based on Gaussian process theory is presented for accelerating convergence of Monte Carlo simulation and compared with Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) approach implemented in the SCALE package from Oak Ridge National Laboratory. The new approach, denoted the Gaussian process approach, treats the responses of interest as normally distributed random processes. The Gaussian process approach improves the selection of the weight windows of simulated particles by identifying a subspace that captures the dominant sources of statistical response variations. Like the FW-CADIS approach, the Gaussian process approach utilizes particle importance maps obtained from deterministic adjoint models to derive weight window biasing. In contrast to the FW-CADIS approach, the Gaussian process approach identifies the response correlations (via a covariance matrix) and employs them to reduce the computational overhead required for global variance reduction (GVR) purpose. The effective rank of the covariance matrix identifies the minimum number of uncorrelated pseudo responses, which are employed to bias simulated particles. Numerical experiments, serving as a proof of principle, are presented to compare the Gaussian process and FW-CADIS approaches in terms of the global reduction in standard deviation of the estimated responses. - Highlights: ► Hybrid Monte Carlo Deterministic Method based on Gaussian Process Model is introduced. ► Method employs deterministic model to calculate responses correlations. ► Method employs correlations to bias Monte Carlo transport. ► Method compared to FW-CADIS methodology in SCALE code. ► An order of magnitude speed up is achieved for a PWR core model.
A Co-modeling Method Based on Component Features for Mechatronic Devices in Aero-engines
Wang, Bin; Zhao, Haocen; Ye, Zhifeng
2017-08-01
Data-fused and user-friendly design of aero-engine accessories is required because of their structural complexity and stringent reliability requirements. This paper gives an overview of a typical aero-engine control system and the development process of the key mechatronic devices used. Several essential aspects of modeling and simulation in the process are investigated. Considering the limitations of a single theoretic model, a feature-based co-modeling methodology is suggested to satisfy the design requirements and compensate for the diversity of component sub-models for these devices. As an example, a stepper-motor-controlled Fuel Metering Unit (FMU) is modeled in view of the component physical features using two different software tools. An interface is suggested to integrate the single-discipline models into the synthesized one. Performance simulation of this device using the co-model and parameter optimization for its key components are discussed. Comparison between delivery testing and the simulation shows that the FMU co-model is highly accurate and clearly superior to a single-discipline model. Together with its compatible interface with the engine mathematical model, the feature-based co-modeling methodology is proven to be an effective technical measure in the development process of the device.
International Nuclear Information System (INIS)
Meisner, Aaron M.; Finkbeiner, Douglas P.
2015-01-01
We apply the Finkbeiner et al. two-component thermal dust emission model to the Planck High Frequency Instrument maps. This parameterization of the far-infrared dust spectrum as the sum of two modified blackbodies (MBBs) serves as an important alternative to the commonly adopted single-MBB dust emission model. Analyzing the joint Planck/DIRBE dust spectrum, we show that two-component models provide a better fit to the 100-3000 GHz emission than do single-MBB models, though by a lesser margin than found by Finkbeiner et al. based on FIRAS and DIRBE. We also derive full-sky 6.'1 resolution maps of dust optical depth and temperature by fitting the two-component model to Planck 217-857 GHz along with DIRBE/IRAS 100 μm data. Because our two-component model matches the dust spectrum near its peak, accounts for the spectrum's flattening at millimeter wavelengths, and specifies dust temperature at 6.'1 FWHM, our model provides reliable, high-resolution thermal dust emission foreground predictions from 100 to 3000 GHz. We find that, in diffuse sky regions, our two-component 100-217 GHz predictions are on average accurate to within 2.2%, while extrapolating the Planck Collaboration et al. single-MBB model systematically underpredicts emission by 18.8% at 100 GHz, 12.6% at 143 GHz, and 7.9% at 217 GHz. We calibrate our two-component optical depth to reddening, and compare with reddening estimates based on stellar spectra. We find the dominant systematic problems in our temperature/reddening maps to be zodiacal light on large angular scales and the cosmic infrared background anisotropy on small angular scales
International Nuclear Information System (INIS)
Gholinezhad, Hadi; Zeinal Hamadani, Ali
2017-01-01
This paper develops a new model for redundancy allocation problem. In this paper, like many recent papers, the choice of the redundancy strategy is considered as a decision variable. But, in our model each subsystem can exploit both active and cold-standby strategies simultaneously. Moreover, the model allows for component mixing such that components of different types may be used in each subsystem. The problem, therefore, boils down to determining the types of components, redundancy levels, and number of active and cold-standby units of each type for each subsystem to maximize system reliability by considering such constraints as available budget, weight, and space. Since RAP belongs to the NP-hard class of optimization problems, a genetic algorithm (GA) is developed for solving the problem. Finally, the performance of the proposed algorithm is evaluated by applying it to a well-known test problem from the literature with relatively satisfactory results. - Highlights: • A new model for the redundancy allocation problem in series–parallel systems is proposed. • The redundancy strategy of each subsystem is considered as a decision variable and can be active, cold-standby or mixed. • Component mixing is allowed, in other words components of any subsystem can be non-identical. • A genetic algorithm is developed for solving the problem. • Computational experiments demonstrate that the new model leads to interesting results.
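The reliability building blocks behind such a model can be sketched for one candidate design: active (hot) units combine as a parallel structure, cold-standby units with perfect switching follow a Poisson survival count, and subsystems combine in series. A sketch with made-up component values (the GA in the paper searches over exactly these choices, subject to budget, weight and space constraints):

```python
import math

def active_parallel(rels):
    """Reliability of a subsystem whose units are all active (hot) spares:
    the subsystem fails only if every unit fails."""
    q = 1.0
    for r in rels:
        q *= (1.0 - r)
    return 1.0 - q

def cold_standby(lam, t, n):
    """Reliability of n identical exponential(lam) units in cold standby
    with perfect switching: the system survives while fewer than n
    failures occur, a Poisson(lam*t) count."""
    lt = lam * t
    return math.exp(-lt) * sum(lt ** k / math.factorial(k) for k in range(n))

def series_system(subsystem_rels):
    """Series arrangement of independent subsystems."""
    r = 1.0
    for s in subsystem_rels:
        r *= s
    return r

# Two active 0.9 units give 0.99; two cold-standby units with lam*t = 1
# give 2/e; the series system multiplies the two.
r_sys = series_system([active_parallel([0.9, 0.9]), cold_standby(1.0, 1.0, 2)])
```

Mixing both strategies within one subsystem, as the proposed model allows, combines these expressions rather than choosing one per subsystem.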
Kuss, DJ; Shorter, GW; Van Rooij, AJ; Griffiths, MD; Schoenmakers, T
2014-01-01
Internet usage has grown exponentially over the last decade. Research indicates that excessive Internet use can lead to symptoms associated with addiction. To date, assessment of potential Internet addiction has varied regarding populations studied and instruments used, making reliable prevalence estimations difficult. To overcome the present problems a preliminary study was conducted testing a parsimonious Internet addiction components model based on Griffiths’ addiction components (2005), i...
Wilson, R. B.; Banerjee, P. K.
1987-01-01
This Annual Status Report presents the results of work performed during the third year of the 3-D Inelastic Analysis Methods for Hot Sections Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of computer codes that permit more accurate and efficient three-dimensional analyses of selected hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The computer codes embody a progression of mathematical models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components.
Taneja, Vidya S.
1996-01-01
In this paper we develop the mathematical theory of proportional and scale change models to perform reliability analysis. The results obtained will be applied for the Reaction Control System (RCS) thruster valves on an orbiter. With the advent of extended EVA's associated with PROX OPS (ISSA & MIR), and docking, the loss of a thruster valve now takes on an expanded safety significance. Previous studies assume a homogeneous population of components with each component having the same failure rate. However, as various components experience different stresses and are exposed to different environments, their failure rates change with time. In this paper we model the reliability of a thruster valves by treating these valves as a censored repairable system. The model for each valve will take the form of a nonhomogeneous process with the intensity function that is either treated as a proportional hazard model, or a scale change random effects hazard model. Each component has an associated z, an independent realization of the random variable Z from a distribution G(z). This unobserved quantity z can be used to describe heterogeneity systematically. For various models methods for estimating the model parameters using censored data will be developed. Available field data (from previously flown flights) is from non-renewable systems. The estimated failure rate using such data will need to be modified for renewable systems such as thruster valve.
Layout Optimization Model for the Production Planning of Precast Concrete Building Components
Directory of Open Access Journals (Sweden)
Dong Wang
2018-05-01
Full Text Available Precast concrete comprises the basic components of modular buildings. The efficiency of precast concrete building component production directly impacts the construction time and cost. In the processes of precast component production, mold setting has a significant influence on the production efficiency and cost, as well as reducing the resource consumption. However, the development of mold setting plans is left to the experience of production staff, with outcomes dependent on the quality of human skill and experience available. This can result in sub-optimal production efficiencies and resource wastage. Accordingly, in order to improve the efficiency of precast component production, this paper proposes an optimization model able to maximize the average utilization rate of pallets used during the molding process. The constraints considered were the order demand, the size of the pallet, layout methods, and the positional relationship of components. A heuristic algorithm was used to identify optimization solutions provided by the model. Through empirical analysis, and as exemplified in the case study, this research is significant in offering a prefabrication production planning model which improves pallet utilization rates, shortens component production time, reduces production costs, and improves the resource utilization. The results clearly demonstrate that the proposed method can facilitate the precast production plan providing strong practical implications for production planners.
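A common baseline for this kind of mold/pallet layout problem is a greedy first-fit-decreasing heuristic. A one-dimensional sketch (the paper's model also handles pallet size, layout method and positional relationships between components, all omitted here):

```python
def first_fit_decreasing(lengths, pallet_len):
    """Place each component, largest first, on the first pallet with
    enough remaining length; open a new pallet when none fits.
    Returns the per-pallet layout and the average utilization rate."""
    free = []    # remaining capacity per pallet
    layout = []  # components assigned to each pallet
    for item in sorted(lengths, reverse=True):
        for i, f in enumerate(free):
            if item <= f:
                free[i] -= item
                layout[i].append(item)
                break
        else:
            free.append(pallet_len - item)
            layout.append([item])
    used = sum(pallet_len - f for f in free)
    return layout, used / (pallet_len * len(free))

# Five components of lengths 6..2 m on 10 m pallets pack into two full pallets.
layout, util = first_fit_decreasing([6.0, 5.0, 4.0, 3.0, 2.0], 10.0)
```

The optimization model in the abstract goes beyond this greedy rule by maximizing pallet utilization globally under the order-demand and positional constraints.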
An entropy approach to size and variance heterogeneity
Balasubramanyan, L.; Stefanou, S.E.; Stokes, J.R.
2012-01-01
In this paper, we investigate the effect of bank size differences on cost efficiency heterogeneity using a heteroskedastic stochastic frontier model. This model is implemented by using an information theoretic maximum entropy approach. We explicitly model both bank size and variance heterogeneity
Characterizing and Modeling the Cost of Rework in a Library of Reusable Software Components
Basili, Victor R.; Condon, Steven E.; ElEmam, Khaled; Hendrick, Robert B.; Melo, Walcelio
1997-01-01
In this paper we characterize and model the cost of rework in a Component Factory (CF) organization. A CF is responsible for developing and packaging reusable software components. Data was collected on corrective maintenance activities for the Generalized Support Software reuse asset library located at the Flight Dynamics Division of NASA's GSFC. We then constructed a predictive model of the cost of rework using the C4.5 system for generating a logical classification model. The predictor variables for the model are measures of internal software product attributes. The model demonstrates good prediction accuracy, and can be used by managers to allocate resources for corrective maintenance activities. Furthermore, we used the model to generate proscriptive coding guidelines to improve programming practices so that the cost of rework can be reduced in the future. The general approach we have used is applicable to other environments.
SASSYS-1 balance-of-plant component models for an integrated plant response
International Nuclear Information System (INIS)
Ku, J.-Y.
1989-01-01
Models of power plant heat transfer components and rotating machinery have been added to the balance-of-plant model in the SASSYS-1 liquid metal reactor systems analysis code. This work is part of a continuing effort in plant network simulation based on the general mathematical models developed. The models described in this paper extend the scope of the balance-of-plant model to handle non-adiabatic conditions along flow paths. While the mass and momentum equations remain the same, the energy equation now contains a heat source term due to energy transfer across the flow boundary or to work done through a shaft. The heat source term is treated fully explicitly. In addition, the equation of state is rewritten in terms of the quality and separate parameters for each phase. The models are simple enough to run quickly, yet include sufficient detail of dominant plant component characteristics to provide accurate results. 5 refs., 16 figs., 2 tabs
Mixed model approaches for diallel analysis based on a bio-model.
Zhu, J; Weir, B S
1996-12-01
A MINQUE(1) procedure, which is minimum norm quadratic unbiased estimation (MINQUE) method with 1 for all the prior values, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML) and MINQUE theta which has parameter values for the prior values. MINQUE(1) is almost as efficient as MINQUE theta for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jack-knife procedure is suggested for estimation of sampling variances of estimated variance and covariance components and of predicted genetic effects. Worked examples are given for estimation of variance and covariance components and for prediction of genetic merits.
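MINQUE(1) itself requires the bio-model's full quadratic machinery; for readers new to variance components, the classical ANOVA (method-of-moments) estimator for a balanced one-way random-effects model shows the basic idea. This is an illustrative simplification, not the paper's estimator, and the function name is an assumption.

```python
import numpy as np

def anova_variance_components(groups):
    """Method-of-moments (ANOVA) estimates of the between-group and
    within-group variance components for a balanced one-way random
    effects model y_ij = mu + a_i + e_ij.  A far simpler relative of
    MINQUE/REML, shown only to illustrate what a variance component is."""
    k = len(groups)               # number of groups
    n = len(groups[0])            # observations per group (balanced)
    means = np.array([np.mean(g) for g in groups])
    grand = means.mean()
    msb = n * np.sum((means - grand) ** 2) / (k - 1)           # between MS
    msw = sum(np.sum((np.asarray(g) - m) ** 2)
              for g, m in zip(groups, means)) / (k * (n - 1))  # within MS
    sigma2_e = msw                       # residual variance
    sigma2_a = max((msb - msw) / n, 0)   # group variance (truncated at 0)
    return sigma2_a, sigma2_e
```

Unlike this moment estimator, MINQUE-type procedures remain unbiased under unbalanced designs and accommodate the maternal, paternal, and nuclear effects of the bio-model.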
Capturing Option Anomalies with a Variance-Dependent Pricing Kernel
DEFF Research Database (Denmark)
Christoffersen, Peter; Heston, Steven; Jacobs, Kris
2013-01-01
We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution.
Phenotypic variance explained by local ancestry in admixed African Americans.
Shriner, Daniel; Bentley, Amy R; Doumatey, Ayo P; Chen, Guanjie; Zhou, Jie; Adeyemo, Adebowale; Rotimi, Charles N
2015-01-01
We surveyed 26 quantitative traits and disease outcomes to understand the proportion of phenotypic variance explained by local ancestry in admixed African Americans. After inferring local ancestry as the number of African-ancestry chromosomes at hundreds of thousands of genotyped loci across all autosomes, we used a linear mixed effects model to estimate the variance explained by local ancestry in two large independent samples of unrelated African Americans. We found that local ancestry at major and polygenic effect genes can explain up to 20 and 8% of phenotypic variance, respectively. These findings provide evidence that most but not all additive genetic variance is explained by genetic markers undifferentiated by ancestry. These results also inform the proportion of health disparities due to genetic risk factors and the magnitude of error in association studies not controlling for local ancestry.
Low-level profiling and MARTE-compatible modeling of software components for real-time systems
Triantafyllidis, K.; Bondarev, E.; With, de P.H.N.
2012-01-01
In this paper, we present a method for (a) profiling of individual components at high accuracy level, (b) modeling of the components with the accurate data obtained from profiling, and (c) model conversion to the MARTE profile. The resulting performance models of individual components are used at
Evolution of Genetic Variance during Adaptive Radiation.
Walter, Greg M; Aguirre, J David; Blows, Mark W; Ortiz-Barrientos, Daniel
2018-04-01
Genetic correlations between traits can concentrate genetic variance into fewer phenotypic dimensions that can bias evolutionary trajectories along the axis of greatest genetic variance and away from optimal phenotypes, constraining the rate of evolution. If genetic correlations limit adaptation, rapid adaptive divergence between multiple contrasting environments may be difficult. However, if natural selection increases the frequency of rare alleles after colonization of new environments, an increase in genetic variance in the direction of selection can accelerate adaptive divergence. Here, we explored adaptive divergence of an Australian native wildflower by examining the alignment between divergence in phenotype mean and divergence in genetic variance among four contrasting ecotypes. We found divergence in mean multivariate phenotype along two major axes represented by different combinations of plant architecture and leaf traits. Ecotypes also showed divergence in the level of genetic variance in individual traits and the multivariate distribution of genetic variance among traits. Divergence in multivariate phenotypic mean aligned with divergence in genetic variance, with much of the divergence in phenotype among ecotypes associated with changes in trait combinations containing substantial levels of genetic variance. Overall, our results suggest that natural selection can alter the distribution of genetic variance underlying phenotypic traits, increasing the amount of genetic variance in the direction of natural selection and potentially facilitating rapid adaptive divergence during an adaptive radiation.
International Nuclear Information System (INIS)
Byun, Choong Sup; Song, Dong Soo; Jun, Hwang Yong
2006-01-01
From a design point of view, the component cooling water (CCW) system is not designed fully interactively with its heat loads. Heat loads are calculated from the CCW design flow and temperature conditions, which are determined conservatively. The CCW heat exchanger is then sized using the total maximized heat loads from the above calculation. This approach does not give optimized performance results or the exact trends of the CCW system and its loads during transients. Therefore, a combined model for performance analysis of the containment and the component cooling water (CCW) system is developed using the GOTHIC software code. The model is verified using the design parameters of the component cooling water heat exchanger and the heat loads during the recirculation mode of a loss-of-coolant accident scenario. This model may be used for calculating the realistic containment response and CCW performance, and for increasing the ultimate heat sink temperature limits
A Component-Based Modeling and Validation Method for PLC Systems
Directory of Open Access Journals (Sweden)
Rui Wang
2014-05-01
Programmable logic controllers (PLCs) are complex embedded systems that are widely used in industry. This paper presents a component-based modeling and validation method for PLC systems using the behavior-interaction-priority (BIP) framework. We designed a general system architecture and a component library for a type of device control system. The control software and the hardware of the environment were all modeled as BIP components. System requirements were formalized as monitors. Simulation was carried out to validate the system model. A realistic industrial example, a gates control system, was employed to illustrate our strategies. We found a couple of design errors during the simulation, which helped us to improve the dependability of the original systems. The experimental results demonstrated the effectiveness of our approach.
Directory of Open Access Journals (Sweden)
Victor Yurievich Stroganov
2017-02-01
This article systematizes the major production functions of a repair activities network and lists the planning and control functions, which are described in the form of business processes (BP). A simulation model for analyzing the delivery effectiveness of components under probabilistic uncertainty is proposed. It has been shown that a significant portion of the total number of business processes is represented by the management and planning of parts and components movement. The construction of experimental design techniques for the simulation model under non-stationary conditions is also considered.
Towards a Complete Model for Software Component Deployment on Heterogeneous Platform
Directory of Open Access Journals (Sweden)
Švogor Ivan
2014-12-01
This report briefly describes ongoing research on optimizing the allocation of software components to a heterogeneous computing platform (comprising CPU, GPU and FPGA). The research goal is also presented, along with current hot topics in the area, related research teams, and finally the results and contribution of my research. It involves mathematical modelling that yields a goal function, an optimization method that finds a suboptimal solution to the goal function, and a software modeling tool that provides a graphical representation of the problem at hand and helps developers determine component placement in the system design phase.
Analysis of Variance in Statistical Image Processing
Kurz, Ludwik; Hafed Benteftifa, M.
1997-04-01
A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.
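The core ANOVA detection idea can be sketched for a horizontal-line detector: treating each image row as a group, a bright line inflates the between-row mean square and hence the F statistic. This is a simplified illustration of the principle, not one of the book's algorithms.

```python
import numpy as np

def row_anova_f(image):
    """One-way ANOVA F statistic treating each image row as a group.
    A horizontal bright line inflates the between-row mean square,
    giving a large F; a pure-noise image gives F near 1."""
    img = np.asarray(image, dtype=float)
    k, n = img.shape
    row_means = img.mean(axis=1)
    grand = img.mean()
    msb = n * np.sum((row_means - grand) ** 2) / (k - 1)      # between rows
    msw = np.sum((img - row_means[:, None]) ** 2) / (k * (n - 1))  # within
    return msb / msw

rng = np.random.default_rng(0)
noise = rng.normal(0, 1, (32, 32))    # pure-noise image
lined = noise.copy()
lined[16] += 10.0                     # inject a horizontal bright line
```

Comparing `row_anova_f(lined)` with `row_anova_f(noise)` shows the detector firing only when the line is present.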
International Nuclear Information System (INIS)
Brown, T.A.; Gillespie, G.H.
1999-01-01
Ion-beam optics models for simulating electrostatic prisms (deflectors) of different geometries have been developed for the computer code TRACE 3-D. TRACE 3-D is an envelope (matrix) code, which includes a linear space charge model, that was originally developed to model bunched beams in magnetic transport systems and radiofrequency (RF) accelerators. Several new optical models for a number of electrostatic lenses and accelerator columns have been developed recently that allow the code to be used for modeling beamlines and accelerators with electrostatic components. The new models include a number of options for: (1) Einzel lenses, (2) accelerator columns, (3) electrostatic prisms, and (4) electrostatic quadrupoles. A prescription for setting up the initial beam appropriate to modeling 2-D (continuous) beams has also been developed. The models for electrostatic prisms are described in this paper. The electrostatic prism model options allow the modeling of cylindrical, spherical, and toroidal electrostatic deflectors. The application of these models in the development of ion-beam transport systems is illustrated through the modeling of a spherical electrostatic analyzer as a component of the new low energy beamline at CAMS
Studying Variance in the Galactic Ultra-compact Binary Population
Larson, Shane; Breivik, Katelyn
2017-01-01
In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations on week-long timescales, thus allowing a full exploration of the variance associated with a binary stellar evolution model.
Component Degradation Susceptibilities As The Bases For Modeling Reactor Aging Risk
International Nuclear Information System (INIS)
Unwin, Stephen D.; Lowry, Peter P.; Toyooka, Michael Y.
2010-01-01
The extension of nuclear power plant operating licenses beyond 60 years in the United States will be necessary if we are to meet national energy needs while addressing the issues of carbon and climate. Characterizing the operating risks associated with aging reactors is problematic because the principal tool for risk-informed decision-making, Probabilistic Risk Assessment (PRA), is not ideally suited to addressing aging systems. The components most likely to drive risk in an aging reactor - the passives - receive limited treatment in PRA, and furthermore, standard PRA methods are based on the assumption of stationary failure rates: a condition unlikely to be met in an aging system. A critical barrier to modeling passives aging on the wide scale required for a PRA is that there is seldom sufficient field data to populate parametric failure models, nor are practical physics models available to predict out-year component reliability. The methodology described here circumvents some of these data and modeling needs by using materials degradation metrics, integrated with conventional PRA models, to produce risk importance measures for specific aging mechanisms and component types. We suggest that these measures have multiple applications, from the risk-screening of components to the prioritization of materials research.
Pennel, Cara L; Burdine, James N; Prochaska, John D; McLeroy, Kenneth R
Community health assessment and community health improvement planning are continuous, systematic processes for assessing and addressing health needs in a community. Since there are different models to guide assessment and planning, as well as a variety of organizations and agencies that carry out these activities, there may be confusion in choosing among approaches. By examining the various components of the different assessment and planning models, we are able to identify areas for coordination, ways to maximize collaboration, and strategies to further improve community health. We identified 11 common assessment and planning components across 18 models and requirements, with a particular focus on health department, health system, and hospital models and requirements. These common components included preplanning; developing partnerships; developing vision and scope; collecting, analyzing, and interpreting data; identifying community assets; identifying priorities; developing and implementing an intervention plan; developing and implementing an evaluation plan; communicating and receiving feedback on the assessment findings and/or the plan; planning for sustainability; and celebrating success. Within several of these components, we discuss characteristics that are critical to improving community health. Practice implications include better understanding of different models and requirements by health departments, hospitals, and others involved in assessment and planning to improve cross-sector collaboration, collective impact, and community health. In addition, federal and state policy and accreditation requirements may be revised or implemented to better facilitate assessment and planning collaboration between health departments, hospitals, and others for the purpose of improving community health.
No-migration variance petition
International Nuclear Information System (INIS)
1990-03-01
Volume IV contains the following attachments: TRU mixed waste characterization database; hazardous constituents of Rocky Flats transuranic waste; summary of waste components in TRU waste sampling program at INEL; total volatile organic compounds (VOC) analyses at Rocky Flats Plant; total metals analyses from Rocky Flats Plant; results of toxicity characteristic leaching procedure (TCLP) analyses; results of extraction procedure (EP) toxicity data analyses; summary of headspace gas analysis in Rocky Flats Plant (RFP) -- sampling program FY 1988; waste drum gas generation -- sampling program at Rocky Flats Plant during FY 1988; TRU waste sampling program -- volume one; TRU waste sampling program -- volume two; and summary of headspace gas analyses in TRU waste sampling program; summary of volatile organic compounds (VOC) analyses in TRU waste sampling program
Genetic Variance in Homophobia: Evidence from Self- and Peer Reports.
Zapko-Willmes, Alexandra; Kandler, Christian
2018-01-01
The present twin study combined self- and peer assessments of twins' general homophobia targeting gay men in order to replicate previous behavior genetic findings across different rater perspectives and to disentangle self-rater-specific variance from common variance in self- and peer-reported homophobia (i.e., rater-consistent variance). We hypothesized rater-consistent variance in homophobia to be attributable to genetic and nonshared environmental effects, and self-rater-specific variance to be partially accounted for by genetic influences. A sample of 869 twins and 1329 peer raters completed a seven item scale containing cognitive, affective, and discriminatory homophobic tendencies. After correction for age and sex differences, we found most of the genetic contributions (62%) and significant nonshared environmental contributions (16%) to individual differences in self-reports on homophobia to be also reflected in peer-reported homophobia. A significant genetic component, however, was self-report-specific (38%), suggesting that self-assessments alone produce inflated heritability estimates to some degree. Different explanations are discussed.
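The variance decomposition reported here (62% genetic, 16% nonshared environmental) comes from a structural twin model; Falconer's textbook approximation gives the flavor of how monozygotic and dizygotic twin correlations separate such components. This is a rough classical formula, not the study's model.

```python
def falconer_ace(r_mz, r_dz):
    """Falconer's approximation of the ACE variance components from
    monozygotic (r_mz) and dizygotic (r_dz) twin correlations:
        A (additive genetic)      = 2 * (r_mz - r_dz)
        C (shared environment)    = 2 * r_dz - r_mz
        E (nonshared environment) = 1 - r_mz
    A rough textbook decomposition, not the study's structural model."""
    a = 2 * (r_mz - r_dz)
    c = 2 * r_dz - r_mz
    e = 1 - r_mz
    return a, c, e
```

For example, hypothetical twin correlations of 0.62 (MZ) and 0.31 (DZ) would imply A = 0.62, C = 0, and E = 0.38.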
The phenotypic variance gradient - a novel concept.
Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton
2014-11-01
Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely "a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added". This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a "phenotypic variance gradient", are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization.
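A minimal reading of the proposed plot, the slope of log(variance) on log(mean) across environments, can be sketched as follows. The function name and input layout are assumptions for illustration.

```python
import numpy as np

def variance_gradient(samples_by_env):
    """Slope and intercept of log(variance) regressed on log(mean)
    across environments: one simple quantification of a 'phenotypic
    variance gradient'.  samples_by_env is a list of 1-D arrays of
    trait values, one array per environment."""
    means = np.array([np.mean(s) for s in samples_by_env])
    varis = np.array([np.var(s, ddof=1) for s in samples_by_env])
    slope, intercept = np.polyfit(np.log(means), np.log(varis), 1)
    return slope, intercept
```

A slope of 2 corresponds to a constant coefficient of variation (variance scaling with the square of the mean), which is the natural reference line the authors suggest adding to the plot.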
Continuous-Time Mean-Variance Portfolio Selection under the CEV Process
Ma, Hui-qiang
2014-01-01
We consider a continuous-time mean-variance portfolio selection model when the stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and the mean-variance efficient frontier.
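The continuous-time CEV problem has a static single-period analogue whose Lagrange-multiplier solution is closed form; the sketch below implements that textbook analogue (no CEV dynamics, so it is only a simplified stand-in for the paper's setting).

```python
import numpy as np

def min_variance_weights(mu, sigma, target_return):
    """Closed-form weights for the single-period mean-variance problem
        min w' Sigma w   s.t.  w' mu = target_return,  w' 1 = 1,
    solved with Lagrange multipliers.  mu: expected returns, sigma:
    covariance matrix.  The static analogue of the continuous-time
    problem in the paper."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    ones = np.ones_like(mu)
    inv = np.linalg.inv(sigma)
    a = ones @ inv @ ones
    b = ones @ inv @ mu
    c = mu @ inv @ mu
    d = a * c - b * b
    lam = (c - b * target_return) / d    # multiplier: budget constraint
    gam = (a * target_return - b) / d    # multiplier: return constraint
    return inv @ (lam * ones + gam * mu)
```

Sweeping `target_return` traces out the mean-variance efficient frontier of this static problem.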
International Nuclear Information System (INIS)
Fang Zheng; Zhang Quanru
2006-01-01
A model has been derived to predict thermodynamic properties of ternary metallic systems from those of its three binaries. In the model, the excess Gibbs free energies and the interaction parameter ω123 for the three components of a ternary are expressed as a simple sum of those of the three sub-binaries, and the mole fractions of the components of the ternary are identical with the sub-binaries. This model is greatly simplified compared with the current symmetrical and asymmetrical models. It is able to overcome some shortcomings of the current models, such as the arrangement of the components in the Gibbs triangle, the conversion of mole fractions between ternary and corresponding binaries, and some necessary processes for optimizing the various parameters of these models. Two ternary systems, Mg-Cu-Ni and Cd-Bi-Pb, are recalculated to demonstrate the validity and precision of the present model. The calculated results on the Mg-Cu-Ni system are better than those in the literature. New parameters in the Margules equations expressing the excess Gibbs free energies of the three binary systems of the Cd-Bi-Pb ternary system are also given.
Failure Predictions for VHTR Core Components using a Probabilistic Continuum Damage Mechanics Model
Energy Technology Data Exchange (ETDEWEB)
Fok, Alex
2013-10-30
The proposed work addresses the key research need for the development of constitutive models and overall failure models for graphite and high temperature structural materials, with the long-term goal being to maximize the design life of the Next Generation Nuclear Plant (NGNP). To this end, the capability of a Continuum Damage Mechanics (CDM) model, which has been used successfully for modeling fracture of virgin graphite, will be extended as a predictive and design tool for the core components of the very high- temperature reactor (VHTR). Specifically, irradiation and environmental effects pertinent to the VHTR will be incorporated into the model to allow fracture of graphite and ceramic components under in-reactor conditions to be modeled explicitly using the finite element method. The model uses a combined stress-based and fracture mechanics-based failure criterion, so it can simulate both the initiation and propagation of cracks. Modern imaging techniques, such as x-ray computed tomography and digital image correlation, will be used during material testing to help define the baseline material damage parameters. Monte Carlo analysis will be performed to address inherent variations in material properties, the aim being to reduce the arbitrariness and uncertainties associated with the current statistical approach. The results can potentially contribute to the current development of American Society of Mechanical Engineers (ASME) codes for the design and construction of VHTR core components.
DEFF Research Database (Denmark)
Ravn, Bjarne Gottlieb; Andersen, Claus Bo; Wanheim, Tarras
2001-01-01
There are three demands on a component that must undergo a die-cavity elasticity analysis. The demands on the product are specified as: (i) to be able to measure the loading profile which results in elastic die-cavity deflections; (ii) to be able to compute the elastic deflections using FE; (iii...
Towards the ultimate variance-conserving convection scheme
International Nuclear Information System (INIS)
Os, J.J.A.M. van; Uittenbogaard, R.E.
2004-01-01
In the past various arguments have been used for applying kinetic energy-conserving advection schemes in numerical simulations of incompressible fluid flows. One argument is obeying the programmed dissipation by viscous stresses or by sub-grid stresses in Direct Numerical Simulation and Large Eddy Simulation, see e.g. [Phys. Fluids A 3 (7) (1991) 1766]. Another argument is that, according to e.g. [J. Comput. Phys. 6 (1970) 392; 1 (1966) 119], energy-conserving convection schemes are more stable, i.e. by prohibiting a spurious blow-up of volume-integrated energy in a closed volume without external energy sources. In the above-mentioned references it is stated that nonlinear instability is due to spatial truncation rather than to time truncation and therefore these papers are mainly concerned with the spatial integration. In this paper we demonstrate that discretized temporal integration of a spatially variance-conserving convection scheme can induce non-energy-conserving solutions. In this paper the conservation of the variance of a scalar property is taken as a simple model for the conservation of kinetic energy. In addition, the derivation and testing of a variance-conserving scheme allows for a clear definition of kinetic energy-conserving advection schemes for solving the Navier-Stokes equations. Consequently, we first derive and test a strictly variance-conserving space-time discretization for the convection term in the convection-diffusion equation. Our starting point is the variance-conserving spatial discretization of the convection operator presented by Piacsek and Williams [J. Comput. Phys. 6 (1970) 392]. In terms of its conservation properties, our variance-conserving scheme is compared to other spatially variance-conserving schemes as well as with the non-variance-conserving schemes applied in our shallow-water solver, see e.g. [Direct and Large-eddy Simulation Workshop IV, ERCOFTAC Series, Kluwer Academic Publishers, 2001, pp. 409-287]
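The variance decay that motivates variance-conserving schemes can be seen in a toy 1-D periodic advection experiment. The first-order upwind scheme below is a standard textbook discretization, not the scheme derived in the paper: with cfl < 1 its numerical diffusion erodes the scalar variance, while cfl = 1 reduces to an exact grid shift that preserves it.

```python
import numpy as np

def upwind_step(c, cfl):
    """One explicit first-order upwind step for dc/dt + u dc/dx = 0
    on a periodic grid, with cfl = u*dt/dx (stable for 0 < cfl <= 1)."""
    return c - cfl * (c - np.roll(c, 1))

n = 64
x = np.arange(n)
c0 = np.sin(2 * np.pi * x / n)        # smooth periodic initial field
var0 = np.var(c0)

c = c0.copy()
for _ in range(200):                  # cfl = 0.5: diffusive, variance decays
    c = upwind_step(c, 0.5)
var_upwind = np.var(c)

c_exact = c0.copy()
for _ in range(200):                  # cfl = 1: exact shift, no damping
    c_exact = upwind_step(c_exact, 1.0)
```

The contrast between `var_upwind` and `np.var(c_exact)` illustrates why the paper treats scalar variance as a simple proxy for kinetic energy conservation.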
Reliability Assessment of IGBT Modules Modeled as Systems with Correlated Components
DEFF Research Database (Denmark)
Kostandyan, Erik; Sørensen, John Dalsgaard
2013-01-01
configuration. The estimated system reliability by the proposed method is a conservative estimate. Application of the suggested method could be extended to reliability estimation of systems composed of welding joints, bolts, bearings, etc. The reliability model incorporates the correlation between ... was applied for estimating the system failure functions. It is desirable to compare the results with the true system failure function, which can be estimated using simulation techniques. Theoretical model development should be pursued in further research. One direction might be modeling the system based on Sequential Order Statistics, by considering the failure of the minimum (weakest component) at each loading level. The proposed idea of representing the system by independent components could also be used for modeling reliability by Sequential Order Statistics.
Refinement and verification in component-based model-driven design
DEFF Research Database (Denmark)
Chen, Zhenbang; Liu, Zhiming; Ravn, Anders Peter
2009-01-01
Modern software development is complex as it has to deal with many different and yet related aspects of applications. In practical software engineering this is now handled by a UML-like modelling approach in which different aspects are modelled by different notations. Component-based and object-oriented ... be integrated in computer-aided software engineering (CASE) tools for adding formally supported checking, transformation and generation facilities.
International Nuclear Information System (INIS)
Sengupta, S.K.; Boyle, J.S.
1993-05-01
Variables describing atmospheric circulation and other climate parameters derived from various GCMs and obtained from observations can be represented on a spatio-temporal grid (lattice) structure. The primary objective of this paper is to explore existing as well as some new statistical methods to analyze such data structures for the purpose of model diagnostics and intercomparison from a statistical perspective. Among the several statistical methods considered here, a new method based on common principal components appears most promising for the purpose of intercomparison of spatio-temporal data structures arising in the task of model/model and model/data intercomparison. A complete strategy for such an intercomparison is outlined. The strategy includes two steps. First, the commonality of spatial structures in two (or more) fields is captured in the common principal vectors. Second, the corresponding principal components obtained as time series are then compared on the basis of similarities in their temporal evolution
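Flury's common principal component model jointly diagonalizes several covariance matrices; as a rough stand-in for that machinery (an assumption for illustration, not the paper's method), the sketch below uses the leading eigenvectors of the averaged covariance as shared spatial patterns and compares the projected time series.

```python
import numpy as np

def common_principal_vectors(cov_a, cov_b, k=2):
    """Crude surrogate for common principal components: leading
    eigenvectors of the average of two covariance matrices.  The true
    CPC model jointly diagonalizes both; this only illustrates the
    two-step intercomparison strategy described in the abstract."""
    avg = 0.5 * (np.asarray(cov_a) + np.asarray(cov_b))
    vals, vecs = np.linalg.eigh(avg)
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:k]]          # columns: shared spatial patterns

# Step 1: shared patterns from two (time x space) fields;
# Step 2: compare the projected principal-component time series.
rng = np.random.default_rng(1)
field_a = rng.normal(size=(100, 5))                    # model A output
field_b = field_a + 0.1 * rng.normal(size=(100, 5))    # model B, similar
v = common_principal_vectors(np.cov(field_a.T), np.cov(field_b.T), k=2)
ts_a, ts_b = field_a @ v, field_b @ v
```

High correlation between `ts_a` and `ts_b` columns would indicate similar temporal evolution along the shared spatial structures.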
Photonic Beamformer Model Based on Analog Fiber-Optic Links’ Components
International Nuclear Information System (INIS)
Volkov, V A; Gordeev, D A; Ivanov, S I; Lavrov, A P; Saenko, I I
2016-01-01
The model of photonic beamformer for wideband microwave phased array antenna is investigated. The main features of the photonic beamformer model based on true-time-delay technique, DWDM technology and fiber chromatic dispersion are briefly analyzed. The performance characteristics of the key components of photonic beamformer for phased array antenna in the receive mode are examined. The beamformer model composed of the components available on the market of fiber-optic analog communication links is designed and tentatively investigated. Experimental demonstration of the designed model beamforming features includes actual measurement of 5-element microwave linear array antenna far-field patterns in 6-16 GHz frequency range for antenna pattern steering up to 40°. The results of experimental testing show good accordance with the calculation estimates. (paper)
International Nuclear Information System (INIS)
Brown, T.A.; Gillespie, G.H.
2000-01-01
Ion-beam optics models for simulating electrostatic prisms (deflectors) of different geometries have been developed for the envelope (matrix) computer code TRACE 3-D as a part of the development of a suite of electrostatic beamline element models which includes lenses, acceleration columns, quadrupoles and prisms. The models for electrostatic prisms are described in this paper. The electrostatic prism model options allow the first-order modeling of cylindrical, spherical and toroidal electrostatic deflectors. The application of these models in the development of ion-beam transport systems is illustrated through the modeling of a spherical electrostatic analyzer as a component of the new low-energy beamline at the Center for Accelerator Mass Spectrometry. Although initial tests following installation of the new beamline showed that the new spherical electrostatic analyzer was not behaving as predicted by these first-order models, operational conditions were found under which the analyzer now works properly as a double-focusing spherical electrostatic prism
Pricing perpetual American options under multiscale stochastic elasticity of variance
International Nuclear Information System (INIS)
Yoon, Ji-Hun
2015-01-01
Highlights: • We study the effects of the stochastic elasticity of variance on perpetual American options. • Our SEV model consists of a fast mean-reverting factor and a slow mean-reverting factor. • The slow-scale factor has a very significant impact on the option price. • We analyze option price structures through the market prices of elasticity risk. - Abstract: This paper studies pricing of perpetual American options under a constant elasticity of variance type of underlying asset price model, where the constant elasticity is replaced by a fast mean-reverting Ornstein–Uhlenbeck process and a slowly varying diffusion process. By using a multiscale asymptotic analysis, we find the impact of the stochastic elasticity of variance on the option prices and the optimal exercise prices with respect to model parameters. Our results enhance the existing option price structures in view of flexibility and applicability through the market prices of elasticity risk.
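The model class described above can be written out in standard notation. This is a sketch under assumed (conventional) notation, not the paper's exact specification: the constant-elasticity-of-variance (CEV) diffusion dS = μS dt + σS^{θ/2} dW becomes a stochastic-elasticity (SEV) model when the constant exponent θ is replaced by a function of a fast Ornstein–Uhlenbeck factor Y and a slow diffusion factor Z:

```latex
\begin{align*}
  dS_t &= \mu S_t\,dt + \sigma S_t^{\,\theta(Y_t,\,Z_t)/2}\,dW_t^{(0)},\\
  dY_t &= \tfrac{1}{\epsilon}\,(m - Y_t)\,dt
          + \tfrac{\nu\sqrt{2}}{\sqrt{\epsilon}}\,dW_t^{(1)}
          && \text{(fast mean-reverting, rate } 1/\epsilon\text{)},\\
  dZ_t &= \delta\, c(Z_t)\,dt + \sqrt{\delta}\, g(Z_t)\,dW_t^{(2)}
          && \text{(slowly varying, } \delta \ll 1\text{)}.
\end{align*}
```

The multiscale asymptotics then expand the option price in powers of √ε and √δ, which is how the separate impacts of the fast and slow factors are isolated.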
A review of multi-component maintenance models with economic dependence
R. Dekker (Rommert); R.E. Wildeman (Ralph); F.A. van der Duyn Schouten (Frank)
1997-01-01
textabstractIn this paper we review the literature on multi-component maintenance models with economic dependence. The emphasis is on papers that appeared after 1991, but there is an overlap with Section 2 of the most recent review paper by Cho and Parlar (1991). We distinguish between stationary
Specification and Generation of Environment for Model Checking of Software Components
Czech Academy of Sciences Publication Activity Database
Pařízek, P.; Plášil, František
2007-01-01
Roč. 176, - (2007), s. 143-154 ISSN 1571-0661 R&D Projects: GA AV ČR 1ET400300504 Institutional research plan: CEZ:AV0Z10300504 Keywords : software components * behavior protocols * model checking * automated generation of environment Subject RIV: JC - Computer Hardware ; Software
Helpful Components Involved in the Cognitive-Experiential Model of Dream Work
Tien, Hsiu-Lan Shelley; Chen, Shuh-Chi; Lin, Chia-Huei
2009-01-01
The purpose of the study was to examine the helpful components involved in the Hill's cognitive-experiential dream work model. Participants were 27 volunteer clients from colleges and universities in northern and central parts of Taiwan. Each of the clients received 1-2 sessions of dream interpretations. The cognitive-experiential dream work model…
A Bayesian analysis of the PPP puzzle using an unobserved components model
R.H. Kleijn (Richard); H.K. van Dijk (Herman)
2001-01-01
textabstractThe failure to describe the time series behaviour of most real exchange rates as temporary deviations from fixed long-term means may be due to time variation of the equilibria themselves, see Engel (2000). We implement this idea using an unobserved components model and decompose the
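The decomposition indicated above can be sketched in a standard state-space form. The notation here is assumed, not taken from the paper: the real exchange rate qₜ is split into a slowly moving equilibrium τₜ (a random walk) and a stationary transitory deviation cₜ (an AR(1)):

```latex
\begin{align*}
  q_t &= \tau_t + c_t,\\
  \tau_t &= \tau_{t-1} + \eta_t, & \eta_t &\sim \mathcal{N}(0,\sigma_\eta^2),\\
  c_t &= \phi\, c_{t-1} + \varepsilon_t, & \varepsilon_t &\sim \mathcal{N}(0,\sigma_\varepsilon^2),
  \quad |\phi| < 1.
\end{align*}
```

Allowing σ²_η > 0 lets the long-run mean itself drift, so apparent failures of mean reversion toward a fixed level can instead be read as reversion toward a time-varying equilibrium.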
Passively mode-locked Nd:YAG laser with a GaAs component
International Nuclear Information System (INIS)
Zhang Zhuhong; Qian Liejia; Chen Shaohe; Fan Dianyuan; Mao Hongwei
1992-01-01
An all-solid-state passively mode-locked Nd:YAG laser with a 400 μm, (100)-oriented GaAs component is reported for the first time; mode-locked pulses with a duration of 16 ps and an average energy of 10 μJ were obtained with a probability of 90%.
Kou, Jisheng; Sun, Shuyu
2017-01-01
A general diffuse interface model with a realistic equation of state (e.g. Peng-Robinson equation of state) is proposed to describe the multi-component two-phase fluid flow based on the principles of the NVT-based framework which is an attractive
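The realistic equation of state mentioned above can be illustrated for a pure component. This is a standard-form sketch, not the paper's multi-component diffuse-interface formulation: the Peng-Robinson equation of state gives pressure as P = RT/(v − b) − aα(T)/(v² + 2bv − b²), with a, b, and α determined by the critical constants and acentric factor.

```python
import math

R = 8.314462618  # universal gas constant, J/(mol K)

def peng_robinson_pressure(T, v, Tc, Pc, omega):
    """Pressure (Pa) from the Peng-Robinson EOS for a pure fluid.

    T: temperature (K); v: molar volume (m^3/mol);
    Tc, Pc: critical temperature (K) and pressure (Pa);
    omega: acentric factor.
    """
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(T / Tc)))**2
    return R * T / (v - b) - a * alpha / (v * v + 2.0 * b * v - b * b)

# Example: methane (Tc ~ 190.56 K, Pc ~ 4.599 MPa, omega ~ 0.011)
# at 300 K and a dilute molar volume, where the result stays near ideal-gas:
P = peng_robinson_pressure(T=300.0, v=0.024, Tc=190.56, Pc=4.599e6, omega=0.011)
```

A cubic EOS of this kind yields a non-monotone P(v) isotherm below the critical point, which is what lets a diffuse-interface model represent two coexisting phases with a single free-energy functional.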
A two-component dark matter model with real singlet scalars ...
Indian Academy of Sciences (India)
Theoretical framework. In the present work, the dark matter candidate has two components S and S′, both of ... The scalar sector potential (for the Higgs and two real singlet scalars) in this framework can then be written ... In this work we obtain the allowed values of the model parameters (δ₂, δ′₂, M_S and M_S′) using three direct ...
Ontologies to Support RFID-Based Link between Virtual Models and Construction Components
DEFF Research Database (Denmark)
Sørensen, Kristian Birch; Christiansson, Per; Svidt, Kjeld
2010-01-01
the virtual models and the physical components in the construction process can improve the information handling and sharing in construction and building operation management. Such a link can be created by means of Radio Frequency Identification (RFID) technology. Ontologies play an important role...
Correlation inequalities for two-component hypercubic φ⁴ models. Pt. 2
International Nuclear Information System (INIS)
Soria, J.L.; Instituto Tecnologico de Tijuana
1990-01-01
We continue the program started in the first paper (J. Stat. Phys. 52 (1988) 711-726). We find new and already known correlation inequalities for a family of two-component hypercubic φ⁴ models, using techniques of rotated correlation inequalities and random walk representation. (orig.)
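The model family referred to above can be written down schematically. The notation here is assumed (standard lattice conventions), not taken from the abstract: a two-component field φ_x = (φ_x¹, φ_x²) ∈ ℝ² on a lattice, with ferromagnetic couplings and a single-site φ⁴ interaction whose quartic part is invariant under the hypercubic (rather than the full O(2)) symmetry group:

```latex
\begin{equation*}
  H(\varphi) \;=\; -\sum_{\langle x,y\rangle} J_{xy}\,\varphi_x \cdot \varphi_y
  \;+\; \sum_x \Big( \lambda \sum_{i=1}^{2} (\varphi_x^{i})^4
  \;+\; \lambda'\,(\varphi_x^{1})^2 (\varphi_x^{2})^2
  \;+\; \mu\,|\varphi_x|^2 \Big), \qquad J_{xy} \ge 0,
\end{equation*}
```

The λ′ term is what distinguishes hypercubic symmetry from the isotropic |φ|⁴ case, and correlation inequalities for such models typically require sign conditions relating λ and λ′.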
Hontelez, J.A.M.; Wijnmalen, D.J.D.
1993-01-01
We discuss a method to determine strategies for preventive maintenance of systems consisting of gradually deteriorating components. A model has been developed to compute not only the range of conditions inducing a repair action, but also inspection moments based on the last known condition value so